CN115876786B - Wedge-shaped welding spot detection method and motion control device - Google Patents


Info

Publication number
CN115876786B
CN115876786B (application CN202310193688.9A)
Authority
CN
China
Prior art keywords
image
wedge
welding spot
shaped welding
area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310193688.9A
Other languages
Chinese (zh)
Other versions
CN115876786A (en)
Inventor
张超
冀运景
朱鹏程
代青平
余标
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Mingrui Ideal Technology Co ltd
Original Assignee
Shenzhen Magic Ray Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Magic Ray Technology Co ltd filed Critical Shenzhen Magic Ray Technology Co ltd
Priority to CN202310193688.9A priority Critical patent/CN115876786B/en
Publication of CN115876786A publication Critical patent/CN115876786A/en
Application granted granted Critical
Publication of CN115876786B publication Critical patent/CN115876786B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • Y — GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 — TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P — CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 — Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30 — Computing systems specially adapted for manufacturing

Abstract

The embodiments of the application relate to the field of automatic optical inspection of semiconductors, and disclose a detection method and a motion control device for wedge-shaped welding spots. The detection method comprises the following steps: acquiring an image of a component to be tested; determining the position of the component to be tested in the image; positioning the wedge-shaped welding spot according to that position, and determining the position and size of the wedge-shaped welding spot; locally segmenting the positioned wedge-shaped welding spot according to its position and size, and determining the top area and the crimping area of the wedge-shaped welding spot; and judging whether the quality of the wedge-shaped welding spot is qualified according to the top area and the crimping area. By this method, the wedge-shaped welding spot can be positioned and locally segmented more quickly and accurately, so that its quality can be judged, eliminating the complexity, inefficiency and inaccuracy of spot checks that rely on manual visual inspection under a microscope.

Description

Wedge-shaped welding spot detection method and motion control device
Technical Field
The application relates to the field of semiconductor automatic optical detection, in particular to a detection method of wedge-shaped welding spots and a motion control device.
Background
In the semiconductor inspection industry, IGBT (Insulated Gate Bipolar Transistor) modules are an important and common component module in the market. Owing to the material and process of the IGBT module, the solder must be inspected after wire bonding (also called pressure bonding, binding, or bonding).
Welding spots fall into two types: spherical welding spots and wedge-shaped welding spots. Automatic detection of spherical welding spots has largely been achieved; wedge-shaped welding spots, however, must be identified locally because of their different process, which makes them harder to detect.
At present, wedge-shaped welding spots are generally inspected by spot checks using manual visual inspection under a microscope, but this method is inefficient, complicated to operate, and cannot keep up with the production line speed.
Disclosure of Invention
The embodiments of the application provide a detection method and a motion control device for wedge-shaped welding spots, which can position and locally segment the wedge-shaped welding spot quickly and accurately so as to judge whether its quality is qualified, eliminating the complexity, inefficiency and inaccuracy of manual visual inspection under a microscope.
The embodiment of the application provides the following technical scheme:
in a first aspect, an embodiment of the present application provides a method for detecting a wedge-shaped solder joint, including:
acquiring an image of a component to be tested;
determining the position of the element to be detected in the image according to the image of the element to be detected;
positioning the wedge-shaped welding spots according to the positions of the elements to be detected in the image, and determining the positions and the sizes of the wedge-shaped welding spots;
according to the position and the size of the wedge-shaped welding spot, carrying out partial segmentation on the wedge-shaped welding spot after positioning, and determining the top area and the crimping area of the wedge-shaped welding spot;
and judging whether the quality of the wedge-shaped welding spot is qualified or not according to the top area and the crimping area of the wedge-shaped welding spot.
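The five claimed steps can be illustrated with a minimal sketch. Everything below — the function names, thresholds, toy 0–255 grayscale image, and acceptance rule — is a hypothetical illustration, not the patent's actual algorithm:

```python
def locate_weld(img, thresh=128):
    """Bounding box (x, y, w, h) of all pixels brighter than thresh."""
    pts = [(x, y) for y, row in enumerate(img) for x, v in enumerate(row) if v > thresh]
    xs, ys = [p[0] for p in pts], [p[1] for p in pts]
    return min(xs), min(ys), max(xs) - min(xs) + 1, max(ys) - min(ys) + 1

def split_regions(img, box, top_thresh=200, low_thresh=128):
    """Local segmentation: very bright pixels form the top region,
    the remaining weld pixels form the crimping region."""
    x, y, w, h = box
    top, crimp = [], []
    for yy in range(y, y + h):
        for xx in range(x, x + w):
            v = img[yy][xx]
            if v > top_thresh:
                top.append((xx, yy))
            elif v > low_thresh:
                crimp.append((xx, yy))
    return top, crimp

def is_qualified(top, crimp, min_top=2, min_crimp=2):
    """Toy acceptance rule: both regions must contain enough pixels."""
    return len(top) >= min_top and len(crimp) >= min_crimp
```

On a 5×5 toy image with two bright "top" pixels and two dimmer "crimp" pixels, `locate_weld` finds the 2×2 bounding box, `split_regions` separates the two areas, and `is_qualified` passes — mirroring the locate → segment → judge structure of the claim.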
In some embodiments, locally segmenting the positioned wedge-shaped welding spot includes:
acquiring an original image of the positioned wedge-shaped welding spot;
performing image format processing on the original image to obtain a first image;
creating an initial mask map according to the first image, wherein the initial mask map has the same size as the first image;
performing pixel binarization operation on the initial mask map according to the position and the size of the wedge-shaped welding spot to obtain a first mask map;
and dividing the top area and the crimping area of the wedge-shaped welding spot according to the first image and the first mask diagram.
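The mask-creation and binarization steps above might look like the following sketch; the function name and the rectangular mask shape are assumptions for illustration only:

```python
def make_weld_mask(width, height, x, y, w, h):
    """Create an all-zero mask the same size as the first image, then set the
    rectangle given by the weld's position and size to 255 (pixel binarization).
    Hypothetical sketch; the patent does not fix the mask region's shape."""
    mask = [[0] * width for _ in range(height)]
    for yy in range(y, y + h):
        for xx in range(x, x + w):
            mask[yy][xx] = 255
    return mask
```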
In some embodiments, segmenting the top region of the wedge bond pad from the first image and the first mask map includes:
carrying out gray scale processing on the first image to obtain a gray scale image;
processing the gray level image and the first mask image according to the self-adaptive threshold segmentation algorithm to obtain a self-adaptive threshold;
extracting a region with a brightness value larger than the self-adaptive threshold value in the gray level image to obtain a first background image, and performing pixel binarization inversion operation on the first background image to obtain a second background image;
performing skeleton extraction operation on the second background image through an image refinement algorithm to obtain a first skeleton image;
performing outline extraction operation on the first skeleton diagram through an outline extraction algorithm to obtain a key skeleton diagram;
determining region growing seeds in the key skeleton diagram through a region growing algorithm;
and (5) carrying out segmentation treatment on the key skeleton diagram according to the region growing seeds to obtain the top region of the wedge-shaped welding spot.
In a second aspect, embodiments of the present application provide a motion control apparatus, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of wedge bond pad detection as in the first aspect.
In a third aspect, embodiments of the present application also provide a non-volatile computer-readable storage medium storing computer-executable instructions for enabling a motion control device to perform a method of detecting a wedge bond pad as in the first aspect.
The beneficial effects of the embodiments of the application are as follows. In contrast to the prior art, the method for detecting the wedge-shaped welding spot provided by the embodiments of the application comprises: acquiring an image of a component to be tested; determining the position of the component to be tested in the image; positioning the wedge-shaped welding spot according to that position, and determining the position and size of the wedge-shaped welding spot; locally segmenting the positioned wedge-shaped welding spot according to its position and size, and determining the top area and the crimping area of the wedge-shaped welding spot; and judging whether the quality of the wedge-shaped welding spot is qualified according to the top area and the crimping area. By positioning and locally segmenting the wedge-shaped welding spot and determining its top area and crimping area, whether its quality is qualified can be judged more quickly and accurately, eliminating the complexity, inefficiency and inaccuracy of manual visual inspection under a microscope.
Drawings
One or more embodiments are illustrated by way of example and not limitation in the figures of the accompanying drawings, in which like references indicate similar elements, and in which the figures of the drawings are not to scale, unless expressly stated otherwise.
FIG. 1 is a schematic diagram of an application environment provided by an embodiment of the present application;
FIG. 2 is a schematic flow chart of a method for detecting wedge-shaped solder joints according to an embodiment of the present application;
FIG. 3 is a schematic flow chart of controlling the movement of an image acquisition device to the position of a component to be tested in a printed circuit board according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a correction point offset provided in an embodiment of the present application;
FIG. 5 is a schematic view of a wedge bond pad provided in an embodiment of the present application;
FIG. 6 is a schematic illustration of a wedge bond pad according to an embodiment of the present application;
FIG. 7 is a schematic view of the direction of wedge shaped solder joints provided in an embodiment of the present application;
FIG. 8 is a schematic diagram of the refinement flow of step S204 in FIG. 2;
FIG. 9 is a schematic diagram of an original image and a first image provided in an embodiment of the present application;
FIG. 10 is a schematic diagram of a first mask map and a second mask map provided in an embodiment of the present application;
FIG. 11 is a schematic diagram of the refinement flow of step S245 in FIG. 8;
FIG. 12 is a schematic diagram of a gray scale image, a second background image, and a first skeleton image according to an embodiment of the present application;
FIG. 13 is a schematic illustration of a key skeleton diagram and the top region of a wedge bond pad, provided in an embodiment of the present application;
FIG. 14 is a schematic flow chart of segmenting the crimping area of a wedge-shaped solder joint according to an embodiment of the present application;
FIG. 15 is a schematic view of a third mask pattern, a first watershed image, and a first crimp zone pattern provided in an embodiment of the present application;
FIG. 16 is a schematic diagram of the refinement flow of step S205 in FIG. 2;
FIG. 17 is a schematic structural diagram of a motion control device according to an embodiment of the present application.
Detailed Description
In order to facilitate an understanding of the present application, the present application will be described in more detail below with reference to the accompanying drawings and detailed description. It will be understood that when an element is referred to as being "fixed" to another element, it can be directly on the other element or one or more intervening elements may be present therebetween. When an element is referred to as being "connected" to another element, it can be directly connected to the other element or one or more intervening elements may be present therebetween. The terms "vertical," "horizontal," "left," "right," and the like are used herein for illustrative purposes only.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used in the description of the present application in this description is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. The term "and/or" as used in this specification includes any and all combinations of one or more of the associated listed items.
The technical scheme of the application is specifically described below with reference to the accompanying drawings of the specification:
referring to fig. 1, fig. 1 is a schematic diagram of an application environment according to an embodiment of the present application;
as shown in fig. 1, the application scenario 100 includes an image acquisition device 10, a measured object 20, a carrier 30, and a motion control device 40.
The image acquisition device 10 is communicatively connected to a motion control device 40, for example: through a bus connection for acquiring images of the object 20 under test, for example: an image of the printed circuit board is acquired, or an image of the component to be tested is acquired. In the present embodiment, the image capture device 10 includes, but is not limited to, an industrial camera. Preferably, the image acquisition device 10 is a 12MP industrial camera with a resolution of 10 μm, the lens is a telecentric lens, and the adopted light source is an RGBW four-color light source.
The object 20 to be tested is placed on the carrier 30, and the object 20 to be tested has corresponding specific elements therein for providing information of wedge-shaped solder joints to be tested, and in this embodiment, the object 20 to be tested includes a printed circuit board having wedge-shaped solder joints, for example: and the printed circuit board is provided with wedge-shaped welding spots formed in the routing process of the IGBT module.
The carrier 30 is a platform for placing and fixing the measured object 20, and in some embodiments of the present application, the carrier 30 may include a sensor system for rapidly acquiring the length and width of the measured object 20, or moving in response to the control of the motion control device 40 to help the image acquisition device 10 acquire the corresponding image.
The motion control device 40 is communicatively connected to the image capturing device 10, for example: the method is used for controlling the image acquisition device to move to a preset position through bus connection, or executing the detection method of the wedge-shaped welding spots in any embodiment of the application. In the present embodiment, the motion control device 40 includes, but is not limited to, an electronic apparatus having logic operation capability.
In this embodiment, the motion control device 40 further includes a mechanical arm, through which the motion control device 40 can move the image acquisition device 10 to a preset position and control it to acquire an image of the measured object 20 fixed on the carrier 30. Meanwhile, the image acquisition device 10 is connected through a bus and feeds corresponding information back to the motion control device 40 over that bus, so that the motion control device 40 can better adjust the position of the image acquisition device 10.
In the embodiment of the present application, the motion control device 40 is a system device platform that integrates multiple functions, and is not limited to a single structural device. The device can be composed of a plurality of device components which are mutually connected and respectively used for executing different functions, or a plurality of functional modules are integrated in the same device.
In the embodiment of the present application, the motion control device 40 further includes a motion control card and/or a programmable logic controller (Programmable Logic Controller, abbreviated as PLC) for controlling the image capturing device 10 to move to a preset position. The motion control device 40 may also include a memory for storing relevant data information and one or more different interaction devices for gathering user instructions or presenting and feeding back relevant information to the user. These interaction means include, but are not limited to: input keyboards, display screens, touch screens, speakers, and the like.
In some embodiments of the present application, the motion control device 40 may be configured with a display screen and an input keyboard, through which a user may learn about the currently executing detection process, or issue a corresponding user instruction to pause the detection process.
It should be noted that the application environment shown in fig. 1 is for illustration only. One skilled in the art may add or subtract one or more devices therein as may be desired in practice, and is not limited to that shown in fig. 1.
Referring to fig. 2, fig. 2 is a flow chart of a method for detecting wedge-shaped welding spots according to an embodiment of the present application;
the wedge-shaped welding spot detection method is applied to a motion control device, and specifically, the execution main body of the wedge-shaped welding spot detection method is one or at least two processors of the motion control device.
In this embodiment of the application, the motion control device is connected to the image acquisition device; the motion control device is used for controlling the image acquisition device to move to the preset position, and the image acquisition device is used for acquiring the image of the printed circuit board.
As shown in fig. 2, the method for detecting wedge-shaped welding spots includes:
step S201: acquiring an image of a component to be tested;
specifically, after the motion control device controls the image acquisition device to move to the position of the element to be detected in the printed circuit board, the motion control device controls the image acquisition device to shoot the image of the element to be detected and receive the image of the element to be detected sent by the image acquisition device, wherein the element to be detected is positioned on the printed circuit board and comprises at least one wedge-shaped welding spot.
Referring to fig. 3, fig. 3 is a schematic flow chart of controlling an image capturing device to move to a position of a device under test in a printed circuit board according to an embodiment of the present disclosure;
in this embodiment of the present application, before acquiring the image of the element to be tested, the method for detecting the wedge-shaped solder joint further includes: the image acquisition device is controlled to move to the position of the element to be tested in the printed circuit board.
As shown in fig. 3, the process of controlling the position of the image acquisition device to the element to be measured in the printed circuit board includes:
Step S301: controlling the image acquisition device to move to a preset position of the printed circuit board to capture an image, so as to obtain an initial image;
The printed circuit board comprises at least one correction point. It can be understood that the correction point (also called MARK point or reference point) provides a common measurable point for all steps in the surface mounting process. It is preferably a solid circle with a diameter of 1 mm (±0.2 mm), made of bare copper (which can be protected by a clear anti-oxidation coating), tin plating or nickel plating, and its color differs clearly from the surrounding background color.
Specifically, the correction point is located at the center of the preset position, the preset position of the printed circuit board is fixed, that is, for each printed circuit board, the preset position and the position of the correction point on the printed circuit board are the same, and the preset position of the printed circuit board and the center position of the correction point are both stored in the memory of the motion control device in advance. The motion control device controls the image acquisition device to move to a preset position of the printed circuit board to shoot an image, and an initial image is obtained, wherein the initial image comprises at least one correction point.
Step S302, recognizing a correction point in an initial image through an image algorithm, and calculating offset coordinates and rotation angles of a center point of the correction point;
it can be understood that due to factors such as the material of the printed circuit board, the carrier, friction force between the printed circuit board and the track, the printed circuit board may shift or rotate to a certain extent when reaching a designated position on the carrier through the track, thereby causing the shift and rotation of the center point of the correction point.
Specifically, the image algorithm includes, but is not limited to, an image recognition algorithm based on a convolutional neural network. A correction point recognition model is pre-established based on a convolutional neural network and is used for recognizing correction points in the initial image. The model is trained by constructing a loss function and acquiring a large number of images of printed circuit boards that include correction points, so as to obtain a trained correction point recognition model. The specific method of establishing and training a recognition model based on a convolutional neural network is prior art and is not repeated herein.
Further, the trained correction point identification model is called to identify the initial image, the correction point in the initial image is recognized, and the actual coordinates of its center point are determined. The offset coordinate and rotation angle of the center point of the correction point are then calculated against the original coordinates of that center point pre-stored in the memory of the motion control device: the abscissa value of the offset coordinate of the center point of the correction point = the abscissa value of the actual coordinate − the abscissa value of the original coordinate, and the ordinate value of the offset coordinate = the ordinate value of the actual coordinate − the ordinate value of the original coordinate.
In the embodiment of the present application, if there is only a single correction point in the initial image, the rotation angle of the center point of the correction point is taken to be zero.
In the embodiment of the application, the coordinates of the center point of the correction point and the offset coordinates are in the same coordinate system, and the origin of the coordinate system is located at the upper left corner of the printed circuit board.
Step S303, determining the position of the element to be tested in the printed circuit board according to the offset coordinates and the rotation angle of the center point of the correction point;
Specifically, if there is a single correction point in the initial image, the abscissa value of the position of the element to be tested in the printed circuit board = the abscissa value of the preset position of the printed circuit board + the abscissa value of the offset coordinate of the center point of the correction point, and the ordinate value of the position of the element to be tested in the printed circuit board = the ordinate value of the preset position of the printed circuit board + the ordinate value of the offset coordinate of the center point of the correction point.
For example: suppose there is one correction point in the initial image, the original coordinates of its center point are (x0, y0), the actual coordinates of its center point are (x1, y1), the preset position of the printed circuit board is (X, Y), and the rotation angle of the center point of the correction point is zero. The offset coordinate of the center point of the correction point is then (x1 − x0, y1 − y0), and the position of the component to be tested in the printed circuit board is (X + (x1 − x0), Y + (y1 − y0)).
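The offset and position arithmetic of the single-correction-point case can be expressed directly; the function names below are illustrative, not from the patent:

```python
def correction_offset(original, actual):
    """Offset coordinate = actual center point minus original center point."""
    (x0, y0), (x1, y1) = original, actual
    return (x1 - x0, y1 - y0)

def component_position(preset, offset):
    """Component position = preset position + correction-point offset
    (single correction point, zero rotation)."""
    (px, py), (dx, dy) = preset, offset
    return (px + dx, py + dy)
```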
Referring to fig. 4, fig. 4 is a schematic diagram of the correction point offset according to an embodiment of the present application;
As shown in FIG. 4, (x1, y1) and (x2, y2) are the original coordinates, pre-stored in the memory of the motion control device, of the center points of correction points mark1 and mark2; (x1', y1') and (x2', y2') are the coordinates of the center points of correction points mark1 and mark2 in the initial image captured when the image acquisition device moves to the preset position of the printed circuit board; θ0 is the angle between the ordinate axis and the line connecting the pre-stored center points (x1, y1) and (x2, y2); and θ1 is the angle between the ordinate axis and the line connecting the center points (x1', y1') and (x2', y2') in the initial image.

It will be appreciated that (x1' − x1, y1' − y1) and (x2' − x2, y2' − y2) are the offsets of the center points of correction points mark1 and mark2.
In this embodiment of the present application, the position of the element to be measured in the printed circuit board is determined according to the offset coordinate and the rotation angle of the center point of the correction point, and the method further includes steps (1) - (3), specifically as follows:
step (1): if the number of the correction points is two, determining a first rotation angle according to offset coordinates of center points of the two correction points;
Specifically, the first rotation angle is calculated by an inverse cotangent function.
If the original coordinates of the center point of correction point mark1 are (x1, y1) and its offset coordinate is (Δx1, Δy1), and the original coordinates of the center point of correction point mark2 are (x2, y2) and its offset coordinate is (Δx2, Δy2), the first rotation angle is

θ = θ1 − θ0,

wherein θ0 = arccot((y2 − y1)/(x2 − x1)) is the angle between the line of the original center points and the ordinate axis, and θ1 = arccot(((y2 + Δy2) − (y1 + Δy1))/((x2 + Δx2) − (x1 + Δx1))) is the angle between the line of the actual center points and the ordinate axis.

Alternatively, if the original coordinates of the center points of correction points mark1 and mark2 are (x1, y1) and (x2, y2), and their actual coordinates are (x1', y1') and (x2', y2'), the first rotation angle is

θ = θ1 − θ0, wherein θ0 = arccot((y2 − y1)/(x2 − x1)) and θ1 = arccot((y2' − y1')/(x2' − x1')).
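A numerically safe way to compute the first rotation angle is with atan2, which is equivalent to the inverse-cotangent form for angles measured from the ordinate axis; this sketch assumes that angle convention:

```python
import math

def angle_to_y_axis(p1, p2):
    """Angle between the line p1-p2 and the ordinate axis; atan2(dx, dy)
    is the numerically safe form of the inverse-cotangent relation."""
    (x1, y1), (x2, y2) = p1, p2
    return math.atan2(x2 - x1, y2 - y1)

def first_rotation_angle(orig1, orig2, act1, act2):
    """Board rotation: angle of the actual mark1-mark2 line minus the
    angle of the pre-stored original line."""
    return angle_to_y_axis(act1, act2) - angle_to_y_axis(orig1, orig2)
```

Unlike a raw cotangent quotient, atan2 stays well defined when the two center points share an abscissa or an ordinate.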
In an embodiment of the present application, the method further includes:
if the number of the correction points is two, determining a first average offset coordinate according to the offset coordinates of the center points of the two correction points.
Specifically, the abscissa value of the first average offset coordinate is the average of the abscissa values of the offset coordinates of the center points of the two correction points, and the ordinate value of the first average offset coordinate is the average of the ordinate values of those offset coordinates.

For example: if the original coordinates of the center points of correction points mark1 and mark2 are (x1, y1) and (x2, y2), and their actual coordinates are (x1', y1') and (x2', y2'), the offset coordinates of the center points are (x1' − x1, y1' − y1) and (x2' − x2, y2' − y2), and the first average offset coordinate is

(((x1' − x1) + (x2' − x2))/2, ((y1' − y1) + (y2' − y2))/2).
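The first average offset coordinate is a component-wise mean of the two offsets, as described above; a one-line sketch with an illustrative name:

```python
def first_average_offset(off1, off2):
    """Component-wise mean of the two correction-point offsets."""
    (dx1, dy1), (dx2, dy2) = off1, off2
    return ((dx1 + dx2) / 2, (dy1 + dy2) / 2)
```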
Step (2), calculating a first actual offset coordinate according to the first rotation angle;
Specifically, the first actual offset coordinate is calculated according to the following relation, which rotates the center point of the initial image about the first reference point by the first rotation angle:

(Δx, Δy) = (xr + L·sin(α + θ) − xc, yr + L·cos(α + θ) − yc)

wherein (Δx, Δy) is the first actual offset coordinate, L is the distance between the initial image center point and the first reference point, α is the angle of the initial image center point relative to the first reference point, θ is the first rotation angle, (xr, yr) is the coordinate of the first reference point, and (xc, yc) is the coordinate of the center point of the initial image. The first reference point is the correction point, of the two correction points, whose ordinate value is largest.

Specifically, the first reference point is whichever of the two correction points mark1 and mark2 has the largest ordinate value; it can be understood that the first reference point may be the correction point whose center point has the largest ordinate value in the initial image captured when the image acquisition device moves to the preset position of the printed circuit board. (xr, yr) is the coordinate of the center point of the first reference point, and the distance between the initial image center point and the first reference point is

L = √((xc − xr)² + (yc − yr)²).

In the embodiment of the application, the angle α of the initial image center point relative to the first reference point is calculated in a manner similar to the first rotation angle θ determined in step (1), and is not described in detail herein.
Step (3), determining the position of the element to be tested in the printed circuit board according to the coordinates of the preset position and the first actual offset coordinates;
specifically, the abscissa value of the position of the element to be measured in the printed circuit board=the abscissa value of the preset position of the printed circuit board+the abscissa value of the first actual offset coordinate, and the ordinate value of the position of the element to be measured in the printed circuit board=the ordinate value of the preset position of the printed circuit board+the ordinate value of the first actual offset coordinate.
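Steps (2) and (3) can be sketched as follows, under the assumption that the first actual offset is obtained by rotating the initial image center about the located first reference point by the first rotation angle; all function and variable names here are illustrative, not the patent's.

```python
import math

def actual_offset(ref, center, rotation_deg):
    """Sketch of step (2): offset of the initial image center when the
    board rotates by rotation_deg about the reference correction point.
    ref    -- (x, y) of the first reference point (correction point with
              the largest ordinate)
    center -- (x, y) of the initial image center point
    """
    dx, dy = center[0] - ref[0], center[1] - ref[1]
    dist = math.hypot(dx, dy)            # distance L between center and reference
    theta = math.atan2(dy, dx)           # angle of center relative to reference
    alpha = math.radians(rotation_deg)   # first rotation angle
    return (ref[0] + dist * math.cos(theta + alpha) - center[0],
            ref[1] + dist * math.sin(theta + alpha) - center[1])

def element_position(preset, offset):
    """Step (3): element position = preset position + actual offset."""
    return (preset[0] + offset[0], preset[1] + offset[1])

# With a zero rotation angle the center does not move, so the
# element stays at the preset position.
off = actual_offset(ref=(10.0, 20.0), center=(4.0, 12.0), rotation_deg=0.0)
pos = element_position((100.0, 200.0), off)
```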
In this embodiment of the present application, the position of the element to be measured in the printed circuit board is determined according to the offset coordinate and the rotation angle of the center point of the correction point, and the method further includes steps (4) - (6), specifically as follows:
Step (4): if the number of the correction points is at least three, calculating a second rotation angle through an affine transformation algorithm;
Specifically, an affine transformation (Affine Transformation or Affine Map) is a linear transformation from two-dimensional coordinates to two-dimensional coordinates that preserves the straightness of two-dimensional figures (i.e., a straight line remains a straight line after transformation) and their parallelism (i.e., the relative positional relationship between two-dimensional figures remains unchanged, parallel lines remain parallel, and the order of points on a straight line is unchanged). An affine transformation can be realized as a composite of a series of atomic transformations, including: translation, scaling, rotation, flipping and shearing. The process of transforming an original image into a transformed image can be described by an affine transformation matrix, and the transformation is obtained by multiplying the original image coordinates by a 2×3 matrix.
In the embodiment of the application, the second rotation angle can be calculated by using the estimateAffinePartial2D and estimateRigidTransform functions among the affine transformation algorithms of OpenCV, which is widely used in the image recognition field.
In the embodiment of the present application, the second average offset coordinate may also be calculated by using the estimateAffinePartial2D and estimateRigidTransform functions, where the second average offset coordinate is the average value of the offset coordinates of the at least three correction points.
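As an illustration of what OpenCV's estimateAffinePartial2D encodes in its 2×3 matrix, the pure-Python sketch below performs a least-squares rigid fit over matched correction-point centers and recovers the rotation angle and the mean offset; the function name rigid_fit and the test points are illustrative only.

```python
import math

def rigid_fit(src, dst):
    """Least-squares rotation angle (degrees) and mean offset between
    matched point sets, assuming a rotation + translation model."""
    n = len(src)
    cxs = sum(p[0] for p in src) / n
    cys = sum(p[1] for p in src) / n
    cxd = sum(p[0] for p in dst) / n
    cyd = sum(p[1] for p in dst) / n
    a = b = 0.0
    for (xs, ys), (xd, yd) in zip(src, dst):
        xs, ys = xs - cxs, ys - cys          # center both point sets
        xd, yd = xd - cxd, yd - cyd
        a += xs * xd + ys * yd               # cosine component
        b += xs * yd - ys * xd               # sine component
    angle = math.degrees(math.atan2(b, a))   # second rotation angle
    # second average offset coordinate: mean of the per-point offsets
    mean_off = (sum(q[0] - p[0] for p, q in zip(src, dst)) / n,
                sum(q[1] - p[1] for p, q in zip(src, dst)) / n)
    return angle, mean_off

# three correction points rotated by 30 degrees about the origin and shifted
theta = math.radians(30)
src = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
dst = [(x * math.cos(theta) - y * math.sin(theta) + 2.0,
        x * math.sin(theta) + y * math.cos(theta) + 3.0) for x, y in src]
angle, off = rigid_fit(src, dst)
```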
Step (5): calculating a second actual offset coordinate according to the second rotation angle;
specifically, the second actual offset coordinate is calculated according to the following formula:
(Δx, Δy) = (x_r + L·cos(θ + β) − x_c, y_r + L·sin(θ + β) − y_c)
wherein (Δx, Δy) is the second actual offset coordinate, L is the distance between the initial image center point and the second reference point, θ is the angle of the initial image center point relative to the second reference point, β is the second rotation angle, (x_r, y_r) are the coordinates of the second reference point, and (x_c, y_c) are the coordinates of the initial image center point; the second reference point is the correction point with the largest value of the ordinate among the at least three correction points.
Specifically, the second reference point is the correction point with the largest ordinate value among the at least three correction points; that is, the second reference point is the correction point, among the at least three correction points in the initial image captured when the image acquisition device moves to the preset position of the printed circuit board, whose center point has the largest ordinate value. (x_r, y_r) are the coordinates of the center point of the second reference point, (x_c, y_c) are the coordinates of the initial image center point, and the distance between the initial image center point and the second reference point is L = √((x_r − x_c)² + (y_r − y_c)²). In the embodiment of the application, the calculation method of the angle θ of the initial image center point relative to the second reference point is similar to that of the first rotation angle determined in step (1), and will not be described in detail herein.
Step (6): determining the position of the element to be tested in the printed circuit board according to the coordinates of the preset position and the second actual offset coordinates;
specifically, the abscissa value of the position of the element to be measured in the printed circuit board=the abscissa value of the preset position of the printed circuit board+the abscissa value of the second actual offset coordinate, and the ordinate value of the position of the element to be measured in the printed circuit board=the ordinate value of the preset position of the printed circuit board+the ordinate value of the second actual offset coordinate.
Step S304: controlling the image acquisition device to move to the position of the element to be detected in the printed circuit board so as to acquire the image of the element to be detected.
Specifically, the motion control device controls the image acquisition device to move to the position of the element to be detected in the printed circuit board so as to acquire the image of the element to be detected.
Step S202: determining the position of the element to be detected in the image according to the image of the element to be detected;
specifically, determining the position of the element to be measured in the image through color extraction processing includes:
performing color extraction processing on the image of the element to be detected, and extracting the characteristic color of the element to be detected;
marking the image of the element to be detected according to the characteristic color;
Performing binarization processing on the marked image of the element to be detected to obtain a binary image of the element to be detected, wherein the gray value of each pixel in the binary image is either 0 or 255;
and extracting the contours of the binary image and identifying the contour of the element to be detected according to a preset gray value, wherein the region enclosed by the largest contour is the position of the element to be detected in the image.
In the embodiment of the present application, the determining the position of the element to be measured in the image through template matching includes:
extracting and storing templates of areas where press-connection welding spots and press-connection wires are located in a sample graph of the element to be tested;
and performing template rotation and scaling matching operation on the image of the element to be detected according to the template to obtain the position and angle of the region of the element to be detected in the image.
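As an illustration of the matching step (ignoring rotation and scaling, which would repeat the same search over transformed copies of the template), here is a minimal sum-of-absolute-differences matcher over plain 2D lists; the function name and the toy data are illustrative.

```python
def match_template(image, template):
    """Return the (row, col) of the best match of `template` in `image`
    using the sum of absolute differences (SAD): lower is better."""
    ih, iw = len(image), len(image[0])
    th, tw = len(template), len(template[0])
    best, best_pos = None, None
    for r in range(ih - th + 1):          # slide the template over the image
        for c in range(iw - tw + 1):
            sad = sum(abs(image[r + i][c + j] - template[i][j])
                      for i in range(th) for j in range(tw))
            if best is None or sad < best:
                best, best_pos = sad, (r, c)
    return best_pos

image = [
    [0, 0, 0, 0, 0],
    [0, 9, 9, 0, 0],
    [0, 9, 9, 0, 0],
    [0, 0, 0, 0, 0],
]
template = [[9, 9], [9, 9]]
pos = match_template(image, template)
```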
Step S203: positioning the wedge-shaped welding spots according to the positions of the elements to be detected in the image, and determining the positions and the sizes of the wedge-shaped welding spots;
specifically, the wedge-shaped welding spots are positioned within a welding spot search window through color extraction processing or template matching, and the positions and sizes of the wedge-shaped welding spots are determined, wherein the welding spot search window is a basic function in the semiconductor AOI software programming module and is used to determine at which position of the printed circuit board to search for welding spots.
In this embodiment of the present application, the method for determining the position and the size of the wedge-shaped welding spot by performing color extraction processing or template matching is similar to the method for determining the position of the element to be measured in the image by performing color extraction processing or template matching, and will not be described herein.
Referring to fig. 5, fig. 5 is a schematic view of a wedge-shaped solder joint according to an embodiment of the present application;
as shown in fig. 5, the wedge bond pad includes a top region, a crimp region a, a crimp region B, and a cut region C. The top area is positioned in the middle of the wedge-shaped welding spot, the crimping area A and the crimping area B are areas formed by crimping two sides of the welding spot, and the cutting area C is an area formed by cutting the wire tail.
Referring to fig. 6, fig. 6 is a schematic diagram of a wedge-shaped solder joint according to an embodiment of the present application;
as shown in fig. 6, region 1 is the cut region of the wedge bond, region 2 is the crimp region of the wedge bond, region 3 is the top region of the wedge bond, and region 4 is the neck region of the wedge bond.
In an embodiment of the present application, after determining the position and the size of the wedge-shaped welding spot, the method further includes:
the direction of the wedge shaped weld is determined.
Specifically, the direction of the wedge-shaped welding spot is the direction of the tail of the cutting area of the wedge-shaped welding spot, and is defined according to the corresponding 1234 of the left upper part and the right lower part, for example: 1 indicates that the tail of the cutting area of the wedge-shaped welding spot is on the left, 2 indicates that the tail of the cutting area of the wedge-shaped welding spot is above, 3 indicates that the tail of the cutting area of the wedge-shaped welding spot is on the right, and 4 indicates that the tail of the cutting area of the wedge-shaped welding spot is below.
Referring to fig. 7, fig. 7 is a schematic view illustrating a direction of a wedge-shaped solder joint according to an embodiment of the present application;
as shown in part a of fig. 7, the tail of the cut area of the wedge bond pad is below, and at this time, the direction of the wedge bond pad is defined as 4.
In this embodiment of the present application, if the wedge-shaped welding spot is inclined, the direction of the wedge-shaped welding spot is determined after the wedge-shaped welding spot is rotated to 0 degrees, wherein the vertical direction represents 0 degrees, the direction when the wedge-shaped welding spot is rotated to 0 degrees is shown as a part a of fig. 7, and at this time, the direction of the wedge-shaped welding spot is defined as 4.
As shown in part b of fig. 7, the wedge-shaped welding spot is inclined with respect to the vertical direction by an angle of 15 degrees, and at this time, the tail of the cutting area of the wedge-shaped welding spot is at the lower right.
In the embodiment of the present application, the wedge-shaped welding spot shown in the b part of fig. 7 needs to be reversely rotated by 15 degrees, so that the direction of the wedge-shaped welding spot is determined when the wedge-shaped welding spot rotates to 0 degrees, wherein the image of the wedge-shaped welding spot in the b part of fig. 7 after being reversely rotated by 15 degrees is similar to that of the wedge-shaped welding spot in the a part of fig. 7, namely, the tail part of the cutting area of the wedge-shaped welding spot is below, and the direction is 4.
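The rotate-then-classify rule can be sketched as follows; the angle convention (0 degrees = right, counter-clockwise positive) and the function name are assumptions made for the illustration.

```python
def direction_code(tail_angle_deg, tilt_deg=0.0):
    """Direction code of the cut-area tail of a wedge weld spot:
    1 = left, 2 = up, 3 = right, 4 = down.
    tail_angle_deg -- angle of the tail in standard math convention
                      (0 = right, 90 = up, counter-clockwise positive)
    tilt_deg       -- tilt of the weld spot, rotated back to 0 degrees
                      before the direction is read off
    """
    a = (tail_angle_deg - tilt_deg) % 360.0   # reverse-rotate the spot to 0 degrees
    if 45.0 <= a < 135.0:
        return 2                              # tail above
    if 135.0 <= a < 225.0:
        return 1                              # tail on the left
    if 225.0 <= a < 315.0:
        return 4                              # tail below
    return 3                                  # tail on the right

# part b of fig. 7: spot tilted by 15 degrees, tail at the lower right;
# after reverse rotation by 15 degrees the tail points straight down
code = direction_code(tail_angle_deg=270.0 + 15.0, tilt_deg=15.0)
```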
Step S204: according to the position and the size of the wedge-shaped welding spot, carrying out partial segmentation on the wedge-shaped welding spot after positioning, and determining the top area and the crimping area of the wedge-shaped welding spot;
Specifically, referring to fig. 8, fig. 8 is a schematic diagram of a refinement flow of step S204 in fig. 2;
as shown in fig. 8, this step S204: according to the position and the size of the wedge-shaped welding spot, carrying out partial segmentation on the wedge-shaped welding spot after positioning to determine the top area and the crimping area of the wedge-shaped welding spot, wherein the method comprises the following steps:
step S241: acquiring an original image of the positioned wedge-shaped welding spot;
specifically, the original image of the wedge-shaped welding spot after positioning in the welding spot search window is an RGBA four-channel image, where RGBA represents the Red, Green, Blue and Alpha channels, and Alpha is an opacity parameter.
Referring to fig. 9, fig. 9 is a schematic diagram of an original image and a first image according to an embodiment of the present application;
wherein, the image of the part a of fig. 9 is an original image;
as shown in part a of fig. 9, the original image is an RGBA four-channel image.
Step S242: performing image format processing on the original image to obtain a first image;
specifically, the original image may be subjected to image format processing through the Mat structure in OpenCV (open source computer vision library), or the Image.convert function in the Python Imaging Library (PIL) may be used to perform image format processing on the original image to obtain a first image, where the first image is a BGR three-channel image whose three channels are, in order: Blue, Green, Red.
Referring to fig. 9 again, fig. 9 is a schematic diagram of an original image and a first image provided in an embodiment of the present application;
wherein the image of part b of fig. 9 is a first image;
as shown in part b of fig. 9, the first image is a BGR three-channel map.
Step S243: creating an initial mask map from the first image;
specifically, an initial mask map is created through OpenCV (open source computer vision library), where the size of the initial mask map is the same as that of the first image, the initial mask map is a single-channel map, and the background brightness, that is, the gray value is 0, that is, a pure black map.
Step S244: performing pixel binarization operation on the initial mask map according to the position and the size of the wedge-shaped welding spot to obtain a first mask map;
specifically, according to the position and the size of the wedge-shaped welding spot, a pixel binarization operation is performed on the area where the wedge-shaped welding spot is located in the initial mask image, filling that area with a brightness value (i.e. gray value) of 255, namely white, to obtain the first mask image.
Specifically, the binarization of the image pixels is to set the gray values of all pixel points on the image to 0 or 255, and the whole image is visually converted into a clear black-and-white effect graph. The purpose of image pixel binarization is to significantly reduce the amount of data in the image, thereby highlighting the contours of the object.
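Steps S243 and S244 can be sketched in a few lines of illustrative Python, operating on plain 2D lists rather than OpenCV Mat objects; the function names and the rectangle parameters are hypothetical.

```python
def create_initial_mask(height, width):
    """Step S243: single-channel mask the same size as the first image,
    with all gray values 0 (a pure black map)."""
    return [[0] * width for _ in range(height)]

def fill_weld_region(mask, x, y, w, h):
    """Step S244: fill the region where the wedge weld spot lies with
    gray value 255 (white) to obtain the first mask map."""
    for r in range(y, y + h):
        for c in range(x, x + w):
            mask[r][c] = 255
    return mask

# a 6x8 mask with a 3x4 weld-spot region filled white
mask = fill_weld_region(create_initial_mask(6, 8), x=2, y=1, w=3, h=4)
```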
Referring to fig. 10, fig. 10 is a schematic diagram of a first mask pattern and a second mask pattern according to an embodiment of the present application;
wherein the image of the portion a of fig. 10 is a first mask image;
as shown in part a of fig. 10, the first mask pattern is a pattern in which a foreground color is white and a background color is black.
Step S245: and dividing the top area and the crimping area of the wedge-shaped welding spot according to the first image and the first mask diagram.
Specifically, referring to fig. 11, fig. 11 is a schematic diagram of a refinement flow of step S245 in fig. 8;
as shown in fig. 11, this step S245: dividing a top region and a crimping region of the wedge-shaped welding spot according to the first image and the first mask diagram, comprising:
step S2451: carrying out gray scale processing on the first image to obtain a gray scale image;
referring to fig. 12, fig. 12 is a schematic diagram of a gray scale image, a second background image, and a first skeleton image according to an embodiment of the present disclosure;
the image of the portion a in fig. 12 is a gray scale image.
In the embodiment of the present application, in a BGR three-channel image, if R=G=B the color is a gray color, and the value of R=G=B is called the gray value; each pixel of a gray image therefore needs only one byte to store its gray value (also called intensity value or brightness value). The gray range is 0-255: a gray value of 255 represents the brightest (pure white), and a gray value of 0 represents the darkest (pure black).
In the embodiment of the present application, the gray scale processing includes, but is not limited to, the maximum value method, the average value method and the weighted average method. The benefits of gray scale processing are: compared with a color image, a gray image occupies less memory and is processed faster, and graying can visually increase contrast and highlight the target area.
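As an illustration of the weighted average method mentioned above, the sketch below grays a BGR image using the common Rec.601 weights (0.299 R + 0.587 G + 0.114 B); the choice of weights is an assumption, since the patent does not fix them.

```python
def to_gray(bgr_image):
    """Weighted-average graying of a BGR image (2D list of (B, G, R)
    tuples); the maximum-value and mean-value methods mentioned in the
    text differ only in the per-pixel formula used here."""
    return [[int(round(0.114 * b + 0.587 * g + 0.299 * r))
             for (b, g, r) in row]
            for row in bgr_image]

# white, black, red and green pixels, in BGR channel order
bgr = [[(255, 255, 255), (0, 0, 0)],
       [(0, 0, 255), (0, 255, 0)]]
gray = to_gray(bgr)
```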
Step S2452: processing the gray level image and the first mask image according to the self-adaptive threshold segmentation algorithm to obtain a self-adaptive threshold;
specifically, the gray level map and the first mask map are processed by adopting the Otsu method (OTSU algorithm) to obtain the adaptive threshold.
Specifically, the Otsu method is an adaptive threshold segmentation algorithm for image gray levels. It divides an image into a background part and a foreground part according to the distribution of gray values in the image, where the foreground is the part that needs to be segmented out by the threshold, and the boundary value between background and foreground is the required threshold. Different thresholds are traversed, and the between-class variance of the corresponding background and foreground is calculated under each threshold; when the between-class variance reaches its maximum value, the corresponding threshold is the threshold sought by the Otsu method (OTSU algorithm).
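A minimal pure-Python sketch of the Otsu method just described, working on a flat list of gray values via a histogram and maximizing the between-class variance; for simplicity it ignores the mask restriction used in the patent.

```python
def otsu_threshold(gray_values):
    """Otsu's method: choose the threshold t that maximizes the
    between-class variance of background (<= t) and foreground (> t)."""
    hist = [0] * 256
    for v in gray_values:
        hist[v] += 1
    total = len(gray_values)
    total_sum = sum(i * hist[i] for i in range(256))
    best_t, best_var = 0, -1.0
    w_b = sum_b = 0
    for t in range(256):
        w_b += hist[t]                   # background pixel count
        if w_b == 0:
            continue
        w_f = total - w_b                # foreground pixel count
        if w_f == 0:
            break
        sum_b += t * hist[t]
        mu_b = sum_b / w_b               # background mean
        mu_f = (total_sum - sum_b) / w_f # foreground mean
        var_between = w_b * w_f * (mu_b - mu_f) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# two well-separated brightness clusters: threshold lands between them
pixels = [10] * 50 + [200] * 50
t = otsu_threshold(pixels)
```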
Step S2453: extracting a region with a brightness value larger than the self-adaptive threshold value in the gray level image to obtain a first background image, and performing pixel binarization inversion operation on the first background image to obtain a second background image;
Referring to fig. 12 again, fig. 12 is a schematic diagram of a gray scale image, a second background image, and a first skeleton image according to an embodiment of the present application;
wherein the image of part b of fig. 12 is a second background image.
In the embodiment of the application, a first background image is obtained by extracting, through threshold segmentation, the region of the gray image whose brightness value is larger than the adaptive threshold, and a second background image is obtained by performing a pixel binarization inversion operation on the first background image. The pixel binarization inversion operation performs an inversion after the pixel binarization operation: pixel points with a gray value of 0 are set to a gray value of 255, and pixel points with a gray value of 255 are set to a gray value of 0. It can be realized by adding one inversion step on the basis of pixel binarization.
Step S2454: performing skeleton extraction operation on the second background image through an image refinement algorithm to obtain a first skeleton image;
referring to fig. 12 again, fig. 12 is a schematic diagram of a gray scale image, a second background image, and a first skeleton image according to an embodiment of the present application;
the image of the portion c in fig. 12 is a first skeleton diagram.
In the embodiment of the application, an image thinning algorithm (image thinning) finds the central axis, or skeleton, of a figure and replaces the figure with that skeleton. The most basic principle of thinning is to keep the connectivity of the figure; after thinning, the pixel width of the figure is 1. The thinning process peels the figure layer by layer, and the figure shrinks regularly as thinning proceeds. Representing an image by its skeleton can effectively reduce the amount of data, reduce the storage difficulty and recognition difficulty of the image, and reduce the computational cost of image processing.
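The patent does not name a specific thinning algorithm, so as one concrete illustration the sketch below uses the classic Zhang-Suen method, which peels a binary figure layer by layer down to a roughly 1-pixel-wide skeleton while preserving connectivity.

```python
def zhang_suen_thin(img):
    """Zhang-Suen thinning of a 2D list of 0/1 values (illustrative
    choice of refinement algorithm, not the patent's)."""
    h, w = len(img), len(img[0])
    img = [row[:] for row in img]

    def neighbours(y, x):
        # P2..P9, clockwise starting from the north neighbor
        return [img[y-1][x], img[y-1][x+1], img[y][x+1], img[y+1][x+1],
                img[y+1][x], img[y+1][x-1], img[y][x-1], img[y-1][x-1]]

    changed = True
    while changed:
        changed = False
        for step in (0, 1):
            to_clear = []
            for y in range(1, h - 1):
                for x in range(1, w - 1):
                    if img[y][x] != 1:
                        continue
                    p = neighbours(y, x)
                    b = sum(p)                             # nonzero neighbors
                    a = sum(p[i] == 0 and p[(i + 1) % 8] == 1
                            for i in range(8))             # 0->1 transitions
                    if step == 0:
                        cond = p[0]*p[2]*p[4] == 0 and p[2]*p[4]*p[6] == 0
                    else:
                        cond = p[0]*p[2]*p[6] == 0 and p[0]*p[4]*p[6] == 0
                    if 2 <= b <= 6 and a == 1 and cond:
                        to_clear.append((y, x))
            for y, x in to_clear:
                img[y][x] = 0
                changed = True
    return img

# a 3-pixel-thick horizontal bar thins down toward a 1-pixel line
bar = [[1 if 2 <= y <= 4 and 1 <= x <= 5 else 0 for x in range(7)]
       for y in range(7)]
thin = zhang_suen_thin(bar)
```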
Step S2455: performing outline extraction operation on the first skeleton diagram through an outline extraction algorithm to obtain a key skeleton diagram;
referring to fig. 13, fig. 13 is a schematic diagram of a key skeleton diagram and a top area of a wedge-shaped welding spot according to an embodiment of the present application;
the image of the portion a in fig. 13 is a key skeleton diagram.
In the embodiment of the application, the first skeleton diagram can be subjected to outline extraction operation through a Canny edge detection algorithm to obtain a key skeleton diagram, wherein the Canny edge detection algorithm comprises 5 steps, namely Gaussian filtering, pixel gradient calculation, non-maximum suppression, hysteresis threshold processing and isolated weak edge suppression.
In the embodiment of the application, the contour extraction algorithm includes, but is not limited to, a Canny edge detection algorithm, threshold segmentation, extraction of fourier transform high-frequency information, an ant colony algorithm, and the like.
Step S2456: determining region growing seeds in the key skeleton diagram through a region growing algorithm;
specifically, the region growing (region seeds growing, RSG) algorithm is a serial region segmentation method for image segmentation. Its basic idea is to merge pixels with similar properties together: a seed point is first designated for each region as the starting point of growth; the pixel points in the neighborhood of the seed point are then compared with the seed point, points with similar properties are merged, and the region continues to grow outwards until no more pixels satisfying the condition can be included, at which point the growth of one region is complete.
Specifically, the selection of the region growing seeds in the key skeleton diagram can be realized by adopting a manual interaction mode, for example: in python, seed selection is performed manually using a mouse based on OpenCV (open source computer vision library), or automatically, for example: dividing the key skeleton diagram into different areas through a threshold segmentation algorithm, and extracting different internal points in the different areas of the key skeleton diagram as area growth seeds.
Step S2457: and (5) carrying out segmentation treatment on the key skeleton diagram according to the region growing seeds to obtain the top region of the wedge-shaped welding spot.
Specifically, segmenting the key skeleton diagram according to the region growing seeds to obtain the top region of the wedge-shaped welding spot comprises the following steps:
Step 1, sequentially scanning the key skeleton diagram according to the region growing seeds, finding the first pixel that does not yet belong to any region, and denoting this pixel (x0, y0);
Step 2, taking (x0, y0) as the center, considering each neighbor pixel (x, y) of (x0, y0): if (x, y) satisfies the growth criterion, merging (x, y) with (x0, y0) (into the same region) and pushing (x, y) onto the stack;
Step 3, popping a pixel from the stack, treating it as (x0, y0), and returning to step 2;
Step 4, when the stack is empty, returning to step 1;
Step 5, repeating steps 1-4 until every point in the image has been assigned to a region; the growth then ends, yielding the top region of the wedge-shaped welding spot.
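The steps above can be sketched as a stack-based region growing routine in illustrative Python; 4-connectivity and the equality predicate used as the growth criterion are assumptions for the example.

```python
def region_grow(image, seed, predicate):
    """Grow one region from `seed` using an explicit stack, merging
    4-connected neighbor pixels that satisfy the growth criterion."""
    h, w = len(image), len(image[0])
    region = set()
    stack = [seed]
    while stack:
        x0, y0 = stack.pop()                    # step 3: take a pixel
        if (x0, y0) in region:
            continue
        region.add((x0, y0))                    # merge into the region
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            x, y = x0 + dx, y0 + dy             # step 2: examine neighbors
            if 0 <= y < h and 0 <= x < w and (x, y) not in region \
               and predicate(image[y][x]):
                stack.append((x, y))
    return region

img = [
    [0, 0, 0, 0],
    [0, 255, 255, 0],
    [0, 255, 0, 0],
    [0, 0, 0, 0],
]
top = region_grow(img, seed=(1, 1), predicate=lambda v: v == 255)
```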
Referring again to fig. 13, fig. 13 is a schematic diagram of a key skeleton diagram and a top area of a wedge-shaped welding spot according to an embodiment of the present application;
wherein the image of part b of FIG. 13 is a schematic view of the top region of the wedge bond pad;
as shown in part b of fig. 13, the area located in the middle part of the image and surrounded by the rectangular frame is the top area of the wedge-shaped welding spot.
Referring to fig. 14, fig. 14 is a schematic flow chart of a press-connection area with wedge-shaped welding spots divided according to an embodiment of the present application;
as shown in fig. 14, the process of dividing the crimping region of the wedge-shaped solder joint includes:
step S1401: performing pixel binarization inversion operation on the first mask map to obtain a second mask map;
specifically, a pixel with a gray value of 0 in the first mask is set to a gray value of 255 (i.e., white), and a pixel with a gray value of 255 is set to a gray value of 0 (i.e., black), so as to obtain the second mask.
Referring to fig. 10 again, fig. 10 is a schematic diagram of a first mask pattern and a second mask pattern according to an embodiment of the present application;
As shown in part b of fig. 10, the image of part b of fig. 10 is a second mask pattern, the second mask pattern is a pattern with black foreground color and white background color, and the size of the second mask pattern is the same as that of the first mask pattern of part a of fig. 10.
Step S1402: performing format conversion on the second mask map to obtain a third mask map;
specifically, the format of the second mask map is converted into the CV_32S format through OpenCV (open source computer vision library) to obtain a third mask map, where the third mask map is a single-channel map and the brightness of each coordinate position is represented by a 32-bit signed int, which can represent a much larger range of values (2 to the 32nd power) than a byte.
It will be appreciated that the conversion of the format of the second mask map to cv_32s format is mainly due to the watershed algorithm required in step S1403, which computes values that exceed the byte type (max 255), requiring that the incoming image be an image in cv_32s format.
Referring to fig. 15, fig. 15 is a schematic diagram of a third mask pattern, a first watershed image, and a first crimp region pattern according to an embodiment of the present application;
wherein the image of part a of fig. 15 is a third mask image;
as shown in part a of fig. 15, the third mask image is black to the naked eye due to the image format cv_32s, and actually has pixel information.
Step S1403: according to the position of the region growing seeds in the key skeleton diagram, processing the third mask diagram through a watershed algorithm to obtain a first watershed image;
specifically, the watershed algorithm is a segmentation method from mathematical morphology based on topological theory. Its basic idea is to regard the image as a topographic surface, where the gray value of each pixel represents the altitude at that point; each local minimum and its zone of influence is called a catchment basin, and the boundaries of the catchment basins form the watersheds. The concept and formation of watersheds can be illustrated by simulating an immersion process: a small hole is pierced at the surface of each local minimum, and the whole model is slowly immersed in water; as the immersion deepens, the zone of influence of each local minimum gradually expands outwards, and the dams constructed where two catchment basins meet form the watersheds.
Specifically, the calculation process of the watershed is an iterative labeling process, and comprises two steps, namely a sequencing process and a flooding process. The gray level of each pixel is firstly ordered from low to high, and then in the process of realizing flooding from low to high, the influence domain of each local minimum value at the h-order height is judged and marked by adopting a first-in first-out (FIFO) structure.
Referring to fig. 15 again, fig. 15 is a schematic diagram of a third mask pattern, a first watershed image, and a first crimp region pattern according to an embodiment of the present application;
wherein, part b of fig. 15 is a first watershed image;
as shown in part b of fig. 15, the first watershed image is black to the naked eye due to the image format of cv_32s, and actually has pixel information.
Step S1404: performing format conversion on the first watershed image to obtain a second watershed image;
specifically, the format of the first watershed image is converted into a CV_8U format through OpenCV (open source computer vision library), and a second watershed image is obtained, wherein the second watershed image is a single-channel image, and the brightness of each coordinate position is represented by a byte type (maximum 255).
It will be appreciated that each coordinate luminance value in the cv_8u format image is represented by a byte type, and each coordinate luminance value in the cv_32s format image is represented by an int, which is mainly a difference between the numerical ranges, and the type of int can represent a larger numerical value.
Step S1405: performing pixel binarization inversion operation on the second watershed image to obtain a third watershed image;
specifically, the pixel point with the gray value of 0 in the second watershed image is set to be the gray value of 255 (i.e. white), and the pixel point with the gray value of 255 is set to be the gray value of 0 (i.e. black), so as to obtain the third watershed image.
Step S1406: extracting a region with a brightness value larger than a preset brightness threshold value from the third watershed image to obtain a first compression joint region diagram;
specifically, threshold segmentation is performed through OpenCV (open source computer vision library), and a region with a brightness value greater than a preset brightness threshold in the third watershed image is extracted to obtain a first crimping region diagram, wherein the preset brightness threshold may be 250.
Referring to fig. 15 again, fig. 15 is a schematic diagram of a third mask pattern, a first watershed image, and a first crimp region pattern according to an embodiment of the present application;
wherein the image of part c of fig. 15 is a first crimp region map;
as shown in part c of fig. 15, two white areas in the first crimp region diagram are crimp regions on both sides of the wedge-shaped solder joint.
Step S1407: performing contour extraction operation on the first compression joint region graph through a contour extraction algorithm, and determining the contour of the compression joint region of the wedge-shaped welding spot;
specifically, the contour extraction operation is performed on the first crimp region map, and a hollowed-out internal point method in a contour extraction algorithm can be adopted: if one point in the first crimp zone map is black and its 8 adjacent points are all black, the point is deleted, thereby determining the profile of the crimp zone of the wedge-shaped weld.
Alternatively, the contour extraction operation is performed on the first crimp region map by a contour extraction function findContours in OpenCV (open source computer vision library), and the contour of the crimp region of the wedge-shaped solder joint is determined.
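The hollowed-out internal point method can be sketched as follows; the description above phrases it for black points, while this illustrative version takes white (gray value 255) as the foreground of the crimp region map, the logic being symmetric.

```python
def extract_contour(binary):
    """Hollowed-out internal point method: a foreground pixel whose
    8 neighbors are all foreground is an internal point and is deleted,
    leaving only the contour. Foreground is taken as 255 here."""
    h, w = len(binary), len(binary[0])
    out = [row[:] for row in binary]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if binary[y][x] == 255 and all(
                    binary[y + dy][x + dx] == 255
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)):
                out[y][x] = 0               # internal point removed
    return out

# a solid 3x3 white block: only its center pixel is internal
block = [[255 if 1 <= y <= 3 and 1 <= x <= 3 else 0 for x in range(5)]
         for y in range(5)]
contour = extract_contour(block)
```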
Step S1408: and obtaining the crimping area of the wedge-shaped welding spot according to the outline of the crimping area of the wedge-shaped welding spot.
Specifically, the contour of the crimping area of the wedge-shaped welding spot comprises a left side contour and a right side contour, the crimping area of the wedge-shaped welding spot comprises a left side crimping area and a right side crimping area, a closed area formed by the left side contour of the crimping area of the wedge-shaped welding spot is the left side crimping area, and a closed area formed by the right side contour of the crimping area of the wedge-shaped welding spot is the right side crimping area.
Step S205: and judging whether the quality of the wedge-shaped welding spot is qualified or not according to the top area and the crimping area of the wedge-shaped welding spot.
Specifically, referring to fig. 16, fig. 16 is a schematic diagram of a refinement flow of step S205 in fig. 2;
as shown in fig. 16, this step S205: judging whether the quality of the wedge-shaped welding spot is qualified or not according to the top area and the crimping area of the wedge-shaped welding spot, comprising:
step S2051: according to the top area and the crimping area of the wedge-shaped welding spot, calculating to obtain an evaluation parameter of the wedge-shaped welding spot;
Specifically, the evaluation parameters of the wedge-shaped welding spot include: the width and length of the top region, the average width and average length of the crimp region, the width and length of the weld spot overall, and the ratio of the area of the crimp region to the area of the weld spot overall.
In the embodiment of the application, the evaluation parameters of the wedge-shaped welding spot are obtained through calculation, and specifically include:
calculating the average width and the average length of the crimping area;
calculating the area of the crimping area according to the average width and average length of the crimping area and a Gaussian area formula;
fitting the top area of the wedge-shaped welding spot with the crimping area to obtain the whole outline of the wedge-shaped welding spot;
calculating the whole area of the wedge-shaped welding spot according to the whole outline of the wedge-shaped welding spot;
and calculating the proportion of the area of the crimping region to the whole area of the wedge-shaped welding spot according to the area of the crimping region and the whole area of the wedge-shaped welding spot.
Specifically, the widths and lengths of the left and right crimping regions are calculated by line scanning. For example: the central axis of the contour is found, a certain proportion of the head and tail ends is removed, abnormal portions are automatically filtered according to the set upper and lower width limits, and the remaining portion is scanned row by row, the width of the left-side crimping region and/or the right-side crimping region being calculated as width = sum(rowWidth)/rows; the width thus calculated is closer to the real width of the crimping region. Similarly, the lengths of the left-side crimping region and/or the right-side crimping region are calculated by column scanning according to the set upper and lower length limits.
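The row-scan width estimate can be sketched as follows (the trim ratio and width limits are illustrative values, not taken from the patent):

```python
import numpy as np

def crimp_region_width(mask, trim_ratio=0.1, w_min=1, w_max=50):
    """Row-scan width estimate for one crimp region: trim a proportion
    of rows at the head and tail ends, drop rows whose width falls
    outside the configured limits, then average the remainder
    (width = sum(rowWidth) / rows)."""
    widths = [int((row > 0).sum()) for row in mask]
    widths = [w for w in widths if w > 0]               # rows touched by the region
    k = int(len(widths) * trim_ratio)
    body = widths[k:len(widths) - k] if k else widths   # remove head/tail ends
    body = [w for w in body if w_min <= w <= w_max]     # filter abnormal rows
    return sum(body) / len(body)

mask = np.zeros((10, 64), dtype=np.uint8)
mask[:, 0:3] = 255        # nominal region width of 3 pixels on every row
mask[5, 0:60] = 255       # one abnormal row (e.g. a solder bridge)
width = crimp_region_width(mask)
```

The abnormal 60-pixel row exceeds the upper width limit and is filtered out, so the averaged result stays at the real 3-pixel width; the length estimate works the same way with a column scan.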
In the embodiment of the application, the width and length of the top region and the average width and average length of the crimping region can be calculated through a minimum circumscribed rectangle algorithm. The algorithm uses an equidistant rotation search: the image object is rotated at equal angular intervals within a range of 90 degrees, the parameters of the circumscribed rectangle of the object's contour in the coordinate-system direction are recorded at each step, and the minimum circumscribed rectangle is found by comparing the recorded rectangle areas. The lengths and widths of the two minimum circumscribed rectangles corresponding to the crimping regions on the two sides are obtained, and averaging the two lengths and the two widths respectively gives the average length and average width of the crimping region.
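A minimal sketch of the equidistant rotation search (the step size and the point-set example are illustrative; OpenCV's cv2.minAreaRect computes the same rectangle without an exhaustive search):

```python
import math

def min_area_rect(points, step_deg=1.0):
    """Equidistant rotation search: rotate the point set in equal steps
    over 90 degrees, record the axis-aligned bounding box at each step,
    and keep the rotation giving the smallest area.
    Returns (width, height) with width <= height."""
    best = None
    a = 0.0
    while a < 90.0:
        t = math.radians(a)
        xs = [x * math.cos(t) - y * math.sin(t) for x, y in points]
        ys = [x * math.sin(t) + y * math.cos(t) for x, y in points]
        w, h = max(xs) - min(xs), max(ys) - min(ys)
        if best is None or w * h < best[0]:
            best = (w * h, min(w, h), max(w, h))
        a += step_deg
    return best[1], best[2]

# Axis-aligned 4x2 rectangle standing in for one crimp-region contour.
rect = [(0.0, 0.0), (4.0, 0.0), (4.0, 2.0), (0.0, 2.0)]
w, h = min_area_rect(rect)
```

For the two crimp regions, running this on each contour and averaging the two widths and two lengths yields the average width and average length used by the evaluation parameters.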
Further, the Gaussian area formula (the shoelace formula) is combined: corresponding coordinates of successive vertices are cross-multiplied and the signed products are summed to obtain the area enclosed by the polygon, and the area of the crimping region is thereby calculated; the area of the top region can be obtained in the same way.
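The Gaussian (shoelace) area formula can be sketched directly (polygon examples are illustrative):

```python
def shoelace_area(poly):
    """Gaussian (shoelace) area formula: cross-multiply successive
    vertex coordinates around the polygon, sum the signed products,
    and halve the absolute value."""
    n = len(poly)
    s = 0.0
    for i in range(n):
        x0, y0 = poly[i]
        x1, y1 = poly[(i + 1) % n]
        s += x0 * y1 - x1 * y0
    return abs(s) / 2.0

rect_area = shoelace_area([(0, 0), (4, 0), (4, 3), (0, 3)])   # 4x3 rectangle
tri_area = shoelace_area([(0, 0), (4, 0), (0, 3)])            # right triangle
```

Applied to the crimp-region and top-region contours, this gives the areas fed into the proportion calculation.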
Further, contour fitting is performed on the top region and the crimping region of the wedge-shaped welding spot through the contour circumscribed-rectangle functions boundingRect and minAreaRect in OpenCV (open source computer vision library) to obtain the whole outline of the wedge-shaped welding spot; the width and length of the whole welding spot are determined through the minimum circumscribed rectangle algorithm, the whole area of the wedge-shaped welding spot is calculated in combination with the Gaussian area formula, and the proportion of the area of the crimping region to the whole area of the wedge-shaped welding spot is then calculated.
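A simplified sketch of the crimp-to-overall area proportion, using pixel counts of illustrative masks in place of the Gaussian-formula areas (mask layout and names are assumptions for illustration):

```python
import numpy as np

def crimp_to_total_ratio(top_mask, crimp_mask):
    """Fit the top region and the crimp regions into one overall
    weld-spot mask, then return the crimp area as a proportion of
    the overall weld-spot area (pixel counts stand in for the
    Gaussian-formula areas in this sketch)."""
    overall = np.logical_or(top_mask > 0, crimp_mask > 0)
    return int((crimp_mask > 0).sum()) / int(overall.sum())

top = np.zeros((8, 8), dtype=np.uint8)
top[2:6, 2:6] = 255            # 16-pixel top region in the middle
crimp = np.zeros((8, 8), dtype=np.uint8)
crimp[2:6, 0:2] = 255          # left crimp region, 8 pixels
crimp[2:6, 6:8] = 255          # right crimp region, 8 pixels
ratio = crimp_to_total_ratio(top, crimp)
```

Here the two crimp regions contribute half of the overall weld-spot area, so the proportion evaluated against the fourth evaluation interval is 0.5.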
In the embodiment of the present application, the width and length of the whole welding spot may also be obtained through image recognition model recognition, where the image recognition model is a convolutional neural network model, and the method for building and training the convolutional neural network model is the prior art and is not described herein again.
Step S2052: judging whether the evaluation parameter meets a first condition;
specifically, the first condition includes that the width and length of the top region are within a first evaluation interval, and the average width and average length of the crimp region are within a second evaluation interval, and the width and length of the entire weld spot are within a third evaluation interval, and the ratio of the area of the crimp region to the entire area of the weld spot is within a fourth evaluation interval.
The first evaluation interval, the second evaluation interval, the third evaluation interval and the fourth evaluation interval are preset in a memory of the motion control device, and can be obtained by measuring and averaging a plurality of wedge-shaped welding spots manually identified as qualified by a person skilled in the art.
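The first-condition check reduces to interval-membership tests; the interval values and parameter names below are purely hypothetical placeholders, not values from the patent:

```python
def weld_spot_qualified(params, intervals):
    """First condition: every evaluation parameter must fall inside
    its preset evaluation interval."""
    return all(lo <= params[k] <= hi for k, (lo, hi) in intervals.items())

# Hypothetical intervals, e.g. averaged from welds judged good by hand.
intervals = {
    "top_width": (10, 20), "top_length": (30, 60),
    "crimp_avg_width": (5, 12), "crimp_avg_length": (25, 55),
    "overall_width": (20, 40), "overall_length": (40, 80),
    "crimp_area_ratio": (0.3, 0.7),
}
good = {"top_width": 15, "top_length": 45, "crimp_avg_width": 8,
        "crimp_avg_length": 40, "overall_width": 30,
        "overall_length": 60, "crimp_area_ratio": 0.5}
bad = dict(good, crimp_area_ratio=0.9)   # ratio outside the fourth interval
```

A weld whose every parameter lies in its interval is judged qualified (step S2053); any out-of-interval parameter makes it unqualified (step S2054).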
In the embodiment of the present application, if the evaluation parameter satisfies the first condition, the process proceeds to step S2053: determining the quality qualification of wedge-shaped welding spots; if the evaluation parameter does not satisfy the first condition, the process advances to step S2054: and determining that the quality of the wedge-shaped welding spot is unqualified.
Step S2053: and determining that the quality of the wedge-shaped welding spots is qualified.
Step S2054: and determining that the quality of the wedge-shaped welding spot is unqualified.
In an embodiment of the present application, a method for detecting a wedge-shaped solder joint is provided, including: acquiring an image of a component to be tested; determining the position of the element to be detected in the image according to the image of the element to be detected; positioning the wedge-shaped welding spots according to the positions of the elements to be detected in the image, and determining the positions and the sizes of the wedge-shaped welding spots; according to the position and the size of the wedge-shaped welding spot, carrying out partial segmentation on the wedge-shaped welding spot after positioning, and determining the top area and the crimping area of the wedge-shaped welding spot; and judging whether the quality of the wedge-shaped welding spot is qualified or not according to the top area and the crimping area of the wedge-shaped welding spot.
By positioning and partially segmenting the wedge-shaped welding spot, determining its top area and crimping area, and judging whether its quality is qualified according to the top area and the crimping area, the method can position and partially segment the wedge-shaped welding spot more rapidly and accurately, and thereby judge whether the quality of the wedge-shaped welding spot is qualified, eliminating the complexity, inefficiency and inaccuracy of discrimination by manual visual inspection under a microscope.
Referring to fig. 17 again, fig. 17 is a schematic structural diagram of a motion control device according to an embodiment of the present disclosure;
as shown in fig. 17, the motion control device 170 includes one or more processors 171 and a memory 172. In fig. 17, a processor 171 is taken as an example.
The processor 171 and the memory 172 may be connected by a bus or in another manner; in fig. 17, connection by a bus is taken as an example.
The processor 171 is configured to provide computing and control capabilities for controlling the motion control device 170 to perform corresponding tasks, for example, controlling the motion control device 170 to perform the method for detecting a wedge-shaped welding spot in any of the method embodiments described above, including: acquiring an image of a component to be tested; determining the position of the element to be detected in the image according to the image of the element to be detected; positioning the wedge-shaped welding spots according to the positions of the elements to be detected in the image, and determining the positions and the sizes of the wedge-shaped welding spots; according to the position and the size of the wedge-shaped welding spot, carrying out partial segmentation on the wedge-shaped welding spot after positioning, and determining the top area and the crimping area of the wedge-shaped welding spot; and judging whether the quality of the wedge-shaped welding spot is qualified or not according to the top area and the crimping area of the wedge-shaped welding spot.
By positioning and partially segmenting the wedge-shaped welding spot, determining its top area and crimping area, and judging whether its quality is qualified according to the top area and the crimping area, the device can position and partially segment the wedge-shaped welding spot more rapidly and accurately, and thereby judge whether the quality of the wedge-shaped welding spot is qualified, eliminating the complexity, inefficiency and inaccuracy of discrimination by manual visual inspection under a microscope.
The processor 171 may be a general-purpose processor, including a central processing unit (Central Processing Unit, CPU), a network processor (Network Processor, NP), a hardware chip, or any combination thereof; it may also be a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a programmable logic device (programmable logic device, PLD), or a combination thereof. The PLD may be a complex programmable logic device (complex programmable logic device, CPLD), a field-programmable gate array (field-programmable gate array, FPGA), generic array logic (generic array logic, GAL), or any combination thereof.
The memory 172, as a non-transitory computer-readable storage medium, may be used to store non-transitory software programs, non-transitory computer-executable programs, and modules, such as the program instructions/modules corresponding to the wedge-shaped welding spot detection method in the embodiments of the present application. The processor 171 can implement the wedge-shaped welding spot detection method of any of the method embodiments described above by running the non-transitory software programs, instructions, and modules stored in the memory 172. In particular, the memory 172 may include volatile memory (Volatile Memory, VM), such as random access memory (Random Access Memory, RAM); the memory 172 may also include non-volatile memory (Non-Volatile Memory, NVM), such as read-only memory (Read-Only Memory, ROM), flash memory, a hard disk drive (HDD), a solid-state drive (SSD), or other non-transitory solid-state storage devices; the memory 172 may also include a combination of the above types of memory.
The memory 172 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some embodiments, memory 172 may optionally include memory located remotely from processor 171, which may be connected to processor 171 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
One or more modules are stored in the memory 172 that, when executed by the one or more processors 171, perform the method of wedge bond pad detection in any of the method embodiments described above, for example, performing the various steps described above and shown in fig. 2.
In the embodiment of the present application, the motion control device 170 may further include other components for implementing the functions of the apparatus, which are not described herein.
Embodiments of the present application also provide a computer program product comprising one or more program codes stored in a computer-readable storage medium. The program code is read from the computer readable storage medium by a processor of the electronic device, which is executed by the processor to perform the method steps of the wedge bond pad detection method provided in the above embodiments.
It will be appreciated by those of ordinary skill in the art that all or part of the steps of implementing the above embodiments may be implemented by hardware, or by hardware instructed by program code; the program may be stored in a computer-readable storage medium, and the storage medium may be a read-only memory, a magnetic disk, an optical disk, or the like.
From the above description of embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus a general-purpose hardware platform, or may be implemented by hardware. Those skilled in the art will appreciate that all or part of the processes implementing the methods of the above embodiments may be implemented by a computer program instructing relevant hardware; the program may be stored in a computer-readable storage medium and, when executed, may include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), or the like.
Finally, it should be noted that the above embodiments are only for illustrating the technical solutions of the present application and are not limiting thereof; the technical features of the above embodiments, or of different embodiments, may also be combined under the idea of the present application, the steps may be implemented in any order, and many other variations of the different aspects of the present application exist as described above, which are not provided in detail for the sake of brevity. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments can still be modified, or some of their technical features can be replaced by equivalents, and such modifications and substitutions do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present application.

Claims (9)

1. The wedge-shaped welding spot detection method is characterized by comprising the following steps of:
acquiring an image of a component to be tested;
determining the position of the element to be detected in the image according to the image of the element to be detected;
positioning the wedge-shaped welding spot according to the position of the element to be detected in the image, and determining the position and the size of the wedge-shaped welding spot;
according to the position and the size of the wedge-shaped welding spot, carrying out partial segmentation on the wedge-shaped welding spot after positioning, and determining a top area and a compression joint area of the wedge-shaped welding spot, wherein the top area is positioned in the middle of the wedge-shaped welding spot, and the compression joint area A and the compression joint area B are areas formed by compression joint on two sides of the wedge-shaped welding spot;
acquiring evaluation parameters according to the top area and the crimping area of the wedge-shaped welding spot, and judging whether the quality of the wedge-shaped welding spot is qualified or not based on the evaluation parameters;
the step of judging whether the quality of the wedge-shaped welding spot is qualified based on the evaluation parameter comprises the following steps:
according to the top area and the crimping area of the wedge-shaped welding spot, calculating and obtaining the evaluation parameters of the wedge-shaped welding spot;
if the evaluation parameters meet the first condition, determining that the quality of the wedge-shaped welding spot is qualified, wherein the first condition comprises that the width and the length of the top area are in a first evaluation interval, the average width and the average length of the crimping area are in a second evaluation interval, the width and the length of the whole welding spot are in a third evaluation interval, and the proportion of the area of the crimping area to the whole welding spot area is in a fourth evaluation interval;
And if the evaluation parameter does not meet the first condition, determining that the quality of the wedge-shaped welding spot is unqualified.
2. The method of claim 1, wherein the locally segmenting the wedge shaped weld after positioning comprises:
acquiring an original image of the wedge-shaped welding spot after positioning;
performing image format processing on the original image to obtain a first image;
creating an initial mask map according to the first image, wherein the initial mask map has the same size as the first image;
according to the position and the size of the wedge-shaped welding spot, performing pixel binarization operation on the initial mask map to obtain a first mask map;
and dividing the top area and the crimping area of the wedge-shaped welding spot according to the first image and the first mask diagram.
3. The method of claim 2, wherein the segmenting the top region of the wedge bond pad from the first image and the first mask map comprises:
carrying out gray scale processing on the first image to obtain a gray scale image;
processing the gray level image and the first mask image according to an adaptive threshold segmentation algorithm to obtain an adaptive threshold;
Extracting a region with a brightness value larger than the self-adaptive threshold value from the gray level image to obtain a first background image, and performing pixel binarization inversion operation on the first background image to obtain a second background image;
performing skeleton extraction operation on the second background image through an image refinement algorithm to obtain a first skeleton image;
performing outline extraction operation on the first skeleton diagram through an outline extraction algorithm to obtain a key skeleton diagram;
determining region growing seeds in the key skeleton diagram through a region growing algorithm;
and dividing the key skeleton diagram according to the region growing seeds to obtain the top region of the wedge-shaped welding spot.
4. The method of claim 3, wherein the segmenting the crimp zone of the wedge bond pad from the first image and the first mask map comprises:
performing pixel binarization inversion operation on the first mask map to obtain a second mask map;
performing format conversion on the second mask map to obtain a third mask map;
according to the position of the region growing seeds in the key skeleton diagram, processing the third mask diagram through a watershed algorithm to obtain a first watershed image;
Performing format conversion on the first watershed image to obtain a second watershed image;
performing pixel binarization inversion operation on the second watershed image to obtain a third watershed image;
extracting a region with a brightness value larger than a preset brightness threshold value from the third watershed image to obtain a first compression joint region diagram;
performing contour extraction operation on the first compression joint region graph through a contour extraction algorithm, and determining the contour of the compression joint region of the wedge-shaped welding spot;
and obtaining the crimping area of the wedge welding spot according to the profile of the crimping area of the wedge welding spot, wherein the crimping area comprises a left side crimping area and a right side crimping area, and the profile of the crimping area of the wedge welding spot comprises a left side profile and a right side profile.
5. The method according to claim 1, wherein the method further comprises:
calculating the average width and the average length of the crimping area;
calculating the area of the crimping area according to the average width and average length of the crimping area and a Gaussian area formula;
fitting the top area of the wedge-shaped welding spot with the crimping area to obtain the integral outline of the wedge-shaped welding spot;
calculating the whole area of the wedge-shaped welding spot according to the whole outline of the wedge-shaped welding spot;
And calculating the proportion of the area of the crimping region to the whole area of the wedge-shaped welding spot according to the area of the crimping region and the whole area of the wedge-shaped welding spot.
6. The method according to any one of claims 1-5, wherein the method is applied to a motion control device for controlling the motion of an image acquisition device to a preset position, the motion control device being connected to the image acquisition device for acquiring an image of a printed circuit board, the method comprising, prior to said acquiring an image of the component to be tested:
controlling the image acquisition device to move to a preset position of a printed circuit board to shoot an image, and obtaining an initial image, wherein the printed circuit board comprises at least one correction point;
identifying a correction point in the initial image through an image algorithm, and calculating offset coordinates and rotation angles of a center point of the correction point;
determining the position of the element to be tested in the printed circuit board according to the offset coordinate and the rotation angle of the center point of the correction point;
and controlling the image acquisition device to move to the position of the element to be detected in the printed circuit board so as to acquire the image of the element to be detected.
7. The method of claim 6, wherein the method further comprises:
if the number of the correction points is two, determining a first rotation angle according to offset coordinates of center points of the two correction points, wherein the first rotation angle is obtained through calculation of an anti-cotangent function;
calculating according to the first rotation angle to obtain a first actual offset coordinate;
determining the position of the element to be tested in the printed circuit board according to the coordinates of the preset position and the first actual offset coordinates;
the first actual offset coordinate is calculated according to the following formula:

Figure QLYQS_1

wherein Figure QLYQS_2 is the first actual offset coordinate, Figure QLYQS_3 is the distance between the initial image center point and the first reference point, Figure QLYQS_4 is the angle of the initial image center point relative to the first reference point, Figure QLYQS_5 is the first rotation angle, Figure QLYQS_6 is the coordinates of the first reference point, and Figure QLYQS_7 is the coordinates of the center point of the initial image; the first datum point is the correction point with the largest value of the ordinate among the two correction points.
8. The method of claim 7, wherein the method further comprises:
if the number of the correction points is at least three, calculating to obtain a second rotation angle through an affine transformation algorithm;
Calculating a second actual offset coordinate according to the second rotation angle;
determining the position of the element to be tested in the printed circuit board according to the coordinates of the preset position and the second actual offset coordinates;
the second actual offset coordinate is calculated according to the following formula:

Figure QLYQS_8

wherein Figure QLYQS_9 is the second actual offset coordinate, Figure QLYQS_10 is the distance between the initial image center point and the second reference point, Figure QLYQS_11 is the angle of the initial image center point relative to the second reference point, Figure QLYQS_12 is the second rotation angle, Figure QLYQS_13 is the coordinates of the second reference point, and Figure QLYQS_14 is the coordinates of the center point of the initial image; the second reference point is the correction point with the largest value of the ordinate among the at least three correction points.
9. A motion control apparatus, comprising:
at least one processor; the method comprises the steps of,
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-8.
CN202310193688.9A 2023-03-03 2023-03-03 Wedge-shaped welding spot detection method and motion control device Active CN115876786B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310193688.9A CN115876786B (en) 2023-03-03 2023-03-03 Wedge-shaped welding spot detection method and motion control device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310193688.9A CN115876786B (en) 2023-03-03 2023-03-03 Wedge-shaped welding spot detection method and motion control device

Publications (2)

Publication Number Publication Date
CN115876786A CN115876786A (en) 2023-03-31
CN115876786B true CN115876786B (en) 2023-05-26

Family

ID=85761839

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310193688.9A Active CN115876786B (en) 2023-03-03 2023-03-03 Wedge-shaped welding spot detection method and motion control device

Country Status (1)

Country Link
CN (1) CN115876786B (en)

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105738202B (en) * 2014-07-29 2018-03-16 金华辉煌三联工具实业有限公司 A kind of method for carrying out intensity detection to guide plate solder joint using pressure testing machine
CN105606620A (en) * 2016-01-29 2016-05-25 广州立为信息技术服务有限公司 PCBA welding spot detection method and system based on vision
CN113644181B (en) * 2020-05-11 2024-04-16 京东方科技集团股份有限公司 Light-emitting substrate, manufacturing method thereof and display device
CN113344929B (en) * 2021-08-09 2021-11-05 深圳智检慧通科技有限公司 Welding spot visual detection and identification method, readable storage medium and equipment
CN114299086B (en) * 2021-12-24 2023-05-26 深圳明锐理想科技有限公司 Image segmentation processing method, electronic equipment and system for low-contrast imaging

Also Published As

Publication number Publication date
CN115876786A (en) 2023-03-31

Similar Documents

Publication Publication Date Title
CN110678901B (en) Information processing apparatus, information processing method, and computer-readable storage medium
CN109472271B (en) Printed circuit board image contour extraction method and device
CN110930390B (en) Chip pin missing detection method based on semi-supervised deep learning
CN111598889B (en) Identification method and device for inclination fault of equalizing ring and computer equipment
CN111915704A (en) Apple hierarchical identification method based on deep learning
CN110596120A (en) Glass boundary defect detection method, device, terminal and storage medium
CN106501272B (en) Machine vision soldering tin positioning detection system
CN115791822A (en) Visual detection algorithm and detection system for wafer surface defects
CN111539927B (en) Detection method of automobile plastic assembly fastening buckle missing detection device
CN112883881B (en) Unordered sorting method and unordered sorting device for strip-shaped agricultural products
CN115018846B (en) AI intelligent camera-based multi-target crack defect detection method and device
KR20080032856A (en) Recognition method of welding line position in shipbuilding subassembly stage
CN113298769B (en) FPC flexible flat cable appearance defect detection method, system and medium
CN115170669A (en) Identification and positioning method and system based on edge feature point set registration and storage medium
CN113034474A (en) Test method for wafer map of OLED display
CN114331986A (en) Dam crack identification and measurement method based on unmanned aerial vehicle vision
CN112115948A (en) Chip surface character recognition method based on deep learning
CN110866915A (en) Circular inkstone quality detection method based on metric learning
CN111861979A (en) Positioning method, positioning equipment and computer readable storage medium
CN115760820A (en) Plastic part defect image identification method and application
CN114863492A (en) Method and device for repairing low-quality fingerprint image
CN107239761B (en) Fruit tree branch pulling effect evaluation method based on skeleton angular point detection
CN115876786B (en) Wedge-shaped welding spot detection method and motion control device
CN108205641B (en) Gesture image processing method and device
CN117152165A (en) Photosensitive chip defect detection method and device, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP03 Change of name, title or address

Address after: 518000 Floor 5-6, Unit B, Building B4, Guangming Science Park, China Merchants Bureau, Fenghuang Community, Fenghuang Street, Guangming District, Shenzhen, Guangdong

Patentee after: Shenzhen Mingrui Ideal Technology Co.,Ltd.

Country or region after: China

Address before: 518000 floor 6, unit B, building B4, Guangming science and Technology Park, China Merchants Bureau, sightseeing Road, Fenghuang community, Fenghuang street, Guangming District, Shenzhen, Guangdong Province

Patentee before: SHENZHEN MAGIC-RAY TECHNOLOGY Co.,Ltd.

Country or region before: China
