CN110211183B - Multi-target positioning system based on single-imaging large-view-field LED lens mounting


Info

Publication number: CN110211183B
Application number: CN201910511605.XA
Authority: CN (China)
Prior art keywords: target, positioning, image, sample, aluminum substrate
Legal status: Active (the legal status is an assumption and is not a legal conclusion)
Other languages: Chinese (zh)
Other versions: CN110211183A
Inventors: 钟球盛, 侯文峰, 吴隽, 范钰淮, 李志瑶, 娄身龙
Current and original assignee: Guangzhou Panyu Polytechnic
Application filed by Guangzhou Panyu Polytechnic
Priority: CN201910511605.XA; publication of application CN110211183A; application granted; publication of grant CN110211183B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; edge detection
    • G06T 7/12 Edge-based segmentation
    • G06T 7/136 Segmentation; edge detection involving thresholding
    • G06T 7/60 Analysis of geometric attributes
    • G06T 7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20092 Interactive image processing based on input by user
    • G06T 2207/20104 Interactive definition of region of interest [ROI]
    • G06T 2207/30108 Industrial image inspection
    • G06T 2207/30152 Solder

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)
  • Manipulator (AREA)

Abstract

The invention discloses a multi-target positioning system for large-field-of-view LED lens mounting based on single imaging, comprising large-format image acquisition, target positioning and position correction. The invention also provides a multi-target positioning method, consisting of a processing procedure for the first (standard) sample and a procedure for the second to Nth samples. Standard-sample procedure: acquire an aluminum-substrate image, set the target and Mark-point regions, segment the image, perform BLOB analysis, extract features, position the targets, locate the Mark-point centers, establish the Mark-point connecting line and obtain its inclination angle. Procedure for the second to Nth samples: acquire an image, locate the Mark-point centers and the midpoint of the Mark-point line segment, obtain the angular and displacement offsets of the sample, construct a rigid transformation matrix, and apply an affine transformation to the multiple target points of the standard sample. The system and method provide high positioning speed, full automation without manual teaching intervention, reliable target-point acquisition, and continuous production.

Description

Multi-target positioning system based on single-imaging large-view-field LED lens mounting
Technical Field
The invention relates to the technical field of multi-target positioning, and in particular to a multi-target positioning system for large-field-of-view LED aluminum-substrate lens-mounting pads based on single imaging. The invention further provides a multi-target positioning method, in particular a center-positioning and position-correction technique. Used together, the system and method achieve multi-target positioning of large-field-of-view LED lens-mounting pin pads.
Background
With the development of industry, multi-target positioning and identification technology has become a market demand in industrial applications. As technology advances, manual visual positioning and identification of targets is gradually being phased out, but current positioning and identification technology is still not mature: regional multi-target positioning in domestic industrial applications suffers from large operating errors, low detection precision and poor work continuity. In particular, the multi-target positioning function of existing domestic dispensers exhibits a complex system structure, low precision, poor stability, large operating errors, low detection precision and poor work continuity.
In industrial applications, an industrial camera photographs PCB aluminum-substrate pads under LED illumination, but the technology for regional multi-target positioning and identification is not yet perfected: current multi-target positioning and identification suffers from a small positioning-target range, poor stability, short working time, low detection speed, low detection precision and a heavy algorithmic computation load.
Chinese patent (CN 109201413A) discloses a visual-positioning glue-dispensing system comprising a camera mechanism, a dispensing mechanism, a three-axis mechanism, a computer and a controller. It also discloses a visual-positioning dispensing method with the following steps: photograph the sheet-shaped part; read the photograph to establish an image coordinate system and determine the conversion relations among the coordinate systems; perform threshold segmentation on the image of the area to be dispensed and extract the dispensing positions; transmit the dispensing-position data to the controller, which drives the three-axis system to the designated positions to complete dispensing. The system suffers from low detection speed, a small positioning range and a high imaging-acquisition frequency.
Chinese patent (CN 105964492A) relates to an automatic glue dispenser comprising a base, a positioning part, a dispensing part and a main panel cover. The positioning part mainly comprises a positioning plate; a secondary positioning plate arranged at the upper end of the positioning plate; an adjusting rod arranged on the side surface of the secondary positioning plate; a locking part arranged at the front end of the secondary positioning plate; a magnetic positioning seat arranged at the upper end of the secondary positioning plate; a positioning block arranged inside the magnetic positioning seat; a connecting plate arranged at the lower end of the main positioning plate; a guide rail arranged at the lower end of the connecting plate; a rodless cylinder arranged at the lower end of the positioning plate; and a buffer arranged at the lower end of the positioning plate. However, the method has low detection precision, high cost and a complex system structure.
Chinese patent [ CN202102970U ] discloses a relay glue dispenser with a positioning device, comprising a body with a feeding track mounted on its upper end. A supporting plate is mounted on one side of the feeding track, and a bracket perpendicular to the feeding track is mounted on the upper end of the supporting plate; a carrying claw is slidably mounted on the upper end of the bracket. A limiting plate is mounted between the feeding track and the supporting plate, and several vertically telescopic positioning columns are slidably mounted on its upper end, each with a rubber pad fixed at its front end; the front end of the carrying claw is provided with several clamping grooves. The limiting plate is higher than the carrying claw, the carrying claw extends and retracts above the feeding track, and the positioning columns perform the multi-target positioning. This system has low positioning accuracy, a complex system structure and poor work continuity.
European patent [ EP2066166A1 ] relates to a positioning system, a method of compensating for thermal-expansion variations in the positioning system, and a component mounting machine and jet dispenser including such a positioning system. The positioning system comprises an elongated beam extending along an axis (X), whose shape changes due to thermal expansion during operation. The system comprises a positioning unit movably suspended on the beam, a motor providing movement of the positioning unit along the axis (X), and a work head mounted on or integrated with the positioning unit. A fixed-shape elongated reference element is provided, extending longitudinally along the axis (X) substantially parallel to the beam. At a plurality of positions along the axis (X), reference distances between the positioning unit and the reference element are measured, and the distances measured during positioning are used to compensate for thermal expansion. However, the system has high cost, poor positioning efficiency and low detection precision.
In summary, existing multi-target positioning systems for industrial use still have various disadvantages, especially in single-imaging target positioning, which makes them difficult to apply widely on industrial production lines. The invention therefore aims to provide a single-imaging-based multi-target positioning system and method for large-field-of-view LED aluminum-substrate lens-mounting pads.
Disclosure of Invention
The invention aims to provide a multi-target positioning system composed of a camera acquisition part, a mechanism positioning part, and a detection and identification part. In this system, under a stable illumination environment, an industrial camera acquires images of the lens-mounting pads of an LED aluminum substrate. The image acquisition platform mainly comprises an industrial camera (120), an LED aluminum substrate pad (121), a sensor (122) and a guide rail (123); the working environment of the camera imaging platform is as follows:
the imaging object is the surface area of the LED aluminum substrate pad (121); the guide rail (123) is located directly in front of the aluminum substrate (121), the sensor (122) is arranged at the side of the aluminum substrate (121), and the industrial camera (120) is vertically fixed directly above the LED aluminum substrate pad (121). The transmission of the guide rail (123) moves the LED aluminum substrate pad (121) forward; when the pad (121) enters the sensing range of the sensor (122) and the sensor detects the target position on the pad, the sensor sends a signal instructing the camera to acquire an image.
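The sensor-triggered acquisition described above can be sketched as a simple control loop. This is purely illustrative: the `Sensor` and `Camera` classes below are hypothetical stand-ins for the real hardware interfaces, which the patent does not specify.

```python
class Sensor:
    """Simulated side-mounted sensor (122): fires when the pad enters range."""
    def __init__(self, trigger_position):
        self.trigger_position = trigger_position

    def in_range(self, pad_position):
        return pad_position >= self.trigger_position


class Camera:
    """Simulated industrial camera (120): acquires one frame per trigger."""
    def __init__(self):
        self.frames = []

    def acquire(self, pad_position):
        self.frames.append(("frame", pad_position))


def run_rail(sensor, camera, positions):
    """Move the pad (121) along the rail (123); trigger a single acquisition."""
    for pos in positions:
        if sensor.in_range(pos):
            camera.acquire(pos)
            break  # single imaging: one frame covers the whole large field
    return camera.frames


frames = run_rail(Sensor(trigger_position=10), Camera(), positions=range(20))
```

The `break` mirrors the patent's key point: one imaging event per substrate, rather than repeated small-field acquisitions.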
The system acquires images with the camera and performs multi-target positioning on the acquired images to obtain the center coordinates of multiple targets. Used together, the system and method solve the problems of low positioning speed, low automation and insufficient stability of positioning systems in operation.
The multi-target positioning method of the large-view-field LED lens mounting pad based on single imaging specifically comprises the following positioning steps:
step 1, carrying out image acquisition on a PCB aluminum substrate pad through an industrial camera to obtain image data of a first sample;
step 2, splitting the original image into an RGB channel and an HSV channel;
step 3, performing edge detection on the RGB and HSV channels of the original image, and selecting the channel with the best contrast;
step 4, setting ROI areas of the target points and setting ROI areas of the two Mark points;
step 5, segmenting the ROI image of each target point with the optimal threshold method, and segmenting the Mark-point ROI images using the gray average value, to realize image segmentation;
step 6, executing connectivity characteristic Blob analysis;
step 7, extracting the pad regions according to features such as area, length, width, rectangularity and circularity, and performing center positioning on the multiple targets;
step 8, determining the central coordinates of Mark points according to the gray average value;
step 9, storing the target coordinate value data (X_k, Y_k) of the plurality of dispensing points, wherein subscript k represents the k-th target point;
step 10, converting pixel coordinates of a plurality of target coordinate value data of the dispensing points into mechanical coordinates of robot motion;
step 11, obtaining and storing the inclination angle θ_0 of the established straight-line segment;
step 12, calculating the midpoint coordinates (X_0, Y_0) of the straight-line segment connecting the two Mark points, saving the inclination angle and midpoint coordinates, and ending the processing flow for the first sample (standard sample);
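Steps 2 to 8 above can be sketched in a few lines of NumPy. This is a minimal illustration, not the patent's implementation: channel contrast is scored here by gray-level standard deviation (the patent only says "best contrast"), the Mark-point ROI is thresholded at its gray mean as step 5 describes, and the synthetic image and helper names are assumptions.

```python
import numpy as np

def best_contrast_channel(img):
    """Steps 2-3: split into channels and pick the one with highest contrast,
    scored here by gray-level standard deviation (an assumption)."""
    channels = {f"ch{i}": img[..., i] for i in range(img.shape[-1])}
    return max(channels, key=lambda name: channels[name].std())

def segment_mark_roi(roi):
    """Step 5: segment a Mark-point ROI by thresholding at its gray mean."""
    return roi > roi.mean()

def blob_center(mask):
    """Steps 6-8: centroid of the foreground pixels (mean pixel coordinate)."""
    ys, xs = np.nonzero(mask)
    return xs.mean(), ys.mean()

# Synthetic 100x100 image with one bright square "Mark point" at rows/cols 40..59;
# only the third channel carries signal, so it has the best contrast.
img = np.zeros((100, 100, 3), dtype=np.uint8)
img[40:60, 40:60, 2] = 255

ch = best_contrast_channel(img)                       # -> "ch2"
mask = segment_mark_roi(img[..., 2].astype(float))
cx, cy = blob_center(mask)                            # -> (49.5, 49.5)
```

A production version would replace the centroid step with full Blob analysis (area, length, width, rectangularity, circularity filtering, step 7) before accepting a region as a pad.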
step 13, obtaining images of second to N samples (current samples);
step 14, splitting the current image into an RGB channel and an HSV channel;
step 15, selecting a channel with the maximum contrast according to the target contrast;
step 16, selecting an ROI according to a Mark point region set in the first sample;
step 17, for the two Mark point areas, dividing the sub-images of the Mark point areas by adopting a method based on gray average;
step 18, performing connected region characteristic Blob analysis, and extracting a Mark point target region according to the area characteristics;
step 19, determining the central coordinate of the Mark point according to the coordinate average value of the target pixel;
step 20, for the second to N samples, adopting a simplified positioning strategy, by which the dispensing-point target coordinates of the current sample and its positional difference from the first sample (i.e., the angle deviation and the offset) can be obtained quickly and reliably: acquire the current sample image, perform center positioning on the two selected Mark points to obtain the straight-line segment connecting their centers, and calculate its inclination angle θ_i;
step 21, obtaining the midpoint coordinates (X_i, Y_i) of the straight-line segment connecting the two Mark points in the images of the second to N samples (current samples);
step 22, obtaining the difference in inclination angle (denoted Δθ) between the Mark positioning results of the current image (second to N samples) and the first sample (standard sample), together with the offset of the midpoint coordinates of the straight-line segment (the X-direction offset denoted ΔX, the Y-direction offset denoted ΔY).
The mathematical expression can be expressed as:
Δθ = θ_i - θ_0
ΔX = X_i - X_0
ΔY = Y_i - Y_0
wherein: subscript 0 indicates the number of the first sample and i indicates the numbers of the second to N samples.
step 23, constructing a rigid-body transformation matrix based on the midpoint-coordinate offset and the angle deviation (ΔX, ΔY, Δθ);
and 24, performing two-dimensional linear affine transformation on the target coordinates of the multiple dispensing points of the first sample to obtain the target coordinates of the multiple dispensing points of the current sample, and converting the pixel coordinates into mechanical coordinates of robot motion. Therefore, the target coordinate of the current sample dispensing point can be quickly and stably positioned at a high speed.
The system and method of the invention can obtain hundreds or even thousands of target positioning points with a single imaging, and the process can be completed within 0.1 second. The device and method overcome both the complete production-line interruption caused by the extremely time-consuming manual teaching method, and the drawbacks of large-format multi-image stitching, which requires multiple imagings, introduces image-stitching errors and interrupts production. The system therefore features high positioning speed, high positioning precision and good stability. Compared with the prior art, the invention has the following advantages:
(1) High positioning speed: only one imaging is needed to obtain multiple target positions over a field of view wider than 800 mm (W) and taller than 600 mm (H). The LED mounting aluminum substrate carries hundreds or even thousands of dispensing positions (target pads), and the imaging precision reaches 0.1 mm. Acquisition is fast: transferring the image data to computer memory takes only 30-50 ms. In image analysis, processing the first sample image takes only about 50 ms, and target localization for the second to N samples is shorter still, under 10 ms. The invention can therefore complete ultra-large-format image acquisition and one-shot accurate positioning of thousands of targets within 0.1 second, enabling continuous production-line operation. Compared with the traditional manual teaching method, which is time-consuming and requires long production stops, and with existing positioning methods that combine small-field acquisition with multi-frame image stitching and likewise interrupt production, the method is greatly improved in speed.
(2) High positioning accuracy: the imaging precision reaches 0.1 mm. This is ensured by single-imaging positioning with a fixed, large-format, high-resolution, high-speed industrial camera. The method avoids the manual operation errors introduced by subjective and fatigue factors in manual teaching. It also avoids the mechanical positioning errors of image-stitching methods, where a transmission mechanism must move the camera to image at different positions, as well as the stitching errors introduced by the stitching algorithm itself. By avoiding manual operation error, mechanical positioning error and stitching-algorithm error, the invention achieves high positioning precision.
(3) Good stability: the method performs target positioning on the first sample (standard sample) and applies a position-correction method to the second to N samples (current samples), improving system stability. Specifically: target positioning is performed on the first sample, the two Mark points of the standard sample are located, and the midpoint and inclination angle of the Mark-point connecting line are determined; processing the second to N samples only requires locating the two Mark points and determining the inclination angle and midpoint of their connecting line. The angle deviation and displacement offset relative to the first sample are obtained, a rigid-body transformation matrix is constructed, and an affine transformation is applied to the hundreds of target points of the standard sample to obtain the target positions of the second to N samples, greatly reducing image-data processing volume and time. Most importantly, this avoids the positioning errors and system instability that direct target positioning suffers from fine product differences or false targets. The invention thus improves the stability of high-speed continuous production.
Drawings
FIG. 1 is a schematic diagram of a single-imaging-based large-field-of-view LED lens-mounted multi-target positioning system device according to the present invention
FIG. 2 is a schematic diagram of the distribution of the located target objects
FIG. 3 is a schematic diagram of target points in the original image extraction area
FIG. 4 is a schematic diagram of target points in a secondary image extraction region
FIG. 5 is a schematic view of the trajectory of the target point transferring machine for positioning the original image and the secondary image
FIG. 6 is a flowchart of a preferred embodiment of the single-shot image acquisition multi-target positioning method of the present invention
FIG. 7 is a flowchart of a preferred embodiment of the multi-target positioning method for N image acquisitions according to the present invention, wherein: 120-industrial camera, 121-aluminum substrate pad, 122-sensor, 123-guide rail
Detailed Description
For a better understanding of the present invention, the present invention will be further described below with reference to the accompanying drawings, but embodiments of the present invention are not limited thereto.
The invention discloses a single-imaging-based multi-target positioning system device for a large-view-field LED aluminum substrate lens mounting pad.
As illustrated in fig. 1, the main frame of the camera imaging environment of the invention includes: an industrial camera (120), an LED aluminum substrate pad (121), a sensor (122) and a guide rail (123). Imaging proceeds as follows: the acquisition target is the surface area of the LED aluminum substrate pad (121); the guide rail (123) is located directly in front of the pad (121), the sensor (122) at its side, and the industrial camera (120) is vertically fixed directly above it. The LED aluminum substrate pad (121) moves forward through the transmission of the guide rail (123); when the pad (121) enters the sensing range of the sensor (122) and the side-mounted sensor detects the target position on the pad, an image-acquisition signal command is transmitted to the industrial camera (120), which acquires the image and completes the camera imaging acquisition.
As shown in fig. 2, the schematic distribution diagram of the positioning target object of the present invention includes: a right position identification point 131, a left position identification point 133, an image center point 132 and a target area 134 to be positioned.
The mechanism positioning part of the multi-target positioning system is based on the imaging acquisition area. The photographed imaging area is read and edge detection is performed on it; the image is split into RGB and HSV channels and the optimal channel is selected by contrast; a threshold is extracted and selected according to the features to complete segmentation of the area image; the regions are sorted and their centers computed through digital-image morphological transformation; and the image position-identification points are located and extracted, yielding the center-point coordinates and the slope of the connecting line.
The identification, positioning and detection of the system are based on comparing the coordinates of a secondary image with those of the original image. After the center coordinates of the original image are obtained, the camera photographs the position-offset aluminum substrate a second time; edge detection is performed on the acquired imaging area to extract the target area of the offset substrate; the center coordinates of the secondary image are located; the positional deviation between the original-image and secondary-image machine coordinates is calculated; and trajectory planning is carried out and the mounting-pad motion command executed.
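The region sorting and center calculation described above amount to connected-component (Blob) labeling followed by centroid extraction. Below is a minimal pure-Python sketch using 4-connectivity flood fill over a binary grid; it is an illustration of the technique, not the patent's actual algorithm.

```python
from collections import deque

def blob_centers(mask):
    """Label 4-connected foreground regions in a binary grid and return
    (area, (cx, cy)) per region, sorted by area, largest first."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    blobs = []
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                # Flood-fill one connected region.
                queue, pixels = deque([(x, y)]), []
                seen[y][x] = True
                while queue:
                    cx, cy = queue.popleft()
                    pixels.append((cx, cy))
                    for nx, ny in ((cx + 1, cy), (cx - 1, cy),
                                   (cx, cy + 1), (cx, cy - 1)):
                        if 0 <= nx < w and 0 <= ny < h and mask[ny][nx] \
                                and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((nx, ny))
                area = len(pixels)
                cx0 = sum(p[0] for p in pixels) / area
                cy0 = sum(p[1] for p in pixels) / area
                blobs.append((area, (cx0, cy0)))
    blobs.sort(key=lambda b: -b[0])
    return blobs

grid = [[0] * 10 for _ in range(6)]
for y in range(1, 4):            # 3x3 blob centered at (2, 2)
    for x in range(1, 4):
        grid[y][x] = 1
grid[5][8] = 1                   # single-pixel blob at (8, 5)
result = blob_centers(grid)      # [(9, (2.0, 2.0)), (1, (8.0, 5.0))]
```

In the patent's pipeline, the per-region area would then be used to discard false targets before the centroid is accepted as a Mark-point or pad center.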
As shown in fig. 6 and 7, the multi-target positioning method for the large-field-of-view LED lens mounting pad based on single imaging completes multi-target positioning according to the following steps, including the following steps:
s201, carrying out image acquisition on a PCB aluminum substrate pad through an industrial camera to obtain image data of a first sample;
s202, splitting an original image into an RGB channel and an HSV channel;
s203, selecting a channel with the best contrast ratio from an RGB channel and an HSV channel of the original image for edge detection;
s204, setting ROI areas of target points and setting ROI areas of two Mark points;
s205, based on the ROI area image of each target point by the optimal threshold method, dividing the ROI area image of the Mark point by adopting a gray average value to realize image division;
s206, performing connectivity characteristic Blob analysis;
s207, extracting a pad region according to the characteristics of different areas, lengths, widths, rectangularity, roundness and the like, and carrying out center positioning on a plurality of targets;
s208, determining the central coordinates of the Mark points according to the gray average value;
s209, storing a plurality of target coordinate value data (X) of the dispensing point k ,Y k ) Wherein subscript K represents the kth target point;
s210, converting pixel coordinates of a plurality of target coordinate value data of the dispensing points into mechanical coordinates of robot motion;
s211, obtaining the inclination angle theta of the established straight line segment 0 And storing;
s212, calculating the middle point coordinates (X) of the straight line segments of the two Mark points 0 ,Y 0 ) And storing the inclination angle and the midpoint coordinate of the straight line segment, and ending the processing algorithm flow aiming at the first sample (standard sample);
s301, obtaining images of second to N samples (current samples);
s302, splitting the current image into an RGB channel and an HSV channel;
s303, selecting a channel with the maximum contrast ratio according to the target contrast ratio;
s304, selecting an ROI according to a Mark point region which is set in the first sample;
s305, dividing the sub-images of the Mark point areas by adopting a gray average value-based method for the two Mark point areas;
s306, performing connected region characteristic Blob analysis, and extracting a Mark point target region according to the area characteristics;
s307, determining the central coordinate of the Mark point according to the coordinate average value of the target pixel;
S308, for the second to N samples, adopting a simplified positioning strategy, by which the dispensing-point target coordinates of the current sample and its positional difference from the first sample (i.e., the angle deviation and the offset) can be obtained quickly and reliably: acquire the current sample image, perform center positioning on the two selected Mark points to obtain the straight-line segment connecting their centers, and calculate its inclination angle θ_i;
S309, obtaining the midpoint coordinates (X_i, Y_i) of the straight-line segment connecting the two Mark points in the images of the second to N samples (current samples);
S310, obtaining the difference in inclination angle (denoted Δθ) between the Mark positioning results of the current image (second to N samples) and the first sample (standard sample), together with the offset of the midpoint coordinates of the straight-line segment (the X-direction offset denoted ΔX, the Y-direction offset denoted ΔY).
The mathematical expression can be expressed as:
Δθ = θ_i - θ_0
ΔX = X_i - X_0
ΔY = Y_i - Y_0
wherein: subscript 0 indicates the number of the first sample and i indicates the numbers of the second to N samples.
S311, constructing a rigid-body transformation matrix based on the midpoint-coordinate offset and the angle deviation (ΔX, ΔY, Δθ);
S312, performing a two-dimensional linear affine transformation on the dispensing-point target coordinates of the first sample to obtain the dispensing-point target coordinates of the current sample, and converting the pixel coordinates into the mechanical coordinates of the robot motion. The dispensing-point target coordinates of the current sample can thus be positioned quickly and stably.
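The final pixel-to-mechanical conversion in S312 can be sketched as a linear hand-eye mapping (a per-axis scale plus an origin offset). The calibration values below are placeholders, and the axis-aligned form is an assumption: the patent does not specify its calibration procedure, and a full calibration would also include a rotation term.

```python
def pixel_to_mechanical(px, py, mm_per_px_x, mm_per_px_y, origin_x, origin_y):
    """Map an image pixel coordinate to a robot (machine) coordinate under a
    simple linear calibration: machine = origin + scale * pixel.
    Assumes the camera axes are aligned with the machine axes (an assumption)."""
    return (origin_x + px * mm_per_px_x, origin_y + py * mm_per_px_y)

# Placeholder calibration: 0.1 mm per pixel (matching the stated 0.1 mm
# imaging precision), machine origin at (120.0, 35.0) mm.
mx, my = pixel_to_mechanical(400, 250, 0.1, 0.1, 120.0, 35.0)
```

With this mapping, every transformed dispensing-point coordinate from S312 becomes a motion target the robot controller can consume directly.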
Through the above steps, the invention realizes image acquisition and target-feature extraction for LED aluminum-substrate lens-mounting pads and, with the multi-target positioning method at its core, serves as a positioning system for the multiple targets of industrial LED lens-mounting pads.
The above-described embodiments of the present invention are merely illustrative of the principles of the present invention and do not limit the embodiments of the present invention. Other variations will be apparent to persons skilled in the art upon consideration of the foregoing description. Therefore, any modification, equivalent improvement or the like made within the spirit and principle of the present invention should be included in the protection scope of the claims of the present invention.

Claims (1)

1. A multi-target positioning system based on single-imaging large-view-field LED lens mounting, characterized by comprising: an industrial camera, a sensor, a guide rail and an LED aluminum substrate pad;
the industrial camera is fixed vertically, directly above the LED aluminum substrate pad;
the guide rail is positioned directly in front of the LED aluminum substrate pad; the sensor is positioned at the side edge of the LED aluminum substrate pad; the LED aluminum substrate pad is driven forwards by the transmission of the guide rail, and when it moves into the sensing range of the sensor, the sensor sends an image-acquisition signal instruction to the industrial camera;
the industrial camera acquires an image of the surface area of the LED aluminum substrate pad and performs multi-target positioning on the acquired image to obtain the central coordinates of a plurality of targets;
the multi-target positioning method comprises the following steps:
step 1, acquiring an image of the PCB aluminum substrate pad with the industrial camera to obtain the image data of the first sample;
step 2, splitting the original image into an RGB channel and an HSV channel;
step 3, performing edge detection on the RGB and HSV channels of the original image, and selecting the channel with the best contrast;
step 4, setting ROI areas of the target points and setting ROI areas of the two Mark points;
step 5, segmenting the ROI images of all the target points based on the optimal-threshold method, and segmenting the ROI images of the Mark points using the gray mean, to realize image segmentation;
step 6, performing connected-region (Blob) feature analysis;
step 7, extracting the pad regions according to their area, length, width, rectangularity and roundness features, and performing center positioning of the plurality of targets;
step 8, determining the central coordinates of the Mark points according to the gray average value;
step 9, storing the target coordinate value data X_k, Y_k of the plurality of dispensing points, wherein subscript k denotes the k-th target point;
step 10, converting pixel coordinates of a plurality of target coordinate value data of the dispensing points into mechanical coordinates of robot motion;
step 11, obtaining the inclination angle θ_0 of the established straight-line segment and storing it;
step 12, calculating the midpoint coordinates X_0, Y_0 of the straight-line segment connecting the two Mark points, storing the inclination angle and midpoint coordinates of the straight-line segment, and ending the processing algorithm flow for the first sample;
step 13, obtaining the images of the second to N-th samples, i.e. the current sample;
step 14, splitting the current sample image into an RGB channel and an HSV channel;
step 15, selecting a channel with the maximum contrast ratio according to the target contrast ratio;
step 16, selecting an ROI according to a Mark point region set in the first sample;
step 17, for the two Mark point areas, dividing the sub-images of the Mark point areas by adopting a method based on gray average;
step 18, performing connected region characteristic Blob analysis, and extracting a Mark point target region according to the area characteristics;
step 19, determining the central coordinate of the Mark point according to the coordinate average value of the target pixel;
step 20, for the second to N-th samples, obtaining the dispensing-point target coordinates by a simplified positioning strategy, using the target coordinates of the first sample together with the angle deviation and offset between the current sample and the first sample; performing center positioning on the two selected Mark points to obtain the straight-line segment connecting the two Mark point centers, and calculating its inclination angle θ_i;
step 21, obtaining the midpoint coordinates X_i, Y_i of the straight-line segment connecting the two Mark points in the images of the second to N-th samples, i.e. the current sample image;
step 22, obtaining the inclination-angle difference Δθ and the offset of the straight-line-segment midpoint coordinates from the difference between the Mark positioning results of the current sample image (the second to N-th samples) and the first sample image, where the X-direction offset is denoted ΔX and the Y-direction offset ΔY; the mathematical expression is:

ΔX = X_i − X_0, ΔY = Y_i − Y_0, Δθ = θ_i − θ_0

wherein: subscript 0 denotes the designation of the first sample and subscript i denotes the designations of the second to N-th samples;
step 23, constructing a rigid-body transformation matrix from the offset of the straight-line-segment midpoint coordinates and the inclination-angle difference;
and step 24, performing a two-dimensional linear affine transformation on the dispensing-point target coordinates of the first sample to obtain the dispensing-point target coordinates of the current sample, and converting the pixel coordinates into the mechanical coordinates of the robot motion.
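Steps 17 to 19 of the claim (gray-mean segmentation of a Mark-point ROI, then taking the coordinate average of the target pixels as the center) can be sketched as follows. This is an illustrative numpy sketch, not the patented implementation; it assumes the Mark point is brighter than its background, and the function name is hypothetical. A production system would typically add the claim's Blob analysis, e.g. OpenCV's cv2.connectedComponentsWithStats, to filter candidate regions by area before taking the centroid:

```python
import numpy as np

def mark_center(roi):
    """Locate a Mark-point center in a grayscale ROI:
    segment with the ROI's gray mean as threshold, then return the
    coordinate average of the foreground pixels as the center (X, Y)."""
    roi = np.asarray(roi, dtype=float)
    mask = roi > roi.mean()          # gray-mean segmentation (step 17)
    ys, xs = np.nonzero(mask)        # foreground pixel coordinates (step 18)
    if len(xs) == 0:
        return None                  # no bright region found in this ROI
    return xs.mean(), ys.mean()      # coordinate-average center (step 19)
```

Running this on the two Mark-point ROIs yields the two centers whose connecting segment gives θ_i and the midpoint (X_i, Y_i) used in steps 20 to 22.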
CN201910511605.XA 2019-06-13 2019-06-13 Multi-target positioning system based on single-imaging large-view-field LED lens mounting Active CN110211183B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910511605.XA CN110211183B (en) 2019-06-13 2019-06-13 Multi-target positioning system based on single-imaging large-view-field LED lens mounting


Publications (2)

Publication Number Publication Date
CN110211183A CN110211183A (en) 2019-09-06
CN110211183B true CN110211183B (en) 2022-10-21

Family

ID=67792641

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910511605.XA Active CN110211183B (en) 2019-06-13 2019-06-13 Multi-target positioning system based on single-imaging large-view-field LED lens mounting

Country Status (1)

Country Link
CN (1) CN110211183B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110930455B (en) * 2019-11-29 2023-12-29 深圳市优必选科技股份有限公司 Positioning method, positioning device, terminal equipment and storage medium
CN113034418B (en) * 2019-12-05 2023-10-13 中国科学院沈阳自动化研究所 Circuit board identification and bonding pad/chip rapid positioning method for electronic industry
CN111257346B (en) * 2020-02-20 2022-02-25 清华大学 PCB positioning device and method based on projection filtering
CN111558939B (en) * 2020-05-06 2022-04-08 珠海格力智能装备有限公司 Valve body assembling method, system, device, storage medium and processor
CN113304966A (en) * 2021-04-26 2021-08-27 深圳市世宗自动化设备有限公司 Dynamic dispensing compensation method and device, computer equipment and storage medium thereof

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5233669A (en) * 1990-11-22 1993-08-03 Murata Manufacturing Co., Ltd. Device for and method of detecting positioning marks for cutting ceramic laminated body
JP2003208616A (en) * 2002-01-17 2003-07-25 Fujitsu Ltd Image recognition device and program
CN101750017A (en) * 2010-01-18 2010-06-23 战强 Visual detection method of multi-movement target positions in large view field
CN103308007A (en) * 2013-05-24 2013-09-18 华南理工大学 System and method for measuring coplanarity of integrated circuit (IC) pins through multistage reflection and raster imaging
CN104680570A (en) * 2015-03-24 2015-06-03 东北大学 Action capturing system and method based on video
CN106504262A (en) * 2016-10-21 2017-03-15 泉州装备制造研究所 A kind of small tiles intelligent locating method of multiple features fusion
CN106570904A (en) * 2016-10-25 2017-04-19 大连理工大学 Multi-target relative posture recognition method based on Xtion camera
CN107578431A (en) * 2017-07-31 2018-01-12 深圳市海思科自动化技术有限公司 A kind of Mark points visual identity method
CN108827316A (en) * 2018-08-20 2018-11-16 南京理工大学 Mobile robot visual orientation method based on improved Apriltag label
CN108966500A (en) * 2018-08-07 2018-12-07 向耀 The pcb board of view-based access control model tracking is secondary and multiple accurate drilling method
CN109410229A (en) * 2018-08-27 2019-03-01 南京珂亥韧光电科技有限公司 Multiple target lens position and male and fomale(M&F) know method for distinguishing
WO2019080229A1 (en) * 2017-10-25 2019-05-02 南京阿凡达机器人科技有限公司 Chess piece positioning method and system based on machine vision, storage medium, and robot
KR101975209B1 (en) * 2018-12-14 2019-05-07 서정원 Position tracing system using Affine transformation
CN109785324A (en) * 2019-02-01 2019-05-21 佛山市南海区广工大数控装备协同创新研究院 A kind of large format pcb board localization method

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090180085A1 (en) * 2008-01-15 2009-07-16 Kirtas Technologies, Inc. System and method for large format imaging
WO2010045271A1 (en) * 2008-10-14 2010-04-22 Joshua Victor Aller Target and method of detecting, identifying, and determining 3-d pose of the target
US20170242235A1 (en) * 2014-08-18 2017-08-24 Viewsiq Inc. System and method for embedded images in large field-of-view microscopic scans


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
A Fast Coplanarity Inspection System for Double-Sides IC Leads Using Single Viewpoint; Qiusheng Zhong et al.; Lecture Notes in Artificial Intelligence; 2014-12-20; vol. 8918; pp. 216-225 *
Feature-Based Object Location of IC Pins by Using Fast Run Length Encoding BLOB Analysis; Qiusheng Zhong et al.; IEEE Transactions on Components, Packaging and Manufacturing Technology; 2014-11-30; vol. 4, no. 11; pp. 1887-1898 *
Research on a multi-target autonomous positioning system for an airborne optoelectronic imaging platform; Zhou Qianfei et al.; Acta Optica Sinica; 2015-01-10; no. 01; pp. 0112005(1-15) *
Research on online inspection theory and technology for IC packaging; Zhong Qiusheng; China Doctoral Dissertations Full-text Database, Information Science and Technology; 2016-01-15; no. 01; pp. I138-146 *

Also Published As

Publication number Publication date
CN110211183A (en) 2019-09-06

Similar Documents

Publication Publication Date Title
CN110211183B (en) Multi-target positioning system based on single-imaging large-view-field LED lens mounting
CN100582657C (en) Three-dimensional microcosmic appearance inclined scanning method and apparatus
CN105447853A (en) Flight device, flight control system and flight control method
CN108801149B (en) Contact net geometric parameter measuring method based on geometric amplification principle and monocular computer vision
CN105651177A (en) Measuring system suitable for measuring complex structure
CN107966102A (en) A kind of plate production six-face detection device
CN106403828A (en) Monorail contact line remain height measurement method based on checkerboard calibration and monorail contact line remain height measurement system thereof
CN109177526B (en) Method and system for duplex printing
CN108942921A (en) A kind of grabbing device at random based on deep learning object identification
CN111127562B (en) Calibration method and automatic calibration system for monocular area-array camera
CN112396041B (en) Road marking alignment system based on image recognition
CN114660579A (en) Full-automatic laser radar and camera calibration method
CN208171175U (en) Six-face detection device is used in a kind of production of plate
CN108322736B (en) Calibration plate and calibration method for calibrating rotation angles of multiple linear array cameras around visual axis
CN110030988A (en) A kind of multi-beacon high-speed synchronous recognition methods for high dynamic pose measurement
CN205580380U (en) Measurement system suitable for measure complex construction
CN111260561A (en) Rapid multi-graph splicing method for mask defect detection
CN215868286U (en) Machine vision teaching experiment platform of linear array scanning type
CN214660775U (en) Water pump appearance detection system
CN108989690A (en) A kind of line-scan digital camera multiple labeling point focusing method, device, equipment and storage medium
CN116087222A (en) Wafer dark field detection device and detection method
CN102778980B (en) Fusion and interaction system for extra-large-breadth display contact
CN114104453A (en) Non-ferrous metal automatic labeling method and device based on image processing
CN105606027B (en) A kind of measurable side or it is internal and can automatic loading/unloading measuring system
CN205607339U (en) Light filling is adjustable and can go up vision imaging measurement system of unloading voluntarily

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant