CN115546016A - Method for acquiring and processing 2D (two-dimensional) and 3D (three-dimensional) images of PCB (printed Circuit Board) and related device - Google Patents


Info

Publication number
CN115546016A
CN115546016A (application CN202211494737.4A)
Authority
CN
China
Prior art keywords
image
line laser
height
scanning
original
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211494737.4A
Other languages
Chinese (zh)
Other versions
CN115546016B (en)
Inventor
雷志辉
周翔
陈状
熊祥祥
张弛
周宇轩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Eagle Eye Online Electronics Technology Co ltd
Original Assignee
Shenzhen Eagle Eye Online Electronics Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Eagle Eye Online Electronics Technology Co ltd filed Critical Shenzhen Eagle Eye Online Electronics Technology Co ltd
Priority to CN202211494737.4A priority Critical patent/CN115546016B/en
Publication of CN115546016A publication Critical patent/CN115546016A/en
Application granted granted Critical
Publication of CN115546016B publication Critical patent/CN115546016B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06T3/02
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00General purpose image data processing
    • G06T1/20Processor architectures; Processor configuration, e.g. pipelining
    • G06T3/06
    • G06T3/08
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/521Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • G06T7/66Analysis of geometric attributes of image moments or centre of gravity
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection
    • G06T2207/30141Printed circuit board [PCB]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30244Camera pose

Abstract

The invention discloses a method and related device for acquiring and processing 2D and 3D images of a PCB, applied to the image processor (GPU) of a PCB optical inspection system, comprising the following steps: acquiring a plurality of line laser images collected by a plurality of scanning cameras, and performing sub-pixel line-laser center feature extraction on them to obtain the line-laser center pixel coordinates of each line laser image; obtaining the position and attitude parameters of each camera from the line-laser center pixel coordinates, and unifying the plurality of scanning cameras into the machine coordinate system; acquiring a plurality of original 2D grayscale images and a plurality of original 3D height images captured by the scanning cameras; obtaining a 2D grayscale image and a 3D height image by data fusion; and jointly calibrating the 2D grayscale image and the 3D height image to obtain their affine matrix. The method effectively solves the problem of the small field of view in high-precision single-camera measurement and improves the large-format imaging quality of PCB images.

Description

Method for acquiring and processing 2D (two-dimensional) and 3D (three-dimensional) images of PCB (printed Circuit Board) and related device
Technical Field
The invention relates to the field of general image data processing in optical measurement, in particular to a method for acquiring and processing 2D and 3D images of a PCB and a related device.
Background
In the Automatic Optical Inspection (AOI) industry, the "Industry 4.0" initiative is driving a comprehensive industrial upgrade in efficiency, energy saving, informatization, security, and related aspects. Against the background of the Internet era, 5G information technology has profoundly changed the production mode, manufacturing processes, and even supply-chain management of enterprises, and the AOI industry as a whole is moving toward more intelligent 2D + 3D measurement. How to integrate the measurement and defect-detection functions for PCBs in an automatic optical inspection system, under the premise of full automation, micron precision, high speed, and large coverage, is one of the important problems to be solved urgently in this field.
Disclosure of Invention
In view of the above problems, an embodiment of the present application provides a method and a related device for acquiring and processing 2D and 3D images of a PCB, which effectively solve the problem of the small field of view in high-precision single-camera measurement and achieve large-format imaging at high precision. The 2D and 3D position information can be put into accurate one-to-one correspondence, improving the efficiency and accuracy of PCB image inspection.
In order to achieve the above object, in a first aspect, an embodiment of the present application provides a method for acquiring and processing 2D and 3D images of a PCB, which is applied to an image processor GPU of a PCB optical inspection system, where the PCB optical inspection system further includes a plurality of scanning cameras, and the method includes the following steps:
acquiring a plurality of line laser images of a target PCB irradiated by line laser, collected by a plurality of scanning cameras, and performing sub-pixel line-laser center feature extraction on each of the line laser images to obtain the line-laser center pixel coordinates of each image; obtaining the position and attitude parameters of each camera from the line-laser center pixel coordinates, and unifying the plurality of scanning cameras into the machine coordinate system through these parameters; acquiring a plurality of original 2D grayscale images obtained by the scanning cameras scanning the target PCB under natural light, and a plurality of original 3D height images obtained from the line-laser stripes of the original images while scanning the target PCB under line laser; performing data fusion on the original 2D grayscale images and original 3D height images to obtain a fused 2D grayscale image and 3D height image; and jointly calibrating the 2D grayscale image and the 3D height image to obtain their affine matrix.
It can be seen that in the embodiment of the present application, the target PCB is photographed by a plurality of scanning cameras, and the cameras' original images are each adjusted onto the same horizontal plane so that a complete 2D grayscale image and a complete 3D height image can be synthesized. This effectively solves the problem of the small field of view in high-precision single-camera measurement and improves the large-format imaging quality of high-precision measurement.
With reference to the first aspect, in a possible embodiment, acquiring the plurality of line laser images of the target PCB irradiated by line laser and performing sub-pixel line-laser center feature extraction on each of them to obtain the line-laser center pixel coordinates of each image includes: applying mean filtering to each line laser image to obtain a mean-filtered image; locating, in the image column direction, the position of the maximum gray value of the mean-filtered image, and taking that position as the whole-pixel center of the line laser; and performing a Taylor expansion along the normal direction of the line at the whole-pixel center to calculate the line-laser center pixel coordinates, which represent the height of the corresponding scanning camera in the machine coordinate system.
It can be seen that, in the embodiment of the application, a line laser is used as the light source and projected onto the object surface; since the light intensity of a line laser approximately follows a Gaussian distribution, the line-laser center can be extracted to sub-pixel precision through a Gaussian distribution model, improving the measurement precision of the inspection system. The amount of information available when inspecting PCB images is increased, further improving inspection efficiency and accuracy.
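As an illustration of the whole-pixel stage of this extraction (mean filtering followed by a column-direction gray-value maximum), a minimal NumPy sketch is given below; the function name is illustrative, and the 7×5 template size follows the detailed embodiment rather than being a fixed requirement of the method:

```python
import numpy as np

def line_center_integer(img, kernel=(7, 5)):
    """Integer-pixel line-laser center: mean filter, then per-column argmax.

    img: 2-D grayscale array; rows run along the image column (Y)
    direction, in which the laser stripe's intensity peak is searched.
    Returns one row index per column (the whole-pixel stripe center).
    """
    kh, kw = kernel
    # Mean filtering with a kh x kw template (7x5 as in the embodiment),
    # implemented here with simple zero padding and shifted sums.
    pad = np.pad(img.astype(float), ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    smooth = np.zeros_like(img, dtype=float)
    for dy in range(kh):
        for dx in range(kw):
            smooth += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    smooth /= kh * kw
    # Position of the gray-value maximum along each column = integer center.
    return smooth.argmax(axis=0)
```

The sub-pixel refinement via the Taylor expansion is a separate step applied at each of these integer centers.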
With reference to the first aspect, in one possible embodiment, the attitude parameters include a pitch angle of rotation about the X axis, a roll angle of rotation about the Y axis, a yaw angle of rotation about the Z axis, an X-axis offset, a Y-axis offset, and a Z-axis offset; obtaining the camera position and attitude parameters from the line-laser center pixel coordinates and unifying the scanning cameras into the machine coordinate system through these parameters includes: obtaining a plurality of original height images from the line-laser center pixel coordinates of the scanning cameras as they scan the calibration plate, and combining the original height images on the same reference plane to obtain a first height image; solving the height-plane coefficients of each original height image within the first height image, together with the corresponding pitch angle about the X axis, roll angle about the Y axis, and yaw angle about the Z axis; correcting the first height image according to the pitch, roll, and yaw angles corresponding to each scanning camera to obtain a second height image; determining the Z-axis offset of each scanning camera from the second height image, where the Z-axis offset is the offset of each camera's height relative to the average height of the scanning cameras; acquiring the circle-center patterns of the original grayscale images, and determining from them the predicted calibration-plate circle-center coordinates corresponding to each scanning camera; acquiring the actual calibration-plate circle-center coordinates, and determining the plane offset of each scanning camera from the predicted and actual circle-center coordinates, where the plane offset includes an X-axis offset and a Y-axis offset; and unifying the scanning cameras into the machine coordinate system according to the Z-axis offset, plane offset, pitch angle, roll angle, and yaw angle of each scanning camera.
In the embodiment of the application, the position and attitude parameters of multiple cameras are calibrated simultaneously in the same machine coordinate system, and the height data acquired by the cameras are unified into that coordinate system. This effectively solves the problem of the small field of view in high-precision single-camera measurement and improves the large-format imaging quality of high-precision measurement.
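The leveling step can be illustrated with a least-squares plane fit: assuming the calibration plate is flat, the tilt of one camera's height image gives its pitch and roll angles, and the per-camera plane offsets give the Z-axis offsets relative to the mean height. This is a hedged sketch with illustrative names, not the patented computation:

```python
import numpy as np

def plane_pose(height_patch):
    """Fit z = a*x + b*y + c to one camera's height image of the flat
    calibration plate and convert the plane coefficients into the tilt
    angles used to level that camera's data.

    Returns (pitch, roll, c): pitch is the tilt about the X axis (height
    slope along Y), roll the tilt about the Y axis (height slope along X),
    and c the plane offset, all from a least-squares fit.
    """
    h, w = height_patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    A = np.column_stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    coef, *_ = np.linalg.lstsq(A, height_patch.ravel(), rcond=None)
    a, b, c = coef
    roll = np.arctan(a)   # tilt about the Y axis (slope in X)
    pitch = np.arctan(b)  # tilt about the X axis (slope in Y)
    return pitch, roll, c

def z_offsets(plane_offsets):
    """Z-axis offset of each camera relative to the mean camera height."""
    plane_offsets = np.asarray(plane_offsets, dtype=float)
    return plane_offsets - plane_offsets.mean()
```

After each camera's height data is rotated by its pitch/roll and shifted by its Z-axis offset, all cameras report heights in the one machine coordinate system.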
With reference to the first aspect, in one possible embodiment, each of the scanning cameras comprises a light source assembly and a camera assembly, the light source assembly comprising a line laser source and a natural light source, with the distance between their irradiation regions greater than a first preset threshold. Acquiring the original 2D grayscale images of the target PCB under natural light and the original 3D height images of the target PCB under line laser includes: moving the scanning cameras along the Y-axis direction so that the line laser source and the natural light source traverse the target PCB; obtaining the 2D grayscale images of the target PCB from the natural-light irradiation area in each original image; and obtaining the original 3D height maps of the target PCB from the laser stripes in each original image.
With reference to the first aspect, in a possible embodiment, acquiring the original images of the target PCB captured by the scanning cameras, obtaining the original 2D grayscale images from the natural-light irradiation areas, and obtaining the original 3D height maps from the line-laser stripes includes: obtaining the conversion formula of each scanning camera from its attitude parameters; acquiring the data of the overlap region between the two original images of adjacent scanning cameras; transforming the overlap-region data of the original 2D grayscale images and original 3D height images of the two adjacent cameras through the conversion formulas to obtain two results; performing weighted data fusion on the two results to obtain a fused value for the overlap region; and generating the corresponding complete 2D grayscale image and complete 3D height image from the fused overlap-region data.
It can be seen that in the embodiment of the application, the original 2D grayscale images and 3D height images are calibrated and fused into a complete 2D grayscale image and a complete 3D height image according to the attitude parameters of the scanning cameras, which solves the problem of the small field of view in high-precision single-camera measurement and yields high-precision, large-format 2D grayscale and 3D height images.
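The weighted fusion of the overlap region can be sketched as a linear ramp blend between the strips of two adjacent cameras; the linear ramp is one common weighting and is an assumption here, since the text only specifies weighted data fusion:

```python
import numpy as np

def fuse_overlap(left_strip, right_strip):
    """Weighted fusion of the overlap region between two adjacent cameras.

    Both strips cover the same physical area (already mapped into the
    machine coordinate system). A linear ramp weight favours each camera
    near its own side of the overlap, so the seam fades smoothly.
    """
    assert left_strip.shape == right_strip.shape
    w = left_strip.shape[1]
    ramp = np.linspace(1.0, 0.0, w)          # weight of the left camera
    return left_strip * ramp + right_strip * (1.0 - ramp)

def stitch(left_img, right_img, overlap):
    """Concatenate two adjacent camera images, blending `overlap` columns."""
    fused = fuse_overlap(left_img[:, -overlap:], right_img[:, :overlap])
    return np.hstack([left_img[:, :-overlap], fused, right_img[:, overlap:]])
```

The same blend applies to both the grayscale and the height data, so the fused 2D and 3D images stay aligned across the seam.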
With reference to the first aspect, in a possible embodiment, jointly calibrating the 2D grayscale image and the 3D height image to obtain their affine matrix includes: acquiring the 2D and 3D actual circle-center coordinates according to the camera position and attitude parameters; computing the affine transformation between the 2D and 3D actual circle-center coordinates to obtain initial values of the 2D/3D position-conversion model parameters; eliminating random errors from these initial values to obtain the affine 2D/3D position-conversion model parameters; and, after computing the affine parameter set over all actual circle-center coordinates, integrating a data regression analysis to obtain the affine matrix of the 2D grayscale image and the 3D height image.
It can be seen that in the embodiment of the application, the affine matrix of the 2D grayscale image and the 3D height image is calculated from the 2D and 3D actual circle-center coordinates, so that the position information of the two images corresponds one to one, yielding jointly calibrated 2D grayscale and 3D height images and improving the large-format imaging quality of high-precision measurement.
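The affine-matrix step can be illustrated by a least-squares regression over all matched circle centers, which is one way to realize the integrated data regression analysis described above; the function names and the x/y-only formulation are assumptions:

```python
import numpy as np

def fit_affine(pts_2d, pts_3d):
    """Least-squares affine transform mapping 2-D image circle centers
    onto the matching 3-D height-map circle centers (x, y only).

    pts_2d, pts_3d: (N, 2) arrays of matched calibration-circle centers.
    Returns a 2x3 matrix M with [x', y'].T = M @ [x, y, 1].T. Regressing
    over all centers at once averages out the random per-point error.
    """
    pts_2d = np.asarray(pts_2d, dtype=float)
    pts_3d = np.asarray(pts_3d, dtype=float)
    A = np.column_stack([pts_2d, np.ones(len(pts_2d))])
    # Solve A @ M.T = pts_3d for the 2x3 affine matrix M.
    M_t, *_ = np.linalg.lstsq(A, pts_3d, rcond=None)
    return M_t.T

def apply_affine(M, pts):
    """Map (N, 2) points through the 2x3 affine matrix M."""
    pts = np.asarray(pts, dtype=float)
    return pts @ M[:, :2].T + M[:, 2]
```

With the fitted matrix, any 2D pixel position can be mapped to its 3D height-map position and vice versa, giving the one-to-one 2D/3D correspondence.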
With reference to the first aspect, in one possible embodiment, if there are multiple GPUs, the scanning cameras and GPUs are connected and exchange information through PCIE buses, and computing tasks are allocated to the many-core arithmetic logic units (ALUs) of the GPUs by an asynchronous allocation method.
It can be seen that in the embodiment of the present application, connecting the GPUs and scanning cameras through the PCIE bus avoids the situation in a conventional camera acquisition-and-processing system, where data acquired by the cameras must be scheduled and allocated by the CPU before GPU parallel processing on the graphics card. Instruction streams are sent to the many cores in parallel and executed on different input data, completing the massive operations of graphics processing; the parallel computing power can be multiplied compared with a single graphics card.
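As a hedged, CPU-side illustration of asynchronous task allocation (a thread pool stands in for the GPUs' many-core ALUs here; the real system dispatches camera data over PCIE to GPU hardware, and all names below are illustrative):

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def process_frame(frame):
    """Stand-in per-frame kernel (e.g. the column-wise stripe-peak search)."""
    return frame.argmax(axis=0)

def dispatch_async(frames, workers=4):
    """Asynchronous allocation: each camera frame is submitted to a worker
    as soon as it arrives, instead of queueing behind a central scheduler.
    Results are gathered back in submission order.
    """
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(process_frame, f) for f in frames]
        return [f.result() for f in futures]
```

The point of the sketch is the dispatch pattern, not the worker type: frames from different cameras are processed concurrently and their results collected without a per-frame round trip through one scheduling thread.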
In a second aspect, an embodiment of the present application provides an apparatus for acquiring and processing 2D and 3D images of a PCB, which is applied to an image processor GPU of an optical inspection system for the PCB, where the optical inspection system for the PCB further includes a plurality of scanning cameras, and the apparatus includes:
an acquisition unit, configured to acquire a plurality of line laser images of the target PCB irradiated by line laser, collected by the plurality of scanning cameras, and to perform sub-pixel line-laser center feature extraction to obtain the line-laser center pixel coordinates of each line laser image;
a calculation unit, configured to obtain the camera position and attitude parameters from the line-laser center pixel coordinates and to unify the plurality of scanning cameras into the machine coordinate system through these parameters;
the acquisition unit is further configured to: acquiring a plurality of original 2D gray-scale images obtained by scanning a target PCB irradiated by natural light by a plurality of scanning cameras, and acquiring a plurality of original 3D height images obtained by scanning the target PCB irradiated by line laser by the plurality of scanning cameras;
the computing unit is further to: performing data fusion on the plurality of original 2D gray level images and the plurality of original 3D height images to obtain fused 2D gray level images and 3D height images;
and a calibration unit, configured to jointly calibrate the 2D grayscale image and the 3D height image to obtain their affine matrix.
In a third aspect, embodiments of the present application provide an electronic device comprising a processor, a memory, a communication interface, and one or more programs, the one or more programs being stored in the memory and configured to be executed by the processor, the programs including instructions adapted to be loaded by the processor to perform part or all of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium storing a computer program for electronic data exchange, wherein the computer program causes a computer to perform part or all of the method according to the first aspect.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. It is obvious that the drawings in the following description show only some embodiments of the present application, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic structural diagram of a system for acquiring and processing 2D and 3D images of a PCB according to an embodiment of the present disclosure;
fig. 2 is a schematic flowchart of a method for acquiring and processing 2D and 3D images of a PCB according to an embodiment of the present disclosure;
fig. 3 is a schematic structural diagram of a scanning camera set according to an embodiment of the present application;
fig. 4 is a schematic diagram of a unified process of a scanning camera according to an embodiment of the present disclosure;
fig. 5A is a schematic structural diagram of a single scanning camera according to an embodiment of the present disclosure;
fig. 5B is a schematic view of a light source of a scanning camera according to an embodiment of the present disclosure;
fig. 6A is a schematic diagram of an image before fusion according to an embodiment of the present application;
fig. 6B is a schematic diagram illustrating fused images according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of an apparatus for acquiring and processing 2D and 3D images of a PCB according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions of the present application better understood, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application without making any creative effort belong to the protection scope of the present application.
The terms "first," "second," and the like in the description and claims of the present application and in the above-described drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
Embodiments of the present application are described below with reference to the drawings.
How to realize the integration of the measurement function and the defect detection function of the PCB measurement board on the premise of full automation, micron precision, high speed and large coverage rate is one of the important problems to be solved urgently in the field.
Aiming at these problems, the method and device for acquiring and processing 2D and 3D images of a PCB provided by the present application effectively solve the problem of the small field of view in high-precision single-camera measurement and improve the large-format imaging quality of high-precision measurement. The description is made below with reference to the accompanying drawings.
Referring to fig. 1, fig. 1 is a schematic structural diagram of a system for acquiring and processing 2D and 3D images of a PCB according to an embodiment of the present disclosure. As shown in fig. 1, the system 100 includes a scanning camera 101, which further includes a light source assembly 1011 and a camera assembly 1012. The light source assembly 1011 is configured to generate natural-light and line-laser stripes that irradiate the PCB to be inspected; the camera assembly 1012 is used to scan and photograph the target PCB. The graphics processor 102 is configured to obtain the original grayscale and height images captured by the scanning camera, calculate the attitude parameters of the scanning camera, synthesize the 2D grayscale image and 3D height image of the PCB, and calculate their affine matrix. The scanning camera 101 and the graphics processor 102 are both directly connected to the PCIE bus and can exchange data directly.
Referring to fig. 2, fig. 2 is a schematic flow chart of a method for acquiring and processing 2D and 3D images of a PCB according to an embodiment of the present disclosure, and as shown in fig. 2, the method includes steps S201 to S205.
S201, acquiring a plurality of line laser images of a target PCB irradiated by line laser acquired by a plurality of scanning cameras, and performing sub-pixel line laser center feature extraction on each line laser image in the plurality of line laser images to obtain line laser center pixel coordinates of each line laser image.
Specifically, according to the structured-light triangulation principle, a line laser is used as the light source and projected onto the object surface. Since the light intensity of a line laser approximately follows a Gaussian distribution, the line-laser centers of the line laser images can be extracted with sub-pixel precision through a Gaussian distribution model; the corresponding scanning camera can then be calibrated in the machine coordinate system of the PCB image acquisition-and-processing system from the line-laser center pixel coordinates, so that the images taken by the scanning cameras share the same height and angle.
In a possible embodiment, acquiring the line laser images of the target PCB collected by the scanning cameras and performing sub-pixel line-laser center feature extraction on each of them includes: applying mean filtering to each line laser image to obtain a mean-filtered image; locating the position of the maximum gray value of the mean-filtered image in the image column direction, and taking it as the whole-pixel center of the line laser; and performing a Taylor expansion along the normal direction of the line at the whole-pixel center to calculate the line-laser center pixel coordinates, which represent the height of the corresponding scanning camera in the machine coordinate system.
Specifically, referring to fig. 3, a schematic structural diagram of a scanning camera set according to an embodiment of the present application, a plurality of scanning cameras are required to scan the PCB jointly in the scanning system. Although each independent scanning camera can be adjusted to a specific height and angle through its fixing frame, during actual scanning measurement the cameras are installed in sequence along the X-axis direction and photograph while the light source translates along the Y axis, taking one shot after each movement of a preset distance. The cameras may therefore jitter or shift during translation, producing fine offsets in height, distance, and lens angle during shooting; after synthesis and magnification these errors are amplified and affect the final imaging quality. Hence, before fusing the original images of the multiple scanning cameras, the line-laser center pixel coordinates of each camera must be obtained to determine its position relative to the machine coordinate system, and the cameras must be unified under the machine coordinate system to eliminate the influence of these errors as much as possible.
Illustratively, the line laser center pixel coordinates of a line laser image may be obtained by sub-pixel line laser center feature extraction. A line laser image of any scanning camera is first subjected to mean filtering; the mean filtering template may be a 7×5 template, the X axis of the image pixel coordinate system is the image row direction, and the Y axis of the image pixel coordinate system is the image column direction. Whole-pixel line laser extraction is then performed on the mean-filtered line laser image: the image is transposed, and the position of the gray-value maximum of each column is extracted by a row-pointer method to obtain the whole-pixel center of the line laser. Because the light intensity distribution of the line laser is an approximately Gaussian function, it suffices to perform a Taylor expansion along the line normal direction at the whole-pixel center of the line laser; a second-order Taylor expansion is used as an illustration in the embodiment of the present application. First, the whole-pixel center of the line laser is determined, i.e., the position of the gray-value maximum in the column direction, denoted (x0, y0). A second-order Taylor expansion is carried out along the normal direction (nx, ny) at this position:

    g(x0 + t·nx, y0 + t·ny) ≈ g(x0, y0) + t·(nx·gx + ny·gy) + (t²/2)·(nx²·gxx + 2·nx·ny·gxy + ny²·gyy)

wherein gx, gy, gxx, gyy, gxy are the results of convolving the image with the first-order x, first-order y, second-order x, second-order y, and mixed second-order partial derivatives of a Gaussian kernel, respectively. With G(x, y) = exp(−(x² + y²)/(2σ²)) / (2πσ²), the Gaussian kernels of each order are:

    Gx = ∂G/∂x, Gy = ∂G/∂y, Gxx = ∂²G/∂x², Gyy = ∂²G/∂y², Gxy = ∂²G/∂x∂y

For a two-dimensional image, the first directional derivative at the center point of the line is 0, and the point where the second directional derivative takes its minimum value is the line center point.

Let

    H = | gxx  gxy |
        | gxy  gyy |

The following can be obtained: the eigenvector of H corresponding to the eigenvalue of largest absolute value gives the normal direction (nx, ny), and setting the first derivative of the expansion along the normal to zero yields the sub-pixel offset

    t = −(nx·gx + ny·gy) / (nx²·gxx + 2·nx·ny·gxy + ny²·gyy)

Finally, the edge normal direction (nx, ny) and the second derivative in that direction are obtained by this Hessian matrix method. The Hessian matrix is the square matrix formed by the second-order partial derivatives of a multivariate function and describes the local curvature of the function.

The center point can be extracted by the following criterion: if

    |t·nx| ≤ 0.5 and |t·ny| ≤ 0.5

and the minimum value of the second derivative in the normal direction is less than a certain threshold, the pixel passes both tests, and the sub-pixel center pixel coordinate (x0 + t·nx, y0 + t·ny) satisfying the light intensity center feature is finally obtained.
It can be seen that, in the embodiment of the application, line laser is used as the light source and projected onto the surface of the object; because the light intensity of the line laser is approximately Gaussian distributed, the line laser center can be extracted with sub-pixel precision through the Gaussian distribution model, which improves the measurement precision of the detection system.
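The sub-pixel refinement can be illustrated with a minimal sketch. The function below (hypothetical name, plain Python) refines the whole-pixel gray maximum of one image column by parabolic interpolation of the quasi-Gaussian profile — a simplified one-dimensional stand-in for the second-order Taylor-expansion step described above, not the full Hessian-based method:

```python
def subpixel_line_center(column, _unused=None):
    """Sub-pixel peak of a quasi-Gaussian intensity profile along one
    image column, via parabolic interpolation around the whole-pixel
    maximum (a simplified 1-D variant of the Taylor-expansion step)."""
    i = max(range(len(column)), key=lambda k: column[k])  # whole-pixel center
    if i == 0 or i == len(column) - 1:
        return float(i)  # peak on the border: no neighbours to refine with
    g_minus, g0, g_plus = column[i - 1], column[i], column[i + 1]
    denom = g_minus - 2.0 * g0 + g_plus  # discrete second derivative
    if denom == 0:
        return float(i)
    # vertex of the parabola fitted through the three samples
    return i + 0.5 * (g_minus - g_plus) / denom
```

For a symmetric profile the refinement leaves the whole-pixel center unchanged; for an asymmetric one it shifts the center toward the heavier side by a fraction of a pixel.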
S202, obtaining a plurality of scanning camera position and posture parameters according to the line laser center pixel coordinates respectively, and unifying the plurality of scanning cameras to a machine table coordinate system through the position and posture parameters.
Specifically, according to the line laser center pixel coordinates, a calibration plate for calibrating the scanning cameras is scanned to obtain the position and posture parameters of each scanning camera; the degree of offset of each camera relative to the machine coordinate system is obtained from these position and posture parameters, and the initial image shot by each scanning camera is adjusted according to that camera's offset.
In one possible embodiment, the attitude parameters include a pitch angle of rotation about the X-axis, a roll angle of rotation about the Y-axis, a yaw angle of rotation about the Z-axis, an X-axis offset, a Y-axis offset, and a Z-axis offset. Respectively obtaining a plurality of camera position and attitude parameters according to the line laser center pixel coordinates, and unifying the plurality of scanning cameras to the machine coordinate system through the position and attitude parameters, includes: obtaining a plurality of original height images according to the line laser center pixel coordinates of the plurality of scanning cameras and the scanning of the calibration board by the plurality of scanning cameras, and combining the plurality of original height images on the same reference plane to obtain a first height image; solving the height plane coefficients of each original height image within the first height image, and the rotation pitch angle around the X axis, the rotation roll angle around the Y axis and the rotation yaw angle around the Z axis corresponding to each original height image; correcting the first height image according to the rotation pitch angle around the X axis, the rotation roll angle around the Y axis and the rotation yaw angle around the Z axis corresponding to each of the plurality of scanning cameras, to obtain a second height image; determining the Z-axis offset of each scanning camera according to the second height image, wherein the Z-axis offset is the offset of the height of each scanning camera relative to the average height of the plurality of scanning cameras; acquiring the circle center patterns of a plurality of original gray level images, and determining the circle center coordinates of a plurality of predicted calibration plates corresponding to the plurality of scanning cameras according to the circle center patterns of the plurality of original gray level images; acquiring the circle center coordinates of a plurality of actual calibration plates, and determining the plane offset of each of the plurality of scanning cameras according to the circle center coordinates of the plurality of predicted calibration plates and the circle center coordinates of the plurality of actual calibration plates, wherein the plane offset comprises the X-axis offset and the Y-axis offset; and unifying the plurality of scanning cameras to the machine coordinate system according to the Z-axis offset, the plane offset, the rotation pitch angle around the X axis, the rotation roll angle around the Y axis and the rotation yaw angle around the Z axis of each scanning camera.
Specifically, please refer to fig. 4, which is a schematic diagram of the process of unifying the scanning cameras according to an embodiment of the present disclosure. As shown in the figure, the target camera is the first of the plurality of scanning cameras. The scanning cameras should ideally lie on the same straight line at the same height, but in practice the actual position of each camera deviates slightly during installation; therefore, before processing a picture taken by a scanning camera, the picture needs to be processed according to the error between the camera's actual position and its theoretical position in the machine coordinate system. First, the attitude parameters are obtained from the line laser center pixel coordinates of the scanning camera. As shown in fig. 4, marked in the figure are the actual position of the camera and the theoretical position where the camera should be located; the X-axis offset, Y-axis offset and Z-axis offset of the scanning camera relative to the theoretical position are obtained from the actual position of camera 1, and if the shooting angle of the scanning camera is inclined, the rotation pitch angle around the X axis, the rotation roll angle around the Y axis and the rotation yaw angle around the Z axis also need to be obtained.
Place a calibration plate in the measurement field of view of the PCB image acquisition and processing system, and define θ as the rotation pitch angle around the X axis, γ as the rotation roll angle around the Y axis, and ψ as the rotation yaw angle around the Z axis; ΔX, ΔY and ΔZ are, in order, the offsets in the X, Y and Z directions. Scan a reference-plane region of the calibration plate with the line laser stripe to obtain an original height image, and combine the plurality of original height images sharing the same reference plane to obtain the first height image. Then solve in turn the height plane coefficients within the sub-block belonging to each scanning camera in the first height image, the height plane coefficients being obtained by least-squares plane fitting, and from them obtain the pitch angle θ, roll angle γ and yaw angle ψ of each scanning camera relative to the machine coordinate system. The least-squares plane fitting method is as follows. The mathematical expression of a spatial plane can be written as:

    z = a·x + b·y + c

For a series of n points

    (xi, yi, zi), i = 1, 2, …, n

it is only necessary to make

    S = Σᵢ (a·xi + b·yi + c − zi)²

a minimum, wherein a, b and c are the plane coefficients. When S is minimized, a, b and c take the values of the fitted coefficients; setting the partial derivatives of S to zero, i.e.

    ∂S/∂a = 0, ∂S/∂b = 0, ∂S/∂c = 0

the matrix representation of the final result is:

    | Σxi²    Σxi·yi  Σxi |   | a |   | Σxi·zi |
    | Σxi·yi  Σyi²    Σyi | · | b | = | Σyi·zi |
    | Σxi     Σyi     n   |   | c |   | Σzi    |

Solving the above matrix equation yields the coefficients a, b and c.
From the coefficients, the unit normal vector of the corresponding plane is obtained, and the parameters θ, γ and ψ are obtained from the included angles between the unit normal vector and the coordinate axes. The first height image is then corrected using the obtained pitch angle θ, roll angle γ and yaw angle ψ to obtain the second height image; the correction formula is:

    [Xc, Yc, Zc]ᵀ = R(θ, γ, ψ) · [Xw1, Yw1, Zw1]ᵀ + [ΔX, ΔY, ΔZ]ᵀ

wherein (Xw1, Yw1, Zw1) are the original position coordinates derived from the center pixel coordinates of the scanning camera, (Xc, Yc, Zc) are the corrected coordinates of the scanning camera, and R(θ, γ, ψ) is the rotation matrix composed of the three rotation angles. According to the calibration plate model, the calibration plate has high flatness and small plane fluctuation, so the set of height mean values {S1, S2, …, Sn} of the cameras can be calculated from the second height image, along with the mean Save of the set; the ΔZ offset of each scanning camera relative to Save can then be calculated.
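The plane-fitting step above can be sketched as follows, assuming NumPy is available. `fit_plane` solves the least-squares problem for z = a·x + b·y + c; `plane_tilt_angles` shows one plausible (assumed) convention for reading tilt angles off the unit normal — the exact angle convention is not spelled out here, so treat it as illustrative:

```python
import numpy as np

def fit_plane(points):
    """Least-squares fit of z = a*x + b*y + c to n points (x, y, z);
    returns (a, b, c). Equivalent to solving the normal equations."""
    pts = np.asarray(points, dtype=float)
    M = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
    coeffs, *_ = np.linalg.lstsq(M, pts[:, 2], rcond=None)
    return coeffs  # a, b, c

def plane_tilt_angles(a, b):
    """Tilt of the fitted plane from its unit normal (a, b, -1)/|.|;
    a hypothetical convention for deriving pitch/roll from the normal."""
    n = np.array([a, b, -1.0])
    n /= np.linalg.norm(n)
    theta = np.arcsin(abs(n[1]))  # rotation about X (pitch), assumed convention
    gamma = np.arcsin(abs(n[0]))  # rotation about Y (roll), assumed convention
    return theta, gamma
```

For a horizontal reference plane (a = b = 0) both angles are zero, and the correction step leaves the height image unchanged apart from the translation offsets.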
A plurality of original gray level images corresponding to the plurality of scanning cameras can be obtained from the natural-light irradiated area of the calibration plate. A feature extraction algorithm is used to extract and segment the circle center patterns in each camera's sub-block of the original gray level images, using adaptive threshold segmentation based on the maximum between-class variance (Otsu) principle, which is as follows. Calculate the cumulative mean m(K) up to gray level K and the global image mean mG:

    m(K) = Σ_{i=0..K} i·Pi,    mG = Σ_{i=0..L−1} i·Pi

wherein Pi is the probability of gray level i. With P1(K) = Σ_{i=0..K} Pi the probability of the class below the threshold, the between-class variance formula in this embodiment is:

    σB²(K) = (mG·P1(K) − m(K))² / (P1(K)·(1 − P1(K)))

The gray level K that maximizes this formula is the calculated segmentation threshold. The target hole and the background are segmented according to the segmentation threshold, and finally the gray-scale gravity center formulas

    x̄ = Σ x·g(x, y) / Σ g(x, y)

    ȳ = Σ y·g(x, y) / Σ g(x, y)

are used to calculate the center position of the corresponding hole, giving the 2D actual circle center coordinates and 3D actual circle center coordinates. The 2D/3D circle center pattern coordinates in each camera's sub-block of the original gray level image are calculated, the corresponding actual circle center coordinates on the calibration board are solved, and the ΔX and ΔY offsets of each scanning camera relative to the machine coordinate system are calculated.
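The threshold and gravity-center steps can be sketched in plain Python (hypothetical function names; a real system would operate on per-sub-block histograms and images):

```python
def otsu_threshold(hist):
    """Gray level K maximizing the between-class variance
    sigma_B^2(K) = (mG*P1 - m)^2 / (P1*(1-P1)) over histogram `hist`."""
    total = sum(hist)
    p = [h / total for h in hist]                       # gray-level probabilities Pi
    mG = sum(i * pi for i, pi in enumerate(p))          # global mean
    best_k, best_var = 0, -1.0
    P1 = m = 0.0
    for k in range(len(p) - 1):
        P1 += p[k]                                      # class probability
        m += k * p[k]                                   # cumulative mean m(K)
        if P1 <= 0 or P1 >= 1:
            continue
        var = (mG * P1 - m) ** 2 / (P1 * (1 - P1))      # between-class variance
        if var > best_var:
            best_var, best_k = var, k
    return best_k

def gray_centroid(img):
    """Gray-scale gravity center of a 2D list of gray values."""
    s = sx = sy = 0.0
    for y, row in enumerate(img):
        for x, g in enumerate(row):
            s += g; sx += x * g; sy += y * g
    return sx / s, sy / s
```

On a bimodal histogram the returned level separates the two modes; the centroid then localizes the hole center with sub-pixel precision inside the segmented region.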
In the embodiment of the application, the position and posture parameters of the multiple cameras are calibrated simultaneously in the same machine coordinate system, and the height data acquired by the multiple cameras are unified into the machine coordinate system; this effectively solves the problem of the small field-of-view range of a single camera in high-precision measurement and improves the large-format field-of-view imaging effect of high-precision measurement.
S203, acquiring a plurality of original 2D gray level images obtained by the plurality of scanning cameras scanning the target PCB irradiated by natural light, and acquiring a plurality of original 3D height images obtained by the plurality of scanning cameras scanning the target PCB irradiated by line laser.
Specifically, after the line laser irradiates the target PCB to be detected, the scanning camera scans and shoots the target PCB, so as to obtain an original image of the target PCB including line laser stripes. Obtaining an original 2D gray-scale image of the target PCB according to the natural light irradiation part in the original image of the target PCB; and obtaining an original 3D height map of the target PCB according to the line laser stripes in the original image of the target PCB.
In one possible embodiment, each of the plurality of scanning cameras comprises a light source assembly, the light source assembly comprises a line laser light source and a natural light source, and the distance between the irradiation regions of the line laser light source and the natural light source is greater than a first preset threshold. Acquiring a plurality of original 2D gray level images obtained by the plurality of scanning cameras scanning the target PCB irradiated by natural light, and acquiring a plurality of original 3D height images obtained by the plurality of scanning cameras scanning the target PCB irradiated by line laser, includes: moving the plurality of scanning cameras to scan along the Y-axis direction, so that the line laser light source and the natural light source traverse the target PCB; obtaining a 2D gray level image of the target PCB according to the natural-light irradiated area in the original image of the irradiated target PCB; and obtaining a plurality of original 3D height maps of the target PCB according to the line laser stripes in the image of the irradiated target PCB.
Specifically, an original 2D gray level image and an original 3D height image can be obtained by scanning and shooting the PCB to be detected while it is irradiated by the natural light source and the laser source: the natural-light irradiated area corresponds to the original 2D gray level image, and the line laser stripe corresponds to the original 3D height image. The original 2D gray level image contains gray level information in addition to horizontal-axis and vertical-axis coordinate information, and the original 3D height image contains height information in addition to horizontal-axis and vertical-axis coordinate information. Referring to fig. 5A, which is a schematic structural diagram of a single scanning camera according to an embodiment of the present disclosure, the light source assembly in a single scanning camera includes a line laser light source and a natural light source, which respectively generate line laser and natural light to irradiate the target PCB. The emitting assemblies of the line laser light source and the natural light source, together with the camera assembly, are fixed on a fixing frame; each of these assemblies may independently change its direction and angle on the fixing frame according to actual requirements, and the scanning camera can obtain the original 2D gray level image and original 3D height image of the target PCB from the two light sources in one scan. Referring to fig. 5B, which is a schematic view of the light sources of a scanning camera according to an embodiment of the present disclosure, the light sources of the scanning camera include a line laser light source and a natural light source, which respectively irradiate the PCB to produce a line laser stripe and a natural-light irradiated area; part of the area between the line laser stripe and the natural-light irradiated area receives no light, which distinguishes the line laser stripe from the natural-light irradiated area. With the structure of fig. 5A, the light sources of the scanning camera translate in the Y-axis direction so that the line laser and the natural light pass over the whole target PCB to be detected, and the scanning camera can shoot the line laser image and the natural light image of the target PCB to obtain its original 2D gray level image and 3D height image.
And S204, carrying out data fusion on the plurality of original 2D gray-scale images and the plurality of original 3D height images to obtain fused 2D gray-scale images and 3D height images.
Specifically, each original 2D grayscale image and each original 3D height image only include a partial image of a target PCB, and the original 2D grayscale images or the original 3D height images captured by the adjacent scanning cameras have an overlapping region, and after data fusion is performed on the overlapping regions of the original 2D grayscale images or the original 3D height images captured by the two adjacent scanning cameras, the two adjacent 2D grayscale images or the original 3D height images can be combined into one 2D grayscale image or one 3D height image according to the fused data.
For example, please refer to fig. 6A, which is a schematic diagram of images before fusion according to an embodiment of the present application, showing two original 2D gray level images obtained by two adjacent scanning cameras scanning the PCB. It can be seen that the two original 2D gray level images corresponding to the adjacent scanning cameras have an overlapping region, i.e., the dotted-line region in the figure, and that the images shot in the overlapping region are approximately consistent, so a fused overlapping region can be obtained. Please refer to fig. 6B, which is a schematic diagram after image fusion according to an embodiment of the present application: the fused overlapping region is obtained by calculation from the overlapping regions of the two original gray level images, and according to it the two original 2D gray level images obtained by the adjacent scanning cameras can be fused into one 2D gray level image.
In a possible embodiment, the data fusion of the multiple original 2D grayscale maps and the multiple original 3D height maps according to the position and orientation parameters to obtain a fused 2D grayscale map and a fused 3D height map, including: obtaining conversion formulas of the plurality of scanning cameras according to the attitude parameters corresponding to the plurality of scanning cameras; acquiring data of a superposition area between two original gray-scale images corresponding to adjacent scanning cameras; obtaining two results of data of the overlapping area of the original 2D gray-scale image and the original 3D height image of two adjacent scanning cameras according to a conversion formula; performing weighted data fusion on the two results of the overlapping area to obtain a fusion value of the data of the overlapping area; and generating a corresponding complete 2D gray-scale image and a complete 3D height image according to the fusion value of the data of the overlapped area.
Exemplarily, the measurement references of all cameras in the PCB image acquisition and processing system are unified according to the rotation pitch angle around the X axis, rotation roll angle around the Y axis, rotation yaw angle around the Z axis and offsets in the X, Y and Z directions solved for each camera. As can be seen from the structured-light diagram, the included angle between the laser and the camera differs from the included angle between the natural light emitter and the camera, so the six parameters of the 2D coordinate system and of the 3D coordinate system are not exactly the same; the conversion formula, applied with the corresponding parameter set, is:

    [X', Y', Z']ᵀ = R(θ, γ, ψ) · [X, Y, Z]ᵀ + [ΔX, ΔY, ΔZ]ᵀ

wherein (X, Y, Z) are the coordinates before conversion and (X', Y', Z') the unified coordinates. According to this formula, the 2D/3D coordinates of all cameras in the system can be integrated. An overlapping region exists between every two adjacent cameras; after the data in the overlapping region pass through the conversion formula, the two results for the same machine position need to be fused by weighting. Weighted data fusion performs a weighted average of the multi-source redundant information and takes the result as the fusion value.
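The weighted fusion of the overlap region reduces to a weighted average of the two converted measurements; a minimal sketch, assuming a fixed weight (the embodiment does not specify the weighting scheme — distance-to-border weights would be a common alternative):

```python
def fuse_overlap(values_cam_a, values_cam_b, w_a=0.5):
    """Weighted data fusion of the two measurements of the same
    overlap-region positions: a weighted average of the multi-source
    redundant information, used as the fusion value. `w_a` is a
    hypothetical fixed weight for camera A."""
    return [w_a * a + (1.0 - w_a) * b
            for a, b in zip(values_cam_a, values_cam_b)]
```

With w_a = 0.5 the fusion value is the plain mean of the two cameras' readings; setting w_a toward 1.0 trusts camera A more as positions approach its image center.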
And S205, carrying out combined calibration on the 2D gray level image and the 3D height image to obtain affine matrixes of the 2D gray level image and the 3D height image.
Specifically, the obtained 2D gray level image and 3D height image are jointly calibrated to obtain the affine matrix of the 2D gray level image and the 3D height image, so that the image position information of the 2D gray level image and the 3D height image corresponds one to one.
In a possible embodiment, jointly calibrating the 2D grayscale map and the 3D height map to obtain an affine matrix of the 2D grayscale map and the 3D height map includes: acquiring a 2D actual circle center coordinate and a 3D actual circle center coordinate according to the camera position posture parameter; calculating an affine transformation matrix of the 2D actual circle center coordinates and the 3D actual circle center coordinates to obtain initial values of the parameters of the 2D/3D position conversion model; eliminating random errors from the initial values of the 2D/3D position conversion model parameters to obtain 2D/3D position conversion model parameter affine; after 2D/3D position conversion model parameter affine sets of all actual circle center coordinates are obtained through calculation, data regression analysis is collected, and affine matrixes of a 2D gray scale image and a 3D height image are obtained.
Exemplarily, after the 2D actual circle center coordinates and the 3D actual circle center coordinates are obtained according to the camera position attitude parameters, the affine transformation matrices thereof are calculated according to the 2D actual circle center coordinates and the 3D actual circle center coordinates, and initial values of the 2D/3D position conversion model parameters are obtained. The parameters of affine transformation can be estimated by using a least square method, and the calculation process is as follows:
The expression for the affine transformation is:

    [x', y']ᵀ = P · [x, y]ᵀ + T

wherein (x, y) is the planar position of the pixel, P is a 2×2 rotation matrix and T is a 2×1 translation vector; P and T are the affine transformation parameters, i.e.:

    x' = A·x + B·y + C
    y' = D·x + E·y + F

The problem therefore reduces to solving for the coordinate transformation coefficients A, B, C, D, E and F. To prevent the occurrence of empty pixels, inverse mapping is generally used; the coefficients are obtained by the least squares method:

    [A, B, C]ᵀ = (Mᵀ·M)⁻¹ · Mᵀ · U
    [D, E, F]ᵀ = (Mᵀ·M)⁻¹ · Mᵀ · V

wherein

    M = [X  Y  I]

and X, Y, U, V and I are the vectors formed by x, y, x', y' and 1 respectively.
After the affine matrix of the 2D gray level image and the 3D height image is obtained, the affine transformation error of the corresponding points of each 2D gray level image and 3D height image is calculated. By statistically analyzing the affine projection error of each point and setting the random interference parameters of the affine model, the corresponding error points are eliminated, giving the affine matrix of the 2D gray level image and the 3D height image with random errors removed, i.e., the Affine parameters of the 2D/3D position conversion model. The above steps are repeated multiple times to obtain an Affine set, data regression analysis is performed on the set, and the affine matrix of the 2D gray level image and the 3D height image is finally obtained.
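The least-squares solve for the affine parameters can be sketched as follows, assuming NumPy; `np.linalg.lstsq` is used here in place of writing out (MᵀM)⁻¹MᵀU explicitly and gives the same least-squares solution (function name hypothetical):

```python
import numpy as np

def estimate_affine(src, dst):
    """Least-squares estimate of the affine map x' = A*x + B*y + C,
    y' = D*x + E*y + F from matched point pairs (src -> dst);
    returns the 2x3 matrix [[A, B, C], [D, E, F]]."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    # design matrix M = [X Y I]: one row (x, y, 1) per matched point
    M = np.column_stack([src[:, 0], src[:, 1], np.ones(len(src))])
    row_x, *_ = np.linalg.lstsq(M, dst[:, 0], rcond=None)  # A, B, C
    row_y, *_ = np.linalg.lstsq(M, dst[:, 1], rcond=None)  # D, E, F
    return np.vstack([row_x, row_y])
```

With at least three non-collinear circle-center correspondences the system is determined; additional correspondences over-determine it and the least-squares solution averages out the per-point noise that the outlier-rejection step then analyzes.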
It can be seen that in the embodiment of the application, a plurality of original 2D grayscale images and 3D height maps are calibrated and fused into complete 2D grayscale images and 3D height maps according to the attitude parameters corresponding to a plurality of scanning cameras, so that the problem of small field of view range during high-precision measurement of a single camera is solved, and high-precision large-format 2D grayscale images and high-precision 3D height maps are obtained.
In a possible embodiment, after jointly calibrating the 2D grayscale map and the 3D height map to obtain the affine matrix of the 2D grayscale map and the 3D height map, the method further includes: and generating a joint display graph of the 2D gray-scale map and the 3D height map according to the affine matrixes of the 2D gray-scale map and the 3D height map.
Specifically, the corresponding relation between the 2D gray-scale image of the PCB and the image displayed on the 3D height map can be obtained according to the affine matrix of the 2D gray-scale image and the 3D height map of the target PCB, and the same image position of the 3D height map can be quickly found according to any image position on the 2D gray-scale image during checking and detection; the same image position on the 2D gray scale image can be searched according to any image position of the 3D height image.
In one possible embodiment, generating a joint display of the 2D gray level map and the 3D height map from their affine matrix comprises: displaying the 2D gray level map in a first display area, and displaying the 3D height map in a second display area; if the display center and the scaling of the first display area change, acquiring the display center and scaling of the first display area, acquiring the corresponding display image of the 3D height map according to them, and displaying that image in the second display area; and/or, if the display center and the scaling of the second display area change, acquiring the display center and scaling of the second display area, acquiring the corresponding display image of the 2D gray level map according to them, and displaying that image in the first display area.
Specifically, the first display area and the second display area may be two display windows on one display or two independent display areas of one display window, or the two independent displays may respectively display a 2D grayscale map and a 3D height map. When the image of any one display area is viewed, the image of the other display area is automatically adjusted and zoomed to the same position according to the parameters such as the display position and the zoom scale of the image of the corresponding display area.
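The linked-display behaviour reduces to mapping one view's display center through the affine matrix to position the other view; a minimal sketch with hypothetical names:

```python
def synced_view_center(center_2d, affine_2x3):
    """Map the 2D display-area center through the 2x3 affine matrix
    [[A, B, C], [D, E, F]] to obtain the matching 3D display-area
    center, so the second window follows the first."""
    x, y = center_2d
    A, B, C = affine_2x3[0]
    D, E, F = affine_2x3[1]
    return (A * x + B * y + C, D * x + E * y + F)
```

The zoom scale is carried over the same way (scaled by the linear part of the affine), and the inverse matrix maps a 3D-view center back onto the 2D view.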
It can be seen that in the embodiment of the application, the 2D grayscale image and the 3D height image of the target PCB are simultaneously displayed in different display areas, and the display picture of another display area is adjusted according to the display parameter of any one display area, so that the image of the target PCB is displayed to the greatest extent, the imaging effect of the large-format view field of the image of the PCB is improved, the information quantity displayed when the image of the PCB is detected is increased, and the efficiency and accuracy of detecting the image of the PCB are further increased.
In one possible embodiment, if multiple GPUs are used, the multiple scanning cameras and the multiple GPUs are connected and exchange information through a PCIE bus, and computing tasks are allocated to the many-core logic computing units (ALUs) of the GPUs according to an asynchronous allocation method.
Specifically, all the scanning cameras are directly connected to the graphics cards through a PCIE (Peripheral Component Interconnect Express) bus, PCIE being a high-speed serial computer expansion bus standard. The data collected by the cameras can be transmitted directly to the graphics cards for display and storage. The processed data, i.e., the 2D gray level image, the 3D height image and their affine matrix, need to be displayed on the human-computer interaction interface in real time, but the current display screen has only one data-transmission-line interface: graphics card 1 performs data display while graphics card 2 cannot display. In this embodiment, all the image methods and principles are implemented on the GPU, and the GPU many-core logic computing unit (ALU) is an execution unit on the GPU. Meanwhile, other ALUs on the GPU can process other computing tasks simultaneously; an ALU is a combinational logic circuit capable of performing multiple groups of arithmetic and logic operations. Interference-free processing with multiple cameras and multiple graphics cards is thus realized. According to the structure in the embodiment of the invention, after shooting an image, the scanning camera can send the image of the target PCB directly to the GPU of the computer through the PCIE bus without secondary forwarding through the central processing unit, saving data transmission time. After receiving the original images shot by the scanning cameras, the GPU can obtain the 2D gray level image and 3D height image of the target PCB and their affine matrix according to the method in the foregoing embodiments.
It can be seen that in the embodiment of the present application, connecting the multiple GPUs and multiple scanning cameras through the PCIE bus avoids the situation in a conventional camera acquisition and processing system where data acquired by the cameras must be scheduled and allocated by the CPU before GPU parallel processing on the graphics card. The GPU directly receives the original images transmitted by the scanning cameras and generates the 2D gray level image, the 3D height image and their affine matrix without secondary forwarding by the central processing unit, which improves image-processing efficiency; instruction streams are issued in parallel to the many cores and executed on different input data, completing the massive operations of image processing. The parallel computing power can thus be multiplied compared to a single graphics card.
Implementing the method of the embodiment of the application, sub-pixel line laser center feature extraction is performed on each of the plurality of line laser images to obtain the line laser center pixel coordinates, from which the position and posture parameters of the plurality of scanning cameras are obtained and the scanning cameras are unified under the machine coordinate system through those parameters. The plurality of original 2D gray level images shot under natural light and the plurality of original 3D height images shot under line laser are fused to obtain a 2D gray level image and a 3D height image, which are then jointly calibrated so that their position information corresponds one to one. This effectively solves the problem of the small field-of-view range of a single camera in high-precision measurement, improves the large-format field-of-view imaging effect of high-precision measurement, increases the amount of information displayed when the PCB image is inspected, and further increases the efficiency and accuracy of PCB image detection.
Based on the description of the above method embodiment, the present application further provides an apparatus 700 for acquiring and processing 2D and 3D images of a PCB, where the apparatus 700 may be a computer program (including program code) running in a terminal and may perform the methods shown in figs. 1 and 2. Referring to fig. 7, fig. 7 is a schematic structural diagram of an apparatus for acquiring and processing 2D and 3D images of a PCB according to an embodiment of the present application, where the apparatus includes:
an acquisition unit 701, configured to acquire a plurality of line laser images of a target PCB irradiated by line laser and collected by the plurality of scanning cameras, and to perform sub-pixel line laser center feature extraction on each line laser image in the plurality of line laser images to obtain line laser center pixel coordinates of each line laser image;
a calculation unit 702, configured to obtain the position and attitude parameters of the plurality of scanning cameras from the line laser center pixel coordinates, and to unify the plurality of scanning cameras under the machine coordinate system through the position and attitude parameters;
the obtaining unit 701 is further configured to: acquiring a plurality of original 2D gray-scale images obtained by scanning a target PCB irradiated by natural light by a plurality of scanning cameras, and acquiring a plurality of original 3D height images obtained by scanning the target PCB irradiated by line laser by the plurality of scanning cameras;
the computing unit 702 is further configured to: performing data fusion on the plurality of original 2D gray level images and the plurality of original 3D height images to obtain fused 2D gray level images and 3D height images;
and the calibration unit 703 is configured to perform joint calibration on the 2D grayscale map and the 3D height map to obtain an affine matrix of the 2D grayscale map and the 3D height map.
The above units (the acquisition unit 701, the calculation unit 702, and the calibration unit 703) are configured to perform the relevant steps of the above method; for example, the acquisition unit 701 performs the relevant content of step S201 and the calculation unit 702 performs the relevant content of step S202.
In a possible embodiment, in terms of acquiring a plurality of line laser images of the target PCB irradiated by line laser and collected by the plurality of scanning cameras, and performing sub-pixel line laser center feature extraction on each line laser image to obtain the line laser center pixel coordinates of each line laser image, the calculation unit 702 is further specifically configured to: perform mean filtering on each line laser image to obtain a mean-filtered image; obtain the position of the maximum gray value of the mean-filtered image along the image column direction, and determine that position as the integer-pixel center of the laser line; and perform a Taylor expansion along the normal direction of the line at the integer-pixel center to calculate the line laser center pixel coordinate, which represents the height of the corresponding scanning camera in the machine coordinate system.
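As an illustrative sketch of the extraction steps just described — mean filtering, column-wise gray-value peak search, and sub-pixel refinement — the following Python/NumPy fragment may help. The parabolic (second-order Taylor) refinement across the stripe, and all function and parameter names, are assumptions for illustration, not taken from the patent:

```python
import numpy as np

def line_laser_centers(image, ksize=3):
    """Estimate a sub-pixel laser-line row position for each image column.

    image: 2D array of gray values; the laser stripe runs roughly along
    the rows, so each column holds one cross-section of the stripe.
    """
    # Mean filtering to suppress speckle noise before the peak search.
    pad = ksize // 2
    padded = np.pad(image.astype(float), pad, mode="edge")
    smoothed = np.empty(image.shape, dtype=float)
    for r in range(image.shape[0]):
        for c in range(image.shape[1]):
            smoothed[r, c] = padded[r:r + ksize, c:c + ksize].mean()

    rows, cols = smoothed.shape
    centers = np.empty(cols)
    for c in range(cols):
        p = int(np.argmax(smoothed[:, c]))  # integer-pixel center
        if 0 < p < rows - 1:
            f_m, f_0, f_p = smoothed[p - 1, c], smoothed[p, c], smoothed[p + 1, c]
            denom = f_m - 2.0 * f_0 + f_p
            # Second-order Taylor expansion of the intensity profile:
            # the vertex of the fitted parabola gives the sub-pixel offset.
            delta = 0.5 * (f_m - f_p) / denom if denom != 0 else 0.0
            centers[c] = p + delta
        else:
            centers[c] = p
    return centers
```

With a synthetic Gaussian stripe centered between pixel rows, the returned centers land within a fraction of a pixel of the true stripe position, which is the point of the sub-pixel step.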
In one possible embodiment, the position and attitude parameters include a pitch angle of rotation about the X axis, a roll angle of rotation about the Y axis, a yaw angle of rotation about the Z axis, an X-axis offset, a Y-axis offset, and a Z-axis offset. In terms of obtaining the position and attitude parameters of the plurality of cameras from the line laser center pixel coordinates and unifying the plurality of scanning cameras to the machine coordinate system through the position and attitude parameters, the calculation unit 702 is further specifically configured to: obtain a plurality of original height images from the line laser center pixel coordinates of the plurality of scanning cameras scanning the calibration plate, and combine the plurality of original height images on the same reference plane to obtain a first height image; solve the height plane coefficient of each original height image within the first height image, together with the pitch angle about the X axis, the roll angle about the Y axis, and the yaw angle about the Z axis corresponding to each original height image; correct the first height image according to the pitch, roll, and yaw angles corresponding to the respective scanning cameras to obtain a second height image; determine the Z-axis offset of each scanning camera from the second height image, where the Z-axis offset is the offset of the height of each scanning camera relative to the mean height of the plurality of scanning cameras; acquire the circle-center patterns of the plurality of original grayscale images, and determine therefrom the predicted calibration-plate circle-center coordinates corresponding to the plurality of scanning cameras; acquire the actual calibration-plate circle-center coordinates, and determine the plane offset of each scanning camera from the predicted and actual circle-center coordinates, where the plane offset includes the X-axis offset and the Y-axis offset; and unify the plurality of scanning cameras to the machine coordinate system according to the Z-axis offset, the plane offset, the pitch angle about the X axis, the roll angle about the Y axis, and the yaw angle about the Z axis of each scanning camera.
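The six pose parameters recovered above amount to a rigid transform that maps camera-frame points into the machine coordinate system, which the unification step applies per camera. The following sketch assumes a Z·Y·X rotation order and angles in radians; both conventions are illustrative choices, not specified by the patent:

```python
import numpy as np

def pose_matrix(pitch, roll, yaw, tx, ty, tz):
    """Homogeneous transform from a camera frame into the machine frame,
    built from the six pose parameters recovered by calibration."""
    cx, sx = np.cos(pitch), np.sin(pitch)   # rotation about X
    cy, sy = np.cos(roll), np.sin(roll)     # rotation about Y
    cz, sz = np.cos(yaw), np.sin(yaw)       # rotation about Z
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx                # assumed rotation order
    T[:3, 3] = [tx, ty, tz]
    return T

def to_machine_frame(points, pose):
    """Map N x 3 camera-frame points into the machine coordinate system."""
    pts = np.hstack([points, np.ones((len(points), 1))])
    return (pose_matrix(*pose) @ pts.T).T[:, :3]
```

Applying `to_machine_frame` with each camera's own pose parameters puts all strips into one shared frame, which is what makes the later overlap fusion meaningful.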
In one possible embodiment, each of the plurality of scanning cameras includes a light source assembly and a camera assembly, the light source assembly including a line laser light source and a natural light source, where the distance between the irradiation areas of the line laser light source and the natural light source is greater than a first preset threshold. In terms of acquiring the plurality of original 2D grayscale maps obtained by the plurality of scanning cameras scanning the target PCB under natural light, and the plurality of original 3D height maps obtained by scanning the target PCB under line laser, the obtaining unit 701 is further specifically configured to: move the plurality of scanning cameras along the Y-axis direction to scan, so that the line laser light source and the natural light source both traverse the target PCB; obtain the 2D grayscale maps of the target PCB from the natural-light irradiation area in the original images of the target PCB; and obtain the plurality of original 3D height maps of the target PCB from the laser stripes in the images of the target PCB irradiated by the line laser.
In a possible embodiment, in terms of performing data fusion on the plurality of original 2D grayscale maps and the plurality of original 3D height maps according to the position and attitude parameters to obtain the fused 2D grayscale map and 3D height map, the calculation unit 702 is further specifically configured to: obtain conversion formulas for the plurality of scanning cameras from their corresponding position and attitude parameters; acquire the data of the overlap region between the two original grayscale images corresponding to adjacent scanning cameras; obtain, by the conversion formulas, the two results for the overlap-region data of the original 2D grayscale maps and original 3D height maps of the two adjacent scanning cameras; perform weighted data fusion on the two results to obtain a fusion value of the overlap-region data; and generate the corresponding complete 2D grayscale map and complete 3D height map from the fusion value of the overlap-region data.
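The weighted fusion of the overlap region can be sketched as follows. A linear ramp weight across the overlap is one common choice for hiding the seam; the patent does not specify the weighting function, so this particular ramp (and the function names) are assumptions:

```python
import numpy as np

def blend_overlap(strip_a, strip_b):
    """Fuse the overlap region of two adjacent camera strips.

    strip_a, strip_b: H x W arrays covering the SAME overlap region,
    already mapped into machine coordinates. A linear ramp gives the
    left camera full weight at its own side and zero weight at the far
    side, so the transition between cameras is gradual.
    """
    h, w = strip_a.shape
    weight_a = np.linspace(1.0, 0.0, w)[None, :]  # 1 at left edge, 0 at right
    return weight_a * strip_a + (1.0 - weight_a) * strip_b

def stitch(left, right, overlap):
    """Concatenate two strips whose last/first `overlap` columns coincide."""
    fused = blend_overlap(left[:, -overlap:], right[:, :overlap])
    return np.hstack([left[:, :-overlap], fused, right[:, overlap:]])
```

The same blending applies to both the 2D grayscale strips and the 3D height strips, since after coordinate unification both are plain per-pixel value maps.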
In a possible embodiment, in terms of jointly calibrating the 2D grayscale map and the 3D height map to obtain the affine matrix of the 2D grayscale map and the 3D height map, the calibration unit 703 is further specifically configured to: acquire the 2D actual circle-center coordinates and the 3D actual circle-center coordinates according to the camera position and attitude parameters; calculate the affine transformation matrix between the 2D and 3D actual circle-center coordinates to obtain initial values of the 2D/3D position conversion model parameters; eliminate random errors from the initial values to obtain the 2D/3D position conversion model parameters; and after calculating the set of 2D/3D position conversion model parameters for all actual circle-center coordinates, perform regression analysis on the collected data to obtain the affine matrix of the 2D grayscale map and the 3D height map.
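A minimal sketch of estimating the 2D/3D affine matrix from matched circle-center coordinates, assuming a plain least-squares fit in place of the patent's per-point parameter sets and regression analysis (function names are illustrative):

```python
import numpy as np

def fit_affine(pts_2d, pts_3d):
    """Least-squares 2 x 3 affine matrix A mapping 3D-map pixel
    coordinates to 2D-map pixel coordinates, from matched circle
    centers: pts_2d ≈ A @ [x, y, 1]."""
    src = np.hstack([pts_3d, np.ones((len(pts_3d), 1))])  # N x 3
    A, *_ = np.linalg.lstsq(src, pts_2d, rcond=None)      # 3 x 2 solution
    return A.T                                            # 2 x 3

def apply_affine(A, pts):
    """Apply a 2 x 3 affine matrix to N x 2 points."""
    return (A @ np.hstack([pts, np.ones((len(pts), 1))]).T).T
```

With six or more well-spread circle centers the fit is overdetermined, which is what lets outliers (the "random errors" above) be detected and removed before the final estimate.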
In one possible embodiment, if there are multiple GPUs, the plurality of scanning cameras and the multiple GPUs are connected and exchange information through the PCIE bus, and computing tasks are distributed to the many-core arithmetic logic units (ALUs) of the GPUs according to an asynchronous allocation method.
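The asynchronous allocation of per-frame computing tasks across multiple GPUs can be mimicked with a small scheduler sketch. Here thread pools stand in for per-GPU command queues and `process` stands in for the per-frame kernels; a real deployment would use CUDA streams or a similar API, so everything below is illustrative:

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import cycle

def dispatch_frames(frames, num_gpus, process):
    """Round-robin asynchronous dispatch of camera frames to GPU workers.

    Each single-worker pool models one GPU's command queue. Frames are
    submitted without waiting for results, so a slow frame on one GPU
    never stalls submission to the others; results are gathered at the
    end in submission order.
    """
    pools = [ThreadPoolExecutor(max_workers=1) for _ in range(num_gpus)]
    try:
        futures = [
            (gpu, pool.submit(process, gpu, frame))
            for (gpu, pool), frame in zip(cycle(enumerate(pools)), frames)
        ]
        return [(gpu, f.result()) for gpu, f in futures]
    finally:
        for pool in pools:
            pool.shutdown()
```

Round-robin is only one possible policy; a load-aware scheduler that submits each frame to the least-busy queue would serve the same asynchronous-allocation idea.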
Based on the description of the above method embodiment and apparatus embodiment, please refer to fig. 8, which is a schematic structural diagram of an electronic device according to an embodiment of the present application. The electronic device 800 described in this embodiment includes a processor 801, a memory 802, a communication interface 803, and one or more programs. The processor 801 may be a general central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits for controlling the execution of the above programs. The memory 802 may be, but is not limited to, a read-only memory (ROM) or other type of static storage device that can store static information and instructions, a random access memory (RAM) or other type of dynamic storage device that can store information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage (including compact discs, laser discs, digital versatile discs, Blu-ray discs, etc.), a magnetic disk storage medium or other magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. The memory 802 may be separate and coupled to the processor 801 via a bus, or may be integrated with the processor 801. The communication interface 803 is used for communicating with other devices or communication networks, such as Ethernet, a radio access network (RAN), or a wireless local area network (WLAN). The one or more programs are stored in the memory in the form of program code and configured to be executed by the processor; in an embodiment of the present application, the programs include instructions for performing the following steps:
acquiring a plurality of line laser images of a target PCB irradiated by line laser acquired by a plurality of scanning cameras, and performing sub-pixel line laser center feature extraction on each line laser image in the plurality of line laser images to obtain line laser center pixel coordinates of each line laser image; respectively obtaining a plurality of camera position and attitude parameters according to the line laser center pixel coordinates, and unifying a plurality of scanning cameras to a machine table coordinate system through the position and attitude parameters; acquiring a plurality of original 2D gray-scale images obtained by scanning a target PCB irradiated by natural light by a plurality of scanning cameras, and acquiring a plurality of original 3D height images obtained by scanning the target PCB irradiated by line laser by the plurality of scanning cameras; performing data fusion on the plurality of original 2D gray level images and the plurality of original 3D height images to obtain fused 2D gray level images and 3D height images; and carrying out combined calibration on the 2D gray-scale image and the 3D height image to obtain affine matrixes of the 2D gray-scale image and the 3D height image.
In a possible embodiment, acquiring the plurality of line laser images of the target PCB irradiated by line laser and collected by the plurality of scanning cameras, and performing sub-pixel line laser center feature extraction on each line laser image to obtain the line laser center pixel coordinates of each line laser image, includes: performing mean filtering on each line laser image to obtain a mean-filtered image; obtaining the position of the maximum gray value of the mean-filtered image along the image column direction, and determining that position as the integer-pixel center of the laser line; and performing a Taylor expansion along the normal direction of the line at the integer-pixel center to calculate the line laser center pixel coordinate, which represents the height of the corresponding scanning camera in the machine coordinate system.
In one possible embodiment, the position and attitude parameters include a pitch angle of rotation about the X axis, a roll angle of rotation about the Y axis, a yaw angle of rotation about the Z axis, an X-axis offset, a Y-axis offset, and a Z-axis offset; and obtaining the plurality of camera position and attitude parameters from the line laser center pixel coordinates and unifying the plurality of scanning cameras to the machine coordinate system through the position and attitude parameters includes: obtaining a plurality of original height images from the line laser center pixel coordinates of the plurality of scanning cameras scanning the calibration plate, and combining the plurality of original height images on the same reference plane to obtain a first height image; solving the height plane coefficient of each original height image within the first height image, together with the pitch angle about the X axis, the roll angle about the Y axis, and the yaw angle about the Z axis corresponding to each original height image; correcting the first height image according to the pitch, roll, and yaw angles corresponding to the respective scanning cameras to obtain a second height image; determining the Z-axis offset of each scanning camera from the second height image, where the Z-axis offset is the offset of the height of each scanning camera relative to the mean height of the plurality of scanning cameras; acquiring the circle-center patterns of the plurality of original grayscale images, and determining therefrom the predicted calibration-plate circle-center coordinates corresponding to the plurality of scanning cameras; acquiring the actual calibration-plate circle-center coordinates, and determining the plane offset of each scanning camera from the predicted and actual circle-center coordinates, where the plane offset includes the X-axis offset and the Y-axis offset; and unifying the plurality of scanning cameras to the machine coordinate system according to the Z-axis offset, the plane offset, the pitch angle about the X axis, the roll angle about the Y axis, and the yaw angle about the Z axis of each scanning camera.
In one possible embodiment, each of the plurality of scanning cameras includes a light source assembly comprising a line laser light source and a natural light source, where the distance between the irradiation areas of the line laser light source and the natural light source is greater than a first preset threshold. Acquiring the plurality of original 2D grayscale maps obtained by the plurality of scanning cameras scanning the target PCB under natural light, and the plurality of original 3D height maps obtained by scanning the target PCB under line laser, includes: the plurality of scanning cameras moving along the Y-axis direction to scan, so that the line laser light source and the natural light source both traverse the target PCB; obtaining the 2D grayscale maps of the target PCB from the natural-light irradiation area in the original images of the target PCB; and obtaining the plurality of original 3D height maps of the target PCB from the laser stripes in the images of the target PCB irradiated by the line laser.
In a possible embodiment, performing data fusion on the plurality of original 2D grayscale maps and the plurality of original 3D height maps according to the position and attitude parameters to obtain the fused 2D grayscale map and 3D height map includes: obtaining conversion formulas for the plurality of scanning cameras from their corresponding position and attitude parameters; acquiring the data of the overlap region between the two original grayscale images corresponding to adjacent scanning cameras; obtaining, by the conversion formulas, the two results for the overlap-region data of the original 2D grayscale maps and original 3D height maps of the two adjacent scanning cameras; performing weighted data fusion on the two results to obtain a fusion value of the overlap-region data; and generating the corresponding complete 2D grayscale map and complete 3D height map from the fusion value of the overlap-region data.
In a possible embodiment, jointly calibrating the 2D grayscale map and the 3D height map to obtain the affine matrix of the 2D grayscale map and the 3D height map includes: acquiring the 2D actual circle-center coordinates and the 3D actual circle-center coordinates according to the camera position and attitude parameters; calculating the affine transformation matrix between the 2D and 3D actual circle-center coordinates to obtain initial values of the 2D/3D position conversion model parameters; eliminating random errors from the initial values to obtain the 2D/3D position conversion model parameters; and after calculating the set of 2D/3D position conversion model parameters for all actual circle-center coordinates, performing regression analysis on the collected data to obtain the affine matrix of the 2D grayscale map and the 3D height map.
In one possible embodiment, if there are multiple GPUs, the plurality of scanning cameras and the multiple GPUs are connected and exchange information through PCIE buses, and computing tasks are distributed to the many-core arithmetic logic units (ALUs) of the GPUs according to an asynchronous allocation method.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative; the division of the units is only one type of logical functional division, and there may be other divisions in actual implementation; for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices, or units, and may be in electrical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit may be implemented in the form of hardware, or may also be implemented in the form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable memory. Based on such understanding, the technical solutions of the present application, in essence, or the parts contributing to the prior art, or all or part of the technical solutions, can be embodied in the form of a software product stored in a memory and including several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned memory includes various media capable of storing program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disc.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing associated hardware; the program may be stored in a computer-readable memory, which may include a flash memory disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, and the like.
The above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application.

Claims (10)

1. A method for acquiring and processing 2D and 3D images of a PCB, applied to an image processor (GPU) of an optical detection system for a printed circuit board (PCB), the PCB optical detection system further comprising a plurality of scanning cameras, characterized in that the method comprises:
acquiring a plurality of line laser images of a target PCB irradiated by line laser acquired by the plurality of scanning cameras, and performing sub-pixel line laser central feature extraction on each line laser image in the plurality of line laser images to obtain line laser central pixel coordinates of each line laser image;
respectively obtaining position attitude parameters corresponding to a plurality of scanning cameras according to the line laser center pixel coordinates, and unifying the plurality of scanning cameras under a machine platform coordinate system through the position attitude parameters;
acquiring a plurality of original 2D gray-scale images obtained by scanning the target PCB irradiated by natural light by the plurality of scanning cameras, and acquiring a plurality of original 3D height images obtained by scanning the target PCB irradiated by line laser by the plurality of scanning cameras;
performing data fusion on the plurality of original 2D gray level images and the plurality of original 3D height images according to the position posture parameters to obtain fused 2D gray level images and fused 3D height images;
and carrying out combined calibration on the 2D gray-scale image and the 3D height image to obtain an affine matrix of the 2D gray-scale image and the 3D height image.
2. The method according to claim 1, wherein the obtaining a plurality of line laser images of a target PCB board irradiated by line laser collected by the plurality of scanning cameras, performing sub-pixel line laser center feature extraction on each line laser image in the plurality of line laser images, and obtaining line laser center pixel coordinates of each line laser image comprises:
carrying out mean filtering on each line laser image to obtain a mean-filtered image;
acquiring the position of the maximum gray value of the mean-filtered image along the image column direction, and determining the position of the maximum gray value as the integer-pixel center of the line laser image;
and performing a Taylor expansion along the normal direction of the line at the integer-pixel center of the line laser, and calculating the line laser center pixel coordinate, wherein the line laser center pixel coordinate represents the height of the corresponding scanning camera in the machine coordinate system.
3. The method of claim 1, wherein the position and attitude parameters include a pitch angle of rotation about the X axis, a roll angle of rotation about the Y axis, a yaw angle of rotation about the Z axis, an X-axis offset, a Y-axis offset, and a Z-axis offset; and the obtaining the position and attitude parameters of the plurality of cameras according to the line laser center pixel coordinates and unifying the plurality of scanning cameras to the machine coordinate system through the position and attitude parameters comprises:
obtaining, from the line laser center pixel coordinates of the plurality of scanning cameras, the corresponding plurality of original grayscale images and plurality of original height images produced by the plurality of scanning cameras scanning the calibration plate, and combining the plurality of original height images on the same reference plane to obtain a first height image;
solving height plane coefficients of the plurality of original height images in the first height image, and a rotation pitch angle around an X axis, a rotation roll angle around a Y axis and a rotation yaw angle around a Z axis corresponding to the plurality of original height images;
correcting the first height image according to the rotation pitch angle around the X axis, the rotation roll angle around the Y axis and the rotation yaw angle around the Z axis which correspond to the plurality of scanning cameras respectively to obtain a second height image;
determining a Z-axis offset of each scanning camera according to the second height image, wherein the Z-axis offset is an offset of the height of each scanning camera relative to a mean height of the plurality of scanning cameras;
acquiring circle center patterns of a plurality of original gray level images, and determining circle center coordinates of a plurality of predictive calibration plates corresponding to the plurality of scanning cameras according to the circle center patterns of the plurality of original gray level images;
obtaining circle center coordinates of a plurality of actual calibration plates, and determining the plane offset of each scanning camera in the plurality of scanning cameras according to the circle center coordinates of the plurality of predicted calibration plates and the circle center coordinates of the plurality of actual calibration plates, wherein the plane offset comprises X-axis offset and Y-axis offset;
and unifying the plurality of scanning cameras to a machine table coordinate system according to the Z-axis offset, the plane offset, the rotation pitch angle around the X axis, the rotation roll angle around the Y axis and the rotation yaw angle around the Z axis of each scanning camera.
4. The method of claim 1, wherein each of the plurality of scanning cameras comprises a light source assembly and a camera assembly, the light source assembly comprising a line laser light source and a natural light source, and the distance between the irradiation areas of the line laser light source and the natural light source is greater than a first preset threshold; and the obtaining the plurality of original 2D grayscale maps obtained by the plurality of scanning cameras scanning the target PCB irradiated by natural light, and the obtaining the plurality of original 3D height maps obtained by the plurality of scanning cameras scanning the target PCB irradiated by line laser, comprise:
the plurality of scanning cameras move to scan along the Y-axis direction, so that the linear laser light source and the natural light source both traverse the target PCB;
obtaining a 2D gray scale image of the target PCB according to a natural light irradiation area in the original image of the target PCB irradiated by the laser;
and obtaining a plurality of original 3D height maps of the target PCB according to the line laser stripes in the images of the target PCB irradiated by the laser.
5. The method of claim 1, wherein the performing data fusion on the plurality of original 2D grayscale maps and the plurality of original 3D height maps according to the position and attitude parameters to obtain the fused 2D grayscale map and 3D height map comprises:
obtaining conversion formulas of the plurality of scanning cameras according to the attitude parameters corresponding to the plurality of scanning cameras;
acquiring data of a coincidence region between two original gray-scale images corresponding to the adjacent scanning cameras;
obtaining two results of data of the overlapped area of the original 2D gray level image and the original 3D height image of the two adjacent scanning cameras according to the conversion formula;
performing weighted data fusion on the two results of the overlapping area to obtain a fusion value of the overlapping area data;
and generating a corresponding complete 2D gray-scale image and a complete 3D height image according to the fusion value of the data of the overlapped area.
6. The method according to claim 1, wherein the jointly calibrating the 2D grayscale map and the 3D height map to obtain the affine matrix of the 2D grayscale map and the 3D height map comprises:
acquiring a 2D actual circle center coordinate and a 3D actual circle center coordinate according to the camera position posture parameter;
calculating an affine transformation matrix of the 2D actual circle center coordinates and the 3D actual circle center coordinates to obtain initial values of the parameters of the 2D/3D position conversion model;
eliminating random errors from the initial values of the 2D/3D position conversion model parameters to obtain 2D/3D position conversion model parameter affine;
and after calculating to obtain a 2D/3D position conversion model parameter affine set of all the actual circle center coordinates, performing regression analysis on the set data to obtain affine matrixes of a 2D gray level graph and a 3D height graph.
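The regression step of claim 6 can be illustrated as an ordinary least-squares fit of a 2x3 affine matrix to the matched circle-center pairs; solving all correspondences jointly is what averages out the per-point random errors. This is a generic sketch of affine estimation from point pairs, not the patent's specific model.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2x3 affine matrix mapping src points to dst.

    src, dst: (N, 2) arrays of matched circle-center coordinates,
    e.g. centers measured in the 2D gray-scale map (src) and in the
    3D height map (dst).
    """
    n = src.shape[0]
    A = np.hstack([src, np.ones((n, 1))])        # (N, 3): rows [x, y, 1]
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)  # (3, 2) solution
    return M.T                                   # (2, 3) affine matrix

# Synthetic check: recover a known rotation + translation exactly
rng = np.random.default_rng(0)
src = rng.uniform(0, 100, size=(20, 2))
theta = 0.1
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
dst = src @ R.T + np.array([5.0, -3.0])
M = fit_affine(src, dst)   # M[:, :2] ~ R, M[:, 2] ~ (5, -3)
```

With noisy centers, the same call returns the least-squares affine matrix; outlier rejection (the claim's "eliminating random errors") would be applied to the correspondences before this fit.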
7. The method of any one of claims 1-6, wherein when there are multiple GPUs, the plurality of scanning cameras and the multiple GPUs are connected through a PCIe bus for information interaction, and computing tasks are distributed to the many-core arithmetic logic units (ALUs) of the GPUs according to an asynchronous distribution method.
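The asynchronous distribution of claim 7 amounts to work pulling: each compute unit takes the next pending task as soon as it finishes, rather than receiving a fixed static share up front. The sketch below models the GPUs' command queues with a thread pool and a dummy per-tile workload; it illustrates only the scheduling pattern, not GPU execution.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def process_tile(tile_id):
    """Stand-in for a per-tile compute task (e.g. stripe extraction)."""
    return tile_id, sum(range(tile_id * 1000))  # dummy workload

tiles = list(range(8))
results = {}
# Asynchronous distribution: each worker (standing in for one GPU's
# command queue) pulls the next tile as soon as it finishes, so fast
# tiles do not leave a worker idle while slow tiles run elsewhere.
with ThreadPoolExecutor(max_workers=2) as pool:
    futures = {pool.submit(process_tile, t): t for t in tiles}
    for fut in as_completed(futures):
        tile_id, value = fut.result()
        results[tile_id] = value
```

Results arrive in completion order, not submission order, which is why they are keyed by tile id before reassembly.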
8. A 2D and 3D image acquisition and processing apparatus for a PCB, applied to an image processor (GPU) of a PCB optical inspection system, the PCB optical inspection system further comprising a plurality of scanning cameras, wherein the apparatus comprises:
an acquisition unit, configured to acquire a plurality of line laser images of a target PCB irradiated by line laser and collected by the plurality of scanning cameras, and to perform sub-pixel line laser center feature extraction on each of the plurality of line laser images to obtain the line laser center pixel coordinates of each line laser image;
a calculation unit, configured to obtain position and attitude parameters corresponding to the plurality of scanning cameras according to the line laser center pixel coordinates, and to unify the scanning cameras into a machine coordinate system through the position and attitude parameters;
the acquisition unit being further configured to acquire a plurality of original 2D gray-scale maps obtained by the plurality of scanning cameras scanning the target PCB irradiated by natural light, and a plurality of original 3D height maps obtained by the plurality of scanning cameras scanning the target PCB irradiated by line laser;
the calculation unit being further configured to perform data fusion on the plurality of original 2D gray-scale maps and the plurality of original 3D height maps to obtain a fused 2D gray-scale map and a fused 3D height map;
and a calibration unit, configured to jointly calibrate the 2D gray-scale map and the 3D height map to obtain an affine matrix of the 2D gray-scale map and the 3D height map.
9. An electronic device, comprising a processor, a memory, a communication interface, and one or more programs stored in the memory and configured to be executed by the processor, the programs comprising instructions for performing the steps of the method of any one of claims 1-7.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program for electronic data exchange, wherein the computer program causes a computer to perform the method according to any one of claims 1-7.
CN202211494737.4A 2022-11-26 2022-11-26 Method for acquiring and processing 2D (two-dimensional) and 3D (three-dimensional) images of PCB (printed Circuit Board) and related device Active CN115546016B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211494737.4A CN115546016B (en) 2022-11-26 2022-11-26 Method for acquiring and processing 2D (two-dimensional) and 3D (three-dimensional) images of PCB (printed Circuit Board) and related device

Publications (2)

Publication Number Publication Date
CN115546016A true CN115546016A (en) 2022-12-30
CN115546016B CN115546016B (en) 2023-03-31

Family

ID=84722755

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211494737.4A Active CN115546016B (en) 2022-11-26 2022-11-26 Method for acquiring and processing 2D (two-dimensional) and 3D (three-dimensional) images of PCB (printed Circuit Board) and related device

Country Status (1)

Country Link
CN (1) CN115546016B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116071240A (en) * 2023-03-07 2023-05-05 广东利元亨智能装备股份有限公司 Image stitching method, device, computer equipment and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107907048A (en) * 2017-06-30 2018-04-13 长沙湘计海盾科技有限公司 A kind of binocular stereo vision method for three-dimensional measurement based on line-structured light scanning
CN113240674A (en) * 2021-07-09 2021-08-10 深圳市艾视铂智能技术有限公司 Coplanarity detection method based on three-dimensional point cloud and two-dimensional image fusion
CN114331977A (en) * 2021-12-16 2022-04-12 深圳市鹰眼在线电子科技有限公司 Splicing calibration system, method and device of multi-array three-dimensional measurement system
CN114463405A (en) * 2022-01-26 2022-05-10 熵智科技(深圳)有限公司 Method, device and system for accelerating surface scanning line laser 3D camera and FPGA
CN114612409A (en) * 2022-03-04 2022-06-10 广州镭晨智能装备科技有限公司 Projection calibration method and device, storage medium and electronic equipment
CN114638900A (en) * 2022-02-22 2022-06-17 杭州凌像科技有限公司 Iterative calibration method and system for optical distortion and pose of laser scanning system
WO2022188562A1 (en) * 2021-03-12 2022-09-15 苏州苏大维格科技集团股份有限公司 Three-dimensional micro-nano morphological structure manufactured by laser direct writing lithography machine, and preparation method therefor

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
WU Fupei et al.: "Three-dimensional quality inspection method for PCB solder joint surfaces", Chinese Journal of Scientific Instrument *
FAN Tianhai et al.: "Research on a machine-vision-based component pin height detection system", Optical Technique *

Similar Documents

Publication Publication Date Title
CN111179358B (en) Calibration method, device, equipment and storage medium
CN110717942B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN109272574B (en) Construction method and calibration method of linear array rotary scanning camera imaging model based on projection transformation
CN113643414B (en) Three-dimensional image generation method and device, electronic equipment and storage medium
CN110705433A (en) Bridge deformation monitoring method, device and equipment based on visual perception
CN115546016B (en) Method for acquiring and processing 2D (two-dimensional) and 3D (three-dimensional) images of PCB (printed Circuit Board) and related device
WO2019001164A1 (en) Optical filter concentricity measurement method and terminal device
CN111311671B (en) Workpiece measuring method and device, electronic equipment and storage medium
CN115060162A (en) Chamfer dimension measuring method and device, electronic equipment and storage medium
CN114998448A (en) Method for calibrating multi-constraint binocular fisheye camera and positioning space point
CN115187612A (en) Plane area measuring method, device and system based on machine vision
CN117522963A (en) Corner positioning method and device of checkerboard, storage medium and electronic equipment
CN111583388A (en) Scanning method and device of three-dimensional scanning system
CN112894154B (en) Laser marking method and device
CN108564571B (en) Image area selection method and terminal equipment
JP4747293B2 (en) Image processing apparatus, image processing method, and program used therefor
CN114549613A (en) Structural displacement measuring method and device based on deep super-resolution network
CN113870365B (en) Camera calibration method, device, equipment and storage medium
EP4040392A1 (en) Camera calibration method and apparatus and electronic device
CN116823791A (en) PIN defect detection method, device, equipment and computer readable storage medium
CN117593378A (en) Device and method for calibrating internal parameters of vehicle-mounted camera module
CN116433848A (en) Screen model generation method, device, electronic equipment and storage medium
Bu et al. GPU-based distortion correction for CMOS positioning camera using star point measurement
Yang et al. Camera calibration with active standard Gaussian stripes for 3D measurement
CN116823639A (en) Image distortion correction method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant