CN117102661A - Visual positioning method and laser processing equipment - Google Patents
- Publication number
- CN117102661A (application CN202311384668.6A)
- Authority
- CN
- China
- Prior art keywords
- position data
- target object
- data
- determining
- camera
- Prior art date
- Legal status
- Granted
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B23—MACHINE TOOLS; METAL-WORKING NOT OTHERWISE PROVIDED FOR
- B23K—SOLDERING OR UNSOLDERING; WELDING; CLADDING OR PLATING BY SOLDERING OR WELDING; CUTTING BY APPLYING HEAT LOCALLY, e.g. FLAME CUTTING; WORKING BY LASER BEAM
- B23K26/00—Working by laser beam, e.g. welding, cutting or boring
- B23K26/02—Positioning or observing the workpiece, e.g. with respect to the point of impact; Aligning, aiming or focusing the laser beam
- B23K26/03—Observing, e.g. monitoring, the workpiece
- B23K26/032—Observing, e.g. monitoring, the workpiece using optical means
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B23—MACHINE TOOLS; METAL-WORKING NOT OTHERWISE PROVIDED FOR
- B23K—SOLDERING OR UNSOLDERING; WELDING; CLADDING OR PLATING BY SOLDERING OR WELDING; CUTTING BY APPLYING HEAT LOCALLY, e.g. FLAME CUTTING; WORKING BY LASER BEAM
- B23K26/00—Working by laser beam, e.g. welding, cutting or boring
- B23K26/02—Positioning or observing the workpiece, e.g. with respect to the point of impact; Aligning, aiming or focusing the laser beam
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B23—MACHINE TOOLS; METAL-WORKING NOT OTHERWISE PROVIDED FOR
- B23K—SOLDERING OR UNSOLDERING; WELDING; CLADDING OR PLATING BY SOLDERING OR WELDING; CUTTING BY APPLYING HEAT LOCALLY, e.g. FLAME CUTTING; WORKING BY LASER BEAM
- B23K26/00—Working by laser beam, e.g. welding, cutting or boring
- B23K26/36—Removing material
- B23K26/38—Removing material by boring or cutting
- B23K26/382—Removing material by boring or cutting by boring
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B23—MACHINE TOOLS; METAL-WORKING NOT OTHERWISE PROVIDED FOR
- B23K—SOLDERING OR UNSOLDERING; WELDING; CLADDING OR PLATING BY SOLDERING OR WELDING; CUTTING BY APPLYING HEAT LOCALLY, e.g. FLAME CUTTING; WORKING BY LASER BEAM
- B23K2101/00—Articles made by soldering, welding or cutting
- B23K2101/36—Electric or electronic devices
- B23K2101/42—Printed circuits
Abstract
The invention discloses a visual positioning method and a laser processing apparatus. The method includes: moving a first camera and performing positioning detection on a target object to determine first position data and second position data; determining first offset data from the first position data and the second position data; performing position compensation on the target object according to the first offset data and preset initial position data of the target object to determine actual position data of the target object; determining third position data from the actual position data of the target object; and moving according to the third position data and performing positioning detection on the target object through a second camera to determine a positioning detection result. The first camera, which has the larger field of view, captures the first identification point and the second identification point quickly; the offset of the target object is determined from their position data and compensated; and the second camera then performs high-precision positioning, so that positioning is both fast and accurate.
Description
Technical Field
The invention relates to the technical field of laser processing equipment, in particular to a visual positioning method and laser processing equipment.
Background
A laser processing apparatus, such as one used to drill circuit boards, must position the product to be processed at the start of a job in order to determine the coordinate origin of the job. In the related art, most laser processing apparatuses use cameras for visual positioning; however, because a camera's field of view and its accuracy are in tension, and because the product to be processed may be placed with a position deviation, the visual positioning methods of the related art struggle to position products both quickly and accurately.
Disclosure of Invention
The present invention aims to solve at least one of the technical problems existing in the prior art. Therefore, the invention provides a visual positioning method and laser processing equipment, which can improve the efficiency and accuracy of visual positioning.
In one aspect, an embodiment of the present invention provides a visual positioning method applied to a laser processing apparatus that includes a first camera and a second camera, the method including:
moving and carrying out positioning detection on a target object through the first camera, and determining first position data and second position data, wherein the first position data is used for representing actual position data of a first identification point arranged on the target object, the second position data is used for representing actual position data of a second identification point arranged on the target object, and the visual field range of the first camera is larger than that of the second camera;
determining first offset data according to the first position data and the second position data, wherein the first offset data is used for representing the offset between the actual position of the target object and a preset initial position;
performing position compensation on the target object according to the first offset data and preset initial position data of the target object, and determining actual position data of the target object;
determining third position data according to the actual position data of the target object, wherein the third position data is used for representing theoretical position data of a plurality of third identification points arranged on the target object;
and moving according to the third position data, and carrying out positioning detection on the target object through the second camera to determine a positioning detection result.
According to some embodiments of the invention, the moving and positioning detection of the target object by the first camera, determining the first position data and the second position data, includes:
acquiring first theoretical position data and second theoretical position data, wherein the first theoretical position data is used for representing a preset initial position of the first identification point, and the second theoretical position data is used for representing a preset initial position of the second identification point;
according to the first theoretical position data, moving and carrying out positioning detection on the target object through the first camera to determine the first position data;
and according to the second theoretical position data, moving and carrying out positioning detection on the target object through the first camera, and determining the second position data.
According to some embodiments of the invention, the determining the first offset data according to the first position data and the second position data includes:
performing a difference operation on the first position data and the second position data under a preset first rectangular coordinate system to determine a first difference value and a second difference value, wherein the first difference value represents the coordinate difference on the X axis of the first rectangular coordinate system, the second difference value represents the coordinate difference on the Y axis of the first rectangular coordinate system, the plane of the first rectangular coordinate system is parallel to the plane of the target object, and the extending direction of the X axis of the first rectangular coordinate system is parallel to the moving direction of the first camera;
and determining the first offset data according to the first difference value and the second difference value.
According to some embodiments of the invention, the determining the first offset data according to the first position data and the second position data further includes:
acquiring first theoretical position data, wherein the first theoretical position data is used for representing a preset initial position of the first identification point;
performing a difference operation on the first theoretical position data and the first position data under the first rectangular coordinate system, and determining the first offset data;
or,
acquiring second theoretical position data, wherein the second theoretical position data is used for representing a preset initial position of the second identification point;
and under the first rectangular coordinate system, performing a difference operation on the second theoretical position data and the second position data, and determining the first offset data.
According to some embodiments of the present invention, the preset initial position data of the target object is data in a preset second rectangular coordinate system that uses the first identification point or the second identification point as its coordinate origin, and the performing position compensation on the target object according to the first offset data and the preset initial position data of the target object to determine the actual position data of the target object includes:
in the second rectangular coordinate system, performing position compensation on the target object according to the first offset data and preset initial position data of the target object, and determining first position compensation data;
and carrying out coordinate conversion from the second rectangular coordinate system to the first rectangular coordinate system on the first position compensation data, and determining the actual position data of the target object.
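As a minimal numeric sketch of this conversion step (the function name, the rotation-plus-translation form, and all coordinates are illustrative assumptions, not part of the disclosure), the compensated data in the second rectangular coordinate system can be mapped into the first rectangular coordinate system by rotating by the detected offset angle and translating by the detected coordinates of the origin (the first or second identification point):

```python
import math

def second_to_first(point, origin_in_first, angle):
    """Map a point from the second rectangular coordinate system (origin at
    an identification point, rotated by `angle` relative to the first system)
    into the first rectangular coordinate system."""
    x, y = point
    xr = x * math.cos(angle) - y * math.sin(angle)  # rotate about the origin
    yr = x * math.sin(angle) + y * math.cos(angle)
    return origin_in_first[0] + xr, origin_in_first[1] + yr  # then translate
```

With a zero offset angle the conversion reduces to a pure translation by the identification point's detected coordinates.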
According to some embodiments of the invention, the moving according to the third position data and the positioning detection of the target object by the second camera further comprises:
determining fourth position data according to the positioning detection result, wherein the fourth position data is used for representing actual position data of the third identification points;
and carrying out expansion and contraction calculation on the target object according to the fourth position data, and determining expansion and contraction compensation data.
According to some embodiments of the invention, the plurality of third identification points are disposed at edges of the target object, and the performing an expansion and contraction calculation on the target object according to the fourth position data to determine the expansion and contraction compensation data includes:
and carrying out overall expansion and contraction calculation on the target object according to the fourth position data, and determining overall expansion and contraction compensation data.
According to some embodiments of the invention, the target object includes a plurality of sub-units, the plurality of third identification points are disposed at edges of the sub-units, and the performing an expansion and contraction calculation on the target object according to the fourth position data to determine the expansion and contraction compensation data includes:
and carrying out local expansion and contraction calculation on the target object according to the fourth position data, and determining local expansion and contraction compensation data.
According to some embodiments of the invention, the plurality of third identification points are distributed in a rectangular shape to form a positioning rectangle, and the performing an expansion and contraction calculation on the target object according to the fourth position data to determine the expansion and contraction compensation data includes:
determining first side length data and second side length data according to the fourth position data, wherein the first side length data is used for representing length data of a first side and a second side of the positioning rectangle, the second side length data is used for representing length data of a third side and a fourth side of the positioning rectangle, and the first side and the second side are parallel to each other;
and determining the expansion and contraction compensation data according to the first side length data and the second side length data.
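As an illustrative sketch of this side-length-based calculation (the ordering of the corner points, the averaging of parallel sides, and all names are assumptions, not the patented implementation), per-axis expansion-and-contraction ratios can be derived by comparing the measured side lengths of the positioning rectangle with their nominal values:

```python
import math

def side(p, q):
    """Euclidean length of one side of the positioning rectangle."""
    return math.hypot(q[0] - p[0], q[1] - p[1])

def expansion_ratios(corners, nominal_first, nominal_second):
    """corners: detected fourth position data of the four third
    identification points, ordered around the rectangle. Averages each
    pair of parallel sides; a ratio > 1 indicates expansion, < 1
    contraction along that axis."""
    a, b, c, d = corners
    first = (side(a, b) + side(d, c)) / 2    # first and second sides
    second = (side(a, d) + side(b, c)) / 2   # third and fourth sides
    return first / nominal_first, second / nominal_second
```

The resulting ratios can then scale the positions to be processed, which is the role the expansion and contraction compensation data plays downstream.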
In another aspect, an embodiment of the present invention provides a laser processing apparatus, including a control module and a first camera and a second camera each electrically connected to the control module, wherein the control module is configured to execute the visual positioning method described above.
The embodiment of the invention has at least the following beneficial effects:
the first camera, with its larger field of view, captures the first identification point and the second identification point on the target object, which increases the speed of visual positioning; the offset of the target object is determined and compensated according to the position data of the first identification point and the second identification point; and the second camera then performs high-precision positioning, which improves the accuracy of positioning.
Additional aspects and advantages of the invention will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention.
Drawings
The foregoing and/or additional aspects and advantages of the invention will become apparent and may be better understood from the following description of embodiments taken in conjunction with the accompanying drawings in which:
FIG. 1 is a schematic diagram of an identification point in a large field of view and a small field of view in the related art;
FIG. 2 is a schematic block diagram of a laser processing apparatus according to an embodiment of the present invention;
FIG. 3 is a schematic view of a laser processing apparatus according to an embodiment of the present invention;
FIG. 4 is a flowchart showing steps of a visual positioning method according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a target object in a preset initial position under a first rectangular coordinate system according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of a target object in a real position under a first rectangular coordinate system according to an embodiment of the present invention;
FIG. 7 is a second flowchart illustrating a visual positioning method according to an embodiment of the present invention;
FIG. 8 is a schematic plan view of a target object according to an embodiment of the present invention.
Reference numerals:
identification point 010, field of view 020, control module 100, first camera 210, second camera 220, motion mechanism 300, mobile platform 400, target object 500, first identification point 501, second identification point 502, third identification point 503, subunit 504.
Detailed Description
Embodiments of the present invention are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are illustrative only and are not to be construed as limiting the invention.
In the description of the present invention, it should be understood that references to orientation descriptions such as upper, lower, front, rear, left, right, etc. are based on the orientation or positional relationship shown in the drawings, are merely for convenience of description of the present invention and to simplify the description, and do not indicate or imply that the apparatus or elements referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus should not be construed as limiting the present invention.
In the description of the present invention, "a number of" means one or more, "a plurality of" means two or more, and terms such as greater than, less than, and exceeding are understood to exclude the stated number, while "above", "below", and "within" are understood to include it. Where used, the terms "first" and "second" merely distinguish technical features and should not be construed as indicating or implying relative importance, the number of features indicated, or their precedence.
In the description of the present invention, unless explicitly defined otherwise, terms such as "disposed," "mounted," "connected," and the like are to be construed broadly and the specific meaning of the terms in the present invention can be reasonably determined by those skilled in the art in combination with the specific contents of the technical solutions.
A camera is a device for capturing images and is common in industrial equipment that requires visual positioning. A camera's field of view and its recognition accuracy are two properties in tension: a large field of view comes with lower recognition accuracy, while high recognition accuracy comes with a smaller field of view. Referring to fig. 1, fig. 1 (a) shows the identification point 010 under a camera with a large field of view, and fig. 1 (b) shows the identification point 010 under a camera with a small field of view; as the comparison shows, the same identification point 010 appears at different sizes in the fields of view 020 of the two cameras and is recognized with different accuracy. In the related art, most laser devices use a single-camera structure to position the product to be processed; with this structure, a camera with a large field of view lacks positioning accuracy, while a high-precision camera, with its smaller field of view, makes positioning inefficient. In view of these technical problems, this embodiment discloses a visual positioning method and a laser processing apparatus for executing the method, so as to improve the efficiency and accuracy of visual positioning.
Referring to fig. 2, a laser processing apparatus, such as a laser drilling apparatus for laser drilling, includes a control module 100, and a first camera 210 and a second camera 220 electrically connected to the control module 100, wherein the control module 100 is configured to execute the visual positioning method of the present embodiment.
For example, referring to fig. 3, the first camera 210 and the second camera 220 are mounted on a motion mechanism 300, below which a moving platform 400 for placing the product to be processed is disposed; the moving platform 400 can move relative to the motion mechanism 300. The product to be processed may be, for example, a rigid circuit board or a flexible circuit board. The cooperation of the motion mechanism 300 and the moving platform 400 enables the first camera 210 and the second camera 220 to visually locate a product placed on the moving platform 400. To facilitate identification of the identification points on the product, this embodiment constructs a first rectangular coordinate system (X1-O1-Y1) in a first plane, where the first plane is parallel to the plane of the target object 500, the X-axis extending direction of the first rectangular coordinate system is parallel to the moving direction of the first camera 210, and the Y-axis extending direction is parallel to the moving direction of the moving platform 400.
Referring to fig. 4, the visual positioning method disclosed in this embodiment includes steps S100 to S500. It should be noted that the steps are numbered only for ease of examination and understanding, and the numbering should not be construed as limiting the order of execution. The specific content of each step is as follows:
s100, referring to FIGS. 3 and 5, the target object 500 is moved and subjected to positioning detection by the first camera 210, and first position data and second position data are determined, wherein the first position data are used for representing actual position data of a first identification point 501 arranged on the target object 500, the second position data are used for representing actual position data of a second identification point 502 arranged on the target object 500, and the field of view of the first camera 210 is larger than that of the second camera 220;
the present embodiment will be described with reference to a laser processing apparatus for performing laser drilling on a circuit board, and as a circuit board of a target object 500 of the laser processing apparatus, a MARK point for positioning, commonly called MARK point, such as a hole or a bonding pad, is provided on a surface of the circuit board. Before starting production, the worker enters the production data of the target object 500 into the laser processing apparatus, wherein the production data includes position information of the identification points, information of the positions to be processed, and the like. When the production is started, the first camera 210 and the second camera 220 are both at the initial positions, and the worker places the target object 500 at the initial position preset by the laser processing apparatus and starts the production. At this time, the first camera 210 moves to a preset initial position and performs positioning detection on the target object 500, determining first position data and second position data. Because the visual field range of the first camera 210 is larger, the first camera 210 can capture the first identification point 501 and the second identification point 502 in a larger visual field range, so that positioning detection can be performed more quickly, even if the target object 500 generates position offset relative to the preset position, the large visual field range of the first camera 210 can provide a larger fault-tolerant visual field range for capturing the identification point, so that the situation that positioning detection cannot be performed due to the fact that the identification point exceeds the visual field range is avoided, and the efficiency of positioning detection is improved.
S200, determining first offset data according to the first position data and the second position data, wherein the first offset data is used for representing the offset between the actual position of the target object 500 and a preset initial position;
when the first camera 210 detects the first identification point 501 and the second identification point 502, the first position data and the second position data are determined, so that the position offset of the target object 500 relative to the preset initial position can be determined from the first position data and the second position data.
S300, performing position compensation on the target object 500 according to the first offset data and preset initial position data of the target object 500, and determining actual position data of the target object 500;
because of limits on manual or equipment placement precision, the target object 500 may be placed with a position offset, which may be a linear offset, a rotational offset, or a combination of the two. When a position offset of the target object 500 relative to the preset initial position is detected, position compensation is performed on the target object 500 and its actual position data is determined, so the placement of the target object 500 need not be adjusted again, which improves the efficiency of visual positioning.
S400, referring to FIG. 5, determining third position data according to actual position data of the target object 500, wherein the third position data is used for representing theoretical position data of a plurality of third identification points 503 arranged on the target object 500;
as described above, there is a trade-off between a camera's field of view and its recognition accuracy. To achieve high-accuracy positioning, this embodiment uses the second camera 220, which has a smaller field of view but higher recognition accuracy, to perform positioning detection of the third identification points 503 provided on the target object 500. Because the actual position of the target object 500 is shifted from the initial position, the theoretical positions of the plurality of third identification points 503 must be determined from the actual position data of the target object 500 so that the second camera 220 can be moved to capture them.
And S500, moving according to the third position data, and carrying out positioning detection on the target object 500 through the second camera 220 to determine a positioning detection result.
When the theoretical position of a third identification point 503 is determined, the second camera 220 is moved to the corresponding position to identify the third identification point 503 for positioning detection. It should be noted that in this embodiment the "movement" of the first camera 210 and the second camera 220 is relative movement: referring to fig. 3, the first camera 210 and the second camera 220 are disposed on the motion mechanism 300 and the target object 500 on the moving platform 400, and by coordinating the movement of the motion mechanism 300 and the moving platform 400 in the X-axis and Y-axis directions, the two cameras can be moved in both directions relative to the target object 500 so as to detect identification points at different positions on the target object 500. Of course, in some embodiments the first camera 210 and the second camera 220 may instead be disposed on a two-degree-of-freedom motion mechanism that itself moves in the X-axis and Y-axis directions.
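Steps S100 to S500 can be condensed into the following coarse-to-fine sketch (purely illustrative: the function names, the representation of the offset as an angle plus a translation, and the choice of the first identification point as the pivot are assumptions, not part of the disclosure):

```python
import math

def first_offset(theory, actual):
    """S100/S200: from the theoretical and detected positions of the first
    and second identification points, derive the rotational offset and the
    linear offset of the target object."""
    (tx1, ty1), (tx2, ty2) = theory
    (x1, y1), (x2, y2) = actual
    angle = (math.atan2(y2 - y1, x2 - x1)
             - math.atan2(ty2 - ty1, tx2 - tx1))
    return angle, x1 - tx1, y1 - ty1

def third_point_targets(preset_points, pivot, angle, dx, dy):
    """S300/S400: apply the offset to the preset positions of the third
    identification points, giving the theoretical positions the second
    camera is moved to in S500."""
    c, s = math.cos(angle), math.sin(angle)
    out = []
    for x, y in preset_points:
        rx, ry = x - pivot[0], y - pivot[1]
        out.append((pivot[0] + rx * c - ry * s + dx,
                    pivot[1] + rx * s + ry * c + dy))
    return out
```

The second camera would then visit each returned position and perform the fine positioning detection of S500.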
In step S100, moving and positioning the target object 500 by the first camera 210, determining the first position data and the second position data includes:
s110, acquiring first theoretical position data and second theoretical position data, wherein the first theoretical position data is used for representing a preset initial position of a first identification point 501, and the second theoretical position data is used for representing a preset initial position of a second identification point 502;
as described above, production data including the preset initial positions of the first identification point 501 and the second identification point 502 must be entered before production starts. The first theoretical position data and the second theoretical position data may be obtained and entered by teaching: in the teaching mode, the target object 500 is placed on the laser processing apparatus, and the first camera 210 is moved, manually or automatically, to the position of the first identification point 501 or the second identification point 502 on the target object 500 to determine the first theoretical position data or the second theoretical position data.
S120, moving and carrying out positioning detection on the target object 500 through the first camera 210 according to the first theoretical position data to determine first position data;
to determine whether the position of the target object 500 matches the initial position, the first camera 210 is moved to the preset initial position of the first identification point 501 according to the first theoretical position data. If the actual position of the target object 500 matches the initial position, the first camera 210 captures the first identification point 501 at that preset initial position, confirming that the actual position of the first identification point 501 is the same as the preset initial position. If the actual position of the target object 500 is offset from the initial position within an acceptable range, the first camera 210, when moved to the preset initial position of the first identification point 501, can still capture the first identification point 501 within its field of view and thereby determine its actual position.
And S130, moving and carrying out positioning detection on the target object 500 through the first camera 210 according to the second theoretical position data to determine the second position data.
On the same principle as the detection of the first identification point 501, this embodiment captures the second identification point 502 with the first camera 210 based on the second theoretical position data, thereby determining the second position data.
In step S200, determining first offset data according to the first position data and the second position data includes:
S210, under a preset first rectangular coordinate system, performing a difference operation on the first position data and the second position data to determine a first difference value and a second difference value, wherein the first difference value represents the coordinate difference on the X axis of the first rectangular coordinate system and the second difference value represents the coordinate difference on the Y axis; as described above, the plane of the first rectangular coordinate system is parallel to the plane of the target object 500, and the extending direction of its X axis is parallel to the moving direction of the first camera 210;
s220, determining first offset data according to the first difference value and the second difference value.
Referring to fig. 5 and 6, fig. 5 shows the target object 500 at the preset initial position, and fig. 6 shows the target object 500 rotationally offset from the preset initial position. For ease of understanding, this embodiment takes the simplest case as an example: the target object 500 is assumed to be a rectangular circuit board with regular edges, the first identification point 501 and the second identification point 502 are arranged along an edge of the target object 500, and the line between the first identification point 501 and the second identification point 502 (hereinafter simply the "first line") is parallel to that edge. In fig. 5, the first line is parallel to the X-axis of the first rectangular coordinate system, whereas in fig. 6 the first line forms an included angle a with the X-axis. In the first rectangular coordinate system, the first position data is (x1, y1) and the second position data is (x2, y2); performing a difference operation on the first position data and the second position data yields the components of the first line on the X axis and the Y axis, from which the included angle a, i.e. the first offset data, is determined by the trigonometric function relation.
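The determination of the included angle a from the two measured points can be sketched in Python. This is an illustrative sketch only; the function name and the tuple representation of the position data are assumptions, not part of the patent:

```python
import math

def rotation_offset(first_pos, second_pos):
    """Included angle a between the first line (first identification
    point -> second identification point) and the X axis of the first
    rectangular coordinate system, in radians."""
    dx = second_pos[0] - first_pos[0]  # first difference value (X axis)
    dy = second_pos[1] - first_pos[1]  # second difference value (Y axis)
    # atan2 recovers the angle from the two components of the first line
    return math.atan2(dy, dx)

# With the first line parallel to the X axis (fig. 5), the offset is zero;
# a rotated board (fig. 6) yields a nonzero included angle a.
a = rotation_offset((10.0, 5.0), (110.0, 5.0))
```

Using `atan2` rather than `atan(dy / dx)` keeps the sign of the angle correct in all quadrants and avoids a division by zero when the first line happens to be parallel to the Y axis.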
In the case that the actual position of the target object 500 is linearly shifted with respect to the preset initial position, in step S200, the determining the first offset data according to the first position data and the second position data further includes:
s230, acquiring first theoretical position data, wherein the first theoretical position data is used for representing a preset initial position of a first identification point 501;
s240, performing difference operation on the first theoretical position data and the first position data under the first rectangular coordinate system to determine first offset data;
or,
s250, acquiring second theoretical position data, wherein the second theoretical position data is used for representing a preset initial position of a second identification point 502;
and S260, performing difference operation on the second theoretical position data and the second position data under the first rectangular coordinate system, and determining first offset data.
For example, referring to fig. 5 and 6, the first theoretical position data is (x10, y10) and the first position data is (x1, y1). Since the position of the target object 500 is linearly shifted, the offsets of the target object 500 in the X-axis and Y-axis directions can be determined by performing a difference operation on the first theoretical position data and the first position data. Similarly, the offsets of the target object 500 in the X-axis and Y-axis directions can be determined from the second theoretical position data and the second position data. Thus, the first offset data may be determined in either of these two ways.
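For the purely linear case, the difference operation described above reduces to a coordinate-wise subtraction. A minimal sketch, with names assumed rather than taken from the patent:

```python
def linear_offset(theoretical_pos, actual_pos):
    """Linear offset of the target object along the X and Y axes, from one
    identification point's preset (theoretical) position and its measured
    (actual) position in the first rectangular coordinate system."""
    return (actual_pos[0] - theoretical_pos[0],
            actual_pos[1] - theoretical_pos[1])

# Offset computed from the first identification point; for a pure linear
# shift, the second identification point would give the same result.
dx, dy = linear_offset((100.0, 50.0), (100.4, 49.7))
```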
Referring to fig. 6, to facilitate determining the actual position of the target object 500, this embodiment constructs a second rectangular coordinate system (i.e. X2-O2-Y2) on the target object 500, where the preset initial position data of the target object 500 is data in this preset second rectangular coordinate system and the second rectangular coordinate system takes the first identification point 501 or the second identification point 502 as the origin of coordinates. In step S300, performing position compensation on the target object 500 according to the first offset data and the preset initial position data of the target object 500 to determine the actual position data of the target object 500 includes:
s310, performing position compensation on the target object 500 according to the first offset data and preset initial position data of the target object 500 under a second rectangular coordinate system, and determining first position compensation data;
s320, performing coordinate conversion from the second rectangular coordinate system to the first rectangular coordinate system on the first position compensation data, and determining the actual position data of the target object 500.
Since the second rectangular coordinate system is established on the target object 500, even if the target object 500 is linearly or rotationally offset in the first coordinate system, in the second coordinate system, the distance between each point on the target object 500 and the origin of coordinates (i.e., the first identification point 501 or the second identification point 502) is constant, for example, the distance between the point a and the point O2 and the included angle b in fig. 6 are constant. When the target object 500 rotates and shifts, the points on the target object 500 also rotate relative to the origin of coordinates of the second coordinate system, and the rotation angle is an included angle a. Based on the included angle a and the original coordinates (i.e., the preset initial position data) of each point on the target object 500, a new coordinate (i.e., the first position compensation data) after the rotational offset occurs can be determined. The first position compensation data may be converted from the second rectangular coordinate system to the first rectangular coordinate system according to the relationship between the first rectangular coordinate system and the second rectangular coordinate system, thereby determining the actual position data of the target object 500.
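Steps S310 and S320 amount to a rotation in the second coordinate system followed by a translation into the first coordinate system. A sketch under stated assumptions (the chosen identification point is the origin O2, and apart from the measured included angle a the axes of the two systems are parallel; names are illustrative):

```python
import math

def actual_position(preset_point, angle_a, origin_in_first):
    """Actual position, in the first rectangular coordinate system, of a
    point whose preset initial position is given in the second system.

    preset_point:    (x, y) preset initial position data in the second
                     system, whose origin is the identification point
    angle_a:         measured rotational offset a, in radians
    origin_in_first: measured position of that identification point in
                     the first rectangular coordinate system
    """
    x, y = preset_point
    # S310: rotate about O2 by the included angle a; distances to O2
    # (such as |O2-A|) and the included angle b are unchanged
    xr = x * math.cos(angle_a) - y * math.sin(angle_a)
    yr = x * math.sin(angle_a) + y * math.cos(angle_a)
    # S320: translate into the first rectangular coordinate system
    return (xr + origin_in_first[0], yr + origin_in_first[1])
```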
Referring to fig. 7, after step S500 (moving according to the third position data and performing positioning detection on the target object 500 by the second camera 220), the method further includes:
s600, determining fourth position data according to the positioning detection result, wherein the fourth position data is used for representing actual position data of a plurality of third identification points 503;
and S700, performing expansion and contraction calculation on the target object 500 according to the fourth position data, and determining expansion and contraction compensation data.
Owing to the material characteristics of circuit boards, a circuit board may expand or contract to different degrees depending on its processing materials, shifting the positions of points on the board; the target object 500 therefore requires expansion and contraction compensation. The second camera 220 is moved to the position corresponding to the third position data and captures the third identification points 503 of the target object 500 for positioning detection. When the actual position of a third identification point 503 coincides with its theoretical position, the fourth position data is identical to the third position data; when the actual position of a third identification point 503 deviates slightly due to factors such as expansion and contraction, the fourth position data records that deviation so that the expansion and contraction of the target object 500 can be calculated.
In some embodiments, as shown in fig. 5, to perform overall positioning and expansion and contraction compensation of the target object 500, a plurality of third identification points 503 are provided at the edge of the target object 500, for example at the board-edge positions of the circuit board. In step S700, performing expansion and contraction calculation on the target object 500 according to the fourth position data to determine expansion and contraction compensation data includes:
and S710, carrying out overall expansion and contraction calculation on the target object 500 according to the fourth position data, and determining overall expansion and contraction compensation data.
In other embodiments, as shown in fig. 6, the target object 500 includes a plurality of sub-units 504; because of a larger area, softer material, or a greater number of sub-units 504, the target object 500 may expand or contract to different degrees at different positions. To improve positioning accuracy, a plurality of third identification points 503 are disposed at the edges of the sub-units 504, and performing expansion and contraction calculation on the target object 500 according to the fourth position data to determine expansion and contraction compensation data includes:
and S720, carrying out local expansion and contraction calculation on the target object 500 according to the fourth position data, and determining local expansion and contraction compensation data.
For example, referring to fig. 8, the plurality of third identification points 503 are distributed in a rectangular shape to form a positioning rectangle. In step S700, performing expansion and contraction calculation on the target object 500 according to the fourth position data to determine expansion and contraction compensation data includes:
s701, determining first side length data and second side length data according to fourth position data, wherein the first side length data is used for representing length data of a first side and a second side of the positioning rectangle, the second side length data is used for representing length data of a third side and a fourth side of the positioning rectangle, and the first side and the second side are parallel to each other;
s702, determining expansion and contraction compensation data according to the first side length data and the second side length data.
Specifically, referring to fig. 8, for the overall expansion and contraction compensation, the number of third identification points 503 is four, and the four points are disposed at the four corners of the target object 500, forming the four sides of a positioning rectangle, i.e. L11, L12, L13 and L14. From the position data of the four third identification points 503, the side length data of the four sides of the positioning rectangle can be determined, namely the first side length data (one pair of mutually parallel sides) and the second side length data (the other pair). The expansion or contraction of the target object 500 in each direction is then determined from the lengths of the two mutually parallel sides, for example by averaging the two lengths within each pair, thereby determining the expansion and contraction compensation data. Each point on the target object 500 is then positioned according to the expansion and contraction compensation data, for example by updating the drilling positions accordingly, which improves the positioning accuracy.
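The side-length averaging described above can be sketched as follows; the corner ordering and the representation of the expansion and contraction compensation data as scale factors are illustrative assumptions, not the patent's notation:

```python
def expansion_factors(corners, nominal_width, nominal_height):
    """Overall expansion/contraction factors of the positioning rectangle.

    corners: measured fourth position data of the four third
             identification points, ordered top-left, top-right,
             bottom-right, bottom-left
    nominal_width, nominal_height: designed side lengths
    """
    tl, tr, br, bl = corners

    def dist(p, q):
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

    # mean of the two parallel sides in each direction
    # (e.g. L11/L13 horizontally and L12/L14 vertically)
    width = (dist(tl, tr) + dist(bl, br)) / 2
    height = (dist(tl, bl) + dist(tr, br)) / 2
    return width / nominal_width, height / nominal_height

# A board measured 1% longer along X than designed:
sx, sy = expansion_factors([(0, 0), (101, 0), (101, 50), (0, 50)], 100.0, 50.0)
# sx is about 1.01, sy about 1.0
```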
The principle of the local expansion and contraction compensation is similar to that of the overall expansion and contraction compensation, except that the local compensation targets each sub-unit 504 in the target object 500: the third identification points 503 are distributed at the corners of each sub-unit 504, forming the four sides of a positioning rectangle, i.e. L21, L22, L23 and L24. In this way, adaptive local compensation can be performed according to the expansion and contraction of each sub-unit 504, making the visual positioning of each sub-unit 504 more accurate.
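Applying the local compensation to, for example, a drilling position inside one sub-unit could then look like the following. This is a hypothetical helper built on per-sub-unit scale factors, not the patent's own notation:

```python
def compensate_position(point, rect_corner, sx, sy):
    """Scale a designed position about the corner of its sub-unit's
    positioning rectangle, using that sub-unit's local expansion and
    contraction factors sx (along X) and sy (along Y)."""
    return (rect_corner[0] + (point[0] - rect_corner[0]) * sx,
            rect_corner[1] + (point[1] - rect_corner[1]) * sy)

# A drill hole designed at (10, 10) in a sub-unit that expanded 1% along X:
hole = compensate_position((10.0, 10.0), (0.0, 0.0), 1.01, 1.0)
```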
In some embodiments, the plurality of third identification points 503 of the target object may be distributed at four corners of the whole board and at four corners of each subunit 504 at the same time, so that the overall expansion-contraction compensation and the local expansion-contraction compensation can be realized, the error of a single compensation mode is reduced, and the positioning accuracy is further improved.
The embodiments of the present invention have been described in detail with reference to the accompanying drawings, but the present invention is not limited to the above embodiments, and various changes can be made within the knowledge of one of ordinary skill in the art without departing from the spirit of the present invention.
Claims (10)
1. A visual positioning method applied to a laser processing device, the laser processing device comprising a first camera and a second camera, the visual positioning method comprising:
moving and carrying out positioning detection on a target object through the first camera, and determining first position data and second position data, wherein the first position data is used for representing actual position data of a first identification point arranged on the target object, the second position data is used for representing actual position data of a second identification point arranged on the target object, and the visual field range of the first camera is larger than that of the second camera;
determining first offset data according to the first position data and the second position data, wherein the first offset data is used for representing the offset between the actual position of the target object and a preset initial position;
performing position compensation on the target object according to the first offset data and preset initial position data of the target object, and determining actual position data of the target object;
determining third position data according to the actual position data of the target object, wherein the third position data is used for representing theoretical position data of a plurality of third identification points arranged on the target object;
and moving according to the third position data, and carrying out positioning detection on the target object through the second camera to determine a positioning detection result.
2. The visual positioning method according to claim 1, wherein the moving and performing positioning detection on the target object by the first camera to determine the first position data and the second position data comprises:
acquiring first theoretical position data and second theoretical position data, wherein the first theoretical position data is used for representing a preset initial position of the first identification point, and the second theoretical position data is used for representing a preset initial position of the second identification point;
according to the first theoretical position data, moving and carrying out positioning detection on the target object through the first camera to determine the first position data;
and according to the second theoretical position data, moving and carrying out positioning detection on the target object through the first camera, and determining the second position data.
3. The visual positioning method of claim 1, wherein the determining first offset data from the first position data and the second position data comprises:
performing difference operation on the first position data and the second position data under a preset first rectangular coordinate system to determine a first difference value and a second difference value, wherein the first difference value is used for representing a coordinate difference value on an X axis of the first rectangular coordinate system, the second difference value is used for representing a coordinate difference value on a Y axis of the first rectangular coordinate system, a plane of the first rectangular coordinate system is parallel to a plane of the target object, and an extending direction of the X axis of the first rectangular coordinate system is parallel to a moving direction of the first camera;
and determining the first offset data according to the first difference value and the second difference value.
4. The visual positioning method according to claim 3, wherein the determining first offset data according to the first position data and the second position data further comprises:
acquiring first theoretical position data, wherein the first theoretical position data is used for representing a preset initial position of the first identification point;
performing difference operation on the first theoretical position data and the first position data under the first rectangular coordinate system, and determining the first offset data;
or,
acquiring second theoretical position data, wherein the second theoretical position data is used for representing a preset initial position of the second identification point;
and under the first rectangular coordinate system, carrying out difference operation on the second theoretical position data and the second position data, and determining the first offset data.
5. A visual positioning method according to claim 3, wherein the preset initial position data of the target object is data in a preset second rectangular coordinate system, the second rectangular coordinate system uses the first identification point or the second identification point as a coordinate origin, the position compensation is performed on the target object according to the first offset data and the preset initial position data of the target object, and determining the actual position data of the target object includes:
in the second rectangular coordinate system, performing position compensation on the target object according to the first offset data and preset initial position data of the target object, and determining first position compensation data;
and carrying out coordinate conversion from the second rectangular coordinate system to the first rectangular coordinate system on the first position compensation data, and determining the actual position data of the target object.
6. The visual positioning method according to any one of claims 1 to 5, characterized in that the moving according to the third position data and the positioning detection of the target object by the second camera are followed by:
determining fourth position data according to the positioning detection result, wherein the fourth position data is used for representing actual position data of the third identification points;
and carrying out expansion and contraction calculation on the target object according to the fourth position data, and determining expansion and contraction compensation data.
7. The visual positioning method according to claim 6, wherein the plurality of third identification points are disposed at edges of the target object, and the performing expansion and contraction calculation on the target object according to the fourth position data and determining expansion and contraction compensation data comprises:
and carrying out overall expansion and contraction calculation on the target object according to the fourth position data, and determining overall expansion and contraction compensation data.
8. The visual positioning method according to claim 6, wherein the target object comprises a plurality of sub-units, the plurality of third identification points being disposed at edges of the sub-units, and the performing expansion and contraction calculation on the target object according to the fourth position data and determining expansion and contraction compensation data comprises:
and carrying out local expansion and contraction calculation on the target object according to the fourth position data, and determining local expansion and contraction compensation data.
9. The visual positioning method according to claim 7 or 8, wherein the plurality of third identification points are distributed in a rectangular shape to form a positioning rectangle, and the performing expansion and contraction calculation on the target object according to the fourth position data and determining expansion and contraction compensation data comprises:
determining first side length data and second side length data according to the fourth position data, wherein the first side length data is used for representing length data of a first side and a second side of the positioning rectangle, the second side length data is used for representing length data of a third side and a fourth side of the positioning rectangle, and the first side and the second side are parallel to each other;
and determining the expansion and contraction compensation data according to the first side length data and the second side length data.
10. A laser processing apparatus comprising a control module and a first camera and a second camera electrically connected to the control module, respectively, characterized in that the control module is adapted to perform the visual positioning method according to any one of claims 1 to 9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311384668.6A CN117102661B (en) | 2023-10-25 | 2023-10-25 | Visual positioning method and laser processing equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117102661A true CN117102661A (en) | 2023-11-24 |
CN117102661B CN117102661B (en) | 2024-01-09 |
Family
ID=88798795
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311384668.6A Active CN117102661B (en) | 2023-10-25 | 2023-10-25 | Visual positioning method and laser processing equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117102661B (en) |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108662974A (en) * | 2017-03-28 | 2018-10-16 | 深圳市腾盛工业设备有限公司 | A kind of dispensing localization method and device based on double camera |
CN108694729A (en) * | 2017-04-07 | 2018-10-23 | 深圳市腾盛工业设备有限公司 | Localization method, unit based on image detection and storage medium |
CN109283804A (en) * | 2018-11-14 | 2019-01-29 | 江苏友迪激光科技有限公司 | A method of improving direct write exposure machine exposure accuracy and harmomegathus measurement accuracy |
CN110293559A (en) * | 2019-05-30 | 2019-10-01 | 上海理工大学 | A kind of installation method of automatic identification positioning alignment |
CN115143887A (en) * | 2022-09-05 | 2022-10-04 | 常州市建筑科学研究院集团股份有限公司 | Method for correcting measurement result of visual monitoring equipment and visual monitoring system |
CN115187658A (en) * | 2022-08-29 | 2022-10-14 | 合肥埃科光电科技股份有限公司 | Multi-camera visual large target positioning method, system and equipment |
CN115684019A (en) * | 2022-11-30 | 2023-02-03 | 合肥欣奕华智能机器股份有限公司 | Alignment device, calibration and alignment method of display panel detection equipment |
CN116153824A (en) * | 2023-04-20 | 2023-05-23 | 沈阳和研科技股份有限公司 | Discharging precision compensation method based on visual algorithm |
CN116858092A (en) * | 2023-06-25 | 2023-10-10 | 苏州维嘉科技股份有限公司 | Method for detecting vision system deviation and circuit board processing equipment |
Also Published As
Publication number | Publication date |
---|---|
CN117102661B (en) | 2024-01-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
KR100857257B1 (en) | Screen printer and image sensor position alignment method | |
US20060188160A1 (en) | Device, method, and computer-readable medium for detecting changes in objects in images and their features | |
KR900002509B1 (en) | Apparatus for recognizing three demensional object | |
US10535157B2 (en) | Positioning and measuring system based on image scale | |
CN111263142A (en) | Method, device, equipment and medium for testing optical anti-shake of camera module | |
CN110355758B (en) | Machine following method and equipment and following robot system | |
US11826919B2 (en) | Work coordinate generation device | |
CN117102661B (en) | Visual positioning method and laser processing equipment | |
CN114383510A (en) | Optical sensing system and optical navigation system | |
JP3511551B2 (en) | Robot arm state detection method and detection system | |
KR101626374B1 (en) | Precision position alignment technique using edge based corner estimation | |
JP4890904B2 (en) | Component position detection method and apparatus | |
CN112651261A (en) | Calculation method for conversion relation between high-precision 2D camera coordinate system and mechanical coordinate system | |
JP2000211106A (en) | Screen mask aligning method in screen printing | |
JPH04269194A (en) | Plane measuring method | |
JP3725993B2 (en) | Electronic component mounting circuit board inspection method and apparatus | |
JP3334568B2 (en) | Position detection device | |
JP2000258121A (en) | Master substrate for calibrating a plurality of cameras and calibration method for image recognition camera | |
JPH0831715B2 (en) | Position correction method for leaded parts | |
JP3340599B2 (en) | Plane estimation method | |
JPH0680954B2 (en) | Device for mounting IC on printed circuit board | |
Kawanishi et al. | Quick 3D object detection and localization by dynamic active search with multiple active cameras | |
CN117969555B (en) | Ink drop printing leakage point detection method, device, equipment and medium | |
JP2000321024A (en) | Position detecting method utilizing image recognition | |
US20230011093A1 (en) | Adjustment support system and adjustment support method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||