CN114332237A - Method for calculating conversion relation between camera coordinate system and laser coordinate system



Publication number
CN114332237A
Authority
CN
China
Prior art keywords
calibration
image
point
target
row number
Prior art date
Legal status
Pending
Application number
CN202111543113.2A
Other languages
Chinese (zh)
Inventor
郭志红
安登奎
戴志强
姚毅
杨艺
Current Assignee
Luster LightTech Co Ltd
Original Assignee
Luster LightTech Co Ltd
Priority date
Filing date
Publication date
Application filed by Luster LightTech Co Ltd
Priority to CN202111543113.2A
Publication of CN114332237A

Abstract

The application provides a method for calculating the conversion relation between a camera coordinate system and a laser coordinate system, comprising the following steps: acquiring a calibration sheet image, wherein the calibration sheet image comprises an image of a rotating mirror image calibration graph and images of calibration points; determining rotating mirror image parameters according to the image of the rotating mirror image calibration graph; correcting the calibration sheet image according to the rotating mirror image parameters to generate a corrected calibration sheet image; extracting target calibration points according to the corrected calibration sheet image; and acquiring the image coordinates and laser coordinates of the target calibration points to calculate the coordinate system conversion relation. The rotating mirror image parameters can be determined from the image of the rotating mirror image calibration graph acquired by the camera, and the calibration sheet image is corrected according to these parameters so that the corrected calibration sheet image fully corresponds to the calibration sheet on which the laser marked the calibration points. The image coordinates of the calibration points corresponding to the laser coordinates can therefore be extracted accurately from the calibration sheet image, improving the calibration accuracy of the camera.

Description

Method for calculating conversion relation between camera coordinate system and laser coordinate system
Technical Field
The application relates to the technical field of industrial vision, in particular to a method for calculating a conversion relation between a camera coordinate system and a laser coordinate system.
Background
In the field of industrial vision, camera calibration is one of the important prerequisites for processes such as detection, measurement and assembly. By calibrating the camera, its internal and external parameters can be calculated and the relation between the camera coordinate system and the world coordinate system established, so that the size, defects, position and the like of a product can be measured and detected, enabling automated production. In the subdivided fields of cutting, welding, marking and the like, the laser is a widely used device. A laser emits laser beams through a laser beam generating device, and by adjusting the emitting position of the beams, the laser beams can be moved within the working range to complete tasks such as cutting, welding and marking.
To ensure that the camera can accurately guide the laser in high-precision work, the coordinate system conversion relation between the laser and the camera needs to be established efficiently, quickly and accurately, thereby realizing camera calibration. Combined with the convenience of laser marking, a commonly used calibration method in industry at present is to place a calibration sheet in the field of view of the camera, fix the calibration sheet, mark calibration points on it with the laser, and calibrate the camera according to the calibration points.
However, a problem arises when the calibration points in the calibration sheet are extracted: because cameras are installed at different positions and angles, the calibration sheet images acquired by cameras at different positions differ for the same calibration sheet. If the calibration sheet image acquired by the camera is mirrored or rotated relative to the calibration sheet on which the laser printed the calibration points, the image coordinates of the calibration points corresponding to the laser coordinates cannot be extracted accurately from the calibration sheet image, which reduces the calibration accuracy of the camera.
Disclosure of Invention
The application provides a method for calculating a conversion relation between a camera coordinate system and a laser coordinate system, which aims to solve the problem that the image coordinates of a calibration point corresponding to laser coordinates cannot be accurately extracted according to a calibration sheet image in the conventional camera calibration method.
The application provides a method for calculating a conversion relation between a camera coordinate system and a laser coordinate system, which comprises the following steps:
and acquiring a calibration sheet image, wherein the calibration sheet image comprises an image of a rotating mirror image calibration graph and an image of a calibration point.
And determining the parameters of the rotating mirror image according to the image of the rotating mirror image calibration graph.
Correcting the calibration sheet image according to the rotating mirror image parameters to generate a corrected calibration sheet image;
and extracting a target calibration point according to the corrected calibration sheet image.
And acquiring the image coordinate and the laser coordinate of the target calibration point.
And calculating a coordinate system conversion relation according to the image coordinate and the laser coordinate.
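The last step, calculating the coordinate system conversion relation from corresponding image and laser coordinates, can be sketched as follows. A 2-D affine model and the helper names are assumptions for illustration; the application does not fix a particular transform model.

```python
# Sketch: recovering a 2-D affine transform that maps image coordinates to
# laser coordinates from three non-collinear calibration-point pairs.
# The affine model is an illustrative assumption.

def solve3(a, b):
    """Solve a 3x3 linear system a*x = b by Gauss-Jordan elimination."""
    m = [row[:] + [v] for row, v in zip(a, b)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(m[r][i]))
        m[i], m[p] = m[p], m[i]
        for r in range(3):
            if r != i:
                f = m[r][i] / m[i][i]
                m[r] = [mr - f * mi for mr, mi in zip(m[r], m[i])]
    return [m[i][3] / m[i][i] for i in range(3)]

def fit_affine(img_pts, laser_pts):
    """Return (a, b, tx, c, d, ty) with laser = (a*x + b*y + tx, c*x + d*y + ty)."""
    A = [[x, y, 1.0] for x, y in img_pts]
    a_, b_, tx = solve3(A, [u for u, _ in laser_pts])
    c_, d_, ty = solve3(A, [v for _, v in laser_pts])
    return a_, b_, tx, c_, d_, ty

# A pure translation: laser = image + (10, -5)
img = [(0, 0), (1, 0), (0, 1)]
laser = [(10, -5), (11, -5), (10, -4)]
a, b, tx, c, d, ty = fit_affine(img, laser)
```

With more than three point pairs a least-squares fit would be used instead; three exact correspondences keep the sketch short.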
In a possible implementation manner, the rotating mirror image calibration graph is L-shaped, and the step of determining the rotating mirror image parameters according to the image of the rotating mirror image calibration graph includes:
And acquiring a line segment vector set in the calibration sheet image.
And extracting L-side features according to the line segment vector set, wherein the L-side features comprise: a long-side vector and a short-side vector.
And judging whether the calibration sheet image has a mirror image according to the long-side vector and the short-side vector.
And if the calibration sheet image does not have a mirror image, calculating image rotation parameters according to the long-side vector.
In a possible implementation manner, before the step of calculating an image rotation parameter according to the long-side vector, the method further includes:
And correcting the long-side vector to obtain an accurate long-side vector.
And calculating the image rotation parameter according to the accurate long-side vector.
In a possible implementation manner, after the step of judging whether the calibration sheet image has a mirror image according to the long-side vector and the short-side vector, the method further includes:
And if the calibration sheet image has a mirror image, calculating mirror image parameters according to the long-side vector and the short-side vector.
And generating a mirror image long-side vector according to the mirror image parameters, and calculating image rotation parameters according to the mirror image long-side vector.
In a possible implementation manner, before the step of calculating an image rotation parameter according to the mirror image long-side vector, the method further includes:
And correcting the mirror image long-side vector to obtain an accurate mirror image long-side vector.
And calculating the image rotation parameter according to the accurate mirror image long-side vector.
In a possible implementation manner, the target calibration point is circular, and the step of extracting the target calibration point according to the calibration sheet image includes:
And acquiring a gray-level adaptive threshold value of the corrected calibration sheet image.
And extracting a plurality of regions of the calibration sheet image according to the gray-level adaptive threshold value, wherein each of the regions has the same color.
And calculating the circularity value of the region.
And acquiring a target area according to the roundness value, wherein the target area is the area with the highest roundness value.
And acquiring a target color according to the target area, wherein the target color is the color of the target area.
And extracting the target calibration point according to the target color.
In a possible implementation manner, the step of extracting the target calibration point according to the target color includes:
obtaining a target color region set according to the target color, wherein the target color region set comprises regions with the target color.
And acquiring the roundness value of each region in the target color region set to generate a target roundness value set.
And calculating a roundness value threshold range according to the target roundness value set.
And acquiring a target color area meeting the roundness value threshold range as the target calibration point according to the roundness value threshold range and the target color area set.
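A minimal sketch of the roundness screening above, assuming the common circularity definition 4*pi*area/perimeter^2 (the application does not give an explicit formula):

```python
import math

# Roundness (circularity) score used to pick circular calibration points.
# The 4*pi*A/P**2 definition is a common convention, assumed here.

def circularity(area, perimeter):
    """1.0 for a perfect circle, lower for other shapes."""
    return 4.0 * math.pi * area / (perimeter ** 2)

# Perfect circle of radius r: area = pi*r^2, perimeter = 2*pi*r
r = 5.0
c_circle = circularity(math.pi * r * r, 2.0 * math.pi * r)
# Unit square: area = 1, perimeter = 4 -> pi/4
c_square = circularity(1.0, 4.0)
```

Regions whose score falls inside a threshold range derived from the target roundness set would then be kept as target calibration points.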
In a possible implementation manner, the step of extracting the target calibration points according to the corrected calibration sheet image further includes:
And acquiring the number of the target calibration points and a theoretical number.
And if the number of the target calibration points is smaller than the theoretical number, constructing a target calibration point matrix according to the target calibration points.
And filling the target calibration point matrix to generate a filled calibration point matrix.
And acquiring the image coordinates of the calibration points in the filled calibration point matrix.
In a possible implementation manner, the step of constructing a target calibration point matrix according to the target calibration points includes:
and acquiring the minimum coordinate value of the X axis, the minimum coordinate value of the Y axis, the minimum distance of the X axis, the minimum distance of the Y axis, the theoretical row number and the theoretical column number of the target calibration point.
And constructing a calibration point matrix with the theoretical row number as the row number and the theoretical column number as the column number by taking the X-axis minimum coordinate value and the Y-axis minimum coordinate value as starting points, taking the X-axis minimum distance as an X-axis distance and taking the Y-axis minimum distance as a Y-axis distance.
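The matrix construction above can be sketched as follows; variable names are illustrative:

```python
# Build a rows x cols grid of theoretical calibration-point positions,
# starting from the minimum X/Y coordinates and stepping by the minimum
# X/Y spacings, as described in the text.

def build_point_matrix(x_min, y_min, dx, dy, rows, cols):
    return [[(x_min + c * dx, y_min + r * dy) for c in range(cols)]
            for r in range(rows)]

grid = build_point_matrix(x_min=10.0, y_min=20.0, dx=5.0, dy=5.0,
                          rows=3, cols=4)
# grid[0][0] is the starting point (10.0, 20.0)
```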
In a possible implementation manner, the step of filling the target calibration point matrix to generate a filled calibration point matrix includes:
And sequentially traversing each point location of the target calibration point matrix, and judging whether a calibration point exists at the point location.
And if no calibration point exists at the point location, searching whether a to-be-filled calibration point exists at the point location.
And if a to-be-filled calibration point exists at the point location, filling it to generate a filled calibration point.
And generating a filled calibration point matrix according to the target calibration points and the filled calibration points.
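The traversal-and-fill steps above can be sketched as follows. The tolerance-based matching rule is an assumption; the exact search criterion for to-be-filled points is not spelled out in this passage.

```python
# Each theoretical grid slot is checked against the detected points; a
# detected point within a tolerance claims the slot, otherwise the slot is
# filled with the theoretical position.

def fill_matrix(grid, detected, tol=2.0):
    filled = []
    for row in grid:
        out_row = []
        for gx, gy in row:
            match = next(((px, py) for px, py in detected
                          if abs(px - gx) <= tol and abs(py - gy) <= tol),
                         None)
            out_row.append(match if match is not None else (gx, gy))
        filled.append(out_row)
    return filled

grid = [[(0.0, 0.0), (5.0, 0.0)], [(0.0, 5.0), (5.0, 5.0)]]
detected = [(0.3, -0.2), (5.1, 5.2)]          # two of four points found
filled = fill_matrix(grid, detected)
# missing slots keep their theoretical coordinates
```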
In a possible implementation manner, after the step of generating a filled calibration point matrix according to the target calibration points and the filled calibration points, the method further includes:
And acquiring a first row number and a first column number, wherein the first row number is the row number of the filled calibration point matrix, and the first column number is the column number of the filled calibration point matrix.
And judging whether the first row number is equal to the theoretical row number and whether the first column number is equal to the theoretical column number.
And if the first row number is not equal to the theoretical row number and/or the first column number is not equal to the theoretical column number, acquiring a first row-number difference value and a first column-number difference value, wherein the first row-number difference value is the difference between the first row number and the theoretical row number, and the first column-number difference value is the difference between the first column number and the theoretical column number.
And determining a reverse filling starting point according to the first row-number difference value and the first column-number difference value.
And constructing a reverse-filled calibration point matrix according to the reverse filling starting point.
In a possible implementation manner, the step of constructing a reverse-filled calibration point matrix according to the reverse filling starting point includes:
And acquiring a second row number and a second column number, wherein the second row number is the row number of the reverse-filled calibration point matrix, and the second column number is the column number of the reverse-filled calibration point matrix.
And judging whether the second row number is equal to the theoretical row number and whether the second column number is equal to the theoretical column number.
And if the second row number is not equal to the theoretical row number and/or the second column number is not equal to the theoretical column number, acquiring a second row-number difference value and a second column-number difference value, wherein the second row-number difference value is the difference between the second row number and the theoretical row number, and the second column-number difference value is the difference between the second column number and the theoretical column number.
And determining a reconstructed X-axis minimum distance and a reconstructed Y-axis minimum distance according to the second row-number difference value and the second column-number difference value.
And constructing a reconstructed filled calibration point matrix according to the reconstructed X-axis minimum distance and the reconstructed Y-axis minimum distance.
According to the above technical solutions, the application provides a method for calculating the conversion relation between a camera coordinate system and a laser coordinate system, comprising: acquiring a calibration sheet image, wherein the calibration sheet image comprises an image of a rotating mirror image calibration graph and images of calibration points; determining rotating mirror image parameters according to the image of the rotating mirror image calibration graph; correcting the calibration sheet image according to the rotating mirror image parameters to generate a corrected calibration sheet image; extracting target calibration points according to the corrected calibration sheet image; acquiring the image coordinates and laser coordinates of the target calibration points; and calculating the coordinate system conversion relation according to the image coordinates and the laser coordinates. Because the calibration sheet carries a rotating mirror image calibration graph, the rotating mirror image parameters can be determined from the image of that graph acquired by the camera, and the calibration sheet image is corrected according to these parameters so that the corrected calibration sheet image fully corresponds to the calibration sheet on which the laser marked the calibration points. The image coordinates of the calibration points corresponding to the laser coordinates can therefore be extracted accurately from the calibration sheet image, improving the calibration accuracy of the camera.
Drawings
In order to explain the technical solution of the present application more clearly, the drawings needed in the embodiments are briefly described below. For those skilled in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a schematic diagram illustrating an operating principle of a galvanometer laser according to an embodiment of the present disclosure;
FIG. 2 is a schematic view of a calibration sheet provided in an embodiment of the present application;
fig. 3 is a schematic diagram of calibration sheet images obtained by a camera at different positions according to an embodiment of the present disclosure;
fig. 4 is a flowchart of a method for calculating a transformation relationship between a camera coordinate system and a laser coordinate system according to an embodiment of the present disclosure;
FIG. 5 is a flowchart of a method for determining a rotational mirror parameter according to an embodiment of the present disclosure;
FIG. 6 is a flowchart of a method for extracting target calibration points according to a calibration patch image according to an embodiment of the present disclosure;
fig. 7 is a schematic diagram of an image Blob analysis result provided in an embodiment of the present application;
fig. 8 is a schematic diagram of a transformation relationship between a camera coordinate system and a laser coordinate system according to an embodiment of the present application;
FIG. 9 is a schematic diagram of an image of a calibration patch with poor quality according to an embodiment of the present disclosure;
FIG. 10 is a flowchart of a method for constructing a filled calibration point matrix according to an embodiment of the present application;
FIG. 11 is a diagram illustrating a target calibration point matrix according to an embodiment of the present application;
FIG. 12 is a diagram illustrating a method for constructing a reverse-filled calibration point matrix according to an embodiment of the present application;
FIG. 13 is a flowchart of a method for constructing a reverse-filled calibration point matrix according to an embodiment of the present application;
FIG. 14 is a diagram illustrating a method for constructing a reconstructed filled calibration point matrix according to an embodiment of the present application;
FIG. 15 is a flowchart of a method for constructing a reconstructed filled calibration point matrix according to an embodiment of the present application.
Detailed Description
Reference will now be made in detail to embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following embodiments do not represent all embodiments consistent with the present application; they are merely examples of systems and methods consistent with certain aspects of the application, as recited in the claims.
In the field of industrial vision, camera calibration is one of the important prerequisites for performing processes such as detection, measurement, assembly and the like. By calibrating the camera, the internal and external parameters of the camera can be calculated, and the relation between the coordinate system of the camera and the world coordinate system is established, so that the size, the defect, the position and the like of the product are measured and detected, and the automatic production is realized.
In the subdivided fields of cutting, welding, marking and the like, the laser is a widely used device. Referring to fig. 1, fig. 1 is a schematic diagram illustrating the operating principle of a conventional galvanometer laser according to an embodiment of the present disclosure. As shown in fig. 1, the galvanometer laser comprises a laser beam generating device, a galvanometer driven by an X-axis motor, a galvanometer driven by a Y-axis motor, and a field-flattening lens. In practical use, the emitting position of the laser beam generating device is fixed; the landing position of the laser beam is changed by adjusting the positions of the X-axis and Y-axis motors, so that the beam moves within the working range to complete tasks such as cutting, welding and marking.
To ensure that the camera can accurately guide the laser in high-precision work, the coordinate system conversion relation between the laser and the camera needs to be established efficiently, quickly and accurately, thereby realizing camera calibration. Combined with the convenience of laser marking, the commonly used calibration method in industry at present is to place a calibration sheet in the field of view of the camera; the material and background of the calibration sheet are uniform. Referring to fig. 2, fig. 2 is a schematic view of a calibration sheet according to an embodiment of the present disclosure. As shown in fig. 2, the calibration sheet includes 36 calibration points in 6 rows and 6 columns. In use, the calibration sheet needs to be fixed, and M rows and N columns of calibration points are marked on it by the laser. When the laser marks the points, the laser coordinate value of each point is recorded, and the camera can then be calibrated according to the calibration points.
However, a problem arises when the calibration points in the calibration sheet are extracted: because cameras are installed at different positions and angles, the calibration sheet images acquired by cameras at different positions differ for the same calibration sheet. Referring to fig. 3, fig. 3 is a schematic diagram of calibration sheet images obtained by cameras at different positions according to an embodiment of the present disclosure. As shown in fig. 3, camera 1 and camera 2 are at different positions and therefore obtain different calibration sheet images of the same calibration sheet. If the calibration sheet image acquired by the camera is mirrored or rotated relative to the calibration sheet on which the laser printed the calibration points, the image coordinates of the calibration points corresponding to the laser coordinates cannot be extracted accurately from the calibration sheet image, which reduces the calibration accuracy of the camera.
The application provides a method for calculating a conversion relation between a camera coordinate system and a laser coordinate system, which aims to solve the problem that the image coordinates of a calibration point corresponding to laser coordinates cannot be accurately extracted according to a calibration sheet image in the conventional camera calibration method.
Referring to fig. 4, fig. 4 is a flowchart illustrating a method for calculating a transformation relationship between a camera coordinate system and a laser coordinate system according to an embodiment of the present disclosure. As shown in fig. 4, the present application provides a method for calculating a transformation relationship between a camera coordinate system and a laser coordinate system, comprising steps S101-S106:
s101: and acquiring a calibration sheet image, wherein the calibration sheet image comprises an image of a rotating mirror image calibration graph and an image of a calibration point.
In the present embodiment, a calibration sheet image taken by a camera is acquired. The calibration sheet is provided with a rotating mirror image calibration graph and a plurality of calibration points. The rotation mirror image calibration graph is used for determining rotation mirror image parameters, and the rotation mirror image calibration graph is an irregular graph capable of distinguishing directions, such as an L-shaped graph, an asymmetric cross graph, a DM code and the like. The calibration points are used for calibrating the camera, and usually comprise a plurality of calibration points to form a calibration point matrix. The calibration points can be circular, rectangular, cross-shaped and the like.
S102: and determining the parameters of the rotating mirror image according to the image of the rotating mirror image calibration graph.
In this embodiment, the image of the rotating mirror image calibration graph is used to automatically determine the rotating mirror image parameters for correcting the calibration sheet image, so that the corrected calibration sheet image corresponds to the calibration sheet on which the laser printed the calibration points. This eliminates the influence of different camera installation positions on extracting the image coordinates of the calibration points from the calibration sheet image.
In a possible implementation manner, taking an L-shaped rotating mirror image calibration graph as an example, the idea of determining the rotating mirror image parameters from the image of the graph is as follows: the rotating mirror image parameters are calculated mainly by extracting the L-side features. When the features are extracted automatically, all line segments in the calibration sheet image are found first, and it is then judged whether the line segment set is empty. If it is empty, the input image is invalid and the process ends directly; if it is not empty, the line segments that fit the L-side features are screened out, and whether the image is mirrored and the image rotation angle are judged from these line segments.
Specifically, please refer to fig. 5, and fig. 5 is a flowchart illustrating a method for determining a rotational mirror parameter according to an embodiment of the present disclosure. As shown in fig. 5, the determining the rotation mirror image parameter according to the image of the rotation mirror image calibration graph includes the following steps S201 to S204:
s201: and acquiring a line segment vector set in the calibration sheet image.
In this embodiment, since the rotation mirror calibration graph is L-shaped, the L-shape is composed of two line segments. Therefore, all line segments in the calibration slice image can be extracted by defining an automatic feature search mode to generate a line segment vector set. And then, accurately extracting an L-shaped rotating mirror image calibration graph according to the line segment vector set.
S202: extracting L-edge features according to the line segment vector set, wherein the L-edge features comprise: a long side vector and a short side vector.
In this embodiment, a line segment meeting the requirement is screened out according to the line segment vector, which is an L-edge feature: a long side vector and a short side vector. The specific method for extracting the L-edge feature according to the line segment vector set can be implemented by the following steps S2021-S2021:
s2021: and calculating the included angle of any two vectors in the line segment vector set, and acquiring the two vectors of which the included angle meets the included angle threshold value. Since the angle between the two vectors constituting the L-side is 90 °, the angle threshold value of the angle may be set to 80 ° to 100 °. The angle threshold of the included angle can be set according to the requirements and precision requirements of practical application, and the method is not particularly limited in the application.
S2022: and calculating the intersection point of the two vectors meeting the angle threshold of the included angle, and forming new two vectors according to the intersection point. Specifically, the point farthest from the intersection point in each vector is obtained first, and a new vector is constructed according to the farthest point and the intersection point. And further screening the two L-side vectors which meet the conditions through the new vector and the original vector of the member.
S2023: and according to the original vector and the new vector, reserving two L-edge vectors meeting the condition through a screening condition. The screening condition can be, for example, that the ratio of the long side to the short side of the new vector to the old vector is between 1.5 and 2.5; the ratio of the new vector to the old vector is between 0.8 and 1.2. And finally, reserving two L-edge vectors meeting the screening condition, wherein the reserved vectors are original vectors instead of new vectors constructed later.
S2024: and sorting the two L-edge vectors reserved in the last step according to a sorting rule. The sorting rule is determined according to a preset rotating mirror image calibration graph, for example, in the embodiment of the present application, the rotating mirror image calibration graph is an L-shape with an aspect ratio of 2 and an included angle of 90 °. Therefore, the ordering rule may be that the aspect ratio is closest to 2, and the included angle is closest to 90 °, whereby the long-side vector and the short-side vector of the L-side may be extracted from the set of line segment vectors.
S203: and judging whether the calibration sheet image has a mirror image or not according to the long side vector and the short side vector.
In this embodiment, after the long-side vector and the short-side vector of the L are obtained through the above steps, whether a mirror image exists is judged according to them. Taking the long-side vector of the L as l1 = (x1, y1) and the short-side vector as l2 = (x2, y2) as an example, the value x1y2 - x2y1 is calculated. If x1y2 - x2y1 < 0, a mirror image exists in the calibration sheet image; otherwise, the calibration sheet image has no mirror image.
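The mirror-image judgment of step S203 can be sketched with the sign of the 2-D cross product of the two L-side vectors, per the criterion given in the text:

```python
# Per the text, a negative x1*y2 - x2*y1 indicates a mirror image.
# (Which sign counts as "mirrored" depends on the image coordinate
# convention; the sample vectors below are illustrative.)

def is_mirrored(l1, l2):
    x1, y1 = l1
    x2, y2 = l2
    return x1 * y2 - x2 * y1 < 0

no_mirror = is_mirrored((10, 0), (0, 5))    # cross = 50
mirrored = is_mirrored((10, 0), (0, -5))    # cross = -50
```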
In one possible implementation, the calibration sheet image is acquired by down-sampling. The down-sampling magnification is first obtained. Then, before judging whether a mirror image exists according to the long-side vector and the short-side vector, the original size of the L is restored according to the down-sampling magnification to obtain an original long-side vector and an original short-side vector. Finally, whether the calibration sheet image has a mirror image is judged according to the original long-side vector and the original short-side vector; for the specific judgment method, refer to the above description, which is not repeated here.
S204: and if the image of the calibration slice does not have mirror image, calculating image rotation parameters according to the long-edge vector.
In this embodiment, if the calibration slice image does not have a mirror image, the image rotation parameter is directly calculated according to the long-side vector of the L side without performing mirror image reduction, and the long-side vector of the L side is taken as: l1 ═ x1,y1) For example, the image rotation parameter is an included angle between the long-side vector and a vector (1, 0), and the included angle can be calculated by the following formula:
Figure BDA0003414857280000071
wherein θ is an image rotation angle, i.e. the image rotation parameter.
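Assuming the long-side vector is stored as an (x, y) pair, the included angle with (1, 0) can be sketched as below; `atan2` is used here because it also recovers the sign of the angle, whereas the arccos form gives only its magnitude (the function name is ours):

```python
import math

def rotation_angle(long_vec):
    """Signed angle, in degrees, between the long-side vector and (1, 0).
    abs(result) equals arccos(x1 / sqrt(x1**2 + y1**2))."""
    x1, y1 = long_vec
    return math.degrees(math.atan2(y1, x1))
```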
In one possible implementation, if the calibration sheet image is acquired by down-sampling, the long-side vector needs to be corrected before the image rotation parameter is calculated. Specifically, before the step of calculating the image rotation parameter according to the long-side vector, the method further includes: correcting the long-side vector to obtain an accurate long-side vector; and calculating the image rotation parameter according to the accurate long-side vector.
In this embodiment, if the calibration sheet image is obtained by down-sampling, then, affected by the down-sampling magnification, the obtained long-side vector of the L shape carries a certain error and cannot accurately reflect the long-side feature of the L shape. Therefore, the long-side vector needs to be corrected before the image rotation parameter is calculated from it. Specifically, a search region in the calibration sheet image may be computed from the long-side vector with error, and a straight line may be precisely located within that region to extract an accurate long-side vector.
S205: if the calibration sheet image has a mirror image, calculate the mirror image parameters according to the long-side vector and the short-side vector.
S206: generate a mirror image long-side vector according to the mirror image parameters, and calculate the image rotation parameter according to the mirror image long-side vector.
In this embodiment, if the calibration sheet image has a mirror image, the mirror image parameters need to be calculated. In the present application, the mirror image parameter is the rotation angle of the short side of the L shape. The long-side vector is then restored according to the mirror image parameters to generate the mirror image long-side vector. The specific process is as follows: obtain the center point coordinates (CenterX, CenterY) of the calibration sheet image, and let the long-side vector be (x1, y1); the mirror image long-side vector (X, Y) is then:

X = 2·CenterX − x1
Y = y1

where X is the abscissa of the mirror image long-side vector, and Y is the ordinate of the mirror image long-side vector.
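A sketch of the restoration, under the assumption (ours, since the original formula image is unavailable) that the mirror is a horizontal flip about the vertical line x = CenterX:

```python
def mirror_long_vec(long_vec, center):
    """Restore the long-side vector from a horizontally mirrored image.
    ASSUMPTION: the mirror axis is the vertical line x = CenterX; only
    the abscissa is reflected, the ordinate is unchanged."""
    x1, y1 = long_vec
    cx, _cy = center
    return (2 * cx - x1, y1)
```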
For calculating the image rotation parameter according to the mirror image long-side vector, refer to the method in step S204, which is not repeated herein.
In one possible implementation, if the calibration sheet image is acquired by down-sampling, the mirror image long-side vector needs to be corrected before the image rotation parameter is calculated. Specifically, before the step of calculating the image rotation parameter according to the mirror image long-side vector, the method further includes: correcting the mirror image long-side vector to obtain an accurate mirror image long-side vector; and calculating the image rotation parameter according to the accurate mirror image long-side vector.
In this embodiment, if the calibration sheet image is obtained by down-sampling, then, affected by the down-sampling magnification, the obtained mirror image long-side vector of the L shape carries a certain error and cannot accurately reflect the long-side feature of the L shape. Therefore, the mirror image long-side vector needs to be corrected before the image rotation parameter is calculated from it. Specifically, a search region in the calibration sheet image may be computed from the long-side vector with error and the mirror image parameters, and a straight line may be precisely located within that region to extract an accurate mirror image long-side vector.
S103: and correcting the calibration sheet image according to the rotation mirror image parameters to generate a corrected calibration sheet image.
In this embodiment, the calibration sheet image is corrected by the rotation mirror image parameters calculated in step S102. The rotation mirror image parameters include a rotation angle, and the process of correcting the calibration sheet image according to the rotation angle includes: taking the center point of the calibration sheet image as the rotation center and rotating clockwise by the rotation angle to obtain the corrected calibration sheet image. Correcting the calibration sheet image according to the rotation mirror image parameters eliminates the influence of different camera installation positions and improves the accuracy of target calibration point extraction.
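The rotation correction maps every pixel through a rotation about the image center; a coordinate-level sketch (assuming standard x-right/y-up axes — in image y-down coordinates the visual sense of "clockwise" is reversed; the function name is ours):

```python
import math

def rotate_point_cw(pt, center, angle_deg):
    """Map a point through a clockwise rotation by angle_deg about
    center, as done to each pixel when correcting the sheet image."""
    a = math.radians(angle_deg)
    x, y = pt[0] - center[0], pt[1] - center[1]
    # clockwise rotation matrix in x-right/y-up axes
    xr = x * math.cos(a) + y * math.sin(a)
    yr = -x * math.sin(a) + y * math.cos(a)
    return (xr + center[0], yr + center[1])
```

A full image warp would apply this mapping (or its inverse, with interpolation) to every pixel; libraries such as OpenCV provide this as an affine warp.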
S104: and extracting a target calibration point according to the corrected calibration sheet image.
In this embodiment, the target calibration point can be extracted by recognizing the calibration dot pattern in the corrected calibration sheet image. The target calibration point is used to determine the image coordinates and the laser coordinates for calculating the coordinate system conversion relation.
Taking the target calibration point as a circle as an example, please refer to fig. 6; fig. 6 is a flowchart of a method for extracting a target calibration point from a corrected calibration sheet image according to an embodiment of the present application. As shown in fig. 6, in a possible implementation, the step of extracting the target calibration point according to the corrected calibration sheet image includes:
S301: acquire a grayscale adaptive threshold of the corrected calibration sheet image.
In this embodiment, the gray value of each pixel in the corrected calibration sheet image may be analyzed, and a dynamic threshold representing the image characteristics is output as the grayscale adaptive threshold. The grayscale adaptive threshold is used for extracting the regions of the corrected calibration sheet image.
S302: extract a plurality of regions of the corrected calibration sheet image according to the grayscale adaptive threshold, each region having the same color.
For example, the background color of the calibration sheet in the embodiment of the present application is black and the calibration points are white, so the white regions and black regions of the corrected calibration sheet image can be extracted by Blob analysis. A Blob in computer vision refers to a block in an image, namely a connected region; Blob analysis extracts and marks connected regions from a binarized image after foreground/background separation. Each marked Blob represents a foreground object, and relevant features of the Blob can then be calculated. The advantage is that, through Blob extraction, information about the relevant region, such as its color, can be obtained. Referring to fig. 7, fig. 7 is a schematic diagram of an image Blob analysis result according to an embodiment of the present disclosure. As shown in fig. 7, the image contains white Blobs and black Blobs; the white Blob regions are white in color and the black Blob regions are black. In the present application, the gray values of all pixel points within one region satisfy the same requirement, so each region can be considered to have a single color.
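A minimal stand-in for the Blob extraction described above, labeling 4-connected regions of equal value in a small binary image (a real system would use an optimized connected-components routine; the function name is ours):

```python
from collections import deque

def extract_blobs(binary):
    """Return [(value, [pixels])] for every 4-connected region of equal
    value in a binary image given as a list of rows."""
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    blobs = []
    for sy in range(h):
        for sx in range(w):
            if seen[sy][sx]:
                continue
            color = binary[sy][sx]
            queue, region = deque([(sx, sy)]), []
            seen[sy][sx] = True
            while queue:                      # breadth-first flood fill
                x, y = queue.popleft()
                region.append((x, y))
                for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                    if 0 <= nx < w and 0 <= ny < h and not seen[ny][nx] \
                            and binary[ny][nx] == color:
                        seen[ny][nx] = True
                        queue.append((nx, ny))
            blobs.append((color, region))
    return blobs
```

Each returned blob carries its value (color) and pixel list, from which area, perimeter, and centroid can be computed.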
S303: and calculating the circularity value of the region.
In step S302, only regions of uniform color are extracted; the color of the target calibration point is still unknown, so it cannot yet be determined which regions are calibration points. In this embodiment, since the target calibration point is a circle, a circularity value can be calculated for each region using the geometric features of a circle, and the circularity value is used as a parameter representing how circular the region is. The color of the target region is then obtained according to the circularity value.
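The patent does not state its circularity formula; a common choice, used here as an assumption, is 4πA/P², which equals 1 for a perfect circle and is smaller for any other shape:

```python
import math

def circularity(area, perimeter):
    """Common circularity measure 4*pi*A / P**2; 1.0 for a perfect
    circle, lower for elongated or irregular regions. ASSUMPTION: the
    patent's exact definition is not given."""
    return 4 * math.pi * area / (perimeter ** 2)
```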
S304: and acquiring a target area according to the roundness value, wherein the target area is the area with the highest roundness value.
The region with the highest circularity value is acquired as the target region; being the closest to a circle, this region can be regarded as the region of a target calibration point.
S305: and acquiring a target color according to the target area, wherein the target color is the color of the target area.
The color of the target region is acquired as the target color; all regions having the target color are possible target calibration points. Therefore, the target color is obtained first, and all regions having the target color are extracted according to it, so as to further extract the target calibration points.
S306: extract the target calibration point according to the target color.
In a possible implementation, the step of extracting the target calibration point according to the target color includes:
S3061: obtain a target color region set according to the target color, wherein the target color region set includes the regions having the target color.
S3062: and acquiring the roundness value of each region in the target color region set to generate a target roundness value set.
S3063: and calculating a roundness value threshold range according to the target roundness value set.
For example, the method for determining the circularity value threshold range includes: sorting the circularity values in the target circularity value set from high to low; calculating the average of the top-ranked circularity values as a reference circularity value, denoted 1dd; and setting the circularity value threshold range to (0.8dd, 1.6dd). The specific bounds of the circularity value threshold range can be determined according to the practical application, and the present application does not specifically limit them.
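The threshold-range construction can be sketched as follows; the number of top values averaged (`top_n`) is an assumed parameter, since the source only says the top-ranked values are averaged:

```python
def circularity_threshold_range(values, top_n=3):
    """Sort circularity values descending, average the top_n as the
    reference value dd, and return the (0.8*dd, 1.6*dd) acceptance
    range described above. top_n is an assumption."""
    ranked = sorted(values, reverse=True)
    dd = sum(ranked[:top_n]) / min(top_n, len(ranked))
    return 0.8 * dd, 1.6 * dd
```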
S3064: and acquiring a target color area meeting the roundness value threshold range as the target calibration point according to the roundness value threshold range and the target color area set.
S105: and acquiring the image coordinate and the laser coordinate of the target calibration point.
In this embodiment, the image coordinates of the target calibration point are the extracted center coordinates of the target calibration point, and the laser coordinates of the target calibration point may be obtained directly from the laser's associated file.
S106: and calculating a coordinate system conversion relation according to the image coordinate and the laser coordinate.
In this embodiment, please refer to fig. 8; fig. 8 is a schematic diagram of the conversion relation between the camera coordinate system and the laser coordinate system according to an embodiment of the present application. As shown in fig. 8, there is a certain rotation, scaling and translation relationship between the camera coordinate system and the laser coordinate system. The conversion relation between the two coordinate systems can be determined from the calibration points shared by the camera coordinate system and the laser coordinate system, and the expression of the coordinate system conversion relation is as follows:

Px = Ix·Sx·cos θ − Ey·Iy·Sy·sin θ + Tx
Py = Ix·Sx·sin θ + Ey·Iy·Sy·cos θ + Ty

where (Px, Py) are the laser coordinates, representing the position of the calibration point in the laser coordinate system; (Ix, Iy) are the image coordinates, representing the position of the calibration point in the image coordinate system; (Tx, Ty) are the laser coordinates of the origin of the image coordinate system, representing the translation relation; θ is the angle from the image coordinate system to the laser coordinate system, representing the rotation relation and defined as the angle of the positive X-axis direction of the image coordinate system in the platform coordinate system; (Sx, Sy) are the actual physical distances of one pixel of the image coordinate system in the X and Y directions, representing the scaling relation; and Ey represents the type of the laser coordinate system, taking the value 1 when the laser coordinate system is left-handed and −1 when it is right-handed.
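Since the source's formula image is unavailable, the following is a reconstruction of the conversion as a standard rotation/scale/translation with the handedness flag Ey, consistent with the symbol definitions above (the function name and argument order are ours):

```python
import math

def image_to_laser(ix, iy, tx, ty, theta_deg, sx, sy, ey=1):
    """Map image coordinates (ix, iy) to laser coordinates using
    translation (tx, ty), rotation theta, per-axis pixel sizes
    (sx, sy), and handedness flag ey (+1 left-handed, -1 right-handed).
    ASSUMPTION: reconstructed similarity transform, not the patent's
    verbatim formula."""
    a = math.radians(theta_deg)
    px = ix * sx * math.cos(a) - ey * iy * sy * math.sin(a) + tx
    py = ix * sx * math.sin(a) + ey * iy * sy * math.cos(a) + ty
    return px, py
```

With at least two calibration points whose image and laser coordinates are both known, the unknowns (Tx, Ty, θ, Sx, Sy, Ey) can be fitted, which is exactly what the calibration-point matrix provides.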
According to the above technical solution, the method for calculating the conversion relation between the camera coordinate system and the laser coordinate system includes the following steps: acquire a calibration sheet image, the calibration sheet image including an image of a rotating mirror image calibration graph and images of calibration points; determine the rotating mirror image parameters according to the image of the rotating mirror image calibration graph; correct the calibration sheet image according to the rotating mirror image parameters to generate a corrected calibration sheet image; extract target calibration points according to the corrected calibration sheet image; acquire the image coordinates and laser coordinates of the target calibration points; and calculate the coordinate system conversion relation according to the image coordinates and the laser coordinates. Because the calibration sheet is provided with the rotating mirror image calibration graph, the rotating mirror image parameters can be determined from the image of that graph captured by the camera, and the calibration sheet image is corrected according to these parameters so that the corrected calibration sheet image fully corresponds to the calibration sheet with the laser-marked fixed points. The image coordinates of the calibration points corresponding to the laser coordinates can thus be extracted accurately, improving the calibration accuracy of the camera.
In some application scenarios with complex environments, the quality of the acquired calibration sheet image is poor; please refer to fig. 9, which is a schematic diagram of a poor-quality calibration sheet image provided in the embodiment of the present application. As shown in fig. 9, due to the imaging quality, the colors of the calibration points in the calibration sheet image differ: calibration point 1 is white while calibration point 9 is grayish, so some calibration points may fail to be extracted. If the number of extracted target calibration points is smaller than the theoretical number, some target calibration points were not extracted; therefore, the missing calibration points need to be filled in to form a complete calibration point matrix, improving the completeness of the calibration point extraction result.
In a possible implementation, please refer to fig. 10; fig. 10 is a flowchart of a method for constructing a filled calibration point matrix according to an embodiment of the present application. As shown in fig. 10, after step S104 of extracting the target calibration point from the corrected calibration sheet image, the method further includes the following steps S401 to S404:
S401: acquire the number of target calibration points and the theoretical number.
The theoretical number may be obtained from the preset calibration sheet. Taking the calibration sheet shown in fig. 2 as an example, if the calibration sheet image consists of a matrix of calibration points with 6 rows and 6 columns, the theoretical number of calibration points in the calibration sheet is 36.
S402: and if the number of the target calibration points is smaller than the number theoretical value, constructing a calibration point matrix according to the target calibration points.
Whether the target calibration points have been completely extracted is judged by comparing the number of target calibration points with the theoretical number. If the number of target calibration points is smaller than the theoretical number, some target calibration points are missing, i.e. were omitted and not extracted, so the calibration points need to be filled in to form a complete calibration point matrix.
To find the missing target calibration points accurately and quickly, in a possible implementation a calibration point matrix is first constructed from the extracted target calibration points, specifically including the following steps S4021-S4022:
S4021: acquire the X-axis minimum coordinate value, the Y-axis minimum coordinate value, the X-axis minimum distance, the Y-axis minimum distance, the theoretical row number and the theoretical column number of the target calibration points.
S4022: construct a calibration point matrix whose row number is the theoretical row number and whose column number is the theoretical column number, taking (X-axis minimum coordinate value, Y-axis minimum coordinate value) as the starting point, the X-axis minimum distance as the X-axis spacing, and the Y-axis minimum distance as the Y-axis spacing.
In this embodiment, the calibration point matrix is constructed taking a theoretical row number of 7 and a theoretical column number of 7 as an example. Referring to fig. 11, fig. 11 is a schematic diagram of a target calibration point matrix according to an embodiment of the present disclosure. As shown in fig. 11, the upper-left point of the target calibration point matrix is the starting point, i.e. its coordinates are (X-axis minimum coordinate value, Y-axis minimum coordinate value). With this point as the starting point, the X-axis minimum distance as the X-axis spacing and the Y-axis minimum distance as the Y-axis spacing, a calibration point matrix of 7 rows and 7 columns is formed, and the final theoretical number of calibration points of the target calibration point matrix is 49.
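The grid construction of S4021-S4022 can be sketched as follows (the function name is ours):

```python
def build_calibration_grid(x_min, y_min, dx, dy, rows, cols):
    """Construct the theoretical calibration-point matrix: rows x cols
    expected positions starting at (x_min, y_min), spaced dx on the
    X axis and dy on the Y axis."""
    return [[(x_min + c * dx, y_min + r * dy) for c in range(cols)]
            for r in range(rows)]
```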
S403: and filling the target calibration point matrix to generate a filled calibration point matrix.
In this embodiment, each point location of the target calibration point matrix is searched to judge whether it has an extracted target calibration point; if not, it is filled in, generating the filled calibration point matrix.
In a possible implementation, the step of filling the target calibration point matrix and generating the filled calibration point matrix includes steps S4031 to S4034:
S4031: traverse each point location of the target calibration point matrix in turn, and judge whether the point location has a calibration point.
S4032: if the point location has no calibration point, search whether the point location has a calibration point to be filled.
In this embodiment, the calibration points to be filled may be searched for according to the graphic features of the calibration points. Taking the calibration point as a circle as an example, a circle-finding tool can search for the calibration point to be filled according to circle features, and whether the circle-finding tool executes successfully is then determined: if it succeeds, the point location is marked as a calibration point to be filled; if it fails, the point location is not so marked.
S4033: if the point location has a calibration point to be filled, fill it to generate a filled calibration point.
S4034: generate the filled calibration point matrix from the target calibration points and the filled calibration points.
In this embodiment, the target calibration points and the filled calibration points are arranged by rows and columns to generate the filled calibration point matrix.
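A sketch of the traversal in S4031-S4034: each theoretical slot either keeps a detected point or is reported as missing so that a local circle search can fill it (the matching tolerance `tol` and function name are assumptions):

```python
def fill_grid(grid, found, tol=2.0):
    """Walk every theoretical grid slot; keep a detected point if one
    lies within tol of the slot, otherwise record the slot as missing
    and fall back to its theoretical position."""
    filled, missing = [], []
    for row in grid:
        out = []
        for gx, gy in row:
            match = next((p for p in found
                          if abs(p[0] - gx) <= tol and abs(p[1] - gy) <= tol),
                         None)
            if match is None:
                missing.append((gx, gy))   # candidate for the circle-finding tool
            out.append(match if match is not None else (gx, gy))
        filled.append(out)
    return filled, missing
```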
S404: and acquiring the image coordinates of the index points in the filling index point matrix.
In this embodiment, if the number of extracted target calibration points is smaller than the theoretical number, the missing calibration points are filled in to form a complete filled calibration point matrix, and the image coordinates of the calibration points are extracted from the filled calibration point matrix; these image coordinates are used, together with the laser coordinates, for calculating the coordinate conversion relation.
In some application scenarios, if the row and column numbers of the filled calibration point matrix still differ from the theoretical row and column numbers, it indicates that the minimum coordinate values in the X and Y directions calculated from the extracted target calibration points do not match the actual situation. Referring to fig. 12, fig. 12 is a schematic diagram of a method for constructing a reverse-filled calibration point matrix according to an embodiment of the present application. As shown in fig. 12, because the first row and the first column are missing, the starting point (Xmin, Ymin) cannot truly reflect the actual situation. At this time, the starting point needs to be re-determined according to the row number difference and the column number difference, and a reverse-filled calibration point matrix is constructed with the re-determined point as the reverse filling starting point.
In one possible implementation, please refer to fig. 13; fig. 13 is a flowchart of a method for constructing a reverse-filled calibration point matrix according to an embodiment of the present application. As shown in fig. 13, after step S403 of filling the target calibration point matrix to generate the filled calibration point matrix, the method further includes steps S501 to S505:
S501: acquire a first row number and a first column number, wherein the first row number is the row number of the filled calibration point matrix and the first column number is the column number of the filled calibration point matrix.
S502: judge whether the first row number is equal to the theoretical row number and whether the first column number is equal to the theoretical column number.
S503: if the first row number is not equal to the theoretical row number and/or the first column number is not equal to the theoretical column number, acquire a first row number difference and a first column number difference, wherein the first row number difference is the difference between the first row number and the theoretical row number, and the first column number difference is the difference between the first column number and the theoretical column number.
S504: determine the reverse filling starting point according to the first row number difference and the first column number difference.
S505: construct a reverse-filled calibration point matrix according to the reverse filling starting point.
In this embodiment, after the reverse-filled calibration point matrix is constructed, it is filled: each point location of the reverse-filled calibration point matrix is traversed again to judge whether it has a calibration point, and if not, whether a calibration point can be filled in is judged. For the specific method, refer to steps S4031 to S4033, which are not repeated herein.
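Determining the reverse filling starting point (S504) amounts to shifting the corner back by the missing rows/columns; a sketch under that reading (the function name and sign convention are ours):

```python
def reverse_fill_start(x_min, y_min, dx, dy, row_diff, col_diff):
    """If the filled matrix came up row_diff rows and col_diff columns
    short of the theoretical size, the true first point lies that many
    spacings before (x_min, y_min)."""
    return x_min - col_diff * dx, y_min - row_diff * dy
```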
In some application scenarios, if the row and column numbers of the filled reverse-filled calibration point matrix still differ from the theoretical row and column numbers, it indicates that the minimum X-direction and Y-direction distances calculated from the extracted target calibration points do not match the actual situation. Referring to fig. 14, fig. 14 is a schematic diagram of a method for constructing a reconstructed filled calibration point matrix according to an embodiment of the present disclosure. As shown in fig. 14, because a row in the middle of the calibration point matrix is missing, the minimum X-direction and Y-direction distances cannot truly reflect the actual situation. At this time, the minimum X-direction and Y-direction distances need to be re-determined according to the row number difference and the column number difference, and the filled calibration point matrix is reconstructed, i.e. a reconstructed filled calibration point matrix is obtained.
In a possible implementation, please refer to fig. 15; fig. 15 is a flowchart of a method for constructing a reconstructed filled calibration point matrix according to an embodiment of the present application. As shown in fig. 15, following step S505 of constructing the reverse-filled calibration point matrix, the method includes:
s601: and acquiring a second row number and a second column number, wherein the second row number is the row number of the reverse filling calibration point matrix, and the second column number is the column number of the reverse filling calibration point matrix.
S602: and judging whether the second row number is equal to the theoretical row number or not and whether the second column number is equal to the theoretical column number or not.
S603: and if the second row number is not equal to the theoretical row number and/or the second column number is not equal to the theoretical column number, acquiring a second row number difference value and a second row number difference value, wherein the second row number difference value is a difference value between the second row number and the theoretical row number, and the second row number difference value is a difference value between the second column number and the theoretical column number.
S604: and determining the minimum distance of the reconstructed X axis and the minimum distance of the reconstructed Y axis according to the difference value of the second row number and the difference value of the second column number.
S605: and constructing a reconstructed filling index point matrix according to the reconstructed X-axis minimum distance and the reconstructed Y-axis minimum distance.
In this embodiment, after the reconstructed padding index point matrix is constructed, the reconstructed padding index point matrix is padded. And traversing each point location of the reconstructed filling calibration point matrix again, judging whether the point location has a calibration point, and if the point location has no calibration point, judging whether a mark point is filled. In the specific method, reference is made to steps S4031 to S4033, which are not described herein again.
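Re-deriving the spacings (S604) can be sketched as dividing the overall extent of the detected points by the theoretical counts; the even-spacing assumption and the function name are ours, not stated in the source:

```python
def reconstructed_spacing(x_min, x_max, y_min, y_max, rows, cols):
    """Re-derive grid spacings from the overall extent of the detected
    points and the theoretical row/column counts, for the case where an
    interior row or column is missing and the observed minimum gap
    overestimates the true spacing."""
    return (x_max - x_min) / (cols - 1), (y_max - y_min) / (rows - 1)
```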
According to the above technical solution, the method for calculating the conversion relation between the camera coordinate system and the laser coordinate system can automatically extract calibration points from the calibration sheet image. If not all calibration points are extracted because the calibration sheet image quality is poor, complete calibration points are obtained by constructing a calibration point matrix and filling it, which improves the completeness and accuracy of calibration point extraction and thus the accuracy of the calculated conversion relation between the camera coordinate system and the laser coordinate system.
The embodiments provided in the present application are only a few examples of the general concept of the present application, and do not limit the scope of the present application. Any other embodiments extended according to the scheme of the present application without inventive efforts will be within the scope of protection of the present application for a person skilled in the art.

Claims (12)

1. A method of calculating a transformation relationship between a camera coordinate system and a laser coordinate system, comprising:
acquiring a calibration sheet image, wherein the calibration sheet image comprises an image of a rotating mirror image calibration graph and an image of a calibration point;
determining a rotation mirror image parameter according to the image of the rotation mirror image calibration graph;
correcting the calibration sheet image according to the rotating mirror image parameters to generate a corrected calibration sheet image;
extracting a target calibration point according to the corrected calibration sheet image;
acquiring the image coordinate and the laser coordinate of the target calibration point;
and calculating a coordinate system conversion relation according to the image coordinate and the laser coordinate.
2. The method according to claim 1, wherein the rotated mirror calibration pattern is L-shaped, and the step of determining the rotated mirror parameter from the image of the rotated mirror calibration pattern comprises:
acquiring a line segment vector set in the calibration sheet image;
extracting L-edge features according to the line segment vector set, wherein the L-edge features comprise: a long side vector and a short side vector;
judging whether the calibration sheet image has a mirror image or not according to the long side vector and the short side vector;
and if the calibration sheet image does not have a mirror image, calculating image rotation parameters according to the long side vector.
3. The method of claim 2, wherein the step of calculating image rotation parameters from the long-side vectors is preceded by:
correcting the long-edge vector to obtain an accurate long-edge vector;
and calculating the image rotation parameters according to the accurate long-edge vector.
4. The method according to claim 2, wherein the step of determining whether the calibration sheet image has a mirror image according to the long side vector and the short side vector further comprises:
if the calibration sheet image has a mirror image, calculating mirror image parameters according to the long side vector and the short side vector;
and generating a mirror image long-edge vector according to the mirror image parameters, and calculating image rotation parameters according to the mirror image long-edge vector.
5. The method of claim 4, wherein the step of calculating image rotation parameters from the mirrored long-edge vector is preceded by:
correcting the long-edge vector of the mirror image to obtain an accurate long-edge vector of the mirror image;
and calculating the image rotation parameters according to the precise mirror image long-edge vector.
6. The method of claim 1, wherein the target calibration point is circular, and wherein extracting the target calibration point from the corrected calibration sheet image comprises:
acquiring a grayscale adaptive threshold of the corrected calibration sheet image;
extracting a plurality of regions of the corrected calibration sheet image, each of the regions having the same color, according to the grayscale adaptive threshold;
calculating a circularity value for the region;
acquiring a target area according to the roundness value, wherein the target area is the area with the highest roundness value;
acquiring a target color according to the target area, wherein the target color is the color of the target area;
and extracting the target calibration point according to the target color.
7. The method of claim 6, wherein the step of extracting the target calibration point according to the target color comprises:
obtaining a target color region set according to the target color, wherein the target color region set comprises regions with the target color;
acquiring a roundness value of each region in the target color region set to generate a target roundness value set;
calculating a roundness value threshold range according to the target roundness value set;
and acquiring a target color area meeting the roundness value threshold range as the target calibration point according to the roundness value threshold range and the target color area set.
8. The method according to any one of claims 1-7, wherein the step of extracting the target calibration point according to the calibration sheet image is followed by:
acquiring the number of the target calibration points and a theoretical number;
if the number of the target calibration points is smaller than the theoretical number, constructing a calibration point matrix according to the target calibration points;
filling the calibration point matrix to generate a filled calibration point matrix;
and acquiring the image coordinates of the calibration points in the filled calibration point matrix.
9. The method of claim 8, wherein the step of constructing a calibration point matrix according to the target calibration points comprises:
acquiring the minimum X-axis coordinate value, the minimum Y-axis coordinate value, the minimum X-axis spacing, the minimum Y-axis spacing, the theoretical row number, and the theoretical column number of the target calibration points;
and constructing a calibration point matrix whose row number is the theoretical row number and whose column number is the theoretical column number, taking the minimum X-axis coordinate value and the minimum Y-axis coordinate value as the starting point, the minimum X-axis spacing as the X-axis step, and the minimum Y-axis spacing as the Y-axis step.
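The grid construction of claim 9 is a straightforward lattice: start at the minimum X/Y coordinates and step by the minimum X/Y spacings for the theoretical number of rows and columns. A minimal sketch (function name and tuple layout are illustrative):

```python
def build_calibration_matrix(x_min, y_min, dx_min, dy_min, n_rows, n_cols):
    """Ideal calibration-point grid of claim 9: origin at the minimum
    X/Y coordinate values, stepping by the minimum X/Y spacings."""
    return [[(x_min + c * dx_min, y_min + r * dy_min) for c in range(n_cols)]
            for r in range(n_rows)]
```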
10. The method of claim 9, wherein the step of filling the calibration point matrix to generate a filled calibration point matrix comprises:
traversing each point location of the calibration point matrix in turn, and judging whether a calibration point exists at the point location;
if no calibration point exists at the point location, searching whether a to-be-filled calibration point exists at the point location;
if a to-be-filled calibration point exists at the point location, filling the to-be-filled calibration point to generate a filled calibration point;
and generating the filled calibration point matrix according to the target calibration points and the filled calibration points.
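One plausible reading of the traversal in claim 10 (the patent does not specify the matching rule): for each ideal grid position, use a detected point within some tolerance if one exists, otherwise fill in the ideal coordinates. The `tol` parameter and nearest-match rule are assumptions for illustration:

```python
def fill_calibration_matrix(grid, detected, tol):
    """Traverse each ideal grid position: keep a detected point lying
    within `tol` of it, otherwise fill the position with the ideal
    coordinates as a to-be-filled calibration point."""
    filled = []
    for row in grid:
        out_row = []
        for (gx, gy) in row:
            match = next((p for p in detected
                          if abs(p[0] - gx) <= tol and abs(p[1] - gy) <= tol),
                         None)
            out_row.append(match if match is not None else (gx, gy))
        filled.append(out_row)
    return filled
```

The result is a complete matrix in which every position holds either a measured point or a synthesized placeholder, so downstream indexing by row/column stays valid even when some marks were missed.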
11. The method of claim 8, wherein the step of filling the calibration point matrix to generate a filled calibration point matrix further comprises:
acquiring a first row number and a first column number, wherein the first row number is the row number of the filled calibration point matrix, and the first column number is the column number of the filled calibration point matrix;
judging whether the first row number is equal to the theoretical row number and whether the first column number is equal to the theoretical column number;
if the first row number is not equal to the theoretical row number and/or the first column number is not equal to the theoretical column number, acquiring a first row number difference and a first column number difference, wherein the first row number difference is the difference between the first row number and the theoretical row number, and the first column number difference is the difference between the first column number and the theoretical column number;
determining a reverse filling starting point according to the first row number difference and the first column number difference;
and constructing a reverse-filled calibration point matrix according to the reverse filling starting point.
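Claim 11 does not give the formula for the reverse filling starting point. A natural geometric assumption, sketched below, is to shift the grid origin backwards by the missing rows and columns so the matrix can be rebuilt growing in the negative direction (function name and sign convention are assumptions):

```python
def reverse_fill_start(x_min, y_min, dx_min, dy_min, row_diff, col_diff):
    """Assumed geometry for claim 11: move the grid origin back by the
    number of missing columns/rows times the minimum spacings, so the
    reverse-filled matrix covers the positions the forward pass missed."""
    return x_min - col_diff * dx_min, y_min - row_diff * dy_min
```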
12. The method of claim 11, wherein the step of constructing a reverse-filled calibration point matrix according to the reverse filling starting point is followed by:
acquiring a second row number and a second column number, wherein the second row number is the row number of the reverse-filled calibration point matrix, and the second column number is the column number of the reverse-filled calibration point matrix;
judging whether the second row number is equal to the theoretical row number and whether the second column number is equal to the theoretical column number;
if the second row number is not equal to the theoretical row number and/or the second column number is not equal to the theoretical column number, acquiring a second row number difference and a second column number difference, wherein the second row number difference is the difference between the second row number and the theoretical row number, and the second column number difference is the difference between the second column number and the theoretical column number;
determining a reconstructed minimum X-axis spacing and a reconstructed minimum Y-axis spacing according to the second row number difference and the second column number difference;
and constructing a reconstructed filled calibration point matrix according to the reconstructed minimum X-axis spacing and the reconstructed minimum Y-axis spacing.
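Claim 12 likewise leaves the spacing-reconstruction formula open. One plausible reading, sketched here purely as an assumption: if the reverse-filled matrix still holds fewer rows/columns than theory, the assumed minimum spacings were too large, so shrink them so the same physical span divides into the theoretical grid size:

```python
def reconstruct_min_spacings(dx_min, dy_min, second_rows, second_cols,
                             theo_rows, theo_cols):
    """Assumed formula for claim 12: scale the minimum spacings so the
    span covered by the rows/columns actually found is re-divided into
    the theoretical number of rows and columns."""
    new_dx = dx_min * (second_cols - 1) / (theo_cols - 1)
    new_dy = dy_min * (second_rows - 1) / (theo_rows - 1)
    return new_dx, new_dy
```

For example, a 3x3 grid found at spacing 6 where a 5x5 grid was expected yields reconstructed spacings of 3.0, after which the matrix is rebuilt as in claim 9.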
CN202111543113.2A 2021-12-16 2021-12-16 Method for calculating conversion relation between camera coordinate system and laser coordinate system Pending CN114332237A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111543113.2A CN114332237A (en) 2021-12-16 2021-12-16 Method for calculating conversion relation between camera coordinate system and laser coordinate system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111543113.2A CN114332237A (en) 2021-12-16 2021-12-16 Method for calculating conversion relation between camera coordinate system and laser coordinate system

Publications (1)

Publication Number Publication Date
CN114332237A true CN114332237A (en) 2022-04-12

Family

ID=81051744

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111543113.2A Pending CN114332237A (en) 2021-12-16 2021-12-16 Method for calculating conversion relation between camera coordinate system and laser coordinate system

Country Status (1)

Country Link
CN (1) CN114332237A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115799140A (en) * 2022-07-20 2023-03-14 拓荆键科(海宁)半导体设备有限公司 Calibration method and device, combined microscope and wafer bonding method
CN115799140B (en) * 2022-07-20 2023-12-01 拓荆键科(海宁)半导体设备有限公司 Calibration method and device, combined microscope and wafer bonding method
CN115127493A (en) * 2022-09-01 2022-09-30 广东三姆森科技股份有限公司 Coordinate calibration method and device for product measurement

Similar Documents

Publication Publication Date Title
CN114332237A (en) Method for calculating conversion relation between camera coordinate system and laser coordinate system
CN110223226B (en) Panoramic image splicing method and system
CN104718428A (en) Pattern inspecting and measuring device and program
JPH02148180A (en) Method and device for inspecting pattern
CN112614188B (en) Dot-matrix calibration board based on cross ratio invariance and identification method thereof
JP2022528301A (en) Calibration method, positioning method, equipment, electronic devices and storage media
CN114022370B (en) Galvanometer laser processing distortion correction method and system
CN111391327B (en) Printing error determination method, printing error determination device, electronic equipment and storage medium
CN115601774B (en) Table recognition method, apparatus, device, storage medium and program product
CN112561873B (en) CDSEM image virtual measurement method based on machine learning
CN113257182B (en) Lamp point position correction method and device in LED display screen correction process
JP2730457B2 (en) Three-dimensional position and posture recognition method based on vision and three-dimensional position and posture recognition device based on vision
CN115546016B (en) Method for acquiring and processing 2D (two-dimensional) and 3D (three-dimensional) images of PCB (printed Circuit Board) and related device
CN115041705B (en) Multi-laser triaxial galvanometer calibration method, system, equipment and readable storage medium
CN110543798A (en) two-dimensional code identification method and device
CN115908562A (en) Different-surface point cooperation marker and measuring method
CN111914857B (en) Layout method, device and system for plate excess material, electronic equipment and storage medium
CN111753832B (en) Image processing method, image processing apparatus, electronic device, and storage medium
CN112991211A (en) Dark corner correction method for industrial camera
CN114295056A (en) Rapid correction method and application of visual positioning system of laser processing equipment
CN110310239B (en) Image processing method for eliminating illumination influence based on characteristic value fitting
CN116205988A (en) Coordinate processing method, coordinate processing device, computer equipment and computer readable storage medium
CN113269728B (en) Visual edge-tracking method, device, readable storage medium and program product
US20110262005A1 (en) Object detecting method and non-transitory computer-readable recording medium storing an object detection program
CN114800520B (en) High-precision hand-eye calibration method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination