CN118049922A - Displacement calculation method, device, equipment and medium based on multi-step calibration - Google Patents


Info

Publication number
CN118049922A
Authority
CN
China
Prior art keywords
data
calibration
image plane
virtual image
displacement
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410228162.4A
Other languages
Chinese (zh)
Inventor
向往
姚文政
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Phoskey Shenzhen Precision Technology Co ltd
Original Assignee
Phoskey Shenzhen Precision Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Phoskey Shenzhen Precision Technology Co ltd filed Critical Phoskey Shenzhen Precision Technology Co ltd
Priority to CN202410228162.4A
Publication of CN118049922A
Legal status: Pending


Landscapes

  • Length Measuring Devices By Optical Means (AREA)

Abstract

The application relates to a displacement calculation method, device, equipment, and medium based on multi-step calibration. The method comprises the following steps: adopting a long-spot laser and a shift-axis projection optical layout, and receiving imaging data through a multi-row area-array CMOS to obtain grayscale image data; performing multi-row data synthesis on the grayscale image data to obtain profile data; performing optical parameter calibration based on the profile data and the calibration table displacement data in a multi-step calibration manner to obtain target optical parameters; mapping the profile data to a virtual image plane based on the target optical parameters and calculating the spot center point in the virtual image plane; and calculating a target displacement value based on the spot center point in the virtual image plane and the target optical parameter data. The application avoids the measurement errors of a point laser displacement sensor and improves the accuracy of displacement calculation.

Description

Displacement calculation method, device, equipment and medium based on multi-step calibration
Technical Field
The present application relates to the field of laser measurement technologies, and in particular, to a displacement calculation method, device, equipment, and medium based on multi-step calibration.
Background
The main cause of measurement error in a point laser displacement sensor is distortion of the raw waveform data by laser speckle and by the complex micro-scale morphology of the measured surface. In addition, the triangulation mapping relation and the shift-lens design make the resolution along the pixel axis non-uniform, so the collected waveform data are skewed; this biases the center-point calculation and ultimately introduces an offset error into the computed displacement.
Disclosure of Invention
The embodiments of the present application aim to provide a displacement calculation method, device, equipment, and medium based on multi-step calibration, so as to avoid the measurement errors of a point laser displacement sensor and improve the accuracy of displacement calculation.
In order to solve the above technical problems, an embodiment of the present application provides a displacement calculation method based on multi-step calibration, including:
adopting a long-spot laser and a shift-axis projection optical layout, and receiving imaging data through a multi-row area-array CMOS to obtain grayscale image data;
performing multi-row data synthesis processing based on the grayscale image data to obtain profile data;
performing optical parameter calibration based on the profile data and the calibration table displacement data in a multi-step calibration manner to obtain target optical parameters;
mapping the profile data to a virtual image plane based on the target optical parameters, and calculating the spot center point in the virtual image plane;
and calculating the spot displacement distance based on the spot center point in the virtual image plane and the target optical parameter data to obtain a target displacement value.
In order to solve the above technical problems, an embodiment of the present application provides a displacement calculation device based on multi-step calibration, including:
a data collection unit, configured to adopt a long-spot laser and a shift-axis projection optical layout and receive imaging data through a multi-row area-array CMOS to obtain grayscale image data;
a profile data generating unit, configured to perform multi-row data synthesis processing based on the grayscale image data to obtain profile data;
a target optical parameter calibration unit, configured to perform optical parameter calibration based on the profile data and the calibration table displacement data in a multi-step calibration manner to obtain target optical parameters;
a virtual image plane spot center calculating unit, configured to map the profile data to the virtual image plane based on the target optical parameters and calculate the spot center point coordinates in the virtual image plane;
and a displacement calculation unit, configured to calculate the spot displacement distance based on the spot center point in the virtual image plane and the target optical parameter data to obtain a target displacement value.
In order to solve the above technical problems, the present application adopts a further technical scheme: an electronic device is provided, comprising one or more processors and a memory for storing one or more programs, such that the one or more processors implement the above displacement calculation method based on multi-step calibration.
In order to solve the above technical problems, the present application adopts a further technical scheme: a computer-readable storage medium is provided, having stored thereon a computer program which, when executed by a processor, implements any of the above displacement calculation methods based on multi-step calibration.
The embodiments of the present application provide a displacement calculation method, device, equipment, and medium based on multi-step calibration. The method comprises: adopting a long-spot laser and a shift-axis projection optical layout, and receiving imaging data through a multi-row area-array CMOS to obtain grayscale image data; performing multi-row data synthesis processing based on the grayscale image data to obtain profile data; performing optical parameter calibration based on the profile data and the calibration table displacement data in a multi-step calibration manner to obtain target optical parameters; mapping the profile data to a virtual image plane based on the target optical parameters and calculating the spot center point coordinates in the virtual image plane; and calculating the spot displacement distance based on the spot center point in the virtual image plane and the target optical parameter data to obtain a target displacement value.
By adopting the long-spot laser and the shift-axis projection optical layout and receiving the imaging data of the long light spot on a multi-row area-array CMOS, the embodiments effectively avoid the measurement errors of a point laser displacement sensor; by performing multi-step calibration on the profile data, accurate correction of the profile data is achieved, which helps improve the accuracy of displacement calculation.
Drawings
In order to more clearly illustrate the solution of the present application, the drawings required for describing the embodiments are briefly introduced below. It is apparent that the drawings in the following description show only some embodiments of the present application, and that a person of ordinary skill in the art may derive other drawings from them without inventive effort.
FIG. 1 is a flow chart of an implementation of a flow of a displacement calculation method based on multi-step calibration provided by an embodiment of the application;
FIG. 2 is a schematic diagram of point laser measurement based on a long spot and a multi-row area-array CMOS provided by an embodiment of the present application;
FIG. 3 is a schematic diagram of imaging a light spot on a CMOS target surface according to an embodiment of the present application;
FIG. 4 is a schematic gray scale diagram of imaging a light spot on a CMOS target surface according to an embodiment of the present application;
FIG. 5 is a flowchart of an implementation of a sub-flow of a displacement calculation method based on multi-step calibration provided by an embodiment of the present application;
FIG. 6 is a schematic diagram of contour data synthesis provided by an embodiment of the present application;
FIG. 7 is a flow chart of contour data synthesis provided by an embodiment of the present application;
FIG. 8 is a flowchart of an implementation of a sub-process of a displacement calculation method based on multi-step calibration provided by an embodiment of the present application;
FIG. 9 is a schematic diagram of a direct-injection type triangulation ranging model provided by an embodiment of the present application;
FIG. 10 is a flowchart of an implementation of a sub-process of a displacement calculation method based on multi-step calibration provided by an embodiment of the present application;
FIG. 11 is a flowchart of an implementation of a sub-flow of a displacement calculation method based on multi-step calibration provided by an embodiment of the present application;
FIG. 12 is a flowchart of an implementation of a sub-process of a displacement calculation method based on multi-step calibration provided by an embodiment of the present application;
FIG. 13 is a schematic diagram of mapping contour data from an actual image plane to a virtual image plane according to an embodiment of the present application;
FIG. 14 is a schematic diagram of a displacement calculation device based on multi-step calibration according to an embodiment of the present application;
fig. 15 is a schematic diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs; the terminology used in the description of the applications herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application; the terms "comprising" and "having" and any variations thereof in the description of the application and the claims and the description of the drawings above are intended to cover a non-exclusive inclusion. The terms first, second and the like in the description and in the claims or in the above-described figures, are used for distinguishing between different objects and not necessarily for describing a sequential or chronological order.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
In order to make the person skilled in the art better understand the solution of the present application, the technical solution of the embodiment of the present application will be clearly and completely described below with reference to the accompanying drawings.
The present invention will be described in detail with reference to the drawings and embodiments.
It should be noted that, the displacement calculating method based on multi-step calibration provided in the embodiments of the present application is generally executed by a server, and accordingly, the displacement calculating device based on multi-step calibration is generally configured in the server.
Referring to fig. 1 to 4, fig. 1 illustrates a specific embodiment of a displacement calculation method based on multi-step calibration; fig. 2 is a schematic diagram of point laser measurement based on a long spot and a multi-row area-array CMOS provided by an embodiment of the present application; fig. 3 is a schematic diagram of spot imaging on a CMOS target surface according to an embodiment of the present application; fig. 4 is a schematic grayscale diagram of spot imaging on a CMOS target surface according to an embodiment of the present application.
It should be noted that, provided substantially the same results are obtained, the method of the present application is not limited to the flow sequence shown in fig. 1. The method includes the following steps:
S1: and adopting a long-spot laser and a shift projection optical layout, and receiving imaging data through a multi-row area array CMOS to obtain gray scale image data.
Specifically, the embodiment of the present application detects an object with a point laser displacement sensor scheme combining a long light spot, a multi-row area-array CMOS, and shift-axis projection, so grayscale image data must be acquired at each position on the calibration table during detection. In an embodiment, the purpose of using a long spot is twofold: the narrower the spot width, the smaller the error in solving the center point; and the longer the spot, the longer the illuminated length on the object surface, which creates the conditions for suppressing both the influence of the measured surface's roughness and speckle noise.
As shown in fig. 3 and 4, imaging data are received using a multi-row area-array CMOS in the embodiment of the present application. The long light spot can be regarded as an arrangement of many point spots: with a multi-row area-array CMOS, each row of waveform data can be treated as the imaging data of one point spot, and the rows together form the imaging data of the long spot. Combining the multi-row waveform data suppresses the influence of the measured surface's roughness and speckle noise; outputting a single selected row allows fine objects to be measured as with a point spot; and outputting multiple rows enables narrow-area 3D scanning. The application adopts shift-axis projection to effectively enlarge the depth of field, so that objects at different distances can all be imaged clearly on the target surface. However, because the target surface is tilted in the shift-axis projection layout, the spot profiles on the CMOS target surface are no longer symmetrical; directly calculating the center point therefore incurs a deviation, which ultimately shifts the calculated displacement. The profile data must therefore be calibrated in multiple steps in the subsequent stages to correct them and improve the accuracy of displacement calculation.
S2: and carrying out multi-line data synthesis processing based on the gray map data to obtain contour data.
Specifically, the above step has acquired grayscale image data at each position. In the embodiment of the present application, the overexposed data must be removed from the grayscale image data, after which the average of the non-overexposed data in each column is calculated to synthesize the multi-row data into profile data.
Referring to fig. 5 to fig. 7, fig. 5 shows a specific implementation manner of step S2, and fig. 6 is a schematic diagram of contour data synthesis provided by an embodiment of the present application; fig. 7 is a flow chart of contour data synthesis provided in an embodiment of the present application, which is described in detail below:
s21: and identifying overexposure data of each column in the gray scale image data, and removing the overexposure data from the gray scale image data to obtain non-overexposure data.
S22: and carrying out average calculation on the non-overexposure data so as to synthesize a plurality of rows of the non-overexposure data into a single row of the non-overexposure data, thereby obtaining the profile data.
As shown in fig. 6, in the embodiment of the present application, the overexposed data in each column of the grayscale image data at a given position must be identified. The identification rule is to set a maximum gray value: any value in the grayscale image data exceeding this maximum is marked as overexposed. The overexposed data are removed from the grayscale image data to obtain the non-overexposed data, which are then averaged so that multiple rows of non-overexposed data are synthesized into a single row, yielding the profile data.
As shown in fig. 7, a specific procedure for profile data synthesis is provided. Its inputs are the number of rows m, the number of columns n, the maximum gray value thd_max_val, and the grayscale image mat_im[m, n], with the initialization i = 0, j = 0 and the count count = 0. Based on these inputs, a multi-step judgment identifies the overexposed data and computes the average of the non-overexposed data in each column to obtain the profile data.
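As an illustration, the column-wise synthesis of fig. 7 can be sketched in a few lines (a minimal sketch assuming a NumPy grayscale array; the function name `synthesize_profile` and the all-overexposed fallback behavior are assumptions for illustration, not taken from the patent):

```python
import numpy as np

def synthesize_profile(mat_im: np.ndarray, thd_max_val: float) -> np.ndarray:
    """Collapse an (m, n) multi-row grayscale image into a single-row
    profile by averaging, per column, only the non-overexposed pixels."""
    mat = mat_im.astype(float)
    valid = mat <= thd_max_val               # non-overexposed pixels
    counts = valid.sum(axis=0)               # valid pixels per column
    sums = np.where(valid, mat, 0.0).sum(axis=0)
    # Columns whose pixels are all overexposed fall back to the saturation value.
    return np.where(counts > 0, sums / np.maximum(counts, 1), thd_max_val)

# Example: 3 rows x 4 columns, saturation threshold 250
im = np.array([[10., 255., 30., 40.],
               [20., 255., 50., 60.],
               [30., 100., 70., 255.]])
prof = synthesize_profile(im, 250.0)         # -> [20., 100., 50., 50.]
```

Column 1 averages only the single valid value 100, and column 3 averages the two valid values 40 and 60, matching the "remove overexposed, then average per column" rule of steps S21 and S22.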
S3: and performing optical parameter calibration based on the profile data and the calibration table displacement data by adopting a multi-step calibration mode to obtain target optical parameters.
Specifically, according to the triangulation scheme with a long spot, a multi-row area-array CMOS, and shift-axis projection, the embodiment of the present application provides a profile data correction scheme based on multi-step calibration. Optical parameter calibration is performed on the spot center point and the profile data in a multi-step manner, so that the profile data are corrected and the target optical parameters are obtained. Specifically, 4 parameters are calibrated: the coordinate point where the optical axis meets the target surface, the included angle between the laser line and the optical axis, the included angle between the optical axis and the target surface, and the equivalent focal length.
Referring to fig. 8 and 9, fig. 8 shows a specific implementation manner of step S3, and fig. 9 is a schematic diagram of a direct triangular ranging model provided by an embodiment of the present application, which is described in detail below:
S31: and calculating to obtain a light spot center point based on the profile data, and inputting the light spot center point, calibration table displacement data and optical system design parameters into a direct-injection type triangular ranging model for parameter calibration to obtain a first calibration parameter, wherein the optical system design parameters comprise a coordinate point from an optical axis to a target surface, an included angle between a laser line and the optical axis, an included angle between the optical axis and the target surface and an equivalent focal length.
In the embodiment of the application, the included angle between the laser line and the optical axis, the included angle between the optical axis and the target surface and the equivalent focal length are calibrated preliminarily. The center point (uc) -displacement data(s) of each position in the profile data, and the optical system design parameters (coordinate point u 0 from the optical axis to the target surface, the included angle θ between the laser line and the optical axis, the included angle between the optical axis and the target surfaceEquivalent focal length l 2) is used as input, where u 0 is used as constant, θ,/>And l 2 is taken as the quantity to be calibrated, and is brought into the direct-injection type triangular ranging model for calibration to obtain a first calibration parameter, so that the first calibration of the profile data is completed. Wherein the first calibration parameter comprises an included angle theta_bar between the laser line and the optical axis, and an included angle/>, between the optical axis and the target surfaceEquivalent focal length l2_bar.
As shown in fig. 9, which is a schematic diagram of the direct-injection triangulation model, the laser beam is incident perpendicularly on the surface of the measured object; this type of measurement is called direct-injection measurement. It can be regarded as the oblique-injection mode with an incidence angle of zero. The theoretical relation between the displacement y of the object and the moving distance x of the light spot on the detector is:
y = (l1 · x · sin φ) / (l2 · sin θ − x · sin(θ + φ))
When the shift-lens layout is satisfied, l1 · tan θ = l2 · tan φ, i.e. tan φ = (l1 / l2) · tan θ; substituting this together with l2 = l into the above formula simplifies it to the calculation form of the direct-injection triangulation model given below.
Further, x = (u − u0) · du;
where y is the displacement of the object, x is the moving distance of the light spot on the detector, l and l2 are equivalent focal lengths, l1 is the distance from the intersection of the laser with the reference plane to the imaging lens group, φ is the included angle between the optical axis and the target surface, and θ is the included angle between the laser line and the optical axis; u is the pixel coordinate of the spot on the detector, u0 is the coordinate on the detector where the optical axis intersects it, and du is the pixel pitch, a constant. Setting
a = l1 · tan θ · sin φ, b = l2 · tan θ · sin θ, c = tan θ · sin(θ + φ),
the relation can be written as y = a · x / (b − c · x). It can be seen that only 3 independent parameters are needed to convert the spot displacement on the CMOS into the measured moving distance, and a, b, c clearly have stable solutions. In actual processing, x = (u − u0) · res_pixel, where res_pixel is the pixel pitch.
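Since clearing the denominator of y = a·x/(b − c·x) gives the homogeneous relation a·x − b·y + c·x·y = 0, the three parameters can be recovered, up to a common scale that cancels in the model, from calibration pairs (x, y) as a null vector of a small matrix. A hedged NumPy sketch of this idea, with illustrative names and synthetic numbers only:

```python
import numpy as np

def model_y(x, a, b, c):
    """Direct-injection triangulation model: object displacement from
    spot movement x on the detector."""
    return a * x / (b - c * x)

def fit_abc(x, y):
    """Recover (a, b, c) up to scale from calibration pairs via the
    null space of [x, -y, x*y] (SVD)."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    A = np.column_stack([x, -y, x * y])   # each row encodes a*x - b*y + c*x*y = 0
    _, _, vt = np.linalg.svd(A)
    a, b, c = vt[-1]                      # right-singular vector of smallest sigma
    if b < 0:                             # fix the arbitrary overall sign
        a, b, c = -a, -b, -c
    return a, b, c

# Synthetic check: generate data from known parameters, then refit.
a0, b0, c0 = 2.0, 5.0, 0.3
xs = np.linspace(-1.0, 1.0, 9)
ys = model_y(xs, a0, b0, c0)
a, b, c = fit_abc(xs, ys)
scale = b0 / b                            # parameters are defined only up to scale
```

In practice x would first be formed from pixel coordinates as x = (u − u0) · res_pixel; the scale ambiguity is harmless because y = a·x/(b − c·x) is unchanged when (a, b, c) are multiplied by a common factor.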
S32: and mapping the contour data to a virtual image plane based on the first calibration parameters, and calculating to obtain a light spot center point in the virtual image plane to obtain a light spot center point in the initial virtual image plane.
Referring to fig. 10, fig. 10 shows a specific embodiment of step S32, which is described in detail as follows:
s321: and calculating coordinates of the contour data mapped to the virtual image plane based on the first calibration parameters to obtain contour data of the virtual image plane.
S322: and recalculating the light spot center points of all the positions based on the contour data of the virtual image plane to obtain the light spot center points in the initial virtual image plane.
Specifically, the coordinate u' on the virtual image plane of each pixel coordinate u on the target surface is calculated from the first calibration parameters, and the synthesized profile data {u, i} are replaced by {u', i}, where i is the gray value of the target surface imaging. Taking {u', i} as input, the center point uc' corresponding to each position is calculated, giving the mapped spot center point. The center point uc' for each position is calculated as follows: taking the profile data as input, the valid waveform data are first screened according to a set threshold and the width of the waveform's noise transition zone, and a center point is computed from them, completing the coarse center-point extraction; the original waveform data are then expanded symmetrically about the coarse center point, and a fine center point is extracted from the expanded waveform, yielding the spot center point for each position, i.e. the spot center point in the initial virtual image plane.
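The coarse-then-fine extraction described above can be illustrated as follows (one plausible reading, not the patent's exact algorithm: a thresholded intensity centroid gives the coarse center, and a second centroid over a window symmetric about it refines the result; `threshold` and `half_width` are assumed parameters):

```python
import numpy as np

def coarse_center(u, i, threshold):
    """Coarse spot center: intensity-weighted centroid of samples above threshold."""
    u = np.asarray(u, float); i = np.asarray(i, float)
    mask = i > threshold
    w = i[mask]
    return float((u[mask] * w).sum() / w.sum())

def fine_center(u, i, threshold, half_width):
    """Refine: recompute the centroid over a window symmetric about the coarse
    center, which suppresses the bias from an asymmetric waveform tail."""
    uc = coarse_center(u, i, threshold)
    u = np.asarray(u, float); i = np.asarray(i, float)
    mask = np.abs(u - uc) <= half_width
    w = i[mask]
    return float((u[mask] * w).sum() / w.sum())

# Symmetric Gaussian-like profile: both estimates land on the true peak.
u = np.arange(0, 21, dtype=float)
i = np.exp(-0.5 * ((u - 10.0) / 2.0) ** 2)
uc = fine_center(u, i, threshold=0.1, half_width=4.5)   # close to the peak at u = 10
```

On a waveform skewed by the tilted target surface the two estimates would differ, which is exactly why the patent computes the fine center only after mapping to the virtual image plane.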
S33: and performing data fitting based on the light spot center point in the initial virtual image plane, the calibration table displacement parameter and the first calibration parameter to obtain a second calibration parameter.
Steps S32 and S33 constitute the second calibration, which accurately calibrates the equivalent focal length and the included angle between the laser line and the optical axis, and preliminarily calibrates the coordinate point where the optical axis meets the target surface.
Referring to fig. 11, fig. 11 shows a specific embodiment of step S33, which is described in detail as follows:
s331: and taking the light spot center point in the initial virtual image plane, the calibration table displacement data, the included angle between the laser line and the optical axis in the first calibration parameters, the equivalent focal length, the included angle between the optical axis and the virtual image plane and the design value of the coordinate point from the optical axis to the target plane as initial values, and inputting the initial values into the direct-injection type triangular ranging model for data fitting to obtain the second calibration parameters.
Specifically, since the virtual image plane is parallel to the object plane, the included angle between the optical axis and the virtual image plane is φ_bar = π/2 − θ_bar. In the second calibration, the three variables u0 (the coordinate point where the optical axis meets the target surface), the equivalent focal length l2, and the included angle θ between the laser line and the optical axis are the quantities to be calibrated. The spot center points in the initial virtual image plane, the calibration table displacement data, the included angle θ_bar and the equivalent focal length l2_bar from the first calibration parameters, the included angle φ_bar = π/2 − θ_bar between the optical axis and the virtual image plane, and the design value of the coordinate point u0 are taken as initial values and input into the direct-injection triangulation model for data fitting, yielding the second calibration parameters u0 = u0_new, l2 = l2_new, and θ = θ_new and completing the second calibration step.
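The fitting in this step can be mimicked with any small least-squares routine. Below is a self-contained sketch in which all numbers, the fixed l1, and the pattern-search routine are illustrative assumptions; with φ = π/2 − θ the direct-injection model reduces to y = l1·cos θ·x / (l2·sin θ − x), which is what the toy model uses:

```python
import numpy as np

DU, L1 = 0.005, 50.0  # assumed pixel pitch (mm/px) and fixed distance l1 (mm)

def model_y(u, u0, l2, theta):
    """Direct-injection model on the virtual image plane (phi = pi/2 - theta):
    y = l1*cos(theta)*x / (l2*sin(theta) - x), with x = (u - u0)*du."""
    x = (u - u0) * DU
    return L1 * np.cos(theta) * x / (l2 * np.sin(theta) - x)

def refine(params0, residual, steps, rounds=20):
    """Pattern search: walk each parameter while the squared residual keeps
    decreasing, then halve all step sizes and repeat."""
    p = np.asarray(params0, float)
    steps = np.asarray(steps, float)
    best = float(np.sum(residual(p) ** 2))
    for _ in range(rounds):
        for k in range(len(p)):
            for sign in (1.0, -1.0):
                while True:
                    q = p.copy()
                    q[k] += sign * steps[k]
                    err = float(np.sum(residual(q) ** 2))
                    if err < best:
                        p, best = q, err
                    else:
                        break
        steps = steps * 0.5
    return p

# Synthetic calibration data from "true" parameters; design values as the initial guess.
u = np.linspace(100.0, 900.0, 17)
y_meas = model_y(u, 512.0, 25.0, 0.6)
residual = lambda p: model_y(u, p[0], p[1], p[2]) - y_meas
p_fit = refine([500.0, 24.0, 0.55], residual, steps=[8.0, 1.0, 0.05])
```

The same routine applies unchanged to the first and third calibration steps; only which parameters are held constant and which are left free changes between steps.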
S34: and reversely mapping the center point coordinates of the initial virtual image plane in the second calibration parameters to the original image plane to obtain reverse mapping center points, and carrying out parameter calibration based on the reverse mapping center points to obtain the target optical parameters.
This step of the embodiment accurately calibrates the coordinate point where the optical axis meets the target surface and the included angle between the optical axis and the target surface. The virtual image plane center-point coordinates from the second calibration are mapped back to the original image plane. The back-mapped center points, the calibration table displacement data, and the equivalent focal length and laser-line-to-optical-axis angle from the second calibration parameters are taken as constants, while the coordinate point where the optical axis meets the target surface and the included angle between the optical axis and the target surface are taken as variables; these are input into the direct-injection triangulation model for calibration, yielding the accurately calibrated coordinate point and angle and, finally, the target optical parameters, i.e. 4 parameters in total.
S4: and mapping the contour data to a virtual image plane based on the target optical parameters, and calculating to obtain a light spot center point in the virtual image plane.
Specifically, by correcting the profile data in this way, the embodiment of the present application effectively suppresses the waveform tilt caused by the shift-axis projection, improves the calculation accuracy of the center point, and thereby improves the accuracy of the subsequent displacement calculation.
Referring to fig. 12 and 13, fig. 12 shows a specific implementation manner of step S4, and fig. 13 is a schematic diagram of mapping contour data provided by an embodiment of the present application from an actual image plane to a virtual image plane, which is described in detail below:
S41: and mapping the contour data to a virtual image plane based on the target optical parameters so as to correct pixel coordinates on the target plane to obtain a contour curve.
S42: and calculating a light spot center point according to the contour curve to obtain the light spot center point in the virtual image plane.
Specifically, the profile data are mapped to the virtual image plane based on the target optical parameters so as to correct the pixel coordinates on the target surface and obtain a profile curve; the specific correction method follows step S3. The mapping result is shown in fig. 13. In the embodiment of the present application, the calibrated parameters are used to map the profile data to the virtual image plane, suppressing the waveform asymmetry error caused by the shift-axis projection and improving the accuracy of displacement calculation. After the profile curve is obtained, the spot center point is calculated from it, yielding the spot center point in the virtual image plane.
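One concrete way to realize such a mapping is a perspective transfer through the lens center: a pixel on the tilted target surface is carried along its ray onto a virtual plane that meets the optical axis at the same point but is tilted parallel to the object plane. The 2-D sketch below is an illustration under assumed conventions (angles measured from the optical axis, both planes anchored where the axis meets the detector), not the patent's exact formula:

```python
import numpy as np

def map_to_virtual_plane(u, u0, du, l2, phi, phi_v):
    """Map pixel u on a detector plane tilted at angle phi to the optical axis
    onto a virtual plane tilted at phi_v. Both planes pass through the point
    where the axis meets the detector, at distance l2 from the lens center
    (the origin); each pixel is moved along the ray from the origin through it."""
    s = (np.asarray(u, float) - u0) * du              # signed offset on the detector
    lam = l2 / (l2 + s * (np.cos(phi) - np.sin(phi) / np.tan(phi_v)))
    t = lam * s * np.sin(phi) / np.sin(phi_v)         # signed offset on the virtual plane
    return u0 + t / du                                # back to pixel units

# Two sanity checks: the axis pixel u0 is a fixed point, and phi_v == phi
# reproduces the identity mapping.
u = np.array([100.0, 512.0, 900.0])
u_same = map_to_virtual_plane(u, 512.0, 0.005, 25.0, 1.2, 1.2)
```

In step S4 the profile samples {u, i} would be turned into {u', i} by applying such a mapping with the calibrated angles, after which the center point is computed on the virtual plane.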
S5: and calculating the light spot displacement distance based on the light spot center point in the virtual image plane and the target optical parameter data to obtain a target displacement value.
Specifically, the spot center point in the virtual image plane and the target optical parameters are input into the direct-injection triangulation model (the model with the parameters corrected in the preceding steps) to calculate the spot displacement distance and obtain the target displacement value.
In the embodiment of the present application, a long-spot laser and a shift-axis projection optical layout are adopted, and imaging data are received through a multi-row area-array CMOS to obtain grayscale image data; multi-row data synthesis is performed on the grayscale image data to obtain profile data; optical parameter calibration is performed based on the profile data and the calibration table displacement data in a multi-step calibration manner to obtain the target optical parameters; the profile data are mapped to the virtual image plane based on the target optical parameters, and the spot center point coordinates in the virtual image plane are calculated; and the spot displacement distance is calculated based on the spot center point in the virtual image plane and the target optical parameter data to obtain the target displacement value. By adopting the long-spot laser and the shift-axis projection optical layout and receiving the imaging data of the long light spot on a multi-row area-array CMOS, the embodiment effectively avoids the measurement errors of a point laser displacement sensor; by performing multi-step calibration on the profile data, accurate correction of the profile data is achieved, which helps improve the accuracy of displacement calculation.
Referring to fig. 14, as an implementation of the method shown in fig. 1, the present application provides an embodiment of a displacement calculation device based on multi-step calibration; the device embodiment corresponds to the method embodiment shown in fig. 1, and the device is specifically applicable to various electronic devices.
As shown in fig. 14, the displacement calculation device based on multi-step calibration of the present embodiment includes: a data collection unit 61, a profile data generation unit 62, a target optical parameter calibration unit 63, a virtual image plane light spot center calculation unit 64, and a displacement calculation unit 65, wherein:
a data collection unit 61, configured to adopt a long-spot laser and a shift-axis projection optical layout, and receive imaging data through a multi-row area array CMOS to obtain gray-scale image data;
A contour data generating unit 62 for performing a multi-line data synthesis process based on the gray-scale image data to obtain contour data;
a target optical parameter calibration unit 63, configured to perform optical parameter calibration based on the profile data and the calibration table displacement data in a multi-step calibration manner to obtain target optical parameters;
a virtual image plane spot center calculating unit 64, configured to map the profile data to the virtual image plane based on the target optical parameters and calculate the spot center point coordinates in the virtual image plane;
a displacement calculation unit 65, configured to calculate the spot displacement distance based on the spot center point in the virtual image plane and the target optical parameter data to obtain a target displacement value.
Further, the target optical parameter calibration unit 63 includes:
the first calibration unit is used for calculating a light spot center point based on the profile data, and inputting the light spot center point, the calibration table displacement data and the optical system design parameters into a direct-incidence triangulation ranging model for parameter calibration to obtain first calibration parameters, wherein the optical system design parameters comprise the coordinate point from the optical axis to the target surface, the included angle between the laser line and the optical axis, the included angle between the optical axis and the target surface, and the equivalent focal length;
the coordinate mapping unit is used for mapping the contour data to the virtual image plane based on the first calibration parameters and calculating the light spot center point in the virtual image plane, so as to obtain the light spot center point in the initial virtual image plane;
The second calibration unit is used for carrying out data fitting based on the light spot center point in the initial virtual image plane, the calibration table displacement parameter and the first calibration parameter to obtain a second calibration parameter;
and the third calibration unit is used for inversely mapping the center point coordinates of the initial virtual image plane under the second calibration parameters back to the original image plane to obtain an inverse-mapped center point, and performing parameter calibration based on the inverse-mapped center point to obtain the target optical parameters.
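The three calibration steps above form a fixed pipeline: a first fit against the design parameters, a refit on the virtual image plane, and a final fit after inverse mapping. The following sketch only illustrates that control flow; every function body is a hypothetical stand-in, not an algorithm disclosed by the patent:

```python
# Hypothetical orchestration of the multi-step calibration; all functions
# below are stand-in stubs for illustration, not APIs from the patent.
def step1_fit(centers, stage, design):        # -> first calibration parameters
    return dict(design, fitted=True)

def map_to_virtual_plane(contours, params):   # -> contours on the virtual plane
    return contours

def step2_fit(virtual_contours, stage, params):  # -> second calibration parameters
    return dict(params, refined=True)

def step3_inverse_map_and_fit(params):        # -> target optical parameters
    return dict(params, final=True)

def multi_step_calibration(contours, centers, stage, design):
    p1 = step1_fit(centers, stage, design)
    virt = map_to_virtual_plane(contours, p1)
    p2 = step2_fit(virt, stage, p1)
    return step3_inverse_map_and_fit(p2)

print(multi_step_calibration([], [], [], {"l": 25.0})["final"])  # → True
```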
Further, the coordinate mapping unit includes:
the mapping pixel coordinate generating unit is used for calculating the coordinate of the contour data mapped to the virtual image plane based on the first calibration parameter to obtain the contour data of the virtual image plane;
and the spot center point calculating unit is used for recalculating the spot center points of all the positions based on the contour data of the virtual image plane to obtain the spot center points in the initial virtual image plane.
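The patent does not specify the centroiding algorithm used to recompute the spot centers; a common choice for this task is an intensity-weighted centroid, which the sketch below assumes:

```python
import numpy as np

def spot_center(coords, intensity):
    # intensity-weighted centroid of a contour curve on the virtual image plane
    # (assumed interpretation; the patent leaves the centroid method unspecified)
    w = np.asarray(intensity, dtype=float)
    c = np.asarray(coords, dtype=float)
    return float(np.sum(c * w) / np.sum(w))

# a symmetric spot profile yields a center at the middle coordinate
print(spot_center([0.0, 1.0, 2.0], [1.0, 2.0, 1.0]))  # → 1.0
```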
Further, the second calibration unit includes:
the data fitting unit is used for taking, as initial values, the light spot center point in the initial virtual image plane, the calibration table displacement data, and the following quantities from the first calibration parameters: the included angle between the laser line and the optical axis, the equivalent focal length, the included angle between the optical axis and the virtual image plane, and the design value of the coordinate point from the optical axis to the target surface; these initial values are input into the direct-incidence triangulation ranging model for data fitting to obtain the second calibration parameters.
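The second-step fit can be pictured as a least-squares refinement of the model parameters against the calibration table displacements, starting from the first-step values. In the sketch below the model form (a textbook direct-incidence triangulation relation), the object-distance parameter `a`, and all numeric values are assumptions for illustration, not quantities from this patent:

```python
import numpy as np

def model(x, l, theta, a):
    # assumed direct-incidence triangulation relation (textbook form)
    return a * x / (l * np.sin(theta) - x * np.cos(theta))

# synthetic "calibration table" data generated from known ground-truth optics
x = np.linspace(-0.5, 0.5, 21)               # spot shifts on the detector (mm)
y = model(x, 25.0, np.radians(30.0), 50.0)   # stage displacements (mm)

# refine the object-distance parameter a around its design value by least
# squares, holding l and theta at their first-step estimates for brevity
candidates = np.linspace(45.0, 55.0, 1001)
errors = [np.sum((model(x, 25.0, np.radians(30.0), a) - y) ** 2)
          for a in candidates]
a_fit = float(candidates[int(np.argmin(errors))])
print(round(a_fit, 6))  # → 50.0
```

A production implementation would refine all parameters jointly (e.g. with a nonlinear least-squares solver); the grid search here only keeps the example self-contained.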
Further, the direct-incidence triangulation ranging model is computed as follows:
x = (u - u0) * du;
wherein y is the displacement of the object, x is the distance the light spot moves on the detector, l is the equivalent focal length, φ is the included angle between the optical axis and the target surface, θ is the included angle between the laser line and the optical axis, u is the pixel coordinate of the light spot on the detector, u0 is the coordinate on the detector of the ray along the optical axis, and du is the pixel pitch, a constant.
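The surviving equation only converts the pixel coordinate into a physical shift on the detector; the equation giving y appears to have been lost in extraction. As an illustration only, the sketch below combines that conversion with the standard textbook direct-incidence triangulation relation y = a·x / (l·sinθ − x·cosθ), where the relation's form, the object-distance parameter `a`, and all numeric values are assumptions rather than parameters taken from this patent:

```python
import math

def spot_shift(u, u0, du):
    # x = (u - u0) * du : pixel coordinate to physical shift on the detector
    return (u - u0) * du

def displacement(u, u0, du, l, theta, a):
    # textbook direct-incidence triangulation relation (assumed form):
    #   y = a * x / (l * sin(theta) - x * cos(theta))
    # l: equivalent focal length, theta: laser-to-optical-axis angle,
    # a: object distance along the optical axis (hypothetical parameter)
    x = spot_shift(u, u0, du)
    return a * x / (l * math.sin(theta) - x * math.cos(theta))

# a spot landing on the optical-axis pixel corresponds to zero displacement
print(displacement(640, 640, 0.005, 25.0, math.radians(30), 50.0))  # → 0.0
```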
Further, the virtual image plane spot center calculating unit 64 includes:
the contour curve generating unit is used for mapping the contour data to the virtual image plane based on the target optical parameters so as to correct the pixel coordinates on the target surface and obtain a contour curve;
and the light spot center point calculating unit is used for calculating the light spot center point according to the contour curve to obtain the light spot center point in the virtual image plane.
Further, the contour data generating unit 62 includes:
The overexposure data identification unit is used for identifying overexposure data of each column in the gray map data, and removing the overexposure data from the gray map data to obtain non-overexposure data;
And the data synthesis unit is used for carrying out average calculation on the non-overexposure data so as to synthesize a plurality of rows of the non-overexposure data into a single row of the non-overexposure data and obtain the profile data.
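One plausible reading of these two units, sketched below, is a per-column mask of saturated pixels followed by a column-wise mean that collapses the multi-row image into a single-row contour; the saturation threshold and array layout are assumptions:

```python
import numpy as np

def synthesize_contour(gray, saturation=255):
    # gray: (rows, cols) gray-scale image of the long light spot
    valid = gray < saturation                  # drop overexposed pixels
    sums = np.where(valid, gray, 0).sum(axis=0)
    counts = valid.sum(axis=0)
    # column-wise mean over the non-overexposed rows -> single-row contour;
    # fully saturated columns are left at 0
    return np.divide(sums, counts, out=np.zeros(gray.shape[1]), where=counts > 0)

img = np.array([[10, 255],
                [20, 100]], dtype=np.uint8)
print(synthesize_contour(img).tolist())  # → [15.0, 100.0]
```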
In the embodiment of the application, a long-spot laser and a shift-axis projection optical layout are adopted, and imaging data are received by a multi-row area-array CMOS to obtain gray-scale image data; multi-row data synthesis is performed on the gray-scale image data to obtain contour data; optical parameter calibration is performed based on the contour data and the calibration table displacement data in a multi-step calibration manner to obtain target optical parameters; the contour data are mapped to a virtual image plane based on the target optical parameters, and the light spot center point coordinates in the virtual image plane are calculated; finally, the light spot displacement distance is calculated based on the light spot center point in the virtual image plane and the target optical parameters to obtain the target displacement value. By adopting the long-spot laser and the shift-axis projection optical layout and receiving the imaging data of the long light spot on a multi-row area-array CMOS, the embodiment effectively avoids the measurement error of a point laser displacement sensor; the multi-step calibration of the contour data realizes accurate correction of the contour data, which is beneficial to improving the accuracy of the displacement calculation.
In order to solve the above technical problem, an embodiment of the application further provides an electronic device. Referring specifically to fig. 15, fig. 15 is a basic structural block diagram of the electronic device according to this embodiment.
The electronic device 7 comprises a memory 71, a processor 72 and a network interface 73 communicatively connected to each other via a system bus. It is noted that only an electronic device 7 having the three components, memory 71, processor 72 and network interface 73, is shown in the figure, but it should be understood that not all of the illustrated components must be implemented, and more or fewer components may be implemented instead. It will be understood by those skilled in the art that the electronic device here is a device capable of automatically performing numerical calculation and/or information processing according to preset or stored instructions, and its hardware includes, but is not limited to, a microprocessor, an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA), a Digital Signal Processor (DSP), an embedded device, and the like.
The electronic device may be a computing device such as a desktop computer, a notebook computer, a palm computer, a cloud server, and the like. The electronic device can perform man-machine interaction with a user through a keyboard, a mouse, a remote controller, a touch pad or voice control equipment and the like.
The memory 71 includes at least one type of readable storage medium, including flash memory, hard disk, multimedia card, card-type memory (e.g., SD or DX memory), Random Access Memory (RAM), Static Random Access Memory (SRAM), Read-Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Programmable Read-Only Memory (PROM), magnetic memory, magnetic disk, optical disk, and the like. In some embodiments, the memory 71 may be an internal storage unit of the electronic device 7, such as a hard disk or memory of the electronic device 7. In other embodiments, the memory 71 may also be an external storage device of the electronic device 7, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card provided on the electronic device 7. Of course, the memory 71 may also comprise both an internal storage unit of the electronic device 7 and an external storage device. In this embodiment, the memory 71 is generally used for storing the operating system and the various application software installed in the electronic device 7, such as the program code of the displacement calculation method based on multi-step calibration. In addition, the memory 71 may also be used to temporarily store various types of data that have been output or are to be output.
The processor 72 may, in some embodiments, be a Central Processing Unit (CPU), a controller, a microcontroller, a microprocessor, or another data processing chip. The processor 72 is typically used to control the overall operation of the electronic device 7. In this embodiment, the processor 72 is configured to execute the program code stored in the memory 71 and to process data, for example the program code of the displacement calculation method based on multi-step calibration described above, so as to implement the various embodiments of that method.
The network interface 73 may comprise a wireless network interface or a wired network interface, which network interface 73 is typically used for establishing a communication connection between the electronic device 7 and other electronic devices.
The present application further provides another embodiment, namely a computer-readable storage medium storing a computer program executable by at least one processor, so that the at least one processor performs the steps of the displacement calculation method based on multi-step calibration described above.
From the above description of the embodiments, it will be clear to those skilled in the art that the method of the above embodiments may be implemented by software plus a necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is preferred. Based on such an understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) comprising several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to perform the method of the embodiments of the present application.
It is apparent that the above-described embodiments are only some, not all, of the embodiments of the present application; the preferred embodiments shown in the drawings do not limit the scope of the claims. The application may be embodied in many different forms; these embodiments are provided so that the present disclosure will be thorough and complete. Although the application has been described in detail with reference to the foregoing embodiments, those skilled in the art may still modify the technical solutions described in the foregoing embodiments or substitute equivalents for some of their features. All equivalent structures made using the content of the specification and the drawings of the application, whether applied directly or indirectly in other related technical fields, likewise fall within the scope of the application.

Claims (10)

1. The displacement calculation method based on multi-step calibration is characterized by comprising the following steps of:
adopting a long-spot laser and a shift-axis projection optical layout, and receiving imaging data through a multi-row area-array CMOS to obtain gray-scale image data;
performing multi-line data synthesis processing based on the gray map data to obtain contour data;
Performing optical parameter calibration based on the profile data and the calibration table displacement data by adopting a multi-step calibration mode to obtain target optical parameters;
Mapping the contour data to a virtual image plane based on the target optical parameters, and calculating to obtain a light spot center point in the virtual image plane;
And calculating the light spot displacement distance based on the light spot center point in the virtual image plane and the target optical parameter data to obtain a target displacement value.
2. The displacement calculation method based on multi-step calibration according to claim 1, wherein the performing optical parameter calibration based on the profile data and calibration table displacement data in the multi-step calibration manner to obtain the target optical parameter comprises:
calculating a light spot center point based on the profile data, and inputting the light spot center point, the calibration table displacement data and the optical system design parameters into a direct-incidence triangulation ranging model for parameter calibration to obtain first calibration parameters, wherein the optical system design parameters comprise the coordinate point from the optical axis to the target surface, the included angle between the laser line and the optical axis, the included angle between the optical axis and the target surface, and the equivalent focal length;
mapping the contour data to a virtual image plane based on the first calibration parameters, and calculating to obtain a light spot center point in the virtual image plane to obtain a light spot center point in an initial virtual image plane;
performing data fitting based on the light spot center point in the initial virtual image plane, the calibration table displacement parameter and the first calibration parameter to obtain a second calibration parameter;
and inversely mapping the center point coordinates of the initial virtual image plane under the second calibration parameters back to the original image plane to obtain an inverse-mapped center point, and performing parameter calibration based on the inverse-mapped center point to obtain the target optical parameters.
3. The displacement calculation method based on multi-step calibration according to claim 2, wherein the mapping the profile data to a virtual image plane based on the first calibration parameter, and calculating to obtain a spot center point in the virtual image plane, to obtain a spot center point in an initial virtual image plane, includes:
Calculating coordinates of the contour data mapped to the virtual image plane based on the first calibration parameters to obtain contour data of the virtual image plane;
and recalculating the light spot center points of all the positions based on the contour data of the virtual image plane to obtain the light spot center points in the initial virtual image plane.
4. The displacement calculation method based on multi-step calibration according to claim 2, wherein the performing data fitting based on the spot center point in the initial virtual image plane, the calibration table displacement parameter and the first calibration parameter to obtain a second calibration parameter includes:
taking, as initial values, the light spot center point in the initial virtual image plane, the calibration table displacement data, and the following quantities from the first calibration parameters: the included angle between the laser line and the optical axis, the equivalent focal length, the included angle between the optical axis and the virtual image plane, and the design value of the coordinate point from the optical axis to the target surface; and inputting these initial values into the direct-incidence triangulation ranging model for data fitting to obtain the second calibration parameters.
5. The displacement calculation method based on multi-step calibration according to claim 2, wherein the direct-incidence triangulation ranging model is computed as follows:
x = (u - u0) * du;
wherein y is the displacement of the object, x is the distance the light spot moves on the detector, l is the equivalent focal length, φ is the included angle between the optical axis and the target surface, θ is the included angle between the laser line and the optical axis, u is the pixel coordinate of the light spot on the detector, u0 is the coordinate on the detector of the ray along the optical axis, and du is the pixel pitch, a constant.
6. The displacement calculation method based on multi-step calibration according to claim 2, wherein the mapping the profile data to a virtual image plane based on the target optical parameter and calculating to obtain a spot center point in the virtual image plane includes:
mapping the contour data to a virtual image plane based on the target optical parameters so as to correct pixel coordinates on a target surface to obtain a contour curve;
and calculating a light spot center point according to the contour curve to obtain the light spot center point in the virtual image plane.
7. The displacement calculation method based on multi-step calibration according to any one of claims 1 to 6, wherein the performing multi-line data synthesis processing based on the gray map data to obtain contour data includes:
Identifying overexposure data of each column in the gray scale image data, and removing the overexposure data from the gray scale image data to obtain non-overexposure data;
and carrying out average calculation on the non-overexposure data so as to synthesize a plurality of rows of the non-overexposure data into a single row of the non-overexposure data, thereby obtaining the profile data.
8. A displacement computing device based on multi-step calibration, comprising:
the data collection unit is used for adopting a long-spot laser and a shift-axis projection optical layout, and receiving imaging data through a multi-row area-array CMOS to obtain gray-scale image data;
The contour data generating unit is used for carrying out multi-row data synthesis processing based on the gray map data to obtain contour data;
The target optical parameter calibration unit is used for performing optical parameter calibration based on the profile data and the calibration table displacement data in a multi-step calibration mode to obtain target optical parameters;
the virtual image plane light spot center calculating unit is used for mapping the contour data to the virtual image plane based on the target optical parameters and calculating the light spot center point coordinates in the virtual image plane;
And the displacement calculation unit is used for calculating the displacement distance of the light spot based on the light spot center point in the virtual image plane and the target optical parameter data to obtain a target displacement value.
9. An electronic device comprising a memory and a processor, the memory having stored therein a computer program, the processor implementing the multi-step calibration based displacement calculation method according to any one of claims 1 to 7 when the computer program is executed.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored thereon a computer program which, when executed by a processor, implements the multi-step calibration-based displacement calculation method according to any one of claims 1 to 7.
CN202410228162.4A 2024-02-29 2024-02-29 Displacement calculation method, device, equipment and medium based on multi-step calibration Pending CN118049922A (en)

Publication: CN118049922A, published 2024-05-17. Family ID: 91046233.

Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination