CN114792345A - Calibration method based on monocular structured light system - Google Patents

Calibration method based on monocular structured light system

Info

Publication number
CN114792345A
CN114792345A (application CN202210733202.1A)
Authority
CN
China
Prior art keywords
camera
projector
vector
formula
parameters
Prior art date
Legal status
Granted
Application number
CN202210733202.1A
Other languages
Chinese (zh)
Other versions
CN114792345B (en)
Inventor
杨静
时岭
高勇
Current Assignee
Hangzhou Lanxin Technology Co ltd
Original Assignee
Hangzhou Lanxin Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Lanxin Technology Co., Ltd.
Priority to CN202210733202.1A
Publication of CN114792345A
Application granted
Publication of CN114792345B
Status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/85 Stereo camera calibration
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T7/50 Depth or shape recovery
    • G06T7/521 Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Optics & Photonics (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention relates to a calibration method based on a monocular structured light system, which comprises the following steps: after the projector projects the coding pattern, the control device decodes the coded image collected by the camera to obtain the code values of all pixel points in the coded image; feature points of the calibration board are extracted from a calibration board image collected by the camera, the depth values of all pixel points within the calibration board range are obtained according to the basic parameters of the calibration board, and the code values of all pixel points within the calibration board range are obtained; according to the mathematical relation among the code values, pixel point information and depth values of all pixel points within the calibration board range, distortion models of the camera vector and the projector vector are constructed, the constructed models are solved based on the depth values of the pixel points, and the camera internal parameters, projector internal parameters and system external parameters are obtained. With this calibration method, the three-dimensional reconstruction process requires only a table lookup and simple calculations to obtain the reconstruction result, and is therefore both fast and accurate.

Description

Calibration method based on monocular structured light system
Technical Field
The invention relates to the technical field of robots, in particular to a monocular structured light system-based calibration method and an image three-dimensional reconstruction method.
Background
At present, surface structured light three-dimensional imaging is a non-contact, high-precision three-dimensional measurement technology widely applied in measurement, inspection, automation and other fields. Compared with a binocular structured light system, monocular structured light offers lower cost, faster algorithms and fewer blind areas while still ensuring sufficient data quality, and is therefore receiving ever wider attention and application. Unlike the mature and reliable calibration methods available for binocular systems, the calibration and algorithms of a monocular structured light system, chiefly the calibration of the projector parameters, are complex; although much research exists, the procedures are either complicated and hard to operate or deliver low precision, and cannot meet practical requirements. Based on this, the industry has proposed calibration and three-dimensional reconstruction methods based on monocular structured light systems.
Specifically, several calibration and reconstruction methods are currently available. One uses a high-precision translation stage to collect plane data at different distances as reference surfaces, from which standard data are recovered during reconstruction; its calibration precision therefore depends on the precision of the translation stage, large-field and long-distance calibration is difficult, and on-site calibration is impossible. Another calibrates the camera and the projector step by step through their homography matrices; it is unsuitable for systems that can only project one-dimensional line structured light, such as galvanometers and MEMS micro-galvanometers, errors readily accumulate across the separate calibration steps, and the homography matrices of the camera and projector are computed with limited precision. The eight-parameter method directly computes the relation between the projection phase and the camera coordinates; it is simple to operate and compute, but handles projector distortion poorly and suffers large errors.
Therefore, a calibration method and a three-dimensional reconstruction method based on a monocular structured light system, which are simple in operation, high in calibration accuracy and high in processing speed, are urgently needed.
Disclosure of Invention
Technical problem to be solved
In view of the above disadvantages and shortcomings of the prior art, the present invention provides a calibration method based on a monocular structured light system and a three-dimensional reconstruction method of an image.
(II) technical scheme
In order to achieve the purpose, the invention adopts the main technical scheme that:
in a first aspect, an embodiment of the present invention provides a calibration method based on a monocular structured light system, where the monocular structured light system includes: a projector for projecting a coding pattern, a camera for acquiring the coded image, and a control device connecting the projector and the camera, the calibration method comprising:
s10, after the projector projects the coding pattern, the control device decodes the coding image collected by the camera to obtain the coding values of all the pixel points in the coding image;
s20, the control device extracts the characteristic points of the calibration board based on the calibration board image collected by the camera, and obtains the depth values of all the pixel points in the calibration board range according to the basic parameters of the calibration board, and obtains the coding values of all the pixel points in the calibration board range based on the coding values of all the pixel points in the coding image;
s30, according to the mathematical relation among the encoding values, the pixel point information and the depth values of all the pixel points in the range of the calibration board, constructing a distortion model of a camera vector and a projector vector with unknown quantity, and solving the constructed model based on the depth values of the pixel points to obtain camera internal parameters, projector internal parameters and system external parameters;
the distortion model of the projector vector is a virtual model according to a three-dimensional reconstruction result;
the system external parameter is an external parameter rotation matrix when a projector vector is converted into a camera coordinate system
Figure 100002_DEST_PATH_IMAGE001
And external reference translation vector
Figure 452051DEST_PATH_IMAGE002
Optionally, the S30 includes:
s31, acquiring a distortion model of the camera vector, and determining camera internal parameters serving as unknown quantities;
s32, constructing a virtual distortion model of the projector vector, and determining projector internal parameters serving as unknown quantities;
the first dimension $u_p$ of the distortion model of the projector vector is the code/phase dimension, and the second dimension $v_p$ is the virtual row number;
s33, converting the projector vector into a camera coordinate system, and determining the virtual parameter of the projector vector according to the geometric constraint relation;
s34, constructing a solving equation of the depth value of the intersection point of the camera vector and the projector vector based on the distortion model of the camera vector and the distortion model of the virtual projector vector;
and S35, solving an equation based on the depth value of each pixel point in the calibration plate range to obtain camera internal parameters, projector internal parameters and system external parameters.
Optionally, the S30 includes:
s31, for the camera vector $\vec{V}_c$, the two-dimensional coordinates of each camera pixel point are expressed by the standard camera model of formula (2):

$$\vec{V}_c = (u, v, 1)^T,\qquad u = \frac{j - c_x}{f_x},\qquad v = \frac{i - c_y}{f_y} \qquad \text{formula (2)}$$

in formula (2), $u$ is the component of the camera vector $\vec{V}_c$ in the horizontal direction, $v$ is the component of the camera vector in the vertical direction, $i$ is the row of the current pixel point, $j$ is the column of the current pixel point, $f_x$ is the focal length of the camera in the horizontal direction, $f_y$ is the focal length of the camera in the vertical direction, and $(c_x, c_y)$ are the coordinates of the optical center;

based on formula (2), the distortion model of the two-dimensional coordinates in the camera vector $\vec{V}_c$ is expressed as formula (3):

$$\begin{aligned} u' &= u\,(1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + 2p_1 u v + p_2\,(r^2 + 2u^2) \\ v' &= v\,(1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + p_1\,(r^2 + 2v^2) + 2p_2 u v \end{aligned} \qquad \text{formula (3)}$$

$k_1, k_2, k_3$ are radial distortion coefficients, $p_1, p_2$ are tangential distortion coefficients, and $r$ ($r^2 = u^2 + v^2$) is an intermediate variable; the camera vector is then taken as $\vec{V}_c = (u', v', 1)^T$; $f_x, f_y, c_x, c_y, k_1, k_2, k_3, p_1, p_2$ are the internal parameters of the camera vector $\vec{V}_c$ and are unknown quantities;
s32, constructing the distortion model of the projector vector $\vec{V}_p$ in the virtual projector coordinate system;

the standard projector model is:

$$u_p = \frac{p - c_{px}}{f_{px}},\qquad v_p = \frac{y - c_{py}}{f_{py}} \qquad \text{formula (4)}$$

according to the basic parameters of the projector, the distortion model of the virtual projector vector $\vec{V}_p$ is formula (5) or formula (5a):

$$u_p' = u_p\,(1 + k_{p1} u_p^2 + k_{p2} u_p^4),\qquad v_p' = v_p \qquad \text{formula (5)}$$

alternatively, formula (5a) is:

$$\begin{aligned} u_p' &= u_p\,(1 + k_{p1} r_p^2 + k_{p2} r_p^4 + k_{p3} r_p^6) + 2 q_1 u_p v_p + q_2\,(r_p^2 + 2u_p^2) \\ v_p' &= v_p\,(1 + k_{p1} r_p^2 + k_{p2} r_p^4 + k_{p3} r_p^6) + q_1\,(r_p^2 + 2v_p^2) + 2 q_2 u_p v_p \end{aligned} \qquad \text{formula (5a)}$$

in formula (5) and formula (5a), $u_p$ is the component of the projector vector $\vec{V}_p$ in the code/phase dimension, $v_p$ is the component of the projector vector in the virtual-row-number dimension, $p$ is the code value corresponding to each pixel point in the projector vector, $y$ is the virtual row number corresponding to each pixel point in the projector vector, $f_{px}$ is the focal length of the projector in the horizontal direction, $f_{py}$ is the focal length of the projector in the vertical direction, and $(c_{px}, c_{py})$ are the coordinates of the optical center; $k_{p1}, k_{p2}, k_{p3}, q_1, q_2$ are the distortion coefficients of the projector and $r_p$ ($r_p^2 = u_p^2 + v_p^2$) is an intermediate variable; the projector vector is taken as $\vec{V}_p = (u_p', v_p', 1)^T$, and its internal parameters are unknown quantities;
s33, converting the projector vector $\vec{V}_p$ into the camera coordinate system, the coordinate system conversion relation being:

$$\vec{V}_p^{\,c} = R\,\vec{V}_p \qquad \text{formula (6)}$$

the camera vector $\vec{V}_c$ in the camera coordinate system, the projector vector $\vec{V}_p^{\,c}$ in the camera coordinate system and the translation vector $T$ are coplanar:

$$T \cdot \left(\vec{V}_c \times \vec{V}_p^{\,c}\right) = 0 \qquad \text{formula (1)}$$

$R$ is the external rotation matrix and $T$ is the external translation vector; $R$ and $T$ are the system external parameters and are unknown quantities; the virtual row number $y$ is the virtual parameter of the projector vector, determined from this geometric constraint relation after the projector vector is converted into the camera coordinate system.
Optionally, the S34 includes:
calculating a depth value according to formula (7) based on the camera vector and the projector vector acquired through formula (1) to formula (6);
solving the unknown quantities from the condition that the depth values obtained in S20 are consistent with the depth values obtained by formula (7), and obtaining camera internal parameters, projector internal parameters and system external parameters;

$$z = \frac{t_x - t_z\,u_p^{\,c}}{u - u_p^{\,c}} \qquad \text{formula (7)}$$

where the projector vector in the camera coordinate system is scaled so that $\vec{V}_p^{\,c} = (u_p^{\,c}, v_p^{\,c}, 1)^T$, and $T = (t_x, t_y, t_z)^T$ is the translation vector.
Optionally, the calibration board is a checkerboard calibration board, and the basic parameters of the calibration board include: calibrating the total number of the checkerboards in the board and calibrating the size information of each check in the board;
the coded image is an image coded by one or more of the following coding modes: Gray code, the phase shift method, and multi-frequency heterodyne.
Optionally, the S20 includes:
the control device extracts the checkerboard corner points serving as feature points from the checkerboard image, obtains the rows and columns of the corner points as two-dimensional image coordinates, establishes a world coordinate system, and obtains the three-dimensional coordinates of the feature points according to the basic parameters of the checkerboard;
and acquires the homography matrix between the calibration board and the camera based on the two-dimensional image coordinates and the three-dimensional coordinates of the feature points, and calculates the depth values of all pixel points within the checkerboard range according to the homography matrix.
In a second aspect, an embodiment of the present invention further provides a method for three-dimensional reconstruction of an image, including:
acquiring an image of a target to be detected, and acquiring a camera vector and a projector vector of each pixel point under a camera coordinate system based on pre-acquired camera internal parameters, projector internal parameters and system external parameters;
based on the camera vector and the projector vector, obtaining the three-dimensional space coordinates of each pixel point in the image;
the pre-acquired camera internal parameters, projector internal parameters and system external parameters are calibrated by using any one of the calibration methods of the first aspect.
Optionally, based on a camera vector and a projector vector, obtaining three-dimensional space coordinates of each pixel point in the image includes:
calculating the depth value $z$ of the intersection of the camera vector and the projector vector according to formula (7):

$$z = \frac{t_x - t_z\,u_p^{\,c}}{u - u_p^{\,c}} \qquad \text{formula (7)}$$

wherein $\vec{V}_c = (u, v, 1)^T$ is the camera vector in the camera coordinate system, $\vec{V}_p^{\,c} = (u_p^{\,c}, v_p^{\,c}, 1)^T$ is the projector vector in the camera coordinate system, and $T = (t_x, t_y, t_z)^T$ is the translation vector;

obtaining the spatial coordinate value of each pixel point in the image under the camera coordinate system based on formula (8):

$$(X, Y, Z) = (z\,u,\; z\,v,\; z) \qquad \text{formula (8)}$$

where $z$ is the depth value, $u$ is the component of the camera vector $\vec{V}_c$ in the horizontal direction, and $v$ is the component in the vertical direction.
Optionally, acquiring an image of the target to be measured and acquiring the camera vector and the projector vector of each pixel point under the camera coordinate system based on the pre-acquired camera internal parameters, projector internal parameters and system external parameters includes:
searching the camera vector and the projector vector of each pixel point under the camera coordinate system in a pre-acquired lookup table;
wherein, in the calibration method, a correspondence table between pixel points and the parameters is constructed according to the camera internal parameters, projector internal parameters and system external parameters corresponding to each pixel point,
the correspondence table being the lookup table used during three-dimensional reconstruction to quickly retrieve the camera parameters, projector parameters and system external parameters for each pixel point.
In a third aspect, an embodiment of the present invention further provides a control apparatus, including: a memory for storing a computer program and a processor for executing the computer program stored in the memory and performing the steps of the method of any of the first or second aspects.
(III) advantageous effects
The calibration process of the calibration method of the invention is simple to operate: it is exactly the same as the calibration process of a conventional binocular camera, requires only one calibration board, and can be completed synchronously with hand-eye calibration, which is very convenient. Only one-dimensional line structured light needs to be projected during calibration, so the working mode is fully consistent with normal operation and no additional configuration is needed. The method of the invention solves the projector distortion problem well and is suitable for different projector types, including DMD projectors, galvanometers, MEMS micro-galvanometers and other one-dimensional line structured light projection. All parameters are calibrated synchronously, there is no accumulated error, and the precision is higher than that of conventional methods.
The method provided by the embodiment of the invention obtains the three-dimensional reconstruction result with only a table lookup and simple calculations during the three-dimensional reconstruction process, and is therefore both fast and accurate.
Drawings
Fig. 1 is a schematic flowchart of a calibration method based on a monocular structured light system according to an embodiment of the present invention;
fig. 2 is a schematic flow chart of a method for three-dimensional reconstruction of an image according to an embodiment of the present invention.
Detailed Description
For a better understanding of the present invention, reference will now be made in detail to the present embodiments of the invention, which are illustrated in the accompanying drawings.
In this embodiment, vertical stripes can be projected without requiring horizontal-stripe line structured light; it should be understood that the line structured light is structured light carrying encoded images.
Specifically, the controller controls the camera to collect coded images and calibration plate images of more than one calibration plate position, then, a model is built according to the principle of coplanarity of camera vectors, projector vectors and translation vectors, unknown quantity parameters are solved, and camera internal parameters, projector internal parameters and system external parameters are obtained. The virtual projector internal parameters of the distortion of the projector are fully considered in the process of solving the unknown quantity, so that the three-dimensional reconstruction accuracy based on the camera internal parameters, the projector internal parameters and the system external parameters is higher.
The method can be suitable for different projector models, including DMD projectors, galvanometers, MEMS micro galvanometer and other one-dimensional line structured light projection, all parameters are calibrated synchronously, and no accumulated error exists.
Example one
As shown in fig. 1, an embodiment of the present invention provides a calibration method based on a monocular structured light system, where an execution subject of the method of this embodiment may be any computing device/control device, and the monocular structured light system of this embodiment includes: the projector for projecting the one-dimensional coding pattern, the camera for collecting the coding pattern, and the control device for connecting the projector and the camera, the specific implementation method comprises the following steps:
and S10, after the projector projects the coding pattern, the control device decodes the coding image collected by the camera to obtain the coding values of all the pixel points in the coding image.
This step can be implemented in a conventional manner, and the projector can project one or more encoding patterns.
S20, the control device extracts the characteristic points of the calibration board based on the calibration board image collected by the camera, and obtains the depth values of all the pixel points in the calibration board range according to the basic parameters of the calibration board, and obtains the code values of all the pixel points in the calibration board range based on the code values of all the pixel points in the code image.
Generally, the projector projects a fully lit pattern, coded patterns and the like according to the instructions of the control device; to obtain the calibration board image, the projector projects the fully lit pattern so that the camera acquires the calibration board image.
The feature points in this step may be predefined corner points.
For example, the control device extracts the checkerboard corner points as the feature points from the checkerboard image, obtains the rows and columns of the corner points as the two-dimensional image coordinates, establishes a world coordinate system, and obtains the three-dimensional coordinates of the feature points according to the basic parameters of the checkerboard;
and acquires the homography matrix between the calibration board and the camera based on the two-dimensional image coordinates and the three-dimensional coordinates of the feature points, and calculates the depth values of all pixel points within the checkerboard range according to the homography matrix.
S30, according to the mathematical relationship among the coding values, pixel point information (such as the row, column and identification of the pixel point) and depth values of all pixel points in the range of the calibration plate, constructing a distortion model of a camera vector and a projector vector with unknown quantity, and solving the constructed model based on the depth values of the pixel points to obtain camera internal parameters, projector internal parameters and system external parameters;
the distortion model of the projector vector is a virtual model according to a three-dimensional reconstruction result;
the system external parameters are an external parameter rotation matrix R and an external parameter translation vector when the projector vector is converted into a camera coordinate system
Figure 549539DEST_PATH_IMAGE059
The pixel information may include the pixel information mentioned in the following embodiments
Figure 475907DEST_PATH_IMAGE011
Figure 555858DEST_PATH_IMAGE012
Figure 191370DEST_PATH_IMAGE013
Figure 989562DEST_PATH_IMAGE014
And
Figure 922883DEST_PATH_IMAGE015
Figure 857341DEST_PATH_IMAGE016
and the like, and the present embodiment is not limited thereto, and is configured and adjusted according to actual needs.
In addition, the distortion model of the projector vector of this embodiment may take the same form as the distortion model of the camera vector, with the first dimension of the virtual model being the code/phase dimension and the second dimension being the virtual row number. On this basis, the embodiment mainly relies on the distortion model of the virtual projector vector to ensure the precision of the three-dimensional reconstruction result.
It can be understood that, to make the subsequent three-dimensional reconstruction more convenient and faster, the following step S40 may be performed after step S30 to obtain a lookup table relating the parameters of the camera vector and the projector vector to the pixel points.
S40, acquiring the camera internal parameters, projector internal parameters and system external parameters, and constructing a correspondence table between pixel points and the parameters, where the correspondence table is the lookup table used during three-dimensional reconstruction to quickly retrieve the camera internal parameters, projector internal parameters and system external parameters for each pixel point.
It should be noted that, in theory, the calibration step of the calibration method of this embodiment can be completed with the help of a single coded image, yielding the camera internal parameters, the projector internal parameters and the system external parameters, and further the lookup table of all parameters and pixel points. With this calibration method, the three-dimensional reconstruction process requires only a table lookup and simple calculations to obtain the reconstruction result, and is therefore both fast and accurate.
In practical applications, to improve the accuracy of the acquired parameters, the position of the calibration board can be adjusted, the camera internal parameters, projector internal parameters and system external parameters corresponding to calibration boards at different positions can be acquired, and their average values computed to obtain the most accurate parameter information, thereby producing a lookup table that is more accurate and faster to consult.
Example two
This embodiment describes the method of the invention in detail. In this embodiment, the 3D camera (camera for short) and the projector are both connected to the controller, and all calculation processes are completed in the controller. Specifically, a calibration method is provided first, which produces a lookup table of the camera vectors and projector vectors; then, based on the lookup table, this embodiment also provides a method for three-dimensional reconstruction of an image.
Before the method of this embodiment is executed, a calibration board can be placed at one or more designated positions within the common field of view of the camera and the projector; the projector projects the one-dimensional coded structured light pattern and defines the coding direction, so that the placement directions of the camera and the projector are consistent with the coding direction. In practical applications, the calibration board in this embodiment may be a checkerboard calibration board, and its basic parameters may include the overall size of the calibration board, the total number of checkers inside it, the size of each checker, and the like. The current position information of the calibration board is calculated based on the encoded image and the size information of the calibration board.
The calibration method based on the monocular structured light system of the present embodiment may include the following steps:
and step 01, connecting the camera and the projector with a controller, controlling the projector to project the coded patterns by the controller, synchronously acquiring images by the camera, and decoding the acquired coded images by the controller.
In this embodiment, the controller may implement decoding by using a plurality of encoded images to obtain an encoded value of each pixel point.
For example, the encoded image is an image encoded by one or more of Gray code, the phase shift method, and multi-frequency heterodyne.
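For illustration, a minimal Python sketch of conventional Gray-code decoding is given below; it assumes the captured pattern images have already been binarized and ordered from the most significant bit, and all function and variable names are hypothetical rather than taken from the patent.

```python
import numpy as np

def decode_gray_code(imgs):
    """Decode a stack of binarized Gray-code images (MSB first) into
    per-pixel integer code values.

    imgs: list of HxW uint8/bool arrays, 1 where the projected stripe is bright.
    Returns an HxW int32 array of code values.
    """
    gray = np.stack([img.astype(np.int32) & 1 for img in imgs], axis=0)
    # Convert Gray code to binary: b[0] = g[0]; b[k] = b[k-1] XOR g[k].
    binary = np.zeros_like(gray)
    binary[0] = gray[0]
    for k in range(1, len(imgs)):
        binary[k] = binary[k - 1] ^ gray[k]
    # Pack the bit planes into one integer code value per pixel.
    code = np.zeros(gray.shape[1:], dtype=np.int32)
    for k in range(len(imgs)):
        code = (code << 1) | binary[k]
    return code
```

Phase-shift or multi-frequency heterodyne decoding would replace this step with a phase computation, but the output, one code value per pixel, is the same.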
And step 02, the controller identifies the boundary information of the calibration plate in the coded image, and acquires the coded value of each pixel point corresponding to the calibration plate according to the identified boundary information.
Certainly, in other embodiments, the controller may directly identify the code value of each pixel point corresponding to the calibration board in the coded image, without a process of identifying the boundary information and then determining the pixel point of the calibration board.
The coded image in this step may be an image coded in a gray code manner, and thus, the controller may directly obtain the coded value in the existing manner, and this step does not detail the process of obtaining the coded value.
Step 03: in this step, the Zhang Zhengyou calibration method can be adopted to obtain the depth value of each pixel point within the calibration board range.
Of course, for ease of understanding, a brief explanation is provided here. The controller establishes a world coordinate system according to the basic parameters of the calibration board and obtains the three-dimensional coordinates of the feature points, where the feature points can be the corner points of the checkerboard. The feature points of the calibration board are extracted from the image collected by the camera, their two-dimensional coordinates are obtained, and the homography matrix between the checkerboard and the camera is obtained using the PnP solution principle. The three-dimensional coordinates of each pixel in the camera coordinate system, i.e. the depth values, can then be obtained through the homography matrix.
That is, feature points (checkerboard corner points) are extracted from the checkerboard image photographed by the camera and their two-dimensional coordinates (the rows and columns of the corner points) are obtained; a world coordinate system is established and the three-dimensional coordinates of the feature points are obtained from the checkerboard parameters; the homography matrix is calculated from the three-dimensional and two-dimensional coordinates, and the three-dimensional coordinates of all pixels within the checkerboard range are calculated from it. These pixel points are used in the subsequent calibration: if only the feature points were used, the number of points would be too small to solve for all the parameters, which is why the three-dimensional coordinates of all pixel points within the calibration board range must be calculated.
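A minimal sketch of this step under common assumptions: OpenCV detects the checkerboard, the board pose is recovered with solvePnP, and each pixel's depth follows from intersecting its viewing ray with the board plane. The intrinsics K and dist are treated as known here purely for illustration; the patent instead solves them jointly in step 04.

```python
import cv2
import numpy as np

def board_pixel_depths(img_gray, K, dist, pattern=(9, 6), square=0.02):
    """Estimate the depth of every pixel on the calibration board.

    img_gray: grayscale view of the checkerboard; K, dist: camera intrinsics,
    assumed pre-estimated for this sketch only.
    """
    ok, corners = cv2.findChessboardCorners(img_gray, pattern)
    assert ok, "checkerboard not found"
    # 3-D corner coordinates in the board (world) frame, on the z = 0 plane.
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square
    ok, rvec, tvec = cv2.solvePnP(objp, corners, K, dist)
    R, _ = cv2.Rodrigues(rvec)
    # Board plane in the camera frame: n^T X = d, with normal n = R[:, 2].
    n = R[:, 2]
    d = float(n @ tvec.ravel())
    def depth(u, v):
        # Intersect the viewing ray of pixel (u, v) with the board plane;
        # ray[2] == 1, so the scale factor is directly the depth z.
        ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
        return d / float(n @ ray)
    return depth
```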
It can be understood that, since there is a specified mathematical relationship between the encoded value of each pixel and the three-dimensional coordinates, in step 04, a mathematical model is established based on the mathematical relationship, and relevant parameters, including camera intrinsic parameters, projector intrinsic parameters and system extrinsic parameters, are obtained.
And step 04, calculating information of camera internal parameters, projector internal parameters and external parameters used in three-dimensional reconstruction by the controller based on the acquired three-dimensional coordinates and the principle that the current camera vector, projector vector and translation vector are coplanar.
It will be appreciated that the calibration process is the process of constructing an equation, i.e., an equation of the depth value z as shown in the following formula (7).
For a better understanding of step 04, it is explained below through sub-steps 041 to 044. It can be understood that each camera vector corresponds to a pixel point within the calibration board range; the camera vector and the projector vector come in pairs, the projector vector is generated virtually, and together the two vectors correspond to one pixel point.
And a sub-step 041, calculating a projector vector corresponding to each pixel according to the principle that the camera vector, the projector vector and the translation vector are coplanar.
The camera vector $\vec{V}_c$ in the camera coordinate system, the projector vector $\vec{V}_p^{\,c}$ in the camera coordinate system and the translation vector $T$ are coplanar:

$$T \cdot \left(\vec{V}_c \times \vec{V}_p^{\,c}\right) = 0 \qquad \text{formula (1)}$$

For the camera vector $\vec{V}_c$, the two-dimensional coordinates of the corresponding pixel can be expressed by the standard camera model, as in formula (2):

$$\vec{V}_c = (u, v, 1)^T,\qquad u = \frac{j - c_x}{f_x},\qquad v = \frac{i - c_y}{f_y} \qquad \text{formula (2)}$$

In formula (2), $u$ is the component of the camera vector in the horizontal direction, $v$ is the component in the vertical direction, $i$ is the row of the current pixel point, $j$ is the column of the current pixel point, $f_x$ is the focal length of the camera in the horizontal direction, $f_y$ is the focal length in the vertical direction, and $(c_x, c_y)$ are the coordinates of the optical center; the optical center is an inherent structural property of the camera.

Based on formula (2), the distortion model of the two-dimensional coordinates in the camera vector $\vec{V}_c$ is expressed as formula (3):

$$\begin{aligned} u' &= u\,(1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + 2p_1 u v + p_2\,(r^2 + 2u^2) \\ v' &= v\,(1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + p_1\,(r^2 + 2v^2) + 2p_2 u v \end{aligned} \qquad \text{formula (3)}$$

$k_1, k_2, k_3$ are radial distortion coefficients, $p_1, p_2$ are tangential distortion coefficients, and $r$ ($r^2 = u^2 + v^2$) is an intermediate variable; the camera vector is then taken as $\vec{V}_c = (u', v', 1)^T$; $f_x, f_y, c_x, c_y, k_1, k_2, k_3, p_1, p_2$ are the internal parameters of the camera vector $\vec{V}_c$, unknown quantities to be solved, i.e. calibrated.
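As a concrete reading of formulas (2) and (3) as reconstructed above, the following sketch maps a pixel at row i, column j to its distorted camera vector; the parameter names (fx, fy, cx, cy, k1..k3, p1, p2) follow the reconstruction and are assumptions.

```python
import numpy as np

def camera_vector(i, j, fx, fy, cx, cy, k1, k2, k3, p1, p2):
    """Formula (2): normalize the pixel; formula (3): apply distortion."""
    u = (j - cx) / fx              # horizontal component
    v = (i - cy) / fy              # vertical component
    r2 = u * u + v * v             # intermediate variable r^2
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    ud = u * radial + 2 * p1 * u * v + p2 * (r2 + 2 * u * u)
    vd = v * radial + p1 * (r2 + 2 * v * v) + 2 * p2 * u * v
    return np.array([ud, vd, 1.0])  # camera vector V_c in the camera frame
```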
Sub-step 042, defining the projector vector $\vec{V}_p$ in the projector coordinate system as a virtual model, with the first dimension $u_p$ being the code dimension and the second dimension $v_p$ being the virtual row number. Both dimensions are dimensions in the projector coordinate system.

The virtual model may be a model without assigned rows and columns, and the first dimension may also be understood as a phase dimension; both are predefined according to user requirements.

With reference to formula (2) above, define the standard projector model of the projector vector $\vec{V}_p$ as formula (4):

$$\vec{V}_p = (u_p, v_p, 1)^T,\qquad u_p = \frac{p - c_{px}}{f_{px}},\qquad v_p = \frac{y - c_{py}}{f_{py}} \qquad \text{formula (4)}$$

Depending on the basic parameters of the projector, the distortion model of the virtual projector vector is either formula (5) or formula (5a):

$$u_p' = u_p\,(1 + k_{p1} u_p^2 + k_{p2} u_p^4),\qquad v_p' = v_p \qquad \text{formula (5)}$$

alternatively, formula (5a) is:

$$\begin{aligned} u_p' &= u_p\,(1 + k_{p1} r_p^2 + k_{p2} r_p^4 + k_{p3} r_p^6) + 2 q_1 u_p v_p + q_2\,(r_p^2 + 2u_p^2) \\ v_p' &= v_p\,(1 + k_{p1} r_p^2 + k_{p2} r_p^4 + k_{p3} r_p^6) + q_1\,(r_p^2 + 2v_p^2) + 2 q_2 u_p v_p \end{aligned} \qquad \text{formula (5a)}$$

In formula (5) and formula (5a), $u_p$ is the component of the projector vector in the code/phase dimension, $v_p$ is the component in the virtual-row-number dimension, $p$ is the code value corresponding to each pixel point in the projector vector, $y$ is the virtual row number corresponding to each pixel point, and $(c_{px}, c_{py})$ are the coordinates of the optical center; $k_{p1}, k_{p2}, k_{p3}, q_1, q_2$ are the distortion coefficients of the projector, $r_p$ ($r_p^2 = u_p^2 + v_p^2$) is an intermediate variable, and $f_{px}, f_{py}, c_{px}, c_{py}$ together with the distortion coefficients are the internal parameters of the projector vector, unknown quantities to be calibrated.

Here, $y$ is the virtual parameter of the projector vector and must be calculated from formula (1). That is, $p$ is derived from the coded image taken by the camera: the camera vector $\vec{V}_c$ is calculated for a given pixel point, $p$ is the code value read at that pixel point, and $y$ is then calculated according to formula (1). Each pixel has distinct $u$, $v$ and $p$ values, and each pixel corresponds to its own dynamically solvable value of $y$.
If the projector is a DMD projector, its distortion model can be considered similar to the camera distortion model described above, as shown in formula (5a); the distortion quantities in this case include radial and tangential distortion parameters.
If the projector is a MEMS micro-galvanometer, its line structured light has no radial distortion, and mainly the nonlinearity in the coding direction needs to be considered; the distortion model of the virtual projector vector in this case is formula (5).
Sub-step 043: the projector vector $\vec{V}_p$ in the projector coordinate system must be converted into the projector vector $\vec{V}_p^{\,c}$ in the camera coordinate system; the coordinate system conversion relation is formula (6):

$$\vec{V}_p^{\,c} = R\,\vec{V}_p \qquad \text{formula (6)}$$

$R$ is the external rotation matrix, an unknown quantity to be calibrated; $T$ is the external translation vector, also an unknown quantity to be calibrated.
Sub-step 044: calculating a depth value according to formula (7), based on the camera vector and the projector vector acquired through formula (1) to formula (6);
solving the unknown quantities from the condition that the depth values obtained in S20 are consistent with the depth values obtained by formula (7), and obtaining the camera internal parameters, projector internal parameters and system external parameters;

$$z = \frac{t_x - t_z\,u_p^{\,c}}{u - u_p^{\,c}} \qquad \text{formula (7)}$$

where the projector vector in the camera coordinate system is scaled so that $\vec{V}_p^{\,c} = (u_p^{\,c}, v_p^{\,c}, 1)^T$, and $T = (t_x, t_y, t_z)^T$.
That is, at least as many pixel points as there are parameters to be solved are selected; their depth values serve as the known $z$ values on one side of the equation, their two-dimensional coordinates and code values supply the known quantities on the other side, and the system is solved to obtain the camera internal parameters, projector internal parameters and system external parameters.
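One way to realize this solving step is nonlinear least squares over all unknowns, with the residual defined as the difference between the board-derived depth and the formula (7) depth; the sketch below assumes hypothetical helpers unpack and projector_vector_cam that evaluate formulas (4) to (6) plus formula (1), and camera_vector from the earlier sketch.

```python
import numpy as np
from scipy.optimize import least_squares

def calibrate(pixels, codes, depths, x0):
    """Solve all unknowns at once.

    pixels: (N, 2) array of (row, col); codes: (N,) code values p;
    depths: (N,) depth values from the calibration-board homography (step 03);
    x0: initial parameter vector packed as
        [camera intrinsics | projector intrinsics | rotation (Rodrigues) | T].
    unpack() and projector_vector_cam() are hypothetical helpers: the latter
    returns the projector vector in the camera frame scaled so that its
    third component is 1.
    """
    def residuals(x):
        cam, proj, rvec, T = unpack(x)
        res = []
        for (i, j), p, z_ref in zip(pixels, codes, depths):
            Vc = camera_vector(i, j, *cam)                 # formulas (2)-(3)
            Vpc = projector_vector_cam(p, Vc, proj, rvec, T)
            z = (T[0] - T[2] * Vpc[0]) / (Vc[0] - Vpc[0])  # formula (7)
            res.append(z - z_ref)
        return np.asarray(res)

    return least_squares(residuals, x0, method="lm").x
```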
further, camera internal parameters, projector internal parameters and system external parameters are obtained to construct a corresponding relation table of pixel points and each parameter, and a lookup table facilitating parameter and pixel point lookup is obtained.
The calibration process of the method is simple to operate: it is exactly the same as the calibration process of a conventional binocular camera, requires only one calibration board, and can be completed synchronously with hand-eye calibration, which is very convenient. Only one-dimensional line structured light needs to be projected during calibration, so the working mode is fully consistent with normal operation and no additional configuration is needed. The projector distortion problem is solved well, and the method is suitable for different projector types, including DMD projectors, galvanometers, MEMS micro-galvanometers and other one-dimensional line structured light projection. All parameters are calibrated synchronously, there is no accumulated error, and the precision is higher than that of conventional methods.
EXAMPLE III
The embodiment further provides a method for three-dimensional reconstruction of an image, an implementation subject of which may be any computing device/controller, as shown in fig. 2, the method for three-dimensional reconstruction may include the following steps:
and A01, acquiring an image of the target to be measured, and acquiring a camera vector and a projector vector of each pixel point under a camera coordinate system based on the pre-acquired camera internal parameters, projector internal parameters and system external parameters.
The pre-acquired camera internal parameters, projector internal parameters and system external parameters are calibrated using the calibration method of either the first or the second embodiment.
A02, obtaining the three-dimensional space coordinates of each pixel point in the image based on the camera vector and the projector vector of each pixel point in the camera coordinate system;
for example, according to formula (7), a depth value z of the intersection of the camera vector and the projector vector is calculated;
Figure 605329DEST_PATH_IMAGE102
formula (7);
wherein, the first and the second end of the pipe are connected with each other,
Figure 296817DEST_PATH_IMAGE103
is a camera vector in a camera coordinate system,
Figure DEST_PATH_IMAGE104
is the projector vector in the camera coordinate system,
Figure 387133DEST_PATH_IMAGE105
is a translation vector;
based on formula (8), obtaining a spatial coordinate value of each pixel point in the image under the camera coordinate system:
Figure DEST_PATH_IMAGE106
formula (8);
z is a depth value and is a depth value,
Figure 834295DEST_PATH_IMAGE107
expressed as camera vectors
Figure DEST_PATH_IMAGE108
The component in the horizontal direction is that of,
Figure 368175DEST_PATH_IMAGE109
expressed as camera vectors
Figure 799157DEST_PATH_IMAGE108
The component in the vertical direction.
It can be understood that the spatial coordinates of each pixel point are the final result to be obtained by the 3D camera, and based on the foregoing calibration method, the reconstruction process of the embodiment is simple in calculation and fast in speed.
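Putting the pieces together, here is a sketch of the reconstruction loop using the lookup-table layout assumed above; distort_code (formulas (4) and (5)) is a hypothetical helper, and invalid pixels are assumed to be marked with negative code values.

```python
import numpy as np

def reconstruct(code_img, lut, proj, R, T, fpy, cpy):
    """Per-pixel triangulation into an HxWx3 point cloud in the camera frame."""
    H, W = code_img.shape
    cloud = np.full((H, W, 3), np.nan)
    for i in range(H):
        for j in range(W):
            p = code_img[i, j]
            if p < 0:                        # invalid / undecoded pixel
                continue
            u, v, c1, c2, c3 = lut[i, j]
            up = distort_code(p, proj)       # formulas (4)-(5), assumed helper
            vp = -(up * c1 + c3) / c2        # formula (1) solved for v_p
            Vpc = R @ np.array([up, vp, 1.0])
            Vpc = Vpc / Vpc[2]               # scale the third component to 1
            z = (T[0] - T[2] * Vpc[0]) / (u - Vpc[0])   # formula (7)
            cloud[i, j] = (z * u, z * v, z)             # formula (8)
    return cloud
```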
The present embodiment further provides a control device, including: a memory and a processor; the processor is configured to execute the computer program stored in the memory to implement the steps of performing the calibration method described in any of the first embodiment and the second embodiment or performing the three-dimensional reconstruction method provided in the third embodiment.
It should be noted that in the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention can be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the terms first, second, third and the like are for convenience only and do not denote any order. These words are to be understood as part of the name of the component.
Furthermore, it should be noted that in the description of the present specification, the description of the term "one embodiment", "some embodiments", "examples", "specific examples" or "some examples", etc., means that a specific feature, structure, material or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Moreover, various embodiments or examples and features of various embodiments or examples described in this specification can be combined and combined by one skilled in the art without being mutually inconsistent.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, the claims should be construed to include preferred embodiments and all such variations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various modifications and variations can be made in the present invention without departing from the spirit or scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention should also include such modifications and variations.

Claims (10)

1. A calibration method based on a monocular structured light system, characterized in that the monocular structured light system comprises: a projector for projecting a coding pattern, a camera for acquiring the coded image, and a control device connecting the projector and the camera, the calibration method comprising:
s10, after the projector projects the coding pattern, the control device decodes the coding image collected by the camera to obtain the coding values of all the pixel points in the coding image;
s20, the control device extracts the characteristic points of the calibration board based on the calibration board image collected by the camera, and obtains the depth values of all the pixel points in the calibration board range according to the basic parameters of the calibration board, and obtains the code values of all the pixel points in the calibration board range based on the code values of all the pixel points in the code image;
s30, according to the mathematical relation among the encoding values, the pixel point information and the depth values of all the pixel points in the range of the calibration board, constructing a distortion model of a camera vector and a projector vector with unknown quantity, and solving the constructed model based on the depth values of the pixel points to obtain camera internal parameters, projector internal parameters and system external parameters;
the distortion model of the projector vector is a virtual model according to a three-dimensional reconstruction result;
the system external parameter is an external parameter rotation matrix when a projector vector is converted into a camera coordinate system
Figure DEST_PATH_IMAGE001
And the external reference translation vector
Figure 527706DEST_PATH_IMAGE002
2. The calibration method according to claim 1, wherein the S30 includes:
s31, obtaining a distortion model of the camera vector, and determining camera internal parameters as unknown quantities;
s32, constructing the virtual distortion model of the projector vector, and determining projector internal parameters as unknown quantities;
the first dimension $u_p$ of the distortion model of the projector vector being the code/phase dimension, and the second dimension $v_p$ being the virtual row number;
s33, converting the projector vector into a camera coordinate system, and determining virtual parameters of the projector vector according to the geometric constraint relation;
s34, constructing a solving equation of the depth value of the intersection point of the camera vector and the projector vector based on the distortion model of the camera vector and the distortion model of the virtual projector vector;
and S35, solving an equation based on the depth value of each pixel point in the calibration board range to obtain camera internal parameters, projector internal parameters and system external parameters.
3. The calibration method according to claim 2, wherein the S30 includes:
s31, for the camera vector $\vec{V}_c$, the two-dimensional coordinates of each camera pixel point are expressed by the standard camera model of formula (2):

$$\vec{V}_c = (u, v, 1)^T,\qquad u = \frac{j - c_x}{f_x},\qquad v = \frac{i - c_y}{f_y} \qquad \text{formula (2)}$$

in formula (2), $u$ is the component of the camera vector $\vec{V}_c$ in the horizontal direction, $v$ is the component of the camera vector in the vertical direction, $i$ is the row of the current pixel point, $j$ is the column of the current pixel point, $f_x$ is the focal length of the camera in the horizontal direction, $f_y$ is the focal length of the camera in the vertical direction, and $(c_x, c_y)$ are the coordinates of the optical center;

based on formula (2), the distortion model of the two-dimensional coordinates in the camera vector $\vec{V}_c$ is expressed as formula (3):

$$\begin{aligned} u' &= u\,(1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + 2p_1 u v + p_2\,(r^2 + 2u^2) \\ v' &= v\,(1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + p_1\,(r^2 + 2v^2) + 2p_2 u v \end{aligned} \qquad \text{formula (3)}$$

$k_1, k_2, k_3$ are radial distortion coefficients, $p_1, p_2$ are tangential distortion coefficients, and $r$ ($r^2 = u^2 + v^2$) is an intermediate variable; the camera vector is then taken as $\vec{V}_c = (u', v', 1)^T$; $f_x, f_y, c_x, c_y, k_1, k_2, k_3, p_1, p_2$ are the internal parameters of the camera vector $\vec{V}_c$ and are unknown quantities;
s32, constructing the distortion model of the projector vector $\vec{V}_p$ in the virtual projector coordinate system;

the standard projector model is:

$$u_p = \frac{p - c_{px}}{f_{px}},\qquad v_p = \frac{y - c_{py}}{f_{py}} \qquad \text{formula (4)}$$

according to the basic parameters of the projector, the distortion model of the virtual projector vector $\vec{V}_p$ is formula (5) or formula (5a):

$$u_p' = u_p\,(1 + k_{p1} u_p^2 + k_{p2} u_p^4),\qquad v_p' = v_p \qquad \text{formula (5)}$$

alternatively, formula (5a) is:

$$\begin{aligned} u_p' &= u_p\,(1 + k_{p1} r_p^2 + k_{p2} r_p^4 + k_{p3} r_p^6) + 2 q_1 u_p v_p + q_2\,(r_p^2 + 2u_p^2) \\ v_p' &= v_p\,(1 + k_{p1} r_p^2 + k_{p2} r_p^4 + k_{p3} r_p^6) + q_1\,(r_p^2 + 2v_p^2) + 2 q_2 u_p v_p \end{aligned} \qquad \text{formula (5a)}$$

in formula (5) and formula (5a), $u_p$ is the component of the projector vector $\vec{V}_p$ in the code/phase dimension, $v_p$ is the component of the projector vector in the virtual-row-number dimension, $p$ is the code value corresponding to each pixel point in the projector vector, $y$ is the virtual row number corresponding to each pixel point in the projector vector, $f_{px}$ is the focal length of the projector in the horizontal direction, $f_{py}$ is the focal length of the projector in the vertical direction, and $(c_{px}, c_{py})$ are the coordinates of the optical center; $k_{p1}, k_{p2}, k_{p3}, q_1, q_2$ are the distortion coefficients of the projector and $r_p$ ($r_p^2 = u_p^2 + v_p^2$) is an intermediate variable; the projector vector is taken as $\vec{V}_p = (u_p', v_p', 1)^T$, and its internal parameters are unknown quantities;
s33, converting the projector vector $\vec{V}_p$ into the camera coordinate system, the coordinate system conversion relation being:

$$\vec{V}_p^{\,c} = R\,\vec{V}_p \qquad \text{formula (6)}$$

the camera vector $\vec{V}_c$ in the camera coordinate system, the projector vector $\vec{V}_p^{\,c}$ in the camera coordinate system and the translation vector $T$ being coplanar:

$$T \cdot \left(\vec{V}_c \times \vec{V}_p^{\,c}\right) = 0 \qquad \text{formula (1)}$$

$R$ being the external rotation matrix and $T$ the external translation vector; $R$ and $T$ being the system external parameters and unknown quantities; the virtual row number $y$ being the virtual parameter of the projector vector, determined from this geometric constraint relation after the projector vector is converted into the camera coordinate system.
4. The calibration method according to claim 3, wherein the S34 includes:
calculating a depth value according to formula (7) based on the camera vector and the projector vector acquired through formula (1) to formula (6);
solving the unknown quantities from the condition that the depth values obtained in S20 are consistent with the depth values obtained by formula (7), and obtaining camera internal parameters, projector internal parameters and system external parameters;

$$z = \frac{t_x - t_z\,u_p^{\,c}}{u - u_p^{\,c}} \qquad \text{formula (7)}$$

where $\vec{V}_p^{\,c} = (u_p^{\,c}, v_p^{\,c}, 1)^T$ is the projector vector in the camera coordinate system and $T = (t_x, t_y, t_z)^T$ is the translation vector.
5. Calibration method according to claim 1,
the calibration board is a checkerboard calibration board, and the basic parameters of the calibration board comprise: calibrating the total number of the checkerboards in the board and calibrating the size information of each check in the board;
the coded image is an image coded by one or more of the following coding modes: Gray code, the phase shift method, and multi-frequency heterodyne.
6. The calibration method according to claim 5, wherein the S20 includes:
the control device extracts the checkerboard corner points serving as the feature points from the checkerboard image, obtains the rows and columns of the corner points as two-dimensional coordinates, establishes a world coordinate system, and obtains the three-dimensional coordinates of the feature points according to the basic parameters of the checkerboard;
and acquires the homography matrix between the calibration board and the camera based on the two-dimensional image coordinates and the three-dimensional coordinates of the feature points, and calculates the depth values of all pixel points within the checkerboard range according to the homography matrix.
7. A method of three-dimensional reconstruction of an image, comprising:
acquiring an image of a target to be measured, and acquiring a camera vector and a projector vector of each pixel point under a camera coordinate system based on pre-acquired camera internal parameters, projector internal parameters and system external parameters;
based on the camera vector and a projector vector, obtaining three-dimensional space coordinates of each pixel point in the image;
the pre-acquired camera internal parameters, projector internal parameters and system external parameters are calibrated by the calibration method of any one of the claims 1 to 6.
8. The three-dimensional reconstruction method according to claim 7, wherein obtaining the three-dimensional space coordinates of each pixel point in the image based on the camera vector and the projector vector comprises:
calculating the depth value z of the intersection of the camera vector and the projector vector according to formula (7):

$z = \dfrac{\left( \vec{t} \times \vec{v}_p' \right) \cdot \left( \vec{v}_c \times \vec{v}_p' \right)}{\left\lVert \vec{v}_c \times \vec{v}_p' \right\rVert^2}$    formula (7);

wherein $\vec{v}_c$ is the camera vector in the camera coordinate system, $\vec{v}_p'$ is the projector vector in the camera coordinate system, and $\vec{t}$ is the translation vector;
obtaining the spatial coordinate value of each pixel point in the image under the camera coordinate system based on formula (8):

$x = z\,v_{c,x}, \qquad y = z\,v_{c,y}$    formula (8);

wherein z is the depth value, $v_{c,x}$ is the component of the camera vector $\vec{v}_c$ in the horizontal direction, and $v_{c,y}$ is the component of the camera vector $\vec{v}_c$ in the vertical direction.
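For illustration, formulas (7) and (8) amount to the closed-form triangulation below. Names are illustrative, and the camera vector is assumed normalized so that its third component equals 1 (so the third coordinate of the result is the depth z):

import numpy as np

def reconstruct_point(v_c, v_p_cam, t):
    # Camera ray: z * v_c with v_c = (vx, vy, 1); projector ray: t + s * v_p_cam.
    # Crossing z*v_c - s*v_p_cam = t with v_p_cam eliminates s and gives z.
    cxp = np.cross(v_c, v_p_cam)
    z = np.dot(np.cross(t, v_p_cam), cxp) / np.dot(cxp, cxp)   # formula (7)
    return z * np.asarray(v_c)                                 # formula (8)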
9. The three-dimensional reconstruction method according to claim 7, wherein acquiring the image of the target to be measured and acquiring the camera vector and the projector vector of each pixel point in the camera coordinate system based on the pre-acquired camera internal parameters, projector internal parameters and system external parameters comprises:
looking up the camera vector and the projector vector of each pixel point in the camera coordinate system in a pre-built lookup table;
wherein, during calibration, a correspondence table between pixel points and the respective parameters is constructed from the camera internal parameters, projector internal parameters and system external parameters corresponding to each pixel point,
and the correspondence table serves as a lookup table for quickly retrieving, during three-dimensional reconstruction, the camera parameters, projector parameters and system external parameters associated with each pixel point.
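An illustrative shape for such a lookup table is sketched below; it assumes the decoding yields a full projector pixel coordinate, which the claim does not specify:

import numpy as np

class RayLUT:
    # Camera rays and rotated projector rays are computed once from the
    # calibrated parameters, so reconstruction reduces to two table
    # fetches plus the formula (7)/(8) arithmetic.
    def __init__(self, K_cam, K_proj, R, t, cam_hw, proj_hw):
        self.cam_rays = self._grid_rays(np.linalg.inv(K_cam), cam_hw)
        # Express projector rays in the camera frame up front: v' = R v.
        self.proj_rays = self._grid_rays(np.linalg.inv(K_proj), proj_hw) @ R.T
        self.t = np.asarray(t, dtype=np.float64)

    @staticmethod
    def _grid_rays(K_inv, hw):
        h, w = hw
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        pix = np.stack([u.ravel(), v.ravel(), np.ones(u.size)])
        return (K_inv @ pix).T.reshape(h, w, 3)

    def rays(self, cam_row, cam_col, proj_row, proj_col):
        return self.cam_rays[cam_row, cam_col], self.proj_rays[proj_row, proj_col]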
10. A control device, comprising a memory and a processor, wherein the memory is configured to store a computer program, and the processor is configured to execute the computer program stored in the memory to perform the steps of the method according to any one of claims 1 to 9.
CN202210733202.1A 2022-06-27 2022-06-27 Calibration method based on monocular structured light system Active CN114792345B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210733202.1A CN114792345B (en) 2022-06-27 2022-06-27 Calibration method based on monocular structured light system


Publications (2)

Publication Number Publication Date
CN114792345A true CN114792345A (en) 2022-07-26
CN114792345B CN114792345B (en) 2022-09-27

Family

ID=82463574

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210733202.1A Active CN114792345B (en) 2022-06-27 2022-06-27 Calibration method based on monocular structured light system

Country Status (1)

Country Link
CN (1) CN114792345B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115546311A (en) * 2022-09-28 2022-12-30 中国传媒大学 Projector calibration method based on scene information
CN116091619A (en) * 2022-12-27 2023-05-09 北京纳通医用机器人科技有限公司 Calibration method, device, equipment and medium

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180322648A1 (en) * 2015-11-11 2018-11-08 Zhejiang Dahua Technology Co., Ltd. Methods and systems for binocular stereo vision
CN107945268A (en) * 2017-12-15 2018-04-20 深圳大学 A kind of high-precision three-dimensional method for reconstructing and system based on binary area-structure light
CN109506589A (en) * 2018-12-25 2019-03-22 东南大学苏州医疗器械研究院 A kind of measuring three-dimensional profile method based on light field imaging
CN111028297A (en) * 2019-12-11 2020-04-17 凌云光技术集团有限责任公司 Calibration method of surface structured light three-dimensional measurement system
CN113008163A (en) * 2021-03-01 2021-06-22 西北工业大学 Encoding and decoding method based on frequency shift stripes in structured light three-dimensional reconstruction system
CN112927340A (en) * 2021-04-06 2021-06-08 中国科学院自动化研究所 Three-dimensional reconstruction acceleration method, system and equipment independent of mechanical placement

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ANIL SINGH PARIHAR et al.: "Dimensional analysis of objects in a 2D image", 2017 8th International Conference on Computing, Communication and Networking Technologies (ICCCNT) *
SUN Qian et al.: "Three-dimensional reconstruction method of plants based on binocular vision and its application", Anhui Agricultural Sciences *

Also Published As

Publication number Publication date
CN114792345B (en) 2022-09-27

Similar Documents

Publication Publication Date Title
CN110276808B (en) Method for measuring unevenness of glass plate by combining single camera with two-dimensional code
US9965870B2 (en) Camera calibration method using a calibration target
US10916033B2 (en) System and method for determining a camera pose
CN105758426B (en) The combined calibrating method of the multisensor of mobile robot
JP4245963B2 (en) Method and system for calibrating multiple cameras using a calibration object
CN114792345B (en) Calibration method based on monocular structured light system
CN101697233B (en) Structured light-based three-dimensional object surface reconstruction method
US9183634B2 (en) Image processing apparatus and image processing method
Resch et al. On-site semi-automatic calibration and registration of a projector-camera system using arbitrary objects with known geometry
WO2020136523A1 (en) System and method for the recognition of geometric shapes
CN115187612A (en) Plane area measuring method, device and system based on machine vision
CN116758063B (en) Workpiece size detection method based on image semantic segmentation
Cauchois et al. Calibration of the omnidirectional vision sensor: SYCLOP
Tezaur et al. A new non-central model for fisheye calibration
CN114993207B (en) Three-dimensional reconstruction method based on binocular measurement system
CN115082538A (en) System and method for three-dimensional reconstruction of surface of multi-view vision balance ring part based on line structure light projection
Su et al. An automatic calibration system for binocular stereo imaging
CN114170321A (en) Camera self-calibration method and system based on distance measurement
CN110874820B (en) Material simulation deformation data acquisition method and device
CN113160393A (en) High-precision three-dimensional reconstruction method and device based on large field depth and related components thereof
CN112785685A (en) Assembly guiding method and system
Uyanik et al. A method for determining 3D surface points of objects by a single camera and rotary stage
CN104156962A (en) Method of calibrating spatial position relationship of camera and projector based on trapezoidal patterns
CN116233392B (en) Calibration method and device of virtual shooting system, electronic equipment and storage medium
Han et al. The Calibration of Kinect Camera Based on ARtoolkit

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant