CN114485479B - Structured light scanning and measuring method and system based on binocular camera and inertial navigation

Structured light scanning and measuring method and system based on binocular camera and inertial navigation

Info

Publication number
CN114485479B
CN114485479B
Authority
CN
China
Prior art keywords
coordinate system
camera
inertial navigation
structured light
plane
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210047496.2A
Other languages
Chinese (zh)
Other versions
CN114485479A (en)
Inventor
刘长英
侯禹光
安博文
刘洋
王浩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jilin University
Original Assignee
Jilin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jilin University
Priority to CN202210047496.2A
Publication of CN114485479A
Application granted
Publication of CN114485479B
Legal status: Active

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01B - MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 - Measuring arrangements characterised by the use of optical techniques
    • G01B11/24 - Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01B11/25 - Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object
    • G01B11/2518 - Projection by scanning of the object
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/20 - Analysis of motion
    • G06T7/277 - Analysis of motion involving stochastic approaches, e.g. using Kalman filters
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/70 - Determining position or orientation of objects or cameras
    • G06T7/73 - Determining position or orientation of objects or cameras using feature-based methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses a structured light scanning and measuring method and system based on a binocular camera and inertial navigation. The method comprises the following steps: establishing coordinate systems for the binocular camera, the planar target, the structured light sensor, and the inertial navigation respectively; continuously scanning the measured object with the structured light sensor to obtain multiple groups of profile data points of the measured object in the light plane coordinate system; for each group of contour data points, acquiring a first three-dimensional coordinate and a second three-dimensional coordinate; fusing the first and second three-dimensional coordinates with a Kalman filter to obtain the profile data of the measured object for a single scan; and fitting together the profile data of all single scans to obtain the overall fitted profile data of the measured object, completing the structured light scanning measurement. By optimally fusing the pose detection data of the structured light sensor, the invention reduces the accumulated error in measurement and improves the detection accuracy of the structured light pose.

Description

Structured light scanning and measuring method and system based on binocular camera and inertial navigation
Technical Field
The invention relates to the technical field of binocular camera and structured light three-dimensional measurement, in particular to a structured light scanning measurement method and system based on a binocular camera and inertial navigation.
Background
Three-dimensional scanning technology has attracted growing attention with the continuous development of science, technology, and industry, and is widely applied in manufacturing, reverse engineering, and related fields. Structured light scanning measurement is the most frequently used technique, but because the scanning range of structured light is limited, the scan data must be stitched afterwards, which lowers the measurement accuracy.
Therefore, with the rapid development of computer vision technology, a vision sensor is now often combined with a structured light sensor to improve the accuracy and efficiency of three-dimensional measurement: the vision sensor identifies the pose of the structured light, enabling the subsequent stitching of the structured light scan data. However, measurement based on a single such sensor still tends to suffer from low detection accuracy of the structured light pose.
Disclosure of Invention
The invention aims to provide a structured light scanning measurement method and system based on a binocular camera and inertial navigation, which reduce the accumulated error in measurement and improve the detection accuracy of the structured light pose.
In order to achieve the purpose, the invention provides the following scheme:
a structured light scanning measurement method based on a binocular camera and inertial navigation comprises the following steps:
respectively establishing coordinate systems for a binocular camera, a plane target, a structured light sensor and inertial navigation to obtain a camera coordinate system, a plane target coordinate system, a light plane coordinate system and an inertial navigation coordinate system;
continuously scanning the measured object through the structured light sensor to obtain a plurality of groups of profile data points of the measured object under the light plane coordinate system;
for each group of contour data points, transforming the group of contour data points to the camera coordinate system through the plane target coordinate system to obtain a first three-dimensional coordinate; transforming the group of contour data points to the camera coordinate system through the inertial navigation coordinate system to obtain a second three-dimensional coordinate;
performing data optimization fusion on the first three-dimensional coordinate and the second three-dimensional coordinate through a Kalman filter to obtain profile data of the measured object scanned at a single time;
and performing overall fitting on the profile data of the measured object in each single scanning to obtain the overall fitting profile data of the measured object, thereby completing the structured light scanning measurement.
Optionally, before the measured object is scanned by the structured light sensor to obtain the profile data points of the measured object in the light plane coordinate system, the method further includes:
jointly calibrating the binocular camera and the inertial navigation to obtain a first transformation matrix, camera internal parameters and camera external parameters between the camera coordinate system and the inertial navigation coordinate system;
fixing the positions of the binocular camera, the plane target, the structured light sensor and the inertial navigation according to the camera internal parameters and the camera external parameters;
respectively acquiring a second transformation matrix between the camera coordinate system and the plane target coordinate system, a third transformation matrix between the camera coordinate system and the light plane coordinate system, and a fifth transformation matrix between the inertial navigation coordinate system and the light plane coordinate system according to the positions;
and acquiring a fourth transformation matrix between the plane target coordinate system and the light plane coordinate system through the second transformation matrix and the third transformation matrix.
Optionally, the jointly calibrating the binocular camera and the inertial navigation to obtain a first transformation matrix, camera internal parameters, and camera external parameters between the camera coordinate system and the inertial navigation coordinate system specifically includes:
fixing the relative position between the binocular camera and the inertial navigation, photographing a high-precision checkerboard calibration board in different poses with the binocular camera to obtain calibration images, and determining the first transformation matrix between the camera coordinate system and the inertial navigation coordinate system, the camera internal parameters, and the camera external parameters from the calibration images.
Optionally, the fixing the positions of the binocular camera, the planar target, the structured light sensor, and the inertial navigation according to the camera internal reference and the camera external reference specifically includes:
fixedly placing the binocular camera relative to the structured light sensor;
fixing both the planar target and the inertial navigation on the structured light sensor;
the plane target is fixedly connected with the structured light sensor through a special connecting piece.
Optionally, the method for obtaining the second transformation matrix specifically includes:
setting five circular mark points on a plane target, acquiring the three-dimensional coordinates of the circular mark points under the coordinate system of the plane target, and sequencing the circular mark points;
establishing an image pixel coordinate system according to the position;
shooting the plane target through the binocular camera, extracting an area containing plane target information in the binocular camera, and performing pixel extraction on each sorted circular mark point outline to obtain mark point centroid pixel coordinates under the image pixel coordinate system;
according to the mark point centroid pixel coordinates, acquiring the three-dimensional coordinates of the circular mark point in the camera coordinate system by using a triangulation method;
and acquiring the second transformation matrix according to the three-dimensional coordinates of the circular mark point in the plane target coordinate system and the three-dimensional coordinates of the circular mark point in the camera coordinate system.
Optionally, the five circular mark points are distributed on the planar target in a T shape, and the distance between every two adjacent circular mark points is the same.
Optionally, the method for obtaining the third transformation matrix specifically includes:
placing a three-dimensional target as the measured object under the structured light sensor, and enabling the light bars of the structured light sensor and the edge lines of the three-dimensional target to be intersected on four non-collinear points to obtain four-point three-dimensional coordinates of the three-dimensional target under the light plane coordinate system;
shooting the three-dimensional target and the structured light sensor through the binocular camera, extracting an area containing the three-dimensional target and light bar information in the binocular camera, performing pixel extraction on four points of the three-dimensional target, and acquiring four-point pixel coordinates of the three-dimensional target under a camera coordinate system;
according to the four-point pixel coordinates of the three-dimensional target, the four-point three-dimensional coordinates of the three-dimensional target under the camera coordinate system are obtained by a triangulation method;
and acquiring the third transformation matrix according to the three-dimensional coordinates of the four points of the three-dimensional target under the optical plane coordinate system and the three-dimensional coordinates of the four points of the three-dimensional target under the camera coordinate system.
Optionally, the method for acquiring the first three-dimensional coordinate specifically includes: transforming the contour data points into the camera coordinate system through the plane target coordinate system by using the second transformation matrix and the fourth transformation matrix.
Optionally, the method for acquiring the second three-dimensional coordinate specifically includes: and transforming the contour data points to be under the camera coordinate system through the inertial navigation coordinate system by using the first transformation matrix and the fifth transformation matrix.
A structured light scanning measurement system based on binocular cameras and inertial navigation, comprising:
the coordinate system establishing unit is used for respectively establishing coordinate systems for the binocular camera, the plane target, the structured light sensor and the inertial navigation to obtain a coordinate system of the binocular camera, a coordinate system of the plane target, a coordinate system of a light plane and a coordinate system of the inertial navigation, and continuously scanning the measured object through the structured light sensor;
the contour acquisition unit is used for acquiring a plurality of sets of contour data points of the measured object under the optical plane coordinate system;
the coordinate system transformation unit is used for transforming each group of contour data points to the camera coordinate system through the plane target coordinate system to obtain a first three-dimensional coordinate; transforming the set of contour data points to the camera coordinate system through the inertial navigation coordinate system to obtain a second three-dimensional coordinate;
the data optimization fusion unit is used for performing data optimization fusion on the first three-dimensional coordinate and the second three-dimensional coordinate through a Kalman filter to obtain profile data of a measured object scanned once;
and the integral fitting unit is used for integrally fitting the profile data of the measured object in each single scanning to obtain the integral fitting profile data of the measured object.
According to the specific embodiment provided by the invention, the invention discloses the following technical effects:
the invention detects the pose of the structured light sensor by combining a binocular camera and inertial navigation, simultaneously acquires the profile data of the measured object through two paths of a light plane coordinate system-a plane target coordinate system-a camera coordinate system and a light plane coordinate system-an inertial navigation coordinate system-a camera coordinate system after the measured object is scanned, and integrally fits the data by using a Kalman filter, thereby overcoming the defect that redundant data can be generated in multi-sensor measurement, improving the pose detection precision of the structured light sensor, further improving the local profile scanning data point precision of the measured object through multiple continuous scans, further improving the precision of subsequent data splicing and realizing the improvement of the detection precision of the structured light pose.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed to be used in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings without inventive exercise.
FIG. 1 is a schematic view of a camera coordinate system of a structured light scanning measurement method based on a binocular camera and inertial navigation according to the present invention;
FIG. 2 is a schematic view of a plane target coordinate system of the structured light scanning measurement method based on a binocular camera and inertial navigation according to the present invention;
FIG. 3 is a schematic view of a light plane coordinate system of the structured light scanning measurement method based on a binocular camera and inertial navigation according to the present invention;
FIG. 4 is a schematic view of an inertial navigation coordinate system of the structured light scanning measurement method based on a binocular camera and inertial navigation according to the present invention;
FIG. 5 is a schematic view of a sequence of planar target marking points of the structured light scanning measurement method based on a binocular camera and inertial navigation according to the present invention;
FIG. 6 is a four-point schematic view of intersection of a light plane and a three-dimensional target ridge of the structured light scanning measurement method based on a binocular camera and inertial navigation according to the present invention;
FIG. 7 is a schematic logic flow diagram of a structured light scanning measurement method based on a binocular camera and inertial navigation according to the present invention;
FIG. 8 is a schematic flow chart of the structured light scanning measurement method based on a binocular camera and inertial navigation according to the present invention;
fig. 9 is a schematic block diagram of a binocular camera and inertial navigation based structured light scanning measurement system according to the present invention.
Description of reference numerals: 1 - coordinate system establishing unit; 2 - contour acquisition unit; 3 - coordinate system transformation unit; 4 - data optimization fusion unit; 5 - overall fitting unit.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The invention aims to provide a structured light scanning measurement method and a structured light scanning measurement system based on a binocular camera and inertial navigation.
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
As shown in fig. 1 to 9, the structured light scanning measurement method based on the binocular camera and inertial navigation of the present invention includes:
step 100: and respectively establishing coordinate systems for the binocular camera, the plane target, the structured light sensor and the inertial navigation to obtain a camera coordinate system, a plane target coordinate system, a light plane coordinate system and an inertial navigation coordinate system.
Step 200: and continuously scanning the measured object through the structured light sensor to obtain a plurality of groups of profile data points of the measured object under the light plane coordinate system.
Step 300: for each group of contour data points, transforming the group of contour data points to a position under the camera coordinate system through the plane target coordinate system to obtain a first three-dimensional coordinate; and transforming the set of contour data points to the camera coordinate system through the inertial navigation coordinate system to obtain a second three-dimensional coordinate.
Step 400: and performing data optimization fusion on the first three-dimensional coordinate and the second three-dimensional coordinate through a Kalman filter to obtain profile data of the measured object scanned at a single time.
Step 500: and performing overall fitting on the profile data of the measured object in each single scanning to obtain the overall fitting profile data of the measured object, and finishing the structured light scanning measurement, thereby improving the precision of three-dimensional reconstruction.
In step 100, the camera coordinate system O_c-X_cY_cZ_c takes the optical center of the left camera of the binocular camera as the coordinate origin O_c; the optical axis is Z_c, and the X_c and Y_c axes are parallel to the x and y axes of the image physical coordinate system (as shown in FIG. 1). As shown in FIG. 2, the planar target coordinate system O_t-X_tY_tZ_t takes the centroid of the middle circular mark point as the coordinate origin O_t; the X_t and Y_t axes are perpendicular to each other and parallel to the planar target, and Z_t is perpendicular to the planar target. As shown in FIG. 3, in the light plane coordinate system O_l-X_lY_lZ_l, the dotted lines indicate the laser emitted by the laser in the structured light sensor and received by its internal receiver, and the solid lines indicate the effective measurement range; the center of the effective range is the coordinate origin O_l, the X_l and Y_l axes are perpendicular to each other and coplanar with the structured light plane, and Z_l is perpendicular to the light plane. As shown in FIG. 4, the inertial navigation coordinate system O_m-X_mY_mZ_m takes the inertial navigation center of gravity as the coordinate origin O_m; the X_m and Y_m axes are perpendicular to each other and parallel to the upper surface of the inertial navigation unit, and Z_m is perpendicular to that surface.
In addition, to facilitate the subsequent calculations, an image pixel coordinate system and an image physical coordinate system must also be established. The origin of the image pixel coordinate system O_o-uv is at the upper left corner of the image plane; the u axis points horizontally to the right in the image plane, the v axis points downward perpendicular to the u axis, and coordinates are written (u, v). The origin of the image physical coordinate system is at the center of the image plane; its x and y axes are parallel to the corresponding axes of the image pixel coordinate system, and coordinates are written (x, y).
Before proceeding to step 200, transformation matrices for each coordinate system transformation are also acquired.
Firstly, the relative position between the binocular camera and the inertial navigation unit is fixed and the two are jointly calibrated: a high-precision checkerboard calibration board is photographed in different poses with the binocular camera to obtain calibration images, and, following the principle of Zhang Zhengyou's calibration method, the calibration toolbox in MATLAB is used to determine from these images the first transformation matrix [R0 T0] between the camera coordinate system and the inertial navigation coordinate system, together with the camera internal parameters and camera external parameters.
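For reference, a minimal Python/OpenCV sketch of the Zhang-style intrinsic calibration step is given below. The patent performs this step with the MATLAB calibration toolbox; the OpenCV calls are an analogue, and the board geometry (9x6 inner corners, 20 mm squares) and image paths are illustrative assumptions. The joint camera-inertial part of the calibration is not shown.

```python
# Hedged sketch: OpenCV analogue of the Zhang-style calibration the patent
# runs in MATLAB. Board geometry and file paths are assumed examples.
import glob
import cv2
import numpy as np

pattern = (9, 6)        # inner corners per row/column (assumption)
square = 20.0           # checkerboard square size in mm (assumption)

# Object points of the board corners in the board's own frame (Z = 0).
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_pts, img_pts = [], []
for path in glob.glob("calib/*.png"):          # shots in different poses
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_pts.append(objp)
        img_pts.append(corners)

# K is the intrinsic matrix; rvecs/tvecs are the per-view extrinsics.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, gray.shape[::-1], None, None)
```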
Secondly, fixing the positions of the binocular camera, the plane target, the structured light sensor and the inertial navigation according to the camera internal parameters and the camera external parameters.
The position placing method specifically comprises the following steps: fixedly placing the binocular camera relative to the structured light sensor; fixing both the planar target and the inertial navigation on the structured light sensor; the plane target is fixedly connected with the structured light sensor through a special connecting piece.
The positional relationship among the devices must ensure that the field of view of the binocular camera covers the planar target and the three-dimensional target simultaneously, and that the three-dimensional target is placed within the effective scanning range of the structured light sensor so that the light bar of the structured light sensor can intersect the four ridge lines of the three-dimensional target.
Thirdly, according to these positions, the second transformation matrix [R1 T1] between the camera coordinate system and the planar target coordinate system, the third transformation matrix [R2 T2] between the camera coordinate system and the light plane coordinate system, and the fifth transformation matrix [R4 T4] between the inertial navigation coordinate system and the light plane coordinate system are acquired respectively.
Finally, the fourth transformation matrix [R3 T3] between the planar target coordinate system and the light plane coordinate system is obtained from the second transformation matrix [R1 T1] and the third transformation matrix [R2 T2].
The method for acquiring the second transformation matrix specifically comprises the following five steps.
In the first step, five circular mark points are set on the planar target, distributed in a T shape with the same spacing between adjacent points. The three-dimensional coordinates (X_ti, Y_ti, Z_ti) of the circular mark points in the planar target coordinate system are obtained and the mark points are sorted; the ordering is shown in FIG. 5. One possible ordering procedure is sketched below.
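The patent defers the actual ordering rule to FIG. 5, so the scheme below is a hedged assumption: it takes the most nearly collinear triple of centroids as the crossbar of the T and the remaining two points as the stem; the function name and output convention are ours.

```python
# Hedged sketch: one way to sort five T-shaped mark point centroids.
import itertools
import numpy as np

def order_t_points(pts):
    """pts: (5, 2) float array of centroid coordinates. Returns indices
    ordered as (bar_end, bar_center, bar_end, stem_inner, stem_outer)
    under the assumed T-shaped layout."""
    def area2(i, j, k):                          # twice the triangle area
        u, v = pts[j] - pts[i], pts[k] - pts[i]
        return abs(u[0] * v[1] - u[1] * v[0])
    # The crossbar is the most nearly collinear triple of points.
    bar = min(itertools.combinations(range(5), 3), key=lambda t: area2(*t))
    stem = [i for i in range(5) if i not in bar]
    d = pts[bar[0]] - pts[bar[2]]                # crossbar direction
    bar = sorted(bar, key=lambda i: float(pts[i] @ d))
    center = bar[1]                              # junction of crossbar and stem
    stem = sorted(stem, key=lambda i: np.linalg.norm(pts[i] - pts[center]))
    return bar[0], center, bar[2], stem[0], stem[1]
```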
And secondly, establishing an image pixel coordinate system according to the position.
In the third step, the planar target is photographed with the binocular camera, the ROI (region of interest) containing the planar target information is extracted from the binocular images with the YOLOv3 algorithm, the contour of each sorted circular mark point is pixel-extracted in MATLAB, and the pixel coordinates (u_cli, v_cli) and (u_cri, v_cri) of the sorted circular mark points in the left and right cameras of the binocular camera are acquired.
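As a hedged sketch of this contour-and-centroid step (which the patent performs in MATLAB), the OpenCV analogue below thresholds a grayscale ROI, finds blob contours, and returns their centroid pixel coordinates; the threshold strategy and area cutoff are assumptions.

```python
# Hedged sketch: centroid extraction for circular mark points in one ROI.
import cv2
import numpy as np

def mark_point_centroids(gray):
    """Return the (u, v) centroids of circular blobs in a grayscale ROI."""
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    centroids = []
    for c in contours:
        m = cv2.moments(c)
        if m["m00"] > 50:                        # reject tiny specks (assumed)
            centroids.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return np.array(centroids)
```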
In the fourth step, from the mark point centroid pixel coordinates and triangulation, using MATLAB's built-in triangulate function, the Z_ci component of each circular mark point's three-dimensional coordinates in the camera coordinate system is obtained. With Z_ci known, the X_ci and Y_ci of the pixel point in the camera coordinate system follow from formula (1), yielding the full three-dimensional coordinates (X_ci, Y_ci, Z_ci) of the circular mark point in the camera coordinate system.

s·[u_ci, v_ci, 1]^T = K·[X_ci, Y_ci, Z_ci]^T (1)

where (u_ci, v_ci) are the pixel coordinates; (X_ci, Y_ci, Z_ci) are the coordinates in the camera coordinate system; K is the camera intrinsic matrix; s is a scale factor; and f, the camera focal length, enters through K.
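A minimal sketch of formula (1) in Python: with the depth Z_ci recovered by triangulation, X_ci and Y_ci follow by inverting the pinhole projection with the intrinsic matrix K. The numeric K below is an assumed example.

```python
# Minimal sketch: back-project a pixel with known depth via formula (1).
import numpy as np

def backproject(u, v, Z, K):
    """Recover (X, Y, Z) in the camera frame from pixel (u, v) and depth Z."""
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    return np.array([(u - cx) * Z / fx, (v - cy) * Z / fy, Z])

# Example with an assumed intrinsic matrix:
K = np.array([[1200.0, 0.0, 640.0],
              [0.0, 1200.0, 360.0],
              [0.0, 0.0, 1.0]])
print(backproject(700.0, 400.0, 500.0, K))   # -> [25.0, 16.67, 500.0] approx
```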
In the fifth step, the second transformation matrix [R1 T1] is obtained from the three-dimensional coordinates (X_ti, Y_ti, Z_ti) of the circular mark points in the planar target coordinate system and their three-dimensional coordinates (X_ci, Y_ci, Z_ci) in the camera coordinate system.
The method for acquiring the third transformation matrix specifically includes the following four steps.
In the first step, a three-dimensional target is placed under the structured light sensor as the measured object, with the light bar of the structured light sensor intersecting the ridge lines of the three-dimensional target at four non-collinear points. This redundant design, determining the light plane from four points, improves the calculation accuracy; compared with a traditional two-dimensional target, the light plane can be solved in a single calibration, and the position of the three-dimensional target need not be specially fixed as long as the light bar intersects the four ridge lines. The structured light sensor scans the contour coordinates of the three-dimensional target in the light plane coordinate system, giving the four-point three-dimensional coordinates (X_li, Y_li, Z_li) of the three-dimensional target in the light plane coordinate system, as shown in FIG. 6.
In the second step, the three-dimensional target and the structured light sensor are photographed with the binocular camera, the ROI containing the three-dimensional target and light bar information is extracted from the binocular images, the four points of the three-dimensional target are pixel-extracted, and their pixel coordinates (u_cli, v_cli) and (u_cri, v_cri) in the left and right camera coordinate systems of the binocular camera are acquired.
In the third step, from the four-point pixel coordinates of the three-dimensional target and triangulation, using MATLAB's built-in triangulate function, the Z_ci component of the four points' three-dimensional coordinates in the camera coordinate system is obtained. According to formula (1), the (X_ci, Y_ci) of the four intersection points of the light bar and the ridge lines in the camera coordinate system then follow, yielding the four-point three-dimensional coordinates (X_ci, Y_ci, Z_ci) of the three-dimensional target in the camera coordinate system.
In the fourth step, from the four-point three-dimensional coordinates (X_li, Y_li, Z_li) of the three-dimensional target in the light plane coordinate system and the four-point three-dimensional coordinates (X_ci, Y_ci, Z_ci) in the camera coordinate system, the third transformation matrix [R2 T2] is obtained by singular value decomposition (SVD), as sketched below.
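A minimal sketch of the SVD step, assuming the standard Kabsch/Umeyama procedure: given the paired point sets in the light plane frame and the camera frame, solve for R and T minimizing the alignment error. The same solver applies to the second transformation matrix computed from the five mark points.

```python
# Sketch: rigid transform from paired 3D point sets via SVD (Kabsch).
import numpy as np

def rigid_transform(P, Q):
    """P, Q: (N, 3) paired points. Returns R (3x3), T (3,) with Q ~ R @ P + T."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                    # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # guard against reflection
    R = Vt.T @ D @ U.T
    T = cq - R @ cp
    return R, T
```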
The method for acquiring the fourth transformation matrix specifically includes: the fourth transformation matrix [R3 T3] between the planar target coordinate system and the light plane coordinate system is obtained according to formula (2), composing the second and third transformation matrices in homogeneous form:

[R3 T3; 0 1] = [R1 T1; 0 1]^-1 · [R2 T2; 0 1] (2)
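A minimal sketch of formula (2), assuming [R1 T1] maps the target frame to the camera frame and [R2 T2] maps the light plane frame to the camera frame: build the 4x4 homogeneous forms and compose. In closed form this gives R3 = R1^T·R2 and T3 = R1^T·(T2 - T1).

```python
# Sketch: compose the fourth transformation matrix per formula (2).
import numpy as np

def homogeneous(R, T):
    """Stack R (3x3) and T (3,) into a 4x4 homogeneous transform."""
    M = np.eye(4)
    M[:3, :3], M[:3, 3] = R, T
    return M

def compose_fourth(R1, T1, R2, T2):
    """[R3 T3] = inv([R1 T1]) @ [R2 T2], i.e. light plane -> planar target."""
    M3 = np.linalg.inv(homogeneous(R1, T1)) @ homogeneous(R2, T2)
    return M3[:3, :3], M3[:3, 3]                 # R3, T3
```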
The method for acquiring the fifth transformation matrix specifically includes: since the inertial navigation unit and the structured light sensor are fixed together, i.e. their relative position is fixed and known, the fifth transformation matrix [R4 T4] between the inertial navigation coordinate system and the light plane coordinate system can be obtained directly.
After the above operations, the first transformation matrix [R0 T0], the second transformation matrix [R1 T1], the third transformation matrix [R2 T2], the fourth transformation matrix [R3 T3], and the fifth transformation matrix [R4 T4] have all been obtained.
In step 300, the method for acquiring the first three-dimensional coordinate specifically includes: using the second transformation matrix [R1 T1] and the fourth transformation matrix [R3 T3], the contour data points are transformed through the planar target coordinate system into the camera coordinate system according to formula (3), yielding the three-dimensional coordinates (X_c1, Y_c1, Z_c1):

[X_c1, Y_c1, Z_c1, 1]^T = [R1 T1; 0 1] · [R3 T3; 0 1] · [X_l, Y_l, Z_l, 1]^T (3)

The method for acquiring the second three-dimensional coordinate specifically includes: using the first transformation matrix [R0 T0] and the fifth transformation matrix [R4 T4], the contour data points are transformed through the inertial navigation coordinate system into the camera coordinate system according to formula (4), yielding the three-dimensional coordinates (X_c2, Y_c2, Z_c2):

[X_c2, Y_c2, Z_c2, 1]^T = [R0 T0; 0 1] · [R4 T4; 0 1] · [X_l, Y_l, Z_l, 1]^T (4)

Through formulas (3) and (4), a single scan of the measured object's contour by the structured light sensor yields two sets of three-dimensional coordinates of the contour data points in the camera coordinate system: (X_c1, Y_c1, Z_c1) via the planar target coordinate system and (X_c2, Y_c2, Z_c2) via the inertial navigation coordinate system.
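A minimal sketch of formulas (3) and (4): carry each scan's light-plane contour points into the camera frame along both chains. M1, M3, M0, M4 denote the 4x4 homogeneous forms of the corresponding transformation matrices; the function names are ours.

```python
# Sketch: apply both transformation paths to one scan's contour points.
import numpy as np

def apply(M, pts):
    """Apply a 4x4 homogeneous transform M to an (N, 3) point array."""
    homo = np.hstack([pts, np.ones((len(pts), 1))])
    return (homo @ M.T)[:, :3]

def both_paths(pts_light, M1, M3, M0, M4):
    """Return the two camera-frame coordinate sets for one scan."""
    first = apply(M1 @ M3, pts_light)    # formula (3): light -> target -> camera
    second = apply(M0 @ M4, pts_light)   # formula (4): light -> imu -> camera
    return first, second
```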
in step 400, the algorithm iteration formula of the kalman filter specifically includes:
X(q|q-1)=A(q)X(q-1) (5);
p(q|q-1)=A(q)p(q-1)A(q) T +B(q)Q(q)B(q) T (6);
X(q)=X(q|q-1)+H(q)[Y(q)-C(q)X(q|q-1)] (7);
H(q)=p(q|q-1)C(q) T [C(q)p(q|q-1)C(q) T +R(q)] -1 (8);
p(q)=[I-H(q)C(q)]p(q|q-1) (9);
wherein X (q) represents the contour data point of the measured object in the camera coordinate system obtained by the transformation of the inertial navigation coordinate system at the moment of q
Figure GDA0003557664800000107
A (Q) and B (Q) represent transition matrices (determined by Kalman filter), Q (Q) represents noise matrix, R (Q) represents measurement noise matrix, and Y (Q) represents contour data point of measured object under camera coordinate system obtained by plane target coordinate system transformation at time Q
Figure GDA0003557664800000111
p (q) represents a covariance matrix; h (q) represents a Kalman gain matrix; c (q) represents a transition matrix between the state variable and the measured quantity; i denotes an identity matrix.
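A hedged sketch of equations (5)-(9) for a single contour point, under the simplest design choices A = B = C = I: the inertial-navigation-path coordinate serves as the prediction and the planar-target-path coordinate as the measurement, so the filter reduces to a covariance-weighted blend of the two estimates. The noise values are illustrative assumptions.

```python
# Hedged sketch: one predict-correct cycle of equations (5)-(9), A = B = C = I.
import numpy as np

def fuse_point(x_inertial, y_target, p, Q, R):
    """x_inertial: point via formula (4); y_target: point via formula (3).
    Returns the fused 3D point and the updated covariance."""
    p_pred = p + Q                                   # equation (6)
    H = p_pred @ np.linalg.inv(p_pred + R)           # equation (8)
    x = x_inertial + H @ (y_target - x_inertial)     # equation (7)
    p_new = (np.eye(3) - H) @ p_pred                 # equation (9)
    return x, p_new

# Illustrative usage on one contour point (all noise values assumed):
Q, R = 1e-3 * np.eye(3), 1e-2 * np.eye(3)
p = np.eye(3)
x_i = np.array([10.0, 20.0, 500.0])   # via inertial navigation path
y_t = np.array([10.2, 19.9, 500.4])   # via planar target path
fused, p = fuse_point(x_i, y_t, p, Q, R)
```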
As shown in fig. 9, the structured light scanning measurement system based on binocular camera and inertial navigation of the present invention includes a coordinate system establishing unit 1, a contour acquiring unit 2, a coordinate system transforming unit 3, a data optimization fusion unit 4, and an overall fitting unit 5.
Specifically, the coordinate system establishing unit 1 is configured to respectively establish coordinate systems for a binocular camera, a planar target, a structured light sensor, and inertial navigation, obtain a coordinate system of the binocular camera, a coordinate system of the planar target, a coordinate system of a light plane, and a coordinate system of the inertial navigation, and continuously scan a measured object through the structured light sensor; the contour acquisition unit 2 is used for acquiring a plurality of sets of contour data points of the measured object under the optical plane coordinate system; the coordinate system transformation unit 3 is configured to transform, for each set of contour data points, the set of contour data points to the camera coordinate system through the plane target coordinate system, so as to obtain a first three-dimensional coordinate; transforming the set of contour data points to the camera coordinate system through the inertial navigation coordinate system to obtain a second three-dimensional coordinate; the data optimization fusion unit 4 is used for performing data optimization fusion on the first three-dimensional coordinate and the second three-dimensional coordinate through a Kalman filter to obtain profile data of a measured object scanned once; and the integral fitting unit 5 is used for integrally fitting the profile data of the measured object in each single scanning to obtain the integral fitting profile data of the measured object.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. For the system disclosed by the embodiment, the description is relatively simple because the system corresponds to the method disclosed by the embodiment, and the relevant points can be referred to the method part for description.
The principle and the embodiment of the present invention are explained by applying specific examples, and the above description of the embodiments is only used to help understanding the method and the core idea of the present invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, the specific embodiments and the application range may be changed. In view of the above, the present disclosure should not be construed as limiting the invention.

Claims (7)

1. A structured light scanning measurement method based on a binocular camera and inertial navigation is characterized by comprising the following steps:
respectively establishing coordinate systems for a binocular camera, a plane target, a structured light sensor and inertial navigation to obtain a camera coordinate system, a plane target coordinate system, a light plane coordinate system and an inertial navigation coordinate system;
continuously scanning the measured object through the structured light sensor to obtain a plurality of groups of profile data points of the measured object under the light plane coordinate system;
for each group of contour data points, transforming each group of contour data points to the camera coordinate system through the plane target coordinate system to obtain a first three-dimensional coordinate; transforming each group of contour data points to the camera coordinate system through the inertial navigation coordinate system to obtain a second three-dimensional coordinate;
performing data optimization fusion on the first three-dimensional coordinate and the second three-dimensional coordinate through a Kalman filter to obtain profile data of the measured object scanned at a single time;
carrying out integral fitting on the profile data of the measured object subjected to single scanning to obtain integral fitting profile data of the measured object, and finishing structured light scanning measurement;
the continuous scanning is carried out on the measured object through the structured light sensor, and a plurality of groups of profile data points of the measured object under the light plane coordinate system are obtained, wherein the method comprises the following steps:
jointly calibrating the binocular camera and the inertial navigation to obtain a first transformation matrix, camera internal parameters and camera external parameters between the camera coordinate system and the inertial navigation coordinate system;
fixing the positions of the binocular camera, the plane target, the structured light sensor and the inertial navigation according to the camera internal parameters and the camera external parameters;
respectively acquiring a second transformation matrix between the camera coordinate system and the plane target coordinate system, a third transformation matrix between the camera coordinate system and the light plane coordinate system, and a fifth transformation matrix between the inertial navigation coordinate system and the light plane coordinate system according to the positions;
acquiring a fourth transformation matrix between the plane target coordinate system and the light plane coordinate system through the second transformation matrix and the third transformation matrix;
the method for acquiring the second transformation matrix specifically includes:
setting five circular mark points on a plane target, acquiring the three-dimensional coordinates of the circular mark points under the coordinate system of the plane target, and sequencing the circular mark points;
establishing an image pixel coordinate system according to the position;
shooting the plane target through the binocular camera, extracting an area containing plane target information in the binocular camera, and performing pixel extraction on each sorted circular mark point outline to obtain mark point centroid pixel coordinates under the image pixel coordinate system;
according to the mark point centroid pixel coordinates, utilizing a triangulation method to obtain the circular mark point three-dimensional coordinates under the camera coordinate system;
acquiring the second transformation matrix according to the three-dimensional coordinates of the circular mark point in the plane target coordinate system and the three-dimensional coordinates of the circular mark point in the camera coordinate system;
the fixing of the positions of the binocular camera, the plane target, the structured light sensor and the inertial navigation according to the camera internal parameters and the camera external parameters specifically comprises:
fixedly placing the binocular camera relative to the structured light sensor;
fixing both the planar target and the inertial navigation on the structured light sensor;
the plane target is fixedly connected with the structured light sensor through a special connecting piece.
2. The binocular camera and inertial navigation-based structured light scanning measurement method according to claim 1, wherein the binocular camera and the inertial navigation are jointly calibrated to obtain a first transformation matrix, camera internal parameters and camera external parameters between the camera coordinate system and the inertial navigation coordinate system, and specifically comprises:
fixing the relative position between the binocular camera and the inertial navigation, shooting the high-precision checkerboard calibration plates under different poses through the binocular camera, obtaining a calibration picture, and determining a first transformation matrix, camera internal parameters and camera external parameters between the camera coordinate system and the inertial navigation coordinate system according to the calibration picture.
3. The binocular camera and inertial navigation based structured light scanning measurement method according to claim 1, wherein the five circular mark points are distributed on the planar target in a T shape, and the distance between every two adjacent circular mark points is the same.
4. The binocular camera and inertial navigation based structured light scanning measurement method according to claim 1, wherein the third transformation matrix acquisition method specifically comprises:
placing a three-dimensional target as the measured object under the structured light sensor, and enabling light bars of the structured light sensor and ridge lines of the three-dimensional target to be intersected on four non-collinear points to obtain four-point three-dimensional coordinates of the three-dimensional target under the light plane coordinate system;
shooting the three-dimensional target and the structured light sensor through the binocular camera, extracting an area containing the three-dimensional target and light bar information in the binocular camera, performing pixel extraction on four points of the three-dimensional target, and acquiring four-point pixel coordinates of the three-dimensional target under a camera coordinate system;
according to the four-point pixel coordinates of the three-dimensional target, the four-point three-dimensional coordinates of the three-dimensional target under the camera coordinate system are obtained by a triangulation method;
and acquiring the third transformation matrix according to the three-dimensional coordinates of the four points of the three-dimensional target under the optical plane coordinate system and the three-dimensional coordinates of the four points of the three-dimensional target under the camera coordinate system.
5. The binocular camera and inertial navigation-based structured light scanning measurement method according to claim 1, wherein the first three-dimensional coordinate acquisition method specifically comprises: transforming the contour data points into the camera coordinate system through the plane target coordinate system by using the second transformation matrix and the fourth transformation matrix.
6. The binocular camera and inertial navigation based structured light scanning measurement method according to claim 4, wherein the second three-dimensional coordinate acquisition method specifically comprises: and transforming the contour data points to be under the camera coordinate system through the inertial navigation coordinate system by using the first transformation matrix and the fifth transformation matrix.
7. A structured light scanning measurement system based on binocular camera and inertial navigation, comprising:
the coordinate system establishing unit is used for respectively establishing coordinate systems for the binocular camera, the plane target, the structured light sensor and the inertial navigation to obtain a coordinate system of the binocular camera, a coordinate system of the plane target, a coordinate system of a light plane and a coordinate system of the inertial navigation, and continuously scanning the measured object through the structured light sensor;
the contour acquisition unit is used for acquiring a plurality of sets of contour data points of the measured object under the light plane coordinate system;
the coordinate system transformation unit is used for transforming each group of contour data points to the camera coordinate system through the plane target coordinate system to obtain a first three-dimensional coordinate; transforming each group of contour data points to the camera coordinate system through the inertial navigation coordinate system to obtain a second three-dimensional coordinate;
the data optimization fusion unit is used for performing data optimization fusion on the first three-dimensional coordinate and the second three-dimensional coordinate through a Kalman filter to obtain profile data of a measured object scanned once;
the integral fitting unit is used for carrying out integral fitting on the profile data of the measured object which is scanned once to obtain the integral fitting profile data of the measured object;
the continuous scanning is carried out on the measured object through the structured light sensor, a plurality of groups of profile data points of the measured object under the light plane coordinate system are obtained, and the method also comprises the following steps:
jointly calibrating the binocular camera and the inertial navigation to obtain a first transformation matrix, camera internal parameters and camera external parameters between the camera coordinate system and the inertial navigation coordinate system;
fixing the positions of the binocular camera, the plane target, the structured light sensor and the inertial navigation according to the camera internal parameters and the camera external parameters;
respectively acquiring a second transformation matrix between the camera coordinate system and the plane target coordinate system, a third transformation matrix between the camera coordinate system and the light plane coordinate system, and a fifth transformation matrix between the inertial navigation coordinate system and the light plane coordinate system according to the positions;
acquiring a fourth transformation matrix between the plane target coordinate system and the light plane coordinate system through the second transformation matrix and the third transformation matrix;
the method for acquiring the second transformation matrix specifically includes:
setting five circular mark points on a plane target, acquiring the three-dimensional coordinates of the circular mark points under the coordinate system of the plane target, and sequencing the circular mark points;
establishing an image pixel coordinate system according to the position;
shooting the plane target through the binocular camera, extracting an area containing plane target information in the binocular camera, and performing pixel extraction on each sorted circular mark point outline to obtain mark point centroid pixel coordinates under the image pixel coordinate system;
according to the mark point centroid pixel coordinates, utilizing a triangulation method to obtain the circular mark point three-dimensional coordinates under the camera coordinate system;
acquiring the second transformation matrix according to the three-dimensional coordinates of the circular mark point in the plane target coordinate system and the three-dimensional coordinates of the circular mark point in the camera coordinate system;
the fixing of the positions of the binocular camera, the plane target, the structured light sensor and the inertial navigation according to the camera internal parameters and the camera external parameters specifically comprises:
fixedly placing the binocular camera relative to the structured light sensor;
fixing both the planar target and the inertial navigation on the structured light sensor;
the plane target is fixedly connected with the structured light sensor through a special connecting piece.
CN202210047496.2A 2022-01-17 2022-01-17 Structured light scanning and measuring method and system based on binocular camera and inertial navigation Active CN114485479B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210047496.2A CN114485479B (en) 2022-01-17 2022-01-17 Structured light scanning and measuring method and system based on binocular camera and inertial navigation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210047496.2A CN114485479B (en) 2022-01-17 2022-01-17 Structured light scanning and measuring method and system based on binocular camera and inertial navigation

Publications (2)

Publication Number Publication Date
CN114485479A CN114485479A (en) 2022-05-13
CN114485479B 2022-12-30

Family

ID=81511283

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210047496.2A Active CN114485479B (en) 2022-01-17 2022-01-17 Structured light scanning and measuring method and system based on binocular camera and inertial navigation

Country Status (1)

Country Link
CN (1) CN114485479B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115607320A (en) * 2022-10-25 2023-01-17 无锡赛锐斯医疗器械有限公司 Pose measuring instrument and pose parameter determining method for extraoral scanning connection base station

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE19536294C2 (en) * 1995-09-29 2003-12-18 Daimler Chrysler Ag Method for geometric navigation of 3D optical sensors for the three-dimensional measurement of objects
GB9716240D0 (en) * 1997-07-31 1997-10-08 Tricorder Technology Plc Scanning apparatus and methods
EP2400261A1 (en) * 2010-06-21 2011-12-28 Leica Geosystems AG Optical measurement method and system for determining 3D coordination in a measuring object surface
EP2511656A1 (en) * 2011-04-14 2012-10-17 Hexagon Technology Center GmbH Measuring system for determining the 3D coordinates of an object surface
CN102607457B (en) * 2012-03-05 2014-12-10 西安交通大学 Measuring device and measuring method for large three-dimensional morphology based on inertial navigation technology
CN104964656A (en) * 2015-06-26 2015-10-07 天津大学 Self-positioning flowing-type rapid scanning measuring device and method based on inertial navigation
CN105222724B (en) * 2015-09-10 2018-09-18 北京天远三维科技股份有限公司 Multi-thread array laser 3 D scanning system and multi-thread array laser 3-D scanning method
CN106123802A (en) * 2016-06-13 2016-11-16 天津大学 A kind of autonomous flow-type 3 D measuring method
CN108692661A (en) * 2018-05-08 2018-10-23 深圳大学 Portable three-dimensional measuring system based on Inertial Measurement Unit and its measurement method
CN113514008B (en) * 2020-04-10 2022-08-23 杭州思看科技有限公司 Three-dimensional scanning method, three-dimensional scanning system, and computer-readable storage medium
CN111879235A (en) * 2020-06-22 2020-11-03 杭州思看科技有限公司 Three-dimensional scanning detection method and system for bent pipe and computer equipment
CN113034571B (en) * 2021-04-16 2023-01-20 广东工业大学 Object three-dimensional size measuring method based on vision-inertia
CN113409406A (en) * 2021-08-19 2021-09-17 武汉中观自动化科技有限公司 Large-range monocular tracking scanning device
CN113870366B (en) * 2021-10-18 2024-04-02 中国科学院长春光学精密机械与物理研究所 Calibration method and calibration system of three-dimensional scanning system based on pose sensor

Also Published As

Publication number Publication date
CN114485479A (en) 2022-05-13

Similar Documents

Publication Publication Date Title
CN110021046B (en) External parameter calibration method and system for camera and laser radar combined sensor
CN110276808B (en) Method for measuring unevenness of glass plate by combining single camera with two-dimensional code
CN106595528B (en) A kind of micro- binocular stereo vision measurement method of telecentricity based on digital speckle
CN110296667B (en) High-reflection surface three-dimensional measurement method based on line structured light multi-angle projection
CN108986070B (en) Rock crack propagation experiment monitoring method based on high-speed video measurement
CN107588721A (en) The measuring method and system of a kind of more sizes of part based on binocular vision
CN110044374B (en) Image feature-based monocular vision mileage measurement method and odometer
CN109341668B (en) Multi-camera measuring method based on refraction projection model and light beam tracking method
CN112017248B (en) 2D laser radar camera multi-frame single-step calibration method based on dotted line characteristics
CN104567727A (en) Three-dimensional target and global unified calibration method for linear structured light profile sensor
CN110763204B (en) Planar coding target and pose measurement method thereof
JP2012058076A (en) Three-dimensional measurement device and three-dimensional measurement method
CN110779491A (en) Method, device and equipment for measuring distance of target on horizontal plane and storage medium
CN108469254A (en) A kind of more visual measuring system overall calibration methods of big visual field being suitable for looking up and overlooking pose
CN113870366B (en) Calibration method and calibration system of three-dimensional scanning system based on pose sensor
CN104167001B (en) Large-visual-field camera calibration method based on orthogonal compensation
CN113310433A (en) Virtual binocular stereo vision measuring method based on line structured light
Ding et al. A robust detection method of control points for calibration and measurement with defocused images
CN111307046A (en) Tree height measuring method based on hemispherical image
CN114485479B (en) Structured light scanning and measuring method and system based on binocular camera and inertial navigation
CN102881040A (en) Three-dimensional reconstruction method for mobile photographing of digital camera
CN113012238B (en) Method for quick calibration and data fusion of multi-depth camera
CN111583342A (en) Target rapid positioning method and device based on binocular vision
CN115272474A (en) Three-dimensional calibration plate for combined calibration of laser radar and camera and calibration method
CN107941241B (en) Resolution board for aerial photogrammetry quality evaluation and use method thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant