CN113446957A - Three-dimensional contour measuring method and device based on neural network calibration and speckle tracking - Google Patents

Info

Publication number: CN113446957A (granted as CN113446957B)
Application number: CN202110501533.8A
Original language: Chinese (zh)
Inventors: 潘翀, 韩雨坤, 刘彦鹏, 王晋军
Applicant and assignee: Beihang University
Legal status: Active (granted)

Classifications

    • G: Physics
    • G01: Measuring; Testing
    • G01B: Measuring length, thickness or similar linear dimensions; measuring angles; measuring areas; measuring irregularities of surfaces or contours
    • G01B 11/00: Measuring arrangements characterised by the use of optical techniques
    • G01B 11/24: Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures


Abstract

The invention provides a three-dimensional contour measurement method and device based on neural network calibration and speckle tracking. A laser projects speckles onto the surface of the object to be measured through a diffractive optical element, and at least three cameras synchronously photograph a planar target and the object to obtain calibration images and speckle images. From the calibration images, a neural network algorithm determines both the mapping from the pixel space formed by the camera image planes to physical space and the mappings between the image planes. The pixel-space coordinates of the speckle point clouds are extracted from the speckle images and transformed onto a common image plane using the inter-plane mappings; the individual speckle points are then matched by an ant colony particle tracking velocimetry algorithm, and the matched point clouds are reconstructed in three dimensions through the pixel-to-physical-space mapping, yielding the three-dimensional profile of the surface of the object to be measured. The invention does not disturb the surface under measurement and offers high measurement accuracy and high spatial resolution.

Description

Three-dimensional contour measuring method and device based on neural network calibration and speckle tracking
Technical Field
The invention relates to the field of multi-camera stereoscopic vision, in particular to a three-dimensional contour measuring method and device based on neural network calibration and speckle tracking.
Background
Measurement of the three-dimensional profile of an object's surface is widely used in industrial fields such as geological survey, precision instruments, deformation monitoring and motion measurement. Complex topography is currently the focus of much research, so the measurement of complex surface profiles has also received wide attention.
In the prior art, a three-dimensional profile can be measured by a direct-contact method in which sensors are placed around the topography. However, this method disturbs the topography when measuring complex surfaces, and it suffers from low spatial resolution and poor accuracy. Surface profile measurement based on machine vision enables non-contact field measurement; it is fast and convenient to operate and does not disturb the surface under measurement, and its measurement accuracy is closely tied to the three-dimensional reconstruction algorithm it employs.
In machine-vision-based non-contact measurement of three-dimensional surface profiles, multi-camera calibration, feature extraction and feature matching are the three technical difficulties. For multi-camera calibration, the traditional pinhole imaging model is difficult to adapt to multi-camera scenes; when the number of cameras is small, measurement accuracy is significantly degraded by surface occlusion, indistinct feature points, or feature points missing from a given viewing angle.
For feature extraction and matching, existing methods generally project structured or randomly distributed speckles onto the surface to be measured as features, which compensates for the insufficient texture of natural surfaces; however, speckle blocks from different viewing angles are matched by cross-correlation, so the spatial resolution is limited by the cross-correlation query window.
In order to solve the technical problem, the invention provides a three-dimensional contour measurement method and a three-dimensional contour measurement device based on neural network calibration and speckle tracking.
Disclosure of Invention
The invention provides a three-dimensional contour measurement method and device based on neural network calibration and speckle tracking, which do not disturb the surface under measurement and offer high measurement accuracy and high spatial resolution.
In a first aspect, the present invention provides a three-dimensional profile measurement method based on neural network calibration and speckle tracking, the method comprising:
acquiring calibration images obtained by a plurality of cameras photographing planar targets at different heights, and speckle images obtained by photographing the speckle pattern on the surface of the object to be measured;
determining a mapping relation between a pixel space formed by camera image planes and a physical space and a mapping relation between the camera image planes based on a neural network algorithm according to the acquired calibration image;
performing speckle point cloud extraction on the obtained speckle images to obtain the coordinates of the speckle point cloud in each pixel space, and transforming the coordinates onto the same common image plane using the mapping relations between the camera image planes;
matching the speckle point cloud coordinates after mapping transformation one by one based on an ant colony particle tracking speed measurement algorithm to obtain the matching relation of each speckle point on different camera image planes;
and performing three-dimensional reconstruction on each matched speckle point based on the mapping relation established by the neural network to obtain the three-dimensional profile of the surface of the object to be measured.
Specifically, determining the mapping relation from the pixel space formed by the camera image planes to physical space and the mapping relations between the camera image planes based on a neural network algorithm includes:

taking planar-target feature points whose physical-space coordinates $(X_i, Y_i, Z_i)$ and camera image-plane coordinates $(u_i^{C,k}, v_i^{C,k})$ are known as samples, and training a spatial calibration neural network $M$ such that

$(X_i, Y_i, Z_i) = M(u_i^{C,1}, v_i^{C,1}, \ldots, u_i^{C,K}, v_i^{C,K})$,

where $k = 1, \ldots, K$, $K$ is the total number of cameras, the superscript $C$ denotes image-plane feature-point coordinates recorded during calibration, and $i$ indexes the planar-target feature points;

taking the planar-target feature points whose coordinates $(u_i^{C,k}, v_i^{C,k})$ on each image plane are known as samples, taking the image plane corresponding to $k = 1$ as the common image plane, and training neural networks $F_k$ such that

$(u_i^{C,1}, v_i^{C,1}) = F_k(u_i^{C,k}, v_i^{C,k})$.
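The training of $M$ can be sketched with a minimal numpy multilayer perceptron. Everything below is an illustrative assumption rather than the patent's implementation: three simulated linear cameras image a feature grid swept through several heights, and a one-hidden-layer network learns the mapping from concatenated pixel coordinates of all cameras back to physical coordinates by plain gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Planar-target feature points: a 5x5 grid swept through 5 heights (N = 125).
gx, gy = np.meshgrid(np.linspace(0.0, 1.0, 5), np.linspace(0.0, 1.0, 5))
layers = [np.column_stack([gx.ravel(), gy.ravel(), np.full(gx.size, z)])
          for z in np.linspace(0.0, 0.2, 5)]
pts = np.vstack(layers)                              # (125, 3) physical coords

K = 3  # number of cameras (illustrative)
# Simulated linear "cameras": (u, v) = P_k (X, Y, Z)^T + t_k.
projs = [(rng.normal(size=(2, 3)), rng.normal(size=2)) for _ in range(K)]
U = np.hstack([pts @ P.T + t for P, t in projs])     # (125, 2K) pixel coords
U = (U - U.mean(0)) / U.std(0)                       # normalise inputs
T = pts                                              # targets: (X, Y, Z)

# One-hidden-layer MLP trained by plain gradient descent, standing in for the
# cascaded back-propagation calibration network M described in the patent.
H = 16
W1 = rng.normal(scale=0.5, size=(2 * K, H)); b1 = np.zeros(H)
W2 = rng.normal(scale=0.5, size=(H, 3));     b2 = np.zeros(3)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, h @ W2 + b2

_, out0 = forward(U)
loss0 = float(np.mean((out0 - T) ** 2))   # loss at random initialisation
lr = 0.01
for _ in range(5000):
    h, out = forward(U)
    g = 2.0 * (out - T) / len(U)          # dL/d(output)
    gW2, gb2 = h.T @ g, g.sum(0)
    gh = (g @ W2.T) * (1.0 - h ** 2)      # back-propagate through tanh
    gW1, gb1 = U.T @ gh, gh.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, pred = forward(U)
final_loss = float(np.mean((pred - T) ** 2))
```

In the real calibration step, `U` would come from the detected checkerboard nodes in the 5 x 21 calibration images rather than from a synthetic camera model.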
Specifically, extracting the speckle point cloud from the obtained speckle images to obtain its coordinates in each pixel space includes:
selecting the surface region of the object to be measured in the measurement image as the region of interest and setting the gray value of all other regions to 0;
binarizing the region of interest against a set gray threshold, searching for connected domains, and determining the pixel-space coordinates of each speckle point from the horizontal and vertical centroid coordinates of its connected domain.
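A minimal sketch of this extraction step, using plain numpy with a breadth-first flood fill in place of a library connected-component routine; the threshold value and the synthetic test image are illustrative assumptions.

```python
import numpy as np
from collections import deque

def extract_speckle_centroids(img, roi_mask, gray_threshold=128):
    """Return (row, col) centroids of bright connected regions inside the ROI."""
    work = np.where(roi_mask, img, 0)          # gray value outside ROI set to 0
    binary = work >= gray_threshold            # binarisation
    labels = np.zeros(binary.shape, dtype=int)
    centroids, next_label = [], 0
    for r0, c0 in zip(*np.nonzero(binary)):
        if labels[r0, c0]:
            continue
        next_label += 1
        labels[r0, c0] = next_label
        queue, pixels = deque([(r0, c0)]), []
        while queue:                           # BFS flood fill, 4-connectivity
            r, c = queue.popleft()
            pixels.append((r, c))
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if (0 <= rr < binary.shape[0] and 0 <= cc < binary.shape[1]
                        and binary[rr, cc] and not labels[rr, cc]):
                    labels[rr, cc] = next_label
                    queue.append((rr, cc))
        pix = np.array(pixels, dtype=float)
        centroids.append(pix.mean(axis=0))     # centroid of the connected domain
    return np.array(centroids)

# Demo: two square speckles on a dark background.
img = np.zeros((20, 20), dtype=np.uint8)
img[2:5, 3:6] = 200      # speckle 1, centroid (3.0, 4.0)
img[10:13, 14:17] = 220  # speckle 2, centroid (11.0, 15.0)
roi = np.ones_like(img, dtype=bool)
cents = extract_speckle_centroids(img, roi)
```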
Specifically, matching the mapped speckle point cloud coordinates one by one with an ant colony particle tracking velocimetry algorithm to obtain the matching relation of each speckle point across the camera image planes includes:

projecting the speckle point coordinates $(u_j^{R,k}, v_j^{R,k})$ on each image plane onto the common image plane corresponding to $k = 1$, i.e.

$(\tilde{u}_j^{R,k}, \tilde{v}_j^{R,k}) = F_k(u_j^{R,k}, v_j^{R,k})$,

where $k$ ranges over the image planes other than the common image plane and the superscript $R$ denotes speckle point coordinates recorded on the camera image planes during measurement;

constructing a hybrid objective function that combines a minimum-displacement criterion between image planes with a similarity criterion on the speckle point cloud distribution pattern, and performing a global minimization with an ant colony algorithm to obtain the matching relation between corresponding speckle points on the different image planes.
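The hybrid objective can be illustrated with a small synthetic example. The sketch below builds a cost matrix from a displacement term plus a crude distribution-pattern descriptor, then assigns points greedily; the greedy pass is a simplified stand-in for the ant colony global minimization, and the grid layout, noise level, and weighting factor are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Speckle points on the common image plane (k = 1): a 6x5 grid, 10 px spacing.
gx, gy = np.meshgrid(np.arange(6) * 10.0, np.arange(5) * 10.0)
ref = np.column_stack([gx.ravel(), gy.ravel()])            # (30, 2)

# The same cloud projected from another image plane via F_k: a small residual
# displacement plus an unknown permutation of the point order.
perm = rng.permutation(30)
proj = ref[perm] + rng.normal(scale=0.3, size=(30, 2))

def neighbour_signature(pts):
    """Mean distance to the 3 nearest neighbours: a crude descriptor of the
    local speckle distribution pattern."""
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    d.sort(axis=1)
    return d[:, 1:4].mean(axis=1)

# Hybrid objective: inter-plane displacement term + pattern-similarity term.
disp = np.linalg.norm(ref[:, None, :] - proj[None, :, :], axis=2)
pattern = np.abs(neighbour_signature(ref)[:, None]
                 - neighbour_signature(proj)[None, :])
cost = disp + 0.5 * pattern

# Greedy one-to-one assignment over the cost matrix (illustrative stand-in
# for the ant colony search over the same objective).
match = -np.ones(30, dtype=int)
used = set()
for i in np.argsort(cost.min(axis=1)):      # most confident points first
    for j in np.argsort(cost[i]):
        if int(j) not in used:
            match[i] = int(j)
            used.add(int(j))
            break

# Fraction of speckle points assigned to their true counterpart.
accuracy = float(np.mean(perm[match] == np.arange(30)))
```

With well-separated speckles and small projection residuals the displacement term dominates and the assignment recovers the true pairing; the ant colony search matters when ambiguity makes greedy choices unreliable.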
Specifically, performing three-dimensional reconstruction of each matched speckle point based on the mapping relation established by the neural network to obtain the three-dimensional profile of the surface of the object to be measured includes:

inputting the matched speckle point coordinates $(u_j^{R,k}, v_j^{R,k})$, $k = 1, \ldots, K$, on all camera image planes into the spatial calibration neural network $M$, and obtaining as output $(X_j, Y_j, Z_j)$, the three-dimensional coordinates of speckle point $j$ in physical space;

optionally, fitting a surface to the reconstructed three-dimensional point cloud, filtering the original data according to the root-mean-square fitting error of each point, and screening out points whose error exceeds a preset threshold;

performing curved-surface interpolation on the remaining speckle point cloud to obtain the three-dimensional profile of the surface of the object to be measured.
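A minimal sketch of the fitting, screening, and interpolation steps, with illustrative assumptions: a quadric least-squares surface stands in for the fitted surface, a 3-sigma residual rule stands in for the preset error threshold, and re-evaluating the refitted surface on a regular grid stands in for the curved-surface interpolation.

```python
import numpy as np

rng = np.random.default_rng(2)

# Reconstructed speckle point cloud: a smooth quadric surface with small noise,
# plus two gross outliers (e.g. mismatched speckle points).
x = rng.uniform(-1.0, 1.0, 80)
y = rng.uniform(-1.0, 1.0, 80)
z = 0.5 * x**2 - 0.3 * x * y + 0.1 * y + 0.02 * rng.normal(size=80)
z[0] += 2.0
z[40] += 2.0

def design(x, y):
    """Design matrix for z ~ c0 + c1 x + c2 y + c3 x^2 + c4 xy + c5 y^2."""
    return np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])

# Surface fitting, then screening by fitting error.
coef, *_ = np.linalg.lstsq(design(x, y), z, rcond=None)
resid = np.abs(design(x, y) @ coef - z)
keep = resid < 3.0 * resid.std()          # screen out large-error points

# Refit on the remaining points and evaluate on a regular grid.
coef2, *_ = np.linalg.lstsq(design(x[keep], y[keep]), z[keep], rcond=None)
gx, gy = np.meshgrid(np.linspace(-1.0, 1.0, 21), np.linspace(-1.0, 1.0, 21))
surface = (design(gx.ravel(), gy.ravel()) @ coef2).reshape(gx.shape)
```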
In a second aspect, the present invention provides a three-dimensional profile measuring apparatus based on neural network calibration and speckle tracking, the apparatus comprising:
the system comprises a laser, a diffraction optical element, a three-axis displacement table, a synchronous controller, a plurality of cameras and computer equipment;
the laser is used for generating a laser beam, and the laser beam generates pseudo-randomly distributed speckles on the surface of an object to be measured through a diffraction optical element;
the three-axis displacement table is used for placing a plane target in the calibration process and controlling the plane target to move along the height direction; the three-axis displacement table is used for placing an object to be measured in the measuring process;
the synchronous controller is respectively connected with the cameras through connecting wires and is used for controlling the cameras to realize synchronous acquisition of images;
the camera is used for shooting plane targets with different heights in the calibration process to obtain a calibration image; the camera is also used for shooting the speckle appearance of the surface of the object to be measured in the measuring process to obtain a speckle image;
and the computer equipment is used for acquiring the calibration image and the speckle image and executing the measuring method to realize surface reconstruction of the object to be measured.
Specifically, when the three-axis displacement table controls the planar target to move along the height direction, the three-axis displacement table is specifically used for: controlling the plane target to move along the height direction and traversing the thickness of the object to be measured;
the three-axis displacement stage is further configured to: adjusting the position of the plane target to enable the plane target to be always positioned in the shooting range of the cameras in the calibration process; and adjusting the position of the object to be measured to enable the surface of the object to be measured to be always positioned in the shooting range of the cameras in the measuring process.
Optionally, the number of cameras is three or more.
The invention provides a three-dimensional contour measurement method and device based on neural network calibration and speckle tracking. Multi-view joint calibration based on a neural network algorithm removes the limit that traditional calibration methods place on the number of cameras, and the multi-view system provides redundant information for matching, improving the accuracy of surface topography measurement. Speckle projection compensates for surfaces whose topographic features are not distinct, and an ant colony particle tracking velocimetry algorithm matches individual speckle points across the multi-view image group, achieving higher accuracy than traditional window-based matching. The invention does not disturb the surface under measurement and offers high measurement accuracy and high spatial resolution.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
FIG. 1 is a perspective side view of an object having a complex three-dimensional contour according to an embodiment of the present invention;
FIG. 2 is a top view of a complex three-dimensional contoured object provided by an embodiment of the present invention;
fig. 3A is a schematic structural diagram of a three-dimensional profile measuring device based on neural network calibration and speckle tracking according to an embodiment of the present invention;
fig. 3B is a schematic diagram of a planar target according to an embodiment of the present invention;
fig. 4 is a schematic flowchart of a three-dimensional profile measurement method based on neural network calibration and speckle tracking according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a neural network provided in an embodiment of the present invention;
fig. 6A is a schematic diagram of original speckle images captured by multiple cameras according to an embodiment of the present invention;
FIG. 6B is an image after a region of interest has been selected according to an embodiment of the present invention;
fig. 6C is an image obtained after finding a connected domain on the surface of the object to be measured according to the embodiment of the present invention;
fig. 6D is a schematic diagram of extracted speckle point cloud coordinates provided by an embodiment of the present invention;
fig. 7 is a schematic diagram of an ant colony particle tracking velocity measurement technique according to an embodiment of the present invention;
fig. 8A is an original three-dimensional space coordinate diagram of a speckle point cloud on the surface of an object to be measured according to an embodiment of the present invention;
fig. 8B is a three-dimensional space coordinate diagram of the object to be measured after the speckle point cloud processing is performed on the surface of the object;
fig. 8C is a three-dimensional contour reconstruction diagram of the object to be measured according to the embodiment of the present invention;
fig. 8D is a top-view contour diagram of a three-dimensional contour of an object to be measured obtained through reconstruction according to an embodiment of the present invention.
The above drawings illustrate certain embodiments of the invention, which are described in more detail below. The drawings and the description are not intended to limit the scope of the inventive concept in any way, but to illustrate it for those skilled in the art with reference to specific embodiments.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. Where the following description refers to the drawings, the same numbers in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all embodiments consistent with the present invention; rather, they are merely examples of apparatus and methods consistent with certain aspects of the invention, as detailed in the appended claims.
The following describes the technical solution of the present invention and how to solve the above technical problems with specific examples. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments.
The embodiment of the invention provides a three-dimensional contour measurement method and device based on neural network calibration and speckle tracking. Multi-view joint calibration based on a neural network algorithm removes the limit that traditional calibration methods place on the number of cameras, and the multi-view system provides redundant information for matching, improving the accuracy of surface topography measurement. Speckle projection compensates for surfaces whose topographic features are not distinct, and an ant colony particle tracking velocimetry algorithm matches individual speckle points across the multi-view image group, achieving higher accuracy than traditional window-based matching. The embodiment does not disturb the surface under measurement and offers high measurement accuracy and high spatial resolution.
Fig. 1 is a perspective side view of an object with a complex three-dimensional contour according to an embodiment of the present invention. As shown in fig. 1, the surface of the complex three-dimensional contour object is irregular.
Fig. 2 is a top view of a complex three-dimensional contour object according to an embodiment of the present invention. As shown in fig. 2, the top view of the complex-shaped object is a square with a side length of 200 mm, and the included angle between its sides and the vertical direction of the image is 19.5°. Regions 1, 2, 3 and 4 in the figure are raised areas of different heights, forming a topographic contrast with the unmarked regions. The bottom surface of the complex topography is a regular pattern.
Some embodiments of the invention are described in detail below with reference to the accompanying drawings. The features of the embodiments and examples described below may be combined with each other without conflict between the embodiments.
Fig. 3A is a schematic structural diagram of a three-dimensional profile measuring device based on neural network calibration and speckle tracking according to an embodiment of the present invention. As shown in fig. 3A, the apparatus in this embodiment may include:
the laser device comprises a laser 1, a diffractive optical element 2, a three-axis displacement table 4, a synchronous controller 6, a plurality of cameras 5 and a computer device 7.
The laser 1 is used for generating a laser beam which generates pseudo-randomly distributed speckles on the surface of an object 3 to be measured through the diffractive optical element 2.
The laser beam generated by the laser 1 is a visible beam of any color. Working with the diffractive optical element 2, the laser 1 produces a large number of pseudo-randomly distributed speckles on the surface of the object 3 to be measured; several hundred speckles may be generated. Specifically, in this embodiment a visible red laser beam with a wavelength of 650 nm may be used: with the laser 1 connected to a power supply, 230 pseudo-randomly distributed red speckles are generated on the surface of the object 3 through the diffractive optical element 2.
The three-axis displacement table 4 is used for placing the plane target 8 in the calibration process and controlling the plane target 8 to move along the height direction; the three-axis displacement table 4 is used for placing the object 3 to be measured in the measuring process.
Specifically, when the three-axis displacement table 4 moves the planar target 8 in the height direction, it controls the target to traverse the thickness of the object 3 to be measured. The three-axis displacement table 4 is also configured to adjust the position of the planar target 8 so that it remains within the shooting range of the plurality of cameras 5 throughout calibration, and to adjust the position of the object to be measured so that its surface lies within the shooting range of the plurality of cameras 5.
Specifically, during calibration the three-axis displacement table 4 moves the planar target 8 in the height direction so that it traverses the thickness of the surface to be measured. The target moves from low to high in fixed increments until its highest position reaches the highest point of the surface of the object 3 to be measured.
Specifically, the measurement range of the object 3 in the thickness direction can be divided into 21 layers: as the planar target 8 moves, the height of the three-axis displacement table 4 is raised 20 times in equal increments, and after each adjustment the target is photographed by the plurality of cameras 5 to obtain calibration images. The position of the highest layer corresponds to the highest point of the object to be measured. With 5 cameras, 5 × 21 = 105 calibration images are obtained.
Optionally, the position of the plane target 8 is adjusted to be always within the shooting range of the plurality of cameras 5 in the calibration process, that is, each camera can shoot a complete plane target image. Similarly, when the object to be measured is placed on the three-axis displacement table 4 before measurement, each camera 5 can capture a complete image of the object to be measured 3.
Controlling the planar target 8 and the object 3 to be measured with the three-axis displacement table 4 keeps the captured images complete and clear; by traversing the thickness of the object with the planar target 8, images of the target at each corresponding height are obtained, so that the calibration result matches the object 3 to be measured.
Fig. 3B is a schematic diagram of a planar target according to an embodiment of the present invention. The planar target 8 adopts a checkerboard pattern, and checkerboard nodes are used as characteristic points when the camera is calibrated.
The synchronous controller 6 is respectively connected with the plurality of cameras 5 through connecting wires and is used for controlling the plurality of cameras 5 to realize synchronous image acquisition.
Specifically, the plurality of cameras 5 and the synchronous controller 6 are connected by a connecting line, and the plurality of cameras 5 are synchronously triggered to shoot the plane target 8 or the object 3 to be measured by the synchronous controller 6. The plurality of cameras 5 and the synchronous controller 6 may be connected by a BNC (Bayonet Nut Connector) data line.
Specifically, the control signals of the plurality of cameras 5 are from the synchronous controller 6, and when the plurality of cameras 5 receive a trigger command given by the synchronous controller 6, the plurality of cameras 5 synchronously acquire images at the same time.
Specifically, when the synchronous controller 6 sends a shooting signal to the plurality of cameras 5, the cameras simultaneously capture images of the planar target 8 or the object 3 to be measured upon receiving it. The cameras 5 photograph the planar targets 8 at different heights during calibration to obtain calibration images, and photograph the speckle pattern on the surface of the object 3 during measurement to obtain speckle images. Specifically, before calibration the planar target 8 and the surface of the object 3 are photographed, the angle and focal length of each camera 5 are adjusted, and the positions of the planar target 8 and the object 3 are moved so that the image captured by each camera 5 is clear and complete.
Optionally, the number of the cameras 5 is three or more.
The cameras 5 may be high-resolution cameras, and at least three are used. Because the cameras differ in position and angle, their images of the same scene differ, and some viewing angles may suffer from problems such as occlusion. Multiple cameras are therefore required to photograph the planar target 8 and the object 3 from multiple angles, yielding multiple different images; after a series of calculations on these images, the reconstructed surface topography is closer to the real surface.
And the computer equipment 7 is used for acquiring the calibration image and the speckle image and performing post-processing to realize surface reconstruction of the object 3 to be measured.
Fig. 4 is a schematic flow chart of a three-dimensional profile measurement method based on neural network calibration and speckle tracking according to an embodiment of the present invention. The method in this embodiment may be implemented based on the apparatus provided in the foregoing embodiment. As shown in fig. 4, the method may include:
step 401, calibration images obtained by shooting plane targets with different heights by a plurality of cameras and speckle images obtained by shooting speckle shapes on the surface of an object to be detected are obtained.
Step 402, calibrating the cameras based on a neural network algorithm according to the obtained calibration images, and determining the mapping relation between the image planes of the cameras and the mapping relation between the pixel space formed by the image planes of the cameras and the physical space.
Step 403, performing speckle point cloud extraction on the obtained speckle images to obtain coordinates of the speckle point clouds in each pixel space, and transforming the coordinates onto the same common image plane using the mapping relations between the camera image planes.
And step 404, matching the speckle point cloud coordinates after mapping transformation one by one based on an ant colony particle tracking speed measurement algorithm to obtain the matching relation of each scattered spot on different camera image planes.
And 405, performing three-dimensional reconstruction on each matched speckle point based on the mapping relation established by the neural network to obtain a three-dimensional profile of the surface of the object to be measured.
Wherein the speckle point cloud is a set of scattered spots on the speckle image.
A neural network is an algorithm that emulates the learning process of a biological nervous system. Applied to camera calibration, it has a simple and stable input-output relationship and generalized high-order fitting capability, making it suitable for high-distortion scenes and multi-camera joint calibration. Fig. 5 is a schematic diagram of the neural network adopted in this embodiment, a cascaded error back-propagation network: each connection into a node stores a weight and a bias, and coordinate information passes from the input layer through the hidden layers, acted on by the weights and biases of each node, until it reaches the output layer. Training the neural network is the process of finding the optimal weights and biases from the samples.
Specifically, optionally, determining the mapping relation from the pixel space formed by the camera image planes to physical space and the mapping relations between the camera image planes based on a neural network algorithm includes:

taking planar-target feature points whose physical-space coordinates $(X_i, Y_i, Z_i)$ and camera image-plane coordinates $(u_i^{C,k}, v_i^{C,k})$ are known as samples, and training a spatial calibration neural network $M$ such that

$(X_i, Y_i, Z_i) = M(u_i^{C,1}, v_i^{C,1}, \ldots, u_i^{C,K}, v_i^{C,K})$,

where $k = 1, \ldots, K$, $K$ is the total number of cameras, the superscript $C$ denotes image-plane feature-point coordinates recorded during calibration, and $i$ indexes the planar-target feature points;

taking the feature points whose coordinates $(u_i^{C,k}, v_i^{C,k})$ on each image plane are known as samples, taking the image plane corresponding to $k = 1$ as the common image plane, and training neural networks $F_k$ such that

$(u_i^{C,1}, v_i^{C,1}) = F_k(u_i^{C,k}, v_i^{C,k})$.

The planar-target feature points may be taken as the four vertices of each white square on the planar target.

In this embodiment, the mapping $M$ from the pixel space formed by the camera image planes to physical space is obtained through the neural network algorithm. Specifically, in the spatial calibration stage, the cameras are set to the experimental state and the planar target, which carries a precisely printed feature lattice of known spatial coordinates, is placed at the plane to be tested. The cameras photograph the target, and the pixel-space coordinates $(u_i^{C,k}, v_i^{C,k})$ of the feature points in each picture are extracted by digital image processing. Taking the physical-space coordinates $(X_i, Y_i, Z_i)$ of the target feature points and their pixel-space coordinates $(u_i^{C,k}, v_i^{C,k})$ as samples, the mapping $M$ between pixel space and physical space is trained such that

$(X_i, Y_i, Z_i) = M(u_i^{C,1}, v_i^{C,1}, \ldots, u_i^{C,K}, v_i^{C,K})$.
Specifically, the training samples may be the pixel coordinates of the same feature point in each calibration image, with the feature point's three-dimensional physical-space coordinates as the label. By training the neural network model, the feature points' pixel-space coordinates on the image planes of cameras A, B, C, D and E are mapped to the physical space, yielding the mapping relationship M from the pixel space formed by the camera image planes to the physical space.
It should be noted that M is an inverse calibration mapping: it maps from the image planes back to the physical space, consistent with the direction of the three-dimensional reconstruction.
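To make the inverse-calibration idea concrete, here is a minimal back-propagation sketch in numpy. The synthetic projection functions, network size, learning rate and iteration count are illustrative assumptions, not the patent's configuration; the sketch only shows a one-hidden-layer network M learning to map stacked pixel coordinates (here from two hypothetical cameras) back to physical-space coordinates:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for calibration data: each 3-D target point (X, Y, Z) is
# seen by two hypothetical cameras as smooth, mildly nonlinear pixel coordinates.
n = 512
xyz = rng.uniform(-1, 1, size=(n, 3))
pix = np.column_stack([
    xyz[:, 0] + 0.1 * xyz[:, 2] ** 2,       # camera 1, u
    xyz[:, 1] - 0.1 * xyz[:, 2],            # camera 1, v
    0.9 * xyz[:, 0] - 0.2 * xyz[:, 2],      # camera 2, u
    1.1 * xyz[:, 1] + 0.1 * xyz[:, 2] ** 2, # camera 2, v
])

# One-hidden-layer network M: stacked pixel coordinates -> (X, Y, Z),
# trained by error back-propagation (batch gradient descent on MSE).
h, lr = 32, 0.05
W1 = rng.normal(0.0, 0.5, (4, h)); b1 = np.zeros(h)
W2 = rng.normal(0.0, 0.5, (h, 3)); b2 = np.zeros(3)
for _ in range(5000):
    a = np.tanh(pix @ W1 + b1)              # hidden-layer activations
    err = (a @ W2 + b2) - xyz               # output-layer error
    gW2, gb2 = a.T @ err / n, err.mean(0)   # gradients, output layer
    da = (err @ W2.T) * (1.0 - a ** 2)      # error back-propagated through tanh
    gW1, gb1 = pix.T @ da / n, da.mean(0)   # gradients, hidden layer
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

rmse = np.sqrt(np.mean(((np.tanh(pix @ W1 + b1) @ W2 + b2) - xyz) ** 2))
print(f"training RMSE: {rmse:.4f}")
```

On this synthetic data the residual shrinks well below the scale of the coordinates, illustrating why a network with high-order fitting capability can absorb lens distortion that a pinhole model cannot.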
Similarly, the mapping relationships F_k between the camera image planes are obtained by training with a neural network algorithm. Specifically, for the image of the plane target on each camera image plane, the image plane of one viewing angle (for example k = 1, where k = 1, ..., K and K is the total number of cameras in the multi-view vision system) is taken as the common image plane. Taking the feature-lattice coordinates on the reference image plane and on the remaining image planes as samples, the mappings F_k from each image plane to the common image plane are obtained through training, such that (x_i^{C,1}, y_i^{C,1}) = F_k(x_i^{C,k}, y_i^{C,k}), where the superscript C denotes the coordinates of the feature points on each image plane during calibration.
Specifically, the training samples may be the coordinates of the feature points in the calibration images, with their coordinates on the common image plane as labels. By training the neural network model, the pixel-space coordinates on the image planes of cameras B, C, D and E are mapped onto the image plane of camera A, yielding the mapping between the coordinate values on each camera image plane and those on the common image plane.
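The image-plane-to-common-plane mapping F_k can be illustrated with a simpler stand-in: a least-squares affine fit between shared calibration feature points. The patent trains a neural network here, which can also absorb nonlinear distortion; the affine version below, on synthetic data, is only a linear sketch of the same idea:

```python
import numpy as np

def fit_plane_mapping(src_pts, dst_pts):
    """Least-squares affine map from one camera's image plane to the common
    image plane, fitted on shared calibration feature points."""
    A = np.column_stack([src_pts, np.ones(len(src_pts))])  # rows [u, v, 1]
    coef, *_ = np.linalg.lstsq(A, dst_pts, rcond=None)     # 3x2 coefficients
    return lambda p: np.column_stack([p, np.ones(len(p))]) @ coef

# Synthetic feature lattice as seen by camera B, and the same points on the
# common image plane of camera A (here a pure rotation plus a shift).
theta = np.deg2rad(10.0)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
grid = np.array([[u, v] for u in range(5) for v in range(5)], dtype=float)
common = grid @ R.T + np.array([2.0, -1.0])

F_b = fit_plane_mapping(grid, common)       # stands in for the trained F_k
max_err = np.max(np.abs(F_b(grid) - common))
print(f"max mapping error: {max_err:.2e}")
```

For an exactly affine relation the fit recovers the mapping to machine precision; with real lens distortion the residual would motivate the nonlinear network the patent uses instead.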
For example, a plane target is used to calibrate five cameras numbered A, B, C, D and E, with the image plane of camera A taken as the common image plane. Because the cameras differ in position and shooting angle, the coordinates of the same in-plane target feature point differ between the images captured by different cameras. For example, a certain feature point of the plane target may have pixel-space coordinates (1, 1) on the image plane of camera A, and pixel-space coordinates (1, 2), (2, 1), (2, 2) and (1, 1) on the image planes of cameras B, C, D and E, respectively. Training on the pixel-space coordinates on the image plane of camera A together with those on the other four image planes, the neural network algorithm yields the mappings F_k from each camera image plane to the reference plane.
In this embodiment, the purpose of training the neural network model is to find the correspondence between the coordinates of the same feature point on each camera image plane and to preliminarily reduce the coordinate differences caused by the different shooting angles of the cameras, thereby preparing for feature point matching.
Optionally, the speckle point cloud extraction is performed on the obtained speckle image to obtain coordinates of the speckle point cloud in each pixel space, including:
selecting the surface region of the object to be measured in the measurement image as the region of interest, and setting the gray value of the other regions to 0;

and, based on a set gray threshold, binarizing the region of interest, searching for connected domains, and determining the pixel-space coordinates of the speckle points from the horizontal and vertical centroid coordinates of each connected domain.
Fig. 6A is a schematic diagram of raw speckle images captured by multiple cameras according to an embodiment of the present invention. As shown in fig. 6A, the picture taken by the camera contains the entire surface of the object to be measured, and the edges of the surface to be measured are visible in the captured image.
Fig. 6B is an image after a region of interest is selected according to an embodiment of the present invention. As shown in fig. 6B, the original image contains both the region of the object to be measured and other regions. To distinguish the two, the surface Region Of Interest (ROI) is taken to be the surface region of the object to be measured, and the gray value of the other regions is set to 0. This highlights the region of the object to be measured and discards the rest. The ROI may be selected by an edge detection algorithm or manually.
Fig. 6C is an image obtained after finding connected domains on the surface of the object to be measured according to the embodiment of the present invention. As shown in fig. 6C, on the basis of fig. 6B, a gray threshold is set for the region of interest and its pixels are binarized to find the connected domains in the image. For example, the gray threshold for binarization may be set to 35.
Fig. 6D is a schematic diagram of the extracted speckle point cloud coordinates provided in the embodiment of the present invention. As shown in fig. 6D, on the basis of fig. 6C, the horizontal and vertical centroid coordinates of each connected domain can be extracted as the speckle point cloud coordinates through a function that measures image region properties in the image processing program. The black crosses in the figure are the speckle points extracted from the region of interest; their coordinates are the centroid coordinates of the corresponding connected domains.
By selecting a region of interest, setting a gray threshold and binarizing the measured image, the pixel-space coordinates obtained for the speckle points are robust and accurate.
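The ROI / threshold / connected-domain / centroid pipeline above can be sketched as follows. The frame and threshold are synthetic, and a small flood-fill labeling stands in for the image-processing program's region-property function:

```python
import numpy as np

def extract_speckle_centroids(img, roi_mask, gray_threshold=35):
    """Binarize the ROI, label 4-connected components, return centroids (row, col)."""
    work = np.where(roi_mask, img, 0)       # gray value outside the ROI set to 0
    binary = work > gray_threshold          # binarization by gray threshold
    labels = np.zeros(binary.shape, dtype=int)
    centroids, current = [], 0
    for start in zip(*np.nonzero(binary)):
        if labels[start]:
            continue                        # pixel already belongs to a component
        current += 1
        stack, pixels = [start], []
        labels[start] = current
        while stack:                        # flood fill one connected domain
            r, c = stack.pop()
            pixels.append((r, c))
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if (0 <= rr < binary.shape[0] and 0 <= cc < binary.shape[1]
                        and binary[rr, cc] and not labels[rr, cc]):
                    labels[rr, cc] = current
                    stack.append((rr, cc))
        # Centroid = mean of the member pixel coordinates.
        centroids.append(np.array(pixels, dtype=float).mean(axis=0))
    return np.array(centroids)

# Tiny synthetic frame: two bright 2x2 speckles on a dark background.
frame = np.zeros((10, 10), dtype=np.uint8)
frame[2:4, 2:4] = 200
frame[6:8, 5:7] = 180
roi = np.ones_like(frame, dtype=bool)
print(extract_speckle_centroids(frame, roi))  # -> [[2.5, 2.5], [6.5, 5.5]]
```

In practice the same result is usually obtained with a library routine (e.g. a connected-component or region-properties function) rather than a hand-rolled flood fill.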
Particle Tracking Velocimetry (PTV) is a high-resolution measurement technique for fluid velocity fields; its core algorithm matches the tracer particles in a pair of images one by one according to certain matching criteria. Fig. 7 is a schematic diagram of an ant colony particle tracking velocimetry technique according to an embodiment of the present invention. The method uses a hybrid ant colony matching algorithm, shown schematically in fig. 7: the algorithm mixes two matching criteria, such as the shortest cross-frame particle displacement and the most similar particle swarm distribution pattern, to construct an optimization objective function, which is then solved with the ant colony algorithm to match the tracer particles in the two frames. The invention uses this algorithm to match speckle points across camera images of different viewing angles, based on the following assumption: the positional relationships and distribution pattern of the speckle points on the image plane do not change significantly with the viewing angle.
Specifically, transforming the speckle point cloud coordinates to the same reference image plane through the mapping relationships between the camera image planes, and matching the transformed speckle point cloud coordinates one by one based on the ant colony particle tracking velocimetry algorithm to obtain the matching relationship of each speckle point on the different camera image planes, includes:

applying the trained mappings F_k to project the speckle point coordinates (x_i^{R,k}, y_i^{R,k}) on each image plane onto the common image plane corresponding to k = 1, i.e. (x̃_i^{R,1}, ỹ_i^{R,1}) = F_k(x_i^{R,k}, y_i^{R,k}), where k takes the values corresponding to the image planes other than the common image plane, and the superscript R denotes the coordinates of the speckle points on each camera image plane during measurement;

taking the minimum inter-image-plane displacement criterion and the speckle point cloud distribution-pattern similarity criterion as a whole, constructing a hybrid objective function, and performing a global minimization with the ant colony algorithm to obtain the matching relationship between (x_i^{R,1}, y_i^{R,1}) and (x̃_j^{R,1}, ỹ_j^{R,1}). The minimum image-plane displacement criterion judges a match by the smallest displacement from a speckle point on the reference image plane to a mapped speckle point; the distribution-pattern similarity criterion judges a match by the similarity between the distribution of the speckle points on the reference image plane and that of the point cloud mapped onto it.
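A greatly simplified sketch of the minimum-displacement criterion alone follows: greedy one-to-one matching between reference-plane and projected speckle clouds. The patent's method additionally scores distribution-pattern similarity and minimizes the hybrid objective globally with an ant colony algorithm; the synthetic points below are illustrative:

```python
import numpy as np

def match_by_min_displacement(ref_pts, proj_pts):
    """Greedy one-to-one matching: repeatedly pair the globally closest
    (reference, projected) speckle points, i.e. the minimum cross-plane
    displacement criterion only."""
    d = np.linalg.norm(ref_pts[:, None, :] - proj_pts[None, :, :], axis=2)
    matches = {}
    while len(matches) < min(len(ref_pts), len(proj_pts)):
        i, j = np.unravel_index(np.argmin(d), d.shape)
        matches[j] = i          # projected point j <-> reference point i
        d[i, :] = np.inf        # remove both points from further matching
        d[:, j] = np.inf
    return matches

ref = np.array([[0.0, 0.0], [5.0, 5.0], [9.0, 1.0]])
proj = np.array([[5.2, 4.9], [0.1, -0.1], [8.8, 1.2]])
print(match_by_min_displacement(ref, proj))  # -> {1: 0, 0: 1, 2: 2}
```

Greedy nearest matching fails when displacements are ambiguous, which is exactly why the patent adds the distribution-pattern term and a global optimizer.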
Optionally, the particle tracking velocimetry algorithm is applied to the mapped speckle point coordinates (x̃_i^{R,1}, ỹ_i^{R,1}) (k = 2, ..., K) and the speckle point coordinates (x_i^{R,1}, y_i^{R,1}) of the reference image plane. The minimum image-plane displacement criterion and the speckle point cloud distribution-pattern similarity criterion are modeled as a whole to construct the hybrid objective function, and the ant colony particle tracking algorithm performs a global minimization over the speckle points to obtain the matching relationship between the mapped speckle point coordinates and those of the reference plane.
The matching in the embodiment of the invention is thus posed and solved as a global minimization problem.
Specifically, when constructing the objective function for the speckle point coordinates on the reference plane and those mapped onto it, the hybrid objective function may combine the sum of the speckle point displacements with the similarity of the speckle distribution patterns, and its optimal solution is then sought. The displacement of each speckle point is the displacement between the coordinates mapped from an image plane onto the reference plane and the corresponding speckle point coordinates on the reference plane; the distribution-pattern similarity is the degree of similarity between the speckle point clouds. The matching relationship between the reference-plane speckle point coordinates and those of the other image planes is thereby obtained.
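The hybrid objective can be sketched as a cost that a global optimizer (in the patent, the ant colony algorithm) would minimize over candidate matchings. The weight and the pairwise-distance dissimilarity term below are illustrative choices, not the patent's exact formulation:

```python
import numpy as np

def hybrid_cost(ref_pts, proj_pts, perm, w=0.5):
    """Cost of a candidate matching proj_pts[j] <-> ref_pts[perm[j]]:
    w * (total cross-plane displacement)
    + (1 - w) * (dissimilarity of the two point-cloud distribution patterns,
                 measured by comparing pairwise-distance matrices)."""
    matched = ref_pts[list(perm)]
    disp = np.linalg.norm(matched - proj_pts, axis=1).sum()
    d_ref = np.linalg.norm(matched[:, None] - matched[None, :], axis=2)
    d_proj = np.linalg.norm(proj_pts[:, None] - proj_pts[None, :], axis=2)
    shape = np.abs(d_ref - d_proj).mean()
    return w * disp + (1 - w) * shape

ref = np.array([[0.0, 0.0], [5.0, 5.0], [9.0, 1.0]])
proj = np.array([[0.1, -0.1], [5.2, 4.9], [8.8, 1.2]])
# Score a few candidate matchings; the correct one has the lowest cost.
costs = {p: hybrid_cost(ref, proj, p) for p in [(0, 1, 2), (1, 0, 2), (2, 1, 0)]}
best = min(costs, key=costs.get)
print(best)  # -> (0, 1, 2)
```

An ant colony optimizer explores such candidate matchings stochastically, reinforcing low-cost assignments instead of enumerating all permutations.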
By applying the neural network algorithm and the ant colony particle tracking velocimetry technique, the number of cameras can be increased effectively, providing redundant information for matching and further improving the accuracy and spatial resolution of the surface measurement.
Optionally, based on the mapping relationship established by the neural network, performing three-dimensional reconstruction on each matched speckle point to obtain a three-dimensional profile of the surface of the object to be measured, including:
inputting the matched speckle point coordinates (x_i^{R,k}, y_i^{R,k}) on all camera image planes into the spatial calibration neural network M, whose output (X_i, Y_i, Z_i) is the three-dimensional coordinates of the speckle points in physical space;
performing surface fitting on the reconstructed three-dimensional point cloud, filtering the raw data according to the fitting root-mean-square error of each point, and screening out points whose error exceeds a preset threshold;
and performing curved surface interpolation on the residual speckle point cloud to obtain a three-dimensional profile of the surface of the object to be measured.
In this embodiment, the coordinate values (x_i^{R,k}, y_i^{R,k}), k = 1, ..., K, of a speckle point in all K camera pixel spaces are fed to the input layer of the trained neural network M, and the output layer yields the three-dimensional physical-space coordinates (X_i, Y_i, Z_i) of that point.
Fig. 8A is an original three-dimensional space coordinate diagram of a speckle point cloud on the surface of an object to be measured according to an embodiment of the present invention.
Fig. 8B is a three-dimensional space coordinate diagram of the speckle point cloud on the surface of the object to be measured after processing. As shown in fig. 8B, the speckle point cloud is leveled by a coordinate transformation, a surface is fitted by applying a fitting function, a threshold is set according to the root-mean-square error of each point, and bad points scattered above or below the surface can be screened out.
Specifically, a Lowess model can be used to fit a surface to the speckle point cloud. After the fitted surface is obtained, the distance from each point to the surface is calculated, and a threshold, for example 2.5, is set according to the root-mean-square error; when the error of a speckle point is greater than 2.5, it is screened out as a bad point.
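The bad-point screening step can be sketched with an ordinary least-squares quadratic surface standing in for the Lowess fit; treating the threshold as 2.5 times the RMS fitting error is one reading of the threshold described above, and the data are synthetic:

```python
import numpy as np

def filter_bad_points(pts, k=2.5):
    """Fit z = f(x, y) with a least-squares quadratic surface and drop points
    whose absolute residual exceeds k times the root-mean-square fitting error."""
    x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]
    A = np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])
    coef, *_ = np.linalg.lstsq(A, z, rcond=None)
    resid = z - A @ coef
    rms = np.sqrt(np.mean(resid**2))
    keep = np.abs(resid) <= k * rms        # screen out scattered bad points
    return pts[keep]

rng = np.random.default_rng(1)
xy = rng.uniform(-1, 1, size=(200, 2))
z = 0.5 * xy[:, 0]**2 - 0.3 * xy[:, 1] + rng.normal(0, 0.01, 200)
pts = np.column_stack([xy, z])
pts[0, 2] += 5.0                           # inject one gross outlier
print(len(filter_bad_points(pts)))
```

The surviving point cloud is then passed to surface interpolation; a local fit such as Lowess behaves better than a single global polynomial on complex topographies.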
Fig. 8C is a three-dimensional contour reconstruction diagram of the object to be measured according to the embodiment of the present invention. Fig. 8D is a top-view contour diagram of the reconstructed three-dimensional profile of the object to be measured according to an embodiment of the present invention. As shown in fig. 8C, interpolated surface fitting is applied to the speckle point cloud coordinates remaining after leveling and bad-point screening, yielding the final three-dimensional profile reconstruction of the surface of the object to be measured. Likewise, from the reconstructed three-dimensional profile, the top-view contour map shown in fig. 8D can be obtained. The contour lines in the figure are closed curves connecting adjacent points of equal height on the complex topography, and the number marked on a contour line is its height. For example, a contour marked 25 lies 25 mm above the horizontal plane.
Specifically, when an object to be measured whose surface is described by a given analytic function, with wave amplitudes ranging from 2.6 to 34 mm, was measured, comparison of the obtained coordinate values with the theoretical values shows that the three-dimensional profile measuring method and device based on neural network calibration and speckle tracking achieve high precision, with a root-mean-square error percentage of 1.9% in the experimental result.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This invention is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.
It will be understood that the invention is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the invention is limited only by the appended claims.

Claims (8)

1. A three-dimensional contour measurement method based on neural network calibration and speckle tracking is characterized by comprising the following steps:
acquiring calibration images obtained by shooting plane targets at different heights with a plurality of cameras, and speckle images obtained by shooting the speckle morphology on the surface of the object to be measured;
determining a mapping relation between a pixel space formed by each image plane and a physical space and a mapping relation between each camera image plane based on a neural network algorithm according to the acquired calibration image;
performing speckle point cloud extraction on the obtained speckle images to obtain coordinates of the speckle point clouds in each pixel space, and transforming the coordinates to the same reference image plane by using the mapping relation between the camera image planes;
matching the speckle point cloud coordinates after mapping transformation one by one based on an ant colony particle tracking speed measurement algorithm to obtain the matching relation of each speckle point on different camera image planes;
and performing three-dimensional reconstruction on each matched speckle point based on the mapping relation established by the neural network to obtain the three-dimensional profile of the surface of the object to be measured.
2. The method of claim 1, wherein determining the mapping relationship between the pixel space composed by each camera image plane to the physical space and the mapping relationship between each camera image plane based on a neural network algorithm comprises:
using physical-space coordinates (X_i, Y_i, Z_i) and camera image-plane coordinates (x_i^{C,k}, y_i^{C,k}) of known plane target feature points as samples, and training a spatial calibration neural network M such that (X_i, Y_i, Z_i) = M(x_i^{C,1}, y_i^{C,1}, ..., x_i^{C,K}, y_i^{C,K}), wherein k = 1, ..., K, K is the total number of cameras, the superscript C denotes the coordinates of each image-plane feature point in the calibration process, and i denotes the i-th plane target feature point;

using coordinates (x_i^{C,k}, y_i^{C,k}) of known plane target feature points on each image plane as samples, taking the image plane corresponding to k = 1 as the common image plane, and training neural networks F_k such that (x_i^{C,1}, y_i^{C,1}) = F_k(x_i^{C,k}, y_i^{C,k}).
3. The method of claim 1, wherein the speckle point cloud extraction of the obtained speckle images to obtain coordinates of the speckle point cloud in each pixel space comprises:
selecting the surface region of the object to be measured in the measurement image as the region of interest, and setting the gray value of the other regions to 0;

and, based on a set gray threshold, binarizing the region of interest, searching for connected domains, and determining the pixel-space coordinates of the speckle points from the horizontal and vertical centroid coordinates of each connected domain.
4. The method of claim 1, wherein matching speckle point cloud coordinates after mapping transformation one by one based on an ant colony particle tracking velocimetry algorithm to obtain a matching relationship of each speckle point on different camera image planes comprises:
projecting the speckle point coordinates (x_i^{R,k}, y_i^{R,k}) on each image plane onto the common image plane corresponding to k = 1, i.e. (x̃_i^{R,1}, ỹ_i^{R,1}) = F_k(x_i^{R,k}, y_i^{R,k}), wherein k takes the values corresponding to the image planes other than the common image plane, and the superscript R denotes the coordinates of the speckle points on each camera image plane in the measuring process;

taking the minimum inter-image-plane displacement criterion and the speckle point cloud distribution-pattern similarity criterion as a whole to construct a hybrid objective function, and performing a global minimization with the ant colony particle tracking velocimetry algorithm to obtain the matching relationship between the mapped speckle point coordinates and those of the reference image plane.
5. The method of claim 1, wherein the three-dimensional reconstruction of each matched speckle point based on the mapping relationship established by the neural network to obtain the three-dimensional profile of the surface of the object to be measured comprises:
inputting the matched speckle point coordinates (x_i^{R,k}, y_i^{R,k}) on all camera image planes into the spatial calibration neural network M, the output (X_i, Y_i, Z_i) being the three-dimensional coordinates of the speckle points in physical space;
performing surface fitting on the reconstructed three-dimensional point cloud, filtering the raw data according to the fitting root-mean-square error of each point, and screening out points whose error exceeds a preset threshold;
and performing curved surface interpolation on the residual speckle point cloud to obtain a three-dimensional profile of the surface of the object to be measured.
6. A three-dimensional profile measuring device based on neural network calibration and speckle tracking is characterized by comprising:
the system comprises a laser, a diffraction optical element, a three-axis displacement table, a synchronous controller, a plurality of cameras and computer equipment;
the laser is used for generating a laser beam, and the laser beam generates pseudo-randomly distributed speckles on the surface of an object to be measured through a diffraction optical element;
the three-axis displacement table is used for placing a plane target in the calibration process and controlling the plane target to move along the height direction; the three-axis displacement table is used for placing an object to be measured in the measuring process;
the synchronous controller is respectively connected with the cameras through connecting wires and is used for controlling the cameras to realize synchronous acquisition of images;
the camera is used for shooting plane targets with different heights in the calibration process to obtain a calibration image; the camera is also used for shooting the speckle appearance of the surface of the object to be measured in the measuring process to obtain a speckle image;
the computer equipment is used for acquiring the calibration image and the speckle image and executing the measuring method of claim 1 to reconstruct the three-dimensional profile of the surface of the object to be measured.
7. The apparatus of claim 6, wherein the three-axis translation stage is configured to control the movement of the planar target in the elevation direction, and is further configured to: controlling the plane target to move along the height direction and traversing the thickness of the object to be measured;
the three-axis displacement stage is further configured to: adjusting the position of the plane target to enable the plane target to be always positioned in the shooting range of the cameras in the calibration process; and adjusting the position of the object to be measured to enable the surface of the object to be measured to be always positioned in the shooting range of the cameras in the measuring process.
8. The apparatus of claim 6, wherein the number of cameras is three or more.
CN202110501533.8A 2021-05-08 2021-05-08 Three-dimensional contour measuring method and device based on neural network calibration and speckle tracking Active CN113446957B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110501533.8A CN113446957B (en) 2021-05-08 2021-05-08 Three-dimensional contour measuring method and device based on neural network calibration and speckle tracking

Publications (2)

Publication Number Publication Date
CN113446957A true CN113446957A (en) 2021-09-28
CN113446957B CN113446957B (en) 2022-06-17

Family

ID=77809719

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110501533.8A Active CN113446957B (en) 2021-05-08 2021-05-08 Three-dimensional contour measuring method and device based on neural network calibration and speckle tracking

Country Status (1)

Country Link
CN (1) CN113446957B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007033040A (en) * 2005-07-22 2007-02-08 Moritex Corp Method and device for calibrating optical head part in three-dimensional shape measuring instrument by optical cutting method
CN106127789A (en) * 2016-07-04 2016-11-16 湖南科技大学 Stereoscopic vision scaling method in conjunction with neutral net Yu virtual target
CN106595528A (en) * 2016-11-10 2017-04-26 华中科技大学 Digital speckle-based telecentric microscopic binocular stereoscopic vision measurement method
EP3232151A1 (en) * 2016-01-22 2017-10-18 Beijing Qingying Machine Visual Technology Co., Ltd. Three-dimensional measurement system and measurement method for feature point based on plane of four-camera set array
CN107941168A (en) * 2018-01-17 2018-04-20 杨佳苗 Reflective stripe surface shape measuring method and device based on speckle position calibration
CN112036072A (en) * 2020-07-10 2020-12-04 北京航空航天大学 Three-dimensional tracer particle matching method and velocity field measuring device

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114494455A (en) * 2022-01-07 2022-05-13 西北工业大学 High-precision displacement measuring method under large visual angle
CN114494455B (en) * 2022-01-07 2024-04-05 西北工业大学 High-precision displacement measurement method under large visual angle
CN114708333A (en) * 2022-03-08 2022-07-05 智道网联科技(北京)有限公司 Method and device for generating external reference model of automatic calibration camera
CN114708333B (en) * 2022-03-08 2024-05-31 智道网联科技(北京)有限公司 Method and device for generating automatic calibration camera external parameter model
CN114859072A (en) * 2022-05-11 2022-08-05 北京航空航天大学 Stereoscopic particle tracking speed measuring method

Also Published As

Publication number Publication date
CN113446957B (en) 2022-06-17

Similar Documents

Publication Publication Date Title
CN113446957B (en) Three-dimensional contour measuring method and device based on neural network calibration and speckle tracking
US9965870B2 (en) Camera calibration method using a calibration target
CN104156972B (en) Perspective imaging method based on laser scanning distance measuring instrument and multiple cameras
CN101299270B (en) Multiple video cameras synchronous quick calibration method in three-dimensional scanning system
CN103649674B (en) Measuring equipment and messaging device
US5852672A (en) Image system for three dimensional, 360 DEGREE, time sequence surface mapping of moving objects
Ivanov et al. Computer stereo plotting for 3-D reconstruction of a maize canopy
CN105069743B (en) Detector splices the method for real time image registration
CN108986070B (en) Rock crack propagation experiment monitoring method based on high-speed video measurement
CN104537707B (en) Image space type stereoscopic vision moves real-time measurement system online
EP2104365A1 (en) Method and apparatus for rapid three-dimensional restoration
CN104616292A (en) Monocular vision measurement method based on global homography matrix
CN109859272A (en) A kind of auto-focusing binocular camera scaling method and device
CN110334701B (en) Data acquisition method based on deep learning and multi-vision in digital twin environment
CN107025663A (en) It is used for clutter points-scoring system and method that 3D point cloud is matched in vision system
CN109712232B (en) Object surface contour three-dimensional imaging method based on light field
CN109523595A (en) A kind of architectural engineering straight line corner angle spacing vision measuring method
CN114998499A (en) Binocular three-dimensional reconstruction method and system based on line laser galvanometer scanning
CN108010125A (en) True scale three-dimensional reconstruction system and method based on line structure light and image information
Dekiff et al. Three-dimensional data acquisition by digital correlation of projected speckle patterns
CN112365545A (en) Calibration method of laser radar and visible light camera based on large-plane composite target
Li et al. Laser scanning based three dimensional measurement of vegetation canopy structure
CN113012238B (en) Method for quick calibration and data fusion of multi-depth camera
CN112525106B (en) Three-phase machine cooperative laser-based 3D detection method and device
Sulej et al. Improvement of accuracy of the membrane shape mapping of the artificial ventricle by eliminating optical distortion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant