CN117670990A - Positioning method and device of three-dimensional camera, electronic equipment and storage medium - Google Patents


Info

Publication number
CN117670990A
Authority
CN
China
Prior art keywords: image, point, coordinate system, speckle, camera
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Application number
CN202311541826.4A
Other languages
Chinese (zh)
Inventor
王辰
王晓南
成剑华
任关宝
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan Zhongguan Automation Technology Co ltd
Original Assignee
Wuhan Zhongguan Automation Technology Co ltd
Application filed by Wuhan Zhongguan Automation Technology Co ltd filed Critical Wuhan Zhongguan Automation Technology Co ltd

Landscapes

  • Length Measuring Devices By Optical Means (AREA)

Abstract

The application provides a positioning method and apparatus for a three-dimensional camera, an electronic device, and a storage medium. The method includes: acquiring at least one speckle feature point in a current image shot by a positioning camera at a current position; matching the speckle feature points in the current image with the speckle feature points in a calibration image to obtain at least one target speckle feature point and its three-dimensional coordinates; determining target pose information of the positioning camera when shooting the current image according to the two-dimensional coordinates of each target speckle feature point in the current image, the three-dimensional coordinates of each target speckle feature point, and the initial pose information of the positioning camera when shooting the current image; and determining the target pose information of the three-dimensional camera at the current position according to the target pose information of the positioning camera and the pose conversion information between the three-dimensional camera and the positioning camera at the current position. Compared with prior-art point cloud registration, the method positions the three-dimensional camera more accurately and efficiently.

Description

Positioning method and device of three-dimensional camera, electronic equipment and storage medium
Technical Field
The present invention relates to the field of three-dimensional scanning technologies, and in particular, to a positioning method and apparatus for a three-dimensional camera, an electronic device, and a storage medium.
Background
Three-dimensional scanning is an advanced technology integrating optical, mechanical, electronic, and computer techniques, and is mainly used to scan the spatial shape, structure, and color of an object to obtain the spatial coordinates of its surface. To improve the accuracy of marker-free scanning (scanning without marker points attached to the object) at a specific position, scanners incorporating three-dimensional cameras have emerged.
Currently, a scanner based on a three-dimensional camera generally requires external equipment to position the three-dimensional camera in real time, for example a mechanical arm or an optical tracker combined with a three-dimensional spherical scanner, which greatly increases cost. Alternatively, point clouds can be stitched directly by methods such as point cloud registration without external equipment, but the three-dimensional reconstruction accuracy then depends on the accuracy of the point cloud stitching, and stitching errors accumulate easily.
Therefore, a positioning method of a three-dimensional camera is needed to improve the scanning efficiency and positioning accuracy of the three-dimensional camera.
Disclosure of Invention
The present application aims to provide a positioning method and apparatus for a three-dimensional camera, an electronic device, and a storage medium, so as to overcome the above defects in the prior art and improve the positioning accuracy and scanning efficiency of the three-dimensional camera.
In order to achieve the above purpose, the technical solution adopted in the embodiment of the present application is as follows:
in a first aspect, an embodiment of the present application provides a positioning method of a three-dimensional camera, where the method includes:
acquiring at least one speckle characteristic point in a current image shot by a positioning camera at a current position;
matching the speckle characteristic points in the current image with the speckle characteristic points in a calibration image to obtain at least one target speckle characteristic point and the three-dimensional coordinates of the target speckle characteristic point, wherein the calibration image is shot by the positioning camera at the current position in a calibration stage, and the field of view of the positioning camera contains a plurality of speckles;
determining target pose information of the positioning camera when the current image is shot according to two-dimensional coordinates of each target speckle characteristic point in the current image, three-dimensional coordinates of each target speckle characteristic point and initial pose information of the positioning camera when the current image is shot;
and determining the target pose information of the three-dimensional camera at the current position according to the target pose information of the positioning camera and pose conversion information of the three-dimensional camera and the positioning camera at the current position.
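The last step of the method — transferring the positioning camera's pose to the three-dimensional camera — reduces to composing 4×4 homogeneous transforms. The following Python/numpy sketch is illustrative only, not the patented implementation; the names `T_world_loc` (positioning camera's pose in the standard scale coordinate system) and `T_loc_3d` (positioning-camera-from-3D-camera conversion) are assumptions:

```python
import numpy as np

def pose_to_matrix(R, t):
    """Pack a 3x3 rotation and a 3-vector translation into a 4x4 homogeneous transform."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def three_d_camera_pose(T_world_loc, T_loc_3d):
    """Compose the positioning camera's world pose with the fixed camera-to-camera
    conversion to obtain the 3D camera's pose in the same (world) frame."""
    return T_world_loc @ T_loc_3d

# toy example: positioning camera rotated 90 degrees about z and translated
Rz = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
T_world_loc = pose_to_matrix(Rz, np.array([1.0, 2.0, 0.0]))
T_loc_3d = pose_to_matrix(np.eye(3), np.array([0.1, 0.0, 0.0]))  # small rigid offset
T_world_3d = three_d_camera_pose(T_world_loc, T_loc_3d)
```

Because the two cameras are rigidly connected, `T_loc_3d` is constant and only needs to be calibrated once per position, as described for the calibration point control field.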
Optionally, the matching the speckle feature points with the speckle feature points in the calibration image to obtain at least one target speckle feature point and the three-dimensional coordinates of the target speckle feature point includes:
extracting identifiers and keypoint descriptors of the speckle feature points in the current image;
searching the calibration image for a target speckle feature point whose identifier and keypoint descriptor match those of a speckle feature point, and taking the three-dimensional coordinates of the target speckle feature point in the calibration image as the three-dimensional coordinates of the target speckle feature point, wherein the three-dimensional coordinates of each point in the calibration image are coordinates in a standard scale coordinate system.
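One common way to realize the descriptor search described above is brute-force nearest-neighbour matching with Lowe's ratio test; the patent does not specify its identifier-plus-descriptor scheme in detail, so the sketch below is a generic Python/numpy illustration with toy 4-dimensional descriptors:

```python
import numpy as np

def match_descriptors(desc_cur, desc_cal, ratio=0.8):
    """Nearest-neighbour matching of descriptor vectors with Lowe's ratio test.
    Returns a list of (current_index, calibration_index) pairs."""
    matches = []
    for i, d in enumerate(desc_cur):
        dists = np.linalg.norm(desc_cal - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        # accept only if the best match is clearly better than the runner-up
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches

# toy descriptors: rows are 4-D feature vectors
cal = np.array([[1., 0., 0., 0.], [0., 1., 0., 0.], [0., 0., 1., 0.]])
cur = np.array([[0.05, 0.98, 0.01, 0.0],   # close to cal[1]
                [0.6, 0.6, 0.0, 0.0]])     # ambiguous -> rejected by ratio test
pairs = match_descriptors(cur, cal)
```

The ratio test discards ambiguous speckle points, which matters here because speckle patterns can contain locally similar structures.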
Optionally, before the matching of the speckle feature points with the speckle feature points in the calibration image, the method includes:
acquiring a plurality of first images of a speckle control field with a standard ruler and speckles, which are shot by the positioning camera, wherein the standard ruler comprises a plurality of mark points, and each first image comprises a mark point and a speckle characteristic point;
according to each first image, determining at least one group of feature point homonymous point pairs, and determining three-dimensional coordinates of each speckle feature point in each homonymous point pair in a reference coordinate system and pose information of each first image in the reference coordinate system;
determining at least one group of mark point homonymous point pairs according to each first image, determining three-dimensional coordinates of each mark point in each homonymous point pair in the reference coordinate system, and converting the three-dimensional coordinates of each speckle feature point in the reference coordinate system into three-dimensional coordinates in the standard scale coordinate system according to the three-dimensional coordinates of each mark point in the reference coordinate system and in the standard scale coordinate system;
determining pose information of each first image under a standard scale coordinate system according to pose information of each first image under a reference coordinate system;
and obtaining and storing target three-dimensional coordinates of the speckle characteristic points and target pose information of the first images under the standard scale coordinate system according to the three-dimensional coordinates of the speckle characteristic points under the standard scale coordinate system and the pose information of the first images under the standard scale coordinate system.
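In the calibration steps above, the three-dimensional coordinates of homonymous speckle feature points are recovered from two views; a classical way to do this is linear (DLT) triangulation. The sketch below assumes known 3×4 projection matrices `P1`, `P2` and noise-free toy geometry — all names and values are illustrative, not taken from the patent:

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Linear (DLT) triangulation: recover the 3D point X such that
    x1 ~ P1 @ X and x2 ~ P2 @ X, where P1, P2 are 3x4 projection matrices
    and x1, x2 are the 2D coordinates of a homonymous point pair."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)       # null vector of A is the homogeneous solution
    X = Vt[-1]
    return X[:3] / X[3]

# toy setup: identity camera and a second camera translated along x
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
X_rec = triangulate_dlt(P1, P2, x1, x2)
```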
Optionally, the determining at least one group of feature point homonymous point pairs according to each first image, and determining three-dimensional coordinates of each speckle feature point in each homonymous point pair in a reference coordinate system and pose information of each first image in the reference coordinate system, includes:
traversing the plurality of first images, and, for the traversed current first image, matching speckle feature points in the current first image and in the previous image to obtain at least one group of feature point homonymous point pairs, wherein each homonymous point pair comprises a first speckle feature point contained in the current first image and a second speckle feature point contained in the previous image;
determining three-dimensional coordinates of the first speckle feature points and the second speckle feature points in a reference coordinate system according to the two-dimensional coordinates of the first speckle feature points, the two-dimensional coordinates of the second speckle feature points, the initial pose information of the current first image and the pose information of the previous image in the reference coordinate system;
determining a transformation matrix of the current first image and the previous image according to the three-dimensional coordinates of the first speckle characteristic points and the second speckle characteristic points under a reference coordinate system;
and determining pose information of the current first image under the reference coordinate system according to the transformation matrix and the pose information of each image before the current first image under the reference coordinate system.
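The transformation matrix between the current first image and the previous image, computed from the three-dimensional coordinates of matched speckle feature points, can be estimated with the standard Kabsch/SVD method for rigid alignment of two 3D point sets. This is a common technique, not necessarily the patented computation; the sketch below recovers a known rotation and translation from noise-free toy points:

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) with dst ~ R @ src + t,
    estimated via the Kabsch/SVD method on centred point sets."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

rng = np.random.default_rng(0)
src = rng.normal(size=(10, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -1.0, 2.0])
dst = src @ R_true.T + t_true
R_est, t_est = rigid_transform(src, dst)
```

Chaining such pairwise transforms, as the claim describes, yields each image's pose in the reference coordinate system.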
Optionally, the determining at least one group of mark point homonymous point pairs according to each first image, determining three-dimensional coordinates of each mark point in each homonymous point pair in a reference coordinate system, and converting the three-dimensional coordinates of each speckle feature point in the reference coordinate system into three-dimensional coordinates in a standard scale coordinate system according to the three-dimensional coordinates of each mark point in the reference coordinate system and in the standard scale coordinate system, includes:
traversing the plurality of first images, and, for the traversed current first image, matching mark points in the current first image with those in the first image to obtain mark point homonymous point pairs, wherein each homonymous point pair comprises a first mark point contained in the current first image and a second mark point contained in the first image;
determining a three-dimensional coordinate of the first mark point under a reference coordinate system according to the two-dimensional coordinate of the first mark point and pose information of the current first image under the reference coordinate system;
determining conversion pose information of the current first image according to the three-dimensional coordinates of the first mark points under the reference coordinate system and the three-dimensional coordinates of the first mark points under the standard scale coordinate system;
and converting the three-dimensional coordinates of each speckle characteristic point under the reference coordinate system into three-dimensional coordinates under the standard scale coordinate system according to the conversion pose information of the current first image.
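The conversion from the reference coordinate system to the standard scale coordinate system must recover scale as well as rotation and translation, since the standard ruler fixes the true metric length. One classical estimator for such a conversion from matched mark points is the Umeyama similarity transform; the sketch below is illustrative only and is not claimed to be the patented computation:

```python
import numpy as np

def similarity_transform(src, dst):
    """Umeyama-style similarity transform: scale s, rotation R, translation t
    with dst ~ s * R @ src + t, estimated from mark points whose coordinates
    are known in both the reference (src) and standard-scale (dst) frames."""
    n = len(src)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    cov = dst_c.T @ src_c / n
    U, S, Vt = np.linalg.svd(cov)
    d = np.sign(np.linalg.det(U) * np.linalg.det(Vt))  # reflection guard
    D = np.array([1.0, 1.0, d])
    R = (U * D) @ Vt
    var_src = (src_c ** 2).sum() / n
    s = (S * D).sum() / var_src
    t = mu_d - s * R @ mu_s
    return s, R, t

rng = np.random.default_rng(1)
src = rng.normal(size=(8, 3))
th = 0.4
R0 = np.array([[np.cos(th), -np.sin(th), 0.0],
               [np.sin(th),  np.cos(th), 0.0],
               [0.0, 0.0, 1.0]])
s0, t0 = 2.5, np.array([1.0, -2.0, 0.5])
dst = s0 * src @ R0.T + t0
s_est, R_est, t_est = similarity_transform(src, dst)
```

Applying `(s, R, t)` to every speckle feature point then yields its coordinates in the standard scale coordinate system.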
Optionally, the determining pose information of each first image in the standard scale coordinate system according to pose information of each first image in the reference coordinate system includes:
and determining pose information of the first image under a standard scale coordinate system according to the pose information of the first image under the reference coordinate system and the converted pose information of the first image.
Optionally, before the determining the target pose information of the three-dimensional camera at the current position according to the target pose information of the positioning camera and the pose conversion information between the three-dimensional camera and the positioning camera at the current position, the method includes:
acquiring a second image of a calibration point control field shot by the positioning camera at the current position and a third image of the calibration point control field shot by the three-dimensional camera at the current position, wherein the fields of view of the positioning camera and the three-dimensional camera each contain a plurality of calibration points;
determining target pose information of the positioning camera according to the two-dimensional coordinates of each calibration point in the second image, the three-dimensional coordinates of each calibration point in the calibration point control field, and the initial pose information of the positioning camera when shooting the second image at the current position;
determining target pose information of the three-dimensional camera according to the two-dimensional coordinates of each calibration point in the third image, the three-dimensional coordinates of each calibration point in the calibration point control field, and the initial pose information of the three-dimensional camera when shooting the third image at the current position;
and determining the pose conversion information between the positioning camera and the three-dimensional camera at the current position according to the target pose information of the positioning camera and the target pose information of the three-dimensional camera.
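Given both cameras' target poses in the calibration point control field as 4×4 world-from-camera transforms, the pose conversion information is simply their relative transform. A small sketch with assumed names (`T_w_loc`, `T_w_3d`), not the patented formulation:

```python
import numpy as np

def relative_pose(T_w_loc, T_w_3d):
    """Pose conversion between the rigidly connected cameras: the transform
    taking 3D-camera coordinates into positioning-camera coordinates, given
    both cameras' world-from-camera poses in the same control field."""
    return np.linalg.inv(T_w_loc) @ T_w_3d

# toy check: construct T_w_3d from a known offset and recover that offset
Rz = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
T_w_loc = np.eye(4); T_w_loc[:3, :3] = Rz; T_w_loc[:3, 3] = [0.5, 0.0, 1.0]
T_loc_3d_true = np.eye(4); T_loc_3d_true[:3, 3] = [0.2, 0.0, 0.0]
T_w_3d = T_w_loc @ T_loc_3d_true
T_rel = relative_pose(T_w_loc, T_w_3d)
```

Because the two cameras are rigidly connected, `T_rel` stays valid as long as the mechanical mounting does not change.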
In a second aspect, embodiments of the present application further provide a positioning device of a three-dimensional camera, where the device includes:
the acquisition module is used for acquiring at least one speckle characteristic point in a current image shot by the positioning camera at a current position;
the matching module is used for matching the speckle characteristic points in the current image with the speckle characteristic points in the calibration image to obtain at least one target speckle characteristic point and the three-dimensional coordinates of the target speckle characteristic point, the calibration image is obtained by shooting the positioning camera at the current position in a calibration stage, and the shooting view of the positioning camera comprises a plurality of speckles;
the determining module is used for determining target pose information of the positioning camera when the current image is shot according to the two-dimensional coordinates of each target speckle characteristic point in the current image, the three-dimensional coordinates of each target speckle characteristic point and the initial pose information of the positioning camera when the current image is shot;
the determining module is further configured to determine the target pose information of the three-dimensional camera at the current position according to the target pose information of the positioning camera and the pose conversion information between the three-dimensional camera and the positioning camera at the current position.
Optionally, the matching module is specifically configured to:
extracting marks and key descriptors of speckle characteristic points in the current image;
searching a target speckle characteristic point matched with the mark of the speckle characteristic point and the key descriptor of the speckle characteristic point in the calibration image, and taking the three-dimensional coordinate of the target speckle characteristic point in the calibration image as the three-dimensional coordinate of the target speckle characteristic point, wherein the three-dimensional coordinate of each point in the calibration image is the three-dimensional coordinate under a standard scale coordinate system.
Optionally, the determining module is specifically configured to:
acquiring a plurality of first images of a speckle control field with a standard ruler and speckles, which are shot by the positioning camera, wherein the standard ruler comprises a plurality of mark points, and each first image comprises a mark point and a speckle characteristic point;
according to each first image, determining at least one group of feature point homonymous point pairs, and determining three-dimensional coordinates of each speckle feature point in each homonymous point pair in a reference coordinate system and pose information of each first image in the reference coordinate system;
determining at least one group of mark point homonymous point pairs according to each first image, determining three-dimensional coordinates of each mark point in each homonymous point pair in the reference coordinate system, and converting the three-dimensional coordinates of each speckle feature point in the reference coordinate system into three-dimensional coordinates in the standard scale coordinate system according to the three-dimensional coordinates of each mark point in the reference coordinate system and in the standard scale coordinate system;
determining pose information of each first image under a standard scale coordinate system according to pose information of each first image under a reference coordinate system;
and obtaining and storing target three-dimensional coordinates of the speckle characteristic points and target pose information of the first images under the standard scale coordinate system according to the three-dimensional coordinates of the speckle characteristic points under the standard scale coordinate system and the pose information of the first images under the standard scale coordinate system.
Optionally, the determining module is specifically configured to:
traversing the plurality of first images, and matching speckle characteristic points in a current first image and a previous image of the current first image aiming at the traversed current first image to obtain at least one group of characteristic point homonymous point pairs, wherein the characteristic point homonymous point pairs comprise first speckle characteristic points and second speckle characteristic points, the first speckle characteristic points are contained in the current first image, and the second speckle characteristic points are contained in the previous image;
determining three-dimensional coordinates of the first speckle feature points and the second speckle feature points in a reference coordinate system according to the two-dimensional coordinates of the first speckle feature points, the two-dimensional coordinates of the second speckle feature points, the initial pose information of the current first image and the pose information of the previous image in the reference coordinate system;
determining a transformation matrix of the current first image and the previous image according to the three-dimensional coordinates of the first speckle characteristic points and the second speckle characteristic points under a reference coordinate system;
and determining pose information of the current first image under the reference coordinate system according to the transformation matrix and the pose information of each image before the current first image under the reference coordinate system.
Optionally, the determining module is specifically configured to:
traversing the plurality of first images, and, for the traversed current first image, matching mark points in the current first image with those in the first image to obtain mark point homonymous point pairs, wherein each homonymous point pair comprises a first mark point contained in the current first image and a second mark point contained in the first image;
determining a three-dimensional coordinate of the first mark point in a reference coordinate system according to the two-dimensional coordinate of the first mark point and pose information of the current first image in the reference coordinate system;
determining conversion pose information of the current first image according to the three-dimensional coordinates of the first mark points under the reference coordinate system and the three-dimensional coordinates of the first mark points under the standard scale coordinate system;
and converting the three-dimensional coordinates of each speckle characteristic point under the reference coordinate system into three-dimensional coordinates under the standard scale coordinate system according to the conversion pose information of the current first image.
Optionally, the determining module is specifically configured to:
and determining pose information of the first image under a standard scale coordinate system according to the pose information of the first image under the reference coordinate system and the converted pose information of the first image.
Optionally, the determining module is specifically configured to:
acquiring a second image of a calibration point control field shot by the positioning camera at the current position and a third image of the calibration point control field shot by the three-dimensional camera at the current position, wherein the fields of view of the positioning camera and the three-dimensional camera each contain a plurality of calibration points;
determining target pose information of the positioning camera according to the two-dimensional coordinates of each calibration point in the second image, the three-dimensional coordinates of each calibration point in the calibration point control field, and the initial pose information of the positioning camera when shooting the second image at the current position;
determining target pose information of the three-dimensional camera according to the two-dimensional coordinates of each calibration point in the third image, the three-dimensional coordinates of each calibration point in the calibration point control field, and the initial pose information of the three-dimensional camera when shooting the third image at the current position;
and determining the pose conversion information between the positioning camera and the three-dimensional camera at the current position according to the target pose information of the positioning camera and the target pose information of the three-dimensional camera.
In a third aspect, an embodiment of the present application further provides an electronic device, including a processor, a storage medium and a bus, wherein the storage medium stores program instructions executable by the processor; when the application program runs, the processor communicates with the storage medium through the bus, and the processor executes the program instructions to perform the steps of the positioning method of the three-dimensional camera according to the first aspect.
In a fourth aspect, embodiments of the present application further provide a computer-readable storage medium storing a computer program which, when read and executed, performs the steps of the positioning method of the three-dimensional camera according to the first aspect.
The beneficial effects of the present application are as follows:
According to the positioning method and apparatus for a three-dimensional camera, the electronic device, and the storage medium provided herein, the target speckle feature points and their three-dimensional coordinates in the standard scale coordinate system can be determined quickly by matching the speckle feature points in the current image shot by the positioning camera at the current position with the speckle feature points in the calibration image. Because the positions of the speckle feature points in space do not change, the pose of the positioning camera in the standard scale coordinate system when shooting the current image, determined from the three-dimensional coordinates of the matched target speckle feature points, is more accurate; the pose of the three-dimensional camera at the current position in the standard scale coordinate system is then obtained from the target pose information of the positioning camera and the pose conversion relation from the three-dimensional camera to the positioning camera. Compared with prior-art point cloud registration, the method positions the three-dimensional camera more accurately and efficiently.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the embodiments will be briefly described below, it being understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered limiting the scope, and that other related drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic view of an exemplary scenario provided in an embodiment of the present application;
fig. 2 is a flow chart of a positioning method of a three-dimensional camera according to an embodiment of the present application;
fig. 3 is a flowchart of another positioning method of a three-dimensional camera according to an embodiment of the present application;
FIG. 4 is a schematic flow chart of determining three-dimensional coordinates of speckle feature points in a calibration image according to an embodiment of the present application;
FIG. 5 is a flowchart of determining coordinates of speckle feature points in a reference coordinate system according to an embodiment of the present application;
FIG. 6 is a schematic flow chart of determining coordinates of speckle feature points in a standard scale coordinate system according to an embodiment of the present application;
fig. 7 is a schematic flow chart of determining pose conversion information of a three-dimensional camera and a positioning camera according to an embodiment of the present application;
Fig. 8 is a schematic device diagram of a positioning method of a three-dimensional camera according to an embodiment of the present application;
fig. 9 is a block diagram of an electronic device according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present application more clear, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application, and it should be understood that the accompanying drawings in the present application are only for the purpose of illustration and description, and are not intended to limit the protection scope of the present application. In addition, it should be understood that the schematic drawings are not drawn to scale. A flowchart, as used in this application, illustrates operations implemented according to some embodiments of the present application. It should be understood that the operations of the flow diagrams may be implemented out of order and that steps without logical context may be performed in reverse order or concurrently. Moreover, one or more other operations may be added to the flow diagrams and one or more operations may be removed from the flow diagrams as directed by those skilled in the art.
In addition, the described embodiments are only some, but not all, of the embodiments of the present application. The components of the embodiments of the present application, which are generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, as provided in the accompanying drawings, is not intended to limit the scope of the application, as claimed, but is merely representative of selected embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present application without making any inventive effort, are intended to be within the scope of the present application.
It should be noted that the term "comprising" will be used in the embodiments of the present application to indicate the presence of the features stated hereinafter, but not to exclude the addition of other features.
Fig. 1 is a schematic view of an exemplary scenario provided in an embodiment of the present application. As shown in Fig. 1, the scenario includes a three-dimensional (3D) camera, a positioning camera, a laser speckle projector, a standard ruler, an upper computer, etc.; the positioning camera and the 3D camera may be rigidly connected. The laser speckle projector is kept static relative to the wall surface and can project speckle texture patterns onto the fixed wall surface (or a three-dimensional surface of any shape); all the speckle texture patterns form a speckle control field. The laser speckle projector may also be a projector of other patterns, which is not limited in this application.
Optionally, the 3D camera is mainly used for three-dimensional reconstruction of the surface of the object to be measured; the positioning camera is used for positioning the 3D camera; the laser speckle projector is used for generating the speckle control field. The standard ruler is used for calibrating the speckle control field; a plurality of mark points are arranged on the standard ruler, in particular at its two ends, and the true length of the ruler can be determined from the coordinates of the mark points at the two ends. The standard ruler has an independent coordinate system in which the coordinates of the mark points at its two ends are defined. The upper computer can be used to control the communication between the 3D camera and the positioning camera, in particular to control their movement, and to acquire their parameters and the images they shoot. The upper computer may be an electronic device, for example a terminal device with computing and display capabilities such as a mobile phone, tablet computer, notebook computer, palm computer or desktop computer, or it may be a server. The method can be applied to application programs in terminal devices, such as a mobile phone APP or an application system on a computer.
Optionally, in the embodiment of the present application, in the calibration stage, the 3D camera and the positioning camera may both shoot, at different positions, the speckle control field containing the standard ruler and the speckle, and the three-dimensional coordinates of the speckle feature points in the calibration images are obtained from the captured images by the method provided in the embodiment of the present application. Also in the calibration stage, the 3D camera and the positioning camera can shoot a control field with fixed mark points at different positions, and the pose conversion information between the 3D camera and the positioning camera at the different positions is determined from the captured images. Then, when the 3D camera is positioned in real time, the positioning camera shoots the speckle control field in real time; from the speckle feature points captured in real time, the three-dimensional coordinates of the speckle feature points in the calibration image obtained in the calibration stage, and the pose conversion information between the 3D camera and the positioning camera, the method in the embodiment of the present application accurately positions the 3D camera in real time, so that the point cloud data scanned by the 3D camera can be converted into three-dimensional data in a unified coordinate system, thereby completing the three-dimensional reconstruction of the target based on the 3D camera scanning.
The following explains in detail a specific implementation procedure of the positioning of the three-dimensional camera provided in the embodiment of the present application.
Fig. 2 is a flow chart of a positioning method of a three-dimensional camera according to an embodiment of the present application, where the execution subject of the method is the above-mentioned upper computer. As shown in fig. 2, the method includes:
S101, acquiring at least one speckle feature point in a current image shot by the positioning camera at a current position.
Optionally, when the 3D camera is positioned in real time, the shooting direction of the positioning camera is aligned with the wall surface onto which the speckle pattern is projected; that is, the positioning camera shoots the speckle pattern in the speckle control field to obtain a current image, and the captured current image contains at least one speckle feature point. As can be seen from the above, the speckle control field is obtained by projecting the laser speckle projector onto the wall surface, and the laser speckle projector and the wall surface are kept relatively static, so the projected speckle pattern does not change with time. The positioning camera can therefore capture a current image containing the speckle pattern, in which a plurality of speckle feature points are distributed and whose positions in space do not change. The feature points of the speckle pattern in the current image captured by the positioning camera are extracted by the Scale-Invariant Feature Transform (SIFT), so as to obtain a plurality of speckle feature points.
S102, matching the speckle characteristic points in the current image with the speckle characteristic points in the calibration image to obtain at least one target speckle characteristic point and three-dimensional coordinates of the target speckle characteristic point.
The calibration image is obtained by shooting with the positioning camera at the current position in the calibration stage, and the shooting field of view of the positioning camera contains a plurality of speckles.
Optionally, during the calibration stage, the three-dimensional coordinates of each speckle feature point in the calibration image shot by the positioning camera at the current position, together with the keypoint descriptor of each speckle feature point, can be stored. During real-time positioning, the target speckle feature points and their three-dimensional coordinates can then be obtained by matching the speckle feature points in the current image shot in real time by the positioning camera against the speckle feature points in the calibration image shot at the current position during the calibration stage. A target speckle feature point refers to a speckle feature point in the current image that is the same as one in the calibration image, and the three-dimensional coordinates of the target speckle feature point refer to its three-dimensional coordinates in the standard scale coordinate system.
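As a sketch of the descriptor-matching idea only (not the patent's actual matching code; real SIFT descriptors are 128-dimensional, and the toy 4-dimensional descriptors below are invented for illustration), nearest-neighbour matching with Lowe's ratio test might look like:

```python
import numpy as np

def match_descriptors(cur_desc, calib_desc, ratio=0.8):
    """Nearest-neighbour matching with Lowe's ratio test.
    Returns (index_in_current, index_in_calibration) pairs."""
    matches = []
    for i, d in enumerate(cur_desc):
        dists = np.linalg.norm(calib_desc - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:   # keep unambiguous matches only
            matches.append((i, int(best)))
    return matches
```

The ratio test discards ambiguous matches whose nearest and second-nearest stored descriptors are similarly distant, which is useful in a dense speckle field.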
S103, obtaining target pose information of the positioning camera when the current image is shot according to the two-dimensional coordinates of each target speckle characteristic point in the current image, the three-dimensional coordinates of each target speckle characteristic point and the initial pose information of the positioning camera when the current image is shot.
The target pose information of the positioning camera when the current image is shot refers to pose information of the positioning camera under a standard scale coordinate system.
Specifically, suppose the two-dimensional coordinates of the target speckle feature points in the current image are PF_i(x_i, y_i), i = 1, 2, 3, …, M5, the three-dimensional coordinates of the target speckle feature points obtained in the above step S102 are QF_i(X_i, Y_i, Z_i), and the initial pose of the positioning camera when shooting the current image is X_s, Y_s, Z_s, φ, ω, κ, where X_s, Y_s, Z_s are the line elements of the positioning camera, i.e. the coordinates of the positioning camera, and φ, ω, κ are the angle elements of the positioning camera. Then, for each target feature point, the reprojection of the point onto the current image is given by the collinearity equations, formula (one):

x̂_i = −f_1 · [a_1(X_i − X_s) + b_1(Y_i − Y_s) + c_1(Z_i − Z_s)] / [a_3(X_i − X_s) + b_3(Y_i − Y_s) + c_3(Z_i − Z_s)]
ŷ_i = −f_1 · [a_2(X_i − X_s) + b_2(Y_i − Y_s) + c_2(Z_i − Z_s)] / [a_3(X_i − X_s) + b_3(Y_i − Y_s) + c_3(Z_i − Z_s)]    formula (one)

where (x̂_i, ŷ_i) are the reprojection coordinates of the target speckle feature point QF_i(X_i, Y_i, Z_i) on the positioning camera image, a_j, b_j, c_j (j = 1, 2, 3) are the nine direction cosines composed from φ, ω, κ, and f_1 is the internal parameter of the positioning camera. The optimal pose information of the positioning camera when shooting the current image is then obtained by the least square method from the linearized error equations, formula (two), and taken as the target pose information of the positioning camera when shooting the current image:

v_x = a_11·ΔX_s + a_12·ΔY_s + a_13·ΔZ_s + a_14·Δφ + a_15·Δω + a_16·Δκ − (x_i − x̂_i)
v_y = a_21·ΔX_s + a_22·ΔY_s + a_23·ΔZ_s + a_24·Δφ + a_25·Δω + a_26·Δκ − (y_i − ŷ_i)    formula (two)

where x_i, y_i are the two-dimensional coordinates of the target feature point in the current image, v_x, v_y are the corrections to x̂_i and ŷ_i, a_ij (i = 1, 2; j = 1, 2, 3, 4, 5, 6) are the partial derivatives of the collinearity equations with respect to X_s, Y_s, Z_s, φ, ω, κ, and ΔX_s, ΔY_s, ΔZ_s, Δφ, Δω, Δκ are the corrections to X_s, Y_s, Z_s, φ, ω, κ. The optimal pose information of the positioning camera, X_se, Y_se, Z_se, φ_e, ω_e, κ_e, can be calculated by the least square optimization method; the rotation matrix formed by the target pose information is R_e, a 3×3 matrix, and the translation vector is t_e, a 3×1 column vector.
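The least-squares pose estimation of S103 can be sketched as a Gauss-Newton loop that minimizes reprojection error. The simple pinhole projection, the Euler-angle convention, and all numeric values below are assumptions for illustration; they do not reproduce the patent's exact collinearity formulation:

```python
import numpy as np

def rot(phi, omega, kappa):
    """Rotation from three angle elements (one assumed Euler convention)."""
    cp, sp = np.cos(phi), np.sin(phi)
    co, so = np.cos(omega), np.sin(omega)
    ck, sk = np.cos(kappa), np.sin(kappa)
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, co, -so], [0, so, co]])
    Rz = np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])
    return Ry @ Rx @ Rz

def project(pose, pts, f=1000.0):
    """Project world points with a simple pinhole model (an assumption)."""
    S, angles = np.asarray(pose[:3]), pose[3:]
    pc = (pts - S) @ rot(*angles)          # world -> camera coordinates
    return f * pc[:, :2] / pc[:, 2:3]

def refine_pose(pose0, pts3d, obs2d, iters=15, eps=1e-6):
    """Gauss-Newton refinement of (Xs, Ys, Zs, phi, omega, kappa)."""
    pose = np.asarray(pose0, dtype=float)
    for _ in range(iters):
        r = (obs2d - project(pose, pts3d)).ravel()   # reprojection residuals
        J = np.zeros((r.size, 6))
        for k in range(6):                           # numeric Jacobian of r
            d = np.zeros(6)
            d[k] = eps
            J[:, k] = ((obs2d - project(pose + d, pts3d)).ravel() - r) / eps
        pose = pose + np.linalg.lstsq(J, -r, rcond=None)[0]
    return pose
```

With noise-free synthetic observations and a starting pose close to the truth, the loop converges to the pose that zeroes the residuals, mirroring the role of the error equations above.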
S104, determining target pose information of the three-dimensional camera at the current position according to the target pose information of the positioning camera and pose conversion information of the three-dimensional camera and the positioning camera at the current position.
The pose conversion information of the three-dimensional camera and the positioning camera at the current position is determined, in the calibration stage, from images of a control field with fixed mark points shot at the current position; it indicates the external parameters, i.e. the pose information, of the three-dimensional camera in the positioning camera coordinate system. The target pose information of the three-dimensional camera at the current position refers to the pose information of the three-dimensional camera in the standard scale coordinate system.
For example, at the current position, the pose conversion information from the three-dimensional camera to the positioning camera may be R_21, t_21, which can specifically be obtained by the following formula (seventeen); the target pose information of the positioning camera in the standard scale coordinate system, R_e, t_e, is calculated in the above S103; the target pose R, t of the three-dimensional camera at the current position is then obtained by the following formula (three) and formula (four).
R = R_e · R_21    formula (three)
t = R_e · t_21 + t_e    formula (four)
where R_21 is the rotation matrix of the three-dimensional camera in the positioning camera coordinate system, and t_21 is the translation vector from the three-dimensional camera to the positioning camera in that coordinate system; both are obtained from the pose information of the three-dimensional camera and the positioning camera at the current position in the calibration stage.
Optionally, after the target pose information of the three-dimensional camera at the current position is determined, the point cloud data scanned by the three-dimensional camera can be converted into the three-dimensional coordinates under the standard scale coordinate system according to the determined target pose information, so that the three-dimensional reconstruction of the three-dimensional camera is realized.
Specifically, if the coordinates of an arbitrary point scanned by the three-dimensional camera at the current position are P(X, Y, Z) in the three-dimensional camera coordinate system, the coordinates Q(X*, Y*, Z*) of the point in the standard scale coordinate system are obtained by the following formula (five):

Q = R · P + t    formula (five)
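The composition in formulas (three) and (four) and the point mapping in formula (five) can be checked numerically; the rotation angles and translations below are made-up values used only to verify the algebra:

```python
import numpy as np

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# Made-up example transforms:
R21, t21 = rot_z(0.3), np.array([0.1, 0.0, 0.5])   # 3D camera -> positioning camera
Re,  te  = rot_z(-0.7), np.array([2.0, 1.0, 0.0])  # positioning camera -> scale frame

# Formulas (three)/(four): pose of the 3D camera in the standard scale coordinate system.
R = Re @ R21
t = Re @ t21 + te

# Formula (five): map a scanned point into the standard scale coordinate system.
P = np.array([1.0, 2.0, 3.0])
Q = R @ P + t

# The same point mapped in two explicit steps (3D camera -> positioning camera -> scale).
Q_two_step = Re @ (R21 @ P + t21) + te
```

The single composed transform and the two-step mapping agree, which is exactly why formulas (three) and (four) have the stated form.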
In this embodiment, by matching the speckle feature points in the current image shot by the positioning camera at the current position with the speckle feature points in the calibration image, the target speckle feature points and their three-dimensional coordinates in the standard scale coordinate system can be determined rapidly. Because the positions of the speckle feature points in space do not change, the pose of the positioning camera in the standard scale coordinate system when shooting the current image, obtained from the matched three-dimensional coordinates of each target speckle feature point, is more accurate; the pose of the three-dimensional camera at the current position in the standard scale coordinate system is then obtained from the target pose information of the positioning camera and the pose conversion relation from the three-dimensional camera to the positioning camera. Compared with the point cloud registration method in the prior art, the positioning of the three-dimensional camera is more accurate and more efficient.
Fig. 3 is a flow chart of another positioning method of a three-dimensional camera according to an embodiment of the present application. As shown in fig. 3, matching the speckle feature points in the current image with the speckle feature points in the calibration image in S102 to obtain at least one target speckle feature point and the three-dimensional coordinates of the target speckle feature point may include:
S201, extracting the identifiers and keypoint descriptors of the speckle feature points in the current image.
Optionally, each speckle feature point has an identifier and a keypoint descriptor, where the identifier of the speckle feature point may be, for example, its ID number, and the keypoint descriptor may include, for example, information such as the color of the speckle feature point and its specific features.
S202, searching the calibration image for a target speckle feature point whose identifier and keypoint descriptor match those of the speckle feature point, and taking the three-dimensional coordinates of the target speckle feature point in the calibration image as the three-dimensional coordinates of the target speckle feature point.
The three-dimensional coordinates of each point in the calibration image are three-dimensional coordinates under a standard scale coordinate system.
Optionally, if a speckle feature point whose identifier and keypoint descriptor are consistent with those of a speckle feature point in the current image is found in the calibration image, the speckle feature point in the current image is taken as a target speckle feature point, and the three-dimensional coordinates of the target speckle feature point are obtained from the calibration image.
In this embodiment, by matching the current image captured in real time by the positioning camera at the current position against the speckle feature points in the calibration image captured at the same position, the three-dimensional coordinates of the target speckle feature points in the standard scale coordinate system, i.e. the real three-dimensional coordinates in the world coordinate system, can be determined quickly.
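The identifier-plus-descriptor lookup of S201 and S202 can be sketched as a table keyed by feature ID, with the descriptor used as a consistency check. The IDs, descriptors, coordinates, and the tolerance below are all hypothetical stand-ins for the calibration-stage store:

```python
import numpy as np

# Hypothetical calibration-stage store: feature ID -> (descriptor, 3D coords in scale frame)
calib_store = {
    7:  (np.array([0.1, 0.9, 0.3]), np.array([1.20, 0.50, 3.10])),
    12: (np.array([0.8, 0.2, 0.4]), np.array([0.75, 1.40, 2.95])),
}

def lookup_target_point(feat_id, descriptor, store, tol=0.05):
    """Return the stored 3D coordinates if the ID exists and descriptors agree."""
    if feat_id not in store:
        return None
    stored_desc, xyz = store[feat_id]
    if np.linalg.norm(stored_desc - descriptor) > tol:
        return None  # same ID but descriptor mismatch: not a reliable match
    return xyz
```

Requiring both the identifier and the descriptor to agree guards against reusing an ID whose appearance has changed between calibration and real-time shooting.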
Fig. 4 is a schematic flow chart of determining the three-dimensional coordinates of speckle feature points in a calibration image according to an embodiment of the present application. As shown in fig. 4, before the speckle feature points in the current image are matched with the speckle feature points in the calibration image in S102, the method may include:
S301, acquiring a plurality of first images of the speckle control field with the standard ruler and speckle, shot by the positioning camera.
The standard ruler comprises a plurality of mark points; specifically, mark points can be arranged at the two ends of the standard ruler. In the calibration stage, the speckle control field can be generated by the laser speckle projector, and the standard ruler is placed in the speckle control field, forming a speckle control field with the standard ruler and speckle; this also avoids having to recalibrate the speckle control field when the color of a mark point changes or a mark point falls off.
Optionally, the positioning camera may shoot the speckle control field with the standard ruler and speckle at different pre-calibrated positions to obtain a plurality of first images, where each first image contains mark points and speckle feature points, and each first image refers to an image shot by the positioning camera at one calibrated position.
S302, determining at least one group of feature point homonymous point pairs according to the first images, and determining the three-dimensional coordinates of each speckle feature point in each feature point homonymous point pair in the reference coordinate system and the pose information of each first image in the reference coordinate system.
Optionally, a preset number of mark points of the standard ruler need to be captured in a first image, where the preset number may be more than three; the same mark points as in a given first image should also be captured at least once in the other, subsequent first images. Speckle feature points and mark points are extracted using the SIFT feature point extraction algorithm and a mark point extraction algorithm, and the mark points are numbered and identified.
A feature point homonymous point pair refers to speckle feature points that have homonymous points in two first images.
The reference coordinate system refers to the positioning camera coordinate system when the positioning camera shoots the first of the first images. The pose information of the j-th first image in the reference coordinate system is X_{s,j}, Y_{s,j}, Z_{s,j}, φ_j, ω_j, κ_j (j = 2, 3, …, N); the corresponding transformation matrix is T_j, a 3×4 matrix, which can be divided into a rotation matrix R_j (a 3×3 matrix) and a translation vector t_j (a 3×1 vector).
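The split of T_j into R_j and t_j can be sketched by assembling R_j from the three angle elements and stacking it with the line elements. The Euler-angle convention below is an assumption (the patent does not specify one), chosen only to illustrate the 3×4 structure:

```python
import numpy as np

def rotation_from_angles(phi, omega, kappa):
    """Direction cosine matrix from angle elements (Y-X-Z convention, an assumption)."""
    cp, sp = np.cos(phi), np.sin(phi)
    co, so = np.cos(omega), np.sin(omega)
    ck, sk = np.cos(kappa), np.sin(kappa)
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, co, -so], [0, so, co]])
    Rz = np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])
    return Ry @ Rx @ Rz

def pose_matrix(line_elems, angle_elems):
    """Assemble the 3x4 matrix T_j = [R_j | t_j]."""
    R = rotation_from_angles(*angle_elems)
    return np.hstack([R, np.asarray(line_elems, float).reshape(3, 1)])

T = pose_matrix([0.5, -0.2, 1.0], [0.1, 0.05, -0.3])
p = T @ np.array([1.0, 2.0, 3.0, 1.0])  # apply T_j to a homogeneous point
```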
S303, determining at least one group of mark point homonymous point pairs according to the first images, determining the three-dimensional coordinates of each mark point in each mark point homonymous point pair in the reference coordinate system, and converting the three-dimensional coordinates of each speckle feature point in the reference coordinate system into three-dimensional coordinates in the standard scale coordinate system according to the three-dimensional coordinates of each mark point in the reference coordinate system and in the standard scale coordinate system.
Optionally, the mark points refer to the mark points on the standard ruler in the speckle control field. A mark point homonymous point pair refers to mark points that have homonymous points in two first images.
Since the mark points are arranged on the standard ruler, the three-dimensional coordinates of each mark point in the standard scale coordinate system can be obtained in advance, i.e. they are known. The three-dimensional coordinates of each speckle feature point determined in S302 in the reference coordinate system can thus be converted into three-dimensional coordinates in the standard scale coordinate system.
S304, according to the pose information of each first image under the reference coordinate system, determining the pose information of each first image under the standard scale coordinate system.
Optionally, the pose information of each first image determined in S302 in the reference coordinate system is converted into pose information in the standard scale coordinate system.
S305, obtaining and storing target three-dimensional coordinates of each speckle characteristic point and target pose information of each first image under the standard scale coordinate system according to the three-dimensional coordinates of each speckle characteristic point under the standard scale coordinate system and the pose information of each first image under the standard scale coordinate system.
Optionally, the target three-dimensional coordinates of each speckle feature point in each first image stored herein refer to the three-dimensional coordinates of the speckle feature point in the calibration image in S102, which can be specifically optimized by using the following formula (fifteen) and formula (sixteen).
In this embodiment, in the calibration stage, three-dimensional coordinates of each speckle characteristic point in each image shot by the positioning camera under the standard scale coordinate system are obtained according to the method, so that the real-time pose of the positioning camera during real-time positioning can be accurately obtained by using the three-dimensional coordinates of each speckle characteristic point in each image in the subsequent real-time positioning process.
Fig. 5 is a schematic flow chart of determining the coordinates of speckle feature points in the reference coordinate system according to an embodiment of the present application. As shown in fig. 5, in S302, determining at least one group of feature point homonymous point pairs according to the first images, and determining the three-dimensional coordinates of the speckle feature points in the reference coordinate system and the pose information of each first image in the reference coordinate system, may include:
S401, traversing the plurality of first images, and, for the traversed current first image, matching the speckle feature points in the current first image and the previous image to obtain at least one group of feature point homonymous point pairs.
A feature point homonymous point pair can include a first speckle feature point and a second speckle feature point; the first speckle feature point is contained in the current first image, and the second speckle feature point in the previous image.
Specifically, the j-th (current) first image is matched against the (j−1)-th (previous) first image by speckle feature points, obtaining the feature point homonymous point pairs {(x_{i,j}, y_{i,j}), (x_{i,j−1}, y_{i,j−1})}, where (x_{i,j}, y_{i,j}) is the first speckle feature point and (x_{i,j−1}, y_{i,j−1}) is the second speckle feature point.
S402, determining three-dimensional coordinates of the first speckle feature point and the second speckle feature point in a reference coordinate system according to the two-dimensional coordinates of the first speckle feature point, the two-dimensional coordinates of the second speckle feature point, the initial pose information of the current first image and the pose information of the previous image in the reference coordinate system.
The two-dimensional coordinates of the first speckle feature point refer to its coordinates in the current image, i.e. (x_{i,j}, y_{i,j}); the two-dimensional coordinates of the second speckle feature point refer to its coordinates in the previous image, i.e. (x_{i,j−1}, y_{i,j−1}); the initial pose information of the current first image refers to the initial pose of the current first image relative to the first image, i.e. the initial pose of the current first image in the reference coordinate system. Specifically, the three-dimensional coordinates of the second speckle feature point in the reference coordinate system are obtained through the collinearity condition, formula (six), and the three-dimensional coordinates of the first speckle feature point through the collinearity condition, formula (seven):

x_{i,j−1} = −f_1 · [a_1(X_{i,j−1} − X_{s,j−1}) + b_1(Y_{i,j−1} − Y_{s,j−1}) + c_1(Z_{i,j−1} − Z_{s,j−1})] / [a_3(X_{i,j−1} − X_{s,j−1}) + b_3(Y_{i,j−1} − Y_{s,j−1}) + c_3(Z_{i,j−1} − Z_{s,j−1})]
y_{i,j−1} = −f_1 · [a_2(X_{i,j−1} − X_{s,j−1}) + b_2(Y_{i,j−1} − Y_{s,j−1}) + c_2(Z_{i,j−1} − Z_{s,j−1})] / [a_3(X_{i,j−1} − X_{s,j−1}) + b_3(Y_{i,j−1} − Y_{s,j−1}) + c_3(Z_{i,j−1} − Z_{s,j−1})]    formula (six)

where the direction cosines a_k, b_k, c_k (k = 1, 2, 3) and the line elements X_{s,j−1}, Y_{s,j−1}, Z_{s,j−1} come from the known pose matrix T_{j−1} of the previous image in the reference coordinate system, (x_{i,j−1}, y_{i,j−1}) are the two-dimensional coordinates of the second speckle feature point in the previous image, and f_1 refers to the internal parameter of the positioning camera; solving formula (six) jointly with formula (seven) for the homonymous point pair gives the three-dimensional coordinates (X_{i,j−1}, Y_{i,j−1}, Z_{i,j−1}) of the second speckle feature point.

x_{i,j} = −f_1 · [a'_1(X_{i,j} − X_{s,j}) + b'_1(Y_{i,j} − Y_{s,j}) + c'_1(Z_{i,j} − Z_{s,j})] / [a'_3(X_{i,j} − X_{s,j}) + b'_3(Y_{i,j} − Y_{s,j}) + c'_3(Z_{i,j} − Z_{s,j})]
y_{i,j} = −f_1 · [a'_2(X_{i,j} − X_{s,j}) + b'_2(Y_{i,j} − Y_{s,j}) + c'_2(Z_{i,j} − Z_{s,j})] / [a'_3(X_{i,j} − X_{s,j}) + b'_3(Y_{i,j} − Y_{s,j}) + c'_3(Z_{i,j} − Z_{s,j})]    formula (seven)

where the direction cosines a'_k, b'_k, c'_k and the line elements X_{s,j}, Y_{s,j}, Z_{s,j} come from the initial pose matrix T_j of the current image, and (x_{i,j}, y_{i,j}) are the two-dimensional coordinates of the first speckle feature point in the current image, giving the three-dimensional coordinates (X_{i,j}, Y_{i,j}, Z_{i,j}) of the first speckle feature point.
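A generic way to recover the three-dimensional coordinates of a homonymous point pair from two known image poses is linear (DLT) triangulation. This is a sketch under a simple pinhole model with assumed intrinsics, not the patent's exact formulas (six) and (seven):

```python
import numpy as np

def triangulate(x1, x2, P1, P2):
    """Linear (DLT) triangulation of one point seen in two views.
    x1, x2: image coordinates; P1, P2: 3x4 projection matrices."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Synthetic two-view setup (assumed intrinsics: f = 800, zero principal point).
f = 800.0
K = np.diag([f, f, 1.0])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])           # previous image
a = 0.1
R2 = np.array([[np.cos(a), 0, np.sin(a)], [0, 1, 0], [-np.sin(a), 0, np.cos(a)]])
P2 = K @ np.hstack([R2, np.array([[-1.0], [0.0], [0.0]])])  # current image

X_true = np.array([0.5, -0.2, 5.0])
x1h = P1 @ np.append(X_true, 1.0); x1 = x1h[:2] / x1h[2]
x2h = P2 @ np.append(X_true, 1.0); x2 = x2h[:2] / x2h[2]
X_rec = triangulate(x1, x2, P1, P2)
```

The point is recovered exactly from noise-free observations because the two collinearity constraints intersect in a single spatial point.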
S403, determining a transformation matrix of the current first image and the previous image according to the three-dimensional coordinates of each first speckle characteristic point and each second speckle characteristic point under the reference coordinate system.
The transformation matrix of the current first image and the previous image can be used for converting pose information of the current first image into pose information under a coordinate system of the previous image.
Specifically, the three-dimensional coordinates of the first and second speckle feature points obtained in S402 in the reference coordinate system are substituted into the following condition equation, formula (eight):

(X_{i,j−1}, Y_{i,j−1}, Z_{i,j−1})ᵀ = ΔR_{j,j−1} · (X_{i,j}, Y_{i,j}, Z_{i,j})ᵀ + Δt_{j,j−1}    formula (eight)

From the condition equation (eight) and the least square principle, the transformation matrix ΔT_{j,j−1} = [ΔR_{j,j−1} | Δt_{j,j−1}] of the j-th image (the current first image) relative to the (j−1)-th image (the previous image), i.e. the transformation matrix of the current first image and the previous image, can be calculated.
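Estimating a rigid transform between two sets of corresponding three-dimensional coordinates, as in S403, is a classical least-squares problem. One standard solution (the Kabsch algorithm, via SVD) is sketched below as an illustration, not as the patent's own derivation:

```python
import numpy as np

def rigid_fit(P_src, P_dst):
    """Least-squares R, t with P_dst[i] ≈ R @ P_src[i] + t (Kabsch/SVD).
    P_src, P_dst: n x 3 arrays of corresponding points."""
    cs, cd = P_src.mean(axis=0), P_dst.mean(axis=0)
    H = (P_src - cs).T @ (P_dst - cd)          # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard reflections
    R = Vt.T @ D @ U.T
    t = cd - R @ cs
    return R, t
```

Centering removes the translation, the SVD gives the optimal rotation, and the determinant guard prevents a reflection from being returned for degenerate inputs.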
S404, determining the pose information of the current first image in the reference coordinate system according to the transformation matrix of the current first image and the previous image, and the transformation matrices of each earlier image and its previous image.
If the current first image is the second first image, its pose information in the reference coordinate system is ΔT_{2,1} = T_2, because the reference coordinate system is the coordinate system of the first image. The pose information of any other j-th first image in the reference coordinate system can be obtained according to formula (nine):

T_j = T_{j−1} · ΔT_{j,j−1} = ΔT_{2,1} · ΔT_{3,2} · … · ΔT_{j,j−1}    formula (nine)

That is, the pose information of the current first image is converted step by step into the reference coordinate system. For example, if the current first image is the 4th first image, its pose information is first converted into the 3rd first image's coordinate system, then into the 2nd first image's coordinate system, and finally into the first image's coordinate system, yielding the pose information in the reference coordinate system.
where ΔT_{j,j−1} refers to the transformation matrix of the j-th image (the current first image) relative to the (j−1)-th image (the previous image).
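The step-by-step conversion described in S404 amounts to chaining homogeneous transforms; the rotations and translations below are made-up values used only to verify that the one-step composed pose equals the chained mapping:

```python
import numpy as np

def to_h(R, t):
    """Pack R (3x3) and t (3,) into a 4x4 homogeneous transform."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

# Made-up relative transforms ΔT_{j,j-1} (image j expressed in image j-1's frame).
dT = [to_h(rot_x(0.10), [0.5, 0.0, 0.0]),   # ΔT_{2,1}
      to_h(rot_x(0.20), [0.4, 0.1, 0.0]),   # ΔT_{3,2}
      to_h(rot_x(-0.10), [0.6, 0.0, 0.2])]  # ΔT_{4,3}

# T_1 = I (the reference frame is the first image); T_j = T_{j-1} @ ΔT_{j,j-1}.
poses = [np.eye(4)]
for d in dT:
    poses.append(poses[-1] @ d)

# Mapping a point from image 4's frame to the reference frame in one step
# matches stepping through frames 4 -> 3 -> 2 -> 1.
p4 = np.array([1.0, 2.0, 3.0, 1.0])
one_step = poses[3] @ p4
chained = dT[0] @ (dT[1] @ (dT[2] @ p4))
```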
Fig. 6 is a schematic flow chart of determining the coordinates of speckle feature points in the standard scale coordinate system according to an embodiment of the present application. As shown in fig. 6, in S303, determining at least one group of mark point homonymous point pairs according to the first images, determining the three-dimensional coordinates of each mark point of each pair in the reference coordinate system, and converting the three-dimensional coordinates of each speckle feature point in the reference coordinate system into three-dimensional coordinates in the standard scale coordinate system according to the three-dimensional coordinates of each mark point in the reference coordinate system and in the standard scale coordinate system, may include:
S501, traversing the plurality of first images, and, for the traversed current first image, matching the mark points in the current first image and the first image to obtain the mark point homonymous point pairs.
A mark point homonymous point pair can include a first mark point and a second mark point; the first mark point is contained in the current first image, and the second mark point in the first image.
Specifically, the j-th (current) first image is matched against the first image by mark points, obtaining the mark point homonymous point pairs {(x_{i,j}, y_{i,j}), (x_{i,1}, y_{i,1})}, where (x_{i,j}, y_{i,j}) is the first mark point and (x_{i,1}, y_{i,1}) is the second mark point.
S502, determining the three-dimensional coordinate of the first mark point under the reference coordinate system according to the two-dimensional coordinate of the first mark point and the pose information of the current first image under the reference coordinate system.
The two-dimensional coordinates of the first mark point refer to its position in the current image, i.e. (x_{i,j}, y_{i,j}). With the pose information of the current first image in the reference coordinate system obtained in S404, the two-dimensional coordinates of the first mark point and that pose information are substituted into the following formula (ten) to obtain the three-dimensional coordinates of the first mark point in the reference coordinate system, which may be written as PM_i(X_i, Y_i, Z_i), i = 1, 2, 3, …, M4:

x_{i,j} = −f_1 · [a_1(X_i − X_{s,j}) + b_1(Y_i − Y_{s,j}) + c_1(Z_i − Z_{s,j})] / [a_3(X_i − X_{s,j}) + b_3(Y_i − Y_{s,j}) + c_3(Z_i − Z_{s,j})]
y_{i,j} = −f_1 · [a_2(X_i − X_{s,j}) + b_2(Y_i − Y_{s,j}) + c_2(Z_i − Z_{s,j})] / [a_3(X_i − X_{s,j}) + b_3(Y_i − Y_{s,j}) + c_3(Z_i − Z_{s,j})]    formula (ten)

where a_k, b_k, c_k (k = 1, 2, 3) are the elements of the direction cosine matrix composed from φ_j, ω_j, κ_j, f_1 and the other internal references of the positioning camera are known, and X_{s,j}, Y_{s,j}, Z_{s,j}, φ_j, ω_j, κ_j (j = 2, 3, …, N) refer to the pose information of each first image in the reference coordinate system; solving the collinearity equations of the images containing the same mark point by the least square principle gives the three-dimensional coordinates of the mark point.
S503, determining the conversion pose information of the current first image according to the three-dimensional coordinates of each first mark point under the reference coordinate system and the three-dimensional coordinates of each first mark point under the standard scale coordinate system.
The three-dimensional coordinates of the first mark point in the standard scale coordinate system are known at the time of calibration and may be written as QM_i(X_i, Y_i, Z_i), i = 1, 2, 3, …, M4.
Optionally, because the three-dimensional coordinates of the first mark points in the reference coordinate system differ from those in the standard scale coordinate system by a rotation, a translation, and a scaling, the conversion pose information of the current first image can be obtained according to the principle of spatial similarity transformation and the least square principle. The conversion pose information can include a rotation matrix R_s, a translation vector t_s, and a scaling coefficient λ (a constant greater than 0), and can specifically be calculated according to the following formula (eleven):

QM_i = λ · R_s · PM_i + t_s    formula (eleven)

where PM_i is the three-dimensional coordinates of the first mark point in the reference coordinate system and QM_i is its three-dimensional coordinates in the standard scale coordinate system.
S504, converting the three-dimensional coordinates of each speckle characteristic point under the reference coordinate system into the three-dimensional coordinates under the standard scale coordinate system according to the conversion pose information of the current first image.
Specifically, the three-dimensional coordinates PF_i of each speckle feature point in the reference coordinate system are substituted into the following formula (twelve):

QF_i = λ · R_s · PF_i + t_s    formula (twelve)

where QF_i is the three-dimensional coordinates of each speckle feature point in the standard scale coordinate system, and the rotation matrix R_s, the translation vector t_s, and the scaling coefficient λ are those calculated in S503.
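Solving for λ, R_s, t_s from mark-point correspondences, as described in S503, is a spatial similarity fit. A standard SVD-based (Umeyama-style) solution is sketched below as an illustration of the principle, not as the patent's own computation:

```python
import numpy as np

def similarity_fit(P, Q):
    """Least-squares scale lam, rotation R, translation t with
    Q[i] ≈ lam * R @ P[i] + t; P, Q are n x 3 arrays of correspondences."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    Pc, Qc = P - cp, Q - cq
    U, S, Vt = np.linalg.svd(Pc.T @ Qc)        # cross-covariance SVD
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard reflections
    R = Vt.T @ D @ U.T
    lam = np.trace(np.diag(S) @ D) / (Pc ** 2).sum()  # closed-form scale
    t = cq - lam * R @ cp
    return lam, R, t
```

Once fitted, applying `lam * R @ p + t` to every speckle feature point is exactly the operation of formula (twelve).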
In the step S304, determining pose information of each first image in the standard scale coordinate system according to pose information of each first image in the reference coordinate system may include:
optionally, pose information of the first image under the standard scale coordinate system is determined according to pose information of the first image under the reference coordinate system and converted pose information of the first image.
The pose information of the first image in the reference coordinate system was obtained in the above S404, and the conversion pose information of the first image in S503. Each speckle feature point in a first image is converted into the reference coordinate system through R_j and t_j, and then transformed into the standard scale coordinate system through λ, R_s, t_s to obtain QF_i(X_i, Y_i, Z_i). Therefore, the pose information of the first image in the standard scale coordinate system can be obtained according to the following formula (thirteen) and formula (fourteen):

R'_j = R_s · R_j    formula (thirteen)
t'_j = λ · R_s · t_j + t_s    formula (fourteen)

Converting R'_j and t'_j into the form of angle elements and line elements gives the pose information X'_{s,j}, Y'_{s,j}, Z'_{s,j}, φ'_j, ω'_j, κ'_j of the first image in the standard scale coordinate system.
Optionally, pose information of each first image under the standard scale coordinate system and three-dimensional coordinates of each speckle characteristic point under the standard scale coordinate system can be optimized by using a beam adjustment method, and specifically, two-dimensional coordinates of each speckle characteristic point in the first image can be optimizedThree-dimensional coordinate QF under obtained standard ruler coordinate system i (X i ,Y i ,Z i ) According to the following formula (fifteen) and formula (sixteen), the target three-dimensional coordinates of each speckle characteristic point and the target pose information of each first image under a standard scale coordinate system can be obtained by optimizing according to the least square principle.
Wherein the reprojected coordinates are those of QF_i(X_i, Y_i, Z_i) on the first image, computed from the pose information of the first image in the standard scale coordinate system.
However, since the observed image coordinates differ from the reprojected coordinates by an error, the following error equation formula (sixteen) can be listed, and the optimal pose parameters and the optimal X_i, Y_i, Z_i can be calculated by optimizing the error equation according to the least squares principle. The optimal pose parameters are then taken as the target pose information of the first image in the standard scale coordinate system, and the optimal X_i, Y_i, Z_i as the target three-dimensional coordinates of each speckle characteristic point in the standard scale coordinate system.
Wherein v_x and v_y are the corrections of x_{i,j} and y_{i,j}; a_{ij} (i = 1, 2; j = 1, 2, 3, 4, 5, 6) are the partial derivatives of the collinearity equations with respect to the pose parameters; and ΔX_i, ΔY_i, ΔZ_i are the corrections of X_i, Y_i, Z_i, respectively.
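The reprojection residuals that this least-squares optimization drives toward zero can be sketched as follows, using a simplified pinhole model with a single focal length f; the patent's collinearity equations additionally carry the principal point and the full angle-element parameterization:

```python
import numpy as np

def reprojection_residuals(R, t, f, pts_3d, pts_2d):
    # transform standard-scale points into the camera frame,
    # project with a pinhole model, and subtract the observations
    cam = pts_3d @ R.T + t
    proj = f * cam[:, :2] / cam[:, 2:3]
    return (pts_2d - proj).ravel()

# a point on the optical axis reprojects to the image center exactly
res = reprojection_residuals(np.eye(3), np.array([0.0, 0.0, 5.0]), 1.0,
                             np.array([[0.0, 0.0, 0.0]]),
                             np.array([[0.0, 0.0]]))
```

Bundle adjustment stacks these residuals over all images and speckle points and solves for the pose parameters and X_i, Y_i, Z_i jointly.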
Fig. 7 is a schematic flow chart of determining pose conversion information of a three-dimensional camera and a positioning camera according to an embodiment of the present application, as shown in fig. 7, before determining target pose information of the three-dimensional camera at a current position according to target pose information of the positioning camera and pose conversion information of the three-dimensional camera and the positioning camera at the current position, S104 may include:
s601, acquiring a second image in a calibration point control field shot by the positioning camera at the current position and a third image in the calibration point control field shot by the three-dimensional camera at the current position.
The shooting fields of view of the positioning camera and the three-dimensional camera each include a plurality of calibration points. The calibration point control field differs from the speckle control field described above in that its calibration points are distributed on at least two mutually perpendicular planes, such as wall surfaces, and the three-dimensional coordinates of all calibration points in the calibration point control field are measured in advance with a photogrammetric camera.
Optionally, in the calibration stage, the three-dimensional camera and the positioning camera may simultaneously shoot the calibration points in the calibration point control field from the same position, so as to obtain the second image and the third image, respectively.
S602, determining target pose information of the positioning camera according to the two-dimensional coordinates of each calibration point in the second image, the three-dimensional coordinates of each calibration point in the calibration point control field, and the initial pose information of the positioning camera at the current position.
Optionally, given the two-dimensional coordinates of each calibration point in the second image captured by the positioning camera, the three-dimensional coordinates of each calibration point in the calibration point control field, and the initial pose information of the positioning camera at the current position, the target pose information φ_1, ω_1, κ_1 (together with the corresponding line elements) of the positioning camera can be obtained according to the collinearity equation formula (one) and the error equation formula (two).
S603, determining target pose information of the three-dimensional camera according to the two-dimensional coordinates of each calibration point in the third image, the three-dimensional coordinates of each calibration point in the calibration point control field, and the initial pose information of the three-dimensional camera at the current position.
Optionally, given the two-dimensional coordinates of each calibration point in the third image captured by the three-dimensional camera, the three-dimensional coordinates of each calibration point in the calibration point control field, and the initial pose information of the three-dimensional camera at the current position, the target pose information φ_2, ω_2, κ_2 (together with the corresponding line elements) of the three-dimensional camera can be obtained according to the collinearity equation formula (one) and the error equation formula (two).
S604, determining pose conversion information of the positioning camera and the three-dimensional camera at the current position according to the target pose information of the positioning camera and the target pose information of the three-dimensional camera.
Specifically, it can be obtained by the following formula (seventeen).
Wherein the pose conversion information of the positioning camera and the three-dimensional camera at the current position is φ_21, ω_21, κ_21, with corresponding rotation matrix R_21 and translation vector t_21; φ_1, ω_1, κ_1 denote the target pose information of the positioning camera at the current position in the calibration stage, and φ_2, ω_2, κ_2 denote the target pose information of the three-dimensional camera at the current position in the calibration stage.
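Assuming the usual world-to-camera convention x_cam = R x_world + t (the patent does not state its convention explicitly), formula (seventeen) amounts to composing the two absolute poses; a minimal sketch:

```python
import numpy as np

def pose_conversion(R1, t1, R2, t2):
    # fixed transform taking positioning-camera coordinates (pose R1, t1)
    # into three-dimensional-camera coordinates (pose R2, t2)
    R21 = R2 @ R1.T
    t21 = t2 - R21 @ t1
    return R21, t21

# sanity check: both cameras observe the same world point consistently
R1, t1 = np.eye(3), np.array([0.0, 0.0, 0.0])
R2, t2 = np.eye(3), np.array([1.0, 0.0, 0.0])
R21, t21 = pose_conversion(R1, t1, R2, t2)
x_world = np.array([2.0, 3.0, 4.0])
x1 = R1 @ x_world + t1
x2 = R2 @ x_world + t2
# R21 @ x1 + t21 reproduces x2
```

Because the two cameras are rigidly mounted together, (R_21, t_21) computed once in the calibration stage remains valid at every later station.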
Fig. 8 is a schematic diagram of a positioning device of a three-dimensional camera according to an embodiment of the present application; as shown in fig. 8, the device includes:
an obtaining module 701, configured to obtain at least one speckle feature point in a current image captured by the positioning camera at a current position;
the matching module 702 is configured to match speckle characteristic points in the current image with speckle characteristic points in a calibration image to obtain at least one target speckle characteristic point and the three-dimensional coordinates of the target speckle characteristic point, wherein the calibration image is captured by the positioning camera at the current position in a calibration stage, and the shooting field of view of the positioning camera includes a plurality of speckles;
A determining module 703, configured to determine target pose information of the positioning camera when the current image is captured according to two-dimensional coordinates of each target speckle feature point in the current image, three-dimensional coordinates of each target speckle feature point, and initial pose information when the positioning camera captures the current image;
a determining module 703, configured to determine target pose information of the three-dimensional camera at the current position according to the target pose information of the positioning camera and pose conversion information of the three-dimensional camera and the positioning camera at the current position.
Optionally, the matching module 702 is specifically configured to:
extracting marks and key descriptors of speckle characteristic points in the current image;
searching a target speckle characteristic point matched with the mark of the speckle characteristic point and the key descriptor of the speckle characteristic point in the calibration image, and taking the three-dimensional coordinate of the target speckle characteristic point in the calibration image as the three-dimensional coordinate of the target speckle characteristic point, wherein the three-dimensional coordinate of each point in the calibration image is the three-dimensional coordinate under a standard scale coordinate system.
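The descriptor-based search this module performs can be sketched as a nearest-neighbour match; the distance threshold and the two-dimensional descriptors below are illustrative assumptions, not values from the patent:

```python
import numpy as np

def match_speckle_points(desc_cur, desc_cal, max_dist=0.5):
    # for each current-image descriptor, find the closest
    # calibration-image descriptor; accept pairs under the threshold
    matches = []
    for i, d in enumerate(desc_cur):
        dists = np.linalg.norm(desc_cal - d, axis=1)
        j = int(np.argmin(dists))
        if dists[j] < max_dist:
            matches.append((i, j))
    return matches

desc_cur = np.array([[0.0, 0.0], [1.0, 1.0]])
desc_cal = np.array([[1.0, 1.1], [0.1, 0.0]])
pairs = match_speckle_points(desc_cur, desc_cal)  # → [(0, 1), (1, 0)]
```

Each accepted pair then inherits the stored standard-scale three-dimensional coordinates of the matched calibration-image point.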
Optionally, the determining module 703 is specifically configured to:
Acquiring a plurality of first images of a speckle control field with a standard ruler and speckles, which are shot by the positioning camera, wherein the standard ruler comprises a plurality of mark points, and each first image comprises a mark point and a speckle characteristic point;
according to each first image, determining at least one group of feature point homonymy point pairs, and determining three-dimensional coordinates of each speckle feature point in each feature point homonymy point pair under a reference coordinate system and pose information of each first image under the reference coordinate system;
determining at least one group of mark point homonymy point pairs according to each first image, determining three-dimensional coordinates of each mark point in each mark point homonymy point pair under a reference coordinate system, and converting the three-dimensional coordinates of each speckle characteristic point under the reference coordinate system into three-dimensional coordinates under a standard scale coordinate system according to the three-dimensional coordinates of each mark point under the reference coordinate system and the three-dimensional coordinates of each mark point under the standard scale coordinate system;
determining pose information of each first image under a standard scale coordinate system according to pose information of each first image under a reference coordinate system;
and obtaining and storing target three-dimensional coordinates of the speckle characteristic points and target pose information of the first images under the standard scale coordinate system according to the three-dimensional coordinates of the speckle characteristic points under the standard scale coordinate system and the pose information of the first images under the standard scale coordinate system.
Optionally, the determining module 703 is specifically configured to:
traversing the plurality of first images, and matching speckle characteristic points in a current first image and a previous image of the current first image aiming at the traversed current first image to obtain at least one group of characteristic point homonymous point pairs, wherein the characteristic point homonymous point pairs comprise first speckle characteristic points and second speckle characteristic points, the first speckle characteristic points are contained in the current first image, and the second speckle characteristic points are contained in the previous image;
determining three-dimensional coordinates of the first speckle feature points and the second speckle feature points in a reference coordinate system according to the two-dimensional coordinates of the first speckle feature points, the two-dimensional coordinates of the second speckle feature points, the initial pose information of the current first image and the pose information of the previous image in the reference coordinate system;
determining a transformation matrix of the current first image and the previous image according to the three-dimensional coordinates of the first speckle characteristic points and the second speckle characteristic points under a reference coordinate system;
and determining pose information of the current first image under the reference coordinate system according to the transformation matrix and the pose information of each image before the current first image under the reference coordinate system.
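Writing each pose as a 4×4 homogeneous matrix, the chaining of pairwise transforms described above can be sketched as follows (the matrix convention is an assumption):

```python
import numpy as np

def chain_pose(T_prev_in_ref, T_cur_to_prev):
    # pose of the current first image in the reference frame is the
    # previous image's pose composed with the pairwise transform
    return T_prev_in_ref @ T_cur_to_prev

T_prev = np.eye(4); T_prev[:3, 3] = [0.0, 0.0, 1.0]  # previous image pose
T_rel = np.eye(4);  T_rel[:3, 3] = [1.0, 0.0, 0.0]   # current-to-previous
T_cur = chain_pose(T_prev, T_rel)  # translation part → [1., 0., 1.]
```

Traversing the sequence this way propagates the first image's frame through every later image, at the cost of accumulating drift that the later bundle adjustment corrects.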
Optionally, the determining module 703 is specifically configured to:
traversing the plurality of first images, and matching the current first image with mark points in a first image aiming at the traversed current first image to obtain mark point homonymous point pairs, wherein the mark point homonymous point pairs comprise first mark points and second mark points, the first mark points are contained in the current first image, and the second mark points are contained in the first image;
determining a three-dimensional coordinate of the first mark point under a reference coordinate system according to the two-dimensional coordinate of the first mark point and pose information of the current first image under the reference coordinate system;
determining conversion pose information of the current first image according to the three-dimensional coordinates of the first mark points under the reference coordinate system and the three-dimensional coordinates of the first mark points under the standard scale coordinate system;
and converting the three-dimensional coordinates of each speckle characteristic point under the reference coordinate system into three-dimensional coordinates under the standard scale coordinate system according to the conversion pose information of the current first image.
Optionally, the determining module 703 is specifically configured to:
and determining pose information of the first image under a standard scale coordinate system according to the pose information of the first image under the reference coordinate system and the converted pose information of the first image.
Optionally, the determining module 703 is specifically configured to:
acquiring a second image in a calibration point control field shot by the positioning camera at the current position and a third image in the calibration point control field shot by the three-dimensional camera at the current position, wherein the shooting fields of the positioning camera and the three-dimensional camera respectively comprise a plurality of calibration points;
determining target pose information of the positioning camera according to the two-dimensional coordinates of each target point in the second image, the three-dimensional coordinates of each target point in the target point control field and the initial pose information of the positioning camera when the positioning camera shoots the second image at the current position;
determining target pose information of the three-dimensional camera according to the two-dimensional coordinates of each target point in the third image, the three-dimensional coordinates of each target point in the target point control field and the initial pose information of the three-dimensional camera when the three-dimensional camera shoots the third image at the current position;
and determining pose conversion information of the positioning camera and the three-dimensional camera at the current position according to the target pose information of the positioning camera and the target pose information of the three-dimensional camera.
Fig. 9 is a block diagram of an electronic device 800 according to an embodiment of the present application. As shown in fig. 9, the electronic device may include: a processor 801, and a memory 802.
Optionally, the electronic device may further include a bus 803. The memory 802 is configured to store machine-readable instructions executable by the processor 801; when the electronic device 800 is running, the processor 801 communicates with the memory 802 via the bus 803, and the machine-readable instructions, when executed by the processor 801, perform the method steps in the foregoing method embodiments.
The present application also provides a computer readable storage medium, on which a computer program is stored, which when being executed by a processor performs the method steps in the above-mentioned positioning method embodiment of the three-dimensional camera.
It will be clearly understood by those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described system and apparatus may refer to corresponding procedures in the method embodiments, which are not described in detail in this application. In the several embodiments provided in this application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. The above-described apparatus embodiments are merely illustrative, and the division of the modules is merely a logical function division, and there may be additional divisions when actually implemented, and for example, multiple modules or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be through some communication interface, indirect coupling or communication connection of devices or modules, electrical, mechanical, or other form.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution, in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
The foregoing is merely a specific embodiment of the present application, but the protection scope of the present application is not limited thereto, and any person skilled in the art can easily think about changes or substitutions within the technical scope of the present application, and the changes or substitutions are covered in the protection scope of the present application.

Claims (10)

1. A method of positioning a three-dimensional camera, the method comprising:
acquiring at least one speckle characteristic point in a current image shot by a positioning camera at a current position;
matching speckle characteristic points in a current image with speckle characteristic points in a calibration image to obtain at least one target speckle characteristic point and three-dimensional coordinates of the target speckle characteristic point, wherein the calibration image is captured by the positioning camera at the current position in a calibration stage, and a shooting field of view of the positioning camera comprises a plurality of speckles;
determining target pose information of the positioning camera when the current image is shot according to two-dimensional coordinates of each target speckle characteristic point in the current image, three-dimensional coordinates of each target speckle characteristic point and initial pose information of the positioning camera when the current image is shot;
and determining the target pose information of the three-dimensional camera at the current position according to the target pose information of the positioning camera and pose conversion information of the three-dimensional camera and the positioning camera at the current position.
2. The method according to claim 1, wherein the matching the speckle characteristic points in the current image with the speckle characteristic points in the calibration image to obtain at least one target speckle characteristic point and three-dimensional coordinates of the target speckle characteristic point comprises:
Extracting marks and key descriptors of speckle characteristic points in the current image;
searching a target speckle characteristic point matched with the mark of the speckle characteristic point and the key descriptor of the speckle characteristic point in the calibration image, and taking the three-dimensional coordinate of the target speckle characteristic point in the calibration image as the three-dimensional coordinate of the target speckle characteristic point, wherein the three-dimensional coordinate of each point in the calibration image is the three-dimensional coordinate under a standard scale coordinate system.
3. The method of claim 1, wherein before the matching the speckle characteristic points in the current image with the speckle characteristic points in the calibration image, the method comprises:
acquiring a plurality of first images of a speckle control field with a standard ruler and speckles, which are shot by the positioning camera, wherein the standard ruler comprises a plurality of mark points, and each first image comprises a mark point and a speckle characteristic point;
according to each first image, determining at least one group of feature point homonymy point pairs, and determining three-dimensional coordinates of each speckle feature point in each feature point homonymy point pair under a reference coordinate system and pose information of each first image under the reference coordinate system;
determining at least one group of mark point homonymy point pairs according to each first image, determining three-dimensional coordinates of each mark point in each mark point homonymy point pair under a reference coordinate system, and converting the three-dimensional coordinates of each speckle characteristic point under the reference coordinate system into three-dimensional coordinates under a standard scale coordinate system according to the three-dimensional coordinates of each mark point under the reference coordinate system and the three-dimensional coordinates of each mark point under the standard scale coordinate system;
Determining pose information of each first image under a standard scale coordinate system according to pose information of each first image under a reference coordinate system;
and obtaining and storing target three-dimensional coordinates of the speckle characteristic points and target pose information of the first images under the standard scale coordinate system according to the three-dimensional coordinates of the speckle characteristic points under the standard scale coordinate system and the pose information of the first images under the standard scale coordinate system.
4. A method according to claim 3, wherein determining at least one set of feature point homonymous point pairs according to each first image, and determining three-dimensional coordinates of each speckle feature point in each feature point homonymous point pair in a reference coordinate system, and pose information of each first image in the reference coordinate system, comprises:
traversing the plurality of first images, and matching speckle characteristic points in a current first image and a previous image of the current first image aiming at the traversed current first image to obtain at least one group of characteristic point homonymous point pairs, wherein the characteristic point homonymous point pairs comprise first speckle characteristic points and second speckle characteristic points, the first speckle characteristic points are contained in the current first image, and the second speckle characteristic points are contained in the previous image;
Determining three-dimensional coordinates of the first speckle feature points and the second speckle feature points in a reference coordinate system according to the two-dimensional coordinates of the first speckle feature points, the two-dimensional coordinates of the second speckle feature points, the initial pose information of the current first image and the pose information of the previous image in the reference coordinate system;
determining a transformation matrix of the current first image and the previous image according to the three-dimensional coordinates of the first speckle characteristic points and the second speckle characteristic points under a reference coordinate system;
and determining pose information of the current first image under the reference coordinate system according to the transformation matrix and the pose information of each image before the current first image under the reference coordinate system.
5. A method according to claim 3, wherein determining at least one set of marker point homonymous point pairs from each first image, determining three-dimensional coordinates of each marker point in each marker point homonymous point pair under a reference coordinate system, and converting the three-dimensional coordinates of each speckle feature point under the reference coordinate system to three-dimensional coordinates under a standard scale coordinate system based on the three-dimensional coordinates of each marker point under the reference coordinate system and the three-dimensional coordinates of each marker point under the standard scale coordinate system, comprises:
Traversing the plurality of first images, and matching the current first image with mark points in a first image aiming at the traversed current first image to obtain mark point homonymous point pairs, wherein the mark point homonymous point pairs comprise first mark points and second mark points, the first mark points are contained in the current first image, and the second mark points are contained in the first image;
determining a three-dimensional coordinate of the first mark point under a reference coordinate system according to the two-dimensional coordinate of the first mark point and pose information of the current first image under the reference coordinate system;
determining conversion pose information of the current first image according to the three-dimensional coordinates of the first mark points under the reference coordinate system and the three-dimensional coordinates of the first mark points under the standard scale coordinate system;
and converting the three-dimensional coordinates of each speckle characteristic point under the reference coordinate system into three-dimensional coordinates under the standard scale coordinate system according to the conversion pose information of the current first image.
6. The method of claim 5, wherein determining pose information of each first image in the standard coordinate system based on pose information of each first image in the reference coordinate system comprises:
And determining pose information of the first image under a standard scale coordinate system according to the pose information of the first image under the reference coordinate system and the converted pose information of the first image.
7. The method according to claim 1, wherein before determining the target pose information of the three-dimensional camera at the current position according to the target pose information of the positioning camera and the pose conversion information of the three-dimensional camera and the positioning camera at the current position, the method comprises:
acquiring a second image in a calibration point control field shot by the positioning camera at the current position and a third image in the calibration point control field shot by the three-dimensional camera at the current position, wherein the shooting fields of the positioning camera and the three-dimensional camera respectively comprise a plurality of calibration points;
determining target pose information of the positioning camera according to the two-dimensional coordinates of each target point in the second image, the three-dimensional coordinates of each target point in the target point control field and the initial pose information of the positioning camera when the positioning camera shoots the second image at the current position;
determining target pose information of the three-dimensional camera according to the two-dimensional coordinates of each target point in the third image, the three-dimensional coordinates of each target point in the target point control field and the initial pose information of the three-dimensional camera when the three-dimensional camera shoots the third image at the current position;
And determining pose conversion information of the positioning camera and the three-dimensional camera at the current position according to the target pose information of the positioning camera and the target pose information of the three-dimensional camera.
8. A positioning device of a three-dimensional camera, comprising:
the acquisition module is used for acquiring at least one speckle characteristic point in a current image shot by the positioning camera at a current position;
the matching module is used for matching the speckle characteristic points in the current image with the speckle characteristic points in the calibration image to obtain at least one target speckle characteristic point and the three-dimensional coordinates of the target speckle characteristic point, wherein the calibration image is captured by the positioning camera at the current position in a calibration stage, and the shooting field of view of the positioning camera comprises a plurality of speckles;
the determining module is used for determining target pose information of the positioning camera when the current image is shot according to the two-dimensional coordinates of each target speckle characteristic point in the current image, the three-dimensional coordinates of each target speckle characteristic point and the initial pose information of the positioning camera when the current image is shot;
and the determining module is used for determining the target pose information of the three-dimensional camera at the current position according to the target pose information of the positioning camera and the pose conversion information of the three-dimensional camera and the positioning camera at the current position.
9. An electronic device comprising a memory and a processor, the memory storing a computer program executable by the processor, the processor implementing the steps of the method of positioning a three-dimensional camera according to any one of claims 1-7 when the computer program is executed.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored thereon a computer program which, when run by a processor, performs the steps of the positioning method of a three-dimensional camera as claimed in any one of claims 1-7.
CN202311541826.4A 2023-11-16 2023-11-16 Positioning method and device of three-dimensional camera, electronic equipment and storage medium Pending CN117670990A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311541826.4A CN117670990A (en) 2023-11-16 2023-11-16 Positioning method and device of three-dimensional camera, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311541826.4A CN117670990A (en) 2023-11-16 2023-11-16 Positioning method and device of three-dimensional camera, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN117670990A true CN117670990A (en) 2024-03-08

Family

ID=90076202

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311541826.4A Pending CN117670990A (en) 2023-11-16 2023-11-16 Positioning method and device of three-dimensional camera, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117670990A (en)

Similar Documents

Publication Publication Date Title
CN113532311B (en) Point cloud splicing method, device, equipment and storage equipment
CN109859272B (en) Automatic focusing binocular camera calibration method and device
WO2014061372A1 (en) Image-processing device, image-processing method, and image-processing program
CN110728715A (en) Camera angle self-adaptive adjusting method of intelligent inspection robot
CN112184811B (en) Monocular space structured light system structure calibration method and device
CN113592721B (en) Photogrammetry method, apparatus, device and storage medium
Li et al. Cross-ratio–based line scan camera calibration using a planar pattern
CN113329179B (en) Shooting alignment method, device, equipment and storage medium
CN113920206A (en) Calibration method of perspective tilt-shift camera
CN111383264B (en) Positioning method, positioning device, terminal and computer storage medium
CN112070844A (en) Calibration method and device of structured light system, calibration tool diagram, equipment and medium
US7046839B1 (en) Techniques for photogrammetric systems
RU2384882C1 (en) Method for automatic linking panoramic landscape images
CN113034615B (en) Equipment calibration method and related device for multi-source data fusion
CN117670990A (en) Positioning method and device of three-dimensional camera, electronic equipment and storage medium
CN115965697A (en) Projector calibration method, calibration system and device based on Samm's law
Tagoe et al. Determination of the Interior Orientation Parameters of a Non-metric Digital Camera for Terrestrial Photogrammetric Applications
CN115375773A (en) External parameter calibration method and related device for monocular laser speckle projection system
CN114926542A (en) Mixed reality fixed reference system calibration method based on optical positioning system
CN114170321A (en) Camera self-calibration method and system based on distance measurement
CN110232715B (en) Method, device and system for self calibration of multi-depth camera
CN113223163A (en) Point cloud map construction method and device, equipment and storage medium
CN112819900A (en) Method for calibrating internal azimuth, relative orientation and distortion coefficient of intelligent stereography
CN111292297A (en) Welding seam detection method, device and equipment based on binocular stereo vision and storage medium
CN116862999B (en) Calibration method, system, equipment and medium for three-dimensional measurement of double cameras

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination