CN117994334A - Pose estimation method and device of shooting equipment, computer equipment and storage medium

Info

Publication number: CN117994334A
Application number: CN202211355744.6A
Authority: CN (China)
Original/Current Assignee: Insta360 Innovation Technology Co Ltd
Inventors: 孟强, 袁文亮
Other languages: Chinese (zh)
Legal status: Pending
Classification: Image Analysis

Abstract

The application relates to a pose estimation method and apparatus for a photographing device, a computer device, a storage medium, and a computer program product. The method includes the following steps: acquiring at least three groups of matching point pairs, each formed by a feature point on a first image and a feature point on a second image acquired by the photographing device, together with a rotation matrix of the photographing device corrected in a preset direction, where the rotation matrix contains a rotation angle representing the amount of rotation of the photographing device about the preset direction as an axis; generating a constraint equation according to the rotation matrix, a translation vector to be determined, and the epipolar constraint condition; obtaining a constraint equation set based on the constraint equation and the coordinate values of the matching point pairs in a first coordinate system and a second coordinate system; and solving the constraint equation set to determine the value of the rotation angle and the value of each element in the translation vector, thereby obtaining the pose of the photographing device. The pose includes the translation vector and rotation angle of the photographing device as it moves from the position where the first image is captured to the position where the second image is captured. The method can increase the speed of pose calculation.

Description

Pose estimation method and device of shooting equipment, computer equipment and storage medium
Technical Field
The present application relates to the field of computer vision, and in particular, to a pose estimation method, apparatus, computer device, storage medium, and computer program product for a photographing device.
Background
With the development of computer vision technology, determining the pose of a photographing device from the images it captures has become an important problem in the field of computer vision and plays an extremely important role in fields such as robotics, augmented reality, unmanned aerial vehicles, and autonomous driving. Increasing the speed at which the pose of the photographing device is calculated has therefore become an important problem constraining the development of computer vision technology.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a pose estimation method, apparatus, computer device, computer-readable storage medium, and computer program product for a photographing device that can increase the calculation speed.
In a first aspect, the present application provides a pose estimation method of a photographing apparatus. The method comprises the following steps:
Acquiring at least three groups of matching point pairs formed by characteristic points on a first image and characteristic points on a second image acquired by the shooting equipment and a rotation matrix corrected by the shooting equipment in a preset direction; the rotation matrix comprises a rotation angle to be determined; the rotation angle is used for representing the rotation amount of the shooting equipment by taking the preset direction as an axis;
Generating a constraint equation according to the rotation matrix, the translation vector to be determined and the epipolar constraint condition;
Obtaining a constraint equation set based on coordinate values of all the matching point pairs in the first coordinate system and the second coordinate system and the constraint equation;
Solving the constraint equation set, and determining the value of the rotation angle and the value of each element in the translation vector to obtain the pose of the shooting equipment; the pose includes a translation vector and a rotation angle of the photographing apparatus moving from a position where the first image is photographed to a position where the second image is photographed.
In one embodiment, the generating a constraint equation from the rotation matrix, the translation vector to be determined, and the epipolar constraint condition comprises:
splitting the rotation matrix into an identity matrix and a rotation vector constructed from the rotation angle;
and generating a constraint equation according to the identity matrix, the rotation vector, the translation vector to be determined, and the epipolar constraint condition.
In one embodiment, the matching point pair includes a first feature point and a second feature point; the coordinate values comprise a first coordinate value and a second coordinate value; the obtaining a constraint equation set based on the coordinate values of each matching point pair in the first coordinate system and the second coordinate system and the constraint equation includes:
Acquiring a first image obtained by shooting a target object at a first position point by the shooting equipment and a second image obtained by shooting the target object at a second position point by the shooting equipment;
Determining a first coordinate value of the first feature point in a first coordinate system based on the first image, and determining a second coordinate value of the second feature point in a second coordinate system based on the second image;
and obtaining a constraint equation set based on the first coordinate value, the second coordinate value and the constraint equation.
In one embodiment, the matching point pair includes a first feature point and a second feature point; the coordinate values comprise a first coordinate value of the first feature point in the first coordinate system and a second coordinate value of the second feature point in the second coordinate system; the obtaining a constraint equation set based on the coordinate values of all the matching point pairs in the first coordinate system and the second coordinate system and the constraint equation includes:
forming a matching coordinate pair from the first coordinate value of the first feature point and the second coordinate value of the second feature point in each group of matching point pairs;
respectively substituting each matching coordinate pair into the constraint equation;
and forming a constraint equation set from the constraint equations into which the matching coordinate pairs have been substituted.
In one embodiment, said solving said constraint equation set, determining values of said rotation angle and values of elements in said translation vector comprises:
acquiring a parameter matrix of the constraint equation set;
calculating the determinant of the parameter matrix;
calculating the value of the rotation angle from the determinant;
and calculating the value of each element in the translation vector based on the value of the rotation angle.
In one embodiment, calculating the value of each element in the translation vector based on the value of the rotation angle includes:
substituting the value of the rotation angle into the constraint equation set;
and solving the constraint equation set into which the value of the rotation angle has been substituted to obtain the value of each element in the translation vector.
In a second aspect, the application further provides a pose estimation device of the shooting equipment. The device comprises:
The acquisition module is used for acquiring at least three groups of matching point pairs formed by the characteristic points on the first image and the characteristic points on the second image acquired by the shooting equipment and a rotation matrix corrected by the shooting equipment in a preset direction; the rotation matrix comprises a rotation angle to be determined; the rotation angle is used for representing the rotation amount of the shooting equipment which rotates by taking the preset direction as an axis;
the generation module is used for generating a constraint equation according to the rotation matrix, the translation vector to be determined and the epipolar constraint condition;
the obtaining module is used for obtaining a constraint equation set based on coordinate values of all the matching point pairs in the first coordinate system and the second coordinate system and the constraint equation;
the determining module is used for solving the constraint equation set, determining the value of the rotation angle and the value of each element in the translation vector, and obtaining the pose of the shooting equipment; the pose includes a translation vector and a rotation angle of the photographing apparatus moving from a position where the first image is photographed to a position where the second image is photographed.
In one embodiment, the generating module is further configured to:
splitting the rotation matrix into an identity matrix and a rotation vector constructed from the rotation angle;
and generating a constraint equation according to the identity matrix, the rotation vector, the translation vector to be determined, and the epipolar constraint condition.
In one embodiment, the matching point pair includes a first feature point and a second feature point; the coordinate values comprise a first coordinate value and a second coordinate value; the obtaining module is further configured to:
Acquiring a first image obtained by shooting a target object at a first position point by the shooting equipment and a second image obtained by shooting the target object at a second position point by the shooting equipment;
Determining a first coordinate value of the first feature point in a first coordinate system based on the first image, and determining a second coordinate value of the second feature point in a second coordinate system based on the second image;
and obtaining a constraint equation set based on the first coordinate value, the second coordinate value and the constraint equation.
In one embodiment, the matching point pair includes a first feature point and a second feature point; the coordinate values comprise a first coordinate value of the first feature point in the first coordinate system and a second coordinate value of the second feature point in the second coordinate system; the obtaining module is further configured to:
forming a matching coordinate pair from the first coordinate value of the first feature point and the second coordinate value of the second feature point in each group of matching point pairs;
respectively substituting each matching coordinate pair into the constraint equation;
and forming a constraint equation set from the constraint equations into which the matching coordinate pairs have been substituted.
In one embodiment, the determining module is further configured to:
acquiring a parameter matrix of the constraint equation set;
calculating the determinant of the parameter matrix;
calculating the value of the rotation angle from the determinant;
and calculating the value of each element in the translation vector based on the value of the rotation angle.
In one embodiment, the determining module is further configured to:
substituting the value of the rotation angle into the constraint equation set;
and solving the constraint equation set into which the value of the rotation angle has been substituted to obtain the value of each element in the translation vector.
In a third aspect, the present application also provides a computer device. The computer device comprises a memory storing a computer program and a processor which when executing the computer program performs the steps of:
Acquiring at least three groups of matching point pairs formed by characteristic points on a first image and characteristic points on a second image acquired by the shooting equipment and a rotation matrix corrected by the shooting equipment in a preset direction; the rotation matrix comprises a rotation angle to be determined; the rotation angle is used for representing the rotation amount of the shooting equipment by taking the preset direction as an axis;
Generating a constraint equation according to the rotation matrix, the translation vector to be determined and the epipolar constraint condition;
Obtaining a constraint equation set based on coordinate values of all the matching point pairs in the first coordinate system and the second coordinate system and the constraint equation;
Solving the constraint equation set, and determining the value of the rotation angle and the value of each element in the translation vector to obtain the pose of the shooting equipment; the pose includes a translation vector and a rotation angle of the photographing apparatus moving from a position where the first image is photographed to a position where the second image is photographed.
In a fourth aspect, the present application also provides a computer-readable storage medium. The computer readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of:
Acquiring at least three groups of matching point pairs formed by characteristic points on a first image and characteristic points on a second image acquired by the shooting equipment and a rotation matrix corrected by the shooting equipment in a preset direction; the rotation matrix comprises a rotation angle to be determined; the rotation angle is used for representing the rotation amount of the shooting equipment by taking the preset direction as an axis;
Generating a constraint equation according to the rotation matrix, the translation vector to be determined and the epipolar constraint condition;
Obtaining a constraint equation set based on coordinate values of all the matching point pairs in the first coordinate system and the second coordinate system and the constraint equation;
Solving the constraint equation set, and determining the value of the rotation angle and the value of each element in the translation vector to obtain the pose of the shooting equipment; the pose includes a translation vector and a rotation angle of the photographing apparatus moving from a position where the first image is photographed to a position where the second image is photographed.
In a fifth aspect, the present application also provides a computer program product. The computer program product comprises a computer program which, when executed by a processor, implements the steps of:
Acquiring at least three groups of matching point pairs formed by characteristic points on a first image and characteristic points on a second image acquired by the shooting equipment and a rotation matrix corrected by the shooting equipment in a preset direction; the rotation matrix comprises a rotation angle to be determined; the rotation angle is used for representing the rotation amount of the shooting equipment by taking the preset direction as an axis;
Generating a constraint equation according to the rotation matrix, the translation vector to be determined and the epipolar constraint condition;
Obtaining a constraint equation set based on coordinate values of all the matching point pairs in the first coordinate system and the second coordinate system and the constraint equation;
Solving the constraint equation set, and determining the value of the rotation angle and the value of each element in the translation vector to obtain the pose of the shooting equipment; the pose includes a translation vector and a rotation angle of the photographing apparatus moving from a position where the first image is photographed to a position where the second image is photographed.
The pose estimation method, apparatus, computer device, storage medium, and computer program product of the photographing device acquire at least three groups of matching point pairs, each formed by a feature point on the first image and a feature point on the second image acquired by the photographing device, together with a rotation matrix of the photographing device corrected in a preset direction; the rotation matrix contains a rotation angle to be determined, and the rotation angle represents the amount of rotation of the photographing device about the preset direction as an axis. A constraint equation can therefore be generated from the rotation matrix corrected in the preset direction, which reduces the constraint conditions of the constraint equation, that is, reduces the number of feature points required to solve the constraint equation. The constraint equation is generated from the rotation matrix, the translation vector to be determined, and the epipolar constraint condition; a constraint equation set is obtained from the constraint equation and the coordinate values of the matching point pairs in the first coordinate system and the second coordinate system; and the constraint equation set is solved to determine the value of the rotation angle and the value of each element in the translation vector, which gives the pose of the photographing device, where the pose includes the translation vector and the rotation angle of the photographing device as it moves from the position where the first image is captured to the position where the second image is captured. Thus the pose of the photographing device is obtained by solving only for the rotation of the photographing device about the preset direction as an axis, without solving for rotation about any other axis. This reduces the number of feature points required to solve for the pose and reduces the amount of computation; moreover, no iterative calculation is needed to solve the constraint equation set, so the calculation is fast and the calculation speed of the pose of the photographing device is improved.
Drawings
FIG. 1 is an application environment diagram of a pose estimation method of a photographing apparatus in one embodiment;
fig. 2 is a flow chart of a pose estimation method of a photographing apparatus in one embodiment;
FIG. 3 is a schematic diagram of a rotational matrix and translation vectors in one embodiment;
FIG. 4 is a flow diagram of a method of obtaining a set of constraint equations in one embodiment;
FIG. 5 is a schematic diagram of a photographing device photographing feature points at a first location point and a second location point according to an embodiment;
Fig. 6 is a flowchart of a pose estimation method of a photographing apparatus in another embodiment;
fig. 7 is a block diagram showing a configuration of a pose estimation apparatus of a photographing device in one embodiment;
FIG. 8 is an internal block diagram of a computer device in one embodiment;
fig. 9 is an internal structural diagram of a computer device in one embodiment.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
According to the pose estimation method of the photographing device provided by the present application, pose estimation of the photographing device is realized using computer vision technology, which belongs to artificial intelligence (Artificial Intelligence, AI) technology.
Computer Vision (CV) is the science of how to make machines "see"; more specifically, it uses cameras and computers instead of human eyes to recognize, track, and measure targets, and further performs graphics processing so that the result is more suitable for human observation or for transmission to instruments for detection. As a scientific discipline, computer vision studies related theories and technologies in an attempt to build artificial intelligence systems that can acquire information from images or multidimensional data. Computer vision technologies typically include image processing, image recognition, image semantic understanding, image retrieval, OCR, video processing, video semantic understanding, video content/behavior recognition, three-dimensional object reconstruction, 3D technology, virtual reality, augmented reality, and simultaneous localization and mapping, as well as common biometric recognition technologies such as face recognition and fingerprint recognition.
The pose estimation method of the shooting equipment provided by the embodiment of the application can be applied to an application environment shown in fig. 1. The computer device 102 acquires at least three groups of matching point pairs formed by the characteristic points on the first image and the characteristic points on the second image acquired by the shooting device and a rotation matrix corrected by the shooting device in a preset direction; the rotation matrix comprises a rotation angle to be determined; the rotation angle is used for representing the rotation amount of the shooting device with the preset direction as an axis; generating a constraint equation according to the rotation matrix, the translation vector to be determined and the epipolar constraint condition; obtaining a constraint equation set based on coordinate values of all the matching point pairs in the first coordinate system and the second coordinate system and constraint equations; solving a constraint equation set, and determining the value of the rotation angle and the value of each element in the translation vector to obtain the pose of the shooting equipment; the pose includes a translation vector and a rotation angle of the photographing apparatus moving from a position where the first image is photographed to a position where the second image is photographed. The computer device 102 may be a terminal or a server, and the terminal may be, but is not limited to, various personal computers, notebook computers, smart phones, tablet computers, internet of things devices and portable wearable devices, and the internet of things devices may be smart speakers, smart televisions, smart air conditioners, smart vehicle devices and the like. The portable wearable device may be a smart watch, smart bracelet, headset, or the like. The server may be implemented as a stand-alone server or as a server cluster composed of a plurality of servers.
In one embodiment, as shown in FIG. 2, a pose estimation method of a photographing device is provided. The method is described using the computer device 102 in FIG. 1 as an example, and includes the following steps:
S202, acquiring at least three groups of matching point pairs formed by characteristic points on a first image and characteristic points on a second image acquired by shooting equipment and a rotation matrix corrected by the shooting equipment in a preset direction; the rotation matrix comprises a rotation angle to be determined; the rotation angle is used to represent the rotation amount of the photographing apparatus about a preset direction as an axis.
The matching point pair is a point pair formed by matched pixel points in two images, where the two images are obtained by the photographing device photographing points in space from different positions. The two pixel points in a matching point pair correspond to the same point in space and belong to different photographing device coordinate systems. For example, the photographing device photographs a point P in space from two different positions to obtain image 1 and image 2; the pixel point at which P is imaged in image 1 is point P', the pixel point at which P is imaged in image 2 is point P'', and P' and P'' form a matching point pair. For example, in the unmanned driving field, a matching point pair may consist of the pixels at which a point on some object in the driving environment is imaged in different images. Specifically, the computer device photographs a point A in the driving environment through a photographing device deployed on the unmanned vehicle, and obtains a matching point pair formed by the pixel points corresponding to point A in different images. As shown in FIG. 3, the photographing device performs a translational motion and a rotational motion, and the rotation matrix is a matrix describing the rotational motion of the photographing device. For example, the rotation matrix may be a 3×3 matrix. For another example, the rotation matrix may be expressed in terms of a three-dimensional antisymmetric (skew-symmetric) matrix. The rotation angle represents the amount of rotation of the photographing device about the preset direction as an axis. For example, the preset direction may be the direction of gravitational acceleration. For another example, the preset direction may be the X-axis, Y-axis, or Z-axis direction of a Cartesian coordinate system; the rotation angle about the X-axis is Pitch, the rotation angle about the Y-axis is Yaw, and the rotation angle about the Z-axis is Roll.
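The application does not specify how the matching point pairs are detected and matched. Purely as an illustration, and not as part of the claimed method, the following Python sketch uses OpenCV ORB features with brute-force Hamming matching (the function and parameter names are illustrative assumptions) to obtain candidate matching point pairs from the first image and the second image.

```python
# Illustrative sketch only: one common way (not specified by this application) to obtain
# matching point pairs from the first image and the second image, using OpenCV ORB features.
import cv2

def match_feature_points(img1_path, img2_path, max_pairs=50):
    img1 = cv2.imread(img1_path, cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread(img2_path, cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create()                        # feature detector/descriptor
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    # Brute-force matcher with Hamming distance for binary ORB descriptors.
    bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(bf.match(des1, des2), key=lambda m: m.distance)

    # Return pixel coordinates of the matched first/second feature points.
    return [(kp1[m.queryIdx].pt, kp2[m.trainIdx].pt) for m in matches[:max_pairs]]
```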
In one embodiment, the rotation angle in a two-dimensional space is θ, where a positive θ indicates a clockwise rotation, and the rotation matrix may be
R(θ) = [[cos θ, sin θ], [-sin θ, cos θ]].
In one embodiment, the rotation matrix is a rotation matrix that has been corrected by the photographing device in a preset direction. For example, when the rotation matrix is corrected in the direction of gravity, the gravity-corrected rotation matrix can be written in terms of a single rotation angle θ about the gravity direction, where θ represents the rotation angle. It can be understood that the rotation matrix may also be corrected in other preset directions, converting the problem of solving the rotation amounts of the photographing device in multiple directions into solving the rotation amount about the preset direction only. Because a gyroscope is affected by zero offset and noise during use, the zero offset and noise accumulated over time produce larger and larger errors. To reduce this error, the rotation is corrected using the preset direction, so that the error in the rotation angles about the three axes is reduced to an error about the preset direction only. Thus only the rotation angle of the photographing device about the preset direction needs to be estimated, rather than the rotation angles about all three axes, which reduces the computational complexity and increases the calculation speed. The computer device may also correct the rotation in another direction, reducing the error in the rotation angles about the three axes to an error about that direction only, so that only the rotation amount of the photographing device about that direction needs to be estimated.
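As a rough sketch of what correcting the rotation in the gravity direction can look like in practice (this is an assumption about the implementation, not stated in the application): given a gravity direction measured, for example, by an accelerometer in the photographing device coordinate system, a rotation can be computed that aligns it with a reference gravity direction, so that the remaining uncorrected rotation is only about the gravity axis.

```python
import numpy as np

def skew(v):
    """Antisymmetric matrix [v]x such that skew(v) @ u == np.cross(v, u)."""
    x, y, z = v
    return np.array([[0.0, -z,  y],
                     [z,  0.0, -x],
                     [-y,  x, 0.0]])

def align_to_gravity(g_meas, g_ref=np.array([0.0, 0.0, -1.0])):
    """Rotation mapping the measured gravity direction g_meas onto the reference
    direction g_ref, so that the remaining device rotation is only about g_ref."""
    a = g_meas / np.linalg.norm(g_meas)
    b = g_ref / np.linalg.norm(g_ref)
    v = np.cross(a, b)
    c = float(np.dot(a, b))
    if np.isclose(c, -1.0):
        # Degenerate case: rotate 180 degrees about any axis perpendicular to a.
        axis = np.cross(a, np.array([1.0, 0.0, 0.0]))
        if np.linalg.norm(axis) < 1e-8:
            axis = np.cross(a, np.array([0.0, 1.0, 0.0]))
        axis /= np.linalg.norm(axis)
        return 2.0 * np.outer(axis, axis) - np.eye(3)
    # Rodrigues-style formula for the rotation taking a to b.
    return np.eye(3) + skew(v) + skew(v) @ skew(v) / (1.0 + c)
```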
S204, generating a constraint equation according to the rotation matrix, the translation vector to be determined and the epipolar constraint condition.
The translation vector is a vector describing the translational motion of the photographing device. For example, the translation vector may be t = (tx, ty, tz)^T, where tx represents the motion of the photographing device in the X-axis direction, ty represents the motion in the Y-axis direction, and tz represents the motion in the Z-axis direction. The epipolar constraint condition is the constraint satisfied by the pixel points of a matching point pair; specifically, p1 and p2 are the matched pixel points, in the two images, corresponding to the same spatial point P. With R denoting the rotation matrix and t the translation vector, the epipolar constraint condition may be written as p2^T E p1 = 0, where E is the essential matrix, E = [t]× R, and [t]× denotes the antisymmetric matrix obtained from the translation vector t.
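A minimal numerical sketch of the epipolar constraint described above; the function names are illustrative assumptions. It builds the essential matrix E = [t]× R and evaluates the residual p2^T E p1, which is close to zero for a correct matching point pair.

```python
import numpy as np

def skew(v):
    """Antisymmetric matrix [v]x such that skew(v) @ u == np.cross(v, u)."""
    x, y, z = v
    return np.array([[0.0, -z,  y],
                     [z,  0.0, -x],
                     [-y,  x, 0.0]])

def essential_matrix(R, t):
    """E = [t]x R, as used in the epipolar constraint p2^T E p1 = 0."""
    return skew(t) @ R

def epipolar_residual(p1, p2, R, t):
    """Scalar residual p2^T E p1; near zero for a correct matching point pair."""
    return float(p2 @ essential_matrix(R, t) @ p1)
```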
The constraint equation is an equation representing the constraint relation between the rotation angle to be determined and the translation vector. In one embodiment, the constraint equation treats the coordinates of the feature points in the different photographing device coordinate systems as constants and the rotation angle and the translation vector as variables. The value of the rotation angle and the values of the elements in the translation vector can be obtained by solving the constraint equation.
S206, obtaining a constraint equation set based on coordinate values of all the matching point pairs in the first coordinate system and the second coordinate system and the constraint equation.
The first coordinate system and the second coordinate system are photographing device coordinate systems. A photographing device coordinate system is a three-dimensional rectangular coordinate system established with the focusing center of the photographing device as the origin and the optical axis of the photographing device as the Z-axis. When the photographing device moves, the origin of the photographing device coordinate system moves with it. The first coordinate system and the second coordinate system are the photographing device coordinate systems corresponding to different positions of the focusing center of the photographing device. For example, the first coordinate system is the photographing device coordinate system when the focusing center is at point A, and the second coordinate system is the photographing device coordinate system when the focusing center is at point B. The constraint equation set is an equation set composed of a plurality of constraint equations whose coordinate values differ. For example, the constraint equation set may be a homogeneous system of linear equations in three unknowns.
In one embodiment, the matching point pair includes a first feature point and a second feature point; the coordinate values include a first coordinate value of the first feature point in the first coordinate system and a second coordinate value of the second feature point in the second coordinate system. S206 specifically includes: forming a matching coordinate pair from the first coordinate value of the first feature point and the second coordinate value of the second feature point in each group of matching point pairs; respectively substituting each matching coordinate pair into the constraint equation; and forming a constraint equation set from the constraint equations into which the matching coordinate pairs have been substituted.
The first coordinate value is the coordinate value, in the first coordinate system, of the first feature point of the matching point pair, and the second coordinate value is the coordinate value, in the second coordinate system, of the second feature point of the matching point pair. For example, the coordinate value of the first feature point p1 in the first coordinate system is (x1, y1, z1), and the coordinate value of the second feature point p2 in the second coordinate system is (x2, y2, z2). The computer device forms a matching coordinate pair from the first coordinate value of the first feature point and the second coordinate value of the second feature point in each group of matching point pairs. For example, when there are three groups of matching point pairs, the resulting matching coordinate pairs are (p1^(1), p2^(1)), (p1^(2), p2^(2)), and (p1^(3), p2^(3)). The computer device substitutes each of the three matching coordinate pairs into the constraint equation, and the three resulting constraint equations constitute a constraint equation set.
S208, solving a constraint equation set, and determining the value of the rotation angle and the value of each element in the translation vector to obtain the pose of the shooting equipment; the pose includes a translation vector and a rotation angle of the photographing apparatus moving from a position where the first image is photographed to a position where the second image is photographed.
The elements in the translation vector are used for representing the movement amount of the shooting equipment in the direction of the corresponding coordinate axis, and the movement direction of the shooting equipment can be determined according to the values of the elements. For example, an element t x in the translation vector represents a moving distance of the photographing apparatus in the X-axis direction. The pose includes a translation vector and a rotation angle of the photographing apparatus moving from a position where the first image is photographed to a position where the second image is photographed. For example, when the photographing apparatus moves from a position a where the first image is photographed to a position B where the second image is photographed, the pose includes a translation vector and a rotation angle of the photographing apparatus moving from the position a to the position B.
The computer device obtains a solution of the constraint equation set by solving it. Because the constraint equation set takes the rotation angle and the translation vector as variables, the solution obtained is the value of the rotation angle and the values of the elements in the translation vector, which together give the pose of the photographing device.
In the above embodiment, at least three groups of matching point pairs, each formed by a feature point on the first image and a feature point on the second image acquired by the photographing device, are acquired together with a rotation matrix of the photographing device corrected in a preset direction; the rotation matrix contains a rotation angle to be determined, and the rotation angle represents the amount of rotation of the photographing device about the preset direction as an axis. A constraint equation can therefore be generated from the rotation matrix corrected in the preset direction, which reduces the constraint conditions of the constraint equation, that is, reduces the number of feature points required to solve the constraint equation. The constraint equation is generated from the rotation matrix, the translation vector to be determined, and the epipolar constraint condition; a constraint equation set is obtained from the constraint equation and the coordinate values of the matching point pairs in the first coordinate system and the second coordinate system; and the constraint equation set is solved to determine the value of the rotation angle and the value of each element in the translation vector, which gives the pose of the photographing device, where the pose includes the translation vector and the rotation angle of the photographing device as it moves from the position where the first image is captured to the position where the second image is captured. Thus the pose of the photographing device is obtained by solving only for the rotation of the photographing device about the preset direction as an axis, without solving for rotation about any other axis. This reduces the number of feature points required to solve for the pose and reduces the amount of computation; moreover, no iterative calculation is needed to solve the constraint equation set, so the calculation is fast and the calculation speed of the pose of the photographing device is improved.
In one embodiment, S204 specifically includes: splitting the rotation matrix into an identity matrix and a rotation vector constructed from the rotation angle; and generating a constraint equation according to the identity matrix, the rotation vector, the translation vector to be determined, and the epipolar constraint condition.
The identity matrix is a square matrix whose main-diagonal elements are all 1. For example, the identity matrix may be the 3×3 square matrix I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]; the identity matrix has the same order as the rotation matrix. The rotation vector is a vector containing the rotation angle to be determined; for example, the rotation vector is s = (0, 0, θ)^T. The computer device splits the rotation matrix into the identity matrix and the rotation vector constructed from the rotation angle. For example, the computer device splits the rotation matrix into the identity matrix I and the rotation vector s = (0, 0, θ)^T such that R = I + [s]×, where [s]× denotes the antisymmetric matrix obtained from the rotation vector s.
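The split R = I + [s]× can be written directly in code. The sketch below (with assumed names) constructs R from the rotation angle θ in this split form and checks that, for a small θ, it agrees with the Z-axis rotation matrix to first order.

```python
import numpy as np

def skew(v):
    """Antisymmetric matrix [v]x such that skew(v) @ u == np.cross(v, u)."""
    x, y, z = v
    return np.array([[0.0, -z,  y],
                     [z,  0.0, -x],
                     [-y,  x, 0.0]])

def rotation_from_angle(theta):
    """Rotation matrix in the split form R = I + [s]x with s = (0, 0, theta)^T."""
    s = np.array([0.0, 0.0, theta])
    return np.eye(3) + skew(s)

# For a small theta this agrees with the exact Z-axis rotation to first order.
theta = 0.01
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
assert np.allclose(rotation_from_angle(theta), Rz, atol=1e-3)
```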
The computer device generates a constraint equation from the identity matrix, the rotation vector, the translation vector to be determined, and the epipolar constraint condition. For example, the computer device substitutes R = I + [s]× into the epipolar constraint p2^T [t]× R p1 = 0 to obtain (s^T [p1]× [p2]× - p2^T [p1]×) t = 0, and then substitutes p1 = (x1, y1, z1)^T and p2 = (x2, y2, z2)^T to obtain a constraint equation of the form
(θ·x1·z2 - y2·z1 + y1·z2)·tx + (θ·y1·z2 - x1·z2 + x2·z1)·ty + (-θ·(x1·x2 + y1·y2) - x2·y1 + x1·y2)·tz = 0,
which is linear in the elements of the translation vector t, with coefficients that are linear in the rotation angle θ.
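To make the constraint equation above concrete, the following sketch (assumed names) computes, for one matching coordinate pair, the coefficient row of (s^T [p1]× [p2]× - p2^T [p1]×) t = 0, split into a part multiplied by θ and a constant part, so that the row equals θ·m - n.

```python
import numpy as np

def skew(v):
    """Antisymmetric matrix [v]x such that skew(v) @ u == np.cross(v, u)."""
    x, y, z = v
    return np.array([[0.0, -z,  y],
                     [z,  0.0, -x],
                     [-y,  x, 0.0]])

def constraint_row_parts(p1, p2):
    """
    For s = (0, 0, theta)^T, the constraint (s^T [p1]x [p2]x - p2^T [p1]x) t = 0
    has coefficient row  theta * m - n  acting on t = (tx, ty, tz)^T, where:
      m = third row of [p1]x [p2]x   (since s^T [p1]x [p2]x = theta * e3^T [p1]x [p2]x)
      n = (p2 x p1)^T                (since p2^T [p1]x = (p2 x p1)^T)
    """
    m = (skew(p1) @ skew(p2))[2, :]
    n = np.cross(p2, p1)
    return m, n

def constraint_row(p1, p2, theta):
    m, n = constraint_row_parts(p1, p2)
    return theta * m - n
```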
In the above embodiment, the computer device splits the rotation matrix into an identity matrix and a rotation vector constructed from the rotation angle, and generates the constraint equation according to the identity matrix, the rotation vector, the translation vector to be determined, and the epipolar constraint condition. Therefore, after the matching coordinate pairs are substituted into the constraint equation, a homogeneous system of linear equations in three unknowns is obtained, and its solution, that is, the value of the rotation angle and the values of the elements of the translation vector, can be obtained using only three matching coordinate pairs. This reduces the number of feature points required to solve the constraint equation set, reduces the computational complexity, and increases the calculation speed.
In one embodiment, the matching point pair includes a first feature point and a second feature point; the coordinate values comprise a first coordinate value and a second coordinate value; as shown in fig. 4, S206 specifically includes the following steps:
S402, a first image obtained by shooting the target object at a first position point by the shooting device and a second image obtained by shooting the target object at a second position point by the shooting device are obtained.
The first position point and the second position point are position points in the three-dimensional space. The camera device can be moved from a first position point to a second position point by a translational movement and a rotational movement. For example, in the robot field, the first position point and the second position point are two position points on the robot moving path. For example, in the field of unmanned aerial vehicles, the first location point and the second location point are two location points on the unmanned aerial vehicle flight path. The first image is an image obtained by photographing the object at the first position point by the photographing apparatus. The second image is an image obtained by photographing the object at the second position point by the photographing apparatus. Since the photographing apparatus photographs the object at different positions, the photographing angles of view corresponding to the first image and the second image are different.
In one embodiment, as shown in fig. 5, the robot may move from a first location point to a second location point with the photographing device, photograph the target object at the first location point and the second location point, respectively, photograph the first image at the first location point, and photograph the second image at the second location point.
S404, determining a first coordinate value of the first feature point in the first coordinate system based on the first image, and determining a second coordinate value of the second feature point in the second coordinate system based on the second image.
In one embodiment, S404 specifically includes: the computer equipment determines a first pixel coordinate of a first feature point in the first image, and determines a first coordinate value of the first feature point in a first coordinate system corresponding to the shooting equipment according to the first pixel coordinate; and determining a second pixel coordinate of the second feature point in the second image, and determining a second coordinate value of the second feature point in a second coordinate system corresponding to the shooting equipment according to the second pixel coordinate.
The computer device determines the pixel coordinates of the feature points in the first image and the second image through image recognition, and then converts the pixel coordinates into coordinates in the photographing device coordinate system according to the intrinsic parameters (intrinsics) of the photographing device. Specifically, it determines the first coordinate value of the first feature point in the first coordinate system corresponding to the photographing device according to the first pixel coordinate, and determines the second coordinate value of the second feature point in the second coordinate system corresponding to the photographing device according to the second pixel coordinate.
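A minimal sketch of the pixel-to-camera conversion mentioned here, assuming a pinhole model with intrinsic matrix K; the matrix values and names are illustrative assumptions.

```python
import numpy as np

def pixel_to_camera(pixel_uv, K):
    """Convert a pixel coordinate (u, v) to a normalized photographing-device
    coordinate (up to scale) using the intrinsic matrix K of a pinhole model."""
    u, v = pixel_uv
    return np.linalg.inv(K) @ np.array([u, v, 1.0])

# Example with an assumed intrinsic matrix (fx, fy, cx, cy are illustrative values).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
p1 = pixel_to_camera((350.0, 260.0), K)   # first coordinate value, up to scale
```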
S406, obtaining a constraint equation set based on the first coordinate value, the second coordinate value and the constraint equation.
The computer device substitutes the first coordinate value and the second coordinate value into the constraint equation to obtain a constraint equation set. Specifically, the computer device forms the first coordinate value and the second coordinate value of each group of matching point pairs into a matching coordinate pair, and then substitutes each matching coordinate pair for the coordinate constants of the constraint equation. For example, in the constraint equation, (x1, y1, z1) and (x2, y2, z2) are the coordinate constants, where (x1, y1, z1) represents the first coordinate value and (x2, y2, z2) represents the second coordinate value. Substituting the first coordinate value (x1^(i), y1^(i), z1^(i)) and the second coordinate value (x2^(i), y2^(i), z2^(i)) of the i-th matching coordinate pair, where i = 1, 2, 3, into the constraint equation yields three equations. The resulting constraint equation set is shown in formula (1), and can be written compactly as A(θ)t = 0.
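Continuing the sketch from the constraint equation above (assumed names), the three matching coordinate pairs can be assembled into the parameter matrix of formula (1) in the form A(θ) = θ·M - N, so that the constraint equation set reads A(θ)t = 0.

```python
import numpy as np

def skew(v):
    x, y, z = v
    return np.array([[0.0, -z,  y],
                     [z,  0.0, -x],
                     [-y,  x, 0.0]])

def build_A_parts(pairs):
    """pairs: three (p1, p2) matching coordinate pairs, each a length-3 array.
    Returns M, N such that the parameter matrix is A(theta) = theta * M - N."""
    M = np.vstack([(skew(p1) @ skew(p2))[2, :] for p1, p2 in pairs])
    N = np.vstack([np.cross(p2, p1) for p1, p2 in pairs])
    return M, N

def A_of_theta(M, N, theta):
    # 3x3 parameter matrix of the constraint equation set A(theta) t = 0.
    return theta * M - N
```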
In the above embodiment, a first image obtained by the photographing device photographing the target object at the first location point and a second image obtained by the photographing device photographing the target object at the second location point are obtained; a first coordinate value of the first feature point in the first coordinate system is determined based on the first image, and a second coordinate value of the second feature point in the second coordinate system is determined based on the second image; and a constraint equation set is obtained based on the first coordinate value, the second coordinate value, and the constraint equation. Therefore, the value of the rotation angle and the values of the elements in the translation vector can be obtained by solving the constraint equation set to determine the pose of the photographing device. All the constraint properties of the epipolar constraint condition are used in the solving process, the obtained pose of the photographing device is an exact solution, and the calculation accuracy of the pose of the photographing device is improved.
In one embodiment, S208 specifically includes: acquiring a parameter matrix of the constraint equation set; calculating the determinant of the parameter matrix; calculating the value of the rotation angle from the determinant; and calculating the values of the elements in the translation vector based on the value of the rotation angle.
The parameter matrix is the matrix formed by the parameters (coefficients) of the constraint equation set. For example, if the constraint equation set is written as A(θ)t = 0, the parameter matrix is A(θ). The determinant of the parameter matrix is the determinant formed from all the elements of the parameter matrix; if A is the parameter matrix, its determinant is det(A).
In one embodiment, the constraint equation set is shown in formula (1), with parameter matrix A(θ), and det(A(θ)) is a cubic polynomial in θ. Since the translation vector t is a non-zero vector, formula (1) implies det(A(θ)) = 0, and solving this equation gives the value θ0 of θ, that is, the value θ0 of the rotation angle. The values of the elements in the translation vector can then be calculated based on the value of the rotation angle.
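One possible numerical route for this step, offered as an assumption rather than the prescribed implementation: since det(θ·M - N) is a cubic polynomial in θ, its coefficients can be recovered by evaluating the determinant at four sample values of θ and fitting a degree-3 polynomial, after which the roots are obtained without iteration.

```python
import numpy as np

def solve_rotation_angle(M, N):
    """Solve det(theta * M - N) = 0 for theta (a univariate cubic equation)."""
    samples = np.array([-1.0, 0.0, 1.0, 2.0])            # four sample points
    dets = [np.linalg.det(s * M - N) for s in samples]
    coeffs = np.polyfit(samples, dets, 3)                 # cubic coefficients
    roots = np.roots(coeffs)
    real_roots = roots[np.abs(roots.imag) < 1e-9].real    # keep real roots only
    return real_roots                                      # candidate values of theta
```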
In the above embodiment, the parameter matrix of the constraint equation set is obtained and its determinant is calculated. The value of the rotation angle is obtained from the determinant, and the values of the elements in the translation vector are then calculated based on the value of the rotation angle. Because the determinant of the parameter matrix yields a polynomial of low degree, namely a cubic polynomial, the computational complexity of solving for the value of the rotation angle is reduced and the calculation speed is increased. Moreover, because det(A(θ)) = 0 has a closed-form root, the equation can be solved without iteration, which further increases the calculation speed.
In one embodiment, the computer device substitutes the value of the rotation angle into the constraint equation set, and calculates the values of the elements in the translation vector from the constraint equation set into which the value of the rotation angle has been substituted.
Assuming that the constraint equation set is expressed as A(θ)t = 0, once the value θ0 of the rotation angle has been calculated, θ0 is substituted into the constraint equation set to obtain A(θ0)t = 0, which is a homogeneous system of linear equations in three unknowns. The values of the elements in the translation vector are obtained by solving the equation A(θ0)t = 0.
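Because A(θ0)t = 0 is homogeneous, t is determined only up to scale; a common way to recover its direction is as the null-space vector of A(θ0) via the SVD. A minimal sketch under assumed names:

```python
import numpy as np

def solve_translation(M, N, theta0):
    """Solve A(theta0) t = 0 for the translation vector t (defined up to scale)."""
    A = theta0 * M - N
    _, _, Vt = np.linalg.svd(A)
    t = Vt[-1]                       # right singular vector for the smallest singular value
    return t / np.linalg.norm(t)     # unit-norm translation direction
```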
In the above embodiment, after the value θ0 of the rotation angle is obtained by solving the univariate cubic equation det(A(θ)) = 0, the values of the elements in the translation vector are solved from the linear system A(θ0)t = 0. This guarantees the calculation accuracy of the values of the elements in the translation vector while increasing the calculation speed.
In one embodiment, S208 further includes: estimating the coordinates of the feature points in a world coordinate system according to the pose of the photographing device. Specifically, if the photographing device moves from the first location point to the second location point, then once the pose at the second location point is obtained, the second position coordinate of the photographing device at the second location point can be obtained from the pose and the first position coordinate of the photographing device at the first location point. The first position coordinate and the second position coordinate are world coordinate system coordinates. Since the first location point and the feature point determine one straight line, the second location point and the feature point determine another straight line, and the intersection of the two lines is the feature point, the coordinates of the feature point can be calculated from the first position coordinate and the second position coordinate. The computer device can construct a map of the operating environment of the robot, unmanned aerial vehicle, or unmanned vehicle according to the calculated coordinates of the feature points, so that the robot, unmanned aerial vehicle, or unmanned vehicle can avoid obstacles according to the constructed map.
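A minimal sketch of the triangulation described here, under the assumption that the two viewing rays are intersected by taking the midpoint of their closest points (the names and the least-squares formulation are illustrative):

```python
import numpy as np

def triangulate_midpoint(c1, d1, c2, d2):
    """Estimate the world coordinate of a feature point from two viewing rays:
    ray 1 passes through camera position c1 with unit direction d1,
    ray 2 passes through camera position c2 with unit direction d2.
    Returns the midpoint of the closest points on the two rays."""
    # Least squares for scalars (a, b) minimizing |c1 + a*d1 - (c2 + b*d2)|.
    A = np.stack([d1, -d2], axis=1)           # 3x2 system matrix
    b = c2 - c1
    (a, s), *_ = np.linalg.lstsq(A, b, rcond=None)
    point1 = c1 + a * d1
    point2 = c2 + s * d2
    return 0.5 * (point1 + point2)
```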
In one embodiment, as shown in fig. 6, the pose estimation method of the photographing apparatus includes the steps of:
S602, acquiring at least three groups of matching point pairs formed by characteristic points on a first image and characteristic points on a second image acquired by shooting equipment and a rotation matrix corrected by the shooting equipment in a preset direction; the rotation matrix comprises a rotation angle to be determined; the rotation angle is used to represent the rotation amount of the photographing apparatus about a preset direction as an axis.
S604, dividing the rotation matrix into an identity matrix and a rotation vector constructed by the rotation angle.
S606, a constraint equation is generated according to the identity matrix, the rotation vector, the translation vector to be determined and the epipolar constraint condition.
S608, a first image obtained by the photographing device photographing the target object at the first location point and a second image obtained by the photographing device photographing the target object at the second location point are obtained.
S610, determining a first coordinate value of the first feature point in a first coordinate system based on the first image, and determining a second coordinate value of the second feature point in a second coordinate system based on the second image.
S612, forming a matching coordinate pair from the first coordinate value of the first feature point and the second coordinate value of the second feature point in each group of matching point pairs.
S614, respectively substituting each matching coordinate pair into the constraint equation, and forming a constraint equation set from the constraint equations into which the matching coordinate pairs have been substituted.
S616, a parameter matrix of the constraint equation set is obtained.
And S618, calculating a determinant of the parameter matrix, and calculating according to the determinant to obtain a value of the rotation angle.
S620, the value of the rotation angle is substituted into the constraint equation set.
S622, the constraint equation set into which the value of the rotation angle has been substituted is solved to obtain the value of each element in the translation vector, giving the pose of the photographing device; the pose includes the translation vector and the rotation angle of the photographing device as it moves from the position where the first image is captured to the position where the second image is captured.
For the specific content of S602 to S622, reference may be made to the specific implementation processes described above.
It should be understood that, although the steps in the flowcharts related to the embodiments described above are sequentially shown as indicated by arrows, these steps are not necessarily sequentially performed in the order indicated by the arrows. The steps are not strictly limited to the order of execution unless explicitly recited herein, and the steps may be executed in other orders. Moreover, at least some of the steps in the flowcharts described in the above embodiments may include a plurality of steps or a plurality of stages, which are not necessarily performed at the same time, but may be performed at different times, and the order of the steps or stages is not necessarily performed sequentially, but may be performed alternately or alternately with at least some of the other steps or stages.
Based on the same inventive concept, the embodiment of the application also provides a device for realizing the pose estimation method of the shooting equipment. The implementation of the solution provided by the apparatus is similar to the implementation described in the above method, so the specific limitation in one or more apparatus embodiments provided below may refer to the limitation of the pose estimation method of the photographing apparatus hereinabove, and will not be described herein.
In one embodiment, as shown in fig. 7, there is provided a pose estimation apparatus of a photographing device, including: an acquisition module 702, a generation module 704, an acquisition module 706, and a determination module 708, wherein:
An acquisition module 702, configured to acquire at least three sets of matching point pairs formed by feature points on a first image and feature points on a second image acquired by a photographing device and a rotation matrix corrected by the photographing device in a preset direction; the rotation matrix comprises a rotation angle to be determined; the rotation angle is used for indicating the rotation amount of the shooting device which rotates by taking the preset direction as an axis;
a generating module 704, configured to generate a constraint equation according to the rotation matrix, the translation vector to be determined, and the epipolar constraint condition;
An obtaining module 706, configured to obtain a constraint equation set based on coordinate values of all matching point pairs in the first coordinate system and the second coordinate system and constraint equations;
A determining module 708, configured to solve the constraint equation set, determine a value of the rotation angle and a value of each element in the translation vector, and obtain a pose of the photographing apparatus; the pose includes a translation vector and a rotation angle of the photographing apparatus moving from a position where the first image is photographed to a position where the second image is photographed.
In the above embodiment, at least three groups of matching point pairs, each formed by a feature point on the first image and a feature point on the second image acquired by the photographing device, are acquired together with a rotation matrix of the photographing device corrected in a preset direction; the rotation matrix contains a rotation angle to be determined, and the rotation angle represents the amount of rotation of the photographing device about the preset direction as an axis. A constraint equation can therefore be generated from the rotation matrix corrected in the preset direction, which reduces the constraint conditions of the constraint equation, that is, reduces the number of feature points required to solve the constraint equation. The constraint equation is generated from the rotation matrix, the translation vector to be determined, and the epipolar constraint condition; a constraint equation set is obtained from the constraint equation and the coordinate values of the matching point pairs in the first coordinate system and the second coordinate system; and the constraint equation set is solved to determine the value of the rotation angle and the value of each element in the translation vector, which gives the pose of the photographing device, where the pose includes the translation vector and the rotation angle of the photographing device as it moves from the position where the first image is captured to the position where the second image is captured. Thus the pose of the photographing device is obtained by solving only for the rotation of the photographing device about the preset direction as an axis, without solving for rotation about any other axis. This reduces the number of feature points required to solve for the pose and reduces the amount of computation; moreover, no iterative calculation is needed to solve the constraint equation set, so the calculation is fast and the calculation speed of the pose of the photographing device is improved.
In one embodiment, the generating module 704 is further configured to:
split the rotation matrix into an identity matrix and a rotation vector constructed from the rotation angle;
and generate the constraint equation from the identity matrix, the rotation vector, the translation vector to be determined, and the epipolar constraint condition (one possible form of this split is sketched below).
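The patent does not spell out the algebraic form of this split, so the sketch below shows one common way of writing a rotation about the preset axis as an identity matrix plus angle-dependent terms, using the Rodrigues formula; the y-axis default and the function names are assumptions.

```python
import numpy as np

def skew(n):
    # Cross-product (skew-symmetric) matrix of a 3-vector n.
    return np.array([[0.0, -n[2], n[1]],
                     [n[2], 0.0, -n[0]],
                     [-n[1], n[0], 0.0]])

def split_rotation(theta, axis=(0.0, 1.0, 0.0)):
    # Rodrigues form R = I + sin(theta) K + (1 - cos(theta)) K @ K: an
    # identity matrix plus terms built from the rotation angle and the
    # preset rotation axis (the y-axis default is an assumption).
    n = np.asarray(axis, dtype=float)
    K = skew(n / np.linalg.norm(n))
    identity = np.eye(3)
    angle_terms = np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)
    return identity, angle_terms  # their sum is the rotation matrix R
```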
In one embodiment, each matching point pair includes a first feature point and a second feature point, and the coordinate values include a first coordinate value and a second coordinate value; the obtaining module 706 is further configured to:
acquire a first image obtained by the photographing device photographing a target object at a first position and a second image obtained by the photographing device photographing the target object at a second position;
determine a first coordinate value of the first feature point in a first coordinate system based on the first image, and determine a second coordinate value of the second feature point in a second coordinate system based on the second image;
and obtain a constraint equation set based on the first coordinate value, the second coordinate value, and the constraint equation (an illustrative sketch of this step follows below).
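As an illustration of how the first and second coordinate values might be obtained in practice, the sketch below detects and matches feature points with ORB in OpenCV and maps the pixel coordinates into normalized camera coordinates with an intrinsic matrix K_intr; the choice of detector, the intrinsic matrix, and all names are assumptions, since the patent does not prescribe a particular feature extractor.

```python
import cv2
import numpy as np

def matched_normalized_points(img1, img2, K_intr, max_pairs=3):
    # Detect and match feature points on the two images, then convert the
    # pixel coordinates into normalized coordinates in the first and second
    # (camera) coordinate systems using the intrinsic matrix K_intr.
    orb = cv2.ORB_create()
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:max_pairs]
    K_inv = np.linalg.inv(K_intr)
    pairs = []
    for m in matches:
        u1, v1 = kp1[m.queryIdx].pt
        u2, v2 = kp2[m.trainIdx].pt
        x1 = K_inv @ np.array([u1, v1, 1.0])  # first coordinate value
        x2 = K_inv @ np.array([u2, v2, 1.0])  # second coordinate value
        pairs.append((x1, x2))
    return pairs
```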
In one embodiment, each matching point pair includes a first feature point and a second feature point, and the coordinate values include a first coordinate value of the first feature point in the first coordinate system and a second coordinate value of the second feature point in the second coordinate system; the obtaining module 706 is further configured to:
form a matching coordinate pair from the first coordinate value of the first feature point and the second coordinate value of the second feature point in each set of matching point pairs;
substitute each matching coordinate pair into the constraint equation respectively;
and form a constraint equation set from the constraint equations into which the matching coordinate pairs have been substituted (see the sketch after this list).
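A minimal sketch of assembling the constraint equation set under the same y-axis assumption as before: substituting each matching coordinate pair into the constraint equation yields one row of a parameter matrix A(theta), so the constraint equation set takes the homogeneous form A(theta) t = 0.

```python
import numpy as np

def rotation_about_y(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def parameter_matrix(theta, coord_pairs):
    # Substituting each matching coordinate pair (x1, x2) into the constraint
    # equation gives one row a_i = (R(theta) x1) x x2, so the constraint
    # equation set is A(theta) t = 0; with three pairs A(theta) is 3 x 3.
    rows = [np.cross(rotation_about_y(theta) @ x1, x2) for x1, x2 in coord_pairs]
    return np.vstack(rows)
```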
In one embodiment, the determining module 708 is further configured to:
acquire a parameter matrix of the constraint equation set;
compute the determinant of the parameter matrix;
solve for the value of the rotation angle from the determinant;
and compute the value of each element in the translation vector from the value of the rotation angle (a numerical sketch of the determinant step follows below).
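The sketch below is one possible numerical realization of the determinant step, reusing the parameter_matrix helper from the earlier sketch: a non-zero translation exists only where det(A(theta)) = 0, so candidate rotation angles are roots of the determinant. The patent's exact solving procedure is not reproduced here; a closed-form polynomial solution is equally possible, and this grid-plus-bisection search is purely an assumption for illustration.

```python
import numpy as np

def solve_rotation_angle(coord_pairs, samples=3600, iters=60):
    # Candidate rotation angles are roots of det(A(theta)) = 0, since the
    # homogeneous system A(theta) t = 0 only has a non-zero solution t when
    # the parameter matrix is singular.
    def det_at(theta):
        return np.linalg.det(parameter_matrix(theta, coord_pairs))

    thetas = np.linspace(-np.pi, np.pi, samples)
    values = np.array([det_at(th) for th in thetas])
    roots = []
    for lo, hi, f_lo, f_hi in zip(thetas[:-1], thetas[1:], values[:-1], values[1:]):
        if f_lo * f_hi > 0.0:
            continue  # no sign change, so no root in this interval
        for _ in range(iters):  # bisection refinement
            mid = 0.5 * (lo + hi)
            if det_at(lo) * det_at(mid) <= 0.0:
                hi = mid
            else:
                lo = mid
        roots.append(0.5 * (lo + hi))
    return roots
```

Spurious candidates, if any, would typically be discarded using additional matching point pairs or a cheirality check.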
In one embodiment, the determining module 708 is further configured to:
substitute the value of the rotation angle into the constraint equation set;
and solve the constraint equation set into which the value of the rotation angle has been substituted to obtain the value of each element in the translation vector (see the sketch below).
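A sketch of this final step under the same assumptions: once a rotation angle value is substituted in, the parameter matrix becomes (numerically) singular, and the translation can be read off as its null-space direction, here via SVD; parameter_matrix refers to the earlier sketch, and the scale of the translation remains undetermined.

```python
import numpy as np

def solve_translation(theta, coord_pairs):
    # Substitute the solved rotation angle into the constraint equation set;
    # the translation is then the null-space direction of the parameter
    # matrix (recovered here via SVD) and is only determined up to scale.
    A = parameter_matrix(theta, coord_pairs)  # helper from the earlier sketch
    _, _, vt = np.linalg.svd(A)
    t = vt[-1]
    return t / np.linalg.norm(t)
```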
Each of the modules in the above pose estimation device of the photographing apparatus may be implemented wholly or partly in software, in hardware, or in a combination of the two. The modules may be embedded in, or independent of, a processor in the computer device in hardware form, or stored in software form in a memory of the computer device, so that the processor can call and execute the operations corresponding to each module.
In one embodiment, a computer device is provided, which may be a server whose internal structure may be as shown in fig. 8. The computer device includes a processor, a memory, an input/output interface (I/O), and a communication interface. The processor, the memory, and the input/output interface are connected through a system bus, and the communication interface is connected to the system bus through the input/output interface. The processor of the computer device provides computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for running the operating system and the computer program in the non-volatile storage medium. The database of the computer device is used to store pose estimation data of the photographing device. The input/output interface of the computer device is used to exchange information between the processor and external devices. The communication interface of the computer device is used to communicate with an external terminal through a network connection. The computer program, when executed by the processor, implements a pose estimation method of a photographing device.
In one embodiment, a computer device is provided, which may be a terminal whose internal structure may be as shown in fig. 9. The computer device includes a processor, a memory, an input/output interface, a communication interface, a display unit, and an input device. The processor, the memory, and the input/output interface are connected through a system bus, and the communication interface, the display unit, and the input device are connected to the system bus through the input/output interface. The processor of the computer device provides computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for running the operating system and the computer program in the non-volatile storage medium. The input/output interface of the computer device is used to exchange information between the processor and external devices. The communication interface of the computer device is used for wired or wireless communication with an external terminal; the wireless mode may be realized through Wi-Fi, a mobile cellular network, NFC (near field communication), or other technologies. The computer program, when executed by the processor, implements a pose estimation method of a photographing device. The display unit of the computer device is used to form a visual picture and may be a display screen, a projection device, or a virtual reality imaging device; the display screen may be a liquid crystal display screen or an electronic ink display screen. The input device of the computer device may be a touch layer covering the display screen, a key, a trackball, or a touch pad arranged on the housing of the computer device, or an external keyboard, touch pad, mouse, or the like.
It will be appreciated by persons skilled in the art that the structures shown in fig. 8 and fig. 9 are merely block diagrams of the portions of the structures related to the solution of the present application and do not limit the computer devices to which the solution may be applied; a particular computer device may include more or fewer components than shown, combine some of the components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having stored therein a computer program, the processor implementing the steps of the method embodiments described above when the computer program is executed.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored which, when executed by a processor, implements the steps of the method embodiments described above.
In an embodiment, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the steps of the method embodiments described above.
It should be noted that, the user information (including but not limited to user equipment information, user personal information, etc.) and the data (including but not limited to data for analysis, stored data, presented data, etc.) related to the present application are information and data authorized by the user or sufficiently authorized by each party, and the collection, use and processing of the related data need to comply with the related laws and regulations and standards of the related country and region.
Those skilled in the art will appreciate that implementing all or part of the methods described above may be accomplished by a computer program stored on a non-transitory computer-readable storage medium which, when executed, may include the steps of the embodiments of the methods described above. Any reference to memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. The non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, resistive random access memory (ReRAM), magnetoresistive random access memory (MRAM), ferroelectric random access memory (FRAM), phase change memory (PCM), graphene memory, and the like. Volatile memory may include random access memory (RAM), external cache memory, and the like. By way of illustration and not limitation, RAM can take various forms such as static random access memory (SRAM) or dynamic random access memory (DRAM). The databases referred to in the embodiments provided herein may include at least one of a relational database and a non-relational database. Non-relational databases may include, but are not limited to, blockchain-based distributed databases and the like. The processors referred to in the embodiments provided herein may be general-purpose processors, central processing units, graphics processors, digital signal processors, programmable logic units, data processing logic units based on quantum computing, or the like, but are not limited thereto.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; however, as long as a combination of technical features contains no contradiction, it should be considered to fall within the scope of this specification.
The foregoing examples represent only a few embodiments of the application and are described in some detail, but they should not be construed as limiting the scope of the application. It should be noted that several variations and modifications can be made by those skilled in the art without departing from the concept of the application, and these all fall within the protection scope of the application. Accordingly, the scope of protection of the application should be determined by the appended claims.

Claims (10)

1. A pose estimation method of a photographing apparatus, the method comprising:
Acquiring at least three sets of matching point pairs, each formed by a feature point on a first image and a feature point on a second image captured by the photographing apparatus, and a rotation matrix of the photographing apparatus corrected in a preset direction; the rotation matrix comprises a rotation angle to be determined; the rotation angle represents the amount by which the photographing apparatus rotates about the preset direction as an axis;
Generating a constraint equation according to the rotation matrix, the translation vector to be determined and the epipolar constraint condition;
Obtaining a constraint equation set based on coordinate values of all the matching point pairs in the first coordinate system and the second coordinate system and the constraint equation;
Solving the constraint equation set, and determining the value of the rotation angle and the value of each element in the translation vector to obtain the pose of the photographing apparatus; the pose includes a translation vector and a rotation angle of the photographing apparatus moving from a position where the first image is photographed to a position where the second image is photographed.
2. The method of claim 1, wherein the generating a constraint equation from the rotation matrix, the translation vector to be determined, and the epipolar constraint condition comprises:
splitting the rotation matrix into an identity matrix and a rotation vector constructed from the rotation angle;
and generating the constraint equation from the identity matrix, the rotation vector, the translation vector to be determined, and the epipolar constraint condition.
3. The method of claim 1, wherein the matching point pair comprises a first feature point and a second feature point; the coordinate values comprise a first coordinate value and a second coordinate value; the obtaining a constraint equation set based on the coordinate values of each matching point pair in the first coordinate system and the second coordinate system and the constraint equation includes:
Acquiring a first image obtained by the photographing apparatus photographing a target object at a first position and a second image obtained by the photographing apparatus photographing the target object at a second position;
Determining a first coordinate value of the first feature point in a first coordinate system based on the first image, and determining a second coordinate value of the second feature point in a second coordinate system based on the second image;
and obtaining a constraint equation set based on the first coordinate value, the second coordinate value and the constraint equation.
4. The method of claim 1, wherein the matching point pair comprises a first feature point and a second feature point; the coordinate values comprise a first coordinate value of the first feature point in the first coordinate system and a second coordinate value of the second feature point in the second coordinate system; the obtaining a constraint equation set based on the coordinate values of all the matching point pairs in the first coordinate system and the second coordinate system and the constraint equation includes:
Forming a matching coordinate pair from the first coordinate value of the first feature point and the second coordinate value of the second feature point in each set of matching point pairs;
substituting each matching coordinate pair into the constraint equation respectively;
and forming a constraint equation set from the constraint equations into which the matching coordinate pairs have been substituted.
5. The method of claim 1, wherein solving the set of constraint equations, determining the value of the rotation angle and the value of each element in the translation vector comprises:
acquiring a parameter matrix of the constraint equation set;
calculating the determinant of the parameter matrix;
solving for the value of the rotation angle from the determinant;
and calculating the value of each element in the translation vector from the value of the rotation angle.
6. The method of claim 5, wherein the calculating the value of each element in the translation vector from the value of the rotation angle comprises:
substituting the value of the rotation angle into the constraint equation set;
and solving the constraint equation set into which the value of the rotation angle has been substituted to obtain the value of each element in the translation vector.
7. A pose estimation device of a photographing apparatus, the device comprising:
an acquisition module, configured to acquire at least three sets of matching point pairs, each formed by a feature point on a first image and a feature point on a second image captured by the photographing apparatus, and a rotation matrix of the photographing apparatus corrected in a preset direction; the rotation matrix comprises a rotation angle to be determined; the rotation angle represents the amount by which the photographing apparatus rotates about the preset direction as an axis;
a generating module, configured to generate a constraint equation from the rotation matrix, a translation vector to be determined, and the epipolar constraint condition;
an obtaining module, configured to obtain a constraint equation set based on the constraint equation and the coordinate values of all matching point pairs in the first coordinate system and the second coordinate system;
a determining module, configured to solve the constraint equation set and determine the value of the rotation angle and the value of each element in the translation vector, thereby obtaining the pose of the photographing apparatus; the pose includes a translation vector and a rotation angle of the photographing apparatus moving from a position where the first image is photographed to a position where the second image is photographed.
8. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any of claims 1 to 6 when the computer program is executed.
9. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 6.
10. A computer program product comprising a computer program, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 6.
CN202211355744.6A 2022-11-01 2022-11-01 Pose estimation method and device of shooting equipment, computer equipment and storage medium Pending CN117994334A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211355744.6A CN117994334A (en) 2022-11-01 2022-11-01 Pose estimation method and device of shooting equipment, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211355744.6A CN117994334A (en) 2022-11-01 2022-11-01 Pose estimation method and device of shooting equipment, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN117994334A true CN117994334A (en) 2024-05-07

Family

ID=90892216

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211355744.6A Pending CN117994334A (en) 2022-11-01 2022-11-01 Pose estimation method and device of shooting equipment, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117994334A (en)

Similar Documents

Publication Publication Date Title
CN108364319B (en) Dimension determination method and device, storage medium and equipment
WO2018119889A1 (en) Three-dimensional scene positioning method and device
CN109683699B (en) Method and device for realizing augmented reality based on deep learning and mobile terminal
CN111754579B (en) Method and device for determining external parameters of multi-view camera
CN111127524A (en) Method, system and device for tracking trajectory and reconstructing three-dimensional image
CN112465877B (en) Kalman filtering visual tracking stabilization method based on motion state estimation
CN111161398B (en) Image generation method, device, equipment and storage medium
US20230237683A1 (en) Model generation method and apparatus based on multi-view panoramic image
CN113643414B (en) Three-dimensional image generation method and device, electronic equipment and storage medium
CN113256718A (en) Positioning method and device, equipment and storage medium
CN111415420A (en) Spatial information determination method and device and electronic equipment
CN113361365A (en) Positioning method and device, equipment and storage medium
US8509522B2 (en) Camera translation using rotation from device
CN113436267B (en) Visual inertial navigation calibration method, device, computer equipment and storage medium
KR102372298B1 (en) Method for acquiring distance to at least one object located in omni-direction of vehicle and vision device using the same
CN114882106A (en) Pose determination method and device, equipment and medium
CN113034582A (en) Pose optimization device and method, electronic device and computer readable storage medium
CN116109799B (en) Method, device, computer equipment and storage medium for training adjustment model
CN115294280A (en) Three-dimensional reconstruction method, apparatus, device, storage medium, and program product
CN113570659B (en) Shooting device pose estimation method, device, computer equipment and storage medium
CN117994334A (en) Pose estimation method and device of shooting equipment, computer equipment and storage medium
CN113048985B (en) Camera relative motion estimation method under known relative rotation angle condition
WO2018100230A1 (en) Method and apparatuses for determining positions of multi-directional image capture apparatuses
CN113159197A (en) Pure rotation motion state judgment method and device
CN112615993A (en) Depth information acquisition method, binocular camera module, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination