CN116645392A - Space target relative pose iterative estimation method and system based on key point weight

Space target relative pose iterative estimation method and system based on key point weight

Info

Publication number: CN116645392A
Application number: CN202310478444.5A
Authority: CN (China)
Other languages: Chinese (zh)
Prior art keywords: key point, points, coordinates
Legal status: Pending
Priority/filing date: 2023-04-28
Publication date: 2023-08-25
Inventors: 张泽旭, 苏宇, 袁帅, 袁萌萌, 王艺诗
Assignee (original and current): Harbin Institute of Technology
Application filed by Harbin Institute of Technology

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/20: Analysis of motion
    • G06T 7/70: Determining position or orientation of objects or cameras
    • G06T 7/73: Determining position or orientation of objects or cameras using feature-based methods


Abstract

The application discloses a space target relative pose iterative estimation method and system based on key point weights, relating to the technical field of computer vision navigation and aiming to solve the low accuracy and poor stability of existing methods for estimating the relative pose of a space target. The key technical points of the application include: acquiring an image sequence containing a space target; extracting key point pixel coordinates from the image sequence; calculating an initial centroid position area of the space target from the key point pixel coordinates; calculating a weight value for each key point from the initial centroid position and the key point pixel coordinates; and obtaining a relative pose tracking result of the space target by weighted iterative calculation from the key point weight values and the correspondence between the key points and the three-dimensional key points. The method and system make better use of temporal information, improving both the stability and the accuracy of relative pose estimation.

Description

Space target relative pose iterative estimation method and system based on key point weight
Technical Field
The application relates to the technical field of computer vision navigation, in particular to a space target relative pose iterative estimation method and system based on key point weights.
Background
With improvements in camera hardware, computer vision has developed rapidly as an important part of robotics, and relative pose navigation based on computer vision has become a common application. Because the motion of a space target is fairly regular but is accompanied by severe illumination changes and large depth variations, common ground-based vision methods cannot be transplanted directly. 6D pose tracking of on-orbit non-cooperative spacecraft has typically combined multiple visual sensors, and the long observation distances and poor illumination conditions demand high-performance sensors. Among these, the low power consumption and light weight of optical cameras have drawn researchers' attention. A monocular camera can acquire a large number of images through frequent shooting, and the shooting angles essentially ensure comprehensive coverage of all aspects of the target.
Because the motion of a space target always follows certain rules, using context information can benefit pose estimation; poor illumination conditions also make some targets in the image sequence difficult to identify and extract features from. Some researchers therefore introduce temporal optimization methods to cope with this, commonly including Kalman filtering, global bundle adjustment (BA), and loop-closure detection. Although such methods improve the stability of pose tracking, most of them correct the pose directly; when the target motion is complex, for example involving precession and nutation or excessively high angular velocity, these optimization methods struggle to infer the motion law from a small amount of data, so the optimization effect is weak.
Disclosure of Invention
Therefore, the application provides a space target relative pose iterative estimation method and system based on key point weights, to solve the low accuracy and poor stability of existing methods for estimating the relative pose of a space target.
According to an aspect of the present application, there is provided a spatial target relative pose iterative estimation method based on key point weights, the method comprising the steps of:
step one, acquiring an image sequence containing a space target;
step two, extracting key point pixel coordinates in an image sequence;
step three, calculating an initial centroid position area of the space target according to the pixel coordinates of the key points;
step four, calculating and obtaining a weight value of each key point according to the initial centroid position and the key point pixel coordinates;
and fifthly, obtaining a relative pose tracking result of the space target through weighted iterative calculation according to the weight value of the key point and the corresponding relation between the key point and the three-dimensional key point.
Further, the specific process of the second step comprises: extracting the key points of the target, including the corner points of the target's circumscribed bounding box, according to the pixel positions and gray level changes of the image; and calculating the pixel coordinate values of the key points.
Further, the specific process of the third step comprises: dividing the key points into regions to obtain a plurality of corner regions of the target; determining region center coordinates from the corner regions; and combining the center coordinates of the different regions and the points they contain to regress the centers of the key point regions, thereby estimating the initial centroid position area of the target.
Further, the specific process of the fourth step comprises: calculating the pixel distance li between each key point and the center point of the region it belongs to; calculating the pixel distance dis between each key point and the initial centroid position; rejecting points whose li/dis exceeds a preset threshold and defining such points as outliers; after the outliers are initially screened out, the weight value of each remaining key point is estimated with the weight formula, a function of li, dis, and the target size, where diameter is the diameter of the circumscribed sphere of the target's cuboid bounding box.
Further, the specific process of the fifth step comprises: let the three-dimensional coordinates of a space point be $P_i = [X_i, Y_i, Z_i]^T$ and its projected pixel coordinates be $u_i = [u_i, v_i]^T$. The coordinates of the 3D points in the camera coordinate systems of adjacent frames satisfy:

$$s_i u_i = K T P_i$$

where $K$ is the camera intrinsic matrix; $T$ is the camera pose expressed in Lie group form; $[u_i', v_i']$ are the estimated pixel coordinates corresponding to $[u_i, v_i]$; and $s_i$ is the scale factor.

The 3D point coordinates are re-projected onto the two-dimensional image plane; the difference between the projected pixel coordinates and the estimated pixel coordinates is the re-projection error. The re-projection error is taken as the error function to be optimized, and the errors are summed to construct a least squares problem:

$$T^* = \arg\min_T \frac{1}{2} \sum_{i=1}^{n} \left\| u_i - \frac{1}{s_i} K T P_i \right\|_2^2$$

where $T^*$ is the value of $T$ that minimizes the above expression, and $n$ is the number of point correspondences.

When solving for the error, $T$ is multiplied by a disturbance $\delta\xi$, and the derivative of the re-projection error $e$ with respect to the disturbance is considered. Differentiating each error term with respect to the optimization variable yields the Jacobian matrix, which describes the first-order variation of the re-projection error with respect to the camera pose Lie algebra:

$$\frac{\partial e}{\partial \delta\xi} = -\begin{bmatrix} \frac{f_x}{Z'} & 0 & -\frac{f_x X'}{Z'^2} & -\frac{f_x X' Y'}{Z'^2} & f_x + \frac{f_x X'^2}{Z'^2} & -\frac{f_x Y'}{Z'} \\ 0 & \frac{f_y}{Z'} & -\frac{f_y Y'}{Z'^2} & -f_y - \frac{f_y Y'^2}{Z'^2} & \frac{f_y X' Y'}{Z'^2} & \frac{f_y X'}{Z'} \end{bmatrix}$$

where $e$ is the re-projection error function; $f_x, f_y$ are derived from the camera intrinsics; and $[X', Y', Z']$ are the coordinates of the three-dimensional point in the camera frame.

The optimal camera pose is obtained by stepwise Gauss-Newton iteration.
According to another aspect of the present application, there is provided a system for iterative estimation of relative pose of a spatial target based on keypoint weights, the system comprising:
an image acquisition module configured to acquire a sequence of images containing a spatial target;
a keypoint extraction module configured to extract keypoint pixel coordinates in the image sequence;
a centroid position calculation module configured to calculate an initial centroid position area of the spatial target from the keypoint pixel coordinates;
the key point weight calculation module is configured to calculate and acquire a weight value of each key point according to the initial centroid position and the key point pixel coordinates;
and the pose calculation module is configured to obtain a relative pose tracking result of the space target by weighted iterative calculation according to the weight value of the key point and the corresponding relation between the key point and the three-dimensional key point.
Further, the specific process of extracting the key point pixel coordinates in the image sequence in the key point extraction module includes: extracting the key points of the target, including the corner points of the target's circumscribed bounding box, according to the pixel positions and gray level changes of the image; and calculating the pixel coordinate values of the key points.
Further, the specific process of calculating the initial centroid position area of the space target in the centroid position calculation module includes: dividing the key points into regions to obtain a plurality of corner regions of the target; determining region center coordinates from the corner regions; and combining the center coordinates of the different regions and the points they contain to regress the centers of the key point regions, thereby estimating the initial centroid position area of the target.
Further, the specific process of calculating and obtaining the weight value of each key point in the key point weight calculation module includes: calculating the pixel distance li between each key point and the center point of the region it belongs to; calculating the pixel distance dis between each key point and the initial centroid position; rejecting points whose li/dis exceeds a preset threshold and defining such points as outliers; after the outliers are initially screened out, the weight value of each remaining key point is estimated with the weight formula, a function of li, dis, and the target size, where diameter is the diameter of the circumscribed sphere of the target's cuboid bounding box.
Further, the specific process of obtaining the relative pose tracking result of the space target by weighted iterative calculation in the pose calculation module comprises: let the three-dimensional coordinates of a space point be $P_i = [X_i, Y_i, Z_i]^T$ and its projected pixel coordinates be $u_i = [u_i, v_i]^T$. The coordinates of the 3D points in the camera coordinate systems of adjacent frames satisfy:

$$s_i u_i = K T P_i$$

where $K$ is the camera intrinsic matrix; $T$ is the camera pose expressed in Lie group form; $[u_i', v_i']$ are the estimated pixel coordinates corresponding to $[u_i, v_i]$; and $s_i$ is the scale factor.

The 3D point coordinates are re-projected onto the two-dimensional image plane; the difference between the projected pixel coordinates and the estimated pixel coordinates is the re-projection error. The re-projection error is taken as the error function to be optimized, and the errors are summed to construct a least squares problem:

$$T^* = \arg\min_T \frac{1}{2} \sum_{i=1}^{n} \left\| u_i - \frac{1}{s_i} K T P_i \right\|_2^2$$

where $T^*$ is the value of $T$ that minimizes the above expression, and $n$ is the number of point correspondences.

When solving for the error, $T$ is multiplied by a disturbance $\delta\xi$, and the derivative of the re-projection error $e$ with respect to the disturbance is considered. Differentiating each error term with respect to the optimization variable yields the Jacobian matrix, which describes the first-order variation of the re-projection error with respect to the camera pose Lie algebra:

$$\frac{\partial e}{\partial \delta\xi} = -\begin{bmatrix} \frac{f_x}{Z'} & 0 & -\frac{f_x X'}{Z'^2} & -\frac{f_x X' Y'}{Z'^2} & f_x + \frac{f_x X'^2}{Z'^2} & -\frac{f_x Y'}{Z'} \\ 0 & \frac{f_y}{Z'} & -\frac{f_y Y'}{Z'^2} & -f_y - \frac{f_y Y'^2}{Z'^2} & \frac{f_y X' Y'}{Z'^2} & \frac{f_y X'}{Z'} \end{bmatrix}$$

where $e$ is the re-projection error function; $f_x, f_y$ are derived from the camera intrinsics; and $[X', Y', Z']$ are the coordinates of the three-dimensional point in the camera frame.

The optimal camera pose is obtained by stepwise Gauss-Newton iteration.
The beneficial technical effects of the application are as follows:
the method is suitable for estimating the relative pose of the sequence image aiming at the space target. According to the method, the initial centroid position area of the target is calculated by acquiring the key point pixel coordinates extracted from the sequence images; and estimating the weight value of each key point by using a neural network according to the initial centroid position and the key point pixel coordinates, and obtaining the relative pose tracking result of the target by weighting iteration according to the weight value of the key point and the corresponding relation between the weight value and the three-dimensional key point. The method and the device can better utilize time domain information, improve the stability of pose estimation and improve the precision of relative pose estimation.
Drawings
The application may be better understood by reference to the following description taken in conjunction with the accompanying drawings, which are included to provide a further illustration of the preferred embodiments of the application and to explain the principles and advantages of the application, together with the detailed description below.
Fig. 1 is a flowchart of a method for iterative estimation of relative pose of a spatial target based on key point weights according to an embodiment of the present application.
Fig. 2 is a schematic diagram of a method for extracting key points of images with different scales according to an embodiment of the present application.
FIG. 3 is a schematic diagram of estimating a center position of a key point according to an embodiment of the present application.
Fig. 4 is a schematic diagram of a weight estimation network according to an embodiment of the present application.
FIG. 5 is a schematic diagram of a weight supervision value design of a weight estimation network loss function according to an embodiment of the present application.
Fig. 6 is an exemplary diagram of a key point extraction result according to an embodiment of the present application.
Fig. 7 is an exemplary diagram of a final image pose estimation result of a sequence according to an embodiment of the present application.
Fig. 8 is another exemplary diagram of a final image pose estimation result of a sequence according to an embodiment of the present application.
Detailed Description
In order that those skilled in the art will better understand the present application, exemplary embodiments or examples of the application are described below with reference to the accompanying drawings. Clearly, the described embodiments or examples are only some, not all, of the implementations or examples of the application. All other embodiments or examples obtained by a person of ordinary skill in the art based on the embodiments or examples herein without inventive effort shall fall within the protection scope of the application.
The application provides a space target relative pose iterative estimation method and system based on key point weights, which simplify the analysis of the target motion law: only the pixel position changes of the key points are analyzed to determine a weight change rule, and the relative pose is tracked with a weighted iterative pose solving algorithm. For a given set of sequence images, the goal is to estimate the 6D relative pose. Assuming the target is a rigid body, a pyramid model first extracts the target's key corner points; the centroid region and the weight of each feature point are then determined by analyzing the geometric structure and the motion law of the target; finally, taking the target's 2D key corner points and their weights as input, a weighted iterative PnP algorithm estimates the relative pose of the target over the time sequence.
In traditional methods, correspondences between homonymous points can be established through matching relations between similar parts of different images, and these relations are used to model the target. The points used are generally called key points or feature points. The embodiment of the application provides a space target relative pose iterative estimation method based on key point weights, comprising the following steps:
step one, acquiring an image sequence containing a space target;
step two, extracting key point pixel coordinates in an image sequence;
step three, calculating and obtaining an initial centroid position area of the space target according to the pixel coordinates of the key points;
step four, calculating and obtaining a weight value of each key point according to the initial centroid position and the key point pixel coordinates;
and fifthly, obtaining a relative pose tracking result of the space target through weighted iterative calculation according to the weight value of the key point and the corresponding relation between the key point and the three-dimensional key point.
In the second step, the key point coordinates in the sequence images are extracted, as shown in fig. 2. The specific process includes: extracting the key points of the target, in particular the 8 corner points of the target's circumscribed bounding box, according to the pixel positions and gray level changes of the image, and calculating the pixel coordinate values of the key points.
In an embodiment of the application, the purpose is to obtain the key points of the target. Because the scale change of the target is significant in the space environment, with observation distances usually from tens to hundreds of meters, the current key point estimation framework FPN (feature pyramid network) is adjusted: Darknet-53 is added as the backbone, and the key point estimation network consists of five levels of feature maps with increasingly large receptive fields. Because multiple targets need not be considered, the scoring branches for different target categories can be omitted; the 2D key point coordinates of the target are finally integrated, targets of different scales undergo threshold division and feature extraction, and the 2D projection positions corresponding to the target's 3D feature points are output for subsequent weight evaluation. As an example, fig. 6 shows a key point extraction result.
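As an illustration only, the following sketch shows how 2D keypoint pixel coordinates could be decoded from the multi-scale heatmaps of such an FPN-style network. The five-level heatmap input matches the description above, but the heatmap representation, the peak decoding, and the per-level selection are our assumptions; the patent does not publish this step as code.

```python
# Hypothetical sketch: decoding 2D keypoint pixel coordinates from the
# multi-scale heatmaps of an FPN-style keypoint network (assumed output form).
import torch

def decode_keypoints(heatmaps, image_size):
    """heatmaps: list of five [K, H_l, W_l] tensors, one per pyramid level
    (K = 8 bounding-box corners); image_size: (H, W) of the input image.
    Returns a [K, 2] tensor of (x, y) pixel coordinates."""
    H, W = image_size
    coords, scores = [], []
    for hm in heatmaps:
        K, Hl, Wl = hm.shape
        flat = hm.reshape(K, -1)
        score, idx = flat.max(dim=1)            # strongest response per keypoint
        ys = (idx // Wl).float() * (H / Hl)     # rescale to full-image pixels
        xs = (idx % Wl).float() * (W / Wl)
        coords.append(torch.stack([xs, ys], dim=1))
        scores.append(score)
    scores = torch.stack(scores)                # [levels, K]
    coords = torch.stack(coords)                # [levels, K, 2]
    best = scores.argmax(dim=0)                 # pick the best level per keypoint
    return coords[best, torch.arange(coords.shape[1])]
```

Choosing the strongest pyramid level per keypoint is one plausible way to realize the "threshold division by scale" described above; the actual selection rule in the patent may differ.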
In the third step, the initial centroid position area of the target is calculated from the key point pixel coordinates, as shown in fig. 3. The specific process includes: first dividing the key points into regions; since some corner points of the target are occluded, at least 5 main corner regions of the target need to be divided. Region center coordinates are determined from the corner regions, and the centers of the 8 key point regions are regressed by combining the center coordinates of the different regions and the points they contain, thereby estimating the initial centroid position area of the target. Regression here means estimating the distribution of the points within a region from their positions and thereby estimating the region center.
In an embodiment of the application, since the key point estimation result generally deviates somewhat from the actual position, an approximate centroid position is estimated from the key points for subsequent weight determination. In this process each key point is taken in turn as a sphere center for neighborhood determination and the radius is gradually increased; when 75% of the key points fall within the neighborhood, the radius is recorded. After all radii are collected (their number equals the number of key points), the 5 top-ranked points are determined from the radius distribution, their mean is computed, and the pixel coordinates of the center point are thereby determined.
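A minimal sketch of this neighborhood-growing centroid estimate follows, assuming that "top-ranked" means the five keypoints with the smallest 75%-coverage radius (the most central ones); that reading, and the function name, are ours.

```python
import numpy as np

def estimate_centroid(kpts, coverage=0.75, top=5):
    """kpts: [N, 2] keypoint pixel coordinates. Returns the [2] pixel
    coordinates of the estimated centroid-area center."""
    n = len(kpts)
    radii = np.empty(n)
    for i, center in enumerate(kpts):
        d = np.sort(np.linalg.norm(kpts - center, axis=1))
        # smallest radius around this keypoint covering 75% of all keypoints
        radii[i] = d[int(np.ceil(coverage * n)) - 1]
    most_central = np.argsort(radii)[:top]   # smallest radii = closest to center
    return kpts[most_central].mean(axis=0)   # mean of the selected points
```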
In the fourth step, the weight value of each key point is estimated from the initial centroid position and the key point pixel coordinates. The specific process includes: calculating the pixel distance li between each key point and the center point of its region, calculating the pixel distance dis between each key point and the initial centroid position, and rejecting points whose li/dis exceeds a preset threshold, defining them as outliers. The noise screening threshold is predefined according to the actual task, generally in the range 0.3 to 0.5; the larger the threshold, the more key points participate in the estimation. The weight value of each key point is then obtained by further screening and calculation with the weight formula,
where diameter is the diameter of the circumscribed sphere of the target's cuboid bounding box.
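The sketch below implements the outlier screening just described. The weight formula itself appears only as an image in the published text, so the distance-based weight w = 1 - li/diameter used here is purely a stand-in assumption; li, dis, and diameter follow the definitions above.

```python
import numpy as np

def keypoint_weights(li, dis, diameter, thresh=0.4):
    """li: [N] distance of each keypoint to its region center (pixels);
    dis: [N] distance of each keypoint to the initial centroid (pixels);
    diameter: circumscribed-sphere diameter of the target bounding box.
    Returns [N] weights with outliers zeroed."""
    ratio = li / np.maximum(dis, 1e-9)          # guard against dis == 0
    inliers = ratio <= thresh                   # threshold typically 0.3-0.5
    w = np.clip(1.0 - li / diameter, 0.0, 1.0)  # assumed stand-in weight formula
    return np.where(inliers, w, 0.0)            # outliers get weight 0
```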
In an embodiment of the application, as shown in fig. 4, the key point sequence is first fed into an LSTM network structure, replacing the original one-dimensional flattening; the network uses a convolution layer to expand the dimensionality without affecting the data type or its meaning, and an MLP composed of 3 fully connected layers finally outputs one score per key point. Supervised learning is performed against the weight calculation formula designed in this application, so the score represents the weight of the key point. As shown in fig. 5, key points defined as outliers have their weights set to 0 and are not substituted into subsequent calculations, while all other key points and their weights are used in the iterative estimation algorithm for the relative pose.
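A hedged sketch of the fig. 4 structure follows: an LSTM over the keypoint coordinate sequence feeding a 3-layer fully connected head that scores each keypoint. The hidden width and the loss choice are assumptions; the patent does not publish layer sizes.

```python
import torch
import torch.nn as nn

class WeightNet(nn.Module):
    """LSTM over the keypoint sequence + 3 fully connected layers giving one
    score per keypoint, trained against the weight formula as supervision."""
    def __init__(self, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=2, hidden_size=hidden, batch_first=True)
        self.mlp = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, kpts):
        """kpts: [B, K, 2] keypoint pixel coordinates; returns [B, K] scores."""
        feats, _ = self.lstm(kpts)            # one feature vector per keypoint
        return self.mlp(feats).squeeze(-1)    # per-keypoint weight score
```

In training, the scores would be regressed toward the fig. 5 supervision weights (outliers supervised to 0), for example with an MSE loss; that loss choice is also an assumption.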
In the fifth step, according to the key point weight values and the correspondence between the key points and the three-dimensional key points, weighted iteration obtains the relative pose tracking result of the target. The specific process is as follows:
since the 2D keypoint pixel coordinates and their weights are pre-acquired, a weighted iterative method is used to maximize the utilization of these data. Let the three-dimensional coordinates of the space points be P i [X i ,Y i ,Z i ] T The spatial point has a coordinate u in the camera coordinate system i [u i ,v i ] T The method comprises the steps of carrying out a first treatment on the surface of the The internal reference of the camera is K. The pose of the camera in the current frame is R, T, and the corresponding lie group is represented as T. At this time, the coordinates of the 3D points in the camera coordinate system of the adjacent frame have the following relationship:
s i u i =KTP i
in the formula, [ u ] i ′,v i ′]Representing estimated [ u ] i ,v i ]Is a pixel point location coordinate of (a); s is(s) i Representing the scale factor.
Re-projecting the 3D point coordinates onto a two-dimensional plane, wherein the difference between the projected point pixel coordinates and the estimated point pixel coordinates is a re-projection error; the reprojection error is used as an optimized error function and the errors are summed to construct a least squares problem:
wherein T is * A value of T which satisfies the above formula and is the smallest; n represents the number of images;
when solving an error, multiplying T by disturbance delta zeta and then considering the derivative of a reprojection error e on the disturbance, and describing the first-order change of the reprojection error relative to the pose lie algebra of a camera by solving the derivative of each error item on an optimized variable to obtain a jacobian matrix, wherein the first-order change is as follows:
wherein e represents a reprojection error function; f (f) x ,f y Derived from camera internal parameters [ X ', Y ', Z ] ']Representing three-dimensional point coordinates;
and (5) gradually iterating and calculating to obtain the optimal camera pose through Newton-Gaussian iteration. Because of the weight acquisition, the position points with larger errors in the key point estimation process can be removed, the re-projection error loss function is optimized in a weighted iteration mode, and finally the relative pose of a time sequence is acquired.
Another embodiment of the present application provides a system for iterative estimation of relative pose of a spatial target based on weights of key points, the system comprising:
an image acquisition module configured to acquire a sequence of images containing a spatial target;
a keypoint extraction module configured to extract keypoint pixel coordinates in the image sequence;
a centroid position calculation module configured to calculate an initial centroid position area of the spatial target from the keypoint pixel coordinates;
the key point weight calculation module is configured to calculate and acquire a weight value of each key point according to the initial centroid position and the key point pixel coordinates;
and the pose calculation module is configured to obtain a relative pose tracking result of the space target by weighted iterative calculation according to the weight value of the key point and the corresponding relation between the key point and the three-dimensional key point.
In this embodiment, preferably, the specific process of extracting the key point pixel coordinates in the image sequence in the key point extraction module includes: extracting the key points of the target, including the corner points of the target's circumscribed bounding box, according to the pixel positions and gray level changes of the image; and calculating the pixel coordinate values of the key points.
In this embodiment, preferably, the specific process of calculating the initial centroid position area of the space target in the centroid position calculation module includes: dividing the key points into regions to obtain a plurality of corner regions of the target; determining region center coordinates from the corner regions; and combining the center coordinates of the different regions and the points they contain to regress the centers of the key point regions, thereby estimating the initial centroid position area of the target.
In this embodiment, preferably, the specific process of calculating and obtaining the weight value of each key point in the key point weight calculation module includes: calculating the pixel distance li between each key point and the center point of the region it belongs to; calculating the pixel distance dis between each key point and the initial centroid position; rejecting points whose li/dis exceeds a preset threshold and defining such points as outliers; after the outliers are initially screened out, the weight value of each remaining key point is estimated with the weight formula, a function of li, dis, and the target size, where diameter is the diameter of the circumscribed sphere of the target's cuboid bounding box.
In this embodiment, preferably, the specific process of obtaining the relative pose tracking result of the space target by weighted iterative calculation in the pose calculation module includes: let the three-dimensional coordinates of a space point be $P_i = [X_i, Y_i, Z_i]^T$ and its projected pixel coordinates be $u_i = [u_i, v_i]^T$. The coordinates of the 3D points in the camera coordinate systems of adjacent frames satisfy:

$$s_i u_i = K T P_i$$

where $K$ is the camera intrinsic matrix; $T$ is the camera pose expressed in Lie group form; $[u_i', v_i']$ are the estimated pixel coordinates corresponding to $[u_i, v_i]$; and $s_i$ is the scale factor.

The 3D point coordinates are re-projected onto the two-dimensional image plane; the difference between the projected pixel coordinates and the estimated pixel coordinates is the re-projection error. The re-projection error is taken as the error function to be optimized, and the errors are summed to construct a least squares problem:

$$T^* = \arg\min_T \frac{1}{2} \sum_{i=1}^{n} \left\| u_i - \frac{1}{s_i} K T P_i \right\|_2^2$$

where $T^*$ is the value of $T$ that minimizes the above expression, and $n$ is the number of point correspondences.

When solving for the error, $T$ is multiplied by a disturbance $\delta\xi$, and the derivative of the re-projection error $e$ with respect to the disturbance is considered. Differentiating each error term with respect to the optimization variable yields the Jacobian matrix, which describes the first-order variation of the re-projection error with respect to the camera pose Lie algebra:

$$\frac{\partial e}{\partial \delta\xi} = -\begin{bmatrix} \frac{f_x}{Z'} & 0 & -\frac{f_x X'}{Z'^2} & -\frac{f_x X' Y'}{Z'^2} & f_x + \frac{f_x X'^2}{Z'^2} & -\frac{f_x Y'}{Z'} \\ 0 & \frac{f_y}{Z'} & -\frac{f_y Y'}{Z'^2} & -f_y - \frac{f_y Y'^2}{Z'^2} & \frac{f_y X' Y'}{Z'^2} & \frac{f_y X'}{Z'} \end{bmatrix}$$

where $e$ is the re-projection error function; $f_x, f_y$ are derived from the camera intrinsics; and $[X', Y', Z']$ are the coordinates of the three-dimensional point in the camera frame. The optimal camera pose is obtained by stepwise Gauss-Newton iteration.
The functions of the space target relative pose iterative estimation system based on key point weights of this embodiment can be described by the foregoing space target relative pose iterative estimation method based on key point weights; details not described in this embodiment can therefore be found in the above method embodiments and are not repeated here.
While the application has been described with respect to a limited number of embodiments, those skilled in the art, having benefit of the above description, will appreciate that other embodiments are contemplated within the scope of the application as described herein. The disclosure of the present application is intended to be illustrative, but not limiting, of the scope of the application, which is defined by the appended claims.

Claims (10)

1. A space target relative pose iterative estimation method based on key point weights, characterized by comprising the following steps:
step one, acquiring an image sequence containing a space target;
step two, extracting key point pixel coordinates in an image sequence;
step three, calculating an initial centroid position area of the space target according to the pixel coordinates of the key points;
step four, calculating and obtaining a weight value of each key point according to the initial centroid position and the key point pixel coordinates;
and fifthly, obtaining a relative pose tracking result of the space target through weighted iterative calculation according to the weight value of the key point and the corresponding relation between the key point and the three-dimensional key point.
2. The space target relative pose iterative estimation method based on key point weights according to claim 1, wherein the specific process of the second step comprises: extracting the key points of the target, including the corner points of the target's circumscribed bounding box, according to the pixel positions and gray level changes of the image; and calculating the pixel coordinate values of the key points.
3. The space target relative pose iterative estimation method based on key point weights according to claim 1, wherein the specific process of the third step comprises: dividing the key points into regions to obtain a plurality of corner regions of the target; determining region center coordinates from the corner regions; and combining the center coordinates of the different regions and the points they contain to regress the centers of the key point regions, thereby estimating the initial centroid position area of the target.
4. The space target relative pose iterative estimation method based on key point weights according to claim 1, wherein the specific process of the fourth step comprises: calculating the pixel distance li between each key point and the center point of the region it belongs to; calculating the pixel distance dis between each key point and the initial centroid position; rejecting points whose li/dis exceeds a preset threshold and defining such points as outliers; after the outliers are initially screened out, the weight value of each remaining key point is estimated with the weight formula, a function of li, dis, and the target size, where diameter is the diameter of the circumscribed sphere of the target's cuboid bounding box.
5. The space target relative pose iterative estimation method based on key point weights according to claim 4, wherein the specific process of the fifth step comprises: let the three-dimensional coordinates of a space point be $P_i = [X_i, Y_i, Z_i]^T$ and its projected pixel coordinates be $u_i = [u_i, v_i]^T$. The coordinates of the 3D points in the camera coordinate systems of adjacent frames satisfy:

$$s_i u_i = K T P_i$$

where $K$ is the camera intrinsic matrix; $T$ is the camera pose expressed in Lie group form; $[u_i', v_i']$ are the estimated pixel coordinates corresponding to $[u_i, v_i]$; and $s_i$ is the scale factor.

The 3D point coordinates are re-projected onto the two-dimensional image plane; the difference between the projected pixel coordinates and the estimated pixel coordinates is the re-projection error. The re-projection error $e$ is taken as the error function to be optimized, and the errors are summed to construct a least squares problem:

$$T^* = \arg\min_T \frac{1}{2} \sum_{i=1}^{n} \left\| u_i - \frac{1}{s_i} K T P_i \right\|_2^2$$

where $T^*$ is the value of $T$ that minimizes the above expression, and $n$ is the number of point correspondences.

When solving for the error, $T$ is multiplied by a disturbance $\delta\xi$, and the derivative of the re-projection error $e$ with respect to the disturbance is considered. Differentiating each error term with respect to the optimization variable yields the Jacobian matrix, which describes the first-order variation of the re-projection error with respect to the camera pose Lie algebra:

$$\frac{\partial e}{\partial \delta\xi} = -\begin{bmatrix} \frac{f_x}{Z'} & 0 & -\frac{f_x X'}{Z'^2} & -\frac{f_x X' Y'}{Z'^2} & f_x + \frac{f_x X'^2}{Z'^2} & -\frac{f_x Y'}{Z'} \\ 0 & \frac{f_y}{Z'} & -\frac{f_y Y'}{Z'^2} & -f_y - \frac{f_y Y'^2}{Z'^2} & \frac{f_y X' Y'}{Z'^2} & \frac{f_y X'}{Z'} \end{bmatrix}$$

where $e$ is the re-projection error function; $f_x, f_y$ are derived from the camera intrinsics; and $[X', Y', Z']$ are the coordinates of the three-dimensional point in the camera frame.

The optimal camera pose is obtained by stepwise Gauss-Newton iteration.
6. A space target relative pose iterative estimation system based on key point weights, characterized by comprising:
an image acquisition module configured to acquire a sequence of images containing a spatial target;
a keypoint extraction module configured to extract keypoint pixel coordinates in the image sequence;
a centroid position calculation module configured to calculate an initial centroid position area of the spatial target from the keypoint pixel coordinates;
the key point weight calculation module is configured to calculate and acquire a weight value of each key point according to the initial centroid position and the key point pixel coordinates;
and the pose calculation module is configured to obtain a relative pose tracking result of the space target by weighted iterative calculation according to the weight value of the key point and the corresponding relation between the key point and the three-dimensional key point.
7. The space target relative pose iterative estimation system based on key point weights according to claim 6, wherein the specific process of extracting the key point pixel coordinates in the image sequence in the key point extraction module comprises: extracting the key points of the target, including the corner points of the target's circumscribed bounding box, according to the pixel positions and gray level changes of the image; and calculating the pixel coordinate values of the key points.
8. The space target relative pose iterative estimation system based on key point weights according to claim 6, wherein the specific process of calculating the initial centroid position area of the space target in the centroid position calculation module comprises: dividing the key points into regions to obtain a plurality of corner regions of the target; determining region center coordinates from the corner regions; and combining the center coordinates of the different regions and the points they contain to regress the centers of the key point regions, thereby estimating the initial centroid position area of the target.
9. The space target relative pose iterative estimation system based on key point weights according to claim 6, wherein the specific process of calculating and obtaining the weight value of each key point in the key point weight calculation module comprises: calculating the pixel distance li between each key point and the center point of the region it belongs to; calculating the pixel distance dis between each key point and the initial centroid position; rejecting points whose li/dis exceeds a preset threshold and defining such points as outliers; after the outliers are initially screened out, the weight value of each remaining key point is estimated with the weight formula, a function of li, dis, and the target size, where diameter is the diameter of the circumscribed sphere of the target's cuboid bounding box.
10. The space target relative pose iterative estimation system based on key point weights according to claim 9, wherein the specific process of obtaining the relative pose tracking result of the space target by weighted iterative calculation in the pose calculation module comprises: let the three-dimensional coordinates of a space point be $P_i = [X_i, Y_i, Z_i]^T$ and its projected pixel coordinates be $u_i = [u_i, v_i]^T$. The coordinates of the 3D points in the camera coordinate systems of adjacent frames satisfy:

$$s_i u_i = K T P_i$$

where $K$ is the camera intrinsic matrix; $T$ is the camera pose expressed in Lie group form; $[u_i', v_i']$ are the estimated pixel coordinates corresponding to $[u_i, v_i]$; and $s_i$ is the scale factor.

The 3D point coordinates are re-projected onto the two-dimensional image plane; the difference between the projected pixel coordinates and the estimated pixel coordinates is the re-projection error. The re-projection error is taken as the error function to be optimized, and the errors are summed to construct a least squares problem:

$$T^* = \arg\min_T \frac{1}{2} \sum_{i=1}^{n} \left\| u_i - \frac{1}{s_i} K T P_i \right\|_2^2$$

where $T^*$ is the value of $T$ that minimizes the above expression, and $n$ is the number of point correspondences.

When solving for the error, $T$ is multiplied by a disturbance $\delta\xi$, and the derivative of the re-projection error $e$ with respect to the disturbance is considered. Differentiating each error term with respect to the optimization variable yields the Jacobian matrix, which describes the first-order variation of the re-projection error with respect to the camera pose Lie algebra:

$$\frac{\partial e}{\partial \delta\xi} = -\begin{bmatrix} \frac{f_x}{Z'} & 0 & -\frac{f_x X'}{Z'^2} & -\frac{f_x X' Y'}{Z'^2} & f_x + \frac{f_x X'^2}{Z'^2} & -\frac{f_x Y'}{Z'} \\ 0 & \frac{f_y}{Z'} & -\frac{f_y Y'}{Z'^2} & -f_y - \frac{f_y Y'^2}{Z'^2} & \frac{f_y X' Y'}{Z'^2} & \frac{f_y X'}{Z'} \end{bmatrix}$$

where $e$ is the re-projection error function; $f_x, f_y$ are derived from the camera intrinsics; and $[X', Y', Z']$ are the coordinates of the three-dimensional point in the camera frame.

The optimal camera pose is obtained by stepwise Gauss-Newton iteration.
CN202310478444.5A 2023-04-28 2023-04-28 Space target relative pose iterative estimation method and system based on key point weight Pending CN116645392A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310478444.5A CN116645392A (en) 2023-04-28 2023-04-28 Space target relative pose iterative estimation method and system based on key point weight

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310478444.5A CN116645392A (en) 2023-04-28 2023-04-28 Space target relative pose iterative estimation method and system based on key point weight

Publications (1)

Publication Number Publication Date
CN116645392A true CN116645392A (en) 2023-08-25

Family

ID=87623781

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310478444.5A Pending CN116645392A (en) 2023-04-28 2023-04-28 Space target relative pose iterative estimation method and system based on key point weight

Country Status (1)

Country Link
CN (1) CN116645392A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117237451A (en) * 2023-09-15 2023-12-15 南京航空航天大学 Industrial part 6D pose estimation method based on contour reconstruction and geometric guidance
CN117237451B (en) * 2023-09-15 2024-04-02 南京航空航天大学 Industrial part 6D pose estimation method based on contour reconstruction and geometric guidance
CN117495970A (en) * 2024-01-03 2024-02-02 中国科学技术大学 Template multistage matching-based chemical instrument pose estimation method, equipment and medium
CN117495970B (en) * 2024-01-03 2024-05-14 中国科学技术大学 Template multistage matching-based chemical instrument pose estimation method, equipment and medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination