CN115760984A - Non-cooperative target pose measurement method based on monocular vision by cubic star - Google Patents

Non-cooperative target pose measurement method based on monocular vision by cubic star

Info

Publication number
CN115760984A
CN115760984A (application CN202211470026.3A)
Authority
CN
China
Prior art keywords
image, target, target star, detected, template
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211470026.3A
Other languages
Chinese (zh)
Inventor
廖文和
朱奕潼
张翔
杜荣华
范书珲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Science and Technology
Original Assignee
Nanjing University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Science and Technology filed Critical Nanjing University of Science and Technology
Priority claimed from CN202211470026.3A
Publication of CN115760984A
Legal status: Pending

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00: Road transport of goods or passengers
    • Y02T 10/10: Internal combustion engine [ICE] based vehicles
    • Y02T 10/40: Engine management systems

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a monocular-vision-based non-cooperative target pose measurement method for a cube star, and relates to the technical field of satellites. The pose measurement method comprises: establishing a three-dimensional image and feature-point template library of the target star; acquiring a real-time image of the target star and matching it with the template images in the template library to obtain the template image corresponding to the image to be detected; rotating the contour image of the image to be detected and matching it against the contour image of the template image to determine the rotation angle of the image to be detected relative to the template image; performing grey-scale processing and threshold processing on the image to be detected to extract the edge contour of the target star, and obtaining the target star feature points from that edge contour; and screening the correspondence with the feature-point sequence in the template library according to the rotation angle, then obtaining the pose information of the target star relative to the camera through a pose solving algorithm that combines the three-dimensional coordinates of the feature points with the image feature points.

Description

Non-cooperative target pose measurement method based on monocular vision by cubic star
Technical Field
The invention relates to the technical field of satellites, in particular to a method for measuring a non-cooperative target pose of a cubic satellite based on monocular vision.
Background
In recent years, owing to the cube star's short development cycle and low manufacturing and research-and-development costs, more and more research institutions and commercial companies have turned their attention to it. Besides scientific research, teaching, and the verification of electronic products, cube stars are also applied to a series of on-orbit services, such as cube star formation flying, maintenance and refueling of space vehicles, and space-debris removal. All of these on-orbit services depend on vision-based navigation for the cube star, and the invention therefore provides a method for monocular-vision pose measurement of a non-cooperative target whose model is known.
Existing satellite visual-navigation systems are classified into monocular, binocular, and multi-camera systems. In the demanding aerospace field, monocular vision measurement suits a cube star platform thanks to its non-contact nature, low cost, high speed, small required volume, and flexibility of use. In the prior art, binocular cameras are mostly chosen for target pose measurement, but a binocular system is difficult to realize on a cube star platform. A survey of the available literature shows that, at present, there is no domestic patent on pose measurement of a model-known non-cooperative target for cube star monocular visual navigation.
Disclosure of Invention
The invention aims to provide a non-cooperative target pose measuring method based on monocular vision for a cube star, which solves the problem of stably and efficiently acquiring the relative pose information of a target spacecraft during an on-orbit task of the cube star.
The technical solution for realizing the invention is as follows: a method for measuring the position and posture of a cubic star non-cooperative target based on monocular vision comprises the following steps:
Step 1, collect target star images from different angles; with the three-dimensional model of the target star known, input the three-dimensional coordinate values of the target star feature points in the world coordinate system, establish a feature point sequence, and build a template library in which the target star images and the feature points are in one-to-one correspondence. Go to step 2.
Step 2, acquire a real-time image of the target star with a single camera as the image to be detected, match it against the target star images in the template library, calculate the image similarity, and take the target star image with the highest similarity to the image to be detected as the template image. Go to step 3.
Step 3, perform edge detection on the image to be detected and the template image to obtain their respective contour images; rotate the contour image of the image to be detected while matching it against the contour image of the template image, calculate the rotation similarity, and determine from it the rotation angle of the image to be detected relative to the template image. Go to step 4.
Step 4, apply grey-scale processing, a closing operation, and threshold processing to the image to be detected to separate the target star's appearance image from the background; extract the complete edge contour from the appearance image, obtain the target star feature points from the edge contour, and determine the correspondence between the target star feature points and the feature point sequence in the template library according to the rotation angle of the image to be detected relative to the template image. Go to step 5.
Step 5, using the correspondence between the target star feature points and the feature point sequence in the template library, the three-dimensional coordinates of the target star feature points, and the image feature points, obtain the pose information of the target star relative to the camera through an optimized EPNP pose solving algorithm, and optimize and correct the pose information.
Compared with the prior art, the invention has the remarkable advantages that:
(1) The pose solving method based on the feature points has the advantages of small calculated amount, high calculating efficiency and capability of obtaining real-time pose information.
(2) The invention is based on monocular vision, occupies small space on the satellite, has low power consumption requirement and is more suitable for a cubic satellite platform.
(3) The extraction of the feature points is based on the whole contour of the target star instead of a certain feature part, the extraction of the feature points is less influenced by illumination, and the robustness of the extraction of the feature points is higher.
Drawings
FIG. 1 is a flow chart of a measuring method of a non-cooperative target pose of a cube star based on monocular vision.
Figure 2 is a front view of a target star.
Detailed Description
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure. It is to be understood that the drawings in the following description are merely exemplary of the disclosure, and that other drawings may be derived from those drawings by one of ordinary skill in the art without the exercise of inventive faculty.
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
With reference to fig. 1, a method for measuring a non-cooperative target pose of a cube star based on monocular vision comprises the following steps:
Step 1, collect target star images from different angles; with the three-dimensional model of the target star known, input the three-dimensional coordinate values of the target star feature points in the world coordinate system, establish a feature point sequence, and build a template library in which the target star images and the feature points correspond one to one. Specifically:
the method comprises the steps of collecting three-dimensional coordinates of feature points of a target star, marking serial numbers, collecting images of the target star from angles at the same distance, processing the images of the target star, extracting the feature points of the images, and marking the serial numbers of the three-dimensional feature points corresponding to the upper, lower, left and right image feature points in a clockwise sequence.
Specifically, the three-dimensional feature points of the target's appearance are extracted and marked in sequence according to the three-dimensional model of the target star. A coordinate system is established with the front view of the target star (solar panels deployed) as the xy-plane (as shown in Fig. 2) and the centre of the target star's front face as the origin, giving the three-dimensional coordinate value of each feature point; the serial number of each feature point is paired one to one with its three-dimensional coordinate value to obtain the template library.
Step 2, acquire a real-time image of the target star with a single camera as the image to be detected, match it against the target star images in the template library, calculate the image similarity, and take the target star image with the highest similarity to the image to be detected as the template image.
Specifically, the camera and the target star are fixed on a six-degree-of-freedom experimental platform. Their positions are adjusted so that the camera's principal point and the centre of the target star's front-face image lie on the same horizontal line, while the camera's imaging plane is kept parallel to the front face of the target star. The target star is then rotated about three axes and offset, and real-time images are collected as the images to be detected.
The image to be detected is matched, feature point by feature point, against the target star images in the template library one by one, and the image similarity k is calculated; the target star image with the largest similarity k to the image to be detected is the template image corresponding to the image to be detected:
k = P_m / P_c
wherein P_m is the number of feature points in the image to be detected that match the target star image, and P_c is the total number of feature points detected in the image to be detected.
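The similarity measure above is a simple matched-fraction ratio; as a minimal sketch (the function and variable names are illustrative, not from the patent), template selection can be expressed as:

```python
def match_similarity(matched_count, detected_count):
    """Image similarity k = P_m / P_c: the fraction of feature points
    detected in the image to be detected that also match the template."""
    if detected_count == 0:
        return 0.0
    return matched_count / detected_count

def best_template(candidates):
    """Pick the template image with the highest similarity k.
    `candidates` maps a template name to its (P_m, P_c) pair."""
    return max(candidates, key=lambda name: match_similarity(*candidates[name]))
```

With `{"t0": (8, 20), "t1": (15, 20)}`, `best_template` returns `"t1"` (k = 0.75 versus 0.40).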
Step 3, perform edge detection on the image to be detected and the template image to obtain their respective contour images; rotate the contour image of the image to be detected while matching it against the contour image of the template image, calculate the rotation similarity, and determine from it the rotation angle of the image to be detected relative to the template image.
Specifically, edge detection is performed on the image to be detected and the template image with a Canny operator, yielding their respective contour images; both contour images are then segmented by a pyramid segmentation method to obtain simplified contour images of the image to be detected and of the template image.
The simplified contour image of the template image is cropped to a circle centred on the target star. Taking its current orientation as the 0° starting point, the circular image is rotated in 10° increments from 0° up to 360°; at each step the rotated template contour is matched against the contour image of the image to be detected by normalized squared difference, which brackets the approximate range of the optimal rotation angle. The template contour is then rotated within ±5° of that angle at 1° resolution, the similarity is again compared by normalized squared difference, and the rotation angle whose squared-difference value is lowest is taken as the rotation angle.
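The coarse-to-fine angle search described above can be sketched independently of the underlying image operations. In this hypothetical sketch, `cost` stands in for the normalized squared-difference contour match at a given angle:

```python
def coarse_to_fine_rotation(cost, coarse_step=10, fine_window=5):
    """Two-stage search for the rotation angle minimising cost(angle),
    mirroring the patent's scheme: scan 0..350 deg in 10-deg steps to
    bracket the optimum, then refine within +/-5 deg at 1-deg resolution.
    `cost` is a callable standing in for the template-match score."""
    best_coarse = min(range(0, 360, coarse_step), key=cost)
    fine_angles = range(best_coarse - fine_window, best_coarse + fine_window + 1)
    return min(fine_angles, key=lambda a: cost(a % 360)) % 360
```

This evaluates the match 36 + 11 times instead of 360 times, the same trade-off the patent's two-resolution scan makes.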
Step 4, apply grey-scale processing, a closing operation, and threshold processing to the image to be detected to separate the target star's appearance image from the background; extract the complete edge contour from the appearance image, obtain the target star feature points from the edge contour, and determine the correspondence between the target star feature points and the feature point sequence in the template library according to the rotation angle of the image to be detected relative to the template image.
Specifically, grey-scale processing and custom threshold processing are applied to the image to be detected. Since the space illumination environment is simulated with light-absorbing black cloth, this yields a binary image with a black background and a white target star. All contours of the white target star are extracted from the binary image, and area screening selects the most complete envelope contour of the target star. When the rotation angle is near 0°, 90°, 180°, or 270°, the pixel coordinates (x, y) on the contour are traversed and the maxima and minima of x + y and x − y are computed, giving the four feature extreme points at the periphery of the deployed solar panels, which are read in clockwise order; at the same time, the way the fifth feature point, the feature pole of the satellite radome, is obtained is determined by the rotation angle (at 0°, the point with the minimum ordinate y is the fifth feature point). When the target star is at any other rotation angle, the pixel coordinates (x, y) on the contour are traversed and the maxima and minima of the abscissa x and ordinate y are computed, giving four feature points read in clockwise order; the way the fifth feature point (the satellite radome feature pole) is solved is again determined by the rotation angle (for angles in (0°, 90°), the point with the maximum x − y is the fifth feature point), and the five feature points are numbered in order.
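For rotation angles near 0°, 90°, 180°, and 270°, the four extreme points can be read directly off the contour with the x + y and x − y criteria described above; a small NumPy sketch (names are illustrative) follows:

```python
import numpy as np

def panel_extreme_points(contour):
    """Given an N x 2 array of contour pixel coordinates (x, y), return the
    four solar-panel extreme points using the max/min of x + y and x - y,
    read clockwise in image coordinates (where y grows downward)."""
    s = contour[:, 0] + contour[:, 1]          # x + y
    d = contour[:, 0] - contour[:, 1]          # x - y
    pick = lambda idx: tuple(int(v) for v in contour[idx])
    return [pick(np.argmin(s)),   # smallest x + y (upper-left corner)
            pick(np.argmax(d)),   # largest  x - y (upper-right corner)
            pick(np.argmax(s)),   # largest  x + y (lower-right corner)
            pick(np.argmin(d))]   # smallest x - y (lower-left corner)
```

On a square contour with corners (0, 0), (10, 0), (10, 10), (0, 10) this yields exactly those corners in clockwise order.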
The specific flow of threshold processing is as follows:
Firstly, the grey-level histogram of the image to be detected is generated, and for each grey value I (I = 0, 1, 2, …, 255) the corresponding pixel count N_I is obtained.
Then the average grey value I_a of the image to be detected is calculated:
I_a = Σ(I = 0 to 255) I·N_I / Σ(I = 0 to 255) N_I
The difference I_e = I_a − T_e between the average grey value I_a and the ideal grey value T_e of the image to be detected is calculated, and I_e is subtracted from the grey value of every pixel of the image to be detected, giving the grey values of all pixels of the processed image.
The grey value of every pixel of the processed image is then compared with the threshold band [Thresh_min, Thresh_max] of the ideal image grey value. If the grey value I_j of the j-th pixel is greater than the threshold maximum Thresh_max, then
I_j = Thresh_max
Similarly, if the grey value I_j of the j-th pixel is less than the threshold minimum Thresh_min, then
I_j = Thresh_min
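The brightness-shift-and-clamp threshold flow above condenses into a few NumPy operations; this is a sketch with assumed parameter names (`target_mean` plays the role of T_e, and `lo`/`hi` play the roles of Thresh_min/Thresh_max):

```python
import numpy as np

def shift_and_clamp(img, target_mean=110.0, lo=0.0, hi=255.0):
    """Threshold step sketched from the patent: subtract I_e = I_a - T_e
    so the image mean matches the ideal grey value T_e, then clamp each
    pixel into the ideal-image threshold band [Thresh_min, Thresh_max]."""
    shifted = img.astype(np.float64) - (img.mean() - target_mean)
    return np.clip(shifted, lo, hi)
```

Note that `img.mean()` equals the histogram-weighted average Σ I·N_I / Σ N_I, so no explicit histogram is needed here.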
Step 5, using the correspondence between the target star feature points and the feature point sequence in the template library, the three-dimensional coordinates of the target star feature points, and the image feature points, obtain the pose information of the target star relative to the camera through an optimized EPNP pose solving algorithm, and optimize and correct the pose information.
Specifically, according to the rotation angle of the image to be detected relative to the template image, a feature point sequence and three-dimensional coordinate values of the feature point sequence in the template library are determined, pixel coordinate values of the feature point in the image to be detected correspond to the three-dimensional coordinate values of the feature point in the template library one by one, and a relative rotation matrix R and a relative translation vector t are determined through an optimized EPNP pose solving algorithm.
The traditional EPNP algorithm is based on four feature points, which yield a unique relative-pose solution. However, the feature-point coordinates obtained from the image may carry errors, so this unique solution is also in error, and in practice the accuracy of the EPNP algorithm is limited. To obtain a more accurate calculation result, that is, a more accurate relative rotation matrix R and relative translation vector t, the computed R and t are optimized starting from the initial relative rotation matrix and translation vector.
S5.1, define the mean position μ_A of the 5 feature points:
μ_A = (1/5) Σ(i = 1 to 5) A_i
where A_i is the three-dimensional position, in the world coordinate system, of the i-th feature point in the template library.
Calculate the mean three-dimensional position μ_B of all feature points of the image to be detected:
μ_B = (1/5) Σ(i = 1 to 5) B_i
where B_i is the three-dimensional position of the i-th feature point in the image to be detected.
S5.2, define the covariance matrix H:
H = Σ(i = 1 to 5) (A_i − μ_A)(B_i − μ_B)^T
S5.3, perform singular value decomposition of H:
H = UΣV^T
where U and V are unitary matrices and Σ is a diagonal matrix.
S5.4, calculate the relative rotation matrix R and the relative translation vector t:
R = VU^T
t = μ_B − Rμ_A
The above optimization method is effective not only for the case of 5 feature points but also for more than 5 feature points.
S5.5, if the determinant det(R) = 1, then R is the relative rotation matrix;
if det(R) = −1, then R is a reflection matrix and is corrected as:
R = V·diag(1, 1, −1)·U^T
The relative translation vector is then recomputed from the corrected rotation matrix:
t = μ_B − Rμ_A
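Steps S5.1 through S5.5 amount to a singular-value-decomposition alignment of the two matched point sets with a reflection check; a compact NumPy sketch (the function name is illustrative) under the convention B ≈ R·A + t is:

```python
import numpy as np

def refine_pose(A, B):
    """SVD point-set alignment as in steps S5.1-S5.5: A holds the template
    library's 3-D feature points, B the corresponding measured points
    (both N x 3, N >= 3, not collinear). Returns (R, t) with B ~ R A + t,
    correcting the reflection case det(R) = -1."""
    mu_A, mu_B = A.mean(axis=0), B.mean(axis=0)
    H = (A - mu_A).T @ (B - mu_B)              # covariance matrix
    U, _, Vt = np.linalg.svd(H)                # H = U Sigma V^T
    R = Vt.T @ U.T                             # R = V U^T
    if np.linalg.det(R) < 0:                   # reflection: flip last axis
        R = Vt.T @ np.diag([1.0, 1.0, -1.0]) @ U.T
    t = mu_B - R @ mu_A
    return R, t
```

Recomputing t after the reflection correction, as in S5.5, keeps the translation consistent with the corrected rotation.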
In summary, the invention provides a monocular-vision-based non-cooperative target pose measurement method for a cube star. A template library is built for the target star to be measured; an image of the target star is acquired in real time by a camera as the image to be detected; the template image most similar to the image to be detected is selected from the template library; the contour image of the image to be detected is rotated and matched against the contour image of the template image to determine their relative rotation angle; the edge contour of the target star is extracted through grey-scale and threshold processing and the feature points are extracted from it; a translation vector and rotation matrix are obtained through a pose solving algorithm combined with the three-dimensional coordinate points in the template library; and the translation vector and rotation matrix are optimized and corrected to obtain the required pose information of the target star relative to the camera.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (6)

1. A method for measuring the pose of a cubic star non-cooperative target based on monocular vision is characterized by comprising the following steps:
step 1, acquiring target star images from different angles, inputting three-dimensional coordinate values of target star feature points in a world coordinate system, establishing a feature point sequence and establishing a template library in which the target star images and the feature points correspond to each other one by one, wherein three-dimensional models of the target star are known; turning to the step 2;
step 2, acquiring a real-time image of the target star as an image to be detected by using one camera, matching the image to be detected with the image of the target star in a template library, calculating the similarity of the images, calling the image of the target star with the highest similarity with the image to be detected as a template image, and turning to step 3;
step 3, respectively carrying out edge detection on the image to be detected and the template image, correspondingly obtaining a profile image of the image to be detected and a profile image of the template image, rotating the profile image of the image to be detected, simultaneously matching the profile image of the template image, calculating rotation similarity, determining the rotation angle of the image to be detected relative to the template image according to the rotation similarity, and turning to step 4;
step 4, performing gray processing, closed operation and threshold processing on the image to be detected to obtain an appearance image of the target star separated from the background, extracting a complete edge contour in the appearance image of the target star, obtaining target star feature points according to the edge contour, determining the corresponding relation between the target star feature points and the feature point sequence in the template library according to the rotation angle of the image to be detected relative to the template image, and turning to step 5;
and 5, obtaining the pose information of the target star relative to the camera by using the corresponding relation between the target star feature points and the feature point sequence in the template library, the three-dimensional coordinates of the target star feature points and the target star feature points through an optimized EPNP pose solving algorithm, and optimizing and correcting the pose information.
2. The method for measuring the position and pose of a cubic star non-cooperative target based on monocular vision according to claim 1, wherein in step 1, images of a target star are collected from different angles, a three-dimensional model of the target star is known, three-dimensional coordinate values of feature points of the target star in a world coordinate system are input, a feature point sequence is established, and a template library of one-to-one correspondence between the images of the target star and the feature points is established, specifically as follows:
acquiring three-dimensional coordinates of the feature points of the target star, marking serial numbers, acquiring images of the target star from all angles at the same distance, processing the images of the target star, extracting the feature points of the images, and marking the serial numbers of the three-dimensional feature points corresponding to the upper, lower, left and right image feature points in a clockwise sequence;
extracting and sequentially marking the three-dimensional feature points of the target's appearance according to the three-dimensional model of the target star; establishing a coordinate system with the deployed-solar-panel side of the target star as the front and the centre of the target star's front-face image as the origin to obtain the three-dimensional coordinate value of each feature point, and pairing the serial number of each feature point one to one with its three-dimensional coordinate value to obtain the template library.
3. The method for measuring the position and orientation of a cubic star non-cooperative target based on monocular vision according to claim 2, wherein in step 2, a camera is used for acquiring a real-time image of the target star as an image to be measured, the image to be measured is matched with the image of the target star in the template library, the image similarity is calculated, and the image of the target star with the highest similarity to the image to be measured is called as a template image, specifically as follows:
fixing a camera and a target star on a six-degree-of-freedom experimental platform, adjusting the positions of the camera and the target star to ensure that a principal point of the camera and the center of a front image of the target star are in a horizontal line, simultaneously enabling an imaging plane of the camera to be parallel to the front of the target star, performing three-axis rotation and offset on the target star, and acquiring a real-time image as an image to be detected;
carrying out feature point matching on the images to be detected and target star images in the template library one by one, calculating the image similarity k, wherein the target star image with the maximum similarity k to the images to be detected is the template image corresponding to the images to be detected:
k = P_m / P_c
wherein P_m is the number of feature points in the image to be detected that match the target star image, and P_c is the total number of feature points detected in the image to be detected.
4. The method for measuring the position and pose of a cubic star non-cooperative target based on monocular vision according to claim 3, wherein in step 3, the image to be measured and the template image are respectively subjected to edge detection, the contour image of the image to be measured and the contour image of the template image are correspondingly obtained, the contour image of the image to be measured is rotated and simultaneously matched with the contour image of the template image, the rotation similarity is calculated, and the rotation angle of the image to be measured relative to the template image is determined according to the rotation similarity, which specifically comprises the following steps:
respectively carrying out edge detection processing on the image to be detected and the template image through a Canny operator, correspondingly obtaining a profile image of the image to be detected and a profile image of the template image, and segmenting the profile image of the image to be detected and the profile image of the template image through a pyramid segmentation method to obtain the profile image of the simple version of the image to be detected and the profile image of the template image;
cutting a simple edition outline image of a template image into a circle centered by a target star, taking the current position of the simple edition outline image of the template image as a 0-degree starting point, increasing 10 degrees for rotating the circle image from 0 degrees as the starting point until the circle image rotates 360 degrees, carrying out similarity matching on the rotated outline image of the template image and the outline image of an image to be detected in a normalized square difference mode to determine the approximate range of the optimal rotating angle, then rotating the outline image of the template image to be detected in a +/-5-degree area of the optimal rotating angle by taking 1 degree as accuracy, then comparing the similarity of the images in the normalized square difference mode, and taking the rotating angle corresponding to the outline image of the template image with the lowest square difference value as the rotating angle.
5. The method for measuring the non-cooperative target pose of a cube star based on monocular vision according to claim 4, wherein in step 4, the appearance image of the target star separated from the background is obtained by performing gray processing, closing operation and threshold processing on the image to be measured, the complete edge contour in the appearance image of the target star is extracted, the target star feature points are obtained according to the edge contour, and the corresponding relation between the target star feature points and the feature point sequence in the template library is determined according to the rotation angle of the image to be measured relative to the template image, which is specifically as follows:
firstly, a gray-level histogram of the image to be detected is generated, and the number of pixels N_I corresponding to each gray level I is counted, I = 0, 1, 2, …, 255;
then the average gray value I_a of the image to be detected is calculated:
I_a = (Σ_{I=0}^{255} I · N_I) / (Σ_{I=0}^{255} N_I)
the difference I_e = I_a − T_e between the average gray value I_a of the image to be detected and the ideal gray value T_e of the image to be detected is calculated, and I_e is subtracted from the gray value of every pixel of the image to be detected to obtain the gray values of all pixels of the processed image to be detected; the gray values of all pixels of the processed image to be detected are compared with the threshold interval [Thresh_min, Thresh_max] of ideal image gray values: if the gray value I_j of the j-th pixel is greater than the threshold maximum Thresh_max, then
I_j = Thresh_max;
similarly, if the gray value I_j of the j-th pixel is less than the threshold minimum Thresh_min, then
I_j = Thresh_min.
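The gray-shift-and-threshold step of claim 5 can be sketched as follows. Since the claim's two threshold formulas are published only as image placeholders, clamping out-of-range pixels to Thresh_min/Thresh_max is an assumed reading, and `normalize_brightness` with its parameters is a hypothetical helper:

```python
import numpy as np

def normalize_brightness(img, ideal_mean, thresh_min, thresh_max):
    """Shift the image so its average gray value matches the ideal value
    T_e, then clamp every pixel into [Thresh_min, Thresh_max].
    Clamping is an assumed interpretation of the claim's formulas."""
    hist = np.bincount(img.ravel(), minlength=256)      # N_I, I = 0..255
    mean = (np.arange(256) * hist).sum() / hist.sum()   # I_a
    shift = mean - ideal_mean                           # I_e = I_a - T_e
    shifted = img.astype(np.int32) - int(round(shift))  # subtract I_e everywhere
    return np.clip(shifted, thresh_min, thresh_max).astype(np.uint8)

# Example: an over-bright 4x4 patch pulled down to an ideal mean of 120.
img = np.full((4, 4), 200, dtype=np.uint8)
out = normalize_brightness(img, ideal_mean=120, thresh_min=30, thresh_max=220)
```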
6. The method for measuring the position and pose of a cubic star non-cooperative target based on monocular vision according to claim 5, wherein in step 5 the pose information of the target star relative to the camera is solved, optimized and corrected through an optimized EPnP pose solving algorithm, using the correspondence between the target star feature points and the feature-point sequence in the template library together with the three-dimensional coordinates of the target star feature points, specifically as follows:
S5.1, the mean value μ_A of the 5 feature points is defined:
μ_A = (1/5) Σ_{i=1}^{5} A_i
wherein A_i is the three-dimensional position of the i-th feature point of the template library in the world coordinate system;
the average three-dimensional position μ_B of all feature points of the image to be detected is calculated:
μ_B = (1/5) Σ_{i=1}^{5} B_i
wherein B_i is the three-dimensional position of the i-th feature point in the image to be detected;
S5.2, the covariance matrix H is defined:
H = Σ_{i=1}^{5} (B_i − μ_B)(A_i − μ_A)^T
S5.3, singular value decomposition is performed on H:
H = UΣV^T
wherein U and V are unitary matrices and Σ is a diagonal matrix;
S5.4, the relative rotation matrix R and the relative translation matrix t are calculated:
R = VU^T
t = μ_A − Rμ_B
S5.5, if the determinant of R satisfies det(R) = 1, then R is the relative rotation matrix;
if det(R) = −1, then R is a reflection matrix and is corrected as:
R = V·diag(1, 1, −1)·U^T
and the relative translation matrix is then recomputed from the corrected rotation matrix:
t = μ_A − R·μ_B.
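Steps S5.1 to S5.5 are the classical Kabsch alignment by SVD of the feature-point covariance. Below is a minimal NumPy sketch, assuming five non-degenerate point correspondences and the t = μ_A − R·μ_B convention above; `rigid_transform` is an illustrative name, not the patent's:

```python
import numpy as np

def rigid_transform(A, B):
    """Least-squares rigid alignment: find R, t with A_i ≈ R·B_i + t,
    following steps S5.1-S5.5 (Kabsch via SVD of the covariance)."""
    mu_A, mu_B = A.mean(axis=0), B.mean(axis=0)         # S5.1: centroids
    H = (B - mu_B).T @ (A - mu_A)                       # S5.2: covariance
    U, _, Vt = np.linalg.svd(H)                         # S5.3: H = U Σ Vᵀ
    R = Vt.T @ U.T                                      # S5.4: R = V Uᵀ
    if np.linalg.det(R) < 0:                            # S5.5: reflection fix
        R = Vt.T @ np.diag([1.0, 1.0, -1.0]) @ U.T
    t = mu_A - R @ mu_B                                 # t = μ_A − R·μ_B
    return R, t

# Five template points (world frame) and their rigidly transformed observations.
B = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]], float)
th = np.deg2rad(40.0)
R_true = np.array([[np.cos(th), -np.sin(th), 0.0],
                   [np.sin(th),  np.cos(th), 0.0],
                   [0.0,         0.0,        1.0]])
t_true = np.array([0.5, -1.0, 2.0])
A = B @ R_true.T + t_true
R_est, t_est = rigid_transform(A, B)
```

With exact, non-coplanar correspondences the decomposition recovers R and t to machine precision; in practice the recovered pose would feed the EPnP refinement the claim describes.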
CN202211470026.3A 2022-11-23 2022-11-23 Non-cooperative target pose measurement method based on monocular vision by cubic star Pending CN115760984A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211470026.3A CN115760984A (en) 2022-11-23 2022-11-23 Non-cooperative target pose measurement method based on monocular vision by cubic star

Publications (1)

Publication Number Publication Date
CN115760984A true CN115760984A (en) 2023-03-07

Family

ID=85335711

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211470026.3A Pending CN115760984A (en) 2022-11-23 2022-11-23 Non-cooperative target pose measurement method based on monocular vision by cubic star

Country Status (1)

Country Link
CN (1) CN115760984A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116681733A (en) * 2023-08-03 2023-09-01 南京航空航天大学 Near-distance real-time pose tracking method for space non-cooperative target
CN116681733B (en) * 2023-08-03 2023-11-07 南京航空航天大学 Near-distance real-time pose tracking method for space non-cooperative target

Similar Documents

Publication Publication Date Title
CN108562274B (en) Marker-based non-cooperative target pose measurement method
CN108052942B (en) Visual image recognition method for aircraft flight attitude
Zhang et al. Vision-based pose estimation for textureless space objects by contour points matching
CN107679537B (en) A kind of texture-free spatial target posture algorithm for estimating based on profile point ORB characteristic matching
Kolomenkin et al. Geometric voting algorithm for star trackers
CN111862201B (en) Deep learning-based spatial non-cooperative target relative pose estimation method
CN109345588A (en) A kind of six-degree-of-freedom posture estimation method based on Tag
EP4081938A1 (en) Systems and methods for pose detection and measurement
Urban et al. Finding a good feature detector-descriptor combination for the 2D keypoint-based registration of TLS point clouds
CN110097584A (en) The method for registering images of combining target detection and semantic segmentation
Petit et al. A robust model-based tracker combining geometrical and color edge information
CN112364805B (en) Rotary palm image detection method
CN108320310B (en) Image sequence-based space target three-dimensional attitude estimation method
CN112734844A (en) Monocular 6D pose estimation method based on octahedron
CN115760984A (en) Non-cooperative target pose measurement method based on monocular vision by cubic star
CN113295171B (en) Monocular vision-based attitude estimation method for rotating rigid body spacecraft
Harvard et al. Spacecraft pose estimation from monocular images using neural network based keypoints and visibility maps
Cao et al. Detection method based on image enhancement and an improved faster R-CNN for failed satellite components
Li et al. Vision-based target detection and positioning approach for underwater robots
Koizumi et al. Development of attitude sensor using deep learning
CN117197333A (en) Space target reconstruction and pose estimation method and system based on multi-view vision
Azad et al. Accurate shape-based 6-dof pose estimation of single-colored objects
US10366278B2 (en) Curvature-based face detector
CN115131433A (en) Non-cooperative target pose processing method and device and electronic equipment
CN111680552B (en) Feature part intelligent recognition method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination