WO2015096508A1 - Attitude estimation method and system for an on-orbit three-dimensional space target under model constraint - Google Patents

Attitude estimation method and system for an on-orbit three-dimensional space target under model constraint Download PDF

Info

Publication number
WO2015096508A1
WO2015096508A1 (PCT/CN2014/085717)
Authority
WO
WIPO (PCT)
Prior art keywords
target
image
characteristic view
orbit
attitude estimation
Prior art date
Application number
PCT/CN2014/085717
Other languages
English (en)
French (fr)
Inventor
Zhang Tianxu
Wang Liangliang
Zhou Gang
Li Ming
Yang Weidong
Liu Kuan
Zheng Yayun
Original Assignee
Huazhong University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huazhong University of Science and Technology
Priority to US15/106,690 priority Critical patent/US20170008650A1/en
Publication of WO2015096508A1 publication Critical patent/WO2015096508A1/zh

Links

Images

Classifications

    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B64 - AIRCRAFT; AVIATION; COSMONAUTICS
    • B64G - COSMONAUTICS; VEHICLES OR EQUIPMENT THEREFOR
    • B64G3/00 - Observing or tracking cosmonautic vehicles
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 - Instruments for performing navigational calculations
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B64 - AIRCRAFT; AVIATION; COSMONAUTICS
    • B64G - COSMONAUTICS; VEHICLES OR EQUIPMENT THEREFOR
    • B64G1/00 - Cosmonautic vehicles
    • B64G1/22 - Parts of, or equipment specially adapted for fitting in or to, cosmonautic vehicles
    • B64G1/24 - Guiding or controlling apparatus, e.g. for attitude control
    • B64G1/244 - Spacecraft control systems
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B64 - AIRCRAFT; AVIATION; COSMONAUTICS
    • B64G - COSMONAUTICS; VEHICLES OR EQUIPMENT THEREFOR
    • B64G1/00 - Cosmonautic vehicles
    • B64G1/22 - Parts of, or equipment specially adapted for fitting in or to, cosmonautic vehicles
    • B64G1/24 - Guiding or controlling apparatus, e.g. for attitude control
    • B64G1/244 - Spacecraft control systems
    • B64G1/245 - Attitude control algorithms for spacecraft attitude control
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/24 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for cosmonautical navigation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/70 - Determining position or orientation of objects or cameras
    • G06T7/73 - Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/75 - Determining position or orientation of objects or cameras using feature-based methods involving models
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 - Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 - Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/751 - Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/60 - Type of objects
    • G06V20/64 - Three-dimensional objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10032 - Satellite or aerial image; Remote sensing

Definitions

  • The invention belongs to the interdisciplinary field of aerospace technology and pattern recognition, and particularly relates to a method and system for estimating the attitude of on-orbit three-dimensional space targets such as satellites and spacecraft.
  • Space targets such as the large number of communication and resource satellites launched at home and abroad are used in applications such as network communication, aerial photography, and geodetic survey.
  • Ground-based photoelectric observation of these space targets, and the analysis and adjustment of their attitudes, are indispensable parts of such systems. Owing to the limited spatial resolution of ground-based telescope systems and the random disturbance of long-distance optical imaging by the atmosphere, images acquired by ground-based sensors are prone to blurred target boundaries. When the target boundary is blurred, the accuracy of traditional attitude estimation and 3D reconstruction algorithms based on feature-point matching tends to degrade rapidly as the degree of blur increases.
  • Attitude estimation aims to compute, from the target projection image acquired in the two-dimensional camera coordinate system, the pitch angle α and the yaw angle β of the target in the three-dimensional target coordinate system; each pair of (α, β) angle values corresponds to one attitude.
  • The accuracy of attitude estimation is of great significance for analyzing the component sizes of a space target, the relative positional relationships of its components, and its functional properties. It is therefore necessary to study robust attitude estimation algorithms under long-distance optical imaging conditions.
  • One prior method solves nonlinear equations iteratively to obtain the target pose parameters; it has high precision and good robustness, but requires marked points on the target to be known in advance, so it is unsuitable for non-cooperative and unmarked targets and adapts poorly.
  • FISCHLER M A, BOLLES R C. "Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography" [J]. Communications of the ACM, 1981, 24(6): 381-395.
  • The present invention provides an on-orbit three-dimensional space target attitude estimation method and system that can effectively estimate the target's three-dimensional attitude from a two-dimensional image of the space target, with high precision, a small amount of computation, and good adaptability.
  • An on-orbit three-dimensional space target attitude estimation method including an offline feature library construction step and an online posture estimation step:
  • the offline feature library construction step is specifically:
  • The geometric features include the target body aspect ratio Ti,1, the target longitudinal symmetry Ti,2, the target lateral symmetry Ti,3, and the target principal-axis tilt angle Ti,4;
  • the target body aspect ratio Ti,1 is the height-to-width ratio of the target's minimum circumscribed rectangle;
  • the target longitudinal symmetry Ti,2 is the ratio of the target's upper-half area to its lower-half area within the region enclosed by the minimum circumscribed rectangle;
  • the target lateral symmetry Ti,3 is the ratio of the target's left-half area to its right-half area within the region enclosed by the minimum circumscribed rectangle;
  • the target principal-axis tilt angle Ti,4 is the angle between the principal axis of the target cylinder and the horizontal direction of the view.
  • the online posture estimation step is specifically:
  • Step (B2) extracts features from the preprocessed image to be tested; the features are the same as those extracted in step (A2).
  • The target body aspect ratio is extracted as Ti,1 = Hi / Wi, where Hi and Wi are the height and width of the target body in characteristic view Fi.
  • In the principal-axis formulas, the symbol π denotes pi and atan2 denotes the four-quadrant arctangent function.
  • The geometric feature library constructed in step (A2) is further normalized, and the image features extracted in step (B2) are likewise normalized.
  • A specific implementation of step (A1), which obtains from the three-dimensional model the target multi-viewpoint characteristic views representing the target's various attitudes, is as follows: the three-dimensional model of the space target OT is placed at the center of a Gaussian observation sphere, and from the sphere center the model is orthographically projected onto K two-dimensional planes, yielding K multi-viewpoint characteristic views Fi of the three-dimensional template target; each characteristic view Fi is a pixel matrix of width n and height m.
  • Step (B1) first suppresses noise in the image to be measured with non-local means filtering, and then deblurs it with a maximum likelihood estimation algorithm.
  • An on-orbit three-dimensional space target attitude estimation system comprising an offline feature library building module and an online attitude estimation module:
  • the offline feature library building module specifically includes:
  • a first sub-module configured to acquire, from the three-dimensional model of the space target, the target multi-viewpoint characteristic views representing the target's various attitudes;
  • a second sub-module configured to extract geometric features from the characteristic views to form a geometric feature library; the geometric features include the target body aspect ratio Ti,1, the target longitudinal symmetry Ti,2, the target lateral symmetry Ti,3, and the target principal-axis tilt angle Ti,4;
  • the target body aspect ratio Ti,1 is the height-to-width ratio of the target's minimum circumscribed rectangle;
  • the target longitudinal symmetry Ti,2 is the ratio of the target's upper-half area to its lower-half area within the region enclosed by the minimum circumscribed rectangle;
  • the target lateral symmetry Ti,3 is the ratio of the target's left-half area to its right-half area within the region enclosed by the minimum circumscribed rectangle;
  • the target principal-axis tilt angle Ti,4 is the angle between the principal axis of the target cylinder and the horizontal direction of the view.
  • the online posture estimation module specifically includes:
  • a fourth sub-module configured to extract a feature from the pre-processed image to be tested, the feature being the same as the feature extracted by the second sub-module;
  • the fifth sub-module is configured to match the features extracted by the image to be tested in the geometric feature library, and the spatial target posture represented by the characteristic view corresponding to the matching result is the target posture in the image to be tested.
  • Steps (A1) to (A2) of the present invention form the offline training stage: the target multi-viewpoint characteristic views are obtained from the three-dimensional template target model, their geometric features are extracted, and the template target geometric feature library is built. Steps (B1) to (B3) form the online attitude estimation stage: the geometric features of the image under test are compared with the template target feature library, and the attitude of the image under test is estimated.
  • The geometric features specifically used for matching are scale-invariant; as long as the relative size ratios and positional relationships of the target components are obtained accurately in the three-dimensional modeling stage, high matching accuracy is ensured in subsequent steps.
  • The whole method is simple to implement, robust, and highly accurate in attitude estimation; it is little affected by imaging conditions and has good applicability.
  • Normalizing the extracted geometric features effectively balances the influence of each feature on the attitude estimate; preprocessing the image under test, preferably denoising with non-local means filtering and deblurring with a maximum likelihood estimation algorithm, improves estimation accuracy under turbulence-blurred imaging conditions.
  • Arithmetically averaging the pose estimation results improves the stability of the attitude estimation algorithm.
  • Figure 1 is a schematic diagram of attitude estimation
  • Figure 3 is a schematic view of a Gaussian observation sphere
  • Figure 4 is a schematic diagram of a three-dimensional model of the Hubble telescope
  • Figure 6(a) is a Hubble telescope characteristic view Fi;
  • Figure 6(b) is the result of threshold segmentation of Figure 6(a) using the maximum between-class variance criterion;
  • Figure 6(c) is a schematic of the Hubble telescope target aspect ratio: rectangle ABCD is the minimum circumscribed rectangle of characteristic view Fi, and Hi and Wi are the height and width of the target body in Fi;
  • Figure 6(d) is a schematic of the target longitudinal symmetry of the Hubble telescope: the region enclosed by rectangle abcd is the upper half of the target in characteristic view Fi, and the region enclosed by rectangle cdef is the lower half;
  • Figure 6(e) is a schematic of the target lateral symmetry of the Hubble telescope: the region enclosed by rectangle hukv is the left half of the target in characteristic view Fi, and the region enclosed by rectangle ujvl is the right half;
  • Figure 6(f) is a schematic of the target principal-axis tilt angle of the Hubble telescope: the plotted vector is the principal axis of the target cylinder in characteristic view Fi, and its angle ∠QOR with the horizontal is the target principal-axis tilt angle, i.e. the tilt angle of the Hubble telescope satellite platform axis;
  • Figure 7(b) is the non-local means filtering result for Figure 7(a);
  • Figure 7(c) is the result of correcting Figure 7(b) with the maximum likelihood estimation (MAP) algorithm.
  • In the embodiment, the on-orbit three-dimensional space target is the on-orbit Hubble telescope; its satellite platform is a cylinder carrying two rectangular solar panels, and the target attitude to be estimated is the attitude of the satellite platform in the three-dimensional target coordinate system.
  • Fig. 1 shows the attitude estimation schematic.
  • In the geocentric coordinate system, the X axis points to the prime meridian, the Z axis points to true north, and the Y axis direction follows the right-hand rule.
  • In the target coordinate system, the target satellite centroid always points to the center of the earth; the Xs axis is parallel to the Y axis of the geocentric coordinate system, and the Ys axis is parallel to the Z axis of the geocentric coordinate system.
  • Attitude estimation aims to estimate, from the target satellite projection image in the camera coordinate system, the pitch angle α of the three-dimensional target satellite in the target coordinate system, i.e. ∠N′OSN, and the yaw angle β, i.e. ∠N′OSXS;
  • OSN is the axis of the cylindrical satellite platform;
  • OSN′ is the projection of the satellite platform axis OSN onto the plane XSOSYS;
  • the camera plane XmOmYm is parallel to the XSOSYS plane of the target coordinate system and also parallel to the YOZ plane of the geocentric coordinate system.
  • The flow of the present invention is shown in FIG. 2; the specific implementation includes the following steps: obtaining the template target multi-viewpoint characteristic views, building the template target geometric feature library, computing the geometric features of the image under test, and estimating the target attitude.
  • For a non-cooperative space target, the approximate geometry and relative positional relationships of its components are inferred from multi-viewpoint projection images of the target.
  • Prior knowledge, such as the fact that the line from the satellite platform centroid to the earth's center is perpendicular to the platform while in orbit and that the solar panels always point toward the incident sunlight, is used to further determine the spatial relationships between satellite components.
  • The 3D model of the target satellite is built using the 3D modeling tool Multigen Creator;
  • Figure 4 is a schematic diagram of the 3D model of the Hubble telescope built using Multigen Creator;
  • The invention uses a simulated Hubble telescope satellite as the template target: the three-dimensional template target OT is placed at the center of the Gaussian observation sphere and orthographically projected from the sphere center onto 2592 two-dimensional planes, yielding 2592 multi-viewpoint characteristic views Fi of the three-dimensional template target.
  • The target lateral symmetry of characteristic view Fi is defined as the ratio of the target left-half area SLi to the right-half area SRi within the region enclosed by the target's minimum circumscribed rectangle; in this example Ti,3 = 0.9909.
  • The target principal-axis tilt angle is defined as the angle θ between the target cylinder axis of characteristic view Fi and the horizontal direction of the image. This feature reflects the attitude of the target most clearly; it ranges from 0° to 180° and is represented by a one-dimensional floating-point number.
  • Figure 6(f) shows the target principal-axis tilt angle of the Hubble telescope: the plotted vector is the principal axis of the target cylinder in characteristic view Fi (in this example, the main axis of the Hubble telescope satellite platform), and its angle ∠QOR with the horizontal is the target principal-axis tilt angle.
  • The symbol π denotes pi and atan2 denotes the four-quadrant arctangent function; in this example the target principal-axis tilt angle Ti,4 = 50.005°.
  • The geometric feature library MF of the template target multi-viewpoint characteristic views Fi is normalized to obtain the template target normalized geometric feature library SMF.
  • the online posture estimation step is specifically as follows:
  • Space target imaging data is very noisy, has a low signal-to-noise ratio, and is severely blurred. Before any subsequent processing, the imaging data must therefore be preprocessed: it is first denoised, and then, given the characteristics of the data, an effective correction algorithm is applied to restore the space target image.
  • Non-local means filtering is used for noise suppression with the following parameters: similarity window 5×5, search window 15×15, attenuation parameter 15; the noisy input is shown in Fig. 7(a) and the filtering result in Fig. 7(b). A maximum likelihood estimation algorithm is then used for deblurring, in this example with 8 outer-loop iterations and 3 inner-loop iterations each for the estimated point spread function and the target image, producing the preprocessed image g(x, y) shown in Fig. 7(c).
  • Sub-steps (A2.1) to (A2.4) are then performed to obtain the geometric features of the image under test {G1, G2, G3, G4}, which are normalized to give the normalized features of the image under test {SG1, SG2, SG3, SG4}.
  • the target pose estimation step includes the following sub-steps:

Landscapes

  • Engineering & Computer Science (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Automation & Control Theory (AREA)
  • Multimedia (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Astronomy & Astrophysics (AREA)
  • Chemical & Material Sciences (AREA)
  • Combustion & Propulsion (AREA)
  • Software Systems (AREA)
  • Medical Informatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Databases & Information Systems (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

A method for estimating the attitude of an on-orbit three-dimensional space target comprises an offline feature-library construction step and an online attitude estimation step. The offline feature-library construction step is: obtaining target multi-viewpoint characteristic views from a three-dimensional model of the space target, and extracting geometric features from each characteristic view to form a geometric feature library; the geometric features comprise the target body aspect ratio, the target longitudinal symmetry, the target lateral symmetry, and the target principal-axis tilt angle. The online attitude estimation step is: preprocessing the image of the on-orbit target under test and extracting features from it, then matching the extracted features against the geometric feature library; the target attitude represented by the characteristic view corresponding to the matching result is the attitude estimate. Because the size ratios and positional relationships between the target components are acquired accurately in the three-dimensional modeling stage, high matching accuracy is ensured in subsequent steps. A system for estimating the attitude of an on-orbit three-dimensional space target is also provided.

Description

Attitude estimation method and system for an on-orbit three-dimensional space target under model constraint

[Technical Field]
The present invention belongs to the interdisciplinary field of aerospace technology and pattern recognition, and specifically relates to an attitude estimation method and system applicable to on-orbit three-dimensional space targets such as satellites and spacecraft.

[Background Art]
The large numbers of communication satellites, resource satellites and other space targets launched at home and abroad are used in applications such as network communication, aerial photography and geodetic survey. Ground-based photoelectric observation of these space targets, and the analysis and adjustment of their attitudes, are indispensable parts of such systems. Owing to the limited spatial resolution of ground-based telescope systems and the random disturbance of long-distance optical imaging by the atmospheric environment, images acquired by ground-based sensors are prone to blurred target boundaries. When the imaged target boundary is blurred, the accuracy of traditional attitude estimation and 3D reconstruction algorithms based on feature-point matching tends to degrade rapidly as the degree of blur increases. Attitude estimation aims to compute, from the target projection image acquired in the two-dimensional camera coordinate system, the pitch angle α and the yaw angle β of the target in the three-dimensional target coordinate system; each pair of (α, β) angle values corresponds to one attitude. The accuracy of attitude estimation is of great significance for analyzing the component sizes of a space target, the relative positional relationships of its components, and its functional properties. It is therefore necessary to study robust attitude estimation algorithms under ground-based long-distance optical imaging conditions.

Scholars at home and abroad have studied space target attitude estimation under such imaging conditions in detail and obtained relevant results. For example, Zhao Rujin, Zhang Qiheng and Xu Zhiyong, "A pose measurement method based on the inclination angles of line segments between feature points", Acta Photonica Sinica, Vol. 39, No. 2, February 2010, studied an iterative method for solving the 3D attitude of a target based on the inclination-angle information between target feature points; the method is suitable for distant weak-perspective imaging targets and for attitude solution when the camera intrinsic parameters are unknown. However, its precision depends heavily on the accuracy of the extracted edges, lines and corners; when the iteration's initial value deviates greatly from the true attitude, many iterations are needed, the computation is heavy, and the iteration may fail to converge. Ground-based long-distance optical imaging easily produces blurred target boundaries, which degrades feature-point localization accuracy, so the precision of this algorithm is poor. Wang Kunpeng, Zhang Xiaohu and Yu Qifeng, "Single-station pose determination from images based on ratios of target feature points", Journal of Applied Optics, Vol. 30, No. 6, November 2009, proposed a single-station pose determination method for live recorded images, which uses the ratio information of target feature-point coordinates, the target imaging model and the pose relationships between coordinate systems, and solves nonlinear equations iteratively to obtain the target pose parameters. The algorithm is precise and robust, but it requires marker points on the target to be known in advance, so it is unsuitable for attitude solution of non-cooperative and unmarked targets and adapts poorly. FISCHLER M A, BOLLES R C, "Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography" [J], Communications of the ACM, 1981, 24(6): 381-395, extracts a large number of point pairs on the target and its projection image and selects a minimal set of feature points by consensus cross-validation for 3D reconstruction of the attitude. The algorithm must extract many feature-point pairs, its computation is heavy, and when point pairs are mismatched its error is large. Each of the above works proposes its own solution to special cases of this problem, with its own algorithmic characteristics, but all suffer from heavy computation, poor precision, or poor adaptability.
[Summary of the Invention]
To solve the problems of heavy computation, poor precision, or poor adaptability in existing methods, the present invention provides an on-orbit three-dimensional space target attitude estimation method and system that can effectively estimate the target's three-dimensional attitude from a two-dimensional image of the space target, with high precision, little computation, and good adaptability.

An on-orbit three-dimensional space target attitude estimation method comprises an offline feature-library construction step and an online attitude estimation step.

The offline feature-library construction step is specifically:

(A1) obtaining, from the three-dimensional model of the space target, the target multi-viewpoint characteristic views representing the target's various attitudes;

(A2) extracting geometric features from the multi-viewpoint characteristic views to form a geometric feature library; the geometric features comprise the target body aspect ratio Ti,1, the target longitudinal symmetry Ti,2, the target lateral symmetry Ti,3, and the target principal-axis tilt angle Ti,4; the target body aspect ratio Ti,1 is the height-to-width ratio of the target's minimum circumscribed rectangle; the target longitudinal symmetry Ti,2 is the ratio of the target's upper-half area to its lower-half area within the region enclosed by the minimum circumscribed rectangle; the target lateral symmetry Ti,3 is the ratio of the target's left-half area to its right-half area within that region; the target principal-axis tilt angle Ti,4 is the angle between the principal axis of the target cylinder in the characteristic view and the horizontal direction of the view.

The online attitude estimation step is specifically:

(B1) preprocessing the image of the on-orbit space target under test;

(B2) extracting features from the preprocessed image, the features being the same as those extracted in step (A2);

(B3) matching the features extracted from the image under test against the geometric feature library; the space target attitude represented by the characteristic view corresponding to the matching result is the target attitude in the image under test.
Further, the target body aspect ratio Ti,1 is extracted as follows:

(A2.1.1) apply the maximum between-class variance (Otsu) threshold criterion to characteristic view Fi to obtain a threshold Ti; set pixel gray values fi(x, y) in Fi greater than Ti to 255 and those less than or equal to Ti to zero, thereby obtaining the binary image Gi, a pixel matrix of width n and height m, with gi(x, y) the gray value of Gi at point (x, y);

(A2.1.2) scan Gi top-to-bottom, left-to-right; when the current pixel value gi(x, y) equals 255, record its abscissa x = Topj and ordinate y = Topi and stop scanning;

(A2.1.3) scan Gi bottom-to-top, left-to-right; when gi(x, y) equals 255, record x = Bntj, y = Bnti and stop scanning;

(A2.1.4) scan Gi left-to-right, top-to-bottom; when gi(x, y) equals 255, record x = Leftj, y = Lefti and stop scanning;

(A2.1.5) scan Gi right-to-left, top-to-bottom; when gi(x, y) equals 255, record x = Rightj, y = Righti and stop scanning;

(A2.1.6) the target body aspect ratio of characteristic view Fi is

Ti,1 = Hi / Wi,

where Hi = |Topi - Bnti|, Wi = |Leftj - Rightj|, and |V| denotes the absolute value of the variable V.
Further, the target longitudinal symmetry Ti,2 is extracted as follows:

(A2.2.1) compute the center point of characteristic view Fi: abscissa Cix = ⌊(Leftj + Rightj)/2⌋ and ordinate Ciy = ⌊(Topi + Bnti)/2⌋, where ⌊V⌋ denotes the integer part of the variable V;

(A2.2.2) count the pixels with gray value 255 in the region of binary image Gi with 1 ≤ x ≤ n and 1 ≤ y ≤ Ciy; this is the target upper-half area STi of characteristic view Fi;

(A2.2.3) count the pixels with gray value 255 in the region with 1 ≤ x ≤ n and Ciy + 1 ≤ y ≤ m; this is the target lower-half area SDi;

(A2.2.4) compute the target longitudinal symmetry of characteristic view Fi:

Ti,2 = STi / SDi.
Further, the target lateral symmetry Ti,3 is extracted as follows:

(A2.3.1) count the pixels with gray value 255 in the region of Gi with 1 ≤ x ≤ Cix and 1 ≤ y ≤ m; this is the target left-half area SLi of characteristic view Fi;

(A2.3.2) count the pixels with gray value 255 in the region with Cix + 1 ≤ x ≤ n and 1 ≤ y ≤ m; this is the target right-half area SRi;

(A2.3.3) compute the target lateral symmetry of characteristic view Fi:

Ti,3 = SLi / SRi.
Further, the target principal-axis tilt angle Ti,4 is extracted as follows:

(A2.4.1) compute the centroid abscissa xi0 and ordinate yi0 of the binary image Gi corresponding to characteristic view Fi:

xi0 = mi(1,0) / mi(0,0), yi0 = mi(0,1) / mi(0,0),

where mi(j,k) = Σ(x=1..n) Σ(y=1..m) x^j y^k gi(x, y), with j = 0, 1 and k = 0, 1;

(A2.4.2) compute the (p+q)-order central moments μi(p,q) of the binary image Gi corresponding to Fi:

μi(p,q) = Σ(x=1..n) Σ(y=1..m) (x - xi0)^p (y - yi0)^q gi(x, y), with p = 0, 1, 2 and q = 0, 1, 2;

(A2.4.3) construct the real symmetric matrix

Mat = [ μi(2,0)  μi(1,1) ; μi(1,1)  μi(0,2) ],

and compute its eigenvalues V1, V2 and the corresponding eigenvectors;

(A2.4.4) compute the target principal-axis tilt angle Ti,4 of characteristic view Fi: with (ex, ey)^T the eigenvector corresponding to the larger of V1 and V2,

θi = (180/π)·atan2(ey, ex), and Ti,4 = θi if θi ≥ 0, otherwise Ti,4 = θi + 180°,

where the symbol π denotes pi and atan2 denotes the four-quadrant arctangent function.
Further, the geometric feature library constructed in step (A2) is normalized, and the features extracted from the image under test in step (B2) are normalized in the same way.

Further, a specific implementation of step (A1), obtaining from the three-dimensional model the target multi-viewpoint characteristic views representing the target's various attitudes, is as follows:

divide the Gaussian observation sphere into K two-dimensional planes by stepping the pitch angle α every γ and the yaw angle β every γ, with α = -180°~0°, β = -180°~180°, and K = 360*180/γ²;

place the three-dimensional model of the space target OT at the center of the Gaussian observation sphere and orthographically project it from the sphere center onto the K two-dimensional planes, obtaining K multi-viewpoint characteristic views Fi of the three-dimensional template target; each characteristic view Fi is a pixel matrix of width n and height m, fi(x, y) is the gray value of Fi at point (x, y), 1 ≤ x ≤ n, 1 ≤ y ≤ m, i = 1, 2, …, K.

Further, step (B1) first suppresses noise in the image under test with non-local means filtering and then deblurs it with a maximum likelihood estimation algorithm.

Further, a specific implementation of (B3) is:

(B3.1) traverse the whole geometric feature library SMF and compute the Euclidean distances, denoted D1, …, DK, between the four geometric features {SG1, SG2, SG3, SG4} of the image under test and each row vector of SMF, K being the number of target multi-viewpoint characteristic views;

(B3.2) select the four smallest values Ds, Dt, Du, Dv among D1, …, DK and take the arithmetic mean of the four corresponding target attitudes; this is the target attitude in the image under test.
An on-orbit three-dimensional space target attitude estimation system comprises an offline feature-library construction module and an online attitude estimation module.

The offline feature-library construction module specifically comprises:

a first sub-module configured to acquire, from the three-dimensional model of the space target, the target multi-viewpoint characteristic views representing the target's various attitudes;

a second sub-module configured to extract geometric features from the multi-viewpoint characteristic views to form a geometric feature library; the geometric features comprise the target body aspect ratio Ti,1, the target longitudinal symmetry Ti,2, the target lateral symmetry Ti,3, and the target principal-axis tilt angle Ti,4; the target body aspect ratio Ti,1 is the height-to-width ratio of the target's minimum circumscribed rectangle; the target longitudinal symmetry Ti,2 is the ratio of the target's upper-half area to its lower-half area within the region enclosed by the minimum circumscribed rectangle; the target lateral symmetry Ti,3 is the ratio of the target's left-half area to its right-half area within that region; the target principal-axis tilt angle Ti,4 is the angle between the principal axis of the target cylinder in the characteristic view and the horizontal direction of the view.

The online attitude estimation module specifically comprises:

a third sub-module configured to preprocess the image of the on-orbit space target under test;

a fourth sub-module configured to extract features from the preprocessed image, the features being the same as those extracted by the second sub-module;

a fifth sub-module configured to match the features extracted from the image under test against the geometric feature library; the space target attitude represented by the characteristic view corresponding to the matching result is the target attitude in the image under test.
The technical effects of the present invention are as follows.

Steps (A1)-(A2) of the invention form the offline training stage: the target multi-viewpoint characteristic views are obtained from the three-dimensional template target model, their geometric features are extracted, and the template target geometric feature library is built. Steps (B1)-(B3) form the online attitude estimation stage: the geometric features of the image under test are compared with the template target feature library to estimate its attitude. The geometric features specifically used for matching are scale-invariant; as long as the relative size ratios and positional relationships of the target components are obtained accurately in the three-dimensional modeling stage, high matching accuracy is ensured in subsequent steps. The whole method is simple to implement, robust, and highly accurate in attitude estimation; it is little affected by imaging conditions and has good applicability.

As optimizations, normalizing the extracted geometric features effectively balances the influence of each feature on the attitude estimate; preprocessing the image under test, preferably denoising with non-local means filtering and deblurring with a maximum likelihood estimation algorithm, improves estimation accuracy under turbulence-blurred imaging conditions; and arithmetically averaging the pose estimation results improves the stability of the attitude estimation algorithm.
[Brief Description of the Drawings]
Figure 1 is a schematic diagram of attitude estimation;
Figure 2 is a flow diagram of the present invention;
Figure 3 is a schematic diagram of the Gaussian observation sphere;
Figure 4 is a schematic diagram of the three-dimensional model of the Hubble telescope;
Figure 5(a) is the projected characteristic view of the Hubble telescope at pitch angle α = 0°, yaw angle β = 0°;
Figure 5(b) is the projected characteristic view at α = 0°, β = 90°;
Figure 5(c) is the projected characteristic view at α = -90°, β = 90°;
Figure 5(d) is the projected characteristic view at α = -180°, β = 90°;
Figure 6(a) is one frame of the Hubble telescope characteristic views Fi;
Figure 6(b) is the result of threshold segmentation of Figure 6(a) using the maximum between-class variance criterion;
Figure 6(c) is a schematic of the Hubble telescope target aspect ratio: rectangle ABCD is the minimum circumscribed rectangle of characteristic view Fi, |AC| is the target body height Hi and |CD| the target body width Wi of Fi;
Figure 6(d) is a schematic of the target longitudinal symmetry of the Hubble telescope: the region enclosed by rectangle abcd is the upper half of the target in Fi, and the region enclosed by rectangle cdef is the lower half;
Figure 6(e) is a schematic of the target lateral symmetry of the Hubble telescope: the region enclosed by rectangle hukv is the left half of the target in Fi, and the region enclosed by rectangle ujvl is the right half;
Figure 6(f) is a schematic of the target principal-axis tilt angle of the Hubble telescope: the plotted vector is the principal axis of the target cylinder in Fi, and its angle ∠QOR with the horizontal is the target principal-axis tilt angle, i.e. the tilt angle of the Hubble telescope satellite platform axis;
Figure 7(a) is a simulated Hubble telescope image, with pitch angle α and yaw angle β of (α, β) = (-40°, -125°);
Figure 7(b) is the non-local means filtering result for Figure 7(a);
Figure 7(c) is the result of correcting Figure 7(b) with the maximum likelihood estimation (MAP) algorithm;
Figure 7(d) is attitude estimation result 1 for Figure 7(c), (α, β) = (-40°, -130°);
Figure 7(e) is attitude estimation result 2 for Figure 7(c), (α, β) = (-40°, -140°);
Figure 7(f) is attitude estimation result 3 for Figure 7(c), (α, β) = (-40°, -120°);
Figure 7(g) is attitude estimation result 4 for Figure 7(c), (α, β) = (-40°, -150°);
Figure 7(h) is the arithmetic mean of Figures 7(d)-7(g), taken as the final attitude estimate for Figure 7(c), (α, β) = (-40°, -135°).
[Detailed Description of the Embodiments]
To make the objectives, technical solutions and advantages of the present invention clearer, the invention is described in further detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the invention, not to limit it. Furthermore, the technical features involved in the embodiments described below may be combined with one another as long as they do not conflict.

In the present invention, the on-orbit three-dimensional space target is the on-orbit Hubble telescope, whose satellite platform is a cylinder carrying two rectangular solar panels; the target attitude to be estimated is the attitude of the satellite platform in the three-dimensional target coordinate system. Figure 1 shows the attitude estimation schematic. In the geocentric coordinate system, the X axis points to the prime meridian, the Z axis points to true north, and the Y axis direction follows the right-hand rule. In the target coordinate system, the target satellite centroid always points to the center of the earth, the Xs axis is parallel to the Y axis of the geocentric coordinate system, and the Ys axis is parallel to its Z axis. Attitude estimation aims to estimate, from the target satellite projection image in the camera coordinate system, the pitch angle α of the three-dimensional target satellite in the target coordinate system, i.e. ∠N′OSN, and the yaw angle β, i.e. ∠N′OSXS, where OSN is the axis of the cylindrical satellite platform, OSN′ is the projection of OSN onto the plane XSOSYS, and the camera plane XmOmYm is parallel to the XSOSYS plane of the target coordinate system and to the YOZ plane of the geocentric coordinate system.

The invention is further detailed below taking the target structure shown in Figure 4 as an example, with reference to the drawings and embodiments.
The flow of the present invention is shown in Figure 2. The specific implementation comprises the following steps: obtaining the template target multi-viewpoint characteristic views, building the template target geometric feature library, computing the geometric features of the image under test, and estimating the target attitude.

(A1) Obtaining the template target multi-viewpoint characteristic views, comprising the following sub-steps:

(A1.1) Building the template target three-dimensional model:

For a cooperative space target such as a satellite, the detailed three-dimensional structure and relative positions of the satellite platform, the payloads it carries, and its components can be obtained precisely. For a non-cooperative space target, the approximate geometry and relative positional relationships of the components are inferred from multi-viewpoint projection images of the target. Prior knowledge, such as the fact that while the target satellite is in orbit the line from the platform centroid to the earth's center is perpendicular to the platform and that its solar panels always point toward the incident sunlight, is used to further determine the spatial relationships between the satellite's components. The three-dimensional model of the target satellite is built with the 3D modeling tool Multigen Creator; Figure 4 shows the Hubble telescope model built with Multigen Creator.

(A1.2) Obtaining the template target multi-viewpoint characteristic views:

As shown in Figure 3, the Gaussian observation sphere is divided into 2592 two-dimensional planes by stepping the pitch angle α every γ and the yaw angle β every γ, with α = -180°~0°, β = -180°~180° and 3° < γ < 10°; in this example γ = 5°.

The invention uses a simulated Hubble telescope satellite as the template target. As shown in Figure 4, the three-dimensional template target Hubble telescope OT is placed at the center of the Gaussian observation sphere and orthographically projected from the sphere center onto the 2592 two-dimensional planes, yielding 2592 multi-viewpoint characteristic views Fi of the three-dimensional template target. Figure 5(a) shows the characteristic view of the simulated Hubble telescope for pitch and yaw (α, β) = (0°, 0°); Figure 5(b) for (α, β) = (0°, 90°); Figure 5(c) for (α, β) = (-90°, 90°); Figure 5(d) for (α, β) = (-180°, 90°). Each characteristic view Fi is a pixel matrix of width n = 500 and height m = 411; fi(x, y) is the gray value of Fi at point (x, y), 1 ≤ x ≤ 500, 1 ≤ y ≤ 411, i = 1, 2, …, K, K = 2592.
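As an illustration of sub-step (A1.2), the following minimal Python sketch (our own, not part of the patent; function and variable names are assumptions) enumerates the (α, β) viewpoint grid on the Gaussian observation sphere:

```python
import numpy as np

def viewpoint_grid(gamma_deg=5.0):
    """Enumerate (pitch alpha, yaw beta) viewpoints on the Gaussian
    observation sphere: alpha in [-180, 0), beta in [-180, 180),
    both stepped every gamma degrees, so K = 360*180/gamma^2."""
    alphas = np.arange(-180.0, 0.0, gamma_deg)
    betas = np.arange(-180.0, 180.0, gamma_deg)
    return [(a, b) for a in alphas for b in betas]

views = viewpoint_grid(5.0)   # 36 * 72 = 2592 views, as in the embodiment
assert len(views) == 2592
```

Rendering the 3D model from each of these viewpoints by orthographic projection from the sphere center would then produce the 2592 characteristic views Fi.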
(A2) Building the template target geometric feature library, comprising the following sub-steps, illustrated with frame i = 1886 of the 2592 characteristic views:

(A2.1) Computing the target body aspect ratio Ti,1 of each characteristic view Fi:

(A2.1.1) The input characteristic view Fi, shown in Figure 6(a) and corresponding to pitch and yaw (α, β) = (-50°, -115°), is thresholded with the maximum between-class variance (Otsu) criterion, giving the threshold Ti = 95. Pixel gray values fi(x, y) in Fi greater than 95 are set to 255 and those less than or equal to 95 are set to zero, giving the binary image Gi shown in Figure 6(b); gi(x, y) is the gray value of Gi at point (x, y).

(A2.1.2) Scan Gi top-to-bottom, left-to-right; when the current pixel value gi(x, y) equals 255, record its abscissa x = Topj and ordinate y = Topi and stop scanning. In this example Topj = 272, Topi = 87.

(A2.1.3) Scan Gi bottom-to-top, left-to-right; when gi(x, y) equals 255, record x = Bntj, y = Bnti and stop scanning. In this example Bntj = 330, Bnti = 315.

(A2.1.4) Scan Gi left-to-right, top-to-bottom; when gi(x, y) equals 255, record x = Leftj, y = Lefti and stop scanning. In this example Leftj = 152, Lefti = 139.

(A2.1.5) Scan Gi right-to-left, top-to-bottom; when gi(x, y) equals 255, record x = Rightj, y = Righti and stop scanning. In this example Rightj = 361, Righti = 282.

(A2.1.6) The target body aspect ratio of characteristic view Fi is defined as the ratio of the target height Hi to the target width Wi:

Ti,1 = Hi / Wi,

with Hi = |Topi - Bnti|, Wi = |Leftj - Rightj|, where |V| denotes the absolute value of the variable V. As shown in Figure 6(c), Ti,1 is the ratio of the target body height AC to the target body width CD. In this example Ti,1 = 1.0909, Hi = 228, Wi = 209.
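A minimal Python sketch of sub-step (A2.1) (our own illustration; it uses OpenCV's Otsu thresholding and pixel-extrema indexing rather than the four directional scans spelled out above, and assumes an 8-bit grayscale view):

```python
import cv2
import numpy as np

def body_aspect_ratio(view):
    """Ti,1: height/width of the minimum circumscribed (axis-aligned)
    rectangle of the target in the Otsu-binarized characteristic view."""
    _, binary = cv2.threshold(view, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    ys, xs = np.nonzero(binary)        # coordinates of all 255-valued pixels
    top, bottom = ys.min(), ys.max()   # Topi, Bnti
    left, right = xs.min(), xs.max()   # Leftj, Rightj
    H = bottom - top                   # Hi = |Topi - Bnti|
    W = right - left                   # Wi = |Leftj - Rightj|
    return H / W, binary, (top, bottom, left, right)
```

Taking the extrema over all foreground pixels yields the same bounding rectangle as the scans in (A2.1.2)-(A2.1.5).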
(A2.2) Computing the target longitudinal symmetry Ti,2 of each characteristic view Fi:

(A2.2.1) Compute the center point of characteristic view Fi: abscissa Cix = ⌊(Leftj + Rightj)/2⌋, ordinate Ciy = ⌊(Topi + Bnti)/2⌋, where ⌊V⌋ denotes the integer part of the variable V. In this example Cix = 256, Ciy = 201.

(A2.2.2) Count the pixels with gray value gi(x, y) = 255 in the region of binary image Gi with 1 ≤ x ≤ 500 and 1 ≤ y ≤ 201; this is the target upper-half area STi of characteristic view Fi. In this example it is the area of the region enclosed by rectangle abcd in Figure 6(d), STi = 10531.

(A2.2.3) Count the pixels with gray value 255 in the region with 1 ≤ x ≤ 500 and 202 ≤ y ≤ 411; this is the target lower-half area SDi. In this example it is the area of the region enclosed by rectangle cdef in Figure 6(d), SDi = 9685.

(A2.2.4) The target longitudinal symmetry of characteristic view Fi is defined as the ratio of the target upper-half area STi to the lower-half area SDi within the region enclosed by the minimum circumscribed rectangle: Ti,2 = STi / SDi. In this example Ti,2 = 1.0873.

(A2.3) Computing the target lateral symmetry Ti,3 of each characteristic view Fi:

(A2.3.1) Count the pixels with gray value gi(x, y) = 255 in the region of Gi with 1 ≤ x ≤ Cix and 1 ≤ y ≤ m; this is the target left-half area SLi of characteristic view Fi. In this example it is the area of the region enclosed by rectangle hukv in Figure 6(e), SLi = 10062.

(A2.3.2) Count the pixels with gray value 255 in the region with Cix + 1 ≤ x ≤ n and 1 ≤ y ≤ m; this is the target right-half area SRi. In this example it is the area of the region enclosed by rectangle ujvl in Figure 6(e), SRi = 10154.

(A2.3.3) The target lateral symmetry of characteristic view Fi is defined as the ratio of the target left-half area SLi to the right-half area SRi within the region enclosed by the minimum circumscribed rectangle: Ti,3 = SLi / SRi. In this example Ti,3 = 0.9909.
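Sub-steps (A2.2) and (A2.3) reduce to pixel counting; a minimal sketch (our own naming), reusing the binary image and bounding rectangle from the previous sketch:

```python
def symmetry_ratios(binary, top, bottom, left, right):
    """Ti,2 = STi/SDi and Ti,3 = SLi/SRi, splitting the image at the
    center (Cix, Ciy) of the minimum circumscribed rectangle."""
    cy = (top + bottom) // 2           # Ciy, integer part as in (A2.2.1)
    cx = (left + right) // 2           # Cix
    mask = binary == 255
    ST = mask[:cy + 1, :].sum()        # upper-half target area
    SD = mask[cy + 1:, :].sum()        # lower-half target area
    SL = mask[:, :cx + 1].sum()        # left-half target area
    SR = mask[:, cx + 1:].sum()        # right-half target area
    return ST / SD, SL / SR
```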
(A2.4) Computing the target principal-axis tilt angle Ti,4 of characteristic view Fi:

The target principal-axis tilt angle is defined as the angle θ between the target cylinder axis of characteristic view Fi and the horizontal direction of the image. This feature reflects the attitude of the target most clearly; it ranges from 0° to 180° and is represented as a one-dimensional floating-point number.

Figure 6(f) is a schematic of the Hubble telescope target principal-axis tilt angle: the plotted vector is the principal axis of the target cylinder of characteristic view Fi (in this example, the main axis of the Hubble telescope satellite platform), and its angle ∠QOR with the horizontal is the target principal-axis tilt angle.

(A2.4.1) Compute the centroid abscissa xi0 and ordinate yi0 of the binary image Gi corresponding to each characteristic view Fi. In this example xi0 = 252, yi0 = 212.

(A2.4.2) Compute the (p+q)-order central moments μi(p,q) of the binary image Gi corresponding to Fi:

μi(p,q) = Σ(x=1..n) Σ(y=1..m) (x - xi0)^p (y - yi0)^q gi(x, y), p = 0, 1, 2, q = 0, 1, 2.

(A2.4.3) Construct the real symmetric matrix

Mat = [ μi(2,0)  μi(1,1) ; μi(1,1)  μi(0,2) ],

and compute its eigenvalues V1, V2 and the corresponding eigenvectors. In this example the eigenvalues are V1 = 6.2955×10^9 and V2 = 2.3455×10^10.

(A2.4.4) Compute the target principal-axis tilt angle Ti,4 of characteristic view Fi (Figure 6(a)): with (ex, ey)^T the eigenvector corresponding to the larger of V1 and V2,

θi = (180/π)·atan2(ey, ex), and Ti,4 = θi if θi ≥ 0, otherwise Ti,4 = θi + 180°,

where the symbol π denotes pi and atan2 denotes the four-quadrant arctangent function. In this example the target principal-axis tilt angle Ti,4 = 50.005°.
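A minimal sketch of sub-step (A2.4) (our own naming; note that with image rows growing downward the sign convention of the angle differs from the usual mathematical one, and weighting foreground pixels by 1 instead of 255 leaves the eigenvectors unchanged):

```python
def spindle_tilt_angle(binary):
    """Ti,4: angle in [0, 180) between the target's principal axis and
    the image horizontal, from the second-order central moments of the
    binary view (eigenvector of the larger eigenvalue of Mat)."""
    ys, xs = np.nonzero(binary)
    x0, y0 = xs.mean(), ys.mean()                  # centroid (xi0, yi0)
    mu20 = np.sum((xs - x0) ** 2)
    mu02 = np.sum((ys - y0) ** 2)
    mu11 = np.sum((xs - x0) * (ys - y0))
    mat = np.array([[mu20, mu11], [mu11, mu02]])   # real symmetric Mat
    vals, vecs = np.linalg.eigh(mat)               # eigenvalues, ascending
    ex, ey = vecs[:, np.argmax(vals)]              # major-axis eigenvector
    return np.degrees(np.arctan2(ey, ex)) % 180.0
```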
(A2.5) Building the geometric feature library MF of the template target multi-viewpoint characteristic views Fi:

MF is the K×4 matrix whose i-th row {Ti,1, Ti,2, Ti,3, Ti,4} holds the geometric features of the i-th characteristic view Fi. In this example (Figure 6(a)), {Ti,1, Ti,2, Ti,3, Ti,4} = {1.0909, 1.0873, 0.9909, 50.005}.

(A2.6) Normalization:

The geometric feature library MF of the template target multi-viewpoint characteristic views Fi is normalized to obtain the template target normalized geometric feature library SMF, whose entries are

STi,j = Ti,j / Vecj, Vecj = max{T1,j, T2,j, …, Ti,j, …, TK,j}, i = 1, 2, …, K, j = 1, 2, 3, 4,

where the symbol max{V} denotes the maximum of the set V.
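The normalization of (A2.6) is a column-wise division by the per-feature maximum; a minimal sketch (our own naming):

```python
def normalize_library(MF):
    """SMF: STi,j = Ti,j / Vecj, with Vecj the column maximum of the
    K x 4 feature library MF. Vec is kept to normalize test images."""
    Vec = MF.max(axis=0)               # Vecj = max over i of Ti,j
    return MF / Vec, Vec
```

Keeping Vec is what allows the online stage to normalize the test-image features {G1, …, G4} consistently with the library.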
The online attitude estimation step is specifically as follows.

(B1) Computing the geometric features of the image under test, comprising the following sub-steps:

(B1.1) Preprocessing the image under test:

Space target imaging data is very noisy, has a low signal-to-noise ratio, and is severely blurred. Before any subsequent processing, the imaging data must therefore be preprocessed: it is first denoised, and then, given the characteristics of the data, an effective correction algorithm is applied to restore the space target image. In this example, non-local means filtering is selected with the following parameters: similarity window 5×5, search window 15×15, attenuation parameter 15; noise suppression is applied to the image under test first. Figure 7(a) shows simulated ground-based long-distance optical imaging data of the Hubble telescope, with pitch angle α and yaw angle β of (α, β) = (-40°, -125°); Figure 7(b) shows the non-local means noise-suppression result for Figure 7(a). A maximum likelihood estimation algorithm is then selected for deblurring, in this example with the following parameters: 8 outer-loop iterations, and 3 inner-loop iterations each for estimating the point spread function and the target image. This yields the preprocessed image g(x, y); Figure 7(c) shows the result of deblurring Figure 7(b) with the maximum likelihood estimation algorithm, i.e. the preprocessed image g(x, y).
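A minimal preprocessing sketch under the stated parameters. The non-local means call mirrors the embodiment's settings; for the deblurring step we substitute Richardson-Lucy deconvolution from scikit-image as a named stand-in (the patent's blind maximum-likelihood/MAP routine is not specified here) and assume a known point spread function psf:

```python
import cv2
import numpy as np
from skimage.restoration import richardson_lucy

def preprocess(image, psf):
    """Denoise with non-local means (5x5 similarity window, 15x15 search
    window, h=15 as in the embodiment), then deblur. Richardson-Lucy with
    a known PSF stands in for the patent's maximum-likelihood step."""
    denoised = cv2.fastNlMeansDenoising(image, h=15,
                                        templateWindowSize=5,
                                        searchWindowSize=15)
    restored = richardson_lucy(denoised.astype(np.float64) / 255.0,
                               psf, num_iter=8)
    return (np.clip(restored, 0.0, 1.0) * 255).astype(np.uint8)
```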
(B2) Extracting the geometric features of the image under test:

Substituting the preprocessed image g(x, y) for fi(x, y), sub-steps (A2.1) to (A2.4) are carried out to obtain the geometric features of the image under test {G1, G2, G3, G4}, which are normalized to give the normalized geometric features {SG1, SG2, SG3, SG4}:

SGj = Gj / Vecj, j = 1, 2, 3, 4.
(B3) Estimating the target attitude, comprising the following sub-steps:

(B3.1) Traverse the whole template target geometric feature library SMF and compute the Euclidean distances D1, …, DK between the geometric features {SG1, SG2, SG3, SG4} of the image under test and each row vector of SMF.

(B3.2) Select the four smallest values Ds, Dt, Du, Dv among D1, …, DK; the attitude of the image under test is set to the arithmetic mean of the template attitudes represented by Ds, Dt, Du, Dv. Figures 7(d)-7(g) show the template attitudes represented by Ds, Dt, Du, Dv, with pitch angle α and yaw angle β of (α, β) = (-40°, -130°), (-40°, -140°), (-40°, -120°) and (-40°, -150°) respectively. Figure 7(h) is the attitude estimate obtained as the arithmetic mean of Figures 7(d)-7(g), (α, β) = (-40°, -135°), which is the attitude estimation result for Figure 7(a).
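A minimal sketch of the matching step (our own naming; note that naively averaging yaw angles would need care near the ±180° wrap-around, which the embodiment's example values do not hit):

```python
def estimate_pose(SG, SMF, poses):
    """Match the normalized test features SG (length-4 vector) against
    the normalized K x 4 library SMF and average the (alpha, beta) poses
    of the four nearest characteristic views (Euclidean distance)."""
    D = np.linalg.norm(SMF - SG, axis=1)    # distances D1 ... DK
    nearest = np.argsort(D)[:4]             # indices of Ds, Dt, Du, Dv
    return poses[nearest].mean(axis=0)      # arithmetic mean attitude
```

Here poses would be the K×2 array of (α, β) pairs recorded when the characteristic views were generated.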
The results show that the error of the pitch-angle estimate is zero degrees and the error of the yaw-angle β estimate is 10 degrees.
Those skilled in the art will readily understand that the above are only preferred embodiments of the present invention and are not intended to limit it; any modification, equivalent substitution or improvement made within the spirit and principles of the present invention shall fall within its scope of protection.

Claims (10)

  1. An on-orbit three-dimensional space target attitude estimation method, comprising an offline feature-library construction step and an online attitude estimation step:
    the offline feature-library construction step being specifically:
    (A1) obtaining, from the three-dimensional model of the space target, the target multi-viewpoint characteristic views representing the target's various attitudes;
    (A2) extracting geometric features from the multi-viewpoint characteristic views to form a geometric feature library; the geometric features comprising the target body aspect ratio Ti,1, the target longitudinal symmetry Ti,2, the target lateral symmetry Ti,3, and the target principal-axis tilt angle Ti,4; the target body aspect ratio Ti,1 being the height-to-width ratio of the target's minimum circumscribed rectangle; the target longitudinal symmetry Ti,2 being the ratio of the target's upper-half area to its lower-half area within the region enclosed by the minimum circumscribed rectangle; the target lateral symmetry Ti,3 being the ratio of the target's left-half area to its right-half area within that region; the target principal-axis tilt angle Ti,4 being the angle between the principal axis of the target cylinder in the characteristic view and the horizontal direction of the view;
    the online attitude estimation step being specifically:
    (B1) preprocessing the image of the on-orbit space target under test;
    (B2) extracting features from the preprocessed image, the features being the same as those extracted in step (A2);
    (B3) matching the features extracted from the image under test against the geometric feature library, the space target attitude represented by the characteristic view corresponding to the matching result being the target attitude in the image under test.
  2. The on-orbit three-dimensional space target attitude estimation method according to claim 1, wherein the target body aspect ratio Ti,1 is extracted as follows:
    (A2.1.1) applying the maximum between-class variance threshold criterion to characteristic view Fi to obtain a threshold Ti, setting pixel gray values fi(x, y) in Fi greater than Ti to 255 and those less than or equal to Ti to zero, thereby obtaining a binary image Gi, Gi being a pixel matrix of width n and height m and gi(x, y) the gray value of Gi at point (x, y);
    (A2.1.2) scanning Gi top-to-bottom, left-to-right; when the current pixel value gi(x, y) equals 255, recording its abscissa x = Topj and ordinate y = Topi and stopping the scan;
    (A2.1.3) scanning Gi bottom-to-top, left-to-right; when gi(x, y) equals 255, recording x = Bntj, y = Bnti and stopping the scan;
    (A2.1.4) scanning Gi left-to-right, top-to-bottom; when gi(x, y) equals 255, recording x = Leftj, y = Lefti and stopping the scan;
    (A2.1.5) scanning Gi right-to-left, top-to-bottom; when gi(x, y) equals 255, recording x = Rightj, y = Righti and stopping the scan;
    (A2.1.6) the target body aspect ratio of characteristic view Fi being
    Ti,1 = Hi / Wi,
    with Hi = |Topi - Bnti|, Wi = |Leftj - Rightj|, where |V| denotes the absolute value of the variable V.
  3. The on-orbit three-dimensional space target attitude estimation method according to claim 2, wherein the target longitudinal symmetry Ti,2 is extracted as follows:
    (A2.2.1) computing the center point of characteristic view Fi: abscissa Cix = ⌊(Leftj + Rightj)/2⌋ and ordinate Ciy = ⌊(Topi + Bnti)/2⌋, where ⌊V⌋ denotes the integer part of the variable V;
    (A2.2.2) counting the pixels with gray value 255 in the region of binary image Gi with 1 ≤ x ≤ n and 1 ≤ y ≤ Ciy, this being the target upper-half area STi of characteristic view Fi;
    (A2.2.3) counting the pixels with gray value 255 in the region with 1 ≤ x ≤ n and Ciy + 1 ≤ y ≤ m, this being the target lower-half area SDi;
    (A2.2.4) computing the target longitudinal symmetry of characteristic view Fi:
    Ti,2 = STi / SDi.
  4. The on-orbit three-dimensional space target attitude estimation method according to claim 3, wherein the target lateral symmetry Ti,3 is extracted as follows:
    (A2.3.1) counting the pixels with gray value 255 in the region of binary image Gi with 1 ≤ x ≤ Cix and 1 ≤ y ≤ m, this being the target left-half area SLi of characteristic view Fi;
    (A2.3.2) counting the pixels with gray value 255 in the region with Cix + 1 ≤ x ≤ n and 1 ≤ y ≤ m, this being the target right-half area SRi;
    (A2.3.3) computing the target lateral symmetry of characteristic view Fi:
    Ti,3 = SLi / SRi.
  5. The on-orbit three-dimensional space target attitude estimation method according to claim 4, wherein the target principal-axis tilt angle Ti,4 is extracted as follows:
    (A2.4.1) computing the centroid abscissa xi0 and ordinate yi0 of the binary image Gi corresponding to characteristic view Fi:
    xi0 = mi(1,0) / mi(0,0), yi0 = mi(0,1) / mi(0,0),
    where mi(j,k) = Σ(x=1..n) Σ(y=1..m) x^j y^k gi(x, y), with j = 0, 1 and k = 0, 1;
    (A2.4.2) computing the (p+q)-order central moments μi(p,q) of the binary image Gi corresponding to Fi:
    μi(p,q) = Σ(x=1..n) Σ(y=1..m) (x - xi0)^p (y - yi0)^q gi(x, y), with p = 0, 1, 2 and q = 0, 1, 2;
    (A2.4.3) constructing the real symmetric matrix
    Mat = [ μi(2,0)  μi(1,1) ; μi(1,1)  μi(0,2) ],
    and computing its eigenvalues V1, V2 and the corresponding eigenvectors;
    (A2.4.4) computing the target principal-axis tilt angle Ti,4 of characteristic view Fi: with (ex, ey)^T the eigenvector corresponding to the larger of V1 and V2,
    θi = (180/π)·atan2(ey, ex), and Ti,4 = θi if θi ≥ 0, otherwise Ti,4 = θi + 180°,
    where the symbol π denotes pi and atan2 denotes the four-quadrant arctangent function.
  6. The on-orbit three-dimensional space target attitude estimation method according to any one of claims 1 to 4, wherein the geometric feature library constructed in step (A2) is further normalized, and the image features extracted in step (B2) are likewise normalized.
  7. The on-orbit three-dimensional space target attitude estimation method according to any one of claims 1 to 4, wherein a specific implementation of step (A1), obtaining from the three-dimensional model the target multi-viewpoint characteristic views representing the target's various attitudes, is:
    dividing the Gaussian observation sphere into K two-dimensional planes by stepping the pitch angle α every γ and the yaw angle β every γ, with α = -180°~0°, β = -180°~180° and K = 360*180/γ²;
    placing the three-dimensional model of the space target OT at the center of the Gaussian observation sphere and orthographically projecting it from the sphere center onto the K two-dimensional planes, obtaining K multi-viewpoint characteristic views Fi of the three-dimensional template target; each characteristic view Fi being a pixel matrix of width n and height m, fi(x, y) being the gray value of Fi at point (x, y), 1 ≤ x ≤ n, 1 ≤ y ≤ m, i = 1, 2, …, K.
  8. The on-orbit three-dimensional space target attitude estimation method according to any one of claims 1 to 4, wherein step (B1) first suppresses noise in the image under test with non-local means filtering and then deblurs it with a maximum likelihood estimation algorithm.
  9. The on-orbit three-dimensional space target attitude estimation method according to any one of claims 1 to 4, wherein a specific implementation of (B3) is:
    (B3.1) traversing the whole geometric feature library SMF and computing the Euclidean distances, denoted D1, …, DK, between the four geometric features {SG1, SG2, SG3, SG4} of the image under test and each row vector of SMF, K being the number of target multi-viewpoint characteristic views;
    (B3.2) selecting the four smallest values Ds, Dt, Du, Dv among D1, …, DK and taking the arithmetic mean of the four corresponding target attitudes, this being the target attitude in the image under test.
  10. An on-orbit three-dimensional space target attitude estimation system, comprising an offline feature-library construction module and an online attitude estimation module:
    the offline feature-library construction module specifically comprising:
    a first sub-module configured to acquire, from the three-dimensional model of the space target, the target multi-viewpoint characteristic views representing the target's various attitudes;
    a second sub-module configured to extract geometric features from the multi-viewpoint characteristic views to form a geometric feature library; the geometric features comprising the target body aspect ratio Ti,1, the target longitudinal symmetry Ti,2, the target lateral symmetry Ti,3, and the target principal-axis tilt angle Ti,4; the target body aspect ratio Ti,1 being the height-to-width ratio of the target's minimum circumscribed rectangle; the target longitudinal symmetry Ti,2 being the ratio of the target's upper-half area to its lower-half area within the region enclosed by the minimum circumscribed rectangle; the target lateral symmetry Ti,3 being the ratio of the target's left-half area to its right-half area within that region; the target principal-axis tilt angle Ti,4 being the angle between the principal axis of the target cylinder in the characteristic view and the horizontal direction of the view;
    the online attitude estimation module specifically comprising:
    a third sub-module configured to preprocess the image of the on-orbit space target under test;
    a fourth sub-module configured to extract features from the preprocessed image, the features being the same as those extracted by the second sub-module;
    a fifth sub-module configured to match the features extracted from the image under test against the geometric feature library, the space target attitude represented by the characteristic view corresponding to the matching result being the target attitude in the image under test.
PCT/CN2014/085717 2013-12-28 2014-09-02 Attitude estimation method and system for an on-orbit three-dimensional space target under model constraint WO2015096508A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US15/106,690 US20170008650A1 (en) 2013-12-28 2014-09-02 Attitude estimation method and system for on-orbit three-dimensional space object under model restraint

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201310740553.6A 2013-12-28 2013-12-28 Attitude estimation method and system for an on-orbit three-dimensional space target under model constraint
CN2013107405536 2013-12-28

Publications (1)

Publication Number Publication Date
WO2015096508A1 (zh) 2015-07-02

Family

ID=53477486

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2014/085717 WO2015096508A1 (zh) 2013-12-28 2014-09-02 Attitude estimation method and system for an on-orbit three-dimensional space target under model constraint

Country Status (3)

Country Link
US (1) US20170008650A1 (zh)
CN (1) CN104748750B (zh)
WO (1) WO2015096508A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112651437A * 2020-12-24 2021-04-13 Beijing Institute of Technology Deep-learning-based pose estimation method for non-cooperative space targets
CN113470113A * 2021-08-13 2021-10-01 Southwest University of Science and Technology Component pose estimation method fusing BRIEF feature matching with ICP point-cloud registration

Families Citing this family (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102170841B1 (ko) * 2014-05-22 2020-10-27 뉴라이언 홀딩스 리미티드 Handheld vaporizing device
WO2016116856A1 (en) * 2015-01-20 2016-07-28 Politecnico Di Torino Method and system for measuring the angular velocity of a body orbiting in space
US9940504B2 (en) * 2015-06-17 2018-04-10 Itseez3D, Inc. Method to produce consistent face texture
CN105345453B * 2015-11-30 2017-09-22 Beijing Satellite Manufacturing Factory Pose determination method based on automated assembly and adjustment by industrial robots
CN107958466B * 2017-12-01 2022-03-29 Datang Guoxin Binhai Offshore Wind Power Generation Co., Ltd. SLAM-algorithm-optimized model-based tracking method
CN108109208B * 2017-12-01 2022-02-08 Tongji University Augmented reality method for offshore wind farms
CN108319567A * 2018-02-05 2018-07-24 Beihang University Gaussian-process-based method for computing the uncertainty of space target attitude estimation
CN108320310B * 2018-02-06 2021-09-28 Harbin Institute of Technology Image-sequence-based three-dimensional attitude estimation method for space targets
CN108408087B * 2018-02-12 2019-07-16 Beijing Space Technology Research and Test Center On-orbit test method for long-life low-orbit manned spacecraft
CN108680165B * 2018-05-04 2020-11-27 Unit 63920 of the Chinese People's Liberation Army Optical-image-based method and apparatus for determining the attitude of a target aircraft
CN108873917A * 2018-07-05 2018-11-23 Taiyuan University of Technology UAV autonomous landing control system and method for mobile platforms
US11873123B2 (en) * 2020-01-05 2024-01-16 United States Of America As Represented By The Secretary Of The Air Force Aerospace vehicle navigation and control system comprising terrestrial illumination matching module for determining aerospace vehicle position and attitude
CN111522007A * 2020-07-06 2020-08-11 Piesat Information Technology Co., Ltd. SAR imaging simulation method and system fusing real scenes with simulated targets
CN111932620B * 2020-07-27 2024-01-12 Genjian Sports Technology (Beijing) Co., Ltd. Method for judging whether a volleyball serve clears the net and method for obtaining the serve speed
CN112378383B * 2020-10-22 2021-10-19 Beihang University Binocular vision measurement method for the relative pose of non-cooperative targets based on circle and line features
CN112509038B * 2020-12-15 2023-08-22 South China University of Technology Adaptive image template extraction method and system combining visual simulation, and storage medium
CN112634326A * 2020-12-17 2021-04-09 Shenzhen Intellifusion Technologies Co., Ltd. Target tracking method and apparatus, electronic device, and storage medium
CN114693988B * 2020-12-31 2024-05-03 Shanghai Paixing Information Technology Co., Ltd. Method, system and storage medium for autonomous determination of satellite pose
CN112683265B * 2021-01-20 2023-03-24 PLA Rocket Force University of Engineering MIMU/GPS integrated navigation method based on fast ISS set-membership filtering
CN115994942B * 2023-03-23 2023-06-27 Wuhan Dashi Intelligence Technology Co., Ltd. Symmetry extraction method, apparatus, device and storage medium for three-dimensional models
CN116109706B * 2023-04-13 2023-06-23 National University of Defense Technology Space target inversion method, apparatus and device based on prior geometric constraints
CN116385440B * 2023-06-05 2023-08-11 Shandong Juning Machinery Co., Ltd. Visual inspection method for arc-shaped blades

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0933649A (ja) * 1995-07-21 1997-02-07 Toshiba Corp ISAR image target identification processing device
US20060284050A1 (en) * 2005-03-02 2006-12-21 Busse Richard J Portable air defense ground based launch detection system
CN101989326A (zh) * 2009-07-31 2011-03-23 Samsung Electronics Co., Ltd. Human body posture recognition method and apparatus
CN102236794A (zh) * 2010-05-07 2011-11-09 MVTec Software GmbH Recognition and pose determination of 3D objects in 3D scenes
CN102298649A (zh) * 2011-10-09 2011-12-28 Nanjing University Spatial trajectory retrieval method for human motion data
CN102324043A (zh) * 2011-09-07 2012-01-18 Beijing University of Posts and Telecommunications DCT-based feature descriptor and image matching method with optimized spatial quantization

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070285304A1 (en) * 2006-03-16 2007-12-13 Guy Cooper Target orbit modification via gas-blast
FR2899344B1 * 2006-04-03 2008-08-15 Eads Astrium Sas Soc Par Actio Method for restoring the line-of-sight motions of an optical instrument
US8121347B2 (en) * 2006-12-12 2012-02-21 Rutgers, The State University Of New Jersey System and method for detecting and tracking features in images
CN100504299C * 2007-02-06 2009-06-24 Huazhong University of Science and Technology Method for acquiring three-dimensional information of a non-cooperative space object
US8041118B2 (en) * 2007-02-16 2011-10-18 The Boeing Company Pattern recognition filters for digital images
US8240611B2 (en) * 2009-08-26 2012-08-14 Raytheon Company Retro-geo spinning satellite utilizing time delay integration (TDI) for geosynchronous surveillance
CN101650178B * 2009-09-09 2011-11-30 National University of Defense Technology Image matching method guided by control feature points and optimal local homography in three-dimensional reconstruction from image sequences
CN101726298B * 2009-12-18 2011-06-29 Huazhong University of Science and Technology Stereo landmark selection and reference map preparation method for forward-looking navigation and guidance
FR2991785B1 * 2012-06-06 2014-07-18 Astrium Sas Stabilization of the line of sight of an imaging system on board a satellite
JP6044293B2 * 2012-11-19 2016-12-14 IHI Corporation Three-dimensional object recognition device and three-dimensional object recognition method


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHANG, TIANXU ET AL.: "A Novel Multi-Scale Intelligent Recursive Recognition Method for Three-Dimensional Moving Targets", ACTA AUTOMATICA SINICA, vol. 32, no. 5, 30 September 2006 (2006-09-30) *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112651437A * 2020-12-24 2021-04-13 Beijing Institute of Technology Deep-learning-based pose estimation method for non-cooperative space targets
CN112651437B * 2020-12-24 2022-11-11 Beijing Institute of Technology Deep-learning-based pose estimation method for non-cooperative space targets
CN113470113A * 2021-08-13 2021-10-01 Southwest University of Science and Technology Component pose estimation method fusing BRIEF feature matching with ICP point-cloud registration
CN113470113B * 2021-08-13 2023-07-21 Southwest University of Science and Technology Component pose estimation method fusing BRIEF feature matching with ICP point-cloud registration

Also Published As

Publication number Publication date
US20170008650A1 (en) 2017-01-12
CN104748750B (zh) 2015-12-02
CN104748750A (zh) 2015-07-01

Similar Documents

Publication Publication Date Title
WO2015096508A1 (zh) 一种模型约束下的在轨三维空间目标姿态估计方法及系统
Peng et al. Pose measurement and motion estimation of space non-cooperative targets based on laser radar and stereo-vision fusion
US9799139B2 (en) Accurate image alignment to a 3D model
CN111862201B (zh) 一种基于深度学习的空间非合作目标相对位姿估计方法
CN106679634B (zh) 一种基于立体视觉的空间非合作目标位姿测量方法
Kunz et al. Map building fusing acoustic and visual information using autonomous underwater vehicles
CN108917753B (zh) 基于从运动恢复结构的飞行器位置确定方法
Wang et al. Accurate georegistration of point clouds using geographic data
d'Angelo et al. Dense multi-view stereo from satellite imagery
Zhu et al. Vision navigation for aircrafts based on 3D reconstruction from real-time image sequences
US11704825B2 (en) Method for acquiring distance from moving body to at least one object located in any direction of moving body by utilizing camera-view depth map and image processing device using the same
CN116563377A (zh) 一种基于半球投影模型的火星岩石测量方法
CN112179373A (zh) 一种视觉里程计的测量方法及视觉里程计
Guan et al. Minimal cases for computing the generalized relative pose using affine correspondences
CN117197333A (zh) 基于多目视觉的空间目标重构与位姿估计方法及系统
CN116883590A (zh) 一种三维人脸点云优化方法、介质及系统
Jiang et al. Icp stereo visual odometry for wheeled vehicles based on a 1dof motion prior
CN115965712A (zh) 一种建筑物二维矢量图构建方法、系统、设备及存储介质
Troiani et al. 1-point-based monocular motion estimation for computationally-limited micro aerial vehicles
Jang et al. Topographic information extraction from KOMPSAT satellite stereo data using SGM
Chen et al. 3d map building based on stereo vision
Yoshisada et al. Indoor map generation from multiple LiDAR point clouds
Jokinen et al. Lower bounds for as-built deviations against as-designed 3-D Building Information Model from single spherical panoramic image
Wan et al. Enhanced lunar topographic mapping using multiple stereo images taken by Yutu-2 rover with changing illumination conditions
Zhang Dense point cloud extraction from oblique imagery

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14874440

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 15106690

Country of ref document: US

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 14874440

Country of ref document: EP

Kind code of ref document: A1