CN111680552B - An intelligent identification method for characteristic parts - Google Patents


Info

Publication number
CN111680552B
CN111680552B
Authority
CN
China
Prior art keywords
spacecraft
image
dimensional geometric
images
feature parts
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010350572.8A
Other languages
Chinese (zh)
Other versions
CN111680552A (en)
Inventor
汤亮
袁利
关新
王有懿
姚宁
宗红
冯骁
张科备
郝仁剑
郭子熙
刘昊
龚立纲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Institute of Control Engineering
Original Assignee
Beijing Institute of Control Engineering
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Institute of Control Engineering filed Critical Beijing Institute of Control Engineering
Priority to CN202010350572.8A priority Critical patent/CN111680552B/en
Publication of CN111680552A publication Critical patent/CN111680552A/en
Application granted granted Critical
Publication of CN111680552B publication Critical patent/CN111680552B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/10 - Terrestrial scenes
    • G06V20/13 - Satellite images
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 - Geometric image transformations in the plane of the image
    • G06T3/40 - Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4023 - Scaling of whole images or parts thereof based on decimating pixels or lines of pixels; based on inserting pixels or lines of pixels
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/60 - Analysis of geometric attributes
    • G06T7/62 - Analysis of geometric attributes of area, perimeter, diameter or volume
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G06V10/25 - Determination of region of interest [ROI] or a volume of interest [VOI]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Astronomy & Astrophysics (AREA)
  • Remote Sensing (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract


The invention provides an intelligent identification method for characteristic parts, suitable for identifying local typical parts of failed satellites in space. Traditional identification of typical target parts based on analytical algorithms suffers from problems such as large edge-point identification errors; the present invention therefore designs an intelligent identification method for local typical feature parts based on a convolutional neural network. First, for the task of identifying local typical parts of failed satellites, a database of local typical satellite parts containing rich information is created, the components of the typical parts are annotated, and training and test data sets are constructed. A deep convolutional network is then built and its parameters are trained on the training set; once training is complete, the network can intelligently identify typical parts in an input image.

Description

Feature part intelligent recognition method
Technical Field
The invention belongs to the field of space target feature recognition, and particularly relates to an intelligent feature part recognition method.
Background
Protection of, and on-orbit servicing for, important space targets, chiefly satellites, have become an important development direction of aerospace technology worldwide, and identification of satellite feature parts (such as antennas, solar panels and engines) is a key link. With the increasing maturity of spacecraft proximity operations and optical imaging technology, high-resolution optical imaging of space objects from space-based platforms has become possible, which in turn places higher demands on satellite target identification, particularly the accurate identification of satellite feature parts.
Space target identification mainly uses space target characteristic data to judge and identify attributes such as identity, attitude and state. At present, target characteristic data come mainly from ground-based detection, including optical and radar equipment. However, the detection data of ground-based equipment depend on many factors such as observation angle, target features, sun angle and the atmosphere, so the detection results carry great uncertainty. Although research on space-based target detection and identification has also been conducted at home and abroad, it concentrates on identifying point targets and on-orbit motion states at long range; the methods adopted rely on known feature points, impose strong constraints on those points, and once the known feature points change, the existing feature-point identification methods can no longer identify them accurately. In recent years, the development of deep learning has greatly improved image target recognition and brought new methods to the field of space target recognition.
Disclosure of Invention
The technical problem solved by the invention is: to overcome the defect that existing recognition technology depends excessively on known feature points, an intelligent recognition method for feature parts is provided, which can intelligently recognize typical target parts and typical features in a complex space environment and provides technical support for deep recognition and understanding of on-orbit scenes.
The technical scheme adopted by the invention is as follows: the intelligent characteristic part identifying process includes the following steps:
(1) Based on an attitude rotation method, constructing a spacecraft image database in space, and storing three-dimensional geometric images of the spacecraft under the condition of different three-axis attitude angle combinations in the spacecraft image database;
(2) The pixel values of the red (R), green (G) and blue (B) color channels of each three-dimensional geometric image in the spacecraft image database are weighted and averaged to obtain a grayscale image of the three-dimensional geometric image; measurement noise is then added on the basis of the grayscale image to obtain a noisy grayscale image;
(3) Performing reduction processing on the gray level image of each three-dimensional geometric image in the spacecraft image database by adopting a cubic interpolation method to obtain a corresponding interpolated image;
(4) Rotating the grayscale image of each three-dimensional geometric image in the spacecraft image database from step (2) to obtain a rotated grayscale image;
(5) Labelling the outlines of the spacecraft feature parts on the grayscale images of all three-dimensional geometric images from step (2), the noisy grayscale images, the interpolated images from step (3), and the rotated grayscale images from step (4); then assigning a name to each labelled feature part outline to obtain the feature part labels, completing the labelling of the target spacecraft's feature parts;
(6) Constructing a convolutional neural network and training it with the majority of the images in the spacecraft image database whose feature part outlines have been labelled, obtaining a trained convolutional neural network; then inputting the remaining labelled images from the database into the trained network to identify the spacecraft feature parts, and outputting the recognition results, realizing intelligent identification of the spacecraft feature parts.
Preferably, in step (1), the spacecraft image database in space is constructed based on an attitude rotation method, with three-dimensional geometric images of the spacecraft under different three-axis attitude angle combinations stored in the database, specifically:
(1) Based on the attitude rotation method, a spacecraft image database in space is constructed: the three-axis attitude angles of the target spacecraft are set as the roll angle φ, pitch angle θ and yaw angle ψ. Each attitude angle varies over the range [0°, 360°] and takes a value every N degrees, forming a number of three-axis attitude angle combinations of the spacecraft; these are imported into simulation modelling software, and three-dimensional geometric images of the spacecraft under the different three-axis attitude angle combinations are acquired to obtain the spacecraft image database.
Preferably, in step (2) the pixel values of the red (R), green (G) and blue (B) color channels of each three-dimensional geometric image in the spacecraft image database are weighted and averaged to obtain a grayscale image of the three-dimensional geometric image, specifically:
Gray(i,j) = 0.299×R(i,j) + 0.587×G(i,j) + 0.114×B(i,j)
wherein i, j are the row and column coordinates of the grayscale image, i ≥ 1, j ≥ 1; R(i,j), G(i,j), B(i,j) are the pixel values of the red, green and blue channels at row i, column j; and Gray(i,j) is the pixel value at row i, column j of the grayscale image.
Preferably, in step (3) the grayscale image of each three-dimensional geometric image in the spacecraft image database is reduced by cubic (bicubic) interpolation to obtain a corresponding interpolated image, specifically:
f(i+u, j+v) = A·B·C
wherein f(i+u, j+v) is the pixel value at row i+u, column j+v of the interpolated image, u and v are the interpolation offsets (0 ≤ u, v < 1), and A, B and C are coefficient matrices of the form:
A = [S(1+u) S(u) S(1−u) S(2−u)]
B = the 4×4 matrix of neighbouring source pixel values f(i−1+m, j−1+n), 0 ≤ m, n ≤ 3
C = [S(1+v) S(v) S(1−v) S(2−v)]^T
wherein S(·) is the interpolation kernel, specifically
S(x) = 1 − 2|x|² + |x|³, for 0 ≤ |x| < 1;
S(x) = 4 − 8|x| + 5|x|² − |x|³, for 1 ≤ |x| < 2;
S(x) = 0, otherwise.
Through the above preferred interpolation scheme and parameter requirements, the image target recognition effect is greatly improved.
Preferably, in step (4) the grayscale image of each three-dimensional geometric image in the spacecraft image database of step (2) is rotated to obtain a rotated grayscale image, specifically:
Let the coordinates of a pixel in the original image (i.e. the grayscale image of the three-dimensional geometric image) be (i, j), and its coordinates in the rotated grayscale image be (i₁, j₁). The matrix expression of the rotation transformation is:
[i₁ j₁ 1] = [i j 1] × M, where
M = [  cosθ                 sinθ                 0 ]
    [ −sinθ                 cosθ                 0 ]
    [ −a·cosθ+b·sinθ+c      −a·sinθ−b·cosθ+d     1 ]
and the inverse transformation is:
[i j 1] = [i₁ j₁ 1] × M⁻¹, where
M⁻¹ = [  cosθ                −sinθ                0 ]
      [  sinθ                 cosθ                0 ]
      [ a−c·cosθ−d·sinθ       b+c·sinθ−d·cosθ     1 ]
wherein θ is the angle of rotation about the centre point of the original image (i.e. the grayscale image of the three-dimensional geometric image), in the plane of the original image, when transforming from the original image to the rotated grayscale image;
a, b are the coordinates of the rotation centre before rotation, and c, d are the coordinates of the centre point after rotation:
a = w₀/2, b = h₀/2, c = w₁/2, d = h₁/2
wherein w₀ is the width of the original image (before rotation) and h₀ its length; w₁ is the width of the rotated grayscale image and h₁ its length. To realistically simulate the constraint imposed by the imaging size (length and width) of the optical camera, the imaging size of the rotated spacecraft within the camera changes; the rotated image length h₁ and width w₁ therefore differ from the original length h₀ and width w₀.
Through the preferable rotation scheme and parameter requirements, the image target recognition effect is greatly improved.
Preferably, in step (1), the spacecraft image database in space is constructed based on the attitude rotation method, with three-dimensional geometric images of the spacecraft under different three-axis attitude angle combinations stored in the database, specifically:
The target spacecraft are M kinds of spacecraft in space. The three-axis attitude angle combinations of the groups of target spacecraft are imported into STK software, three-dimensional geometric images of the M kinds of spacecraft under the different three-axis attitude angle combinations are acquired in STK, and the images are stored in the spacecraft image database in space.
Preferably, in order to simulate the noise produced when real imaging equipment photographs the spacecraft, after the grayscale image is obtained in step (2), noise is added and the result is taken as the final grayscale image of step (2); measurement noise is added on the basis of the grayscale image to obtain the noisy grayscale image, specifically: Gaussian noise is added for blurring, which further improves recognition accuracy.
Gaussian noise is added for blurring on the basis of the grayscale image of the three-dimensional geometric image, giving the noisy grayscale image, specifically:
G_noise(i,j) = Gray(i,j) + G_GB(i,j)
wherein i, j are the row and column coordinates of the grayscale image, Gray(i,j) is the grayscale pixel value at row i, column j, G_noise(i,j) is the pixel value at row i, column j of the noisy grayscale image, and G_GB(i,j) is zero-mean Gaussian noise smoothed by a Gaussian blur kernel, where σ is the standard deviation of the Gaussian normal distribution and r is the blur radius of the noisy grayscale image.
After noise is added, a gray image with noise of the spacecraft is obtained, a picture of the spacecraft imaged in a space environment can be simulated more realistically, an accurate image is provided for neural network training and feature part recognition, and the robustness and accuracy of feature part recognition are improved.
Preferably, training the constructed convolutional neural network specifically comprises: the convolutional neural network receives image data as input and outputs a predicted label for the feature part of the image; the predicted label and the feature-part label of step (5) are fed together into a set label loss function, the training error is computed through the loss function, and the neural network parameters are adjusted dynamically according to the training error to achieve error convergence; once the error converges to the set requirement, the trained convolutional neural network is obtained.
Preferably, among the different three-axis attitude angle combinations in step (1), each attitude angle varies over the range [0°, 360°] and takes a value every N degrees, where N is a divisor of 360, jointly forming (360/N)³ three-axis attitude angle combinations of the target spacecraft.
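The combinatorics of the attitude sampling can be sketched as follows; `attitude_combinations` is a hypothetical helper name, and half-open sampling of [0°, 360°) every N degrees is assumed so that exactly (360/N)³ combinations result.

```python
from itertools import product

def attitude_combinations(N: int):
    """Enumerate (roll, pitch, yaw) triples sampled every N degrees.

    Sampling [0, 360) in steps of N (N a divisor of 360) yields
    exactly (360 // N) ** 3 three-axis attitude combinations.
    """
    if 360 % N != 0:
        raise ValueError("N must be a divisor of 360")
    angles = range(0, 360, N)
    return [(r, p, y) for r, p, y in product(angles, angles, angles)]

# With the preferred N = 90 degrees, (360/90)^3 = 64 combinations result.
combos = attitude_combinations(90)
```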
Preferably, the three-dimensional geometric images of the spacecraft under different three-axis attitude angle combinations stored in the spacecraft image database of step (1) are obtained as follows:
the three-axis attitude angle combinations of the groups of target spacecraft are imported into simulation modelling software, preferably the Satellite Tool Kit (STK), in which a three-dimensional model of the spacecraft can be constructed, so that three-dimensional geometric images of the spacecraft under the different three-axis attitude angle combinations are obtained.
Compared with the prior art, the invention has the beneficial effects that:
(1) The invention provides the intelligent recognition method for the characteristic parts, which combines characteristic extraction and characteristic recognition, avoids manual design of the characteristics, has stronger universality and greatly improves the recognition efficiency and accuracy of the space target characteristic parts.
(2) The method is based on satellite picture data of various sources, including two-dimensional images obtained by projection of a satellite three-dimensional model, and the like, and the actual running environment of the in-orbit satellite is considered to perform simulated rendering on the satellite images, so that more real satellite image data can be obtained, the number of the satellite image data can be increased, and the training of a convolution network is facilitated.
(3) The intelligent feature part identification method of the invention does not depend on known on-orbit feature points, has strong recognition capability for spacecraft without feature points, possesses greater universality and recognition robustness, and greatly improves the efficiency and accuracy of identifying spacecraft feature parts such as solar panels.
Drawings
FIG. 1 is a schematic illustration of the process of the present invention;
FIG. 2 is a three-dimensional model of a spacecraft modeled by the method of the present invention;
FIG. 3 is a graph of a loss function during training of the method of the present invention;
FIG. 4 shows the recognition result obtained by the method of the present invention.
Detailed Description
The invention is described in further detail below with reference to the drawings and the specific embodiments.
The invention discloses an intelligent identification method for a characteristic part, which is suitable for the field of identification of a local typical part of a space failure satellite. The invention designs an intelligent recognition method for a local typical characteristic part based on a convolutional neural network. Firstly, a satellite local typical part database containing rich information is created for a failure satellite local typical part recognition task, components of typical parts are marked, and a training data set and a testing data set are constructed. Then constructing a deep convolution network, training network parameters by using a training data set, and after training, intelligently identifying typical parts from an input image by the network.
Tasks such as capturing failed spacecraft in space and rendezvous and docking between spacecraft require accurate acquisition of information such as the relative attitude of the spacecraft, and identification of spacecraft feature parts is an important technical means of estimating such information. The conventional feature part identification method based on analytical algorithms suffers from problems such as large edge-point identification errors and over-reliance on manually identified known feature points. Addressing these problems, the invention designs an intelligent recognition method for local typical feature parts based on a convolutional neural network, removes the over-reliance on manually identified known feature points, and realizes recognition of local typical feature parts of a spacecraft (such as solar wings) in complex space environments with weak light, blurred and jittering images, and the like.
As shown in fig. 1, the intelligent characteristic part identification method of the invention comprises the following steps:
(1) A spacecraft image database in space is constructed based on the attitude rotation method: the three-axis attitude angles of the target spacecraft are set as the roll angle φ, pitch angle θ and yaw angle ψ. Preferably, each attitude angle varies over the range [0°, 360°] and takes a value every N degrees (N being a divisor of 360, preferably 90°), jointly forming (360/N)³ three-axis attitude angle combinations of the target spacecraft; these combinations are imported into simulation modelling software, and three-dimensional geometric images of the spacecraft under the different combinations are acquired to obtain the spacecraft image database. A further preferred scheme is as follows:
(1-1) The three-axis attitude angles of the spacecraft, the roll angle φ, pitch angle θ and yaw angle ψ, are each traversed over [0°, 360°] in steps of N degrees; traversing all combinations of roll angle φ, pitch angle θ and yaw angle ψ jointly yields (360/N)³ combinations.
(1-2) The i-th spacecraft is selected in the Satellite Tool Kit (STK) simulation modelling software, the three-axis attitude of the spacecraft is set to each of the traversal combinations obtained in (1-1), and the geometric picture of the spacecraft under each attitude combination is captured.
(1-3) The (i+1)-th spacecraft is then selected in STK and processed in the same way: its three-axis attitude is set to each traversal combination from (1-1) and the geometric picture under each attitude combination is captured. The simulation scene in STK contains M spacecraft, so M×(360/N)³ three-dimensional geometric images can be obtained, forming the spacecraft image database. The database covers multiple attitudes, multiple feature parts and other information, providing a rich basis for accurately identifying spacecraft feature parts and basic data for improving the recognition effect.
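The database-building loop over M spacecraft and all attitude combinations can be sketched as below. The `render` callback stands in for the STK screenshot capture and is purely hypothetical; the sketch only shows how the M×(360/N)³ records are organised.

```python
from itertools import product

def build_image_database(models, N=90, render=None):
    """For each of the M spacecraft models, record one geometric image
    per three-axis attitude combination (roll, pitch, yaw).

    `render(model, roll, pitch, yaw)` is a stand-in for the STK capture;
    when it is None a placeholder entry is stored instead of pixels.
    """
    angles = range(0, 360, N)
    database = []
    for model in models:                          # M spacecraft models
        for roll, pitch, yaw in product(angles, angles, angles):
            image = render(model, roll, pitch, yaw) if render else None
            database.append({"model": model,
                             "attitude_deg": (roll, pitch, yaw),
                             "image": image})
    return database

db = build_image_database(["sat_A", "sat_B"], N=90)   # M = 2
# len(db) == 2 * (360 // 90) ** 3 == 128
```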
(2) The pixel values of the red (R), green (G) and blue (B) color channels of each three-dimensional geometric image in the spacecraft image database are weighted and averaged to obtain a grayscale image of the three-dimensional geometric image (denoted subset 1) and, with noise added, a noisy grayscale image (denoted subset 2). The preferred scheme is as follows:
(2-1) As shown in fig. 2, the pixel values of the red (R), green (G) and blue (B) color channels of each three-dimensional geometric image in the spacecraft image database are weighted and averaged to obtain the grayscale image of the three-dimensional geometric image; the preferred formula is:
Gray(i,j) = 0.299×R(i,j) + 0.587×G(i,j) + 0.114×B(i,j)
wherein i and j are the row and column coordinates of the image, i ≥ 1, j ≥ 1; R(i,j), G(i,j) and B(i,j) are the pixel values of the red, green and blue channels at row i, column j; and Gray(i,j) is the grayscale pixel value at row i, column j. The weighting coefficients in this formula further improve the recognition effect.
Through the graying process of step (2-1), M×(360/N)³ grayscale images of the three-dimensional geometric images are obtained, i.e. all the images in subset 1.
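The graying step can be sketched in one line of NumPy, using the standard BT.601 luminance weights 0.299/0.587/0.114 (an assumption consistent with the formula above):

```python
import numpy as np

def to_gray(rgb: np.ndarray) -> np.ndarray:
    """Weighted channel average: Gray = 0.299 R + 0.587 G + 0.114 B.

    `rgb` is an (H, W, 3) array; the result is an (H, W) grayscale array.
    """
    weights = np.array([0.299, 0.587, 0.114])   # BT.601 weights, sum to 1
    return rgb[..., :3] @ weights

# A pure-white pixel maps to 0.299 + 0.587 + 0.114 = 1.0.
gray = to_gray(np.ones((4, 4, 3)))
```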
(2-2) Taking into account influence factors such as noise and jitter blur present when a spacecraft is imaged in the space environment, Gaussian noise is added for blurring on the basis of the grayscale image of the three-dimensional geometric image from step (2-1), giving the noisy grayscale image, i.e. subset 2. The preferred expression is:
G_noise(i,j) = Gray(i,j) + G_GB(i,j)
wherein i, j are the row and column coordinates of the grayscale image, Gray(i,j) is the grayscale pixel value at row i, column j, G_noise(i,j) is the pixel value at row i, column j of the noisy grayscale image, and G_GB(i,j) is zero-mean Gaussian noise smoothed by a Gaussian blur kernel, where σ is the standard deviation of the Gaussian normal distribution and r is the blur radius of the noisy grayscale image.
After noise is added, a gray image with noise of the spacecraft is obtained, a picture of the spacecraft imaged in a space environment can be simulated more realistically, an accurate image is provided for neural network training and feature part recognition, and the robustness and accuracy of feature part recognition are further improved.
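The noise-and-blur step can be sketched with NumPy alone: zero-mean Gaussian noise with standard deviation σ is smoothed with a small Gaussian kernel of radius r and added to the grayscale image. The kernel construction below is a common formulation assumed for illustration, not quoted from the patent.

```python
import numpy as np

def gaussian_kernel(sigma: float, r: int) -> np.ndarray:
    """(2r+1) x (2r+1) Gaussian blur kernel, normalised to sum 1."""
    ax = np.arange(-r, r + 1)
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def add_blurred_noise(gray, sigma=2.0, r=2, seed=0):
    """G_noise = Gray + G_GB: Gaussian noise of std sigma, convolved
    with a Gaussian blur kernel of radius r (edge-padded)."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, sigma, gray.shape)
    kernel = gaussian_kernel(sigma, r)
    padded = np.pad(noise, r, mode="edge")
    h, w = gray.shape
    blurred = np.zeros_like(gray, dtype=float)
    for di in range(2 * r + 1):          # direct convolution; fine for a sketch
        for dj in range(2 * r + 1):
            blurred += kernel[di, dj] * padded[di:di + h, dj:dj + w]
    return gray + blurred
```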
(3) The grayscale image of each three-dimensional geometric image in the spacecraft image database is reduced by cubic (bicubic) interpolation to obtain a corresponding interpolated image. The preferred scheme is:
f(i+u, j+v) = A·B·C
wherein f(i+u, j+v) is the pixel value at row i+u, column j+v of the interpolated image, u and v are the interpolation offsets (0 ≤ u, v < 1), and A, B and C are coefficient matrices, preferably:
A = [S(1+u) S(u) S(1−u) S(2−u)]
B = the 4×4 matrix of neighbouring source pixel values f(i−1+m, j−1+n), 0 ≤ m, n ≤ 3
C = [S(1+v) S(v) S(1−v) S(2−v)]^T
wherein S(·) is the interpolation kernel, specifically
S(x) = 1 − 2|x|² + |x|³, for 0 ≤ |x| < 1;
S(x) = 4 − 8|x| + 5|x|² − |x|³, for 1 ≤ |x| < 2;
S(x) = 0, otherwise.
The interpolated image obtained in step (3) realistically simulates imaging of the spacecraft at different distances in the space environment, providing a more accurate training data set for neural network recognition.
(4) The grayscale image of each three-dimensional geometric image in subset 1 of the spacecraft image database from step (2) is rotated to obtain a rotated grayscale image. The preferred scheme is:
Let the coordinates of a pixel in the original image (i.e. the grayscale image of the three-dimensional geometric image) be (i, j), and its coordinates in the rotated grayscale image be (i₁, j₁). The preferred matrix expression of the rotation transformation is:
[i₁ j₁ 1] = [i j 1] × M, where
M = [  cosθ                 sinθ                 0 ]
    [ −sinθ                 cosθ                 0 ]
    [ −a·cosθ+b·sinθ+c      −a·sinθ−b·cosθ+d     1 ]
and the matrix expression of the preferred inverse transformation is:
[i j 1] = [i₁ j₁ 1] × M⁻¹, where
M⁻¹ = [  cosθ                −sinθ                0 ]
      [  sinθ                 cosθ                0 ]
      [ a−c·cosθ−d·sinθ       b+c·sinθ−d·cosθ     1 ]
wherein θ is the angle of rotation about the centre point of the original image (i.e. the grayscale image of the three-dimensional geometric image), in the plane of the original image;
a, b are the coordinates of the rotation centre before rotation, i.e. the geometric centre of the spacecraft's position in the image, and c, d are the coordinates of the centre point after rotation. The transformation between the two centre points involves both rotation and translation. A further preferred scheme is:
a = w₀/2, b = h₀/2, c = w₁/2, d = h₁/2
wherein w₀ = 600 is the width of the original image and h₀ = 1000 its length; w₁ = 600 is the width of the rotated grayscale image and h₁ = 360 its length. To realistically simulate the constraint imposed by the imaging size (length and width) of the optical camera, the imaging size of the rotated spacecraft within the camera changes, so the rotated image length h₁ and width w₁ differ from the original length h₀ and width w₀.
The accuracy of the identification of the characteristic parts is further improved through the preferable formula and the parameter requirements in the step.
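The rotation-about-the-centre mapping of step (4) can be sketched as below; it implements the forward transform from (i, j) to (i₁, j₁) under centre offsets a, b, c, d, with the preferred sizes w₀ = 600, h₀ = 1000, w₁ = 600, h₁ = 360 as defaults. The function names are illustrative.

```python
import math

def rotate_point(i, j, theta, w0=600.0, h0=1000.0, w1=600.0, h1=360.0):
    """Rotate pixel (i, j) by theta about the original image centre
    (a, b) = (w0/2, h0/2), then re-centre on (c, d) = (w1/2, h1/2)."""
    a, b = w0 / 2.0, h0 / 2.0
    c, d = w1 / 2.0, h1 / 2.0
    i1 = (i - a) * math.cos(theta) - (j - b) * math.sin(theta) + c
    j1 = (i - a) * math.sin(theta) + (j - b) * math.cos(theta) + d
    return i1, j1

def rotate_point_inverse(i1, j1, theta, w0=600.0, h0=1000.0, w1=600.0, h1=360.0):
    """Inverse transform: rotate by -theta with the centre offsets swapped."""
    return rotate_point(i1, j1, -theta, w0=w1, h0=h1, w1=w0, h1=h0)
```

The original centre (300, 500) maps to the new centre (300, 180) for any θ, and the inverse undoes the forward transform, which makes both easy to verify.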
(5) The outlines of the spacecraft feature parts are labelled on subset 1 and subset 2 of the grayscale images of each three-dimensional geometric image in the spacecraft image database of step (2), on the interpolated images of step (3), and on the rotated grayscale images of step (4); the labelled feature part outlines are then given names, producing the feature part labels and completing the labelling of the target spacecraft's feature parts. Taking the spacecraft solar panel as an example, the preferred scheme is:
(5-1) Images in which the position of the solar panel feature cannot be determined are rejected;
(5-2) The outline of the spacecraft's solar panel is delineated and marked with a rectangular box, and the delineated part is marked with the label s1.
(5-3) For pictures in which the solar panel is partially occluded, a polyline is used to delineate and mark along the outline of the solar panel, and the delineated part is likewise marked with the label s1.
By the image specifically processed in the step, the accuracy of identifying the characteristic part can be further improved.
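The annotations of step (5) reduce to simple records; the structure below is a hypothetical illustration of what a rectangular-box "s1" label for a solar panel might store, including the region area that the loss function of step (6) consumes.

```python
from dataclasses import dataclass

@dataclass
class FeatureLabel:
    """One labelled feature part: a rectangular box (row0, col0, row1, col1)
    plus the label name, e.g. 's1' for a solar panel."""
    name: str
    row0: int
    col0: int
    row1: int
    col1: int

    def area(self) -> int:
        # Area y of the labelled region, later compared with the
        # network-predicted area in the loss function of step (6).
        return max(0, self.row1 - self.row0) * max(0, self.col1 - self.col0)

label = FeatureLabel("s1", row0=100, col0=150, row1=220, col1=400)
```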
(6) Constructing a convolutional neural network, and training the constructed convolutional neural network by utilizing most of images marked by the outline of the characteristic part of the spacecraft in the spacecraft image database; obtaining a trained convolutional neural network; inputting the images with marked spacecraft characteristic part contours in the rest spacecraft image databases into a trained convolutional neural network to identify the spacecraft characteristic parts, and outputting the identified results to realize intelligent identification of the spacecraft characteristic parts, wherein the preferable scheme is as follows:
(6-1) The loss function for feature edge recognition is constructed as L₂, preferably expressed as
L₂ = M₂·(y − ŷ)²
wherein y is the area of the region labelled as the feature part in each picture of the spacecraft image database, ŷ is the area of the spacecraft feature part region identified by the neural network, and M₂ is a weight coefficient (here M₂ is selected as 1).
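Under the reading of L₂ as a weighted squared error between the labelled area y and the predicted area ŷ (an assumption, since the original formula is not reproduced in this text), the loss is a one-liner:

```python
def l2_loss(y: float, y_hat: float, m2: float = 1.0) -> float:
    """Assumed form of step (6-1): L2 = M2 * (y - y_hat)^2, where y is the
    labelled feature-part area, y_hat the network-predicted area, and
    M2 a weight coefficient (selected as 1 in the patent)."""
    return m2 * (y - y_hat) ** 2
```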
(6-2) Construct the preferred convolutional neural network with K channels in total, where the weight coefficient of channel j is W_j, 1 ≤ j ≤ K (K is preferably 10000). The initial weight coefficient of each channel is W_j(0) (W_j(0) is preferably 0.001).
(6-3) Train the network on the majority of the contour-labeled images in the spacecraft image database. Feed the labeled images in the database into the constructed convolutional network, and obtain the area y of the region containing the spacecraft feature part from the labels of the annotated images. Using the channel weight coefficients W_j, the network computes the area ŷ of the region containing the spacecraft feature part.
(6-4) Compute the feature-edge recognition loss L_2 defined in (6-1).
(6-5) Test whether L_2 ≤ L_2min (L_2min is preferably 0.16). If so, go to (6-7); otherwise, go to (6-6).
(6-6) Update the weight coefficient of each channel of the convolutional network: W_j(l+1) = W_j(l) − dt, where dt is a design coefficient and l is the training step number, 1 ≤ l ≤ l_max. Return to (6-3) for the next iteration. As shown in Fig. 3, repeated iterative training converges the edge loss function to within 0.16.
(6-7) Freeze the network parameters, fixing the weight coefficient W_j(l) of each channel, to obtain the trained network.
(6-8) Input the contour-labeled images of the remaining part of the spacecraft image database into the trained network, compute the area of the spacecraft feature part, and output the recognition result. As shown in Fig. 4, the preferred convolutional neural network accurately identifies the position of the spacecraft's solar panel, achieving intelligent recognition of the spacecraft's feature parts.
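The loop of (6-3) through (6-7) can be sketched as follows. This is a toy rendition under stated assumptions: the text does not give a gradient computation, so the fixed-step update W_j(l+1) = W_j(l) − dt is taken literally, and `area_fn` is a stand-in for the network's area prediction from the channel weights:

```python
def train(area_fn, y, w0=0.001, k=10000, dt=1e-3, l2_min=0.16, l_max=10000):
    """Iterate the (6-6) weight update until the edge loss meets (6-5).

    area_fn: maps the channel-weight list to a predicted feature-part area
    y:       labeled feature-part area from the annotations
    Returns the final weights and the final loss value.
    """
    w = [w0] * k                      # (6-2) initial channel weights W_j(0)
    for step in range(1, l_max + 1):  # training step l, 1 <= l <= l_max
        y_hat = area_fn(w)            # (6-3) predicted feature-part area
        l2 = (y - y_hat) ** 2         # (6-4) loss with M_2 = 1
        if l2 <= l2_min:              # (6-5) converged: freeze weights (6-7)
            return w, l2
        w = [wj - dt for wj in w]     # (6-6) W_j(l+1) = W_j(l) - dt
    return w, l2
```

For example, with a toy `area_fn` that just sums the weights, `train(sum, 0.2, w0=0.5, k=2, dt=0.125)` steps the predicted area down from 1.0 until the loss drops below 0.16, then returns the frozen weights.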
The invention combines feature extraction with feature recognition, avoiding manually designed features; it is highly general and greatly improves the efficiency and accuracy of recognizing space-target feature parts. The method draws on satellite image data from multiple sources, including two-dimensional images projected from three-dimensional satellite models, and renders the images to simulate the actual on-orbit operating environment. This yields more realistic satellite imagery and enlarges the data set, which benefits the training of the convolutional network.
Moreover, the method does not depend on feature points known a priori on orbit, so it can recognize spacecraft that lack such feature points. It offers strong generality and recognition robustness, and greatly improves the efficiency and accuracy of recognizing spacecraft feature parts such as solar panels.
Technology not described in detail in this specification is well known to those skilled in the art.

Claims (10)

1. A method for intelligent recognition of feature parts, characterized by the following steps:
(1) Based on an attitude rotation method, construct a database of spacecraft images in space; the spacecraft image database stores three-dimensional geometric images of the spacecraft under different combinations of three-axis attitude angles.
(2) Take a weighted average of the pixel values of the red (R), green (G), and blue (B) color channels of each three-dimensional geometric image in the spacecraft image database to obtain its grayscale image; add measurement noise to the grayscale image to obtain a noisy grayscale image.
(3) Reduce the grayscale image of each three-dimensional geometric image in the spacecraft image database by cubic interpolation to obtain the corresponding interpolated image.
(4) Rotate the grayscale image of each three-dimensional geometric image of step (2) to obtain the rotated grayscale image.
(5) Label the spacecraft feature-part contours on the grayscale and noisy grayscale images of step (2), the interpolated images of step (3), and the rotated grayscale images of step (4); assign names to the labeled contours to obtain the feature-part labels, completing the labeling of the target spacecraft's feature parts.
(6) Construct a convolutional neural network; train it on the majority of the contour-labeled images in the spacecraft image database to obtain a trained network; input the remaining contour-labeled images into the trained network to recognize the spacecraft feature parts and output the results, achieving intelligent recognition of the spacecraft's feature parts.

2. The method for intelligent recognition of feature parts according to claim 1, characterized in that step (1) is specifically: set the three-axis attitude angles of the target spacecraft as the roll angle, pitch angle θ, and yaw angle ψ; each angle ranges over [0, 360°] and is sampled every N°, forming multiple combinations of the spacecraft's three-axis attitude angles; import the combinations into simulation modeling software and acquire three-dimensional geometric images of the spacecraft under the different combinations, obtaining the spacecraft image database.

3. The method for intelligent recognition of feature parts according to claim 1, characterized in that the weighted average of step (2) is specifically:
G(i,j) = 0.299 × R(i,j) + 0.578 × G(i,j) + 0.114 × B(i,j)
where i, j are the row and column coordinates of the grayscale image, i ≥ 1 and j ≥ 1; R(i,j), G(i,j), B(i,j) are the red, green, and blue pixel values at row i, column j; and G(i,j) is the pixel value at row i, column j of the grayscale image.

4. The method for intelligent recognition of feature parts according to claim 1, characterized in that the cubic interpolation of step (3) is specifically:
f(i+u, j+v) = ABC
where f(i+u, j+v) is the pixel value at row i, column j of the interpolated image, u and v are the interpolation intervals, and A, B, C are coefficient matrices of the form:
A = [S(1+u) S(u) S(1−u) S(2−u)]
C = [S(1+v) S(v) S(1−v) S(2−v)]^T
where G(i,j) is the pixel value at row i, column j of the grayscale image and S is the interpolation kernel function.

5. The method for intelligent recognition of feature parts according to claim 1, characterized in that step (1) is specifically: the target spacecraft comprises M types of spacecraft in space; the combinations of the target spacecraft's three-axis attitude angles are imported into STK software, in which three-dimensional geometric images of the M types of spacecraft under the different combinations are acquired and stored in the spacecraft image database.

6. The method for intelligent recognition of feature parts according to claim 1, characterized in that, to simulate the noise produced when a real imaging device photographs the spacecraft, noise is added after the grayscale image of the three-dimensional geometric image is obtained in step (2), and the result is taken as the final grayscale image of step (2); on the basis of the grayscale image, measurement noise is added to obtain the noisy grayscale image, specifically by adding Gaussian noise for blurring.

7. The method for intelligent recognition of feature parts according to claim 1, characterized in that training the constructed convolutional neural network is specifically: the network receives image data as input and outputs labels of the image feature parts as predicted labels; the predicted labels and the feature-part labels of step (5) are fed into a preset label loss function, which computes the training error; the network parameters are adjusted dynamically according to the training error to achieve error convergence; once the error converges to the set requirement, the trained convolutional neural network is obtained.

8. The method for intelligent recognition of feature parts according to claim 1, characterized in that, in the different three-axis attitude angle combinations of step (1), each angle ranges over [0, 360°] and is sampled every N°, where N is a divisor of 360.

9. The method for intelligent recognition of feature parts according to claim 8, characterized in that the different three-axis attitude angle combinations of step (1) form (360/N)³ combinations of the target spacecraft's three-axis attitude angles in total.

10. The method for intelligent recognition of feature parts according to claim 1, characterized in that the three-dimensional geometric images of the spacecraft under different three-axis attitude angle combinations stored in the spacecraft image database of step (1) are obtained by: importing the combinations of the target spacecraft's three-axis attitude angles into simulation modeling software, the simulation modeling software being Satellite Tool Kit software capable of building a three-dimensional model of the spacecraft, thereby obtaining three-dimensional geometric images of the spacecraft under the different combinations of three-axis attitude angles.
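The claim-3 weighted average can be sketched directly. The function below uses the coefficients exactly as the claim states them (0.299, 0.578, 0.114; note the middle value as written, where 0.587 is the more common luma weight), and the nested-list image representation is an assumption made for the example:

```python
def to_gray(r, g, b):
    """Weighted average of the R, G, B pixel values at one (i, j),
    per claim 3: G(i,j) = 0.299*R + 0.578*G + 0.114*B."""
    return 0.299 * r + 0.578 * g + 0.114 * b


def grayscale(image):
    """Apply the conversion per pixel to a nested-list RGB image,
    represented as [[(r, g, b), ...], ...]."""
    return [[to_gray(*px) for px in row] for row in image]
```

In practice this per-pixel conversion would be vectorized over the whole image array, but the scalar form makes the claim's formula explicit.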
CN202010350572.8A 2020-04-28 2020-04-28 An intelligent identification method for characteristic parts Active CN111680552B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010350572.8A CN111680552B (en) 2020-04-28 2020-04-28 An intelligent identification method for characteristic parts


Publications (2)

Publication Number Publication Date
CN111680552A CN111680552A (en) 2020-09-18
CN111680552B true CN111680552B (en) 2023-10-03

Family

ID=72452614

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010350572.8A Active CN111680552B (en) 2020-04-28 2020-04-28 An intelligent identification method for characteristic parts

Country Status (1)

Country Link
CN (1) CN111680552B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114491694B (en) * 2022-01-17 2024-06-25 北京航空航天大学 A method for constructing a space target dataset based on Unreal Engine
CN115292287B (en) * 2022-08-08 2026-01-27 北京航空航天大学 Automatic labeling and database construction method for satellite feature part images
CN115346277A (en) * 2022-08-29 2022-11-15 联想(北京)有限公司 Data generation method and device
US20250124179A1 (en) * 2023-10-11 2025-04-17 Proteus Space, Inc. Methods and systems for rapid design and delivery of spacecrafts

Citations (2)

Publication number Priority date Publication date Assignee Title
CN103093456A (en) * 2012-12-25 2013-05-08 北京农业信息技术研究中心 Corn ear character index computing method based on images
CN104482934A (en) * 2014-12-30 2015-04-01 华中科技大学 Multi-transducer fusion-based super-near distance autonomous navigation device and method

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
US6959109B2 (en) * 2002-06-20 2005-10-25 Identix Incorporated System and method for pose-angle estimation
US8180112B2 (en) * 2008-01-21 2012-05-15 Eastman Kodak Company Enabling persistent recognition of individuals in images


Non-Patent Citations (2)

Title
Monocular pose measurement method for high-speed targets based on color images; Liu Wei; Chen Ling; Ma Xin; Li Xiao; Jia Zhenyuan; Chinese Journal of Scientific Instrument (No. 03); 675-682 *
On-orbit identification of solar-wing modal parameters based on visual measurement; Wu Xiaoyou; Li Wenbo; Zhang Guoqi; Guan Xin; Guo Sheng; Liu Yi; Aerospace Control and Application (No. 03); 9-14 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant