CN111680552A - Intelligent feature part identification method - Google Patents

Intelligent feature part identification method Download PDF

Info

Publication number
CN111680552A
CN111680552A
Authority
CN
China
Prior art keywords
image
spacecraft
dimensional geometric
gray level
attitude angle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010350572.8A
Other languages
Chinese (zh)
Other versions
CN111680552B (en)
Inventor
汤亮
袁利
关新
王有懿
姚宁
宗红
冯骁
张科备
郝仁剑
郭子熙
刘昊
龚立纲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Institute of Control Engineering
Original Assignee
Beijing Institute of Control Engineering
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Institute of Control Engineering filed Critical Beijing Institute of Control Engineering
Priority to CN202010350572.8A priority Critical patent/CN111680552B/en
Publication of CN111680552A publication Critical patent/CN111680552A/en
Application granted granted Critical
Publication of CN111680552B publication Critical patent/CN111680552B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/13 Satellite images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformation in the plane of the image
    • G06T3/40 Scaling the whole image or part thereof
    • G06T3/4023 Decimation- or insertion-based scaling, e.g. pixel or line decimation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/60 Analysis of geometric attributes
    • G06T7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]

Abstract

The invention discloses an intelligent feature part identification method suitable for identifying local typical parts of failed satellites in space. The method identifies local typical feature parts with a convolutional neural network, and aims to solve problems such as the large edge-point identification errors of traditional analytic algorithms for identifying typical target parts. First, for the task of identifying local typical parts of a failed satellite, a database of local typical satellite parts containing rich information is created, the components of the typical parts are labeled, and training and test data sets are constructed. A deep convolutional network is then constructed and its parameters are trained with the training data set; after training, the network can intelligently identify typical parts in an input image.

Description

Intelligent feature part identification method
Technical Field
The invention belongs to the field of space target feature recognition, and particularly relates to an intelligent feature part recognition method.
Background
Protection of and on-orbit service for important space targets, chiefly satellites, have become key development directions for the aerospace technologies of countries around the world, and identification of satellite feature parts (such as antennas, solar panels, and engines) is a key link. With the increasing maturity of spacecraft approach technology and optical imaging technology, high-resolution optical imaging of a space target from a space-based platform has become possible, which in turn places higher demands on satellite target identification, in particular on the accurate identification of satellite feature parts.
Space target identification mainly uses the feature data of a space target to effectively judge and identify attributes such as its identity, attitude, and state. At present, target feature data come mainly from ground-based detection, including optical and radar equipment. However, the detection data of ground-based equipment depend on many factors such as the observation angle, target features, sun angle, and the atmosphere, so the detection results carry great uncertainty. Although space-based target detection and identification have been studied in China and abroad, the work has focused on detecting point targets at long range and identifying on-orbit motion states. The identification methods adopted depend on feature points known in advance, a strong constraint: once the known feature points change, the existing methods struggle to identify feature parts accurately. In recent years, the development of deep learning has greatly improved image target identification, bringing new methods to the field of space target identification.
Disclosure of Invention
The technical problem solved by the invention is as follows: overcoming the excessive dependence of existing recognition technology on known feature points, the invention provides an intelligent identification method for feature parts that can intelligently identify typical parts and features of targets in a complex space environment, providing technical support for deep cognition and understanding of on-orbit scenes.
The technical scheme adopted by the invention is as follows: an intelligent feature part identification method, comprising the following steps:
(1) constructing a spacecraft image database in space based on an attitude rotation method, the database storing three-dimensional geometric images of the spacecraft under different three-axis attitude angle combinations;
(2) weighting and averaging the pixel values of the red (R), green (G), and blue (B) color channels of each three-dimensional geometric image in the spacecraft image database to obtain the grayscale image of that three-dimensional geometric image; on the basis of the grayscale image, adding measurement noise to obtain a noisy grayscale image;
(3) reducing the grayscale image of each three-dimensional geometric image in the spacecraft image database by cubic interpolation to obtain the corresponding interpolated image;
(4) rotating the grayscale image of each three-dimensional geometric image in the spacecraft image database from step (2) to obtain the rotated grayscale image;
(5) performing contour labeling of the spacecraft feature part on the grayscale and noisy grayscale images from step (2), the interpolated images from step (3), and the rotated grayscale images from step (4); setting a name for each labeled feature part contour to obtain the feature part labels, completing the labeling of the target spacecraft feature parts;
(6) constructing a convolutional neural network and training it with the majority of the contour-labeled images in the spacecraft image database to obtain the trained convolutional neural network; inputting the remaining contour-labeled images in the spacecraft image database into the trained network for spacecraft feature part recognition and outputting the recognition results, realizing intelligent identification of spacecraft feature parts.
Preferably, in (1), the spacecraft image database in space is constructed based on the attitude rotation method and stores three-dimensional geometric images of the spacecraft under different three-axis attitude angle combinations, specifically:
the three-axis attitude angles of the target spacecraft are set as the roll angle φ, the pitch angle θ, and the yaw angle ψ; each attitude angle takes a value every N° over the range [0°, 360°], forming multiple groups of three-axis attitude angle combinations of the spacecraft; the combinations are imported into simulation modeling software, and a three-dimensional geometric image of the spacecraft is acquired under each combination, yielding the spacecraft image database.
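As an illustration of this attitude grid, the following minimal Python sketch enumerates the (360/N)³ roll, pitch, and yaw combinations described above; the function name and printout are illustrative, not part of the patent.

```python
import itertools

def attitude_combinations(n_deg: int):
    """Enumerate (roll, pitch, yaw) on an n_deg grid over [0, 360).

    n_deg is the patent's N and must be a common divisor of 360; the grid
    then contains (360 / n_deg) ** 3 combinations (0 deg and 360 deg
    describe the same attitude, so 360 is omitted).
    """
    assert 360 % n_deg == 0, "N must be a common divisor of 360"
    angles = range(0, 360, n_deg)
    return list(itertools.product(angles, angles, angles))

combos = attitude_combinations(90)  # N = 90 deg
print(len(combos))                  # (360 / 90) ** 3 = 64
```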
Preferably, in (2), the pixel values of the red (R), green (G), and blue (B) color channels of each three-dimensional geometric image in the spacecraft image database are weighted and averaged to obtain the grayscale image of that image, specifically:

Gray(i, j) = 0.299 × R(i, j) + 0.587 × G(i, j) + 0.114 × B(i, j)

where i and j (i ≥ 1, j ≥ 1) are the row and column coordinates of the grayscale image; R(i, j), G(i, j), and B(i, j) are the red, green, and blue pixel values at row i, column j; and Gray(i, j) is the grayscale pixel value at row i, column j.
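A minimal NumPy sketch of this channel-weighted conversion (the 0.587 green weight is the standard luma coefficient; the 0.578 printed in the original text appears to be a typo, since the three weights should sum to 1):

```python
import numpy as np

def to_grayscale(rgb: np.ndarray) -> np.ndarray:
    """Weighted average of the R, G, B channels of an H x W x 3 image."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.299 * r + 0.587 * g + 0.114 * b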
Preferably, in (3), the grayscale image of each three-dimensional geometric image in the spacecraft image database is reduced by cubic interpolation to obtain the corresponding interpolated image, specifically:

f(i + u, j + v) = A B C

where f(i + u, j + v) is the pixel value of the interpolated image at row i + u, column j + v; u and v are the interpolation offsets; and A, B, and C are coefficient matrices of the form:

A = [S(1 + u)  S(u)  S(1 − u)  S(2 − u)]

B = | f(i − 1, j − 1)  f(i − 1, j)  f(i − 1, j + 1)  f(i − 1, j + 2) |
    | f(i,     j − 1)  f(i,     j)  f(i,     j + 1)  f(i,     j + 2) |
    | f(i + 1, j − 1)  f(i + 1, j)  f(i + 1, j + 1)  f(i + 1, j + 2) |
    | f(i + 2, j − 1)  f(i + 2, j)  f(i + 2, j + 1)  f(i + 2, j + 2) |

C = [S(1 + v)  S(v)  S(1 − v)  S(2 − v)]ᵀ

where S is the interpolation kernel, specifically:

S(x) = 1 − 2|x|² + |x|³         for 0 ≤ |x| < 1
S(x) = 4 − 8|x| + 5|x|² − |x|³  for 1 ≤ |x| < 2
S(x) = 0                        for |x| ≥ 2
Through the above interpolation scheme and parameter choices, the image target recognition effect is greatly improved.
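A sketch of this kernel and the A·B·C evaluation, under the assumption that the matrix B lost to an unreproduced figure is the standard 4 × 4 pixel neighborhood (function names are illustrative):

```python
import numpy as np

def S(x: float) -> float:
    """Cubic interpolation kernel (the classic a = -1 bicubic kernel)."""
    x = abs(x)
    if x < 1:
        return 1 - 2 * x**2 + x**3
    if x < 2:
        return 4 - 8 * x + 5 * x**2 - x**3
    return 0.0

def bicubic_sample(img: np.ndarray, i: int, j: int, u: float, v: float) -> float:
    """Evaluate f(i+u, j+v) = A @ B @ C on a grayscale image.

    A weights the four rows around i, C the four columns around j, and
    B is the 4 x 4 neighborhood of pixel values (rows i-1..i+2, columns
    j-1..j+2); valid at least two pixels away from the image border.
    """
    A = np.array([S(1 + u), S(u), S(1 - u), S(2 - u)])
    C = np.array([S(1 + v), S(v), S(1 - v), S(2 - v)])
    B = img[i - 1:i + 3, j - 1:j + 3].astype(float)
    return float(A @ B @ C)
```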
Preferably, in (4), the grayscale image of each three-dimensional geometric image in the spacecraft image database from step (2) is rotated to obtain the rotated grayscale image, specifically:

Let (i, j) be the coordinates of a pixel in the original image (i.e., the grayscale image of the three-dimensional geometric image) and (i₁, j₁) its coordinates in the rotated grayscale image. The rotation transform is then:

i₁ = (i − a) cos θ − (j − b) sin θ + c
j₁ = (i − a) sin θ + (j − b) cos θ + d

with inverse transform:

i = (i₁ − c) cos θ + (j₁ − d) sin θ + a
j = −(i₁ − c) sin θ + (j₁ − d) cos θ + b

where θ is the angle through which the rotated grayscale image is turned about the center point of the original image, within the plane of the original image;

(a, b) are the coordinates of the rotation center before rotation, and (c, d) are the coordinates of the center point after rotation:

a = w₀/2,  b = h₀/2,  c = w₁/2,  d = h₁/2

where w₀ is the width and h₀ the length of the original image (before rotation), and w₁ is the width and h₁ the length of the rotated grayscale image. To faithfully simulate the size constraint (length and width) of images formed by the optical camera, the size of the rotated spacecraft image in the camera changes, so the rotated length h₁ and width w₁ differ somewhat from the original length h₀ and width w₀.
Through the above rotation scheme and parameter choices, the image target recognition effect is greatly improved.
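The following sketch implements the forward and inverse center-to-center mappings above. The sign convention assumes the usual image axes, since the patent's original matrices are in unreproduced figures; theta is in radians.

```python
import numpy as np

def rotate_coords(i, j, theta, w0, h0, w1, h1):
    """Map pixel (i, j) of the original image to (i1, j1) in the rotated image:
    shift to the pre-rotation center (a, b), rotate by theta, shift to the
    post-rotation center (c, d)."""
    a, b = w0 / 2.0, h0 / 2.0
    c, d = w1 / 2.0, h1 / 2.0
    i1 = (i - a) * np.cos(theta) - (j - b) * np.sin(theta) + c
    j1 = (i - a) * np.sin(theta) + (j - b) * np.cos(theta) + d
    return i1, j1

def inverse_rotate_coords(i1, j1, theta, w0, h0, w1, h1):
    """Inverse mapping, used to fill each destination pixel from the source."""
    a, b = w0 / 2.0, h0 / 2.0
    c, d = w1 / 2.0, h1 / 2.0
    i = (i1 - c) * np.cos(theta) + (j1 - d) * np.sin(theta) + a
    j = -(i1 - c) * np.sin(theta) + (j1 - d) * np.cos(theta) + b
    return i, j
```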
Preferably, in (1), the spacecraft image database in space is constructed based on the attitude rotation method and stores three-dimensional geometric images of the spacecraft under different three-axis attitude angle combinations, specifically:
the target spacecraft comprise M types of spacecraft in space; the groups of three-axis attitude angle combinations of the target spacecraft are imported into STK software, three-dimensional geometric images of the M types of spacecraft under the different combinations are obtained in STK, and the images are stored in the spacecraft image database in space.
Preferably, to simulate the noise produced when real imaging equipment photographs the spacecraft, noise is added after the grayscale image is obtained in step (2), and the result serves as the grayscale image finally produced by step (2). On the basis of the grayscale image, measurement noise is added to obtain the noisy grayscale image; specifically, Gaussian noise is added for blurring, which further improves identification accuracy.

On the basis of the grayscale image of the three-dimensional geometric image, Gaussian noise is added for blurring to obtain the noisy grayscale image, specifically:
G_noise(i, j) = Gray(i, j) + G_GB(i, j)

G_GB(i, j) = (1 / (2πσ²)) e^(−r² / (2σ²))

where i and j are the row and column coordinates of the grayscale image; G_noise(i, j) is the pixel value at row i, column j of the noisy grayscale image; σ is the standard deviation of the Gaussian normal distribution; and r is the blur radius of the noisy grayscale image.
After the noise is added, a noisy grayscale image of the spacecraft is obtained. This more faithfully simulates pictures of the spacecraft as imaged in the space environment, provides realistic images for neural network training and feature part identification, and improves the robustness and accuracy of feature part identification.
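One reasonable reading of this degradation model, sketched below, adds zero-mean Gaussian pixel noise and then a Gaussian blur; the σ values are illustrative assumptions, not values from the patent.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def add_noise_and_blur(gray: np.ndarray, sigma_noise: float = 5.0,
                       sigma_blur: float = 1.5, seed: int = 0) -> np.ndarray:
    """Simulate imaging degradation on a grayscale image: zero-mean Gaussian
    noise of standard deviation sigma_noise, then a Gaussian blur whose
    effective radius r is roughly 3 * sigma_blur."""
    rng = np.random.default_rng(seed)
    noisy = gray + rng.normal(0.0, sigma_noise, size=gray.shape)
    return np.clip(gaussian_filter(noisy, sigma=sigma_blur), 0.0, 255.0)
```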
Preferably, training the constructed convolutional neural network specifically comprises: the convolutional neural network receives the input image data and outputs a label for the feature part in the image as the predicted label; the predicted label and the feature part label from step (5) are fed together into a preset label loss function; the training error is computed through the loss function, and the network parameters are dynamically adjusted according to the training error to drive the error down; once the error converges to the set requirement, the trained convolutional neural network is obtained.
Preferably, for the different three-axis attitude angle combinations in step (1), each three-axis attitude angle varies over the range [0°, 360°] and takes a value every N°, where N is a common divisor of 360; together this forms (360/N)³ three-axis attitude angle combinations of the target spacecraft (for example, N = 90° gives (360/90)³ = 64 combinations).
Preferably, in (1), the three-dimensional geometric images of the spacecraft stored in the spacecraft image database under the different three-axis attitude angle combinations are obtained as follows:
the groups of three-axis attitude angle combinations of the target spacecraft are imported into simulation modeling software, preferably Satellite Tool Kit (STK) software, which can build a three-dimensional model of the spacecraft, thereby producing the three-dimensional geometric images of the spacecraft under the different three-axis attitude angle combinations.
Compared with the prior art, the invention has the beneficial effects that:
(1) The invention provides an intelligent feature part identification method that integrates feature extraction and feature identification, avoids manual feature design, has strong universality, and greatly improves the identification efficiency and accuracy for space target feature parts.
(2) The method draws on satellite picture data from multiple sources, including two-dimensional images obtained by projecting satellite three-dimensional models, and renders the satellite images to simulate the actual operating environment of an on-orbit satellite. This yields more realistic satellite image data and enlarges the data set, which benefits the training of the convolutional network.
(3) The method does not depend on feature points known on orbit: it retains strong identification capability for spacecraft without known feature points, has strong universality and identification robustness, and greatly improves the identification efficiency and accuracy for spacecraft feature parts such as solar panels.
Drawings
FIG. 1 is a schematic diagram of the process of the present invention;
FIG. 2 is a three-dimensional model of a spacecraft simulated by the method of the present invention;
FIG. 3 is a graph of a loss function during training of the method of the present invention;
FIG. 4 shows the recognition result obtained by the method of the present invention.
Detailed Description
The invention is described in further detail below with reference to the figures and specific embodiments.
Tasks such as capturing failed spacecraft in space and rendezvous and docking between spacecraft require accurate information such as the relative attitude of the spacecraft. Identifying spacecraft feature parts is an important technical means for estimating such information. Traditional feature part identification methods based on analytic algorithms suffer from large edge-point identification errors and excessive dependence on manually calibrated known feature points. To address this, the invention designs an intelligent identification method for local typical feature parts based on a convolutional neural network, removes the excessive dependence on manually calibrated known feature points, and identifies local typical feature parts of a spacecraft (such as the solar wing) in complex space environments with dim light, jittery and blurred images, and the like.
As shown in fig. 1, the method for intelligently identifying characteristic parts of the present invention includes the following steps:
(1) A spacecraft image database in space is constructed based on the attitude rotation method: the three-axis attitude angles of the target spacecraft are set as the roll angle φ, the pitch angle θ, and the yaw angle ψ. Preferably, each attitude angle takes a value every N° over the range [0°, 360°] (N being a common divisor of 360, preferably 90°), forming (360/N)³ three-axis attitude angle combinations of the target spacecraft. The groups of combinations are imported into simulation modeling software, and a three-dimensional geometric image of the spacecraft is acquired under each combination to obtain the spacecraft image database. A further preferred scheme is as follows:
(1-1) Set the spacecraft roll angle φ, pitch angle θ, and yaw angle ψ to values every N° over [0°, 360°], traversing all combinations of roll angle φ, pitch angle θ, and yaw angle ψ to form (360/N)³ combinations.
(1-2) Select the i-th spacecraft in the simulation modeling software Satellite Tool Kit (STK), set its three-axis attitude to each of the traversal combinations obtained in (1-1), and capture the geometric picture of the spacecraft under each attitude combination.
(1-3) Select the (i+1)-th spacecraft in STK and repeat. Since STK contains M types of spacecraft, M × (360/N)³ three-dimensional geometric images can be obtained, yielding the spacecraft image database. The database covers multiple attitudes, multiple feature parts, and other information, providing a rich data foundation for accurately identifying spacecraft feature parts and basic data for improving the identification effect.
(2) The pixel values of the red (R), green (G), and blue (B) color channels of each three-dimensional geometric image in the spacecraft image database are weighted and averaged to obtain the grayscale image of the three-dimensional geometric image (denoted subset 1) and the noisy grayscale image (denoted subset 2). The preferred scheme is as follows:
(2-1) As shown in fig. 2, the pixel values of the red (R), green (G), and blue (B) color channels of each three-dimensional geometric image in the spacecraft image database are weighted and averaged to obtain the grayscale image, preferably:

Gray(i, j) = 0.299 × R(i, j) + 0.587 × G(i, j) + 0.114 × B(i, j)

where i and j (i ≥ 1, j ≥ 1) are the row and column coordinates of the image; R(i, j), G(i, j), and B(i, j) are the red, green, and blue pixel values at row i, column j; and Gray(i, j) is the grayscale pixel value at row i, column j. Grayscale images obtained with this combination of coefficients further improve the identification effect.

The graying in step (2-1) yields M × (360/N)³ grayscale images of the spacecraft three-dimensional geometric images, i.e., all the images in subset 1.
(2-2) Considering influence factors such as noise, jitter, and blurring present in spacecraft imaging under the space environment, Gaussian noise is added to the grayscale image of the three-dimensional geometric image from step (2-1) for blurring, giving the noisy grayscale image, i.e., subset 2. The preferred expression is:

G_noise(i, j) = Gray(i, j) + G_GB(i, j)

G_GB(i, j) = (1 / (2πσ²)) e^(−r² / (2σ²))

where i and j are the row and column coordinates of the grayscale image; G_noise(i, j) is the pixel value at row i, column j of the noisy grayscale image; σ is the standard deviation of the Gaussian normal distribution; and r is the blur radius of the noisy grayscale image.

After the noise is added, a noisy grayscale image of the spacecraft is obtained. This more faithfully simulates pictures of the spacecraft as imaged in the space environment, provides realistic images for neural network training and feature part identification, and further improves the robustness and accuracy of feature part identification.
(3) The grayscale image of each three-dimensional geometric image in the spacecraft image database is reduced by cubic interpolation to obtain the corresponding interpolated image. The preferred scheme is:

f(i + u, j + v) = A B C

where f(i + u, j + v) is the pixel value of the interpolated image at row i + u, column j + v; u and v are the interpolation offsets; and A, B, and C are coefficient matrices, preferably of the form:

A = [S(1 + u)  S(u)  S(1 − u)  S(2 − u)]

B = | f(i − 1, j − 1)  f(i − 1, j)  f(i − 1, j + 1)  f(i − 1, j + 2) |
    | f(i,     j − 1)  f(i,     j)  f(i,     j + 1)  f(i,     j + 2) |
    | f(i + 1, j − 1)  f(i + 1, j)  f(i + 1, j + 1)  f(i + 1, j + 2) |
    | f(i + 2, j − 1)  f(i + 2, j)  f(i + 2, j + 1)  f(i + 2, j + 2) |

C = [S(1 + v)  S(v)  S(1 − v)  S(2 − v)]ᵀ

where S(·) is the interpolation kernel:

S(x) = 1 − 2|x|² + |x|³         for 0 ≤ |x| < 1
S(x) = 4 − 8|x| + 5|x|² − |x|³  for 1 ≤ |x| < 2
S(x) = 0                        for |x| ≥ 2

The interpolated images obtained in step (3) realistically simulate images of the spacecraft at different distances in the space environment, providing a more accurate training data set for neural network recognition.
(4) The grayscale image of each three-dimensional geometric image in subset 1 of the spacecraft image database from step (2) is rotated to obtain the rotated grayscale image. The preferred scheme is:

Let (i, j) be the coordinates of a pixel in the original image (i.e., the grayscale image of the three-dimensional geometric image) and (i₁, j₁) its coordinates in the rotated grayscale image. The preferred rotation transform is:

i₁ = (i − a) cos θ − (j − b) sin θ + c
j₁ = (i − a) sin θ + (j − b) cos θ + d

and the preferred inverse transform is:

i = (i₁ − c) cos θ + (j₁ − d) sin θ + a
j = −(i₁ − c) sin θ + (j₁ − d) cos θ + b

where θ is the angle through which the rotated grayscale image is turned about the center point of the original image, within the plane of the original image;

(a, b) are the coordinates of the rotation center before rotation, i.e., the geometric center point of the spacecraft's position in the image, and (c, d) are the coordinates of the center point after rotation. The post-rotation center point is related to the pre-rotation center point by a rotation and a translation. A further preferred scheme is:

a = w₀/2,  b = h₀/2,  c = w₁/2,  d = h₁/2

where w₀ = 600 is the width of the original image and h₀ = 1000 its length; w₁ = 600 is the width of the rotated grayscale image and h₁ = 360 its length. To faithfully simulate the size constraint (length and width) of images formed by the optical camera, the size of the rotated spacecraft image in the camera changes, so the rotated length h₁ and width w₁ differ somewhat from the original length h₀ and width w₀.

The preferred formulas and parameter requirements of this step further improve the accuracy of feature part identification.
(5) Contour labeling of the spacecraft feature part is performed on subsets 1 and 2 of the grayscale images of the three-dimensional geometric images from step (2), on the interpolated images from step (3), and on the rotated grayscale images from step (4). Names are set for the labeled spacecraft feature part contours to obtain the feature part labels, completing the labeling of the target spacecraft feature parts. Taking the spacecraft solar panel as an example, the preferred scheme is:
(5-1) Reject images in which the position of the solar panel feature cannot be determined.
(5-2) Outline the contour of the spacecraft solar panel with a rectangular box, and mark the boxed part with the label s1.
(5-3) For pictures in which the solar panel is partly shaded, outline its contour with a polyline and mark the outlined part with the label s1; an illustrative annotation record is sketched below.
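For concreteness, one hypothetical shape such an annotation record might take (all field names, the file name, and the coordinates are illustrative; only the s1 label comes from the patent):

```python
annotation = {
    "image": "sat_roll090_pitch180_yaw270.png",   # hypothetical file name
    "parts": [
        {
            "label": "s1",                        # solar-panel label from (5-2)/(5-3)
            # rectangular box for an unobstructed panel: [x_min, y_min, x_max, y_max]
            "bbox": [120, 80, 340, 260],
            # polyline outline used instead when the panel is partly shaded
            "polygon": [[120, 80], [340, 80], [340, 260], [120, 260]],
        }
    ],
}
```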
Images processed as in the above steps further improve the accuracy of feature part identification.
(6) A convolutional neural network is constructed and trained with the majority of the contour-labeled images in the spacecraft image database, yielding the trained convolutional neural network. The remaining contour-labeled images in the spacecraft image database are input into the trained network for spacecraft feature part recognition, and the recognition results are output, realizing intelligent identification of spacecraft feature parts. The preferred scheme is:
(6-1) Construct the loss function L₂ for feature edge recognition, preferably expressed as:

L₂ = M₂ (y − ŷ)²

where y is the labeled area of the feature part region in each picture of the spacecraft image database, ŷ is the area of the spacecraft feature part region identified by the neural network, and M₂ is a weight coefficient (here chosen as 1).
(6-2) Construct the preferred convolutional network with K channels in total, the weight coefficient of channel j being W_j, 1 ≤ j ≤ K (K preferably 10000). Each channel weight coefficient is initialized to W_j(0) (W_j(0) preferably 0.001).
(6-3) Train the neural network with the majority of the contour-labeled images in the spacecraft image database. Each labeled image in the database is fed to the constructed convolutional network; the area y of the region containing the spacecraft feature part is obtained from the image's label, and the network computes from its channel weight coefficients W_j(l) the identified area ŷ of the region containing the spacecraft feature part.
(6-4) Compute the feature edge recognition loss L₂ defined in (6-1).
(6-5) Test whether L₂ ≤ L₂min (L₂min preferably 0.16). If so, go to (6-7); otherwise, go to (6-6).
(6-6) Update each channel weight coefficient of the convolutional network: W_j(l + 1) = W_j(l) − dt, where dt is a design coefficient and l is the training step number, 1 ≤ l ≤ l_max. Return to step (6-3) for iterative computation. As shown in fig. 3, the edge loss function converges to within 0.16 after multiple training iterations.
(6-7) Fix the channel weight coefficients W_j(l) of the network parameters to obtain the trained neural network.
(6-8) Input the remaining contour-labeled images in the spacecraft image database into the trained neural network, compute the region containing the spacecraft feature part, and output the recognition result. As shown in fig. 4, identification with the above preferred convolutional neural network accurately locates the spacecraft solar panel, realizing intelligent identification of spacecraft feature parts.
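Steps (6-2) through (6-7) can be condensed into the toy loop below. It reproduces the patent's schematic update W_j(l+1) = W_j(l) − dt literally (a real implementation would back-propagate gradients instead), and `predict_area` is a stand-in for the network's area estimate, so everything except the constants K, W_j(0), M₂, and L₂min = 0.16 is an assumption.

```python
import numpy as np

def train_until_converged(y, predict_area, K=10000, w_init=0.001,
                          dt=1e-6, m2=1.0, l2_min=0.16, l_max=100000):
    """Iterate the schematic training of steps (6-2)-(6-7).

    y            labelled area of the feature-part region (step 6-3)
    predict_area callable mapping the weight vector W to the network's
                 estimated area y_hat (stand-in for the conv net)
    """
    W = np.full(K, w_init)              # (6-2): W_j(0) = 0.001
    for l in range(1, l_max + 1):
        y_hat = predict_area(W)         # (6-3): network's area estimate
        L2 = m2 * (y - y_hat) ** 2      # (6-1)/(6-4): edge-recognition loss
        if L2 <= l2_min:                # (6-5): convergence test
            return W, L2, l             # (6-7): fix the weights
        W = W - dt                      # (6-6): schematic weight update
    return W, L2, l_max
```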
The method integrates feature extraction and feature identification, avoids manual feature design, has strong universality, and greatly improves the identification efficiency and precision for space target feature parts. It draws on satellite picture data from multiple sources, including two-dimensional images obtained by projecting satellite three-dimensional models, and renders the satellite images to simulate the actual operating environment of an on-orbit satellite, yielding more realistic satellite image data and a larger data set, which benefits the training of the convolutional network.
In addition, the method does not depend on feature points known on orbit: it retains strong identification capability for spacecraft without known feature points, has strong universality and identification robustness, and greatly improves the identification efficiency and accuracy for spacecraft feature parts such as solar panels.
Those skilled in the art will appreciate that those matters not described in detail in the present specification are well known in the art.

Claims (10)

1. An intelligent feature part identification method, characterized by comprising the following steps:
(1) constructing a spacecraft image database in space based on an attitude rotation method, the database storing three-dimensional geometric images of the spacecraft under different three-axis attitude angle combinations;
(2) weighting and averaging the pixel values of the red (R), green (G), and blue (B) color channels of each three-dimensional geometric image in the spacecraft image database to obtain the grayscale image of that three-dimensional geometric image; on the basis of the grayscale image, adding measurement noise to obtain a noisy grayscale image;
(3) reducing the grayscale image of each three-dimensional geometric image in the spacecraft image database by cubic interpolation to obtain the corresponding interpolated image;
(4) rotating the grayscale image of each three-dimensional geometric image in the spacecraft image database from step (2) to obtain the rotated grayscale image;
(5) performing contour labeling of the spacecraft feature part on the grayscale and noisy grayscale images from step (2), the interpolated images from step (3), and the rotated grayscale images from step (4); setting a name for each labeled feature part contour to obtain the feature part labels, completing the labeling of the target spacecraft feature parts;
(6) constructing a convolutional neural network and training it with the majority of the contour-labeled images in the spacecraft image database to obtain the trained convolutional neural network; inputting the remaining contour-labeled images in the spacecraft image database into the trained network for spacecraft feature part recognition and outputting the recognition results, realizing intelligent identification of spacecraft feature parts.
2. The intelligent feature part identification method according to claim 1, characterized in that: in (1), the spacecraft image database in space is constructed based on the attitude rotation method and stores three-dimensional geometric images of the spacecraft under different three-axis attitude angle combinations, specifically:
the three-axis attitude angles of the target spacecraft are set as the roll angle φ, the pitch angle θ, and the yaw angle ψ; each attitude angle takes a value every N° over the range [0°, 360°], forming multiple groups of three-axis attitude angle combinations of the spacecraft; the combinations are imported into simulation modeling software, and a three-dimensional geometric image of the spacecraft is acquired under each combination, yielding the spacecraft image database.
3. The intelligent feature part identification method according to claim 1, characterized in that: in (2), the pixel values of the red (R), green (G), and blue (B) color channels of each three-dimensional geometric image in the spacecraft image database are weighted and averaged to obtain the grayscale image of the three-dimensional geometric image, specifically:

Gray(i, j) = 0.299 × R(i, j) + 0.587 × G(i, j) + 0.114 × B(i, j)

where i and j (i ≥ 1, j ≥ 1) are the row and column coordinates of the grayscale image; R(i, j), G(i, j), and B(i, j) are the red, green, and blue pixel values at row i, column j; and Gray(i, j) is the grayscale pixel value at row i, column j.
4. The intelligent feature part identification method according to claim 1, characterized in that: in (3), the grayscale image of each three-dimensional geometric image in the spacecraft image database is reduced by cubic interpolation to obtain the corresponding interpolated image, specifically:

f(i + u, j + v) = A B C

where f(i + u, j + v) is the pixel value of the interpolated image at row i + u, column j + v; u and v are the interpolation offsets; and A, B, and C are coefficient matrices of the form:

A = [S(1 + u)  S(u)  S(1 − u)  S(2 − u)]

B = | f(i − 1, j − 1)  f(i − 1, j)  f(i − 1, j + 1)  f(i − 1, j + 2) |
    | f(i,     j − 1)  f(i,     j)  f(i,     j + 1)  f(i,     j + 2) |
    | f(i + 1, j − 1)  f(i + 1, j)  f(i + 1, j + 1)  f(i + 1, j + 2) |
    | f(i + 2, j − 1)  f(i + 2, j)  f(i + 2, j + 1)  f(i + 2, j + 2) |

C = [S(1 + v)  S(v)  S(1 − v)  S(2 − v)]ᵀ

where S is the interpolation kernel, specifically:

S(x) = 1 − 2|x|² + |x|³         for 0 ≤ |x| < 1
S(x) = 4 − 8|x| + 5|x|² − |x|³  for 1 ≤ |x| < 2
S(x) = 0                        for |x| ≥ 2
5. The intelligent feature part identification method according to claim 1, characterized in that: in (1), the spacecraft image database in space is constructed based on the attitude rotation method and stores three-dimensional geometric images of the spacecraft under different three-axis attitude angle combinations, specifically:
the target spacecraft comprise M types of spacecraft in space; the groups of three-axis attitude angle combinations of the target spacecraft are imported into STK software, three-dimensional geometric images of the M types of spacecraft under the different combinations are obtained in STK, and the images are stored in the spacecraft image database in space.
6. The intelligent feature part identification method according to claim 1, characterized in that, to simulate the noise produced when real imaging equipment photographs the spacecraft, noise is added after the grayscale image is obtained in step (2), and the result serves as the grayscale image finally produced by step (2); on the basis of the grayscale image, measurement noise is added to obtain the noisy grayscale image, specifically: Gaussian noise is added for blurring.
7. The intelligent feature part identification method according to claim 1, characterized in that training the constructed convolutional neural network specifically comprises: the convolutional neural network receives the input image data and outputs a label for the feature part in the image as the predicted label; the predicted label and the feature part label from step (5) are fed together into a preset label loss function; the training error is computed through the loss function, and the network parameters are dynamically adjusted according to the training error to drive the error down; once the error converges to the set requirement, the trained convolutional neural network is obtained.
8. The intelligent feature part identification method according to claim 1, characterized in that: for the different three-axis attitude angle combinations in step (1), each three-axis attitude angle varies over the range [0°, 360°] and takes a value every N°, where N is a common divisor of 360.
9. The intelligent feature part identification method according to claim 8, characterized in that: the different three-axis attitude angle combinations of step (1) together form (360/N)³ three-axis attitude angle combinations of the target spacecraft.
10. The intelligent feature part identification method according to claim 1, characterized in that: in (1), the three-dimensional geometric images of the spacecraft stored in the spacecraft image database under the different three-axis attitude angle combinations are obtained as follows:
the groups of three-axis attitude angle combinations of the target spacecraft are imported into simulation modeling software, preferably Satellite Tool Kit (STK) software, which can build a three-dimensional model of the spacecraft, thereby producing the three-dimensional geometric images of the spacecraft under the different three-axis attitude angle combinations.
CN202010350572.8A 2020-04-28 2020-04-28 Feature part intelligent recognition method Active CN111680552B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010350572.8A CN111680552B (en) 2020-04-28 2020-04-28 Feature part intelligent recognition method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010350572.8A CN111680552B (en) 2020-04-28 2020-04-28 Feature part intelligent recognition method

Publications (2)

Publication Number Publication Date
CN111680552A true CN111680552A (en) 2020-09-18
CN111680552B CN111680552B (en) 2023-10-03

Family

ID=72452614

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010350572.8A Active CN111680552B (en) 2020-04-28 2020-04-28 Feature part intelligent recognition method

Country Status (1)

Country Link
CN (1) CN111680552B (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030235332A1 (en) * 2002-06-20 2003-12-25 Moustafa Mohamed Nabil System and method for pose-angle estimation
US20090185723A1 (en) * 2008-01-21 2009-07-23 Andrew Frederick Kurtz Enabling persistent recognition of individuals in images
CN103093456A (en) * 2012-12-25 2013-05-08 北京农业信息技术研究中心 Corn ear character index computing method based on images
CN104482934A (en) * 2014-12-30 2015-04-01 华中科技大学 Multi-transducer fusion-based super-near distance autonomous navigation device and method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
刘巍; 陈玲; 马鑫; 李肖; 贾振元: "Monocular pose measurement method for high-speed targets based on color images", Chinese Journal of Scientific Instrument, no. 03, pages 675-682 *
吴小猷; 李文博; 张国琪; 关新; 郭胜; 刘易: "On-orbit identification of solar wing modal parameters based on visual measurement", Aerospace Control and Application, no. 03, pages 9-14 *

Also Published As

Publication number Publication date
CN111680552B (en) 2023-10-03

Similar Documents

Publication Publication Date Title
CN111862126B (en) Non-cooperative target relative pose estimation method combining deep learning and geometric algorithm
CN107679537B (en) A kind of texture-free spatial target posture algorithm for estimating based on profile point ORB characteristic matching
CN111563878B (en) Space target positioning method
CN109816725A (en) A kind of monocular camera object pose estimation method and device based on deep learning
CN108680165B (en) Target aircraft attitude determination method and device based on optical image
CN107945217B (en) Image characteristic point pair rapid screening method and system suitable for automatic assembly
CN106971189B (en) A kind of noisy method for recognising star map of low resolution
CN106097277B (en) A kind of rope substance point-tracking method that view-based access control model measures
CN115908554A (en) High-precision sub-pixel simulation star map and sub-pixel extraction method
Oestreich et al. On-orbit relative pose initialization via convolutional neural networks
CN115205467A (en) Space non-cooperative target part identification method based on light weight and attention mechanism
Tao et al. Visible and infrared image fusion-based image quality enhancement with applications to space debris on-orbit surveillance
CN114663880A (en) Three-dimensional target detection method based on multi-level cross-modal self-attention mechanism
CN114332243A (en) Rocket booster separation attitude measurement method based on perspective projection model
CN112561807B (en) End-to-end radial distortion correction method based on convolutional neural network
CN109657679B (en) Application satellite function type identification method
CN111680552B (en) Feature part intelligent recognition method
Piccinin et al. ARGOS: Calibrated facility for Image based Relative Navigation technologies on ground verification and testing
CN111523392A (en) Deep learning sample preparation method and recognition method based on satellite ortho-image full-attitude
CN115760984A (en) Non-cooperative target pose measurement method based on monocular vision by cubic star
CN111366162B (en) Small celestial body detector pose estimation method based on solar panel projection and template matching
Zhang et al. Benchmarking the Robustness of Object Detection Based on Near-Real Military Scenes
CN111735447A (en) Satellite-sensitive-simulation type indoor relative pose measurement system and working method thereof
Oumer Visual tracking and motion estimation for an on-orbit servicing of a satellite
Lee et al. Road following in an unstructured desert environment based on the EM (expectation-maximization) algorithm

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant