CN111680552B - Feature part intelligent recognition method - Google Patents

Feature part intelligent recognition method

Info

Publication number
CN111680552B
Authority
CN
China
Prior art keywords
image
spacecraft
dimensional geometric
images
gray level
Prior art date
Legal status
Active
Application number
CN202010350572.8A
Other languages
Chinese (zh)
Other versions
CN111680552A (en)
Inventor
汤亮
袁利
关新
王有懿
姚宁
宗红
冯骁
张科备
郝仁剑
郭子熙
刘昊
龚立纲
Current Assignee
Beijing Institute of Control Engineering
Original Assignee
Beijing Institute of Control Engineering
Priority date
Filing date
Publication date
Application filed by Beijing Institute of Control Engineering filed Critical Beijing Institute of Control Engineering
Priority to CN202010350572.8A priority Critical patent/CN111680552B/en
Publication of CN111680552A publication Critical patent/CN111680552A/en
Application granted granted Critical
Publication of CN111680552B publication Critical patent/CN111680552B/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/13 Satellite images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformation in the plane of the image
    • G06T3/40 Scaling the whole image or part thereof
    • G06T3/4023 Decimation- or insertion-based scaling, e.g. pixel or line decimation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/60 Analysis of geometric attributes
    • G06T7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]

Abstract

The invention discloses an intelligent recognition method for feature parts, suitable for recognizing local typical parts of failed satellites in space. The invention designs an intelligent recognition method for local typical feature parts based on a convolutional neural network. First, a satellite local-typical-part database containing rich information is created for the failed-satellite part-recognition task, the components of the typical parts are labeled, and training and test data sets are constructed. A deep convolutional network is then built and its parameters trained with the training data set; after training, the network intelligently recognizes the typical parts in an input image.

Description

Feature part intelligent recognition method
Technical Field
The invention belongs to the field of space target feature recognition, and particularly relates to an intelligent feature part recognition method.
Background
Protection of, and on-orbit servicing for, important space targets, chiefly satellites, has become a major development direction of aerospace technology worldwide, and recognition of satellite feature parts (such as antennas, solar panels and engines) is a key link. As spacecraft-approach and optical-imaging technologies mature, high-resolution optical imaging of space targets from space-based platforms becomes feasible, which in turn places higher demands on satellite target recognition, particularly the accurate recognition of satellite feature parts.
Space target recognition mainly uses space-target characteristic data to judge and identify attributes such as identity, attitude and state. At present, target characteristic data come mainly from ground-based detection with optical and radar equipment. However, ground-based detection data depend on many factors such as observation angle, target features, sun angle and the atmosphere, so the detection results carry great uncertainty. Although space-based target detection and recognition has been studied at home and abroad, the work concentrates on recognizing point targets and on-orbit motion states at long range; the adopted methods rely on feature points known in orbit and impose strong constraints on those feature points, and once the known feature points change, the existing methods can no longer recognize them accurately. In recent years, the development of deep learning has greatly improved image target recognition and brought new methods to the field of space target recognition.
Disclosure of Invention
The invention solves the following technical problem: to overcome the defect that existing recognition technologies depend excessively on known feature points, an intelligent feature-part recognition method is provided which can intelligently recognize typical target parts and tactical features in complex space environments, providing technical support for on-orbit scene depth recognition and understanding.
The technical scheme adopted by the invention is as follows. The intelligent feature-part recognition method comprises the following steps:
(1) Based on an attitude rotation method, construct a spacecraft image database in space, the database storing three-dimensional geometric images of the spacecraft under different three-axis attitude angle combinations;
(2) Weight-average the pixel values of the three color channels red (R), green (G) and blue (B) of each three-dimensional geometric image in the spacecraft image database to obtain a gray image of the three-dimensional geometric image; add measurement noise to the gray image to obtain a noisy gray image;
(3) Reduce the gray image of each three-dimensional geometric image in the spacecraft image database by cubic interpolation to obtain the corresponding interpolated image;
(4) Rotate the gray image of each three-dimensional geometric image in the spacecraft image database from step (2) to obtain a rotated gray image;
(5) Label the spacecraft feature-part contours in the gray images of step (2), the noisy gray images, the interpolated images of step (3) and the rotated gray images of step (4); assign a name to each labeled contour to obtain the feature-part labels, completing the labeling of the target spacecraft's feature parts;
(6) Construct a convolutional neural network and train it with the majority of the contour-labeled images in the spacecraft image database to obtain a trained convolutional neural network; input the remaining contour-labeled images into the trained network to recognize the spacecraft feature parts and output the recognition results, realizing intelligent recognition of spacecraft feature parts.
Preferably, step (1), constructing a spacecraft image database in space based on the attitude rotation method, the database storing three-dimensional geometric images of the spacecraft under different three-axis attitude angle combinations, is specifically:
(1) Based on the attitude rotation method, the spacecraft image database in space is constructed as follows: set the three-axis attitude angles of the target spacecraft as roll angle φ, pitch angle θ and yaw angle ψ. Each attitude angle varies over the range [0°, 360°] and is sampled every N°, forming multiple groups of three-axis attitude angle combinations of the spacecraft; these are imported into simulation modeling software, and three-dimensional geometric images of the spacecraft under the different three-axis attitude angle combinations are captured, yielding the spacecraft image database.
Preferably, step (2), weighted-averaging the pixel values of the three color channels red (R), green (G) and blue (B) of each three-dimensional geometric image in the spacecraft image database to obtain the gray image of the three-dimensional geometric image, is specifically:
Gray(i,j)=0.299×R(i,j)+0.587×G(i,j)+0.114×B(i,j)
where i, j are the row and column coordinates of the gray image, i ≥ 1, j ≥ 1; R(i,j), G(i,j) and B(i,j) are the pixel values of the red (R), green (G) and blue (B) channels at row i, column j; and Gray(i,j) is the pixel value at row i, column j of the gray image.
Preferably, step (3), reducing the gray image of each three-dimensional geometric image in the spacecraft image database by cubic interpolation to obtain the corresponding interpolated image, is specifically:
f(i+u, j+v)=A·B·C
where f(i+u, j+v) is the pixel value of the interpolated image at row i+u, column j+v; u, v are the interpolation intervals; and A, B, C are coefficient matrices of the form:
A=[S(1+u) S(u) S(1−u) S(2−u)]
B = the 4×4 matrix of gray values Gray(i−1+m, j−1+n), 0 ≤ m, n ≤ 3, of the 16 pixels neighbouring (i, j)
C=[S(1+v) S(v) S(1−v) S(2−v)]ᵀ
where S is the interpolation kernel:
S(w) = 1 − 2|w|² + |w|³ for 0 ≤ |w| < 1; S(w) = 4 − 8|w| + 5|w|² − |w|³ for 1 ≤ |w| < 2; S(w) = 0 for |w| ≥ 2
This preferred interpolation scheme and its parameter requirements greatly improve the image target recognition effect.
Preferably, step (4), rotating the gray image of each three-dimensional geometric image in the spacecraft image database from step (2) to obtain the rotated gray image, is specifically:
Let the coordinates of a pixel of the original image (i.e., the gray image of the three-dimensional geometric image) be (i, j), and its coordinates in the rotated gray image be (i₁, j₁). The matrix expression of the rotation transformation is:
i₁ = (i − a)·cosθ − (j − b)·sinθ + c
j₁ = (i − a)·sinθ + (j − b)·cosθ + d
and the inverse transformation is:
i = (i₁ − c)·cosθ + (j₁ − d)·sinθ + a
j = −(i₁ − c)·sinθ + (j₁ − d)·cosθ + b
where θ is the angle of rotation about the center point of the original image (i.e., the gray image of the three-dimensional geometric image), within the plane of the original image, when transforming from the original image to the rotated gray image;
(a, b) are the coordinates of the rotation center in the unrotated image and (c, d) the coordinates of the center point after rotation:
a = w₀/2, b = h₀/2, c = w₁/2, d = h₁/2
where w₀ is the width of the original image (before rotation) and h₀ the length of the original image (before rotation); w₁ is the width of the rotated gray image and h₁ the length of the rotated gray image. To faithfully simulate the constraining effect of the imaging dimensions (length, width) of the optical camera, the imaging size of the rotated spacecraft within the optical camera changes; therefore the rotated image length h₁ and width w₁ differ from the original image length h₀ and width w₀.
This preferred rotation scheme and its parameter requirements greatly improve the image target recognition effect.
Preferably, step (1), constructing a spacecraft image database in space based on the attitude rotation method, the database storing three-dimensional geometric images of the spacecraft under different three-axis attitude angle combinations, is specifically:
The target spacecraft are M kinds of spacecraft in space. The multiple groups of three-axis attitude angle combinations of the target spacecraft are imported into STK software, three-dimensional geometric images of the M kinds of spacecraft under the different three-axis attitude angle combinations are captured in the STK software, and the three-dimensional geometric images are stored in the spacecraft image database in space.
Preferably, to simulate the noise generated when real photographic equipment images the spacecraft, noise is added after the gray image is obtained in step (2), and the result serves as the gray image finally produced by step (2). Adding measurement noise to the gray image to obtain the noisy gray image is specifically: Gaussian noise is added for blurring, which further improves recognition accuracy.
Adding Gaussian noise for blurring is specifically: on the basis of the gray image of the three-dimensional geometric image, Gaussian noise is added for blurring, giving the noisy gray image:
G_noise(i,j)=Gray(i,j)+G_GB(i,j)
where i, j are the row and column coordinates of the gray image; G_noise(i,j) is the pixel value at row i, column j of the noisy gray image; G_GB(i,j) is the Gaussian noise term, σ is the standard deviation of its normal distribution, and r is the blur radius of the noisy gray image.
After the noise is added, a noisy gray image of the spacecraft is obtained; it simulates more realistically the pictures of a spacecraft imaged in the space environment, provides accurate images for neural network training and feature-part recognition, and improves the robustness and accuracy of feature-part recognition.
Preferably, training the constructed convolutional neural network is specifically: the convolutional neural network receives image data as input and outputs a label for the feature part in the image as the predicted label; the predicted label and the feature-part label from step (5) are fed together into a preset label loss function, through which the training error is calculated; the neural network parameters are adjusted dynamically according to the training error so that the error converges; once the error has converged to the preset requirement, the trained convolutional neural network is obtained.
Preferably, in the different three-axis attitude angle combinations of step (1), each three-axis attitude angle varies over the range [0°, 360°] and is sampled every N°, where N is a common divisor of 360, together forming (360/N)³ three-axis attitude angle combinations of the target spacecraft.
Preferably, the three-dimensional geometric images of the spacecraft under different three-axis attitude angle combinations stored in the spacecraft image database of step (1) are obtained specifically as follows:
The multiple groups of three-axis attitude angle combinations of the target spacecraft are imported into simulation modeling software, the simulation modeling software preferably being Satellite Tool Kit (STK) software, which can construct a three-dimensional model of the spacecraft, thereby producing three-dimensional geometric images of the spacecraft under the different three-axis attitude angle combinations.
Compared with the prior art, the invention has the following beneficial effects:
(1) The proposed intelligent feature-part recognition method combines feature extraction with feature recognition, avoids manual design of features, has stronger universality, and greatly improves the recognition efficiency and accuracy for space target feature parts.
(2) The method draws on satellite picture data from multiple sources, including two-dimensional images obtained by projecting a satellite three-dimensional model, and renders the satellite images in simulation taking the actual operating environment of the on-orbit satellite into account; this yields more realistic satellite image data, enlarges the amount of satellite image data, and facilitates training of the convolutional network.
(3) The intelligent feature-part recognition method does not depend on feature points known in orbit, has strong recognition capability for spacecraft without known feature points, offers stronger universality and recognition robustness, and greatly improves the recognition efficiency and accuracy for spacecraft feature parts such as solar panels.
Drawings
FIG. 1 is a schematic illustration of the process of the present invention;
FIG. 2 is a three-dimensional model of a spacecraft modeled by the method of the present invention;
FIG. 3 is a graph of a loss function during training of the method of the present invention;
FIG. 4 shows the recognition result obtained by the method of the present invention.
Detailed Description
The invention is described in further detail below with reference to the drawings and the specific embodiments.
The invention discloses an intelligent recognition method for feature parts, suitable for recognizing local typical parts of failed satellites in space. The invention designs an intelligent recognition method for local typical feature parts based on a convolutional neural network. First, a satellite local-typical-part database containing rich information is created for the failed-satellite part-recognition task, the components of the typical parts are labeled, and training and test data sets are constructed. A deep convolutional network is then built and its parameters trained with the training data set; after training, the network intelligently recognizes the typical parts in an input image.
Tasks such as capturing failed spacecraft in space and rendezvous and docking between spacecraft require accurate acquisition of information such as the relative attitude of the spacecraft, and recognition of spacecraft feature parts is an important technical means for estimating such information. Conventional feature-part recognition methods based on analytic algorithms suffer from large edge-point recognition errors and excessive dependence on manually identified known feature points. Addressing these problems, the invention designs an intelligent recognition method for local typical feature parts based on a convolutional neural network; it removes the recognition method's excessive dependence on manually identified known feature points and recognizes local typical feature parts of a spacecraft (such as solar wings) in complex space environments with weak light, jittery and blurred images, and the like.
As shown in FIG. 1, the intelligent feature-part recognition method of the invention comprises the following steps:
(1) Construct a spacecraft image database in space based on the attitude rotation method: set the three-axis attitude angles of the target spacecraft as roll angle φ, pitch angle θ and yaw angle ψ. Preferably, each attitude angle varies over the range [0°, 360°] and is sampled every N° (N a common divisor of 360, preferably 90°), together forming (360/N)³ three-axis attitude angle combinations of the target spacecraft; these combinations are imported into simulation modeling software, and three-dimensional geometric images of the spacecraft under the different combinations are captured, yielding the spacecraft image database. A further preferred scheme is specifically:
(1-1) Set the spacecraft's three-axis attitude angles, roll angle φ, pitch angle θ and yaw angle ψ, to traverse the range [0°, 360°] in steps of N°, covering all combinations of roll, pitch and yaw angles, i.e. (360/N)³ combinations in total.
(1-2) Select the i-th spacecraft in the Satellite Tool Kit (STK) simulation modeling software, set the spacecraft's three-axis attitude to each of the traversal combinations obtained in (1-1), and capture the spacecraft's geometric picture under every attitude combination.
(1-3) Select the (i+1)-th spacecraft in the Satellite Tool Kit (STK) simulation modeling software and repeat: set the three-axis attitude to each traversal combination from (1-1) and capture the geometric picture under every combination. With M kinds of spacecraft configured in STK, M×(360/N)³ three-dimensional geometric images are obtained, forming the spacecraft image database. The database contains multiple attitudes, multiple feature parts and other information, providing a rich foundation for accurately recognizing spacecraft feature parts and basic data for further improving the recognition effect.
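To make the attitude grid concrete, the sketch below enumerates the roll/pitch/yaw combinations of step (1-1) and checks the database size of step (1-3). It is only an illustrative sketch: the STK rendering step is not reproduced, and the model count M is an assumed example value.

```python
from itertools import product

def attitude_grid(n_deg=90):
    """All roll/pitch/yaw combinations sampled every n_deg degrees over
    [0, 360), i.e. the (360/N)**3 combinations of step (1-1)."""
    assert 360 % n_deg == 0, "N must be a common divisor of 360"
    angles = range(0, 360, n_deg)
    return list(product(angles, angles, angles))

combos = attitude_grid(90)                # preferred N = 90 degrees
assert len(combos) == (360 // 90) ** 3    # 64 combinations per spacecraft
M = 5                                     # illustrative number of spacecraft models
print(f"database size: {M * len(combos)} three-dimensional geometric images")
```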
(2) Weight-average the pixel values of the three color channels red (R), green (G) and blue (B) of each three-dimensional geometric image in the spacecraft image database to obtain the gray image of the three-dimensional geometric image (denoted subset 1) and the noisy gray image of the three-dimensional geometric image (denoted subset 2). The preferred scheme is specifically:
(2-1) As shown in FIG. 2, weight-average the pixel values of the three color channels red (R), green (G) and blue (B) of each three-dimensional geometric image in the spacecraft image database to obtain the gray image of the three-dimensional geometric image. The preferred formula is:
Gray(i,j)=0.299×R(i,j)+0.587×G(i,j)+0.114×B(i,j)
where i and j are the row and column coordinates of the image, i ≥ 1, j ≥ 1; R(i,j), G(i,j) and B(i,j) are the pixel values of the corresponding channels at row i, column j; and Gray(i,j) is the pixel value at row i, column j of the gray image. The collocation of the coefficients in this formula further improves the recognition effect.
Through the graying of step (2-1), M×(360/N)³ gray images of the spacecraft three-dimensional geometric images are obtained, i.e., the full set of images in subset 1.
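The graying of step (2-1) is a single weighted sum per pixel. A minimal NumPy sketch, with a random array standing in for an STK render:

```python
import numpy as np

def to_gray(rgb):
    """Gray(i,j) = 0.299*R + 0.587*G + 0.114*B, applied to every pixel."""
    weights = np.array([0.299, 0.587, 0.114])
    return rgb.astype(np.float64) @ weights          # (H, W, 3) -> (H, W)

rgb = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)  # stand-in render
gray = to_gray(rgb)                                  # one image of subset 1
```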
(2-2) Considering influence factors such as noise and jitter blur present when a spacecraft is imaged in the space environment, add Gaussian noise for blurring on the basis of the gray image of the three-dimensional geometric image from step (2-1) to obtain the noisy gray image, i.e., subset 2. The preferred expression is:
G_noise(i,j)=Gray(i,j)+G_GB(i,j)
where i, j are the row and column coordinates of the gray image; G_noise(i,j) is the pixel value at row i, column j of the noisy gray image; G_GB(i,j) is the Gaussian noise term, σ is the standard deviation of its normal distribution, and r is the blur radius of the noisy gray image.
After the noise is added, a noisy gray image of the spacecraft is obtained; it simulates more realistically the pictures of a spacecraft imaged in the space environment, provides accurate images for neural network training and feature-part recognition, and further improves the robustness and accuracy of feature-part recognition.
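A minimal sketch of the noise model of step (2-2), assuming zero-mean additive Gaussian noise followed by a Gaussian blur of radius r; the patent does not fix σ or r, so the values below are illustrative:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def add_sensor_noise(gray, noise_sigma=5.0, blur_sigma=1.0, blur_radius=2):
    """G_noise = Gray + Gaussian noise, then blurred to mimic jitter."""
    noisy = gray + np.random.normal(0.0, noise_sigma, gray.shape)
    # truncate = radius/sigma limits the kernel support to the blur radius r
    blurred = gaussian_filter(noisy, sigma=blur_sigma,
                              truncate=blur_radius / blur_sigma)
    return np.clip(blurred, 0.0, 255.0)              # one image of subset 2
```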
(3) Reduce the gray image of each three-dimensional geometric image in the spacecraft image database by cubic interpolation to obtain the corresponding interpolated image. The preferred scheme is:
f(i+u, j+v)=A·B·C
where f(i+u, j+v) is the pixel value of the interpolated image at row i+u, column j+v; u, v are the interpolation intervals; A, B and C are coefficient matrices, preferably of the form:
A=[S(1+u) S(u) S(1−u) S(2−u)]
B = the 4×4 matrix of gray values Gray(i−1+m, j−1+n), 0 ≤ m, n ≤ 3, of the 16 pixels neighbouring (i, j)
C=[S(1+v) S(v) S(1−v) S(2−v)]ᵀ
where S(·) is the interpolation kernel:
S(w) = 1 − 2|w|² + |w|³ for 0 ≤ |w| < 1; S(w) = 4 − 8|w| + 5|w|² − |w|³ for 1 ≤ |w| < 2; S(w) = 0 for |w| ≥ 2
The interpolated images obtained in step (3) realistically simulate images of the spacecraft imaged at different distances in the space environment, providing a more accurate training data set for neural network recognition.
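The scheme f(i+u, j+v) = A·B·C transcribes directly into code. A sketch under the kernel S stated above; the 0.5 scale factor and the border clamping are illustrative choices, and a production implementation would use an optimized library routine:

```python
import numpy as np

def S(w):
    """Interpolation kernel S from step (3)."""
    w = abs(w)
    if w < 1:
        return 1 - 2 * w**2 + w**3
    if w < 2:
        return 4 - 8 * w + 5 * w**2 - w**3
    return 0.0

def bicubic_sample(img, y, x):
    """f(i+u, j+v) = A * B * C with i = floor(y), u = y - i."""
    i, j = int(np.floor(y)), int(np.floor(x))
    u, v = y - i, x - j
    A = np.array([S(1 + u), S(u), S(1 - u), S(2 - u)])   # row weights
    C = np.array([S(1 + v), S(v), S(1 - v), S(2 - v)])   # column weights
    B = img[i - 1:i + 3, j - 1:j + 3]                    # 4x4 grey neighbourhood
    return float(A @ B @ C)

def shrink(img, scale=0.5):
    """Reduce a grey image by cubic interpolation (step 3)."""
    h1, w1 = int(img.shape[0] * scale), int(img.shape[1] * scale)
    out = np.empty((h1, w1))
    for r in range(h1):
        for c in range(w1):
            y = min(max(r / scale, 1.0), img.shape[0] - 3.0)  # clamp to support
            x = min(max(c / scale, 1.0), img.shape[1] - 3.0)
            out[r, c] = bicubic_sample(img, y, x)
    return out

small = shrink(np.random.rand(64, 64) * 255, scale=0.5)  # 32x32 result
```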
(4) Rotate the gray image of each three-dimensional geometric image in subset 1 of the spacecraft image database from step (2) to obtain the rotated gray image. The preferred scheme is:
Let the coordinates of a pixel of the original image (i.e., the gray image of the three-dimensional geometric image) be (i, j), and its coordinates in the rotated gray image be (i₁, j₁). The preferred matrix expression of the rotation transformation is:
i₁ = (i − a)·cosθ − (j − b)·sinθ + c
j₁ = (i − a)·sinθ + (j − b)·cosθ + d
and the preferred inverse transformation is:
i = (i₁ − c)·cosθ + (j₁ − d)·sinθ + a
j = −(i₁ − c)·sinθ + (j₁ − d)·cosθ + b
where θ is the angle of rotation about the center point of the original image (i.e., the gray image of the three-dimensional geometric image), within the plane of the original image, when transforming from the original image to the rotated gray image;
(a, b) are the coordinates of the rotation center in the unrotated image, i.e., the geometric center of the spacecraft's position in the image, and (c, d) are the coordinates of the center point after rotation; the center point undergoes both rotation and translation. The further preferred scheme is specifically:
a = w₀/2, b = h₀/2, c = w₁/2, d = h₁/2
where w₀ = 600 is the width of the original image and h₀ = 1000 the length of the original image; w₁ = 600 is the width of the rotated gray image and h₁ = 360 the length of the rotated gray image. To faithfully simulate the constraining effect of the imaging size (length and width) of the optical camera, the imaging size of the rotated spacecraft in the optical camera changes; therefore the rotated image length h₁ and width w₁ differ from the original image length h₀ and width w₀.
The preferred formulas and parameter requirements of this step further improve the accuracy of feature-part recognition.
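A nearest-neighbour sketch of step (4) built on the inverse transformation above (inverse mapping fills every output pixel); the rotation angle in the usage line is illustrative, while the image sizes follow the w₀ = 600, h₀ = 1000, w₁ = 600, h₁ = 360 example in the text:

```python
import numpy as np

def rotate_gray(img, theta_deg, out_hw):
    """Rotate about the image centre onto a canvas of a different size."""
    h0, w0 = img.shape
    h1, w1 = out_hw
    a, b = (w0 - 1) / 2.0, (h0 - 1) / 2.0       # centre before rotation (a, b)
    c, d = (w1 - 1) / 2.0, (h1 - 1) / 2.0       # centre after rotation (c, d)
    t = np.deg2rad(theta_deg)
    cos_t, sin_t = np.cos(t), np.sin(t)
    out = np.zeros((h1, w1))
    for j1 in range(h1):                        # rows of the rotated image
        for i1 in range(w1):                    # columns of the rotated image
            # inverse transform: rotated pixel (i1, j1) -> source pixel (i, j)
            i = (i1 - c) * cos_t + (j1 - d) * sin_t + a
            j = -(i1 - c) * sin_t + (j1 - d) * cos_t + b
            ri, rj = int(round(i)), int(round(j))
            if 0 <= ri < w0 and 0 <= rj < h0:
                out[j1, i1] = img[rj, ri]
    return out

rotated = rotate_gray(np.random.rand(1000, 600) * 255,
                      theta_deg=30, out_hw=(360, 600))
```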
(5) Label the spacecraft feature-part contours in subsets 1 and 2 of the gray images of each three-dimensional geometric image in the spacecraft image database from step (2), in the interpolated gray images of each three-dimensional geometric image from step (3), and in the rotated gray images of each three-dimensional geometric image from step (4); assign a name to each labeled spacecraft feature-part contour to obtain the feature-part labels, completing the labeling of the target spacecraft's feature parts. Taking the spacecraft solar panel as an example, the preferred scheme is:
(5-1) Reject images in which the position of the solar-panel feature cannot be determined.
(5-2) Delineate and mark the contour of the spacecraft's solar panel with a rectangular box, and mark the boxed region with the label s1.
(5-3) For pictures in which the spacecraft's solar panel is occluded, delineate and mark along the contour of the solar panel with a polyline, and mark the delineated region with the label s1.
The images specially processed in this step further improve the accuracy of feature-part recognition.
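One way the annotations of steps (5-2) and (5-3) could be recorded; only the s1 label and the rectangle-versus-polyline distinction come from the text, while the file names and field names are hypothetical:

```python
# Hypothetical annotation records for step (5).
annotations = [
    {   # (5-2): unoccluded solar panel, rectangular box
        "image": "sat03_roll090_pitch000_yaw180_gray.png",
        "label": "s1",
        "shape": "rectangle",
        "points": [[412, 118], [598, 240]],      # top-left, bottom-right
    },
    {   # (5-3): occluded panel, polyline traced along the visible outline
        "image": "sat03_roll090_pitch000_yaw180_noisy.png",
        "label": "s1",
        "shape": "polyline",
        "points": [[412, 118], [500, 110], [598, 150], [590, 240], [420, 236]],
    },
]

def box_area(rec):
    """Region area y of a rectangular annotation, used by the loss of step (6)."""
    (x0, y0), (x1, y1) = rec["points"][:2]
    return abs(x1 - x0) * abs(y1 - y0)
```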
(6) Construct a convolutional neural network and train it with the majority of the contour-labeled images in the spacecraft image database to obtain the trained convolutional neural network; input the remaining contour-labeled images of the spacecraft image database into the trained convolutional neural network to recognize the spacecraft feature parts, and output the recognition results, realizing intelligent recognition of spacecraft feature parts. The preferred scheme is:
(6-1) Construct the loss function for feature-edge recognition as L₂, preferably expressed as
L₂ = M₂·(y − ŷ)²
where y is the area of the labeled feature-part region in each picture of the spacecraft image database, ŷ is the area of the feature-part region recognized by the neural network, and M₂ is a weight coefficient (here M₂ is selected as 1).
(6-2) Construct the preferred convolutional network: design the network with K channels in total, the weight coefficient of channel j being W_j, 1 ≤ j ≤ K (K preferably 10000); each channel's weight coefficient is initialized to W_j(0) (W_j(0) preferably 0.001).
(6-3) Train the neural network with the majority of the contour-labeled images in the spacecraft image database: feed the labeled pictures of the spacecraft image database into the constructed convolutional network; obtain the area y of the region containing the spacecraft feature part from the labels of the annotated images; and compute the area ŷ of the region containing the spacecraft feature part through the channel weight coefficients W_j(l) of the neural network (initially W_j(0)).
(6-4) Compute the feature-edge recognition loss L₂ of (6-1).
(6-5) Judge whether L₂ ≤ L₂min (L₂min preferably 0.16); if so, proceed to (6-7); otherwise proceed to (6-6);
(6-6) Update the weight coefficient of each channel of the convolutional network: W_j(l+1) = W_j(l) − dt·∂L₂/∂W_j, where dt is a design coefficient and l is the training step number, 1 ≤ l ≤ l_max; return to (6-3) for iterative computation. As shown in FIG. 3, the edge loss function converges to within 0.16 after several rounds of iterative training.
(6-7) Fix the neural network parameters, i.e. the weight coefficient W_j(l) of each channel, obtaining the trained neural network.
(6-8) Input the contour-labeled images of the remaining spacecraft image database into the trained neural network, compute the area of the spacecraft feature-part region, and output the recognition result. As shown in FIG. 4, recognition with the preferred convolutional neural network accurately locates the spacecraft's solar panel, realizing intelligent recognition of the spacecraft feature part.
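A minimal training sketch of steps (6-1) through (6-8), with PyTorch standing in for the unspecified network: the layer sizes, learning rate dt and toy data are illustrative assumptions, while the area loss L₂ = M₂(y − ŷ)² with M₂ = 1 and the stopping threshold L₂min = 0.16 follow the text:

```python
import torch
import torch.nn as nn

class AreaNet(nn.Module):
    """Illustrative stand-in for the patent's convolutional network:
    a small CNN that regresses the (normalised) feature-part area."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(16, 1)

    def forward(self, x):
        return self.head(self.features(x).flatten(1)).squeeze(1)

def train(net, images, areas, dt=1e-3, l2_min=0.16, max_steps=10_000):
    """Iterate W(l+1) = W(l) - dt * grad until the area loss
    L2 = M2 * (y - y_hat)**2 (M2 = 1) falls below L2_min = 0.16."""
    opt = torch.optim.SGD(net.parameters(), lr=dt)
    for _ in range(max_steps):
        y_hat = net(images)
        loss = ((areas - y_hat) ** 2).mean()     # L2 with M2 = 1
        if loss.item() <= l2_min:                # (6-5): converged, freeze weights
            break
        opt.zero_grad()
        loss.backward()                          # (6-6): gradient step of size dt
        opt.step()
    return net

# Toy usage with random stand-in data (normalised areas in [0, 1]):
net = AreaNet()
imgs = torch.rand(32, 1, 64, 64)
ys = torch.rand(32)
train(net, imgs, ys)
```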
The invention combines feature extraction with feature recognition, avoids manual design of features, has stronger universality, and greatly improves the recognition efficiency and accuracy for space target feature parts. The method draws on satellite picture data from multiple sources, including two-dimensional images obtained by projecting a satellite three-dimensional model, and renders the satellite images in simulation taking the actual operating environment of the on-orbit satellite into account; this yields more realistic satellite image data, enlarges the amount of satellite image data, and facilitates training of the convolutional network.
Moreover, the method does not depend on feature points known in orbit, has strong recognition capability for spacecraft without known feature points, offers stronger universality and recognition robustness, and greatly improves the recognition efficiency and accuracy for spacecraft feature parts such as solar panels.
Matters not described in detail in this specification are technologies well known to those skilled in the art.

Claims (10)

1. An intelligent feature-part recognition method, characterized by comprising the following steps:
(1) Based on an attitude rotation method, construct a spacecraft image database in space, the database storing three-dimensional geometric images of the spacecraft under different three-axis attitude angle combinations;
(2) Weight-average the pixel values of the three color channels red R, green G and blue B of each three-dimensional geometric image in the spacecraft image database to obtain a gray image of the three-dimensional geometric image; add measurement noise to the gray image to obtain a noisy gray image;
(3) Reduce the gray image of each three-dimensional geometric image in the spacecraft image database by cubic interpolation to obtain the corresponding interpolated image;
(4) Rotate the gray image of each three-dimensional geometric image in the spacecraft image database from step (2) to obtain a rotated gray image;
(5) Label the spacecraft feature-part contours in the gray images of step (2), the noisy gray images, the interpolated images of step (3) and the rotated gray images of step (4); assign a name to each labeled contour to obtain the feature-part labels, completing the labeling of the target spacecraft's feature parts;
(6) Construct a convolutional neural network and train it with the majority of the contour-labeled images in the spacecraft image database to obtain a trained convolutional neural network; input the remaining contour-labeled images into the trained network to recognize the spacecraft feature parts and output the recognition results, realizing intelligent recognition of spacecraft feature parts.
2. The intelligent feature-part recognition method according to claim 1, wherein step (1), constructing a spacecraft image database in space based on the attitude rotation method, the database storing three-dimensional geometric images of the spacecraft under different three-axis attitude angle combinations, is specifically:
(1) Based on the attitude rotation method, the spacecraft image database in space is constructed as follows: set the three-axis attitude angles of the target spacecraft as roll angle φ, pitch angle θ and yaw angle ψ; each attitude angle varies over the range [0°, 360°] and is sampled every N°, forming multiple groups of three-axis attitude angle combinations of the spacecraft; these are imported into simulation modeling software, and three-dimensional geometric images of the spacecraft under the different three-axis attitude angle combinations are captured, yielding the spacecraft image database.
3. The intelligent feature-part recognition method according to claim 1, wherein step (2), weighted-averaging the pixel values of the three color channels red R, green G and blue B of each three-dimensional geometric image in the spacecraft image database to obtain the gray image of the three-dimensional geometric image, is specifically:
Gray(i,j)=0.299×R(i,j)+0.587×G(i,j)+0.114×B(i,j)
where i, j are the row and column coordinates of the gray image, i ≥ 1, j ≥ 1; R(i,j), G(i,j) and B(i,j) are the pixel values of the red R, green G and blue B channels at row i, column j; and Gray(i,j) is the pixel value at row i, column j of the gray image.
4. The intelligent feature-part recognition method according to claim 1, wherein step (3), reducing the gray image of each three-dimensional geometric image in the spacecraft image database by cubic interpolation to obtain the corresponding interpolated image, is specifically:
f(i+u, j+v)=A·B·C
where f(i+u, j+v) is the pixel value of the interpolated image at row i+u, column j+v; u, v are the interpolation intervals; A, B and C are coefficient matrices of the form:
A=[S(1+u) S(u) S(1−u) S(2−u)]
B = the 4×4 matrix of gray values Gray(i−1+m, j−1+n), 0 ≤ m, n ≤ 3, of the 16 pixels neighbouring (i, j)
C=[S(1+v) S(v) S(1−v) S(2−v)]ᵀ
where Gray(i,j) is the pixel value at row i, column j of the gray image, and S is the interpolation kernel:
S(w) = 1 − 2|w|² + |w|³ for 0 ≤ |w| < 1; S(w) = 4 − 8|w| + 5|w|² − |w|³ for 1 ≤ |w| < 2; S(w) = 0 for |w| ≥ 2
5. The intelligent feature-part recognition method according to claim 1, wherein step (1), constructing a spacecraft image database in space based on the attitude rotation method, the database storing three-dimensional geometric images of the spacecraft under different three-axis attitude angle combinations, is specifically:
The target spacecraft are M kinds of spacecraft in space; the multiple groups of three-axis attitude angle combinations of the target spacecraft are imported into STK software, three-dimensional geometric images of the M kinds of spacecraft under the different three-axis attitude angle combinations are captured in the STK software, and the three-dimensional geometric images are stored in the spacecraft image database in space.
6. The intelligent feature-part recognition method according to claim 1, wherein, to simulate the noise generated when real photographic equipment images the spacecraft, noise is added after the gray image of the three-dimensional geometric image is obtained in step (2), and the result serves as the gray image finally produced by step (2); adding measurement noise to the gray image of the three-dimensional geometric image to obtain the noisy gray image is specifically: Gaussian noise is added for blurring.
7. The intelligent feature-part recognition method according to claim 1, wherein training the constructed convolutional neural network is specifically: the convolutional neural network receives image data as input and outputs a label for the feature part in the image as the predicted label; the predicted label and the feature-part label from step (5) are fed together into a preset label loss function, through which the training error is calculated; the neural network parameters are adjusted dynamically according to the training error so that the error converges; once the error has converged to the preset requirement, the trained convolutional neural network is obtained.
8. The intelligent feature-part recognition method according to claim 1, wherein, in the different three-axis attitude angle combinations of step (1), each three-axis attitude angle varies over the range [0°, 360°] and is sampled every N°, N being a common divisor of 360.
9. The intelligent feature-part recognition method according to claim 8, wherein the different three-axis attitude angle combinations of step (1) together form (360/N)³ three-axis attitude angle combinations of the target spacecraft.
10. The intelligent feature-part recognition method according to claim 1, wherein the three-dimensional geometric images of the spacecraft under different three-axis attitude angle combinations stored in the spacecraft image database of step (1) are obtained specifically as follows:
The multiple groups of three-axis attitude angle combinations of the target spacecraft are imported into simulation modeling software, the simulation modeling software being Satellite Tool Kit software, which can construct a three-dimensional model of the spacecraft, thereby obtaining three-dimensional geometric images of the spacecraft under the different three-axis attitude angle combinations.
CN202010350572.8A 2020-04-28 2020-04-28 Feature part intelligent recognition method Active CN111680552B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010350572.8A CN111680552B (en) 2020-04-28 2020-04-28 Feature part intelligent recognition method

Publications (2)

Publication Number Publication Date
CN111680552A CN111680552A (en) 2020-09-18
CN111680552B 2023-10-03

Family

ID=72452614

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010350572.8A Active CN111680552B (en) 2020-04-28 2020-04-28 Feature part intelligent recognition method

Country Status (1)

Country Link
CN (1) CN111680552B (en)

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6959109B2 (en) * 2002-06-20 2005-10-25 Identix Incorporated System and method for pose-angle estimation
US8180112B2 (en) * 2008-01-21 2012-05-15 Eastman Kodak Company Enabling persistent recognition of individuals in images

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103093456A (en) * 2012-12-25 2013-05-08 北京农业信息技术研究中心 Corn ear character index computing method based on images
CN104482934A (en) * 2014-12-30 2015-04-01 华中科技大学 Multi-transducer fusion-based super-near distance autonomous navigation device and method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Monocular pose measurement method for high-speed targets based on color images; Liu Wei; Chen Ling; Ma Xin; Li Xiao; Jia Zhenyuan; Chinese Journal of Scientific Instrument (No. 03); 675-682 *
On-orbit identification of solar wing modal parameters based on vision measurement; Wu Xiaoyou; Li Wenbo; Zhang Guoqi; Guan Xin; Guo Sheng; Liu Yi; Aerospace Control and Application (No. 03); 9-14 *

Also Published As

Publication number Publication date
CN111680552A (en) 2020-09-18

Similar Documents

Publication Publication Date Title
CN111862126B (en) Non-cooperative target relative pose estimation method combining deep learning and geometric algorithm
CN108665496B (en) End-to-end semantic instant positioning and mapping method based on deep learning
CN108470370B (en) Method for jointly acquiring three-dimensional color point cloud by external camera of three-dimensional laser scanner
CN107679537B (en) A kind of texture-free spatial target posture algorithm for estimating based on profile point ORB characteristic matching
CN111563878B (en) Space target positioning method
CN108680165B (en) Target aircraft attitude determination method and device based on optical image
CN114419147A (en) Rescue robot intelligent remote human-computer interaction control method and system
CN109086663A (en) The natural scene Method for text detection of dimension self-adaption based on convolutional neural networks
CN106251282A (en) A kind of generation method and device of mechanical arm sampling environment analogous diagram
CN106097277B (en) A kind of rope substance point-tracking method that view-based access control model measures
CN115908554A (en) High-precision sub-pixel simulation star map and sub-pixel extraction method
CN114332070A (en) Meteor crater detection method based on intelligent learning network model compression
CN104504691B (en) Camera position and posture measuring method on basis of low-rank textures
CN111062310A (en) Few-sample unmanned aerial vehicle image identification method based on virtual sample generation
CN116661334B (en) Missile tracking target semi-physical simulation platform verification method based on CCD camera
CN111680552B (en) Feature part intelligent recognition method
CN111260736B (en) In-orbit real-time calibration method for internal parameters of space camera
CN117351333A (en) Quick star image extraction method of star sensor
Koizumi et al. Development of attitude sensor using deep learning
CN116664622A (en) Visual movement control method and device
CN115760984A (en) Non-cooperative target pose measurement method based on monocular vision by cubic star
CN111366162B (en) Small celestial body detector pose estimation method based on solar panel projection and template matching
CN114419259A (en) Visual positioning method and system based on physical model imaging simulation
Voynov et al. AnyLens: A Generative Diffusion Model with Any Rendering Lens
Liu et al. Blind deblurring using space target features

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant