CN109657679B - Application satellite function type identification method - Google Patents

Application satellite function type identification method

Info

Publication number
CN109657679B
Authority
CN
China
Prior art keywords
satellite
feature map
map set
convolution
feature
Prior art date
Legal status
Active
Application number
CN201811556442.9A
Other languages
Chinese (zh)
Other versions
CN109657679A (en)
Inventor
庞羽佳
李志
蒙波
黄龙飞
张志民
王尹
韩旭
黄剑斌
Current Assignee
China Academy of Space Technology CAST
Original Assignee
China Academy of Space Technology CAST
Priority date
Application filed by China Academy of Space Technology CAST
Priority to CN201811556442.9A
Publication of CN109657679A
Application granted
Publication of CN109657679B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/20: Image preprocessing
    • G06V10/255: Detecting or recognising potential candidate objects based on visual cues, e.g. shapes
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods


Abstract

The invention provides a method for identifying the function type of an application satellite, comprising the following steps: acquiring a space image of a target satellite and adjusting its resolution to obtain a target image; and processing the target image with a ResNet neural network model to determine the function type of the target satellite. By adjusting the resolution of the target satellite's space image, the method produces an image the ResNet neural network model can recognize, and the model identifies the application satellite's function type autonomously on orbit, without relying on manual interpretation on the ground. This improves identification efficiency and meets the requirements of real-time servicing and operation on non-cooperative space targets with insufficient prior information.

Description

Application satellite function type identification method
Technical Field
The invention relates to a method for identifying the function type of an application satellite, and belongs to the technical field of satellite identification.
Background
An application satellite is an artificial satellite that directly serves the national economy, military activities, and culture and education; among all artificial satellites, application satellites account for the largest number of launches and the greatest variety. According to their basic operating characteristics, application satellites can be roughly divided into three categories: Earth observation, radio relay, and navigation/positioning reference. Application satellites play an important role in military and civil fields such as communication, navigation, and remote sensing.
On-orbit servicing of application satellites can extend their service life and improve their task execution capability, and is currently a research hotspot both in China and abroad. During on-orbit servicing, operations such as assisted orbit change, refueling, attitude control, satellite takeover, and fault repair can be performed on the serviced satellite as needed. To perform on-orbit repair and maintenance on a failed or malfunctioning satellite, the serviced satellite must first be safely approached. For non-cooperative targets, features such as the satellite's surface characteristics, key payloads, and motion state are difficult to acquire in advance, yet the target's function type, motion state, and operable parts must be accurately known during the approach in order to determine the approach, parking, or control strategy and to avoid collision.
In existing approaches to recognizing the type of a non-cooperative space target, the servicing spacecraft acquires a space image of the target spacecraft and transmits it to a ground control center, where operators determine the type of the non-cooperative target from the image using recognition methods such as edge detection and feature fitting, and then send the determined type back to the servicing spacecraft. This approach has a drawback: the satellite-ground loop introduces a large time delay and cannot meet the requirements of real-time servicing and operation on non-cooperative space targets with insufficient prior information.
Disclosure of Invention
Aiming at the problems in the prior art, the invention provides an application satellite function type identification method that autonomously processes, classifies, and identifies space-target visible-light images generated on orbit; the classification of a single target's function type takes less than 100 ms, and the accuracy can reach 90%.
The technical solution of the invention is as follows:
an application satellite function type identification method comprises the following steps:
acquiring a target satellite space image, and adjusting the resolution of the acquired target satellite space image to obtain a target image;
and performing data processing on the target image based on a ResNet neural network model, and determining a function type corresponding to the target satellite.
In an optional embodiment, the ResNet neural network model includes an initial convolutional layer, three residual learning modules, and a fully connected layer. Each residual learning module includes two residual learning units; the initial convolutional layer outputs a feature map set to the first residual learning module, the first residual learning module outputs a new feature map set to the second, the second to the third, and the third residual learning module outputs a new feature map set to the fully connected layer, where:
the initial convolutional layer for:
performing one-time two-dimensional convolution on the target image to obtain a feature map set;
a first residual learning unit of the residual learning module, configured to:
performing one convolution operation on the feature map set input to the unit to obtain the unit's residual feature map set; sequentially performing a normalization operation, an activation operation, and one convolution operation on the input feature map set to obtain a first-convolution feature map set, then sequentially performing a normalization operation, an activation operation, and one convolution operation on that set to obtain a second-convolution feature map set; determining the unit's output feature map set from the residual feature map set and the second-convolution feature map set, and outputting it to the second residual learning unit of the residual learning module;
a second residual learning unit of the residual learning module, configured to:
sequentially performing a normalization operation, an activation operation, and one convolution operation on the feature map set input to the unit to obtain a first-convolution feature map set, then sequentially performing a normalization operation, an activation operation, and one convolution operation on that set to obtain a second-convolution feature map set; determining the unit's output feature map set from the input feature map set and the second-convolution feature map set;
the full connection layer is used for:
performing average pooling on the feature map set output by the third residual learning module, extracting the feature vectors used for satellite type identification, performing a full-connection operation, determining the feature accumulation vector corresponding to each satellite type from the feature vectors, and performing classification probability statistics with a classifier, thereby determining the function type of the target satellite.
In an alternative embodiment, the resolution of the target image is not less than 256 × 256.
In an optional embodiment, the method for identifying a function type of an application satellite further includes:
establishing a satellite space image sample library, wherein the sample library comprises a plurality of satellite types and image sample sets corresponding to the satellite types;
and training and testing the initial ResNet neural network model based on the satellite space image sample library to obtain the ResNet neural network model.
In an optional embodiment, the creating a satellite space image sample library includes:
establishing three-dimensional models of different types of satellites, simulating a space environment, imaging the established three-dimensional models to obtain a certain number of simulated space image samples, establishing a corresponding relation between the types of the satellites and the simulated space image samples, and generating a satellite space image sample library.
In an alternative embodiment, the building a three-dimensional model of a different type of satellite includes:
according to the structural characteristics of various types of satellites, building structural three-dimensional models of different types of satellites;
and rendering the structural three-dimensional model according to the surface texture information of the various satellites to obtain three-dimensional models of different types of satellites.
In an alternative embodiment, the simulated spatial environment comprises:
the simulated light source is parallel light, the atmospheric molecular density is 0 to 0.01 times the ground atmospheric molecular density, the illumination intensity is 2 to 3 times the daily ground illumination intensity, and the light source incidence direction is randomly generated over the 4π solid angle around the three-dimensional model.
In an optional embodiment, the imaging the built three-dimensional model includes:
randomly imaging the built three-dimensional model from directions outside a 60° cone about the normal of the model's zenith-facing surface, with the angle between the light source beam direction and the camera imaging axis kept below 45° during imaging.
In an optional embodiment, an angle between the light beam direction of the light source and an imaging axis of the camera is determined according to the following formula:
$$\alpha = \arccos\left(\frac{x_1 x_2 + y_1 y_2 + z_1 z_2}{R_1 R_2}\right)$$

where α < 45° is the angle between the light source beam direction and the camera imaging axis;
R1 is the distance from the light source to the origin of the satellite's three-dimensional model, R2 is the distance from the camera to that origin, x1, y1, z1 are the coordinates of the light source, and x2, y2, z2 are the coordinates of the camera.
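For illustration, a small Python sketch of this geometry check (the helper name is ours, not from the patent; the formula is the dot-product angle between the light-source and camera position vectors, both taken from the model origin):

```python
import math

def light_camera_angle(light, camera):
    """Angle alpha (degrees) between the light-source and camera position vectors."""
    x1, y1, z1 = light
    x2, y2, z2 = camera
    r1 = math.sqrt(x1**2 + y1**2 + z1**2)   # R1: light source to model origin
    r2 = math.sqrt(x2**2 + y2**2 + z2**2)   # R2: camera to model origin
    cos_alpha = (x1 * x2 + y1 * y2 + z1 * z2) / (r1 * r2)
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_alpha))))

# A light/camera pair is usable only when alpha < 45 degrees:
assert light_camera_angle((1.0, 0.0, 0.0), (2.0, 1.0, 0.0)) < 45.0
```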
In an optional embodiment, after obtaining the certain number of simulated space image samples, the method further includes:
performing data enhancement on each simulated space image sample to obtain a simulated space image sample set with an expanded number of samples;
correspondingly, the establishing of the correspondence between the satellite type and the simulated space images comprises:
establishing a correspondence between the satellite type and the simulated space image samples in the expanded simulated space image sample set.
In an optional embodiment, the training and testing the initial ResNet neural network model based on the satellite space image sample library includes:
converting each sample in the satellite space image sample library from a three-channel color image into a single-channel grayscale image, and then training and testing the initial ResNet neural network model.
Compared with the prior art, the invention has the beneficial effects that:
(1) The method adjusts the resolution of the target satellite's space image to produce an image the ResNet neural network model can recognize, and the model identifies the application satellite's function type autonomously on orbit, without relying on manual interpretation on the ground; this improves identification efficiency and meets the requirements of real-time servicing and operation on non-cooperative space targets with insufficient prior information.
(2) When the method performs on-orbit autonomous identification of application satellite function types, no chain of traditional image processing algorithms such as segmentation, recognition, and classification is needed; the operation is simple, identification is fast and accurate, and the speed of on-orbit autonomous identification of application satellite function types is improved.
(3) The invention enables autonomous on-orbit processing, classification, and identification of space-target visible-light images generated by a spacecraft; it accommodates both three-channel color images and single-channel grayscale images, as well as image data from different illumination conditions and shooting angles, giving it strong adaptability to data.
(4) The method simulates the visible-light imaging environment and the reflection characteristics of satellite surface materials under space vacuum conditions, generating image samples with realistic visible-light reflection characteristics under different illuminations and angles; this greatly enriches the deep learning sample library, provides sufficient training and testing material for the neural network used in space target classification and recognition, and improves the reliability of the resulting model.
(5) The invention enables autonomous on-orbit processing, classification, and identification of space-target visible-light images generated by a spacecraft, with a target function type classification speed below 100 ms and an accuracy of up to 90%; it can greatly improve the intelligent cognition of space targets and enhance the on-orbit autonomy of the spacecraft.
Drawings
Fig. 1 is a flowchart of a method for identifying a function type of an application satellite according to an embodiment of the present invention;
fig. 2 is a schematic diagram of imaging the established three-dimensional model by simulating a spatial environment according to the embodiment of the present invention.
Detailed Description
The following detailed description of embodiments of the invention will be made with reference to the accompanying drawings.
The embodiment of the invention provides a method for identifying the function type of an application satellite, which comprises the following steps:
step 101: acquiring a target satellite space image, and adjusting the resolution of the acquired target satellite space image to obtain a target image;
step 102: and performing data processing on the target image based on a ResNet neural network model, and determining a function type corresponding to the target satellite.
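As an illustration only, these two steps might look as follows in TensorFlow (the platform named in the specific embodiment later); the model object, the 256 × 256 target resolution, and the class names are assumptions drawn from this description, not code from the patent:

```python
import tensorflow as tf

def identify_function_type(space_image, model):
    """Step 101: adjust resolution; step 102: classify with the ResNet model."""
    classes = ('Earth observation', 'radio relay', 'navigation positioning reference')
    target = tf.image.resize(space_image, [256, 256])   # resolution not less than 256 x 256
    probs = model(tf.expand_dims(target, axis=0))       # add a batch dimension
    return classes[int(tf.argmax(probs, axis=-1)[0])]
```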
By adjusting the resolution of the target satellite's space image, the method produces an image the ResNet neural network model can recognize, and the model identifies the application satellite's function type autonomously on orbit, without relying on manual interpretation on the ground; this improves identification efficiency and meets the requirements of real-time servicing and operation on non-cooperative space targets with insufficient prior information.
Table 1: Specific structural parameters of the ResNet neural network model

| Layer | Structure | Convolution kernels | Kernel size | Stride |
| --- | --- | --- | --- | --- |
| Conv0 | 1 two-dimensional convolution | 16 | 3 × 3 | 1 |
| Conv1_x | 2 residual learning units | 16 | 3 × 3 | 1 |
| Conv2_x | 2 residual learning units | 32 | 3 × 3 | 2 |
| Conv3_x | 2 residual learning units | 64 | 3 × 3 | 2 |
| Full connection | average pooling, full connection, softmax | - | - | - |

where Conv0 is the initial convolutional layer, Conv1_x is the first residual learning module, Conv2_x is the second residual learning module, Conv3_x is the third residual learning module, and Full connection is the fully connected layer.
As shown in Table 1, in an optional embodiment the ResNet neural network model includes an initial convolutional layer, three residual learning modules, and a fully connected layer. Each residual learning module includes two residual learning units; the initial convolutional layer outputs a feature map set to the first residual learning module, the first module outputs a new feature map set to the second, the second to the third, and the third residual learning module outputs a new feature map set to the fully connected layer, where:
the initial convolutional layer for:
performing one two-dimensional convolution on the target image to obtain a first feature map set, where the number of convolution kernels is preferably 16, the kernel size is 3 × 3, and the convolution stride is preferably 1;
a first residual learning unit of the first residual learning module, configured to:
performing one convolution operation on the first feature map set to obtain a second feature map set, where the preferred number of convolution kernels is 16, the kernel size is 1 × 1, and the convolution stride is 1; sequentially performing a normalization operation, an activation operation, and one convolution operation on the first feature map set to obtain a third feature map set, where the preferred number of convolution kernels is 16, the kernel size is 3 × 3, and the convolution stride is 1; sequentially performing a normalization operation, an activation operation, and one convolution operation on the third feature map set to obtain a fourth feature map set, where the preferred number of convolution kernels is 16, the kernel size is 3 × 3, and the convolution stride is 1; determining a fifth feature map set from the second and fourth feature map sets, specifically by adding the corresponding feature values of each image in the second and fourth feature map sets to obtain the images of the fifth feature map set;
a second residual learning unit of the first residual learning module, configured to:
sequentially performing a normalization operation, an activation operation, and one convolution operation on the fifth feature map set to obtain a sixth feature map set, where the preferred number of convolution kernels is 16, the kernel size is 3 × 3, and the convolution stride is 1; sequentially performing a normalization operation, an activation operation, and one convolution operation on the sixth feature map set to obtain a seventh feature map set, where the preferred number of convolution kernels is 16, the kernel size is 3 × 3, and the convolution stride is 1; and determining an eighth feature map set from the fifth and seventh feature map sets;
a first residual learning unit of the second residual learning module, configured to:
performing convolution operation on the eighth feature map set to obtain a ninth feature map set;
sequentially performing a normalization operation, an activation operation, and one convolution operation on the eighth feature map set to obtain a tenth feature map set; sequentially performing a normalization operation, an activation operation, and one convolution operation on the tenth feature map set to obtain an eleventh feature map set; and determining a twelfth feature map set from the eleventh and ninth feature map sets;
a second residual learning unit of the second residual learning module, configured to:
sequentially performing a normalization operation, an activation operation, and one convolution operation on the twelfth feature map set to obtain a thirteenth feature map set, and sequentially performing a normalization operation, an activation operation, and one convolution operation on the thirteenth feature map set to obtain a fourteenth feature map set; and determining a fifteenth feature map set from the fourteenth and twelfth feature map sets;
a first residual learning unit of the third residual learning module, configured to:
performing convolution operation on the fifteenth feature map set to obtain a sixteenth feature map set;
sequentially performing a normalization operation, an activation operation, and one convolution operation on the fifteenth feature map set to obtain a seventeenth feature map set, and sequentially performing a normalization operation, an activation operation, and one convolution operation on the seventeenth feature map set to obtain an eighteenth feature map set; and determining a nineteenth feature map set from the eighteenth and fifteenth feature map sets;
a second residual learning unit of the third residual learning module, configured to:
sequentially performing a normalization operation, an activation operation, and one convolution operation on the nineteenth feature map set to obtain a twentieth feature map set, and sequentially performing a normalization operation, an activation operation, and one convolution operation on the twentieth feature map set to obtain a twenty-first feature map set; and determining a twenty-second feature map set from the twenty-first and nineteenth feature map sets.
In the embodiment of the invention, the preferred number of convolution kernels for the convolution operations in the second residual learning module is 32 with a convolution stride of 2; in the third residual learning module, the preferred number of convolution kernels is 64 with a convolution stride of 2.
The full connection layer is used for:
performing average pooling on the twenty-second feature map set, extracting the feature vectors used for satellite type identification, performing a full-connection operation, determining the feature accumulation vector corresponding to each satellite type from the feature vectors, and performing classification probability statistics with a classifier, thereby determining the function type of the target satellite.
Specifically, the fully connected layer determines the feature accumulation vector corresponding to each satellite type according to the following formula and performs category determination with a softmax multi-classifier:

$$a_t = \sum_{i=1}^{N} W_{t,i}\, x_i + b_t, \qquad t = 1, \dots, T$$

where a1, a2, …, aT are the feature accumulation vectors corresponding to each satellite type output by the fully connected layer, W is the feature weight matrix, x is the feature vector input to the fully connected layer, b is the bias parameter of the fully connected layer, T is the number of target categories, and N is the number of feature vectors input to the fully connected layer.
The fully connected layer completes the conversion from the distributed feature representation to the sample label space and is a key step for target classification; by integrating the local feature vectors obtained by the preceding convolution and pooling layers through the weight matrix, it largely preserves the representation capability of the model and facilitates subsequent model fine-tuning and transfer learning.
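The architecture described above can be sketched in TensorFlow/Keras (the platform of the specific embodiment below). This is a best-effort reading of the text: the 'same' padding, the placement of the stride on a unit's first convolution, and the 1 × 1 projection shortcuts on the first units of the second and third modules are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers

def residual_unit(x, filters, stride=1, project=False):
    """Pre-activation unit: (normalize -> activate -> convolve) twice, plus shortcut."""
    shortcut = x
    if project:  # 1 x 1 convolution aligns channels/stride on a module's first unit
        shortcut = layers.Conv2D(filters, 1, strides=stride, padding='same')(x)
    y = layers.BatchNormalization()(x)
    y = layers.ReLU()(y)
    y = layers.Conv2D(filters, 3, strides=stride, padding='same')(y)
    y = layers.BatchNormalization()(y)
    y = layers.ReLU()(y)
    y = layers.Conv2D(filters, 3, strides=1, padding='same')(y)
    return layers.Add()([shortcut, y])   # add corresponding feature values

def build_model(input_shape=(360, 640, 1), num_classes=3):
    inputs = layers.Input(shape=input_shape)
    x = layers.Conv2D(16, 3, strides=1, padding='same')(inputs)  # Conv0
    x = residual_unit(x, 16, stride=1, project=True)             # Conv1_x, unit 1
    x = residual_unit(x, 16)                                     # Conv1_x, unit 2
    x = residual_unit(x, 32, stride=2, project=True)             # Conv2_x, unit 1
    x = residual_unit(x, 32)                                     # Conv2_x, unit 2
    x = residual_unit(x, 64, stride=2, project=True)             # Conv3_x, unit 1
    x = residual_unit(x, 64)                                     # Conv3_x, unit 2
    x = layers.GlobalAveragePooling2D()(x)                       # average pooling
    # Fully connected output a = Wx + b followed by softmax classification.
    return tf.keras.Model(inputs, layers.Dense(num_classes, activation='softmax')(x))
```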
The ResNet deep neural network model, based on deep learning theory, can emulate the neural connection structure of the human brain: when processing image signals, it describes the data features through multiple layers of transformation stages and then provides an interpretation of the data. This image-processing flow conforms to the cognitive rules of the primate visual system, first detecting edges and initial shapes and then building up more complex visual forms step by step. The cognitive neural network model for application satellite function types can combine low-level features into more abstract high-level representations, attribute categories, or features, and finally provides a layered feature representation of the image data.
By transforming the original signal layer by layer, the cognitive neural network model for application satellite function types maps the sample's feature representation in the original space into a new feature space and automatically learns a hierarchical feature representation, which is more favorable for classification and feature visualization. Because the model is hierarchical, with many parameters and sufficient capacity, it can represent data features well; for the difficult problem of image recognition, good results can be obtained given a large amount of training data.
The front end of the convolutional neural network adopted by the model uses multiple convolution kernels to extract image information, fully accounting for the translation, rotation, and scaling invariance of image targets in space. Because the convolution kernels share the same structure and weights, the network retains a large front-end scale with few adjustable parameters, greatly reducing the burden of computation and parameter optimization. Compared with traditional, manually designed preprocessing filters and convolutions, the model's front-end processing is better optimized, and the automatically extracted features are specific to the image content, so the performance exceeds that of manually designed preprocessing.
Compared with traditional image classification algorithms, the model uses relatively little preprocessing and does not rely on prior knowledge; it thus largely avoids the difficulty of manual feature design in traditional image classification and, by extracting features automatically with learned filters, can classify and identify unknown targets quickly and accurately.
The ResNet structure adopted by the model allows the original input information to be retained in the feature extraction result, effectively protecting the integrity of the information and eliminating the phenomenon of training-set error increasing as the network deepens; it can greatly accelerate the training of very deep neural networks, substantially improves model accuracy, and gives the model good portability.
In an alternative embodiment, the resolution of the target image is not less than 256 × 256.
In an optional embodiment, the method for identifying a function type of an application satellite further includes:
establishing a satellite space image sample library, wherein the sample library comprises a plurality of satellite types and image sample sets corresponding to the satellite types;
and training and testing the initial ResNet neural network model based on the satellite space image sample library to obtain the ResNet neural network model.
In an optional embodiment, the creating a satellite space image sample library includes:
establishing three-dimensional models of different types of satellites, simulating a space environment, imaging the established three-dimensional models to obtain a certain number of simulated space image samples, establishing a corresponding relation between the types of the satellites and the simulated space image samples, and generating a satellite space image sample library.
In an alternative embodiment, the building a three-dimensional model of a different type of satellite includes:
according to the structural characteristics of various types of satellites, building structural three-dimensional models of different types of satellites;
and rendering the structural three-dimensional model according to the surface texture information of the various satellites to obtain three-dimensional models of different types of satellites.
In an alternative embodiment, the simulated spatial environment comprises:
the simulated light source is parallel light, the atmospheric molecular density is 0 to 0.01 times the ground atmospheric molecular density, the illumination intensity is 2 to 3 times the daily ground illumination intensity, and the light source incidence direction is randomly generated over the 4π solid angle around the three-dimensional model.
In an optional embodiment, the imaging the built three-dimensional model includes:
randomly imaging the built three-dimensional model from directions outside a 60° cone about the normal of the model's zenith-facing surface, with the angle between the light source beam direction and the camera imaging axis kept below 45° during imaging.
In an optional embodiment, an angle between the light beam direction of the light source and an imaging axis of the camera is determined according to the following formula:
$$\alpha = \arccos\left(\frac{x_1 x_2 + y_1 y_2 + z_1 z_2}{R_1 R_2}\right)$$

where α < 45° is the angle between the light source beam direction and the camera imaging axis;
R1 is the distance from the light source to the origin of the satellite's three-dimensional model, R2 is the distance from the camera to that origin, x1, y1, z1 are the coordinates of the light source, and x2, y2, z2 are the coordinates of the camera.
In an optional embodiment, after obtaining the certain number of simulated space image samples, the method further includes:
performing data enhancement on each simulated space image sample to obtain a simulated space image sample set with an expanded number of samples;
correspondingly, the establishing of the correspondence between the satellite type and the simulated space images comprises:
establishing a correspondence between the satellite type and the simulated space image samples in the expanded simulated space image sample set.
In an optional embodiment, the training and testing the initial ResNet neural network model based on the satellite space image sample library includes:
converting each sample in the satellite space image sample library from a three-channel color image into a single-channel grayscale image, and then training and testing the initial ResNet neural network model.
The following is a specific embodiment of the present invention:
(1) Establishing a satellite space image sample library:
collecting and organizing satellite pictures from various public data sources, and selecting, for the three types of application satellites, pictures that completely display the satellite's appearance and at least partially reflect the respective satellite surface features (for example, the camera payload of an Earth observation satellite or the communication antennas of a radio relay satellite);
first, according to the satellite exterior information shown in each picture, measuring and calculating the geometric proportions of the satellite's contour, height, and so on, and building a three-dimensional white model of the satellite (without surface texture data) at the calculated proportions, with the origin of the three-dimensional coordinate system at the centroid of the satellite body;
adding equipment and components such as solar panels, camera lenses, sensors, thrusters, the satellite-rocket docking ring, measurement and control (TT&C) antennas, and data transmission antennas to the three-dimensional white model according to the surface features shown in each picture, obtaining the structural three-dimensional model of the satellite (without surface texture data);
determining the surface texture information of the three satellite types according to the reflection characteristics of the actual satellite surface materials; in this embodiment the visible-light reflection characteristics of the main surface materials meet the reflection characteristics of the real materials, the main parts including the front of the solar panels (covered with solar cells), the back of the solar panels, aluminized thermal-control multilayer insulation, gold-plated thermal-control multilayer insulation, second-surface mirrors, white paint, and the other surface parts; rendering the structural three-dimensional model of each satellite with this surface texture information yields the three-dimensional models of the three satellite types;
referring to fig. 2, a spherical surface with a radius R is established with an origin of a three-dimensional model coordinate of a satellite as a center, and the spherical surface is divided into (N +1) × (N +1) points by N longitude lines and N latitude lines, so that an X coordinate of each point is X (i, j), a Y coordinate is Y (i, j), and a Z coordinate is Z (i, j), wherein i and j are numbers of the latitude lines and the longitude lines respectively, and ranges of the i and the j are 1-N respectively. If the distance between the parallel light source simulating the sunlight and the origin of the three-dimensional satellite model is R1, randomly selecting a value of a latitude line i and a value of a longitude line j on a spherical surface with the radius of R1, wherein the coordinates of the light source are (x1(a, b), y1(a, b) and z1(a, b));
similarly, if the distance between the camera imaging the three-dimensional satellite model and the origin of the three-dimensional satellite model is R2, the imaging camera randomly selects the latitude line number i as c and the longitude line number j as d on the spherical surface with the radius of R2, and the camera coordinates are (x2(c, d), y2(c, d), and z2(c, d)).
Since most payloads of application satellites point toward the Earth, the viewing direction of each simulated optical image of a satellite is randomly generated outside a 60° cone about the normal of the satellite's zenith-facing surface; this reflects the key payloads and characteristic parts of the satellite surface as much as possible during simulated imaging, avoids monotonous images of the satellite's zenith-facing side, and increases the diversity and richness of the satellite samples. The parallel index i is therefore chosen to avoid parallels inside the 60° cone, i.e., i > (60°/180°) × N.
Because the vacuum of space is free of atmospheric scattering, the imaging contrast between the illuminated and the shadowed parts of the satellite surface is very high. If the angle between the visible-light camera's imaging direction and the parallel light source is too large, most of the surface is shadowed and invisible, the imaging quality is poor, and neither the surface features nor the satellite's appearance can be properly captured; the angle between the light source beam direction and the camera imaging axis therefore must be less than 45°. The angle α between the light source and the camera is calculated as

$$\alpha = \arccos\left(\frac{x_1 x_2 + y_1 y_2 + z_1 z_2}{R_1 R_2}\right)$$

with α < 45°; a sampling sketch follows.
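An illustrative Python reading of this sampling procedure (the spherical-grid parameterization and the rejection loop are assumptions; light_camera_angle is the helper sketched earlier):

```python
import math
import random

def grid_point(R, i, j, N):
    """Cartesian coordinates of grid point (i, j) on a sphere of radius R."""
    theta = math.pi * i / N          # polar angle, measured from the zenith normal
    phi = 2.0 * math.pi * j / N      # azimuth along the parallels
    return (R * math.sin(theta) * math.cos(phi),
            R * math.sin(theta) * math.sin(phi),
            R * math.cos(theta))

def sample_light_and_camera(R1, R2, N):
    """Light anywhere on its sphere; camera outside the 60-degree zenith cone."""
    i_min = int(60.0 / 180.0 * N) + 1          # enforce i > (60/180) * N
    while True:
        light = grid_point(R1, random.randint(1, N), random.randint(1, N), N)
        camera = grid_point(R2, random.randint(i_min, N), random.randint(1, N), N)
        if light_camera_angle(light, camera) < 45.0:   # keep only valid pairs
            return light, camera
```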
When the light source and camera positions meet these requirements, their coordinates are fed into the three-dimensional modeling software (3D MAX) to render the imaging of the three-dimensional model. Multiple groups of camera and light source positions can be set as needed to enrich sample diversity, increase the sample count, and enhance the neural network's learning ability.
The imaging size is set to 1920 × 1080 in the modeling software and the imaging is rendered. The total number of samples is not less than 5000, covering the three types of application satellites evenly.
The number of samples of each satellite type is counted and expanded proportionally, mainly by image processing operations such as rotation, translation, and flipping (a sketch follows). Rotation uses a random angle of 0-360°, and the translation distance does not exceed 1/32 of the image length or width, to avoid shifting the satellite body out of the image;
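A sketch of these enhancement operations using Pillow (an assumed library choice; the patent names only the operations themselves):

```python
import random
from PIL import Image

def augment(img: Image.Image) -> Image.Image:
    img = img.rotate(random.uniform(0.0, 360.0))        # random 0-360 degree rotation
    max_dx, max_dy = img.width // 32, img.height // 32  # at most 1/32 of width/height
    dx, dy = random.randint(-max_dx, max_dx), random.randint(-max_dy, max_dy)
    img = img.transform(img.size, Image.AFFINE, (1, 0, -dx, 0, 1, -dy))  # translate
    if random.random() < 0.5:                           # random horizontal flip
        img = img.transpose(Image.FLIP_LEFT_RIGHT)
    return img
```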
and generating a space image sample library of three types of satellites according to the expanded samples, wherein the number of the three types of satellite samples in the space image sample library is equivalent, 1/3 pictures are randomly selected from each type of satellite sample to be used as test samples, and the rest pictures are used as training samples.
(2) Deep convolutional neural network learning and training are performed on the TensorFlow platform. The training sample set and the test sample set are each converted into TFRecord files accepted by TensorFlow, with the ".tfrecords" suffix. During file generation, all sample pictures are resized to 360 × 640 and all three-channel color image data are converted into single-channel grayscale data, as sketched below;
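A sketch of this conversion using TensorFlow's TFRecord utilities; the feature keys and file names are illustrative assumptions:

```python
import tensorflow as tf

def encode_example(image_path, label):
    img = tf.io.decode_image(tf.io.read_file(image_path), channels=3,
                             expand_animations=False)
    img = tf.image.rgb_to_grayscale(img)           # three-channel -> single-channel
    img = tf.cast(tf.image.resize(img, [360, 640]), tf.uint8)
    feature = {
        'image': tf.train.Feature(bytes_list=tf.train.BytesList(
            value=[tf.io.serialize_tensor(img).numpy()])),
        'label': tf.train.Feature(int64_list=tf.train.Int64List(value=[label])),
    }
    return tf.train.Example(features=tf.train.Features(feature=feature))

with tf.io.TFRecordWriter('train.tfrecords') as writer:   # ".tfrecords" suffix
    writer.write(encode_example('sample_0001.png', 0).SerializeToString())
```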
and setting the Batch processing image number of the deep convolutional network Batch _ size to be 10, learning all the training samples once to be an epoch, and setting the total training time to be 250 epochs. One evaluation was done using the test sample to complete one epoch. The initial learning rate learning _ rate is 0.1, adjusted to 0.01 after 100 epochs, adjusted to 0.001 after 150 epochs, and adjusted to 0.0001 after 200 epochs. Identifying the number of classes as 3 classes;
the deep convolutional neural network model shown in Table 1 is built according to ResNet, the typical deep learning residual network model. The network has 14 layers and consists of 1 convolutional layer, 3 residual learning modules (blocks), and 1 fully connected layer; each residual learning module contains two residual learning units (bottlenecks), each unit adopting a 2-layer structure with 1 convolution operation per layer. The number of filters is 16 in the first block, 32 in the second, and 64 in the third. The neural network model is saved during training for subsequent test evaluation. The network uses the ReLU activation function, outputs through the fully connected layer after average pooling, predicts with a softmax multi-classifier, and uses the Momentum algorithm for optimization;
and (3) performing self-identification training and testing of the application satellite function type by using the spatial image sample library generated in the step (1) and the deep convolutional neural network model shown in the table 1. The batch picture training number batch _ size in the training process is 10, 10 pictures in the training sample library are randomly selected each time and sent to the neural network for processing, and an epoch is formed until all samples are trained. The three types of satellite function types can be automatically identified and trained to complete 250 epochs. And testing every 10 epochs, namely, introducing all test samples into a neural network for identification, and comparing the identification classification result with the label to obtain the identification accuracy. The initial learning rate for neural network training is set to 0.1.
After neural network training finishes, the trained network model is used for autonomous identification tests of application satellite function types, and the test accuracy is recorded. The tests can be divided into unlabeled-sample tests and labeled-sample tests (the label being the application satellite type).
In an unlabeled-sample test, i.e., without prior information on the sample type, the trained neural network model identifies each sample image and outputs identification probabilities for the three application satellite types; whether the identification is correct is judged manually, and the identification accuracy over all images is also counted manually.
In a labeled-sample test, i.e., when the satellite type of the sample under test is known in advance, the trained neural network model identifies the sample image, and the satellite type with the highest probability is automatically compared with the sample label to determine whether the identification is correct. Testing a large number of labeled samples yields a statistical identification accuracy, as sketched below.
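A sketch of this comparison (model, test_images, and test_labels are assumed to come from the earlier sketches):

```python
import numpy as np

probs = model.predict(test_images)       # per-sample probabilities, shape (n, 3)
predicted = np.argmax(probs, axis=1)     # satellite type with the highest probability
accuracy = float(np.mean(predicted == test_labels))
print(f'labelled-sample identification accuracy: {accuracy:.2%}')
```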
When the identification accuracy reaches 90%, the trained model is taken as the final model. During on-orbit testing, a target satellite space image is acquired and its resolution adjusted to obtain the target image; the final model then processes the target image to determine the function type of the target satellite.
The above description covers only the best mode of the invention, but the scope of the invention is not limited thereto; any change or substitution readily conceivable by those skilled in the art within the technical scope disclosed by the invention falls within the scope of the invention.
Details not described in the invention are within the common knowledge of those skilled in the art.

Claims (5)

1. An application satellite function type identification method is characterized by comprising the following steps:
acquiring a target satellite space image, and adjusting the resolution of the acquired target satellite space image to obtain a target image;
performing data processing on the target image based on a ResNet neural network model, and determining a function type corresponding to the target satellite;
the deep convolutional neural network model adopts a ResNet residual error network structure and comprises an initial convolutional layer, three residual error learning modules and a full-connection layer, wherein each residual error learning module comprises two residual error learning units, and each residual error learning unit comprises two convolutional operations;
the initial convolutional layer for:
performing one two-dimensional convolution on the target image to obtain a first feature map set, where the number of convolution kernels is 16, the kernel size is 3 × 3, and the convolution stride is 1;
a first residual learning unit of the first residual learning module, configured to:
performing one convolution operation on the first feature map set to obtain a second feature map set, where the number of convolution kernels is 16, the kernel size is 1 × 1, and the convolution stride is 1; sequentially performing a normalization operation, an activation operation, and one convolution operation on the first feature map set to obtain a third feature map set, where the number of convolution kernels is 16, the kernel size is 3 × 3, and the convolution stride is 1; sequentially performing a normalization operation, an activation operation, and one convolution operation on the third feature map set to obtain a fourth feature map set, where the number of convolution kernels is 16, the kernel size is 3 × 3, and the convolution stride is 1; adding the corresponding feature values of the images in the second and fourth feature map sets to obtain the images of a fifth feature map set;
a second residual learning unit of the first residual learning module, configured to:
sequentially performing a normalization operation, an activation operation, and one convolution operation on the fifth feature map set to obtain a sixth feature map set, where the number of convolution kernels is 16, the kernel size is 3 × 3, and the convolution stride is 1; sequentially performing a normalization operation, an activation operation, and one convolution operation on the sixth feature map set to obtain a seventh feature map set, where the number of convolution kernels is 16, the kernel size is 3 × 3, and the convolution stride is 1; determining an eighth feature map set from the fifth and seventh feature map sets;
a first residual learning unit of the second residual learning module, configured to:
performing convolution operation on the eighth feature map set to obtain a ninth feature map set;
sequentially performing a normalization operation, an activation operation, and one convolution operation on the eighth feature map set to obtain a tenth feature map set; sequentially performing a normalization operation, an activation operation, and one convolution operation on the tenth feature map set to obtain an eleventh feature map set; and determining a twelfth feature map set from the eleventh and ninth feature map sets;
a second residual learning unit of the second residual learning module, configured to:
sequentially performing a normalization operation, an activation operation, and one convolution operation on the twelfth feature map set to obtain a thirteenth feature map set, and sequentially performing a normalization operation, an activation operation, and one convolution operation on the thirteenth feature map set to obtain a fourteenth feature map set; determining a fifteenth feature map set from the fourteenth and twelfth feature map sets;
a first residual learning unit of the third residual learning module, configured to:
performing convolution operation on the fifteenth feature map set to obtain a sixteenth feature map set;
sequentially performing a normalization operation, an activation operation, and one convolution operation on the fifteenth feature map set to obtain a seventeenth feature map set, and sequentially performing a normalization operation, an activation operation, and one convolution operation on the seventeenth feature map set to obtain an eighteenth feature map set; determining a nineteenth feature map set from the eighteenth and fifteenth feature map sets;
a second residual learning unit of the third residual learning module, configured to:
sequentially performing a normalization operation, an activation operation, and one convolution operation on the nineteenth feature map set to obtain a twentieth feature map set, and sequentially performing a normalization operation, an activation operation, and one convolution operation on the twentieth feature map set to obtain a twenty-first feature map set; determining a twenty-second feature map set from the twenty-first and nineteenth feature map sets;
the number of convolution kernels for the convolution operations in the second residual learning module is 32 with a convolution stride of 2; the number of convolution kernels for the convolution operations in the third residual learning module is 64 with a convolution stride of 2;
the full connection layer is used for:
performing average pooling on the twenty-second feature map set, extracting the feature vectors used for satellite type identification, performing a full-connection operation, determining the feature accumulation vector corresponding to each satellite type from the feature vectors, and performing classification probability statistics with a classifier to determine the function type of the target satellite;
the batch size of the deep convolutional network is 10, the total number of training epochs is 250, and the initial learning rate is 0.1, adjusted to 0.01 after 100 epochs, to 0.001 after 150 epochs, and to 0.0001 after 200 epochs;
establishing a satellite space image sample library, wherein the sample library comprises a plurality of satellite types and image sample sets corresponding to the satellite types;
training and testing an initial ResNet neural network model based on the satellite space image sample library to obtain a ResNet neural network model;
the establishing of the satellite space image sample library comprises the following steps:
establishing three-dimensional models of different types of satellites, simulating a space environment, imaging the established three-dimensional models to obtain a certain number of simulated space image samples, establishing a corresponding relation between the types of the satellites and the simulated space image samples, and generating a satellite space image sample library;
the establishment of the three-dimensional models of different types of satellites comprises the following steps:
measuring and calculating the satellite's contour and height from the satellite exterior information shown in each satellite picture, and building a three-dimensional white model of the satellite at the calculated proportions, with the origin of the three-dimensional coordinate system at the centroid of the satellite body;
adding solar panels, camera lenses, sensors, thrusters, the satellite-rocket docking ring, measurement and control (TT&C) antennas, and data transmission antennas to the three-dimensional white model in proportion, according to the satellite surface features shown in each picture, to obtain structural three-dimensional models of different types of application satellites;
determining the surface texture information of each satellite according to the reflection characteristics of the actual satellite surface materials, the visible-light reflection characteristics of the main surface materials meeting the reflection characteristics of the real materials, the parts comprising the front of the solar panels, the back of the solar panels, aluminized thermal-control multilayer insulation, gold-plated thermal-control multilayer insulation, second-surface mirrors, white paint, and the other surface parts;
rendering the structural three-dimensional model according to the surface texture information of each type of satellite to obtain three-dimensional models of different types of satellites;
the simulated spatial environment comprises:
the simulated light source is parallel light, the atmospheric molecular density is 0 to 0.01 times the ground atmospheric molecular density, the illumination intensity is 2 to 3 times the daily ground illumination intensity, and the light source incidence direction is randomly generated over the 4π solid angle around the three-dimensional model;
in setting the light source and camera positions, a sphere of radius R is built around the coordinate origin of the satellite's three-dimensional model and divided by N meridians and N parallels into (N + 1) × (N + 1) points, so that each point has X coordinate X(i, j), Y coordinate Y(i, j), and Z coordinate Z(i, j), where i and j are the indices of the parallels and meridians respectively, each ranging from 1 to N; if the parallel light source simulating sunlight is at distance R1 from the model origin, a parallel index i = a and a meridian index j = b are randomly selected on the sphere of radius R1, giving light source coordinates (x1(a, b), y1(a, b), z1(a, b)); similarly, if the camera imaging the satellite's three-dimensional model is at distance R2 from the model origin, the imaging camera randomly selects a parallel index i = c and a meridian index j = d on the sphere of radius R2, giving camera coordinates (x2(c, d), y2(c, d), z2(c, d)); the viewing direction of each satellite optical image is randomly generated outside a 60° cone about the normal of the satellite's zenith-facing surface; the parallel index satisfies i > (60°/180°) × N.
2. The method of claim 1, wherein the resolution of the target image is 360 x 640.
3. The method for identifying the function type of an application satellite according to claim 1, wherein the included angle between the beam direction of the light source and the imaging axis of the camera is determined according to the following formula:
α = arccos((x1·x2 + y1·y2 + z1·z2) / (R1·R2))
where α is the included angle between the beam direction of the light source and the imaging axis of the camera, and α < 45°;
R1 is the distance from the light source to the origin of the satellite three-dimensional model, and R2 is the distance from the camera to that origin; x1, y1 and z1 are the x, y and z coordinates of the light source, and x2, y2 and z2 are the x, y and z coordinates of the camera.
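A minimal Python sketch of this computation, assuming the arccosine form given above (the original formula appears only as an embedded figure):

    import math

    def included_angle_deg(light, camera):
        """Included angle between the light-source and camera position
        vectors, both expressed from the origin of the satellite model."""
        x1, y1, z1 = light
        x2, y2, z2 = camera
        R1 = math.sqrt(x1 * x1 + y1 * y1 + z1 * z1)
        R2 = math.sqrt(x2 * x2 + y2 * y2 + z2 * z2)
        cos_alpha = (x1 * x2 + y1 * y2 + z1 * z2) / (R1 * R2)
        # Clamp against floating-point drift before taking the arccosine.
        return math.degrees(math.acos(max(-1.0, min(1.0, cos_alpha))))

    # Keep only poses satisfying included_angle_deg(...) < 45.0, as required.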
4. The method for identifying the function type of an application satellite according to claim 1, wherein, after a certain number of simulated space image samples have been obtained, the method further comprises:
performing data augmentation on each simulated space image sample to obtain a simulated space image sample set with an expanded number of samples;
correspondingly, establishing the correspondence between satellite types and simulated space images comprises:
establishing a correspondence between each satellite type and the simulated space image samples in the expanded simulated space image sample set.
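An illustrative Python sketch follows; the specific augmentation operations (flips, 90° rotations, additive noise) are assumptions, since the claim only requires that the sample count be expanded while each sample keeps its type label:

    import numpy as np

    def augment(image):
        """Expand one sample into several: the original, mirror flips,
        90-degree rotations, and one noisy copy (operations are illustrative)."""
        samples = [image, np.fliplr(image), np.flipud(image)]
        samples += [np.rot90(image, k) for k in (1, 2, 3)]
        noisy = image.astype(np.float32) + np.random.normal(0.0, 5.0, image.shape)
        samples.append(np.clip(noisy, 0, 255).astype(image.dtype))
        return samples

    def expand_sample_set(samples_with_labels):
        """Augment (image, satellite_type) pairs, carrying each label to its
        augmented images so the type-image correspondence is preserved."""
        return [(aug, label) for img, label in samples_with_labels
                for aug in augment(img)]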
5. The method for identifying the function type of an application satellite according to claim 1, wherein training and testing the initial ResNet neural network model on the basis of the satellite space image sample library comprises:
converting each sample in the satellite space image sample library from a three-channel color image into a single-channel gray image, and then training and testing the initial ResNet neural network model.
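A minimal Python sketch of the color-to-gray conversion, assuming the common ITU-R BT.601 luma weights (the claim does not name a particular conversion):

    import numpy as np

    def to_single_channel(image_rgb):
        """Convert a three-channel color sample (H, W, 3) to a single-channel
        gray image (H, W) with BT.601 weights; an assumed, standard choice."""
        weights = np.array([0.299, 0.587, 0.114], dtype=np.float32)
        gray = image_rgb.astype(np.float32) @ weights
        return gray.astype(np.uint8)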
CN201811556442.9A 2018-12-19 2018-12-19 Application satellite function type identification method Active CN109657679B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811556442.9A CN109657679B (en) 2018-12-19 2018-12-19 Application satellite function type identification method

Publications (2)

Publication Number Publication Date
CN109657679A (en) 2019-04-19
CN109657679B (en) 2020-11-20

Family

ID=66114842

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811556442.9A Active CN109657679B (en) 2018-12-19 2018-12-19 Application satellite function type identification method

Country Status (1)

Country Link
CN (1) CN109657679B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111127360B * 2019-12-20 2023-08-29 Southeast University Gray image transfer learning method based on automatic encoder
CN112093082B * 2020-09-25 2022-03-18 China Academy of Space Technology On-orbit capture guiding method and device of high-orbit satellite capture mechanism

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101788817A * 2010-01-29 2010-07-28 Aerospace Dongfanghong Satellite Co., Ltd. Fault recognition and processing method based on satellite-borne bus
US20150094056A1 (en) * 2013-10-01 2015-04-02 Electronics And Telecommunications Research Institute Satellite communication system and method for adaptive channel assignment

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101814107A * 2010-05-06 2010-08-25 Harbin Institute of Technology Satellite dynamics simulation system and method based on satellite dynamics model library

Similar Documents

Publication Publication Date Title
CN108596101B (en) Remote sensing image multi-target detection method based on convolutional neural network
CN106356757B (en) A kind of power circuit unmanned plane method for inspecting based on human-eye visual characteristic
CN110189304B (en) Optical remote sensing image target on-line rapid detection method based on artificial intelligence
CN110889324A (en) Thermal infrared image target identification method based on YOLO V3 terminal-oriented guidance
CN106570253B (en) Real-time space-based infrared visual simulation method
CN111797676A (en) High-resolution remote sensing image target on-orbit lightweight rapid detection method
CN114972617B (en) Scene illumination and reflection modeling method based on conductive rendering
CN108415098B (en) Based on luminosity curve to the high rail small size target signature recognition methods in space
CN110221360A (en) A kind of power circuit thunderstorm method for early warning and system
CN109241902A (en) A kind of landslide detection method based on multi-scale feature fusion
CN104154919A (en) Method for autonomous measurement of pose of tripod structure of solar panel on non-cooperative spacecraft
CN109657679B (en) Application satellite function type identification method
CN116311078A (en) Forest fire analysis and monitoring method and system
CN114491694B (en) Space target data set construction method based on illusion engine
CN116933567B (en) Space-based complex multi-scene space target simulation data set construction method
CN112580407A (en) Space target component identification method based on lightweight neural network model
CN113902663A (en) Air small target dynamic infrared simulation method and device capable of automatically adapting to weather
CN115205467A (en) Space non-cooperative target part identification method based on light weight and attention mechanism
CN115292287A (en) Automatic labeling and database construction method for satellite feature component image
Oestreich et al. On-orbit relative pose initialization via convolutional neural networks
CN115661251A (en) Imaging simulation-based space target identification sample generation system and method
CN111523392B (en) Deep learning sample preparation method and recognition method based on satellite orthographic image full gesture
Koizumi et al. Development of attitude sensor using deep learning
Piccinin et al. ARGOS: Calibrated facility for Image based Relative Navigation technologies on ground verification and testing
Lewis et al. Determination of spatial and temporal characteristics as an aid to neural network cloud classification

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant