CN117689995A - Unknown spacecraft level detection method based on monocular image - Google Patents
- Publication number
- CN117689995A (application CN202311714595.2A)
- Authority
- CN
- China
- Prior art keywords
- spacecraft
- unknown
- detection
- neural network
- detected
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06V10/25 — Determination of region of interest [ROI] or a volume of interest [VOI]
- G06V10/454 — Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
- G06V10/507 — Summing image-intensity values; Histogram projection analysis
- G06V10/762 — Pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
- G06V10/764 — Pattern recognition or machine learning using classification, e.g. of video objects
- G06V10/774 — Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
- G06V10/776 — Validation; Performance evaluation
- G06V10/82 — Pattern recognition or machine learning using neural networks
- G06V2201/07 — Target detection
Abstract
The invention discloses an unknown spacecraft level detection method based on a monocular image, belonging to the technical field of aerospace. The method is implemented as follows: given the limited computing power of the observation spacecraft's on-board processor, a small lightweight YOLOv5s neural network model is selected for detecting unknown spacecraft targets in the subsequent steps; N monocular images of different spacecraft are acquired, and a sample data set for unknown spacecraft level detection is constructed; two sets of labels are annotated on the images in the constructed sample data set; the data set is divided into a training set and a verification set. An unknown spacecraft level detection strategy is constructed to realize level detection from unknown spacecraft components to the whole. During training, the training set images are preprocessed with standard data augmentation; during verification, standard non-maximum suppression post-processing is adopted. The trained optimal YOLOv5s neural network model realizes level detection from unknown spacecraft components to the whole from a monocular image, improving the detection precision of unknown spacecraft.
Description
Technical Field
The invention relates to an unknown spacecraft level detection method based on a monocular image, and belongs to the technical field of aerospace.
Background
With the continuous development of spacecraft technology, increasingly complex tasks such as space situation awareness and on-orbit servicing have attracted the attention of scientists and engineers. Spacecraft dynamics and control are the basis for these tasks, and spacecraft detection is one of their fundamental technologies, providing the information necessary for target identification, target tracking and proximity control. Traditional spacecraft detection methods detect spacecraft targets by extracting features such as straight lines, polygons and ellipses; they can detect spacecraft with classical shapes but are sensitive to low-level image features. Spacecraft detection has also been based on traditional learning methods such as compressed sensing, kernel regression and Gaussian processes. Intelligent target detection methods have developed rapidly in recent years, and deep-learning-based methods have become popular detection technologies owing to their robustness and stability. By constructing a labeled spacecraft image data set and training a deep convolutional neural network, stable spacecraft features can be learned autonomously, so that spacecraft in images can be detected effectively. For known spacecraft detection, a large data set of synthetic images rendered from 3D models can be built for deep learning; since the spacecraft in the test set are also contained in the training set, high detection performance is easy to obtain. Some works have also investigated the classification of different spacecraft and the detection of the main parts of a spacecraft. In addition, compared with observation equipment such as radar and binocular cameras, a monocular camera is lightweight and low-power, and is therefore widely applied in aerospace missions.
Among the developed intelligent spacecraft target detection methods, the prior art [1] (see: Wang L. Research on Spatial Multi-objective Recognition Based on Deep Learning. Unmanned Systems Technology 2019, 2(3), 49-55.) identifies space targets using a convolutional neural network, and the recognition accuracy for spacecraft and their parts exceeds 90% under various conditions such as close forward-looking views, long range, occlusion and motion blur.
The prior art [2] (see: Xu G.G.; Yin H.C.; Yuan L.; Dong C.Z. Spatial Target Recognition Method of HRRP Sequence Based on Convolutional Neural Network. Journal of Communication University of China (Science and Technology) 2019, 26(3), 40-44+39.) exploits the rich information contained in high-resolution range profile sequences and provides a convolutional-neural-network-based space target recognition method that automatically learns features from the sequence diagrams to achieve target classification.
Prior work mainly focuses on detecting spacecraft with known structures and surface textures, for which the average detection precision under various conditions can reach about 0.9. However, as spacecraft design technology develops, spacecraft with new shapes, structures, surface textures and components are increasingly being built, and it is practically impossible to construct a neural network data set containing images of all spacecraft. Thus, an important issue for spacecraft detection is whether an unknown spacecraft, whose images are not included in the training set, can be effectively detected. Meanwhile, the detection of unknown spacecraft also faces the problems of inaccurate positioning and low detection confidence. Because the unknown spacecraft is not contained in the training set, the trained neural network cannot learn prior knowledge such as the structure and surface texture of the target spacecraft. Moreover, influenced by the spacecraft components and structure, the large background area inside the ground-truth bounding box misleads the neural network during training into treating the background as part of the spacecraft's appearance. For small-sample spacecraft image data sets this problem is more severe, and the training of the neural network is prone to overfitting.
Disclosure of Invention
The main object of the invention is to provide an unknown spacecraft level detection method based on a monocular image which, starting from monocular optical images, exploits the fast prediction performance of a neural network and its ability to detect the main parts of a spacecraft in order to learn the complex nonlinear relation between the observed data and the features and positions of the spacecraft target, thereby realizing unknown spacecraft level detection based on a monocular image.
The invention aims at realizing the following technical scheme:
The invention discloses an unknown spacecraft level detection method based on a monocular image, which takes as input the normalized RGB images captured by the monocular camera sensor carried on the observation spacecraft, and as output the object probability Pr[object] of an unknown spacecraft target component, the object locating bounding box, and the conditional class probabilities Pr[class_i|object]. Given the limited computing power of the observation spacecraft's on-board processor, a small lightweight YOLOv5s neural network model is selected for detecting the unknown spacecraft target in the subsequent steps. To compensate for the weakness that a neural network trained on data sets containing only ground targets has difficulty recognizing the features of an unknown spacecraft among space targets, and to improve the detection precision of the YOLOv5s neural network model for unknown spacecraft, N monocular images of different spacecraft are acquired and a sample data set for unknown spacecraft level detection is constructed. Considering that the overall appearance and structure of spacecraft vary while basic essential components are stably present, and in order to obtain stable, reliable component detection performance and thus help improve the overall detection precision for unknown spacecraft, two sets of labels are annotated on the images in the constructed sample data set: the spacecraft component bounding boxes with corresponding class names, and the spacecraft overall bounding box. The data set is divided into a training set and a verification set at a predetermined ratio.
For the goal of overall detection of an unknown spacecraft, an unknown spacecraft level detection strategy is constructed to realize level detection from the unknown spacecraft's components to the whole. The strategy is implemented as follows: using the confidence threshold θ_p of the YOLOv5s neural network model, each component of the unknown spacecraft is detected separately; a distance index d is defined for clustering, the detected components are clustered into groups through direct and indirect connections, and all components in the same group are judged to jointly constitute a predicted spacecraft; the locating bounding box and detection confidence of the predicted spacecraft are then estimated from the detected components in each group. To evaluate the detection performance of the YOLOv5s neural network model on unknown spacecraft, conventional target detection performance indexes are given and performance indexes specific to unknown spacecraft level detection are designed. On the basis of the IOU, the CIOU is used to better evaluate the positioning precision of the predicted spacecraft, a scaled RCIOU is designed to represent the positioning precision of an unknown spacecraft component evaluated relative to the whole, and precision, recall and average precision indexes are considered. Using the constructed sample data set and the unknown spacecraft level detection strategy, combined with the performance indexes, an optimal YOLOv5s neural network model for unknown spacecraft level detection is obtained by training. During training, the training set images are preprocessed with standard data augmentation.
The YOLOv5s neural network model outputs predictions; during training the neural network parameters are optimized to maximize the CIOU, with cross entropy used as the classification loss function, and during verification the class-specific detection confidence of each detected object is calculated. In the verification process, standard non-maximum suppression is adopted as post-processing: repeatedly detected objects are cleaned up so that each object is detected only once, with an IOU threshold used to check for duplicates and only the detected object with the highest detection confidence preserved. The trained optimal YOLOv5s neural network model realizes level detection from unknown spacecraft components to the whole from a monocular image, improving the detection precision of unknown spacecraft.
The invention discloses an unknown spacecraft level detection method based on a monocular image, which comprises the following steps of:
step one: taking RGB images obtained by shooting and normalizing a monocular camera sensor carried by an observation spacecraft as input, and taking the object probability Pr [ object ] of an unknown spacecraft target component]Object locating bounding box and conditional class probability Pr class i |object]For output, under the condition that the computational power of an on-board processor of the observation spacecraft is limited, a small-sized lightweight YOLOv5s neural network model is selected for detecting an unknown spacecraft target in the subsequent step.
The YOLOv5 neural network model is one of the most advanced deep neural network models for detecting objects from images, wherein the YOLOv5s neural network model is the official model with the smallest scale in the YOLOv5 neural network model series.
To run on an observation spacecraft whose on-board processor has limited computing power, the small lightweight YOLOv5s neural network model is selected; it takes as input the normalized RGB images captured by the observation spacecraft's on-board monocular camera sensor, and outputs the object probability Pr[object] of an unknown spacecraft target component, the object locating bounding box, and the conditional class probabilities Pr[class_i|object], for the subsequent steps to detect unknown spacecraft targets.
Step two: in order to make up for the weakness that the neural network obtained by training the data set only containing the ground target is difficult to identify the characteristics of the unknown spacecraft in the space target, the detection precision of the YOLOv5s neural network model on the unknown spacecraft is improved, N monocular images of different spacecraft are obtained, and a sample data set for detecting the level of the unknown spacecraft is constructed. Considering that the overall appearance structure of the spacecraft is changeable but basic necessary parts exist stably, in order to obtain stable and reliable part detection performance and further help to improve the overall detection precision of the unknown spacecraft, two sets of labels, namely a spacecraft part boundary frame and a corresponding class name and a spacecraft overall boundary frame, are marked on the images in the constructed sample data set. The data set is divided into a training set and a verification set according to a predetermined proportion.
Existing data sets for target detection training of the YOLOv5s neural network model consist mostly of ground target images; however, space targets have characteristics different from ground targets, and a neural network trained on ground target data sets has difficulty recognizing the features of an unknown spacecraft well. To better extract the space target features of unknown spacecraft and improve the detection precision of the YOLOv5s neural network model for unknown spacecraft, N monocular images of different spacecraft are acquired and a sample data set for unknown spacecraft level detection is constructed.
Most of the spacecraft in the data set images differ from each other in structure, color, surface texture and so on; only a few spacecraft have multiple images photographed under different attitudes, lighting conditions and Earth backgrounds. Although the overall outline structure of spacecraft varies, the basic essential components are stably present. Two sets of labels are annotated on the images in the constructed sample data set, namely the spacecraft component bounding boxes with corresponding class names and the spacecraft overall bounding box, so as to obtain stable and reliable component detection performance and thus improve the overall detection precision for unknown spacecraft. The selection of spacecraft components and the annotation of their bounding boxes are mainly based on the basic structure of a spacecraft, namely the solar panels, the main body and the external antennas; the spacecraft overall bounding box is the smallest bounding box covering all components of the spacecraft.
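As an illustrative sketch (not part of the patent text), the two sets of labels can be stored in the normalized center-format text files commonly used by YOLO-family models; the class indices below are assumptions:

```python
def to_yolo_label(box, img_w, img_h):
    """Convert a pixel-space bounding box (x_min, y_min, x_max, y_max)
    into the normalized YOLO format (cx, cy, w, h)."""
    x_min, y_min, x_max, y_max = box
    cx = (x_min + x_max) / 2.0 / img_w
    cy = (y_min + y_max) / 2.0 / img_h
    w = (x_max - x_min) / img_w
    h = (y_max - y_min) / img_h
    return cx, cy, w, h

# Illustrative class indices (assumed, not from the patent):
# component labels: 0 = solar panel, 1 = main body, 2 = antenna;
# whole-spacecraft label set: 3 = spacecraft
part_box = (100, 50, 300, 150)  # a solar panel in a 640x480 image
cx, cy, w, h = to_yolo_label(part_box, 640, 480)
print(f"0 {cx:.4f} {cy:.4f} {w:.4f} {h:.4f}")
```

One label file per image would then hold one such line per annotated component, and a second file (or second class set) the whole-spacecraft box.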
The sample data set of monocular images for unknown spacecraft level detection is divided into a training set and a verification set at a predetermined ratio. Because the YOLOv5s neural network model learns only the spacecraft textures in the monocular images of the training set, and the spacecraft in the verification set are almost all unseen during training, the detection performance of the trained YOLOv5s neural network model on unknown spacecraft is evaluated on the verification set.
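A split at a predetermined ratio can be sketched as follows; the 4:1 ratio and the fixed seed are illustrative assumptions, since the patent does not specify the proportion:

```python
import random

def split_dataset(image_ids, train_ratio=0.8, seed=0):
    """Shuffle the sample ids reproducibly and divide them into a
    training set and a verification set at the given ratio."""
    ids = list(image_ids)
    random.Random(seed).shuffle(ids)
    n_train = int(len(ids) * train_ratio)
    return ids[:n_train], ids[n_train:]
```

Shuffling before splitting keeps spacecraft with multiple photographs from all landing in one subset by accident of file ordering.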
Step three: aiming at the target of the overall detection of an unknown spacecraft, constructing an unknown spacecraft level detection strategy to realize the level detection from an unknown spacecraft component to the whole, wherein the method for realizing the unknown spacecraft level detection strategy comprises the following steps: confidence threshold θ using YOLOv5s neural network model p Detecting each part of the unknown spacecraft respectively; defining a distance index d for clustering, clustering and grouping the detected components in a direct connection and indirect connection mode, and judging that all the components in the same group jointly form a prediction spacecraft; and estimating a positioning boundary box and a detection confidence of the prediction spacecraft according to the detected parts in each group. In order to evaluate the detection performance of the Yolov5s neural network model on the unknown spacecraft, conventional performance indexes of target detection are given, and performance indexes for the level detection of the unknown spacecraft are designed. Based on the IOU, the CIOU is used for better evaluating the positioning precision of the predicted spacecraft, the scaled RCIOU is designed to represent the positioning precision of the unknown spacecraft part relative to the whole evaluation, and the precision, recall rate and average precision index are considered。
For the goal of overall detection of an unknown spacecraft, an unknown spacecraft level detection strategy is constructed to realize level detection from the unknown spacecraft's components to the whole. Using the confidence threshold θ_p of the YOLOv5s neural network model, each component of the unknown spacecraft is detected separately. A distance index d is defined for grouping the detected components: the distance between two detected components is the ratio of the shortest distance to the longest distance between their two bounding boxes; if the bounding boxes of the two detected components overlap, the distance is zero. Two detected components are judged to be directly connected if and only if the distance between them is smaller than a given distance threshold θ_d; two detected components are judged to be indirectly connected if and only if there is a chain of directly connected components between them. Two detected components are clustered into the same group if and only if they are directly or indirectly connected, and all components in the same group are judged to jointly constitute a predicted spacecraft. The locating bounding box of the predicted spacecraft is estimated as the smallest bounding box covering all components p in the group, while the detection confidence of the predicted spacecraft s is estimated from the detection confidences of the detected components in the group.
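A minimal Python sketch of one plausible reading of this grouping strategy follows; the concrete definitions of the shortest distance (the gap between boxes) and the longest distance (the maximum corner-to-corner distance), and all helper names, are assumptions rather than the patent's exact formulation:

```python
import math
from collections import deque

def part_distance(b1, b2):
    """Distance index d between two detected components: the ratio of the
    shortest to the longest distance between their bounding boxes
    (x_min, y_min, x_max, y_max); zero when the boxes overlap."""
    dx = max(b1[0] - b2[2], b2[0] - b1[2], 0.0)
    dy = max(b1[1] - b2[3], b2[1] - b1[3], 0.0)
    shortest = math.hypot(dx, dy)
    if shortest == 0.0:
        return 0.0
    corners = lambda b: [(b[0], b[1]), (b[0], b[3]), (b[2], b[1]), (b[2], b[3])]
    longest = max(math.hypot(p[0] - q[0], p[1] - q[1])
                  for p in corners(b1) for q in corners(b2))
    return shortest / longest

def group_parts(boxes, theta_d):
    """Cluster detected components: direct connection when d < theta_d,
    indirect connection through chains of directly connected components
    (breadth-first closure)."""
    n = len(boxes)
    seen, groups = set(), []
    for i in range(n):
        if i in seen:
            continue
        group, queue = [], deque([i])
        seen.add(i)
        while queue:
            j = queue.popleft()
            group.append(j)
            for k in range(n):
                if k not in seen and part_distance(boxes[j], boxes[k]) < theta_d:
                    seen.add(k)
                    queue.append(k)
        groups.append(group)
    return groups

def covering_box(boxes):
    """Locating bounding box of the predicted spacecraft: the smallest box
    covering all components in the group."""
    return (min(b[0] for b in boxes), min(b[1] for b in boxes),
            max(b[2] for b in boxes), max(b[3] for b in boxes))
```

Two overlapping component boxes thus always fall in one group, while a far-away detection starts a second predicted spacecraft.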
The detection confidence is output directly by the YOLOv5s neural network model and characterizes both the positioning accuracy of the predicted locating bounding box and the probability that an unknown spacecraft appears within it. Only objects whose detection confidence exceeds a given confidence threshold are judged to be detected objects.
In order to evaluate the detection performance of the Yolov5s neural network model on the unknown spacecraft, conventional performance indexes of target detection are given, and performance indexes for the level detection of the unknown spacecraft are designed.
The intersection-over-union IOU is used to determine the correspondence between detected spacecraft and ground-truth spacecraft, and is the most commonly used index for evaluating the overlap of two bounding boxes in a scale-invariant way. The IOU between two bounding boxes b1 and b2 is calculated as the ratio of the intersection area to the union area of the two bounding boxes, i.e.

IOU(b1, b2) = area(b1 ∩ b2) / area(b1 ∪ b2)
Typically, an IOU greater than 0.5 indicates a higher positioning accuracy. If the IOU of at least one detected object is larger than a given IOU threshold, marking the spacecraft as a detected spacecraft, and then taking the detected object with the highest detection confidence as the spacecraft.
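For axis-aligned boxes, the IOU index above can be computed as, for example:

```python
def iou(b1, b2):
    """Intersection-over-union of two boxes (x_min, y_min, x_max, y_max)."""
    ix = max(0.0, min(b1[2], b2[2]) - max(b1[0], b2[0]))  # intersection width
    iy = max(0.0, min(b1[3], b2[3]) - max(b1[1], b2[1]))  # intersection height
    inter = ix * iy
    area1 = (b1[2] - b1[0]) * (b1[3] - b1[1])
    area2 = (b2[2] - b2[0]) * (b2[3] - b2[1])
    union = area1 + area2 - inter
    return inter / union if union > 0 else 0.0
```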
The basic IOU index reflects only the overlapping area of two bounding boxes, while the center-to-center distance and aspect-ratio consistency of the bounding boxes are also important geometric factors. The CIOU index is used to better evaluate the positioning precision of the predicted spacecraft, i.e.

CIOU(b1, b2) = IOU(b1, b2) - ρ²(b1, b2)/c² - αv
where ρ(b1, b2) is the Euclidean distance between the center points of the two bounding boxes b1 and b2, c is the diagonal length of the smallest bounding box covering the two bounding boxes, and the positive trade-off parameter α is

α = v / ((1 - IOU(b1, b2)) + v)
v measures aspect-ratio consistency, i.e.

v = (4/π²) (arctan(w1/h1) - arctan(w2/h2))²
where w1 and h1, w2 and h2 are the width and height of the two bounding boxes, respectively.
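Putting the CIOU terms together gives the following self-contained sketch; the eps guard against division by zero is an implementation assumption, not part of the patent text:

```python
import math

def ciou(b1, b2, eps=1e-9):
    """CIOU = IOU - rho^2/c^2 - alpha*v for boxes (x_min, y_min, x_max, y_max)."""
    # IOU term
    ix = max(0.0, min(b1[2], b2[2]) - max(b1[0], b2[0]))
    iy = max(0.0, min(b1[3], b2[3]) - max(b1[1], b2[1]))
    inter = ix * iy
    a1 = (b1[2] - b1[0]) * (b1[3] - b1[1])
    a2 = (b2[2] - b2[0]) * (b2[3] - b2[1])
    i = inter / (a1 + a2 - inter + eps)
    # squared distance rho^2 between the two box centers
    rho2 = (((b1[0] + b1[2]) - (b2[0] + b2[2])) ** 2
            + ((b1[1] + b1[3]) - (b2[1] + b2[3])) ** 2) / 4.0
    # squared diagonal c^2 of the smallest box covering both boxes
    c2 = ((max(b1[2], b2[2]) - min(b1[0], b2[0])) ** 2
          + (max(b1[3], b2[3]) - min(b1[1], b2[1])) ** 2)
    # aspect-ratio consistency v and trade-off parameter alpha
    v = (4.0 / math.pi ** 2) * (math.atan((b1[2] - b1[0]) / (b1[3] - b1[1]))
                                - math.atan((b2[2] - b2[0]) / (b2[3] - b2[1]))) ** 2
    alpha = v / (1.0 - i + v + eps)
    return i - rho2 / (c2 + eps) - alpha * v
```

For identical boxes the penalty terms vanish and CIOU tends to 1; any center offset or aspect-ratio mismatch lowers it below the plain IOU.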
In unknown spacecraft level detection, the positioning accuracy of the unknown spacecraft as a whole depends on the positioning accuracy of its components. The positioning accuracy of an unknown spacecraft component should therefore be evaluated relative to the unknown spacecraft as a whole, whereas the CIOU index only represents the positioning error of the component relative to the component itself. Because the scale of an unknown spacecraft component is smaller than that of the unknown spacecraft as a whole, a given component positioning error leads to a smaller overall positioning error, so directly using the CIOU index is not suitable; an RCIOU index considering scaling is designed to represent the positioning precision of an unknown spacecraft component, i.e.

RCIOU(p) = 1 - (area(p)/area(s)) (1 - CIOU(p))
where 1 - CIOU(p) is the component positioning error, scaled by the area ratio of the unknown spacecraft component p to the unknown spacecraft s as a whole.
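Under the reading that the area-scaled component error is subtracted from one, the RCIOU index can be sketched as below; since the patent's formula image is not reproduced in this text, this reconstruction is an assumption inferred from the surrounding description:

```python
def rciou(part_box, whole_box, ciou_part):
    """RCIOU: one minus the component positioning error 1 - CIOU(p),
    scaled by the area ratio of the component box to the whole-spacecraft box
    (boxes given as (x_min, y_min, x_max, y_max))."""
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    ratio = area(part_box) / area(whole_box)
    return 1.0 - ratio * (1.0 - ciou_part)
```

A component covering a quarter of the spacecraft with CIOU 0.8 thus contributes only a 0.05 scaled error, giving RCIOU 0.95.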
A detected spacecraft object is a true positive TP, an undetected spacecraft object is a false negative FN, and other detected objects that do not correspond to a true spacecraft are false positives FP. The precision and recall indexes are respectively

P = n_TP / (n_TP + n_FP),  R = n_TP / (n_TP + n_FN)
where n_TP, n_FP and n_FN are the numbers of true positives, false positives and false negatives, respectively. Given the detection confidence threshold and the CIOU threshold, the precision P represents the proportion of detected spacecraft among all detected objects, and the recall R reflects the proportion of detected spacecraft among all spacecraft. When the CIOU threshold t is fixed and the detection confidence threshold varies, the precision changes with the recall: the higher the precision, the lower the recall, and vice versa. Thus, the average precision AP is defined as the area under the precision-recall curve for evaluating the average detection performance, i.e.

AP = ∫₀¹ P(R) dR
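The precision, recall and average precision indexes can be sketched as follows; the step-wise integration with monotone interpolation of precision is a common evaluation convention assumed here, not a detail stated in the patent:

```python
def precision_recall(n_tp, n_fp, n_fn):
    """P = n_TP/(n_TP + n_FP), R = n_TP/(n_TP + n_FN), with empty-set guards."""
    p = n_tp / (n_tp + n_fp) if n_tp + n_fp else 0.0
    r = n_tp / (n_tp + n_fn) if n_tp + n_fn else 0.0
    return p, r

def average_precision(recalls, precisions):
    """Area under the precision-recall curve, integrating step-wise over
    recall after making precision monotonically non-increasing."""
    r = [0.0] + list(recalls) + [1.0]
    p = [0.0] + list(precisions) + [0.0]
    for i in range(len(p) - 2, -1, -1):  # monotone envelope, right to left
        p[i] = max(p[i], p[i + 1])
    return sum((r[i + 1] - r[i]) * p[i + 1] for i in range(len(r) - 1))
```

A detector with precision 1.0 that recovers only half the spacecraft scores AP 0.5 under this convention, matching the area-under-curve definition above.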
Step four: and (3) training to obtain a YOLOv5s neural network model for the unknown spacecraft level detection by utilizing the sample data set constructed in the second step and the unknown spacecraft level detection strategy constructed in the third step and combining the performance indexes, and verifying until the trained and verified optimal YOLOv5s neural network model is obtained. In the training process, the training set image is preprocessed through standard data enhancement. The YOLOv5s neural network model outputs predictions, optimizes the neural network parameters during training to maximize CIOU and uses cross entropy as a loss function, and calculates the class-specific detection confidence of each detection object during verification. In the verification process, standard non-maximum suppression is adopted for post-processing, each object is ensured to be detected only once by cleaning the repeatedly detected objects, and the IOU threshold is used for checking the repetition and only preserving the detected objects with the highest detection confidence.
Using the sample data set constructed in step two and the unknown spacecraft level detection strategy constructed in step three, combined with the performance indexes, the optimal YOLOv5s neural network model for unknown spacecraft level detection is obtained by training and verification. During training, the training set images are preprocessed with standard data augmentation, including the widely used color adjustment, scaling and flipping.
The backbone of the YOLOv5s neural network model comprises a Focus module combined with a series of CBL, CSP and SPP modules; the head combines the FPN and PAN network structures, aggregates three levels of high-level abstract features from the backbone, and generates three feature maps with substantially different receptive field sizes for detection; skip connections pass information between the feature layers to better locate and classify target objects. At the output layer, the YOLOv5s neural network model applies standard spatial convolutions to the three feature maps respectively and outputs predictions based on a predefined grid of prior bounding boxes. Each prediction consists of an object probability Pr[object] representing the presence of an object in the corresponding prior bounding box, an object localization bounding box for locating the detected object, and several conditional class probabilities Pr[class_i|object] indicating which class the object belongs to. During training, the neural network parameters are optimized to maximize the CIOU between the predicted bounding box of the unknown spacecraft and the ground-truth bounding box; the training target of the object probability is defined as the CIOU, and cross entropy is used as the classification loss function. During verification, the class-specific detection confidence of each predicted object is obtained by multiplying the conditional class probability by the object probability, and represents the probability that a class-i object appears in the prediction bounding box.
During verification, standard non-maximum suppression is used as a post-processing step to ensure that each object is detected only once by cleaning up repeatedly detected objects: an IOU threshold is used to check for duplicates, and among duplicate detections only the object with the highest detection confidence is kept.
Step five: using the optimal YOLOv5s neural network model trained in step four, realize part-to-whole level detection of the unknown spacecraft from the monocular image, improving the detection accuracy for unknown spacecraft.
The beneficial effects are that:
1. The unknown spacecraft level detection method based on monocular images disclosed by the invention uses a neural network to approximate the complex nonlinear relationship, solving the unknown spacecraft detection problem with a monocular image as the only observation; it therefore has the potential to perform unknown spacecraft detection under complex observation information, with wider coverage.
2. The method fully considers the parts that exist stably on unknown spacecraft and builds an unknown spacecraft level detection strategy, solving the detection convergence problem for unknown spacecraft under higher positioning accuracy requirements; compared with conventional methods that directly detect a known spacecraft as a whole, the solution accuracy is higher.
3. The method constructs a spacecraft sample data set annotated with both part-level and whole-spacecraft labels, so that the neural network can learn from rich samples and produce good predictions; the method has high robustness and reliability.
Drawings
FIG. 1 is a flow chart of a method of unknown spacecraft level detection based on monocular images of the present invention;
FIG. 2 is a block diagram of the YOLOv5s neural network model of the present invention;
FIG. 3 is a schematic diagram of distance indicators and connection patterns defined by an unknown spacecraft level detection strategy of the present invention;
FIG. 4 is a statistical histogram of the area ratio of spacecraft parts to the overall bounding box in the present embodiment; FIG. 4 a) is a statistical histogram of the area ratio of each individual spacecraft part to the overall bounding box, and FIG. 4 b) is a statistical histogram of the area ratio of each part class (the union of its parts) to the overall bounding box;
FIG. 5 is a statistical histogram of the detection confidence and CIOU or RCIOU of the direct detection of the unknown spacecraft as a whole and the main part in this embodiment; fig. 5 a) is a statistical histogram of the detection confidence and CIOU of the direct detection spacecraft as a whole, fig. 5 b) is a statistical histogram of the detection confidence and RCIOU of the solar panel component, fig. 5 c) is a statistical histogram of the detection confidence and RCIOU of the body component, fig. 5 d) is a statistical histogram of the detection confidence and RCIOU of the external antenna component;
FIG. 6 is a statistical histogram of CIOU and confidence in detection of the spacecraft ensemble using unknown spacecraft level detection in this embodiment;
FIG. 7 is a bounding box positioning result achieved for unknown spacecraft in the verification set using unknown spacecraft level detection in the present embodiment;
fig. 8 is an illustration of the average accuracy of unknown spacecraft detection evaluated at different IOU thresholds using unknown spacecraft level detection in the present embodiment.
Detailed Description
In order to better illustrate the objects and advantages of the present invention, a detailed explanation of the invention is provided below by performing a simulation analysis of an unknown spacecraft level detection method based on monocular images.
As shown in fig. 1, the unknown spacecraft level detection method based on the monocular image disclosed in the embodiment includes the following steps:
step one: taking RGB images acquired and normalized by a monocular camera sensor carried by an observation spacecraft as input, and taking an object of an unknown spacecraft target componentProbability Pr [ object ]]Object locating bounding box and conditional class probability Pr class i |object]For output, under the condition that the computational power of an on-board processor of the observation spacecraft is limited, a small-sized lightweight YOLOv5s neural network model is selected for detecting an unknown spacecraft target in the subsequent step.
The YOLOv5 neural network model is one of the most advanced deep neural network models for detecting objects from images; YOLOv5s is the smallest official model in the YOLOv5 series. The structure of the YOLOv5s neural network model is shown in fig. 2.
To run on an observation spacecraft whose on-board processor has limited computing power, the small lightweight YOLOv5s neural network model is selected; the RGB images captured and normalized by the spacecraft's on-board monocular camera sensor are taken as input, and the object probability Pr[object] of the unknown spacecraft target parts, the object localization bounding boxes and the conditional class probabilities Pr[class_i|object] are taken as output, for detecting the unknown spacecraft target in the subsequent steps.
Step two: to compensate for the weakness that a neural network trained only on data sets of ground targets has difficulty recognizing the features of an unknown spacecraft as a space target, and to improve the detection accuracy of the YOLOv5s neural network model on unknown spacecraft, N monocular images of different spacecraft are acquired and a sample data set for unknown spacecraft level detection is constructed. Considering that the overall appearance of a spacecraft is variable while its basic essential parts exist stably, and in order to obtain stable and reliable part detection performance that in turn helps improve the overall detection accuracy for unknown spacecraft, the images in the constructed sample data set are annotated with two sets of labels: the spacecraft part bounding boxes with their class names, and the overall spacecraft bounding box. The data set is divided into a training set and a verification set at a predetermined ratio.
Most existing data sets used for target detection training of the YOLOv5s neural network model consist of ground target images; however, space targets have characteristics different from ground targets, and a neural network trained on ground target data sets has difficulty recognizing the features of an unknown spacecraft. To better extract the space target features of unknown spacecraft and improve the detection accuracy of the YOLOv5s neural network model on unknown spacecraft, N monocular images of different spacecraft are acquired and a sample data set for unknown spacecraft level detection is constructed.
Most spacecraft in the data set images differ from each other in structure, color and surface texture; only a few spacecraft have multiple images taken under different attitudes, lighting conditions and Earth backgrounds, and although the overall appearance of a spacecraft is variable, its basic essential parts exist stably. The images in the constructed sample data set are annotated with two sets of labels, namely the spacecraft part bounding boxes with their class names, and the overall spacecraft bounding box, so as to obtain stable and reliable part detection performance and in turn help improve the overall detection accuracy for unknown spacecraft. The selection of spacecraft parts and the labeling of their bounding boxes are based on the basic structure of a spacecraft, namely solar panels, main body and external antennas; the overall spacecraft bounding box is the smallest bounding box covering all part bounding boxes of the spacecraft.
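As a minimal sketch of the labeling convention just described (the coordinate format and helper name are illustrative, not part of the patent), the overall spacecraft bounding box can be derived as the smallest box covering all labeled part boxes:

```python
# Hypothetical helper: part boxes are assumed to be (x_min, y_min, x_max, y_max) tuples.
def overall_bbox(part_boxes):
    """Smallest axis-aligned box covering every part bounding box."""
    xs0, ys0, xs1, ys1 = zip(*part_boxes)
    return (min(xs0), min(ys0), max(xs1), max(ys1))

parts = [(10, 20, 60, 80),   # solar panel
         (55, 30, 90, 70),   # main body
         (85, 40, 95, 50)]   # external antenna
print(overall_bbox(parts))   # -> (10, 20, 95, 80)
```

This mirrors the rule that the whole-spacecraft label is not annotated independently but is the minimal cover of the part labels.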
The sample data set of monocular images for unknown spacecraft level detection is divided into a training set and a verification set at a predetermined ratio. Because the YOLOv5s neural network model learns only the spacecraft textures in the monocular images of the training set, and the spacecraft in the verification set are almost all unseen during training, the detection performance of the trained YOLOv5s neural network model on unknown spacecraft is evaluated on the verification set.
Step three: aiming at the goal of detecting the unknown spacecraft as a whole, an unknown spacecraft level detection strategy is constructed to realize level detection from the parts of the unknown spacecraft to the whole. The strategy is implemented as follows: using the confidence threshold θ_p of the YOLOv5s neural network model, each part of the unknown spacecraft is detected separately; a distance index d is defined for clustering, the detected parts are clustered into groups through direct and indirect connections, and all parts in the same group are judged to jointly constitute one predicted spacecraft; the localization bounding box and detection confidence of the predicted spacecraft are then estimated from the detected parts in each group. To evaluate the detection performance of the YOLOv5s neural network model on unknown spacecraft, the conventional performance indexes of target detection are given and performance indexes for unknown spacecraft level detection are designed. On the basis of the IOU, the CIOU is used to better evaluate the positioning accuracy of the predicted spacecraft, a scaled RCIOU is designed to represent the positioning accuracy of an unknown spacecraft part evaluated relative to the whole, and the precision, recall and average precision indexes are considered.
Aiming at the goal of detecting the unknown spacecraft as a whole, an unknown spacecraft level detection strategy is constructed to realize level detection from parts to the whole. Using the confidence threshold θ_p of the YOLOv5s neural network model, each part of the unknown spacecraft is detected separately. A distance index d, as shown in fig. 3, is defined for the detected parts: the distance between two detected parts is the ratio of the shortest distance to the longest distance between their bounding boxes. If the bounding boxes of two detected parts overlap, the distance is zero. Two detected parts are judged to be directly connected if and only if the distance between them is smaller than a given distance threshold θ_d; two detected parts are judged to be indirectly connected if and only if there is a chain of directly connected parts between them. Two detected parts are clustered into the same group if and only if they are directly or indirectly connected, and all parts in the same group are judged to jointly constitute one predicted spacecraft. The localization bounding box of the predicted spacecraft is estimated as the smallest bounding box covering all parts p in the group, while the detection confidence of the predicted spacecraft s is
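The distance index and grouping rule above can be sketched as follows. This is an illustrative reimplementation, not the patent's code: boxes are assumed to be (x_min, y_min, x_max, y_max) tuples, and the "longest distance" is taken here as the maximum corner-to-corner distance between the two boxes, which is an assumption about the patent's wording. Direct connection is an edge when d < θ_d; indirect connection is its transitive closure, realized with union-find:

```python
import math

def box_gap(b1, b2):
    """Shortest distance between two axis-aligned boxes (0 if they overlap)."""
    dx = max(b1[0] - b2[2], b2[0] - b1[2], 0.0)
    dy = max(b1[1] - b2[3], b2[1] - b1[3], 0.0)
    return math.hypot(dx, dy)

def box_span(b1, b2):
    """Longest corner-to-corner distance between the two boxes (assumed reading)."""
    corners = lambda b: [(b[0], b[1]), (b[0], b[3]), (b[2], b[1]), (b[2], b[3])]
    return max(math.hypot(p[0] - q[0], p[1] - q[1])
               for p in corners(b1) for q in corners(b2))

def distance_index(b1, b2):
    """Distance index d: ratio of shortest to longest distance; zero on overlap."""
    gap = box_gap(b1, b2)
    return 0.0 if gap == 0.0 else gap / box_span(b1, b2)

def cluster_parts(boxes, theta_d):
    """Group detected parts: direct edge when d < theta_d, indirect via transitivity."""
    parent = list(range(len(boxes)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i
    for i in range(len(boxes)):
        for j in range(i + 1, len(boxes)):
            if distance_index(boxes[i], boxes[j]) < theta_d:
                parent[find(i)] = find(j)   # union the two components
    groups = {}
    for i in range(len(boxes)):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())
```

Each returned group corresponds to one predicted spacecraft, whose bounding box is then the minimal cover of its member boxes.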
The detection confidence is output directly by the YOLOv5s neural network model and reflects both the positioning accuracy of the predicted localization bounding box and the probability that an unknown spacecraft appears in it. Only objects whose detection confidence is greater than a given confidence threshold are judged to be detected objects.
In order to evaluate the detection performance of the YOLOv5s neural network model on unknown spacecraft, the conventional performance indexes of target detection are given, and performance indexes for unknown spacecraft level detection are designed.
The intersection-over-union (IOU) is used to determine the correspondence between detected spacecraft and ground-truth spacecraft, and is the most commonly used scale-invariant index for evaluating the overlap of two bounding boxes. The IOU between two bounding boxes b1 and b2 is calculated as the ratio of their intersection area to their union area, i.e.

IOU(b1, b2) = area(b1 ∩ b2) / area(b1 ∪ b2)
Typically, an IOU greater than 0.5 indicates high positioning accuracy. If at least one detected object has an IOU with a spacecraft greater than the given IOU threshold, that spacecraft is marked as detected, and the detected object with the highest detection confidence is taken as its detection.
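A minimal implementation of the IOU just defined, assuming axis-aligned (x_min, y_min, x_max, y_max) boxes (an illustrative sketch, not the patent's code):

```python
def iou(b1, b2):
    """Intersection-over-union of two (x_min, y_min, x_max, y_max) boxes."""
    ix = max(0.0, min(b1[2], b2[2]) - max(b1[0], b2[0]))  # intersection width
    iy = max(0.0, min(b1[3], b2[3]) - max(b1[1], b2[1]))  # intersection height
    inter = ix * iy
    area1 = (b1[2] - b1[0]) * (b1[3] - b1[1])
    area2 = (b2[2] - b2[0]) * (b2[3] - b2[1])
    return inter / (area1 + area2 - inter)

print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # half-overlapping boxes -> 1/3
```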
The basic IOU index reflects only the overlapping area of two bounding boxes, while the center distance and aspect-ratio consistency of the bounding boxes are also important geometric factors. The CIOU index is therefore used to better evaluate the positioning accuracy of the predicted spacecraft, i.e.

CIOU(b1, b2) = IOU(b1, b2) − ρ²(b1, b2)/c² − αv

where ρ(b1, b2) is the Euclidean distance between the center points of the two bounding boxes b1 and b2, c is the diagonal length of the smallest bounding box covering both, and the positive trade-off parameter α is

α = v / ((1 − IOU(b1, b2)) + v)

v measures the aspect-ratio consistency, i.e.

v = (4/π²)(arctan(w1/h1) − arctan(w2/h2))²

where w1 and h1, w2 and h2 are the widths and heights of the two bounding boxes, respectively.
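The three formulas above can be combined into one function; this is an illustrative sketch of the standard CIOU computation, assuming axis-aligned (x_min, y_min, x_max, y_max) boxes:

```python
import math

def ciou(b1, b2):
    """Complete IoU: IoU minus center-distance and aspect-ratio penalties."""
    # plain IoU
    ix = max(0.0, min(b1[2], b2[2]) - max(b1[0], b2[0]))
    iy = max(0.0, min(b1[3], b2[3]) - max(b1[1], b2[1]))
    inter = ix * iy
    a1 = (b1[2] - b1[0]) * (b1[3] - b1[1])
    a2 = (b2[2] - b2[0]) * (b2[3] - b2[1])
    iou_val = inter / (a1 + a2 - inter)
    # squared center distance rho^2 and enclosing-box diagonal c^2
    cx1, cy1 = (b1[0] + b1[2]) / 2, (b1[1] + b1[3]) / 2
    cx2, cy2 = (b2[0] + b2[2]) / 2, (b2[1] + b2[3]) / 2
    rho2 = (cx1 - cx2) ** 2 + (cy1 - cy2) ** 2
    cw = max(b1[2], b2[2]) - min(b1[0], b2[0])
    ch = max(b1[3], b2[3]) - min(b1[1], b2[1])
    diag2 = cw ** 2 + ch ** 2
    # aspect-ratio consistency v and trade-off parameter alpha
    w1, h1 = b1[2] - b1[0], b1[3] - b1[1]
    w2, h2 = b2[2] - b2[0], b2[3] - b2[1]
    v = 4 / math.pi ** 2 * (math.atan(w1 / h1) - math.atan(w2 / h2)) ** 2
    alpha = v / ((1 - iou_val) + v) if v > 0 else 0.0
    return iou_val - rho2 / diag2 - alpha * v

print(ciou((0, 0, 10, 10), (0, 0, 10, 10)))  # identical boxes -> 1.0
```

For identical boxes both penalty terms vanish and CIOU equals IOU; for displaced boxes of the same size the center-distance term lowers the score below the plain IOU.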
In unknown spacecraft level detection, the positioning accuracy of the unknown spacecraft as a whole depends on the positioning accuracy of its parts. The positioning accuracy of an unknown spacecraft part should therefore be evaluated relative to the spacecraft as a whole, whereas the CIOU index only represents the positioning error of the part relative to the part itself. Because a part is smaller in scale than the whole spacecraft, a given part positioning error produces a smaller positioning error at the whole-spacecraft level, so using the CIOU index directly is not appropriate; an RCIOU index that accounts for this scaling is designed to represent the positioning accuracy of an unknown spacecraft part, namely

RCIOU(p) = 1 − (A_p / A_s)(1 − CIOU(p))

where 1 − CIOU(p) is the part positioning error, and A_p/A_s is the area ratio of the unknown spacecraft part to the unknown spacecraft as a whole, by which the error is scaled.
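Under the reconstruction above — the part positioning error 1 − CIOU(p) scaled by the part-to-whole area ratio — the RCIOU can be sketched as follows (the function signature and the use of bounding-box areas for the ratio are illustrative assumptions):

```python
def rciou(ciou_part, part_box, whole_box):
    """Rescaled CIOU: the part's positioning error 1 - CIOU(p) is scaled by the
    area ratio of the part box to the whole-spacecraft box (assumed definition)."""
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    ratio = area(part_box) / area(whole_box)
    return 1.0 - ratio * (1.0 - ciou_part)
```

A small, imperfectly localized part thus yields an RCIOU close to 1, reflecting that its error barely affects the whole-spacecraft localization.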
A detected spacecraft object is a true positive (TP), an undetected spacecraft object is a false negative (FN), and any other detected object that does not correspond to a true spacecraft is a false positive (FP). The precision and recall indexes are respectively

P = n_TP / (n_TP + n_FP),    R = n_TP / (n_TP + n_FN)
Wherein n_TP, n_FP and n_FN are respectively the numbers of true positives, false positives and false negatives. Given the detection confidence threshold and the CIOU threshold, the precision P represents the proportion of detected spacecraft among all detected objects, and the recall R reflects the proportion of detected spacecraft among all true spacecraft. When the CIOU threshold t is fixed and the detection confidence threshold varies, the precision changes with the recall: the higher the precision, the lower the recall, and vice versa. Thus, the average precision AP is defined as the area under the precision-recall curve, used to evaluate the average detection performance, i.e.

AP(t) = ∫₀¹ P(R) dR
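The precision, recall and AP computation can be sketched as follows; this illustrative version takes detections already sorted by descending confidence (each flagged TP or FP) and integrates the precision-recall curve with the rectangle rule:

```python
def average_precision(is_tp, n_ground_truth):
    """Area under the precision-recall curve.

    is_tp: TP/FP flags of detections sorted by descending confidence.
    n_ground_truth: total number of true spacecraft (TP + FN).
    """
    ap, tp, fp, prev_recall = 0.0, 0, 0, 0.0
    for hit in is_tp:
        tp, fp = tp + hit, fp + (not hit)
        precision = tp / (tp + fp)          # P = n_TP / (n_TP + n_FP)
        recall = tp / n_ground_truth        # R = n_TP / (n_TP + n_FN)
        ap += precision * (recall - prev_recall)  # rectangle rule
        prev_recall = recall
    return ap

print(average_precision([True, True, False, True], 4))  # -> 0.6875
```

Sweeping the confidence threshold corresponds to truncating this sorted list at different points, which is why the curve (and hence AP) is traced out detection by detection.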
Step four: using the sample data set constructed in step two and the unknown spacecraft level detection strategy constructed in step three, together with the performance indexes, train and verify the YOLOv5s neural network model for unknown spacecraft level detection until the optimal trained and verified YOLOv5s neural network model is obtained. During training, the training set images are preprocessed with standard data augmentation. The YOLOv5s neural network model outputs predictions; during training the neural network parameters are optimized to maximize the CIOU, with cross entropy as the classification loss, and during verification the class-specific detection confidence of each detected object is calculated. During verification, standard non-maximum suppression is applied as post-processing: repeatedly detected objects are cleaned up so that each object is detected only once, using an IOU threshold to check for duplicates and keeping only the detected object with the highest detection confidence.
Using the sample data set constructed in step two and the unknown spacecraft level detection strategy constructed in step three, combined with the performance indexes, the optimal YOLOv5s neural network model for unknown spacecraft level detection is obtained through training and verification. During training, the training set images are preprocessed with standard data augmentation, including the widely used color adjustment, scaling and flipping.
The backbone of the YOLOv5s neural network model comprises a Focus module combined with a series of CBL, CSP and SPP modules; the head combines the FPN and PAN network structures, aggregates three levels of high-level abstract features from the backbone, and generates three feature maps with substantially different receptive field sizes for detection; skip connections pass information between the feature layers to better locate and classify target objects. At the output layer, the YOLOv5s neural network model applies standard spatial convolutions to the three feature maps respectively and outputs predictions based on a predefined grid of prior bounding boxes. Each prediction consists of an object probability Pr[object] representing the presence of an object in the corresponding prior bounding box, an object localization bounding box for locating the detected object, and several conditional class probabilities Pr[class_i|object] indicating which class the object belongs to. During training, the neural network parameters are optimized to maximize the CIOU between the predicted bounding box of the unknown spacecraft and the ground-truth bounding box; the training target of the object probability is defined as the CIOU, and cross entropy is used as the classification loss function. During verification, the class-specific detection confidence of each predicted object is obtained by multiplying the conditional class probability by the object probability, and represents the probability that a class-i object appears in the prediction bounding box.
During verification, standard non-maximum suppression is used as a post-processing step to ensure that each object is detected only once by cleaning up repeatedly detected objects: an IOU threshold is used to check for duplicates, and among duplicate detections only the object with the highest detection confidence is kept.
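The non-maximum suppression post-processing described above can be sketched as a greedy loop (an illustrative implementation, not the patent's code; detections are assumed to be dicts with a "box" and a "conf" key):

```python
def nms(detections, iou_threshold=0.5):
    """Greedy non-maximum suppression: repeatedly keep the highest-confidence
    box and drop any remaining box overlapping it above the IOU threshold."""
    def iou(b1, b2):
        ix = max(0.0, min(b1[2], b2[2]) - max(b1[0], b2[0]))
        iy = max(0.0, min(b1[3], b2[3]) - max(b1[1], b2[1]))
        inter = ix * iy
        a1 = (b1[2] - b1[0]) * (b1[3] - b1[1])
        a2 = (b2[2] - b2[0]) * (b2[3] - b2[1])
        return inter / (a1 + a2 - inter)
    remaining = sorted(detections, key=lambda d: d["conf"], reverse=True)
    kept = []
    while remaining:
        best = remaining.pop(0)
        kept.append(best)
        # discard duplicates of the kept detection
        remaining = [d for d in remaining
                     if iou(d["box"], best["box"]) < iou_threshold]
    return kept
```

In class-aware detectors NMS is usually applied per class; the single-pool version here matches the simpler description in the text.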
Step five: using the optimal YOLOv5s neural network model trained in step four, realize part-to-whole level detection of the unknown spacecraft from the monocular image, improving the detection accuracy for unknown spacecraft.
In order to verify the feasibility and effectiveness of the method, the simulation results are analyzed in this embodiment.
A statistical histogram of the area ratio of spacecraft parts to the overall bounding box is shown in fig. 4. Fig. 4 a) shows that for most spacecraft, the bounding box area of an individual part in the image is less than 1/4 of the whole, with the external antennas being particularly small. The bounding box area of each part class in fig. 4 b) is the union area of the individual parts in that class; on average, 40% of the image area inside a spacecraft bounding box is solar panel, 20% is main body, the remaining 40% is background, and the external antennas occupy almost no area. Since the solar panels of a CubeSat are typically mounted on its surface, CubeSats are categorized into the body class in the data set of this embodiment, so many bodies occupy 100% of the area with no external solar panels.
Starting from the official YOLOv5s neural network model pre-trained on the Microsoft COCO data set, the model was fine-tuned for up to 1000 epochs using 300 training examples and validated on the remaining 90 examples to prevent overfitting during training.
FIG. 5 shows statistical histograms of the detection confidence and the CIOU or RCIOU when directly detecting the unknown spacecraft as a whole and its main parts; the left side shows the detection confidence, and the right side shows the CIOU of the estimated spacecraft whole or the RCIOU of the estimated spacecraft parts, compared at the same scale. For detection of the spacecraft as a whole and of the main parts, the CIOU threshold is set to 0.5. As can be seen from the left side of fig. 5, the detection confidence for spacecraft parts is higher than for whole-spacecraft detection. This is because most spacecraft in the verification set differ greatly in structure from those in the training set, while the main parts within a single class are structurally similar. Because the trained neural network has no direct knowledge of the unknown spacecraft targets, it cannot confidently declare a predicted object to be a spacecraft at inference time. As can be seen from the right side of fig. 5, the positioning accuracy of whole-spacecraft detection is also relatively low: when the spacecraft is detected directly as a whole, the CIOU of most detected spacecraft is about 0.9. The positioning of the main spacecraft parts is more accurate, with almost all values above 0.95.
FIG. 6 shows the statistical histogram of the CIOU and the detection confidence when the spacecraft whole is detected using unknown spacecraft level detection. The results show that the detection confidence and positioning accuracy of almost all detected spacecraft are significantly improved, to about 0.95. Since the detection confidence threshold is applied to the detection of both the spacecraft and its main parts, few non-spacecraft targets are detected.
Using unknown spacecraft level detection, most unknown spacecraft in the verification set can be accurately detected; the bounding box localization results are shown in fig. 7. The yellow bounding boxes are obtained by directly detecting the spacecraft as a whole, the green ones by the level detection method, and the blue ground-truth bounding boxes almost coincide with the green ones.
FIG. 8 shows the average precision of unknown spacecraft detection evaluated at different IOU thresholds using unknown spacecraft level detection. It can be seen that unknown spacecraft level detection significantly improves the average precision at higher IOU thresholds. With a strict IOU threshold of 0.95, directly detecting the unknown spacecraft as a whole loses almost all average precision, meaning that under such high positioning accuracy requirements almost no unknown spacecraft can be detected. With the unknown spacecraft level detection method, the average precision still reaches 0.45: compared with the maximum average precision of 0.9 obtained at an IOU threshold of 0.5, half of the spacecraft can still be accurately detected.
These results reflect the feasibility and effectiveness of the method: the robustness, solution accuracy and efficiency of the position predictions obtained for unknown spacecraft from monocular images are improved, and the method has the potential to be applied in complex real-time mission scenarios.
The foregoing detailed description further explains the objects, technical solutions and beneficial effects of the invention. It should be understood that the foregoing is only a specific embodiment of the invention and is not intended to limit its scope of protection; any modifications, equivalent substitutions and improvements made within the spirit and principles of the invention shall fall within the scope of protection of the invention.
Claims (5)
1. An unknown spacecraft level detection method based on monocular images, characterized by comprising the following steps:
step one: taking RGB images obtained by shooting and normalizing a monocular camera sensor carried by an observation spacecraft as input, and taking the object probability Pr [ object ] of an unknown spacecraft target component]Object locating bounding box and conditional class probability Pr class i |object]For output, under the condition that the calculation power of an on-board processor of the observation spacecraft is limited, selecting a small lightweight YOLOv5s neural network model for detecting an unknown spacecraft target in the subsequent step;
step two: acquiring N monocular images of different spacecraft and constructing a sample data set for unknown spacecraft level detection; considering that the overall appearance of a spacecraft is variable while its basic essential parts exist stably, and in order to obtain stable and reliable part detection performance that in turn helps improve the overall detection accuracy for unknown spacecraft, annotating the images in the constructed sample data set with two sets of labels, the first set being the spacecraft part bounding boxes with their class names and the second set being the overall spacecraft bounding box; dividing the data set into a training set and a verification set at a predetermined ratio;
step three: aiming at the target of the overall detection of an unknown spacecraft, constructing an unknown spacecraft level detection strategy to realize the level detection from an unknown spacecraft component to the whole, wherein the method for realizing the unknown spacecraft level detection strategy comprises the following steps: confidence threshold θ using YOLOv5s neural network model p Detecting each part of the unknown spacecraft respectively; defining a distance index d for clustering, clustering and grouping the detected components in a direct connection and indirect connection mode, and judging that all the components in the same group jointly form a prediction spacecraft; estimating a positioning boundary box and a detection confidence of the prediction spacecraft according to the detected parts in each group; in order to evaluate the detection performance of the Yolov5s neural network model on the unknown spacecraft, conventional performance indexes of target detection are given, and performance indexes for the level detection of the unknown spacecraft are designed; on the basis of the IOU, the CIOU is used for better evaluating the positioning precision of the predicted spacecraft, the scaled RCIOU is designed to represent the positioning precision of the unknown spacecraft component relative to the whole evaluation, and precision, recall rate and average precision indexes are considered;
step four: using the sample data set constructed in step two and the unknown spacecraft level detection strategy constructed in step three, combined with the performance indexes, training and verifying the YOLOv5s neural network model for unknown spacecraft level detection until the optimal trained and verified YOLOv5s neural network model is obtained; during training, preprocessing the training set images with standard data augmentation; the YOLOv5s neural network model outputs predictions, the neural network parameters are optimized during training to maximize the CIOU with cross entropy as the classification loss function, and the class-specific detection confidence of each detected object is calculated during verification; during verification, applying standard non-maximum suppression as post-processing, ensuring each object is detected only once by cleaning up repeatedly detected objects, using an IOU threshold to check for duplicates and keeping only the detected object with the highest detection confidence.
2. The unknown spacecraft level detection method based on monocular images of claim 1, characterized in that: step five uses the optimal YOLOv5s neural network model trained in step four to realize part-to-whole level detection of the unknown spacecraft from the monocular image, improving the detection accuracy for unknown spacecraft.
3. The unknown spacecraft level detection method based on monocular images according to claim 1 or 2, characterized in that step two is implemented as follows:
the data set for realizing target detection training by using the YOLOv5s neural network model is mostly ground target images, however, the space target has different characteristics from the ground target, and the neural network trained by using the ground target data set is difficult to well identify the characteristics of an unknown spacecraft; in order to better extract the space target characteristics of an unknown spacecraft, improve the detection precision of a YOLOv5s neural network model on the unknown spacecraft, acquire N monocular images of different spacecraft, and construct a sample data set for detecting the level of the unknown spacecraft;
most spacecrafts in the dataset images are different from each other in terms of structure, color, surface texture and the like, only a few spacecrafts have a plurality of images shot from different postures, illumination conditions and earth backgrounds, and basic essential parts exist stably although the overall appearance structure of the spacecrafts is changeable; labeling two groups of labels on the images in the constructed sample data set, namely a spacecraft part boundary frame, a corresponding class name and a spacecraft whole boundary frame, so as to obtain stable and reliable part detection performance and further help to improve the unknown spacecraft whole detection precision; the selection of the spacecraft components and the labeling of the boundary boxes of the spacecraft components are mainly based on the basic structure of the spacecraft, namely a solar sailboard, a main body and an external antenna; the overall boundary frame of the spacecraft is the minimum boundary frame covering all the component boundary frames of the spacecraft;
the sample data set of monocular images for unknown spacecraft level detection is divided into a training set and a verification set according to a preset proportion; because the YOLOv5s neural network model learns only the spacecraft textures in the training set images, and the spacecraft in the verification set are almost all unseen during training, the detection performance of the trained YOLOv5s neural network model on unknown spacecraft is evaluated on the verification set.
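The key property of this split is that verification spacecraft are unseen during training, so the split must be made by spacecraft identity rather than by image. A minimal sketch under the assumption that images are already grouped per spacecraft (the grouping structure and the 80/20 default ratio are illustrative assumptions):

```python
import random

def split_by_spacecraft(images_by_craft, val_ratio=0.2, seed=0):
    """Split image paths so that verification spacecraft never appear
    in the training set.

    images_by_craft: dict mapping a spacecraft id -> list of image paths.
    Returns (train_paths, val_paths).
    """
    crafts = sorted(images_by_craft)
    random.Random(seed).shuffle(crafts)
    n_val = max(1, int(len(crafts) * val_ratio))
    val_crafts = set(crafts[:n_val])
    train = [p for c in crafts if c not in val_crafts
             for p in images_by_craft[c]]
    val = [p for c in val_crafts for p in images_by_craft[c]]
    return train, val
```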
4. A method for detecting unknown spacecraft level based on monocular images as claimed in claim 3, characterized in that step three is implemented as follows:
aiming at the goal of detecting the unknown spacecraft as a whole, an unknown spacecraft level detection strategy is constructed to realize level detection from components to the whole. Using a confidence threshold θ_p, the YOLOv5s neural network model detects each component of the unknown spacecraft separately. A distance index d is defined for detected component pairs: the distance between two detected components is the ratio of the shortest distance to the longest distance between the two bounding boxes; if the bounding boxes of the two detected components overlap, the distance is zero. Two detected components are judged to be directly connected if and only if the distance between them is smaller than a given distance threshold θ_d; two detected components are judged to be indirectly connected if and only if there is a chain of directly connected components between them. Two detected components are clustered into the same group if and only if they are directly or indirectly connected, and all components in the same group are judged to jointly form a predicted spacecraft. The positioning bounding box of the predicted spacecraft is estimated as the smallest bounding box covering all components p in the group, while the detection confidence of the predicted spacecraft s is
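The distance index and the direct/indirect connection rule amount to a transitive clustering of part boxes, which union-find expresses naturally. A sketch under stated assumptions: boxes are (x1, y1, x2, y2) tuples, the "shortest distance" is the gap between the boxes and the "longest distance" is the diagonal of their joint extent (the patent does not pin down these two distances precisely, so this reading is an assumption).

```python
import math

def box_distance(a, b):
    """Distance index d: shortest gap between two boxes divided by the
    longest corner-to-corner distance; zero when the boxes overlap."""
    dx = max(a[0] - b[2], b[0] - a[2], 0.0)
    dy = max(a[1] - b[3], b[1] - a[3], 0.0)
    shortest = math.hypot(dx, dy)
    if shortest == 0.0:          # overlapping boxes
        return 0.0
    xs = [a[0], a[2], b[0], b[2]]
    ys = [a[1], a[3], b[1], b[3]]
    longest = math.hypot(max(xs) - min(xs), max(ys) - min(ys))
    return shortest / longest

def cluster_parts(boxes, theta_d):
    """Group detected part boxes: direct connection when d < theta_d,
    indirect connection via transitive closure (union-find)."""
    parent = list(range(len(boxes)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in range(len(boxes)):
        for j in range(i + 1, len(boxes)):
            if box_distance(boxes[i], boxes[j]) < theta_d:
                parent[find(i)] = find(j)
    groups = {}
    for i in range(len(boxes)):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())
```

Each returned group is one predicted spacecraft; its positioning box is then the minimum box covering the group's part boxes.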
The detection confidence is directly output by the YOLOv5s neural network model and represents both the positioning accuracy of the predicted positioning bounding box and the probability that an unknown spacecraft appears within it; only objects whose detection confidence exceeds a given confidence threshold are determined to be detected objects;
in order to evaluate the detection performance of the YOLOv5s neural network model on unknown spacecraft, the conventional performance indices of target detection are given, and performance indices specific to unknown spacecraft level detection are designed;
the intersection-over-union (IOU) index is used to determine the correspondence between detected spacecraft and ground-truth spacecraft, and is the most commonly used scale-invariant index for evaluating the overlap of two bounding boxes; the IOU between two bounding boxes b_1 and b_2 is calculated as the ratio of their intersection area to their union area, i.e.

IOU(b_1, b_2) = area(b_1 ∩ b_2) / area(b_1 ∪ b_2)
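The IOU ratio above translates directly to code; this is a minimal sketch assuming (x1, y1, x2, y2) box tuples:

```python
def iou(b1, b2):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(b1[0], b2[0]), max(b1[1], b2[1])
    ix2, iy2 = min(b1[2], b2[2]), min(b1[3], b2[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area1 = (b1[2] - b1[0]) * (b1[3] - b1[1])
    area2 = (b2[2] - b2[0]) * (b2[3] - b2[1])
    union = area1 + area2 - inter
    return inter / union if union > 0 else 0.0
```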
In general, an IOU greater than 0.5 indicates high positioning accuracy; if at least one detected object has an IOU above a given IOU threshold, the spacecraft is marked as detected, and the detected object with the highest detection confidence is then taken as that spacecraft;
the basic IOU index reflects only the overlapping area of two bounding boxes; the distance between their center points and the consistency of their aspect ratios are also important geometric factors. The CIOU index is used to better evaluate the positioning accuracy of the predicted spacecraft, i.e.

CIOU(b_1, b_2) = IOU(b_1, b_2) − ρ²(b_1, b_2)/c² − αv

where ρ(b_1, b_2) is the Euclidean distance between the center points of the two bounding boxes b_1 and b_2, c is the diagonal length of the smallest bounding box covering both, and the positive trade-off parameter α is

α = v / ((1 − IOU(b_1, b_2)) + v)

where v measures aspect ratio consistency, i.e.

v = (4/π²) (arctan(w_1/h_1) − arctan(w_2/h_2))²

and w_1 and h_1, w_2 and h_2 are the widths and heights of the two bounding boxes, respectively;
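The three CIOU terms (overlap, center distance, aspect-ratio penalty) can be computed together; a self-contained sketch assuming (x1, y1, x2, y2) box tuples with positive width and height:

```python
import math

def ciou(b1, b2):
    """Complete IOU: IOU minus center-distance and aspect-ratio penalties."""
    ix1, iy1 = max(b1[0], b2[0]), max(b1[1], b2[1])
    ix2, iy2 = min(b1[2], b2[2]), min(b1[3], b2[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    w1, h1 = b1[2] - b1[0], b1[3] - b1[1]
    w2, h2 = b2[2] - b2[0], b2[3] - b2[1]
    union = w1 * h1 + w2 * h2 - inter
    iou = inter / union if union > 0 else 0.0
    # squared Euclidean distance between box centers (rho^2)
    rho2 = (((b1[0] + b1[2]) - (b2[0] + b2[2])) ** 2
            + ((b1[1] + b1[3]) - (b2[1] + b2[3])) ** 2) / 4.0
    # squared diagonal of the smallest box covering both (c^2)
    cw = max(b1[2], b2[2]) - min(b1[0], b2[0])
    ch = max(b1[3], b2[3]) - min(b1[1], b2[1])
    c2 = cw ** 2 + ch ** 2
    # aspect-ratio consistency term v and trade-off parameter alpha
    v = (4.0 / math.pi ** 2) * (math.atan(w1 / h1) - math.atan(w2 / h2)) ** 2
    alpha = v / ((1.0 - iou) + v) if (1.0 - iou) + v > 0 else 0.0
    return iou - rho2 / c2 - alpha * v
```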
in unknown spacecraft level detection, the positioning accuracy of the whole spacecraft depends on the positioning accuracy of its components; component positioning accuracy should therefore be evaluated relative to the whole spacecraft, whereas the CIOU index only represents the positioning error of a component relative to itself. Because a component is smaller in scale than the whole spacecraft, a given component positioning error translates into a smaller overall positioning error, so directly using the CIOU index is unsuitable; an RCIOU index that accounts for this scaling is designed to represent component positioning accuracy, i.e.
where the component positioning error 1 − CIOU(p) is scaled according to the area ratio of the unknown spacecraft component to the whole unknown spacecraft;
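The exact RCIOU equation is given in the patent as a formula not reproduced here; one plausible reading of the text, purely as a hypothetical sketch, scales the error 1 − CIOU(p) by the part-to-spacecraft area ratio before subtracting it from one:

```python
def rciou(part_box, craft_box, ciou_part):
    """Hypothetical RCIOU: the component positioning error (1 - CIOU)
    scaled by the part-to-spacecraft area ratio, as the surrounding
    text describes. The patent's exact formula is omitted from this
    extraction, so this is an assumed reconstruction, not the claimed one.
    """
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    ratio = area(part_box) / area(craft_box)
    return 1.0 - ratio * (1.0 - ciou_part)
```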
a detected spacecraft object is a true positive (TP), an undetected spacecraft object is a false negative (FN), and any other detected object not corresponding to a true spacecraft is a false positive (FP); the precision and recall indices are respectively

P = n_TP / (n_TP + n_FP),  R = n_TP / (n_TP + n_FN)

where n_TP, n_FP, and n_FN are the numbers of true positives, false positives, and false negatives; given a detection confidence threshold and a CIOU threshold, the precision P represents the proportion of true spacecraft among all detected objects, while the recall R reflects the proportion of detected spacecraft among all true spacecraft; when the CIOU threshold t is fixed and the detection confidence threshold varies, precision changes with recall: the higher the precision, the lower the recall, and vice versa; the average precision AP is therefore defined as the area under the precision-recall curve and used to evaluate average detection performance, i.e.

AP = ∫₀¹ P(R) dR
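The precision, recall, and area-under-curve definitions above can be sketched as follows; the trapezoidal integration and the convention of extrapolating precision to 1 at zero recall are implementation assumptions, since the patent only defines AP as the area under the curve:

```python
def precision_recall(n_tp, n_fp, n_fn):
    """Precision and recall from true/false positive and false negative counts."""
    p = n_tp / (n_tp + n_fp) if n_tp + n_fp else 0.0
    r = n_tp / (n_tp + n_fn) if n_tp + n_fn else 0.0
    return p, r

def average_precision(recalls, precisions):
    """Area under the precision-recall curve via the trapezoidal rule.

    Points are sorted by increasing recall; precision is assumed to be
    1.0 at recall 0 (a common convention, not stated in the source).
    """
    pts = sorted(zip(recalls, precisions))
    ap, prev_r, prev_p = 0.0, 0.0, 1.0
    for r, p in pts:
        ap += (r - prev_r) * (p + prev_p) / 2.0
        prev_r, prev_p = r, p
    return ap
```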
5. The unknown spacecraft level detection method based on monocular images of claim 4, characterized in that step four is implemented as follows:
an optimal YOLOv5s neural network model for unknown spacecraft level detection is trained and verified by combining the sample data set constructed in step two with the unknown spacecraft level detection strategy and performance indices constructed in step three; during training, the training set images are preprocessed with standard data augmentation, including the widely used color adjustment, scaling, and flipping;
The backbone of the YOLOv5s neural network model comprises a Focus module combined with a series of CBL, CSP, and SPP modules; the head combines FPN and PAN network structures, aggregates three levels of high-level abstract features from the backbone, and produces three feature maps with substantially different receptive field sizes for detection; skip connections pass information between feature layers to better locate and classify target objects. At the output layer, the YOLOv5s neural network model applies standard spatial convolutions to the three feature maps and outputs predictions on a grid of predefined prior bounding boxes; each prediction carries an object probability Pr[object] representing the presence of an object in the corresponding prior bounding box, an object positioning bounding box for locating the detected object, and several conditional class probabilities Pr[class_i | object] indicating which class the object belongs to. During training, the neural network parameters are optimized to maximize the CIOU between the predicted bounding box of the unknown spacecraft and the ground-truth bounding box; the training target of the object probability is defined as the CIOU, and classification uses cross entropy as the loss function. During verification, the conditional class probability is multiplied by the object probability to obtain a class-specific detection confidence for each predicted object, representing the probability that a class-i object appears in the predicted bounding box;
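The verification-time decoding step described above (class-specific confidence = conditional class probability × objectness) can be sketched in a few lines; the threshold filtering and return format are illustrative assumptions:

```python
def decode_prediction(p_object, class_probs, theta):
    """Class-specific detection confidences for one prior-box prediction.

    Multiplies each conditional class probability Pr[class_i | object]
    by the objectness Pr[object], keeping (class_index, confidence)
    pairs above the confidence threshold theta.
    """
    scores = [(i, p_object * p) for i, p in enumerate(class_probs)]
    return [(i, s) for i, s in scores if s > theta]
```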
In the verification process, standard non-maximum suppression is adopted as post-processing, ensuring that each object is detected only once by cleaning up duplicate detections; an IOU threshold is used to check for duplicates, and only the detection with the highest detection confidence among duplicate detections is retained.
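The post-processing step above is standard greedy non-maximum suppression; a self-contained sketch assuming (x1, y1, x2, y2) box tuples with per-box confidence scores:

```python
def nms(boxes, scores, iou_threshold):
    """Standard non-maximum suppression: greedily keep the highest-confidence
    box and drop duplicates whose IOU with an already-kept box exceeds the
    threshold. Returns the indices of the kept boxes."""
    def iou(a, b):
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
        union = ((a[2] - a[0]) * (a[3] - a[1])
                 + (b[2] - b[0]) * (b[3] - b[1]) - inter)
        return inter / union if union > 0 else 0.0
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= iou_threshold for j in keep):
            keep.append(i)
    return keep
```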
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311714595.2A CN117689995A (en) | 2023-12-14 | 2023-12-14 | Unknown spacecraft level detection method based on monocular image |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117689995A true CN117689995A (en) | 2024-03-12 |
Family
ID=90138611
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117994637A (en) * | 2024-04-07 | 2024-05-07 | 中国科学院西安光学精密机械研究所 | High-precision space spacecraft domain self-adaptive detection method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||