CN117372764A - Non-cooperative target detection method in low-light environment - Google Patents
Non-cooperative target detection method in low-light environment
- Publication number: CN117372764A
- Application number: CN202311336092.6A
- Authority
- CN
- China
- Prior art keywords
- cooperative target
- target detection
- yolo
- image
- cooperative
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0464—Convolutional networks [CNN, ConvNet]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/24—Aligning, centring, orientation detection or correction of the image
- G06V10/245—Aligning, centring, orientation detection or correction of the image by locating a pattern; Special marks for positioning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/774—Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/07—Target detection
Abstract
The invention discloses a non-cooperative target detection method for low-light environments, belonging to the technical field of aerospace. The method extracts features from the original image and the enhanced image through a dual-backbone design and fuses the extracted features, strengthening the feature extraction capability of the non-cooperative target detection model DS-YOLO in low-light environments. 2D Gamma correction adaptively corrects images of non-cooperative targets captured under various illumination conditions, improving image quality, facilitating the extraction of spatial non-cooperative target features, and raising detection and recognition accuracy. BiFPN serves as the feature fusion network, better merging the target features produced by the two backbones, and an attention mechanism is added, improving the multi-scale feature fusion network's ability to extract spatial non-cooperative target features, strengthening the representation capability of the detection model, and improving the detection precision and efficiency for spatial non-cooperative targets.
Description
Technical Field
The invention belongs to the technical field of aerospace, and relates to a non-cooperative target detection method in a low-light environment.
Background
As the number of space targets grows and the space environment becomes increasingly complex, space situational awareness has become a premise and foundation for guaranteeing space safety and accomplishing space missions, and the detection of spatial non-cooperative targets is an important component of it. Compared with ground-based platforms, a space-based detection platform is not limited by geography or climate and can observe around the clock in real time, giving it notable advantages. Because the space environment is complex, however, and especially in low-light conditions where the target may be even fainter than the background, both the detection range and the detection probability are greatly reduced.
Traditional non-cooperative target detection methods generally rely on hand-crafted features, which depend on expert prior knowledge and generalize poorly. Moreover, a traditional detector built around a single feature can hardly distinguish target categories and cannot meet the requirements of current missions.
With the continuous development of artificial intelligence and improvements in on-board hardware, deep learning has been applied in many spacecraft domains. Unlike traditional methods, deep-learning-based target detection requires no hand-designed features; instead, a neural network learns features directly from image data to perform detection. In low-light environments, however, intelligent recognition of spatial non-cooperative targets performs poorly, and no effective solution to this problem has yet been proposed.
Disclosure of Invention
To solve the problems described in the background art, the invention provides a non-cooperative target detection method for low-light environments that can accurately classify and localize targets of different scales in images of non-cooperative targets captured under various illumination conditions. The method offers high detection accuracy and strong interference resistance.
To this end, the invention adopts the following technical scheme:
The disclosed method for detecting a non-cooperative target in a low-light environment classifies and localizes spatial non-cooperative targets and comprises the following steps. Step S1: build a three-dimensional model of the spatial non-cooperative target, set its motion law according to its generated nominal orbit together with the motion law of a corresponding virtual camera, and render spatial non-cooperative target images that capture the relative pose changes between the target and the observing satellite; set the position and brightness of the light source at the same time, thereby constructing a spatial non-cooperative target dataset for the low-light environment. Step S2: label the constructed dataset and divide the labeled images into a training set, a validation set and a test set. Step S3: preprocess the image fed to the detection model with 2D Gamma correction, adaptively correcting non-cooperative target images captured under various illumination conditions to obtain an enhanced RGB image of the non-cooperative target. Step S4: construct the non-cooperative target detection model DS-YOLO on top of end-to-end YOLOv5 using a dual-backbone detection scheme: combine CSPDarknet53 with a Swin Transformer to obtain the CSPS module, use CSPS modules as the Backbone of the DS-YOLO model and feed them the original image and the image preprocessed in step S3 respectively, use BiFPN as the feature fusion network with an attention module added to strengthen multi-scale feature extraction, and introduce the DIOU loss function. Train DS-YOLO on the dataset constructed in step S1 to obtain the optimal non-cooperative target detection model for the low-light environment, and save the training result. Step S5: correct the brightness of the image to be detected with 2D Gamma, feed the original image and the preprocessed image together into the optimal DS-YOLO model, and use it to detect the spatial non-cooperative target in the low-light environment, improving detection accuracy and interference resistance.
Training the non-cooperative target detection model DS-YOLO proceeds as follows. Step S4-1: cluster the labeled anchor boxes with a clustering algorithm and compute the best-fitting anchor sizes to use as training anchors. Step S4-2: expand the dataset with several data augmentation methods to improve the generalization of the DS-YOLO model. Step S4-3: train on the input image data with randomly initialized weights. Step S4-4: use loss-error back-propagation with stochastic gradient descent to minimize the loss function and iteratively optimize the DS-YOLO model parameters. Step S4-5: repeat step S4-4 until the loss no longer decreases, yielding the optimal non-cooperative target detection model for the low-light environment.
The invention discloses a non-cooperative target detection method in a low-light environment, which comprises the following steps:
step S1: and carrying out three-dimensional modeling on the space non-cooperative target, setting a motion rule according to a nominal orbit generated by the space non-cooperative target, setting a motion rule of a corresponding virtual camera at the same time, and obtaining a space non-cooperative target image, so that the space non-cooperative target image characterizes the relative pose change of the space non-cooperative target and an observation satellite, and setting the position and the brightness of a light source at the same time, thereby constructing a space non-cooperative target data set in a low light environment.
The orbital motion of the spatial non-cooperative target is treated as two-body motion; that is, in inertial space the equation of motion of the non-cooperative target is

r̈ = −μ·r/‖r‖³,

where r is the position vector of the target and μ is the gravitational constant.
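The two-body relation above can be propagated numerically. The sketch below is illustrative only (not part of the patent); it assumes Earth-centred km/s units and uses a fixed-step RK4 integrator on a circular orbit:

```python
import numpy as np

MU = 398600.4418  # Earth's gravitational parameter (the mu above), km^3/s^2

def two_body_deriv(state):
    # state = [x, y, z, vx, vy, vz]; returns d(state)/dt for r'' = -mu*r/|r|^3
    r, v = state[:3], state[3:]
    a = -MU * r / np.linalg.norm(r) ** 3
    return np.concatenate([v, a])

def rk4_step(state, dt):
    k1 = two_body_deriv(state)
    k2 = two_body_deriv(state + 0.5 * dt * k1)
    k3 = two_body_deriv(state + 0.5 * dt * k2)
    k4 = two_body_deriv(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# Circular orbit at radius 7000 km: orbital speed sqrt(mu / r)
state = np.array([7000.0, 0.0, 0.0, 0.0, np.sqrt(MU / 7000.0), 0.0])
for _ in range(600):            # propagate 600 s at 1 s steps
    state = rk4_step(state, 1.0)
radius = np.linalg.norm(state[:3])
```

For a circular orbit the radius is conserved, so `radius` stays at 7000 km up to integrator error.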
Step S2: and labeling the constructed data set, and dividing the labeled image into a training set, a verification set and a test set.
Step S3: preprocessing an image input into the detection model by using 2D Gamma correction, and carrying out self-adaptive correction on the image of the non-cooperative target under various illumination conditions to obtain an enhanced RGB image of the non-cooperative target.
The RGB image is first converted to the HSV color space, and the brightness channel V is convolved with a multi-scale Gaussian function

G(x, y) = λ·exp(−(x² + y²)/c²),

which satisfies the normalization condition

∫∫G(x, y)dxdy = 1.

Under the Retinex imaging model, the observed image S is the product of the illumination component L and the reflectance component R,

S(x, y) = L(x, y)·R(x, y).

According to the multi-scale Retinex method, the illumination component L is calculated as a weighted sum of Gaussian-smoothed copies of the brightness channel,

L(x, y) = Σᵢ wᵢ·[V(x, y) ∗ Gᵢ(x, y)],

where ∗ denotes convolution and the weights wᵢ sum to 1. The corrected 2D Gamma function then focuses on the change of local pixels, setting a suitable gamma value for each pixel and adaptively adjusting the image brightness:

O(x, y) = 255·(V(x, y)/255)^γ(x, y), with γ(x, y) = (1/2)^((m − L(x, y))/m),

where m is the mean of the illumination component. Enhancing the restored image with this 2D Gamma correction method yields the enhanced RGB image.
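As a rough illustration of this enhancement step, the numpy sketch below (not the patent's implementation; the kernel size, σ, and the per-pixel form γ = 0.5^((m − L)/m) are the commonly cited choices and are assumptions here) estimates the illumination by Gaussian smoothing of the brightness channel and applies a per-pixel gamma:

```python
import numpy as np

def gaussian_kernel(size=15, sigma=5.0):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()                      # normalised: the kernel sums to 1

def gamma_2d(v):
    # v: HSV brightness channel as a float array in [0, 255]
    k = gaussian_kernel()
    pad = k.shape[0] // 2
    vp = np.pad(v, pad, mode="edge")
    L = np.zeros_like(v)                    # illumination estimate
    h, w = v.shape
    for i in range(h):                      # brute-force 2-D convolution (sketch)
        for j in range(w):
            L[i, j] = np.sum(vp[i:i + k.shape[0], j:j + k.shape[1]] * k)
    m = L.mean()
    gamma = 0.5 ** ((m - L) / m)            # dark pixels get gamma < 1 -> brighter
    return 255.0 * (v / 255.0) ** gamma

# Half dark (40), half bright (200): the dark side is lifted, the bright side eased
img = np.concatenate([np.full((32, 16), 40.0), np.full((32, 16), 200.0)], axis=1)
out = gamma_2d(img)
```

Pixels whose local illumination is below the mean are brightened, while over-exposed regions are compressed, which is what makes the correction adaptive rather than a single global gamma.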
Step S4: and constructing a non-cooperative target detection model DS-YOLO based on an end-to-end YOLOv5, namely respectively inputting an original picture and the picture preprocessed in the step S3 into a CSPS module by combining a CSPDarknet53 and a Swin Transformer, using the CSPS module as a Backbone network backbond of the DS-YOLO model, using BiFPN as a feature fusion network and introducing an attention module to strengthen the extraction capability of multi-scale features, introducing a DIOU loss function, and constructing the non-cooperative target detection model DS-YOLO. Training a non-cooperative target detection model DS-YOLO on the data set constructed in the step S1 to obtain an optimal non-cooperative target detection model DS-YOLO under the low-light environment, and storing a training result.
DIOU loss function:

L_DIOU = 1 − IoU + ρ²(b, b^gt)/c²,

where b and b^gt denote the center points of the predicted box and the ground-truth box respectively, ρ is the Euclidean distance between the two center points, and c is the diagonal length of the smallest enclosing region that contains both boxes.
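The DIOU loss can be written out in a few lines; this sketch assumes boxes in [x1, y1, x2, y2] format (the patent does not fix a box encoding):

```python
# Sketch of the DIOU loss: L_DIOU = 1 - IoU + rho^2(b, b_gt) / c^2
def diou_loss(p, g):
    # intersection area of the two boxes
    ix = max(0.0, min(p[2], g[2]) - max(p[0], g[0]))
    iy = max(0.0, min(p[3], g[3]) - max(p[1], g[1]))
    inter = ix * iy
    union = ((p[2] - p[0]) * (p[3] - p[1]) +
             (g[2] - g[0]) * (g[3] - g[1]) - inter)
    iou = inter / union
    # squared distance between the box centres (rho^2)
    rho2 = (((p[0] + p[2]) - (g[0] + g[2])) ** 2 +
            ((p[1] + p[3]) - (g[1] + g[3])) ** 2) / 4.0
    # squared diagonal of the smallest enclosing box (c^2)
    c2 = ((max(p[2], g[2]) - min(p[0], g[0])) ** 2 +
          (max(p[3], g[3]) - min(p[1], g[1])) ** 2)
    return 1.0 - iou + rho2 / c2
```

Identical boxes give a loss of 0; disjoint boxes are penalised both for zero overlap and for centre distance, which is the property that speeds up convergence relative to plain IoU.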
Training a non-cooperative target detection model DS-YOLO on the data set constructed in the step S1, wherein the specific implementation method comprises the following steps:
step S4-1, clustering the marked anchor frames by using a clustering algorithm, calculating the most suitable anchor frame size, and taking the anchor frame as an anchor frame for training a non-cooperative target detection model DS-YOLO;
s4-2, expanding a data set by using a plurality of data enhancement methods to enhance the generalization performance of a non-cooperative target detection model DS-YOLO;
s4-3, training the input image data, and randomly initializing DS-YOLO parameters of a non-cooperative target detection model;
s4-4, adopting a loss error back propagation method, and adopting a random gradient descent method and a minimum loss function method to realize optimization iteration of DS-YOLO parameters of a non-cooperative target detection model;
and step S4-5, repeating the step S4-4 until the loss function is not reduced, thereby obtaining the optimal non-cooperative target detection model DS-YOLO in the low-light environment.
Step S5: correct the brightness of the image to be detected with 2D Gamma, feed the original image and the preprocessed image together into the optimal non-cooperative target detection model DS-YOLO, and use it to detect the spatial non-cooperative target in the low-light environment, improving detection accuracy and interference resistance.
Beneficial effects:
1. The method extracts features from the original image and the enhanced image with separate backbone networks in a dual-backbone scheme and fuses the extracted features, strengthening DS-YOLO's feature extraction in low-light environments and improving its detection and localization accuracy for spatial non-cooperative targets under low light.
2. 2D Gamma correction adaptively corrects non-cooperative target images captured under various illumination conditions, improving image quality, facilitating the extraction of spatial non-cooperative target features, and raising detection and recognition accuracy.
3. Using BiFPN as the feature fusion network better merges the target features produced by the two backbones, while the added attention mechanism improves the multi-scale fusion network's extraction of spatial non-cooperative target features, strengthening the model's representation capability and improving detection precision and efficiency.
4. Building a three-dimensional model of the spatial non-cooperative target, setting its motion law from its nominal orbit together with the motion law of a corresponding virtual camera, rendering images that capture the relative pose changes between target and observing satellite, and setting the light source position and brightness yields a low-light spatial non-cooperative target dataset that reproduces the imaging characteristics of non-cooperative targets in real scenes.
Drawings
FIG. 1 is a flow chart of a method for non-cooperative target detection in a low light environment in accordance with the present invention;
FIG. 2 is a training flow diagram of a spatially non-cooperative target detection model in accordance with the present invention;
FIG. 3 is a schematic diagram of a dual backbone network structure according to the present invention;
FIG. 4 is a diagram of a BiFPN as a feature fusion network structure incorporating an attention mechanism in the present invention;
fig. 5 is a schematic diagram of a Convolutional Block Attention Module (CBAM) architecture in accordance with the present invention.
Detailed Description
Embodiments of the present invention will be described in detail below with reference to the accompanying drawings, but it will be understood by those skilled in the art that the following examples are only for illustrating the present invention and should not be construed as limiting the scope of the present invention.
Referring to fig. 1, the method for detecting a non-cooperative target in a low-light environment disclosed in this embodiment includes the following specific implementation steps:
step S1: the three-dimensional modeling is carried out on the space non-cooperative targets, a motion rule is set according to a nominal orbit generated by the space non-cooperative targets, and meanwhile, a motion rule of a corresponding virtual camera is set, so that a space non-cooperative target image is obtained, the space non-cooperative target image characterizes the relative pose change of the space non-cooperative targets and an observation satellite, the position and the brightness of a light source are set, a space non-cooperative target data set under a weak light environment is built, and in the embodiment, a 3000-picture construction database is generated.
The orbital motion of the spatial non-cooperative target is treated as two-body motion; that is, in inertial space the equation of motion of the non-cooperative target is

r̈ = −μ·r/‖r‖³,

where r is the position vector of the target and μ is the gravitational constant.
Step S2: after the picture is imported into LabelImg by using LabelImg as a marking tool, the position of a non-cooperative target in the picture is marked by a box, and the type name is marked. The marking of all pictures is completed by the method, a marking file in an xml format is generated and is output to a designated folder, a non-cooperative target data set is generated, and the marked images are marked according to 7:2: the scale of 1 is divided into a training set, a validation set and a test set.
Step S3: preprocessing an image input into the detection model by using 2D Gamma correction, and carrying out self-adaptive correction on the image of the non-cooperative target under various illumination conditions to obtain an enhanced RGB image of the non-cooperative target.
The RGB image is first converted to the HSV color space, and the brightness channel V is convolved with a multi-scale Gaussian function

G(x, y) = λ·exp(−(x² + y²)/c²),

which satisfies the normalization condition

∫∫G(x, y)dxdy = 1.

Under the Retinex imaging model, the observed image S is the product of the illumination component L and the reflectance component R,

S(x, y) = L(x, y)·R(x, y).

According to the multi-scale Retinex method, the illumination component L is calculated as a weighted sum of Gaussian-smoothed copies of the brightness channel,

L(x, y) = Σᵢ wᵢ·[V(x, y) ∗ Gᵢ(x, y)],

where ∗ denotes convolution and the weights wᵢ sum to 1. The corrected 2D Gamma function then focuses on the change of local pixels, setting a suitable gamma value for each pixel and adaptively adjusting the image brightness:

O(x, y) = 255·(V(x, y)/255)^γ(x, y), with γ(x, y) = (1/2)^((m − L(x, y))/m),

where m is the mean of the illumination component. Enhancing the restored image with this 2D Gamma correction method yields the enhanced RGB image.
Step S4: and constructing a non-cooperative target detection model DS-YOLO based on an end-to-end YOLOv5, namely respectively inputting an original picture and the picture preprocessed in the step S3 into a main network backbond of the DS-YOLO model by using a CSPDarknet53 and a Swin Transformer to obtain a CSPS module, wherein the CSPS module is used as the main network backbond of the DS-YOLO model, and the main network consists of a residual module, a pooling layer, an activation layer, a normalization layer and other modules. We combine the backbone networks except the first CSPDarknet53 with Swin fransformer, and construct two backbone networks with the same structure. And the output of each CSPS in the processing enhanced RGB image backbone and the information fusion of the processing original image backbone are used as the input of each CSPS of the processing original image backbone, various characteristic information is fused together, and the characteristic representation capability and the characteristic extraction capability of the backbone are improved.
The BiFPN is used as a feature fusion network and a CBAM attention module is introduced to strengthen the extraction capability of multi-scale features, and the CBAM is introduced into a channel of the feature fusion network BiFPN downsampling to improve the expression capability of the feature fusion network on the features. The CBAM combines the spatial and channel attention mechanism modules, the channel attention consists of different pooling layers MaxPool and AvgPool, MLP and Sigmod activation functions, while the spatial attention has two channel-based global maximum pooling and averaging pooling, and finally goes through the Sigmod activation functions. The attention matrix is obtained through channel attention and space attention, and the final characteristics are obtained by multiplying the input characteristics.
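A toy numpy rendering of this channel-then-spatial gating can make the data flow concrete. It is not the trained module: the MLP weights are random stand-ins, and a scalar replaces the 7×7 spatial convolution.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(x, w1, w2):
    # x: (C, H, W); w1: (C//r, C) and w2: (C, C//r) form the shared MLP
    avg = x.mean(axis=(1, 2))                     # global average pooling -> (C,)
    mx = x.max(axis=(1, 2))                       # global max pooling -> (C,)
    mlp = lambda u: w2 @ np.maximum(0.0, w1 @ u)  # ReLU hidden layer
    att = sigmoid(mlp(avg) + mlp(mx))             # per-channel gate in (0, 1)
    return x * att[:, None, None]

def spatial_attention(x, k=1.0):
    # channel-wise average and max maps; k is a scalar stand-in for the 7x7 conv
    avg = x.mean(axis=0)                          # (H, W)
    mx = x.max(axis=0)                            # (H, W)
    att = sigmoid(k * (avg + mx))                 # per-pixel gate in (0, 1)
    return x * att[None, :, :]

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 4, 4))       # a toy (C=8, H=4, W=4) feature map
w1 = 0.1 * rng.standard_normal((2, 8))   # reduction ratio r = 4
w2 = 0.1 * rng.standard_normal((8, 2))
y = spatial_attention(channel_attention(x, w1, w2))
```

Because each gate lies in (0, 1), the output preserves the feature map's shape while rescaling every channel and location, which is the mechanism CBAM uses to emphasise informative features.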
And introducing a DIOU loss function to construct an optimal target detection model DS-YOLO.
DIOU loss function:

L_DIOU = 1 − IoU + ρ²(b, b^gt)/c²,

where b and b^gt denote the center points of the predicted box and the ground-truth box respectively, ρ is the Euclidean distance between the two center points, and c is the diagonal length of the smallest enclosing region that contains both boxes.
The training of the optimal target detection model comprises the following steps:
s4-1, clustering the marked anchor frames by using a K-means clustering algorithm, calculating the most suitable anchor frame size, and taking the anchor frame as a training anchor frame;
and S4-2, processing the data set by using two data enhancement methods of light distortion and geometric distortion aiming at the characteristics of space illumination, noise and complex environment, and expanding the data set. For photometric distortion, the brightness, saturation and noise of the image are enhanced to adapt to the influence of illumination and noise in space. In dealing with geometric distortion, random scaling, cropping, translation, shearing, and rotation are added. Generating more training samples by adjusting the exposure from 1 to 1.5 times, generating more training samples by adjusting the saturation from 1 to 1.5 times, adding noise to a picture by using Gaussian noise, and expanding a data set to enhance the generalization performance of a DS-YOLO model;
step S4-3, training the input image data, randomly initializing the weight, setting training super parameters, wherein the learning rate is 0.0001, the cosine annealing super parameters are 0.00036, the learning rate momentum is 0.978, the training image batch input into the network model each time is 64, and the total iteration calculation is 2000 times.
Step S4-4: use loss-error back-propagation with stochastic gradient descent to minimize the loss function and iteratively optimize the DS-YOLO model parameters. This embodiment uses Adam as the optimizer; the Adam moment updates are

m_t := β₁·m_{t−1} + (1 − β₁)·g

v_t := β₂·v_{t−1} + (1 − β₂)·g²
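Completing the quoted moment updates with the standard Adam bias correction and parameter step (the patent shows only the m_t and v_t updates; the rest follows the usual Adam definition and is an assumption here), a scalar sketch:

```python
import math

def adam_step(theta, g, m, v, t, lr=0.01, b1=0.9, b2=0.999, eps=1e-8):
    m = b1 * m + (1.0 - b1) * g          # first-moment update (the m_t line above)
    v = b2 * v + (1.0 - b2) * g * g      # second-moment update (the v_t line above)
    m_hat = m / (1.0 - b1 ** t)          # bias correction for early steps
    v_hat = v / (1.0 - b2 ** t)
    theta -= lr * m_hat / (math.sqrt(v_hat) + eps)
    return theta, m, v

# Minimise f(theta) = theta^2 (gradient 2*theta) starting from theta = 5
theta, m, v = 5.0, 0.0, 0.0
for t in range(1, 3001):
    theta, m, v = adam_step(theta, 2.0 * theta, m, v, t)
```

On this toy quadratic the iterate converges close to the minimiser at 0, illustrating the optimization loop of step S4-4.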
Step S4-5: repeat step S4-4 until the loss no longer decreases, yielding the optimal non-cooperative target detection model for the low-light environment.
Step S5: the image to be detected is brightness-corrected with 2D Gamma; the original image and the preprocessed image are then fed together into the optimal target detection model DS-YOLO, which realizes spatial non-cooperative target detection in the low-light environment with high accuracy and strong interference resistance.
While the foregoing describes the general principles and preferred embodiments of the invention, the disclosure is illustrative only and is not intended to limit the scope of the invention; modifications made within the spirit and principles of the invention fall within its scope.
Claims (4)
1. A non-cooperative target detection method in a low-light environment, characterized by comprising the following steps:
step S1: three-dimensional modeling is carried out on the space non-cooperative targets, a motion rule is set according to a nominal orbit generated by the space non-cooperative targets, and meanwhile, a motion rule of a corresponding virtual camera is set, so that a space non-cooperative target image is obtained, the space non-cooperative target image characterizes the relative pose change of the space non-cooperative targets and an observation satellite, meanwhile, the position and the brightness of a light source are set, and a space non-cooperative target data set under a weak light environment is constructed;
step S2: labeling the constructed data set, and dividing the labeled image into a training set, a verification set and a test set;
step S3: preprocessing an image input into the detection model by using 2D Gamma correction, and adaptively correcting the image of the non-cooperative target under various illumination conditions to obtain an enhanced RGB image of the non-cooperative target;
step S4: construct a non-cooperative target detection model DS-YOLO based on end-to-end YOLOv5, namely, using a dual-backbone detection method, feed the original picture and the picture preprocessed in step S3 respectively, combine CSPDarknet53 with a Swin Transformer to obtain a CSPS module, use the CSPS module as the Backbone network of the DS-YOLO model, use BiFPN as the feature fusion network with an attention module introduced to strengthen multi-scale feature extraction, and introduce a DIOU loss function, thereby constructing the non-cooperative target detection model DS-YOLO; train the model on the dataset constructed in step S1 to obtain the optimal non-cooperative target detection model DS-YOLO for the low-light environment, and save the training result;
step S5: inputting the image to be detected into the optimal non-cooperative target detection model DS-YOLO: the brightness is first corrected by 2D Gamma, then the original image and the preprocessed image are both fed as inputs to the optimal DS-YOLO model, realizing the detection of the spatial non-cooperative target in the low-light environment and improving the accuracy and anti-interference capability of non-cooperative target detection.
2. The method for non-cooperative target detection in a low-light environment of claim 1, wherein, in step S1,
the orbital motion of the spatial non-cooperative target is treated as two-body motion; that is, in inertial space, the equation of motion of the non-cooperative target is

$$\ddot{\mathbf{r}} = -\frac{\mu}{r^{3}}\,\mathbf{r}$$

where $\mathbf{r}$ is the position vector of the target, $r=\|\mathbf{r}\|$, and μ is the gravitational constant.
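The two-body motion used to generate the nominal orbit can be propagated numerically; the sketch below uses a classical RK4 integrator and an example circular low-Earth orbit, both of which are illustrative assumptions rather than part of the claim:

```python
import numpy as np

MU_EARTH = 3.986004418e14  # Earth's gravitational constant, m^3/s^2

def two_body_accel(r):
    """Acceleration from the two-body equation r'' = -mu * r / |r|^3."""
    return -MU_EARTH * r / np.linalg.norm(r) ** 3

def propagate(r0, v0, dt, steps):
    """Propagate the nominal orbit with a fixed-step RK4 integrator."""
    r, v = np.asarray(r0, float), np.asarray(v0, float)
    traj = [r.copy()]
    for _ in range(steps):
        k1v = two_body_accel(r);                 k1r = v
        k2v = two_body_accel(r + 0.5 * dt * k1r); k2r = v + 0.5 * dt * k1v
        k3v = two_body_accel(r + 0.5 * dt * k2r); k3r = v + 0.5 * dt * k2v
        k4v = two_body_accel(r + dt * k3r);       k4r = v + dt * k3v
        r = r + dt / 6 * (k1r + 2 * k2r + 2 * k3r + k4r)
        v = v + dt / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
        traj.append(r.copy())
    return np.array(traj)

# example: circular orbit at ~700 km altitude (radius 7078 km)
r0 = [7.078e6, 0.0, 0.0]
v0 = [0.0, np.sqrt(MU_EARTH / 7.078e6), 0.0]  # circular orbital speed
orbit = propagate(r0, v0, dt=10.0, steps=100)
```

Sampling the virtual camera pose along such a trajectory is what produces the relative-pose variation captured in the rendered data set.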
3. The method for non-cooperative target detection in a low-light environment of claim 2, wherein, in step S3,
the RGB picture is converted into HSV color space, and the brightness component V is convolved with a multi-scale Gaussian function; according to the multi-scale Retinex method, the illumination component L is calculated as

$$L(x,y)=\sum_{k=1}^{N} w_k \big[G_k(x,y) * S(x,y)\big]$$

wherein

$$S(x,y)=L(x,y)\cdot R(x,y)$$

S is the observed brightness, R is the reflectance component, $G_k(x,y)=\lambda_k e^{-(x^2+y^2)/c_k^2}$ is the Gaussian surround function at scale k, and each Gaussian meets the normalization condition

$$\iint G(x,y)\,dx\,dy=1;$$

then the modified 2D Gamma function

$$O(x,y)=255\left(\frac{F(x,y)}{255}\right)^{\gamma(x,y)},\qquad \gamma(x,y)=\left(\frac{1}{2}\right)^{\frac{m-L(x,y)}{m}}$$

where m is the mean of the illumination component L, focuses on the change of local pixels, setting a suitable gamma value for each pixel and adaptively adjusting the brightness of the image; the restored image is enhanced by this 2D Gamma correction method to obtain the enhanced RGB image.
4. The method for non-cooperative target detection in a low-light environment as recited in claim 3, wherein, in step S4,
the DIoU loss function is

$$\mathcal{L}_{DIoU}=1-IoU+\frac{\rho^{2}\!\left(b,\,b^{gt}\right)}{c^{2}}$$

wherein b and b^{gt} represent the center points of the predicted frame and the real frame respectively, ρ(·) represents the Euclidean distance between the two center points, and c represents the diagonal distance of the minimum closure region containing both the predicted and real frames;
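The DIoU loss above can be computed directly for axis-aligned boxes; this is a plain NumPy-free sketch for single boxes given as (x1, y1, x2, y2) corners, with a small epsilon added for numerical safety:

```python
def diou_loss(box_pred, box_gt):
    """DIoU loss: 1 - IoU + rho^2(b, b_gt) / c^2, boxes as (x1, y1, x2, y2)."""
    eps = 1e-9
    # intersection area
    x1 = max(box_pred[0], box_gt[0]); y1 = max(box_pred[1], box_gt[1])
    x2 = min(box_pred[2], box_gt[2]); y2 = min(box_pred[3], box_gt[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_p = (box_pred[2] - box_pred[0]) * (box_pred[3] - box_pred[1])
    area_g = (box_gt[2] - box_gt[0]) * (box_gt[3] - box_gt[1])
    iou = inter / (area_p + area_g - inter + eps)
    # squared Euclidean distance between the two box centers: rho^2
    cpx = (box_pred[0] + box_pred[2]) / 2; cpy = (box_pred[1] + box_pred[3]) / 2
    cgx = (box_gt[0] + box_gt[2]) / 2;     cgy = (box_gt[1] + box_gt[3]) / 2
    rho2 = (cpx - cgx) ** 2 + (cpy - cgy) ** 2
    # squared diagonal of the minimum enclosing (closure) region: c^2
    ex1 = min(box_pred[0], box_gt[0]); ey1 = min(box_pred[1], box_gt[1])
    ex2 = max(box_pred[2], box_gt[2]); ey2 = max(box_pred[3], box_gt[3])
    c2 = (ex2 - ex1) ** 2 + (ey2 - ey1) ** 2 + eps
    return 1.0 - iou + rho2 / c2
```

Unlike plain IoU loss, the center-distance penalty keeps a useful gradient even when the predicted and ground-truth boxes do not overlap at all.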
Training the non-cooperative target detection model DS-YOLO on the data set constructed in step S1 is implemented as follows:
step S4-1: clustering the labeled anchor frames with a clustering algorithm to calculate the most suitable anchor frame sizes, which are used as the anchor frames for training the non-cooperative target detection model DS-YOLO;
step S4-2: expanding the data set with several data enhancement methods to improve the generalization performance of the non-cooperative target detection model DS-YOLO;
step S4-3: training on the input image data, with the parameters of the non-cooperative target detection model DS-YOLO randomly initialized;
step S4-4: adopting loss-error back propagation, using the stochastic gradient descent method to minimize the loss function and realize the optimization iteration of the DS-YOLO model parameters;
step S4-5: repeating step S4-4 until the loss function no longer decreases, obtaining the optimal non-cooperative target detection model DS-YOLO in the low-light environment.
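Steps S4-3 through S4-5 follow the standard stochastic-gradient training loop; the toy sketch below mirrors that loop on a linear least-squares stand-in rather than the actual DS-YOLO network, with the learning rate, batch size, and stopping tolerance chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# toy stand-in for the detector: a linear model fit to synthetic data
X = rng.normal(size=(256, 4))
w_true = np.array([1.5, -2.0, 0.5, 3.0])
y = X @ w_true

w = rng.normal(size=4)                      # S4-3: random parameter initialization
lr, batch = 0.05, 32
prev_loss = np.inf
for epoch in range(200):                    # S4-4 / S4-5: iterate until loss stalls
    idx = rng.permutation(len(X))
    for start in range(0, len(X), batch):
        b = idx[start:start + batch]
        err = X[b] @ w - y[b]               # forward pass on the mini-batch
        grad = 2 * X[b].T @ err / len(b)    # back-propagated gradient of MSE
        w -= lr * grad                      # stochastic gradient descent update
    loss = float(np.mean((X @ w - y) ** 2))
    if loss >= prev_loss - 1e-12:           # loss function no longer decreases
        break
    prev_loss = loss
```

The early-stopping check on the full-data loss is the simple analogue of step S4-5's "repeat until the loss function no longer decreases".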
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311336092.6A CN117372764A (en) | 2023-10-16 | 2023-10-16 | Non-cooperative target detection method in low-light environment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117372764A true CN117372764A (en) | 2024-01-09 |
Family
ID=89401730
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311336092.6A Pending CN117372764A (en) | 2023-10-16 | 2023-10-16 | Non-cooperative target detection method in low-light environment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117372764A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117876942A (en) * | 2024-03-12 | 2024-04-12 | 中国民用航空飞行学院 | Unmanned aerial vehicle and bird monitoring method based on convolutional neural network |
CN117876942B (en) * | 2024-03-12 | 2024-05-24 | 中国民用航空飞行学院 | Unmanned aerial vehicle and bird monitoring method based on convolutional neural network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||