CN117372764A - Non-cooperative target detection method in low-light environment - Google Patents

Non-cooperative target detection method in low-light environment

Info

Publication number
CN117372764A
CN117372764A CN202311336092.6A
Authority
CN
China
Prior art keywords
cooperative target
target detection
yolo
image
cooperative
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311336092.6A
Other languages
Chinese (zh)
Inventor
乔栋
刘月鹏
郑德智
秦同
韩宏伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Institute of Technology BIT
Original Assignee
Beijing Institute of Technology BIT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Institute of Technology BIT filed Critical Beijing Institute of Technology BIT
Priority to CN202311336092.6A priority Critical patent/CN117372764A/en
Publication of CN117372764A publication Critical patent/CN117372764A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/24Aligning, centring, orientation detection or correction of the image
    • G06V10/245Aligning, centring, orientation detection or correction of the image by locating a pattern; Special marks for positioning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Multimedia (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a non-cooperative target detection method in a low-light environment, and belongs to the technical field of aerospace. According to the invention, feature extraction is performed on the original image and the enhanced image in a dual-backbone manner and the extracted features are fused, which strengthens the feature extraction capability of the non-cooperative target detection model DS-YOLO in a low-light environment. According to the invention, 2D Gamma correction is adopted to adaptively correct images of non-cooperative targets under various illumination conditions, so that image quality is improved, extraction of spatial non-cooperative target features is facilitated, and the accuracy of image detection and recognition is improved. According to the invention, a BiFPN is used as the feature fusion network so that the target features extracted by the two backbones are fused together better, and an attention mechanism is added, so that the ability of the multi-scale feature fusion network to extract spatial non-cooperative target features is improved, the representation capability of the spatial non-cooperative target detection model is enhanced, and the detection precision and efficiency for spatial non-cooperative targets are improved.

Description

Non-cooperative target detection method in low-light environment
Technical Field
The invention belongs to the technical field of aerospace, and relates to a non-cooperative target detection method in a low-light environment.
Background
With the continuously increasing number of space targets and the increasing complexity of the space environment, space situational awareness has become a prerequisite and foundation for guaranteeing space safety and accomplishing space missions, and detection of spatial non-cooperative targets is an important component of space situational awareness. Compared with a ground-based detection platform, a space-based detection platform is not limited by region or climate and can perform all-weather, real-time detection, so it has remarkable advantages. However, owing to the complex space environment, and especially in low-light environments, the brightness of the target may even be weaker than that of the background, so that the detection range and detection probability of the target are greatly reduced.
Traditional non-cooperative target detection methods are generally based on manually selected features, but manually selected features depend on expert prior knowledge and generalize poorly. Moreover, a conventional detection method is designed around a specific feature, making it difficult to distinguish target types and unable to meet the requirements of current missions.
With the continuous development of artificial intelligence and the improvement of on-board hardware performance, deep learning has been applied in many fields of spacecraft engineering. Compared with traditional non-cooperative target detection methods, deep-learning-based target detection does not require manually designed features; instead, a neural network learns features from image data to realize target detection. However, in a low-light environment the performance of intelligent detection and recognition of spatial non-cooperative targets is poor, and no effective solution to this problem has yet been proposed.
Disclosure of Invention
In order to solve the problems in the background art, the invention aims to provide a non-cooperative target detection method in a low-light environment, which can accurately classify and locate targets of different scales in images of non-cooperative targets under various illumination conditions. The invention has the advantages of high detection accuracy and strong anti-interference capability.
In order to achieve the above purpose, the technical scheme adopted by the invention is as follows:
the invention discloses a method for detecting a non-cooperative target in a low-light environment, which is used for classifying and locating spatial non-cooperative targets and comprises the following steps. Step S1: three-dimensional modeling is carried out on the spatial non-cooperative target, a motion rule is set according to the nominal orbit generated for the spatial non-cooperative target, and the motion rule of a corresponding virtual camera is set at the same time, so that spatial non-cooperative target images are obtained which characterize the relative pose change between the spatial non-cooperative target and the observation satellite; the position and brightness of the light source are also set, so that a spatial non-cooperative target dataset in a low-light environment is constructed. Step S2: the constructed dataset is labeled, and the labeled images are divided into a training set, a validation set and a test set. Step S3: the image input to the detection model is preprocessed by 2D Gamma correction, and images of the non-cooperative target under various illumination conditions are adaptively corrected to obtain an enhanced RGB image of the non-cooperative target. Step S4: a non-cooperative target detection model DS-YOLO is constructed based on the end-to-end YOLOv5; that is, a dual-backbone detection method is used in which the original picture and the picture preprocessed in step S3 are fed into separate branches, CSPDarknet53 is combined with the Swin Transformer to obtain CSPS modules which serve as the backbone network Backbone of the DS-YOLO model, a BiFPN is used as the feature fusion network with an attention module introduced to strengthen the extraction of multi-scale features, and a DIOU loss function is introduced, thereby constructing the non-cooperative target detection model DS-YOLO. The non-cooperative target detection model DS-YOLO is trained on the dataset constructed in step S1 to obtain the optimal non-cooperative target detection model DS-YOLO in the low-light environment, and the training result is saved. Step S5: the image to be detected is input into the optimal non-cooperative target detection model DS-YOLO; its brightness is first corrected by the 2D Gamma method, and then the original image and the preprocessed image are used together as the input of the optimal target detection model DS-YOLO, so that detection of spatial non-cooperative targets in a low-light environment is realized, improving the accuracy and anti-interference capability of non-cooperative target detection.
The invention discloses a non-cooperative target detection method in a low-light environment whose training comprises the following steps. Step S4-1: the labeled bounding boxes are clustered with a clustering algorithm, and the most suitable anchor box sizes are calculated and used as the anchors for training. Step S4-2: the dataset is expanded with several data augmentation methods to enhance the generalization performance of the DS-YOLO model. Step S4-3: training is performed on the input image data with randomly initialized weights. Step S4-4: a loss-error back-propagation method is adopted, and stochastic gradient descent with loss-function minimization is used to realize the optimization iteration of the DS-YOLO model parameters. Step S4-5: step S4-4 is repeated until the loss function no longer decreases, thereby obtaining the optimal non-cooperative target detection model in the low-light environment.
The invention discloses a non-cooperative target detection method in a low-light environment, which comprises the following steps:
step S1: and carrying out three-dimensional modeling on the space non-cooperative target, setting a motion rule according to a nominal orbit generated by the space non-cooperative target, setting a motion rule of a corresponding virtual camera at the same time, and obtaining a space non-cooperative target image, so that the space non-cooperative target image characterizes the relative pose change of the space non-cooperative target and an observation satellite, and setting the position and the brightness of a light source at the same time, thereby constructing a space non-cooperative target data set in a low light environment.
The orbital motion of the spatial non-cooperative target is approximated as two-body motion; that is, in inertial space, the equation of motion of the non-cooperative target is
r̈ = −μ·r/‖r‖³
where r is the position vector of the non-cooperative target in the inertial frame and μ is the gravitational constant of the central body.
Step S2: and labeling the constructed data set, and dividing the labeled image into a training set, a verification set and a test set.
Step S3: preprocessing an image input into the detection model by using 2D Gamma correction, and carrying out self-adaptive correction on the image of the non-cooperative target under various illumination conditions to obtain an enhanced RGB image of the non-cooperative target.
The RGB picture is first converted to the HSV color space, and the brightness channel V is convolved with a multi-scale Gaussian function
G(x,y) = λ·exp(−(x²+y²)/c²)
where λ is a normalization factor and c is the scale of the Gaussian kernel, which meets the normalization condition
∫∫G(x,y)dxdy = 1
The image satisfies the Retinex decomposition
S(x,y) = L(x,y)·R(x,y)
where S is the observed image, L is the illumination component and R is the reflectance component. According to the multi-scale Retinex method, the illumination component L is calculated as
L(x,y) = Σᵢ wᵢ·[V(x,y) * Gᵢ(x,y)]
where * denotes convolution and wᵢ is the weight of the i-th scale. The modified 2D Gamma function is then used: focusing on the change of local pixels, a suitable gamma value is set for each pixel according to the illumination component L, and the image brightness is adaptively adjusted. The restored image is then enhanced by this 2D Gamma correction method to obtain the enhanced RGB image.
Step S4: and constructing a non-cooperative target detection model DS-YOLO based on an end-to-end YOLOv5, namely respectively inputting an original picture and the picture preprocessed in the step S3 into a CSPS module by combining a CSPDarknet53 and a Swin Transformer, using the CSPS module as a Backbone network backbond of the DS-YOLO model, using BiFPN as a feature fusion network and introducing an attention module to strengthen the extraction capability of multi-scale features, introducing a DIOU loss function, and constructing the non-cooperative target detection model DS-YOLO. Training a non-cooperative target detection model DS-YOLO on the data set constructed in the step S1 to obtain an optimal non-cooperative target detection model DS-YOLO under the low-light environment, and storing a training result.
DIOU loss function:
L_DIOU = 1 − IoU + ρ²(b, b^gt)/c²
where b and b^gt represent the center points of the predicted box and the ground-truth box respectively, ρ denotes the Euclidean distance between the two center points, and c denotes the diagonal length of the smallest enclosing region covering both the predicted and ground-truth boxes.
The non-cooperative target detection model DS-YOLO is trained on the dataset constructed in step S1; the specific implementation comprises the following steps:
Step S4-1, clustering the labeled bounding boxes with a clustering algorithm, calculating the most suitable anchor box sizes, and using these anchors for training the non-cooperative target detection model DS-YOLO;
S4-2, expanding the dataset with several data augmentation methods to enhance the generalization performance of the non-cooperative target detection model DS-YOLO;
S4-3, training on the input image data and randomly initializing the parameters of the non-cooperative target detection model DS-YOLO;
S4-4, adopting a loss-error back-propagation method, and using stochastic gradient descent with loss-function minimization to realize the optimization iteration of the parameters of the non-cooperative target detection model DS-YOLO;
and step S4-5, repeating the step S4-4 until the loss function is not reduced, thereby obtaining the optimal non-cooperative target detection model DS-YOLO in the low-light environment.
Step S5: the image to be detected is input into the optimal non-cooperative target detection model DS-YOLO; its brightness is first corrected by the 2D Gamma method, and then the original image and the preprocessed image are used together as the input of the optimal target detection model DS-YOLO, so that detection of spatial non-cooperative targets in a low-light environment is realized, improving the accuracy and anti-interference capability of non-cooperative target detection.
The beneficial effects are that:
1. According to the non-cooperative target detection method in a low-light environment disclosed by the invention, feature extraction is performed on the original image and the enhanced image by separate backbone networks in a dual-backbone manner and the extracted features are fused, which strengthens the feature extraction capability of the non-cooperative target detection model DS-YOLO in a low-light environment and improves its detection and localization precision for spatial non-cooperative targets in a low-light environment.
2. According to the non-cooperative target detection method in a low-light environment disclosed by the invention, 2D Gamma correction is adopted to adaptively correct images of non-cooperative targets under various illumination conditions, so that image quality is improved, extraction of spatial non-cooperative target features is facilitated, and the accuracy of image detection and recognition is improved.
3. According to the non-cooperative target detection method in a low-light environment disclosed by the invention, a BiFPN is used as the feature fusion network so that the target features extracted by the two backbones are fused together better, and an attention mechanism is added, so that the ability of the multi-scale feature fusion network to extract spatial non-cooperative target features is improved, the representation capability of the spatial non-cooperative target detection model is enhanced, and the detection precision and efficiency for spatial non-cooperative targets are improved.
4. According to the non-cooperative target detection method in a low-light environment disclosed by the invention, three-dimensional modeling is carried out on the spatial non-cooperative target, a motion rule is set according to the nominal orbit generated for the spatial non-cooperative target, and the motion rule of a corresponding virtual camera is set at the same time, so that spatial non-cooperative target images are obtained which characterize the relative pose change between the spatial non-cooperative target and the observation satellite; the position and brightness of the light source are also set, and a spatial non-cooperative target dataset in a low-light environment is constructed, which reproduces the imaging characteristics of non-cooperative targets in real scenes.
Drawings
FIG. 1 is a flow chart of a method for non-cooperative target detection in a low light environment in accordance with the present invention;
FIG. 2 is a training flow diagram of a spatially non-cooperative target detection model in accordance with the present invention;
FIG. 3 is a schematic diagram of a dual backbone network structure according to the present invention;
FIG. 4 is a diagram of a BiFPN as a feature fusion network structure incorporating an attention mechanism in the present invention;
FIG. 5 is a schematic diagram of a Convolutional Block Attention Module (CBAM) architecture in accordance with the present invention.
Detailed Description
Embodiments of the present invention will be described in detail below with reference to the accompanying drawings, but it will be understood by those skilled in the art that the following examples are only for illustrating the present invention and should not be construed as limiting the scope of the present invention.
Referring to fig. 1, the method for detecting a non-cooperative target in a low-light environment disclosed in this embodiment includes the following specific implementation steps:
step S1: the three-dimensional modeling is carried out on the space non-cooperative targets, a motion rule is set according to a nominal orbit generated by the space non-cooperative targets, and meanwhile, a motion rule of a corresponding virtual camera is set, so that a space non-cooperative target image is obtained, the space non-cooperative target image characterizes the relative pose change of the space non-cooperative targets and an observation satellite, the position and the brightness of a light source are set, a space non-cooperative target data set under a weak light environment is built, and in the embodiment, a 3000-picture construction database is generated.
The orbital motion of the spatial non-cooperative target is approximated as two-body motion; that is, in inertial space, the equation of motion of the non-cooperative target is
r̈ = −μ·r/‖r‖³
where r is the position vector of the non-cooperative target in the inertial frame and μ is the gravitational constant of the central body.
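As a concrete illustration of how the nominal orbit of step S1 can be generated from this two-body model, the following sketch numerically integrates the equation of motion with SciPy; the central-body parameter, initial state and integration settings are illustrative assumptions rather than values taken from this embodiment.

```python
import numpy as np
from scipy.integrate import solve_ivp

MU_EARTH = 3.986004418e14  # gravitational constant of the Earth [m^3/s^2] (assumed central body)

def two_body(t, state):
    """Right-hand side of the two-body equation r'' = -mu * r / |r|^3."""
    r, v = state[:3], state[3:]
    a = -MU_EARTH * r / np.linalg.norm(r) ** 3
    return np.concatenate([v, a])

# Illustrative initial state of the non-cooperative target (not from the patent):
# a roughly 700 km circular low Earth orbit.
r0 = np.array([7.078e6, 0.0, 0.0])                      # position [m]
v0 = np.array([0.0, np.sqrt(MU_EARTH / 7.078e6), 0.0])  # velocity [m/s]

sol = solve_ivp(two_body, (0.0, 5400.0), np.concatenate([r0, v0]),
                max_step=10.0, rtol=1e-9, atol=1e-9)
positions = sol.y[:3].T  # sampled nominal orbit used to drive the 3D model and virtual camera
```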
Step S2: after the picture is imported into LabelImg by using LabelImg as a marking tool, the position of a non-cooperative target in the picture is marked by a box, and the type name is marked. The marking of all pictures is completed by the method, a marking file in an xml format is generated and is output to a designated folder, a non-cooperative target data set is generated, and the marked images are marked according to 7:2: the scale of 1 is divided into a training set, a validation set and a test set.
Step S3: preprocessing an image input into the detection model by using 2D Gamma correction, and carrying out self-adaptive correction on the image of the non-cooperative target under various illumination conditions to obtain an enhanced RGB image of the non-cooperative target.
The RGB picture is first converted to the HSV color space, and the brightness channel V is convolved with a multi-scale Gaussian function
G(x,y) = λ·exp(−(x²+y²)/c²)
where λ is a normalization factor and c is the scale of the Gaussian kernel, which meets the normalization condition
∫∫G(x,y)dxdy = 1
The image satisfies the Retinex decomposition
S(x,y) = L(x,y)·R(x,y)
where S is the observed image, L is the illumination component and R is the reflectance component. According to the multi-scale Retinex method, the illumination component L is calculated as
L(x,y) = Σᵢ wᵢ·[V(x,y) * Gᵢ(x,y)]
where * denotes convolution and wᵢ is the weight of the i-th scale. The modified 2D Gamma function is then used: focusing on the change of local pixels, a suitable gamma value is set for each pixel according to the illumination component L, and the image brightness is adaptively adjusted. The restored image is enhanced by this method, and the enhanced RGB image is obtained.
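The following sketch shows one way the multi-scale Retinex illumination estimate and the adaptive 2D Gamma correction described above could be implemented; the Gaussian scales, their weights and the per-pixel gamma form γ = 0.5^((m − L)/m) are common choices from the literature and are assumptions here rather than values fixed by this description.

```python
import cv2
import numpy as np

def gamma_2d_correct(bgr, scales=(15, 80, 250), weights=(1/3, 1/3, 1/3)):
    """Adaptive 2D Gamma correction driven by a multi-scale Retinex illumination estimate."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    v = hsv[..., 2]

    # Illumination component L: weighted sum of Gaussian-blurred brightness at several scales.
    L = np.zeros_like(v)
    for sigma, w in zip(scales, weights):
        L += w * cv2.GaussianBlur(v, (0, 0), sigma)

    # Per-pixel gamma from the illumination map (assumed standard 2D Gamma form).
    m = L.mean()
    gamma = np.power(0.5, (m - L) / m)
    v_corrected = 255.0 * np.power(np.clip(v, 0, 255) / 255.0, gamma)

    hsv[..., 2] = np.clip(v_corrected, 0, 255)
    return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)

# enhanced = gamma_2d_correct(cv2.imread("noncooperative_target.png"))
```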
Step S4: and constructing a non-cooperative target detection model DS-YOLO based on an end-to-end YOLOv5, namely respectively inputting an original picture and the picture preprocessed in the step S3 into a main network backbond of the DS-YOLO model by using a CSPDarknet53 and a Swin Transformer to obtain a CSPS module, wherein the CSPS module is used as the main network backbond of the DS-YOLO model, and the main network consists of a residual module, a pooling layer, an activation layer, a normalization layer and other modules. We combine the backbone networks except the first CSPDarknet53 with Swin fransformer, and construct two backbone networks with the same structure. And the output of each CSPS in the processing enhanced RGB image backbone and the information fusion of the processing original image backbone are used as the input of each CSPS of the processing original image backbone, various characteristic information is fused together, and the characteristic representation capability and the characteristic extraction capability of the backbone are improved.
A BiFPN is used as the feature fusion network, and a CBAM attention module is introduced to strengthen the extraction of multi-scale features; the CBAM is inserted into the downsampling channels of the feature fusion network BiFPN to improve the ability of the feature fusion network to express features. The CBAM combines spatial and channel attention modules: the channel attention consists of MaxPool and AvgPool pooling layers, an MLP and a Sigmoid activation function, while the spatial attention applies two channel-wise poolings (global max pooling and average pooling) followed by a Sigmoid activation. The attention maps obtained from the channel attention and the spatial attention are multiplied with the input features to obtain the final features.
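A compact sketch of the CBAM just described, with channel attention built from global max/average pooling, a shared MLP and a Sigmoid, followed by spatial attention built from channel-wise max/average pooling, a convolution and a Sigmoid; the reduction ratio and spatial kernel size are assumed defaults rather than values specified here.

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    def __init__(self, channels, reduction=16, spatial_kernel=7):
        super().__init__()
        # Channel attention: global max/avg pooling -> shared MLP -> Sigmoid
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )
        # Spatial attention: channel-wise max/avg pooling -> conv -> Sigmoid
        self.spatial = nn.Conv2d(2, 1, spatial_kernel, padding=spatial_kernel // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        ca = self.sigmoid(self.mlp(torch.amax(x, dim=(2, 3), keepdim=True)) +
                          self.mlp(torch.mean(x, dim=(2, 3), keepdim=True)))
        x = x * ca                                              # apply channel attention
        sa = self.sigmoid(self.spatial(torch.cat(
            [x.amax(dim=1, keepdim=True), x.mean(dim=1, keepdim=True)], dim=1)))
        return x * sa                                           # apply spatial attention
```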
A DIOU loss function is introduced to construct the target detection model DS-YOLO.
DIOU loss function:
L_DIOU = 1 − IoU + ρ²(b, b^gt)/c²
where b and b^gt represent the center points of the predicted box and the ground-truth box respectively, ρ denotes the Euclidean distance between the two center points, and c denotes the diagonal length of the smallest enclosing region covering both the predicted and ground-truth boxes.
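The DIOU loss above can be computed as in the following sketch for boxes in (x1, y1, x2, y2) format; this is a minimal per-pair form rather than the batched version used inside YOLOv5.

```python
import torch

def diou_loss(pred, target, eps=1e-7):
    """DIoU loss for boxes given as (x1, y1, x2, y2) tensors of shape (N, 4)."""
    # Intersection and union for the IoU term
    lt = torch.max(pred[:, :2], target[:, :2])
    rb = torch.min(pred[:, 2:], target[:, 2:])
    wh = (rb - lt).clamp(min=0)
    inter = wh[:, 0] * wh[:, 1]
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    iou = inter / (area_p + area_t - inter + eps)

    # rho^2: squared distance between box centers
    center_p = (pred[:, :2] + pred[:, 2:]) / 2
    center_t = (target[:, :2] + target[:, 2:]) / 2
    rho2 = ((center_p - center_t) ** 2).sum(dim=1)

    # c^2: squared diagonal of the smallest box enclosing both
    enclose_lt = torch.min(pred[:, :2], target[:, :2])
    enclose_rb = torch.max(pred[:, 2:], target[:, 2:])
    c2 = ((enclose_rb - enclose_lt) ** 2).sum(dim=1) + eps

    return (1 - iou + rho2 / c2).mean()
```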
The training of the optimal target detection model comprises the following steps:
s4-1, clustering the marked anchor frames by using a K-means clustering algorithm, calculating the most suitable anchor frame size, and taking the anchor frame as a training anchor frame;
and S4-2, processing the data set by using two data enhancement methods of light distortion and geometric distortion aiming at the characteristics of space illumination, noise and complex environment, and expanding the data set. For photometric distortion, the brightness, saturation and noise of the image are enhanced to adapt to the influence of illumination and noise in space. In dealing with geometric distortion, random scaling, cropping, translation, shearing, and rotation are added. Generating more training samples by adjusting the exposure from 1 to 1.5 times, generating more training samples by adjusting the saturation from 1 to 1.5 times, adding noise to a picture by using Gaussian noise, and expanding a data set to enhance the generalization performance of a DS-YOLO model;
step S4-3, training the input image data, randomly initializing the weight, setting training super parameters, wherein the learning rate is 0.0001, the cosine annealing super parameters are 0.00036, the learning rate momentum is 0.978, the training image batch input into the network model each time is 64, and the total iteration calculation is 2000 times.
Step S4-4: a loss-error back-propagation method is adopted, and stochastic gradient descent with loss-function minimization is used to realize the optimization iteration of the DS-YOLO model parameters. In this embodiment Adam is used as the optimizer, whose parameter update is given by
m_t := β₁·m_{t−1} + (1 − β₁)·g
v_t := β₂·v_{t−1} + (1 − β₂)·g·g
where g is the gradient of the loss with respect to the parameters, and m_t and v_t are the first- and second-moment estimates.
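Putting steps S4-3 and S4-4 together, a minimal training-loop sketch is given below; `model`, `compute_loss` (combining the DIOU box loss with objectness and classification terms) and `train_loader` are assumed to be defined elsewhere, and the mapping of the embodiment's momentum (0.978) and cosine-annealing hyperparameter onto Adam's beta1 and the scheduler is an assumption.

```python
import torch

# model, compute_loss and train_loader are assumed to be defined as in the previous steps.
optimizer = torch.optim.Adam(model.parameters(),
                             lr=1e-4,                 # learning rate from this embodiment
                             betas=(0.978, 0.999))    # beta1 taken from the stated momentum (assumed mapping)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=2000)  # cosine annealing over 2000 iterations

for iteration, (orig_imgs, enh_imgs, targets) in zip(range(2000), train_loader):
    preds = model(orig_imgs, enh_imgs)      # dual-backbone DS-YOLO forward pass (batch size 64)
    loss = compute_loss(preds, targets)     # DIOU box loss + objectness + classification
    optimizer.zero_grad()
    loss.backward()                         # loss-error back propagation
    optimizer.step()                        # Adam update: m_t and v_t as in the formulas above
    scheduler.step()
```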
And step S4-5, repeating the step S4-4 until the loss function is not reduced, thereby obtaining the optimal non-cooperative target detection model in the low-light environment.
Step S5: the image to be detected is input into the detection model; its brightness is first corrected by the 2D Gamma method, and then the original image and the preprocessed image are used together as the input of the optimal target detection model DS-YOLO, so that spatial non-cooperative target detection in a low-light environment with high accuracy and strong anti-interference capability is realized by the optimal DS-YOLO model.
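An end-to-end inference sketch corresponding to step S5 is given below: the image to be detected is brightness-corrected with the 2D Gamma function, and the original and corrected images are fed jointly to the trained dual-backbone model; `DSYOLO`, the checkpoint name and the output format are illustrative assumptions.

```python
import cv2
import torch

# gamma_2d_correct (step S3) and DSYOLO (the trained dual-backbone detector) are assumed
# to be defined as in the previous sketches.
model = DSYOLO()
model.load_state_dict(torch.load("ds_yolo_best.pt", map_location="cpu"))
model.eval()

def to_tensor(bgr):
    rgb = bgr[..., ::-1].copy()                       # BGR -> RGB
    return torch.from_numpy(rgb).permute(2, 0, 1).float().unsqueeze(0) / 255.0

def detect(path):
    orig = cv2.imread(path)
    enhanced = gamma_2d_correct(orig)                 # 2D Gamma brightness correction
    with torch.no_grad():
        preds = model(to_tensor(orig), to_tensor(enhanced))  # original + enhanced image as dual input
    return preds  # boxes, classes and confidences in whatever format the detection head produces
```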
While the foregoing describes specific embodiments for the purpose of illustrating the general principles of the invention, it will be understood that the foregoing disclosure is only illustrative of those principles and is not intended to limit the scope of the invention, which is defined by the appended claims.

Claims (4)

1. A non-cooperative target detection method in a low-light environment is characterized in that: comprises the following steps of the method,
step S1: three-dimensional modeling is carried out on the space non-cooperative targets, a motion rule is set according to a nominal orbit generated by the space non-cooperative targets, and meanwhile, a motion rule of a corresponding virtual camera is set, so that a space non-cooperative target image is obtained, the space non-cooperative target image characterizes the relative pose change of the space non-cooperative targets and an observation satellite, meanwhile, the position and the brightness of a light source are set, and a space non-cooperative target data set under a weak light environment is constructed;
step S2: labeling the constructed data set, and dividing the labeled image into a training set, a verification set and a test set;
step S3: preprocessing an image input into the detection model by using 2D Gamma correction, and adaptively correcting the image of the non-cooperative target under various illumination conditions to obtain an enhanced RGB image of the non-cooperative target;
step S4: constructing a non-cooperative target detection model DS-YOLO based on an end-to-end YOLOv5, namely, using a dual-backbone detection method, respectively inputting the original picture and the picture preprocessed in step S3, combining CSPDarknet53 with the Swin Transformer to obtain CSPS modules, using the CSPS modules as the backbone network Backbone of the DS-YOLO model, using a BiFPN as the feature fusion network and introducing an attention module to strengthen the extraction of multi-scale features, and introducing a DIOU loss function, thereby constructing the non-cooperative target detection model DS-YOLO; training the non-cooperative target detection model DS-YOLO on the dataset constructed in step S1 to obtain the optimal non-cooperative target detection model DS-YOLO in the low-light environment, and saving the training result;
and S5, inputting the image to be detected into the optimal non-cooperative target detection model DS-YOLO, correcting the brightness through 2D Gamma, then using the optimal target detection model DS-YOLO to detect the original image and the preprocessed image as input, and utilizing the optimal non-cooperative target detection model DS-YOLO to realize the detection of the spatial non-cooperative target in the weak light environment, thereby improving the accuracy and the anti-interference capability of the detection of the non-cooperative target.
2. The method for non-cooperative target detection in a low light environment of claim 1, wherein: in the step S1 of the process,
the orbital motion of the spatial non-cooperative target is approximated as two-body motion, that is, in inertial space, the equation of motion of the non-cooperative target is
r̈ = −μ·r/‖r‖³
where r is the position vector of the non-cooperative target in the inertial frame and μ is the gravitational constant of the central body.
3. The method for non-cooperative target detection in a low light environment of claim 2, wherein: in the step S3 of the process,
the RGB picture is converted to the HSV color space, and the brightness channel V is convolved with a multi-scale Gaussian function
G(x,y) = λ·exp(−(x²+y²)/c²)
where λ is a normalization factor and c is the scale of the Gaussian kernel, which meets the normalization condition
∫∫G(x,y)dxdy = 1
the image satisfies the Retinex decomposition
S(x,y) = L(x,y)·R(x,y)
where S is the observed image, L is the illumination component and R is the reflectance component; according to the multi-scale Retinex method, the illumination component L is calculated as
L(x,y) = Σᵢ wᵢ·[V(x,y) * Gᵢ(x,y)]
where * denotes convolution and wᵢ is the weight of the i-th scale; the modified 2D Gamma function is then used: focusing on the change of local pixels, a suitable gamma value is set for each pixel and the image brightness is adaptively adjusted; the restored image is enhanced with this 2D Gamma correction method to obtain the enhanced RGB image.
4. A method for non-cooperative target detection in a low light environment as recited in claim 3, wherein: in the step S4 of the process,
DIOU loss function:
L_DIOU = 1 − IoU + ρ²(b, b^gt)/c²
where b and b^gt represent the center points of the predicted box and the ground-truth box respectively, ρ denotes the Euclidean distance between the two center points, and c denotes the diagonal length of the smallest enclosing region covering both the predicted and ground-truth boxes;
Training a non-cooperative target detection model DS-YOLO on the data set constructed in the step S1, wherein the implementation method comprises the following steps:
step S4-1, clustering the marked anchor frames by using a clustering algorithm, calculating the most suitable anchor frame size, and taking the anchor frame as an anchor frame for training a non-cooperative target detection model DS-YOLO;
s4-2, expanding a data set by using a plurality of data enhancement methods to enhance the generalization performance of a non-cooperative target detection model DS-YOLO;
s4-3, training the input image data, and randomly initializing DS-YOLO parameters of a non-cooperative target detection model;
s4-4, adopting a loss error back propagation method, and adopting a random gradient descent method and a minimum loss function method to realize optimization iteration of DS-YOLO parameters of a non-cooperative target detection model;
and S4-5, repeating the step S4-4 until the loss function is not reduced, and obtaining the optimal non-cooperative target detection model DS-YOLO in the low-light environment.
CN202311336092.6A 2023-10-16 2023-10-16 Non-cooperative target detection method in low-light environment Pending CN117372764A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311336092.6A CN117372764A (en) 2023-10-16 2023-10-16 Non-cooperative target detection method in low-light environment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311336092.6A CN117372764A (en) 2023-10-16 2023-10-16 Non-cooperative target detection method in low-light environment

Publications (1)

Publication Number Publication Date
CN117372764A true CN117372764A (en) 2024-01-09

Family

ID=89401730

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311336092.6A Pending CN117372764A (en) 2023-10-16 2023-10-16 Non-cooperative target detection method in low-light environment

Country Status (1)

Country Link
CN (1) CN117372764A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117876942A (en) * 2024-03-12 2024-04-12 中国民用航空飞行学院 Unmanned aerial vehicle and bird monitoring method based on convolutional neural network
CN117876942B (en) * 2024-03-12 2024-05-24 中国民用航空飞行学院 Unmanned aerial vehicle and bird monitoring method based on convolutional neural network

Similar Documents

Publication Publication Date Title
CN111950453B (en) Random shape text recognition method based on selective attention mechanism
CN113158862B (en) Multitasking-based lightweight real-time face detection method
CN111967480A (en) Multi-scale self-attention target detection method based on weight sharing
CN111985376A (en) Remote sensing image ship contour extraction method based on deep learning
CN113255659B (en) License plate correction detection and identification method based on MSAFF-yolk 3
CN113780152A (en) Remote sensing image ship small target detection method based on target perception
CN112150493A (en) Semantic guidance-based screen area detection method in natural scene
CN114783024A (en) Face recognition system of gauze mask is worn in public place based on YOLOv5
CN111680705B (en) MB-SSD method and MB-SSD feature extraction network suitable for target detection
CN106338733A (en) Forward-looking sonar object tracking method based on frog-eye visual characteristic
CN117372764A (en) Non-cooperative target detection method in low-light environment
CN114972748B (en) Infrared semantic segmentation method capable of explaining edge attention and gray scale quantization network
CN116758130A (en) Monocular depth prediction method based on multipath feature extraction and multi-scale feature fusion
CN111445496B (en) Underwater image recognition tracking system and method
CN114972952B (en) Model lightweight-based industrial part defect identification method
CN115359474A (en) Lightweight three-dimensional target detection method, device and medium suitable for mobile terminal
CN116993975A (en) Panoramic camera semantic segmentation method based on deep learning unsupervised field adaptation
CN115047455A (en) Lightweight SAR image ship target detection method
CN113989612A (en) Remote sensing image target detection method based on attention and generation countermeasure network
CN114612658A (en) Image semantic segmentation method based on dual-class-level confrontation network
CN112785629A (en) Aurora motion characterization method based on unsupervised deep optical flow network
US11881020B1 (en) Method for small object detection in drone scene based on deep learning
CN115035429A (en) Aerial photography target detection method based on composite backbone network and multiple measuring heads
CN115439738A (en) Underwater target detection method based on self-supervision cooperative reconstruction
CN115482280A (en) Visual positioning method based on adaptive histogram equalization

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination