CN115272871A - Method for detecting dim small target under space-based background - Google Patents

Method for detecting dim small target under space-based background

Info

Publication number
CN115272871A
CN115272871A (application CN202211177614.8A)
Authority
CN
China
Prior art keywords
image
pyramid
frame
point
space
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211177614.8A
Other languages
Chinese (zh)
Inventor
付强
朱瑞
刘壮
王超
史浩东
李英超
姜会林
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changchun University of Science and Technology
Original Assignee
Changchun University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changchun University of Science and Technology filed Critical Changchun University of Science and Technology
Priority to CN202211177614.8A priority Critical patent/CN115272871A/en
Publication of CN115272871A publication Critical patent/CN115272871A/en
Pending legal-status Critical Current

Classifications

    • G06V 20/10 — Scenes; scene-specific elements: terrestrial scenes
    • G06T 7/246 — Image analysis; analysis of motion using feature-based methods, e.g. tracking of corners or segments
    • G06V 10/44 — Local feature extraction by analysis of parts of the pattern, e.g. edges, contours, corners; connectivity analysis
    • G06V 10/764 — Image or video recognition using machine-learning classification, e.g. of video objects
    • G06V 10/806 — Fusion of extracted features at the sensor, preprocessing, feature-extraction or classification level
    • G06T 2207/10016 — Image acquisition modality: video; image sequence
    • G06T 2207/10032 — Image acquisition modality: satellite or aerial image; remote sensing
    • G06T 2207/30181 — Subject of image: Earth observation
    • G06T 2207/30241 — Subject of image: trajectory

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a method for detecting a dim small target under a space-based background, in the technical field of target detection. Pyramid decomposition is performed on the original space-based dim small target image to obtain decomposed images, and the decomposed images are weighted and fused to obtain an image-pyramid weighted-fusion result map; adaptive threshold segmentation is applied to the image-pyramid weighted-fusion result map to determine space-based dim small target candidate points; a test condition is set and the real target points are determined. The invention uses an image pyramid to scale the image over multiple scales, thereby enlarging the data set and diversifying the resolution; fusing the multi-scale images expands the dynamic range of the image, improves performance indexes such as overall sharpness, contrast and gray-level variance, improves the image quality, and provides more information for target detection.

Description

Method for detecting dim small target under space-based background
Technical Field
The invention relates to the technical field of target detection, in particular to a method for detecting dim small targets under a space-based background.
Background
The space target space-based monitoring system is the guarantee of space asset safety, is a basic national strategic facility, is an important development direction of future spatial situation perception, is a leading-edge technology in the field of space detection, and has important strategic significance for effective execution of national space missions and maintenance of national safety systems.
Currently, space-based space-object detection systems include radar detection, infrared detection and visible-light detection. Radar detection uses radio waves or laser to detect a target; its advantages are strong anti-interference capability, high positioning accuracy, and the ability to capture small, distant space targets. Its drawback is that radar detection equipment is heavy and places high demands on the load capacity of the space-based platform. Infrared detection uses infrared imaging to detect targets in the Earth's shadow region, but suffers from short detection distance, weak target signals, low signal-to-noise ratio, and strong sensitivity to background fluctuation; background noise can sometimes even overwhelm the target, making space targets difficult to detect. Compared with the former two, space-based visible-light detection has the following advantages: (1) for a space target traveling outside the shadow region, sunlight illumination makes its visible-light signature obvious and easy to detect; meanwhile, visible-light detection technology is mature, and detection in the visible band can meet the requirements of detecting various space targets such as satellites, space debris, booster rockets and protective covers; (2) visible-light detection can acquire high-resolution images, obtaining a large amount of information in each frame, has a long detection distance, and can detect multiple targets simultaneously; (3) a visible-light image processing system is relatively low-cost, small, and easy to carry on a space-based platform, which facilitates miniaturization in space.
The space-based visible light target monitoring system can master and sense the spatial target situation in real time and timely react to dangerous targets when necessary. The importance and significance of researching space-based space target detection technology are as follows: the complex non-uniform noise background in the image shot in the space-based environment is suppressed, the fast and effective detection and high-precision positioning of the small targets with different tracks and low signal-to-noise ratios are realized, the detection capability of a space target monitoring system can be improved, and the space target monitoring level is greatly improved.
Disclosure of Invention
The invention provides a method for detecting dim small targets under a space-based background, which overcomes the influence of stray light, noise and the like on the detection of space targets in a star map and, on that basis, improves the performance indexes of overall sharpness, contrast and gray-level variance of the image and improves the image quality. A background-noise suppression method suitable for space-based images is developed, which eliminates the interference of complex background noise in the star map while retaining the space-target information.
It should be noted that, in this document, relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
The invention provides a method for detecting dim small targets under a space-based background, which provides the following technical scheme:
a method for detecting dim small targets in a space-based background, the method comprising the steps of:
step 1: pyramid decomposition is performed on the original space-based dim small target image, and Gaussian pyramid decomposition and Laplacian pyramid decomposition are applied to the weight map and the original image respectively, to obtain decomposed images;
step 2: the decomposed images are weighted and fused using the Laplacian pyramid combined with the three information measure factors of image contrast, entropy and exposure; the weight pyramid of each image is calculated at the different resolutions, the image content and image details at different scales are then obtained, and the image-pyramid weighted-fusion result map is produced;
step 3: adaptive threshold segmentation is applied to the image-pyramid weighted-fusion result map; segmentation thresholds of the different regions are calculated using a local threshold segmentation method, and the global and local results are combined to determine the space-based dim small target candidate points;
step 4: a test condition is set, the space-based dim small target candidate points obtained after threshold segmentation are built into a tree structure, and the motion trajectory at each node of the tree structure is pruned; the real tracks are output, all candidate points forming a real track on each frame image are retained, and the real target points are determined.
Preferably, the step1 specifically comprises:
Pyramid decomposition is performed on the original space-based dim small target image, and Gaussian pyramid decomposition and Laplacian pyramid decomposition are applied to the weight map and the original image respectively. The Gaussian pyramid obtains each higher layer from the layer below it: each layer is Gaussian-blurred and then downsampled by a factor of 2 in each direction, so every layer of the Gaussian pyramid is one quarter the size of the layer below it. The image after Gaussian pyramid decomposition is given by:

G_l(i,j) = ∑_{m=−2}^{2} ∑_{n=−2}^{2} w(m,n) · G_{l−1}(2i+m, 2j+n),  1 ≤ l ≤ l_ev, 0 ≤ i < C_l, 0 ≤ j < R_l    (1)

wherein G_l is the l-th layer image of the Gaussian pyramid; C_l and R_l are the total number of rows and the total number of columns of the l-th layer image; w(m,n) is the value in row m, column n of the Gaussian filter template, whose size is 5×5; l_ev is the number of layers of the Gaussian pyramid; and G_{l−1}(2i+m, 2j+n) is the value at row 2i+m, column 2j+n of the (l−1)-th layer of the Gaussian pyramid decomposition image;
A Laplacian pyramid is introduced to retain image details, so that the detail information of the original image can be recovered after the image is reconstructed and fused. The Laplacian pyramid decomposition method is as follows:

The l-th layer image data is Gaussian-blurred and downsampled to obtain G_{l+1}; G_{l+1} is then upsampled and expanded to obtain the Laplacian-pyramid expansion image I_l*:

I_l*(i,j) = 4 ∑_{m=−2}^{2} ∑_{n=−2}^{2} w(m,n) · G_{l+1}((i+m)/2, (j+n)/2),  (i+m)/2, (j+n)/2 ∈ Z    (2)

wherein Z denotes the integers (only terms whose indices (i+m)/2 and (j+n)/2 are integers contribute to the sum), and w(m,n) is the value in row m, column n of the 5×5 Gaussian filter template. The image I_l* has the same size as the l-th layer, so subtracting I_l* from the l-th layer image G_l yields the l-th layer detail image L_l:

L_l = G_l − I_l*    (3)
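As an illustration of the decomposition in equations (1)–(3), the sketch below builds a Gaussian pyramid by blur-and-downsample and a Laplacian pyramid by subtracting the upsampled next layer. The 5×5 generating kernel, edge-replication padding, and function names are illustrative assumptions, not details fixed by the patent.

```python
import numpy as np

# 5x5 Gaussian generating kernel w(m, n) (the common a = 0.4 choice);
# its entries sum to 1, matching the template in eq. (1).
_w = np.array([0.05, 0.25, 0.4, 0.25, 0.05])
KERNEL = np.outer(_w, _w)

def _blur(img):
    """Convolve img with the 5x5 template using edge replication."""
    h, w = img.shape
    padded = np.pad(img, 2, mode="edge")
    out = np.zeros((h, w))
    for m in range(5):
        for n in range(5):
            out += KERNEL[m, n] * padded[m:m + h, n:n + w]
    return out

def gaussian_pyramid(img, levels):
    """Eq. (1): blur each layer, then keep every other row and column."""
    pyr = [np.asarray(img, dtype=float)]
    for _ in range(levels - 1):
        pyr.append(_blur(pyr[-1])[::2, ::2])
    return pyr

def expand(img, shape):
    """Eq. (2): upsample a layer back to `shape` (zero-insert, blur, x4)."""
    up = np.zeros(shape)
    up[::2, ::2] = img
    return 4.0 * _blur(up)

def laplacian_pyramid(gauss):
    """Eq. (3): L_l = G_l - expand(G_{l+1}); the top layer is kept as-is."""
    lap = [g - expand(g_up, g.shape) for g, g_up in zip(gauss[:-1], gauss[1:])]
    lap.append(gauss[-1])
    return lap
```

Because each Laplacian layer stores exactly what the expansion loses, summing the layers back from the top reconstructs the original image, which is why a fused pyramid can retain fine detail.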
Preferably, the step2 specifically comprises:
Laplacian filtering is applied to the original space-based dim small target image and the absolute value of the filter response is taken; the resulting absolute response coefficient reflects the contrast information at each pixel. The discrete form of the second-order partial derivatives of the Laplacian operator used for filtering is:

∇²f(x,y) = f(x+1,y) + f(x−1,y) + f(x,y+1) + f(x,y−1) − 4·f(x,y)    (4)

wherein f(x,y) denotes the image gray value, and ∇²f(x,y) is the sum of its second partial derivatives in the x and y directions; the contrast measure is the absolute value |∇²f(x,y)|.

Entropy, as used in grayscale image evaluation, is an index of how much information an image contains. The entropy e is expressed as:

e = − ∑_{i=0}^{L−1} p_i · log₂ p_i    (5)

wherein p_i is the normalized image histogram value (the probability of gray level i), L is the number of image gray levels, and i indexes the histogram bins.

The exposure of the space-based dim small target image is measured in order to select well-exposed pixel points; E(x,y) is expressed as:

E(x,y) = exp( −(V(x,y) − 0.5)² / (2σ_e²) )    (6)

wherein V(x,y) denotes the normalized image pixel value and σ_e is a constant that controls how quickly the weight falls off away from mid-gray.

The linear combination of the three information measure factors — image contrast, entropy and exposure — is expressed as:

W_k(x,y) = w_C · C_k(x,y) + w_S · S_k(x,y) + w_P · P_k(x,y) + ε    (7)

wherein k is the serial number of the image; C, S and P are the contrast, entropy and exposure measures; ε is a small constant term; w_C is the contrast weight, w_S is the entropy weight, and w_P is the exposure weight, with total weight w = w_C + w_S + w_P.

The corresponding weight map W_k of each sequence image is calculated, and each pixel is normalized so that the weights of all weight maps sum to 1 at every spatial position, i.e.

Ŵ_k(x,y) = W_k(x,y) / ∑_{n=1}^{N} W_n(x,y)    (8)

where N is the number of input images.
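A minimal sketch of the three measures in equations (4)–(8) and the per-pixel weight normalization. The function names, the default σ_e = 0.2, and treating entropy as a single scalar per image are assumptions made for illustration.

```python
import numpy as np

def contrast(img):
    """Eq. (4): absolute response of the discrete Laplacian operator."""
    p = np.pad(img, 1, mode="edge")
    lap = p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4.0 * img
    return np.abs(lap)

def entropy(img, n_bins=256):
    """Eq. (5): e = -sum_i p_i log2 p_i over the gray-level histogram."""
    hist, _ = np.histogram(img, bins=n_bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]                      # 0 * log 0 is taken as 0
    return -np.sum(p * np.log2(p))

def exposure(img, sigma_e=0.2):
    """Eq. (6): favor pixels whose value lies near mid-gray 0.5."""
    return np.exp(-((img - 0.5) ** 2) / (2.0 * sigma_e ** 2))

def fused_weights(images, w_c=1.0, w_s=1.0, w_p=1.0, eps=1e-12):
    """Eqs. (7)-(8): linearly combine the three measures per image, then
    normalize so the weights at every spatial position sum to 1."""
    # entropy is one scalar per image here; it broadcasts over the map
    raw = [w_c * contrast(im) + w_s * entropy(im) + w_p * exposure(im) + eps
           for im in images]
    total = np.sum(raw, axis=0)
    return [r / total for r in raw]
```

The normalized maps can then weight the Laplacian layers of each input before collapsing the fused pyramid.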
Preferably, the step3 specifically comprises:
In the threshold segmentation process, local threshold segmentation and global threshold segmentation are combined to determine the dim small targets. The global threshold segmentation applies a single adaptive threshold T_G to the whole image:

T_G = f̄ + t·σ    (9)

F_G(x,y) = f(x,y) if f(x,y) ≥ T_G, and 0 otherwise    (10)

wherein σ is the gray-level standard deviation of the image, t is an odd number greater than 3, and f̄ is the mean gray value of the image after filtering.

The image F_G after global threshold segmentation is obtained by equation (10). The local threshold segmentation divides the image into N×N regions and calculates a separate segmentation threshold T_i for each region:

T_i = f̄_i + t·σ_i    (11)

F_L(x,y) = f(x,y) if f(x,y) ≥ T_i, and 0 otherwise    (12)

wherein T_i is the segmentation threshold of the i-th region, σ_i is the standard deviation of the i-th divided region, i = 1, 2, 3, …, N×N, and f̄_i is the mean value of the i-th divided region.

The image F_L after local threshold segmentation is obtained by equation (12). The same value of t is used in equations (9) and (11), and the final candidate points of the overall threshold segmentation are the points retained by both segmentations:

F(x,y) = F_G(x,y) ∩ F_L(x,y)    (13)
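The combined global/local segmentation of equations (9)–(13) can be sketched as follows; the equal N×N tiling, the parameter defaults, and the function names are illustrative assumptions.

```python
import numpy as np

def global_threshold(img, t=5):
    """Eq. (9): T_G = mean + t * std of the filtered image."""
    return img.mean() + t * img.std()

def segment(img, t=5, n=4):
    """Eqs. (10)-(13): keep a pixel only if it exceeds BOTH the global
    threshold and the local threshold of its own block (n x n blocks)."""
    keep_global = img >= global_threshold(img, t)     # eq. (10)
    keep_local = np.zeros_like(img, dtype=bool)
    h, w = img.shape
    ys = np.linspace(0, h, n + 1, dtype=int)
    xs = np.linspace(0, w, n + 1, dtype=int)
    for i in range(n):
        for j in range(n):
            block = img[ys[i]:ys[i + 1], xs[j]:xs[j + 1]]
            ti = block.mean() + t * block.std()       # eq. (11)
            keep_local[ys[i]:ys[i + 1], xs[j]:xs[j + 1]] = block >= ti
    # eq. (13): intersection of the global and local segmentations
    return np.where(keep_global & keep_local, img, 0.0)
```

On a low, flat background a single bright pixel survives both thresholds, while noise fluctuations are rejected by whichever threshold is stricter in their neighborhood.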
preferably, the step 4 specifically includes:
setting a test condition according to the track characteristic information of the motion of the space-based target, establishing candidate points into a tree structure, and pruning the motion track of each node of the tree structure;
wherein H_1 denotes that the candidate target point is on the motion trail, and H_2 denotes that the candidate target point is not on the motion trail;
according to the track motion characteristics of the candidate target points, a test condition is set:

accept H_1 if |d_12 − d_23| ≤ d_th and θ ≤ θ_th; otherwise H_2    (14)

wherein A_k1, A_k2 and A_k3 are points on three different frames in a frame set, with centroid coordinates (x_1, y_1), (x_2, y_2) and (x_3, y_3) and frame indices k_1, k_2 and k_3 respectively; the tested quantities are:

d_12 = √( (x_2 − x_1)² + (y_2 − y_1)² )    (15)

d_23 = √( (x_3 − x_2)² + (y_3 − y_2)² )    (16)

θ = arccos( ((x_2 − x_1)(x_3 − x_2) + (y_2 − y_1)(y_3 − y_2)) / (d_12 · d_23) )    (17)

wherein the thresholds for distance and angle are denoted d_th and θ_th respectively, and their values are selected according to the inter-frame motion of the target.
A candidate point in the first frame of the image sequence is taken as the initial point and the search is started. After the search with the first-frame points as initial points, a second search is performed taking the second-frame points that did not satisfy the test condition in the first search as new initial points. With each initial point as a starting point, vectors are constructed to the points of every subsequent frame, forming, vector by vector, a multi-stage hypothesis-testing parallel decision tree;
in the decision stage, each candidate trajectory score from the search stage is evaluated according to:

Γ(k) = ∑_{j∈K} m(j)    (18)

wherein K is the set of image-sequence frames required for a search decision, Γ(k) is the evaluation of the trajectory of point k, m(j) is the per-frame test record defined below, and τ_1 and τ_2 are two thresholds selected on the basis of K, with τ_2 < τ_1. If Γ(k) ≥ τ_1, the trajectory is judged real (H_1); if Γ(k) ≤ τ_2, it is judged false (H_2); if τ_2 < Γ(k) < τ_1, the suspected track is reserved and the judgment is made again on the next frame set by executing:

K ← K ∪ K_next,  Γ(k) ← ∑_{j∈K} m(j)    (19)

wherein K_next is the next set of frames.

All real tracks are recorded, and all candidate points forming a real track on each frame image are retained and determined as real target points; false tracks are deleted and their corresponding candidate points on each frame image are eliminated, completing the detection of all candidate targets in the image sequence.
Preferably, if a point in the current k-th frame satisfies the test condition, m(k) = 1 is recorded, where m(k) is the test record for the k-th frame point; otherwise, when the condition is not satisfied, m(k) = 0 is recorded. After the search of the current image sequence is completed, each sum ∑ m(k) is recorded as a candidate trajectory score.
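Under the reconstruction above — a per-frame record m(k) ∈ {0, 1} and a trajectory score Γ equal to their sum, compared against τ_1 and τ_2 — the three-way sequential decision of equations (18)–(19) can be sketched as follows (function names and the return labels are illustrative assumptions):

```python
def track_score(marks):
    """Eq. (18) as reconstructed here: the score is the number of
    frames whose point satisfied the test condition (m(k) = 1)."""
    return sum(marks)

def decide(marks, tau1, tau2):
    """Three-way decision: 'real' (H1) when the score reaches tau1,
    'false' (H2) when it falls to tau2, and 'suspect' otherwise,
    meaning the track is carried into the next frame set (eq. (19))."""
    score = track_score(marks)
    if score >= tau1:
        return "real"
    if score <= tau2:
        return "false"
    return "suspect"
```

A "suspect" track is re-scored after the next frame set is appended to `marks`, mirroring the sequential update of equation (19).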
Preferably, the search comprises the following steps:

Step 1: candidate points in the first frame and the second frame are respectively selected as initial points of the search;

Step 2: the first candidate point in the subsequent image frames that satisfies the search-radius-range condition and the maximum and minimum inter-frame motion distances is found, and an initial vector is constructed from the initial point to it; num(k) = 0 denotes that no candidate point in the current k-th frame satisfies the search-radius-range condition, in which case the search continues with the next frame until a point satisfying the search radius range is found;

Step 3: after the initial vector is determined, the candidate points in the remaining frames that satisfy the search-radius-range condition are found, a trajectory vector is constructed to each of them, and it is determined whether the pair of vectors satisfies the test condition in equation (14).
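The per-triple test of equation (14), as reconstructed here from the stated distance and angle thresholds, can be sketched as a single check on three consecutive centroids (the function name and tuple-based point representation are illustrative assumptions):

```python
import math

def passes_test(p1, p2, p3, d_th, theta_th):
    """Eqs. (14)-(17) as reconstructed here: accept the triple of
    centroids when the two consecutive step lengths are consistent
    and the turning angle between the displacement vectors is small,
    consistent with a target moving on a near-uniform trajectory."""
    d12 = math.dist(p1, p2)                 # eq. (15)
    d23 = math.dist(p2, p3)                 # eq. (16)
    if d12 == 0 or d23 == 0:
        return False                        # degenerate triple
    dot = ((p2[0] - p1[0]) * (p3[0] - p2[0]) +
           (p2[1] - p1[1]) * (p3[1] - p2[1]))
    # eq. (17); clamp for floating-point safety before arccos
    theta = math.acos(max(-1.0, min(1.0, dot / (d12 * d23))))
    return abs(d12 - d23) <= d_th and theta <= theta_th
```

In the tree search, each new frame's candidate is appended only if it passes this test against the last two points of the growing track; failing candidates prune that branch.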
A system for detecting small dark objects in a space-based background, the system comprising:
the decomposition module is used for performing pyramid decomposition on the original space-based dim small target image, applying Gaussian pyramid decomposition and Laplacian pyramid decomposition to the weight map and the original image respectively, to obtain decomposed images;
the fusion module is used for weighting and fusing the decomposed images through the Laplacian pyramid combined with the three information measure factors of image contrast, entropy and exposure, calculating the weight pyramid of each image at the different resolutions and then obtaining the image content and image details at different scales, to obtain the image-pyramid weighted-fusion result map;
the candidate point module is used for applying adaptive threshold segmentation to the image-pyramid weighted-fusion result map, calculating the segmentation thresholds of the different regions using a local threshold segmentation method, and combining the global and local results to determine the space-based dim small target candidate points;
the real target point module is used for setting a test condition, building the space-based dim small target candidate points obtained after threshold segmentation into a tree structure, and pruning the motion trajectory at each node of the tree structure; the real tracks are output, all candidate points forming a real track on each frame image are retained, and the real target points are determined.
A computer-readable storage medium, having stored thereon a computer program for execution by a processor for implementing a method of dim small object detection in a space-based background as claimed in any one of claims 1 to 7.
A computer device comprising a memory storing a computer program and a processor implementing the method of dim small target detection in a space-based context according to any one of claims 1-7 when executing the computer program.
The invention has the following beneficial effects:
the invention adopts the image pyramid to carry out multi-scale scaling on the image, thereby realizing the purposes of increasing the data set and diversifying the resolution, fusing the multi-scale image, expanding the dynamic range of the image, improving the performance indexes such as the integral definition, the contrast, the gray variance and the like of the image, improving the image quality and providing more information for target detection.
The invention adopts a multi-level hypothesis-testing method to detect low-signal-to-noise-ratio dim small targets of unknown position and velocity in an image sequence. The method constructs the motion trajectories of a large number of candidate targets in the image sequence into a tree structure and prunes each frame of the sequence images with the hypothesis-test condition, which reduces the space and time complexity of the algorithm, improves the detection probability, and reduces the number of false alarms.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the embodiments or the prior art descriptions will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
FIG. 1 is a flow chart of the detection method of the present invention;
FIG. 2 is a graph of an acquired original space-based target image;
FIG. 3 is a diagram of the result of pyramid-weighted fusion of images;
FIG. 4 is a graph of threshold segmentation results based on an image pyramid algorithm;
FIG. 5 is a graph of the results of space-based dim small target detection.
Detailed Description
The technical solutions of the present invention will be described clearly and completely with reference to the accompanying drawings, and it should be understood that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the description of the present invention, it should be noted that the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc., indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, and are only for convenience of description and simplicity of description, but do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus, should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
In the description of the present invention, it should be noted that, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "connected" are to be construed broadly and may be, for example, fixedly connected, detachably connected, or integrally connected; can be mechanically or electrically connected; they may be connected directly or indirectly through intervening media, or they may be interconnected between two elements. The specific meanings of the above terms in the present invention can be understood in specific cases to those skilled in the art.
In addition, the technical features involved in the different embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
The present invention will be described in detail with reference to specific examples.
The first embodiment is as follows:
as shown in fig. 1 to 5, the specific optimized technical solution adopted by the present invention to solve the above technical problems is the method for detecting dim small targets under a space-based background described below.
A method of detecting dim small targets in a space-based background, the method comprising the steps of:
step 1: pyramid decomposition is performed on the original space-based dim small target image, and Gaussian pyramid decomposition and Laplacian pyramid decomposition are applied to the weight map and the original image respectively, to obtain decomposed images;
step 2: the decomposed images are weighted and fused using the Laplacian pyramid combined with the three information measure factors of image contrast, entropy and exposure; the weight pyramid of each image is calculated at the different resolutions, the image content and image details at different scales are then obtained, and the image-pyramid weighted-fusion result map is produced;
step 3: adaptive threshold segmentation is applied to the image-pyramid weighted-fusion result map; segmentation thresholds of the different regions are calculated using a local threshold segmentation method, and the global and local results are combined to determine the space-based dim small target candidate points;
step 4: a test condition is set, the space-based dim small target candidate points obtained after threshold segmentation are built into a tree structure, and the motion trajectory at each node of the tree structure is pruned; the real tracks are output, all candidate points forming a real track on each frame image are retained, and the real target points are determined.
The method for detecting dim small targets under a space-based background provided by the invention mainly has two features. First, an image pyramid is used to perform pyramid decomposition on the original image, and a Laplacian pyramid is built from the decomposed Gaussian pyramid so that the image details are retained; the original image is then reconstructed with a fusion strategy weighted by the three information measure factors.
Second, a multi-level hypothesis testing method builds the motion trajectories of a large number of candidate targets in the image sequence into a tree structure, and each frame in the sequence images is pruned by the hypothesis test conditions, thereby obtaining the detection result of the space-based dim small target.
The second concrete embodiment:
the second embodiment of the present application differs from the first embodiment only in that:
the step1 specifically comprises the following steps:
performing pyramid decomposition on the original space-based dim small target image, and performing Gaussian pyramid decomposition and Laplacian pyramid decomposition on the weight map and the original image respectively. In the Gaussian pyramid, each higher layer is obtained from the layer below by Gaussian blurring followed by downsampling by a factor of 2: the 0-th layer (the original data image) is Gaussian-blurred and then downsampled by discarding alternate rows and columns to obtain the next pyramid layer, and so on, so that each layer image of the Gaussian pyramid is one quarter the size of the lower-layer image. The image after Gaussian pyramid decomposition is given by the following formula:
$$G_l(i,j)=\sum_{m=-2}^{2}\sum_{n=-2}^{2}w(m,n)\,G_{l-1}(2i+m,\;2j+n)\qquad(1)$$

wherein $G_l$ is the $l$-th layer image of the Gaussian pyramid; $C_l$, $R_l$ are the total number of rows and the total number of columns of the $l$-th layer image; $w(m,n)$ is the value at row $m$, column $n$ of the Gaussian filter template, whose size is $5\times 5$; $N$ represents the number of layers of the Gaussian pyramid; and $i$, $j$ are the row and column values of the Gaussian pyramid decomposition image. In the invention the maximum number of decomposable layers is taken, calculated as $N=\lfloor\log_2\min(C_0,R_0)\rfloor$.
During the decomposition of the Gaussian pyramid, Gaussian filtering loses the high-frequency details of the image. Introducing the Laplacian pyramid retains these image details, so that the detail information of the original image can be restored after image reconstruction and fusion. The Laplacian pyramid decomposition proceeds as follows:
Gaussian blurring and downsampling the $l$-th layer original image data gives $G_{l+1}$; performing upsampling expansion on $G_{l+1}$ gives the Laplacian pyramid decomposition image $I_l^*$:

$$I_l^*(i,j)=4\sum_{m=-2}^{2}\sum_{n=-2}^{2}w(m,n)\,G_{l+1}\!\left(\frac{i+m}{2},\;\frac{j+n}{2}\right),\qquad \frac{i+m}{2},\,\frac{j+n}{2}\in Z\qquad(2)$$

wherein $Z$ denotes the positive integers, so that the image $I_l^*$ has the same size as the $l$-th layer. The $l$-th layer image $G_l$ and $I_l^*$ are then subtracted to obtain the $l$-th layer image $L_l$ containing the detail information:

$$L_l(i,j)=G_l(i,j)-I_l^*(i,j)\qquad(3)$$

wherein $w(m,n)$ is the value at row $m$, column $n$ of the Gaussian filter template.
The third concrete embodiment:
the difference between the third embodiment and the second embodiment of the present application is only that:
the step2 specifically comprises the following steps:
Laplacian filtering is performed on the original space-based dim small target image and the absolute value of the coefficients obtained from the filter response is taken; the obtained absolute response coefficient reflects the contrast information at each pixel. The Laplacian operator used for filtering, in the discrete form derived from the second-order partial derivatives, is expressed by the following formula:

$$\nabla^2 f(x,y)=f(x+1,y)+f(x-1,y)+f(x,y+1)+f(x,y-1)-4f(x,y)\qquad(4)$$

wherein $f(x,y)$ represents the image gray value, $\partial f/\partial x$ and $\partial f/\partial y$ are the first partial derivatives, and $\nabla^2 f$ is the second partial derivative of $f$;
entropy is used as an index to evaluate the amount of information contained in an image; in grayscale image evaluation, the entropy $e$ is expressed by the following formula:

$$e=-\sum_{i=0}^{L-1}p_i\log_2 p_i\qquad(5)$$

wherein $p_i$ represents the image histogram (the probability of the $i$-th gray level), $L$ represents the number of image gray levels, and $i$ is the index of the $i$-th histogram bin;
the exposure of the dim small target image is measured to select well-exposed pixel points, expressed by the following formula:

$$P(x,y)=\exp\!\left(-\frac{\left(f(x,y)-0.5\right)^2}{2\sigma^2}\right)\qquad(6)$$

wherein $f(x,y)$ represents the image pixel value and $\sigma$ controls how quickly the weight falls off away from mid-gray;
the linear combination of three information measure factors of image contrast, entropy and exposure is represented by the following formula:
$$W_k=\omega_C C_k+\omega_S S_k+\omega_P P_k+\varepsilon\qquad(7)$$

wherein $k$ is the serial number of the image; $C$, $S$, $P$ are the contrast, entropy and proper exposure; $\varepsilon$ is a constant term; $W$ is the total weight value; $\omega_C$ is the contrast weight; $\omega_S$ is the entropy weight; and $\omega_P$ is the exposure weight;
the corresponding weight maps $W_k$ of all sequence images are calculated respectively, and each pixel is cumulatively normalized so that the weights of all weight maps sum to 1 at every spatial position, i.e.

$$\overline{W}_k(x,y)=\left[\sum_{k'=1}^{N}W_{k'}(x,y)\right]^{-1}W_k(x,y)\qquad(8)$$

where $N$ is the number of input images. The more input images there are, the more accurate the coefficients of the final weight map calculation, so the fused image retains the brightness and detail information of the original scene more completely. Fig. 2 is a graph of the image pyramid weighted fusion result.
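The three information measure factors and the weight normalization above can be sketched as follows. The unit combination weights and the spread value `sigma = 0.2` in the exposure measure are assumptions, since the patent's numeric constants are not recoverable from the text:

```python
import numpy as np

def contrast(img):
    """Eq. (4): absolute response of the discrete Laplacian."""
    p = np.pad(img, 1, mode="edge")
    lap = (p[:-2, 1:-1] + p[2:, 1:-1] +
           p[1:-1, :-2] + p[1:-1, 2:] - 4.0 * img)
    return np.abs(lap)

def entropy(img, levels=256):
    """Eq. (5): e = -sum p_i * log2(p_i) over the gray histogram."""
    hist, _ = np.histogram(img, bins=levels, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def exposure(img, sigma=0.2):
    """Eq. (6): closeness of each pixel to mid-gray 0.5."""
    return np.exp(-((img - 0.5) ** 2) / (2.0 * sigma ** 2))

def fused_weights(images, w_c=1.0, w_s=1.0, w_p=1.0, eps=1e-12):
    """Eqs. (7)-(8): linear combination of the three measures, then
    per-pixel normalization so the N weight maps sum to 1."""
    raw = [w_c * contrast(im) + w_s * entropy(im)
           + w_p * exposure(im) + eps for im in images]
    total = np.sum(raw, axis=0)
    return [w / total for w in raw]
```

Note that the entropy measure is a scalar per image (it biases the whole weight map of an information-rich frame), while contrast and exposure vary per pixel.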
The fourth concrete embodiment:
the difference between the fourth embodiment and the third embodiment is only that:
the step3 specifically comprises the following steps:
the threshold segmentation process determines small targets by combining local threshold segmentation with global threshold segmentation. The global threshold segmentation is an adaptive threshold segmentation applied to the whole image, whose threshold $T(x,y)$ is represented by the following formulas:

$$T(x,y)=\bar f(x,y)+t\sigma\qquad(9)$$

$$f_1(x,y)=\begin{cases}1, & f(x,y)>T(x,y)\\ 0, & \text{otherwise}\end{cases}\qquad(10)$$

wherein $\sigma$ is the gray-level standard deviation of the image, $t$ is an odd number greater than 3, $\bar f(x,y)$ is the gray value of the image after the Gaussian filtering process, and $w(m,n)$ is the Gaussian filter template value. The image $f_1(x,y)$ after global threshold segmentation is obtained through formula (10).
The local threshold segmentation divides the image into $N\times N$ regions and respectively calculates the $i$-th segmentation threshold $T_i$ of the different regions:

$$T_i=m_i+t\sigma_i\qquad(11)$$

$$f_2(x,y)=\begin{cases}1, & f(x,y)>T_i\\ 0, & \text{otherwise}\end{cases}\qquad(12)$$

wherein $T_i$ is the $i$-th segmentation threshold of the different regions, $\sigma_i$ is the standard deviation of the $i$-th segmented region, $i = 1,2,3,\dots,N\times N$, and $m_i$ is the mean value of the $i$-th segmented region. The image $f_2(x,y)$ after local threshold segmentation is obtained by formula (12), and the same $t$ is used in both formulas (9) and (11). The final candidate points of the overall threshold segmentation are obtained as:

$$f_c(x,y)=f_1(x,y)\cdot f_2(x,y)\qquad(13).$$
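A minimal sketch of the combined global and local threshold segmentation, assuming a mean-plus-$t\sigma$ threshold in both stages and an intersection of the two binary masks; the parameter values and function names are illustrative:

```python
import numpy as np

def global_segment(img, t=5):
    """Eqs. (9)-(10): adaptive threshold = Gaussian-filtered image
    plus t times the global gray-level standard deviation."""
    k = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
    p = np.pad(img, 2, mode="edge")
    smooth = np.zeros_like(img, dtype=float)
    for m in range(5):
        for n in range(5):
            smooth += k[m] * k[n] * p[m:m + img.shape[0], n:n + img.shape[1]]
    T = smooth + t * img.std()
    return (img > T).astype(np.uint8)

def local_segment(img, n=4, t=5):
    """Eqs. (11)-(12): split into n x n regions, threshold each with
    its own mean + t * std."""
    out = np.zeros(img.shape, dtype=np.uint8)
    rows_split = np.array_split(np.arange(img.shape[0]), n)
    cols_split = np.array_split(np.arange(img.shape[1]), n)
    for rows in rows_split:
        for cols in cols_split:
            block = img[np.ix_(rows, cols)]
            Ti = block.mean() + t * block.std()
            out[np.ix_(rows, cols)] = (block > Ti).astype(np.uint8)
    return out

def candidates(img, n=4, t=5):
    """Eq. (13): intersection of global and local segmentations."""
    return global_segment(img, t) & local_segment(img, n, t)
```

With a $t\sigma$ offset in each region, a lone bright pixel only survives the local test when its region is large enough that the outlier exceeds the region's mean plus $t$ standard deviations, which is the intended suppression of isolated noise in small blocks.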
the fifth concrete example:
the difference between the fifth embodiment and the fourth embodiment is only that:
Hypothesis test conditions are set according to the motion characteristics of the space-based target, the candidate points obtained after threshold segmentation are built into a tree structure, and the motion trajectory of each node of the tree structure is pruned; a multi-level hypothesis testing discrimination tree is established to identify the motion trajectory points of the real space-based dim small target.
Space targets, including in-orbit spacecraft and debris, move in orbit under the action of gravity and follow Kepler's laws of motion; within a continuous 5-frame exposure time, however, each target can be assumed to have its own fixed motion direction and speed. The method sets the hypothesis test conditions according to the trajectory characteristic information of the space-based target motion and builds a series of candidate points into a tree structure, pruning the motion trajectory of each node of the tree structure in the process.
The step 4 specifically comprises the following steps:
setting a test condition according to track characteristic information of the motion of the space-based target, establishing the candidate points into a tree structure, and pruning the motion track of each node of the tree structure;
wherein $H_1$: the candidate target point is on the motion trajectory; $H_2$: the candidate target point is not on the motion trajectory;
according to the track motion characteristics of the candidate target points, setting a test condition:
$$\begin{cases}H_1, & d\le\delta_d \ \text{and}\ \theta\le\delta_\theta\\ H_2, & \text{otherwise}\end{cases}\qquad(14)$$

wherein $X_{k_1}$, $X_{k_2}$ and $X_{k_3}$ are points on three different frames in a frame set, with centroid coordinates $(x_{k_1},y_{k_1})$, $(x_{k_2},y_{k_2})$ and $(x_{k_3},y_{k_3})$ and frame indices $k_1$, $k_2$ and $k_3$ respectively:

$$d_{12}=\sqrt{(x_{k_2}-x_{k_1})^2+(y_{k_2}-y_{k_1})^2}\qquad(15)$$

$$d_{23}=\sqrt{(x_{k_3}-x_{k_2})^2+(y_{k_3}-y_{k_2})^2}\qquad(16)$$

$$\theta=\arccos\frac{(X_{k_2}-X_{k_1})\cdot(X_{k_3}-X_{k_2})}{d_{12}\,d_{23}}\qquad(17)$$

wherein $d=\left|d_{12}/(k_2-k_1)-d_{23}/(k_3-k_2)\right|$ measures the consistency of the inter-frame step lengths, and the thresholds on the distance and the angle are denoted $\delta_d$ and $\delta_\theta$ respectively, their values being selected according to the imaging conditions.
Usually, a candidate point in the first frame of the image sequence is taken as the initial point from which the search starts. An object with a high signal-to-noise ratio can be detected after threshold segmentation in every frame of the image sequence; however, a dim small object with a low signal-to-noise ratio (for example, below 3) is easily submerged by background and noise after threshold segmentation in some frame, cannot be detected in that frame, and its motion trajectory therefore appears discontinuous over the image sequence. Once the target is lost in some frame, the subsequent search using the initial point as the root node fails. An improved multi-level hypothesis testing search tree is therefore built to solve this problem: after the search starting from the point in the first frame, the points in the second-frame search results that do not satisfy the hypothesis test condition are used as new initial points for a second search. In addition, unlike the traditional frame-by-frame point-to-point progressive search, vectors are constructed from the initial point to the points of the subsequent frames, forming a vector-to-vector multi-level hypothesis testing parallel discrimination tree.
A candidate point in the first frame of the image sequence is taken as the initial point from which the search starts; after the search with the first-frame point as the initial point, a second search is performed taking the points in the second-frame search results that do not satisfy the test condition as new initial points, and vectors are constructed from the initial point to the points of each subsequent frame, forming a vector-to-vector multi-level hypothesis testing parallel discrimination tree;
in the decision stage, the score of each candidate trajectory from the search stage is evaluated according to the following formula:

$$\begin{cases}\text{true trajectory}, & \sum_{k\in\Omega}Num(k)\ge T_1\\ \text{suspected trajectory}, & T_2\le\sum_{k\in\Omega}Num(k)<T_1\\ \text{false trajectory}, & \sum_{k\in\Omega}Num(k)<T_2\end{cases}\qquad(18)$$

wherein $\Omega$ is the set of image sequence frames required for the search decision, $Num(k)$ is the evaluation of the point trajectory in the $k$-th frame, and $T_1$, $T_2$ are two thresholds selected according to $|\Omega|$.
Within 5 frames, the characteristics of the target motion trajectory make it easy to judge real target trajectories and eliminate false ones; but simple classification with a single threshold easily misses some low signal-to-noise-ratio targets. A class of "suspected trajectories" is therefore added: when 3 points in the 5 frames satisfy the hypothesis test condition, the current 5 frames cannot accurately determine whether the trajectory is real or a false alarm, so the suspected trajectory is retained and judged again in the next frame set, performing the following operation:
$$\begin{cases}\text{true trajectory}, & \sum_{k\in\Omega'}Num(k)\ge T_1\\ \text{false trajectory}, & \text{otherwise}\end{cases}\qquad(19)$$

wherein $\Omega'$ is the next frame set. All real trajectories are recorded, and all candidate points forming the real trajectories are retained on each frame image and determined to be real target points; the false trajectories are deleted and the corresponding candidate points on each frame image are eliminated, completing the detection of all candidate targets on the image sequence. Fig. 5 is a final view of the detection result of the space-based dim small target.
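The hypothesis test of equations (14)–(17) and the $Num(k)$ scoring can be sketched as follows; the threshold values `delta_d` and `delta_theta` and the exact form of the distance-consistency test are assumptions, since the patent's values are not recoverable from the text:

```python
import math

def track_test(p1, p2, p3, delta_d=2.0, delta_theta=math.radians(10)):
    """Eqs. (14)-(17): accept H1 when the per-frame step lengths of
    (p1, p2, p3) agree within delta_d and the turn angle between the
    two segments is at most delta_theta.  Each point is
    (frame_index, x, y)."""
    (k1, x1, y1), (k2, x2, y2), (k3, x3, y3) = p1, p2, p3
    d12 = math.hypot(x2 - x1, y2 - y1)              # eq. (15)
    d23 = math.hypot(x3 - x2, y3 - y2)              # eq. (16)
    if d12 == 0 or d23 == 0:
        return False
    dot = (x2 - x1) * (x3 - x2) + (y2 - y1) * (y3 - y2)
    theta = math.acos(max(-1.0, min(1.0, dot / (d12 * d23))))  # eq. (17)
    # uniform motion: step length per frame gap must agree
    speed_gap = abs(d12 / (k2 - k1) - d23 / (k3 - k2))
    return speed_gap <= delta_d and theta <= delta_theta

def score_track(points, delta_d=2.0, delta_theta=math.radians(10)):
    """Eq. (18) scoring: Num(k) = 1 for each frame whose point passes
    the test against its two predecessors, else 0; the trajectory
    score is the sum of Num(k) over the frame set."""
    num = [0] * len(points)
    for i in range(2, len(points)):
        if track_test(points[i - 2], points[i - 1], points[i],
                      delta_d, delta_theta):
            num[i] = 1
    return sum(num)
```

Comparing this score against the two thresholds $T_1$ and $T_2$ then classifies the trajectory as true, suspected, or false per equation (18).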
The sixth specific embodiment:
the difference between the sixth embodiment and the fifth embodiment is only that:
If a point in the current $k$-th frame satisfies the test condition, $Num(k)=1$ is recorded, where $Num(k)$ is the check on the $k$-th frame point; otherwise, when the condition is not satisfied, $Num(k)=0$ is recorded. After the search of the current image sequence is completed, the sum of all $Num(k)$ is recorded as the score of one candidate trajectory.
The seventh specific embodiment:
the seventh embodiment of the present application differs from the sixth embodiment only in that:
the search comprises the following steps:
step1: respectively selecting candidate points in the first frame and the second frame as initial points for searching;
step2: in the subsequent image frames, searching for the first candidate point that satisfies the search radius range condition, the maximum inter-frame motion distance and the minimum inter-frame motion distance, and constructing the initial vector from the initial point to this point; $Num(k)=0$ denotes that no candidate point in the current $k$-th frame satisfies the search radius range condition, and the search continues in the next frame until a point satisfying the search radius range is found;
step3: after the initial vector is determined, searching the remaining frames for candidate points satisfying the search radius range condition, constructing trajectory vectors with them, and judging whether each of them and the initial vector satisfy the check condition in equation (14).
The eighth embodiment:
the eighth embodiment of the present application differs from the seventh embodiment only in that:
the search comprises the following steps:
step1: respectively selecting candidate points in the first frame and the second frame as initial points for searching;
step2: in the subsequent image frames, searching for the first candidate point that satisfies the search radius range condition, the maximum inter-frame motion distance and the minimum inter-frame motion distance, and forming the initial vector from the initial point to this point; $Num(k)=0$ denotes that no candidate point in the current $k$-th frame satisfies the search radius range condition, and the candidate points are searched until a point satisfying the search radius range is found;
step3: after the initial vector is determined, searching the remaining frames for candidate points satisfying the search radius range condition, constructing trajectory vectors with them, and judging whether each of them and the initial vector satisfy the check condition in equation (14).
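The three search steps of the embodiments above can be sketched as follows; for brevity only first-frame initial points are used (the method also seeds from the second frame), and the radius bounds and thresholds are illustrative values:

```python
import math

def search_tracks(frames, r_min=1.0, r_max=5.0,
                  delta_d=2.0, delta_theta=math.radians(10)):
    """frames: one list of candidate (x, y) points per frame.
    Step 2 fixes the initial vector from a first-frame point to the
    first candidate whose per-frame distance lies in [r_min, r_max];
    step 3 extends the track with candidates passing the eq. (14)
    test (inlined here for self-containment)."""
    def ok(p1, k1, p2, k2, p3, k3):
        d12 = math.hypot(p2[0] - p1[0], p2[1] - p1[1])
        d23 = math.hypot(p3[0] - p2[0], p3[1] - p2[1])
        if d12 == 0 or d23 == 0:
            return False
        dot = ((p2[0] - p1[0]) * (p3[0] - p2[0]) +
               (p2[1] - p1[1]) * (p3[1] - p2[1]))
        theta = math.acos(max(-1.0, min(1.0, dot / (d12 * d23))))
        return (abs(d12 / (k2 - k1) - d23 / (k3 - k2)) <= delta_d
                and theta <= delta_theta)

    tracks = []
    for start in frames[0]:
        # step 2: first in-radius candidate in a later frame
        head = None
        for k in range(1, len(frames)):
            for p in frames[k]:
                d = math.hypot(p[0] - start[0], p[1] - start[1])
                if r_min <= d / k <= r_max:
                    head = (k, p)
                    break
            if head:
                break
        if head is None:
            continue
        # step 3: extend the track through the remaining frames
        track = [(0, start), head]
        for k in range(head[0] + 1, len(frames)):
            for p in frames[k]:
                (k1, p1), (k2, p2) = track[-2], track[-1]
                if ok(p1, k1, p2, k2, p, k):
                    track.append((k, p))
                    break
        tracks.append(track)
    return tracks
```

A candidate far off the uniform-motion path (a noise point) fails either the radius gate of step 2 or the vector test of step 3 and is pruned from the tree.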
The specific embodiment is nine:
the ninth embodiment of the present application differs from the eighth embodiment only in that:
the present invention provides a computer-readable storage medium having stored thereon a computer program for execution by a processor for implementing a method of dim small target detection in a space-based background.
The specific example is ten:
the embodiment ten of the present application differs from the embodiment nine only in that:
the invention provides computer equipment which comprises a memory and a processor, wherein a computer program is stored in the memory, and when the processor runs the computer program stored in the memory, the processor executes a dim small target detection method under a space-based background.
The above description is only a preferred embodiment of the method for detecting the dim small target under the space-based background, and the protection scope of the method for detecting the dim small target under the space-based background is not limited to the above embodiments, and all technical solutions belonging to the idea belong to the protection scope of the present invention. It should be noted that modifications and variations that do not depart from the gist of the invention are intended to be within the scope of the invention.

Claims (10)

1. A method for detecting dim small targets under a space-based background is characterized by comprising the following steps: the method comprises the following steps:
step1: performing pyramid decomposition on the original space-based dim small target image, and performing Gaussian pyramid decomposition and Laplacian pyramid decomposition on the weight map and the original image respectively to obtain a decomposed image;
step2: weighting and fusing the decomposed images by combining the Laplacian pyramid with three information measure factors of image contrast, entropy and exposure, respectively calculating the weight pyramid of each image under different resolutions, and then obtaining image contents and image details of different scales to obtain an image pyramid weighting and fusing result image;
and step3: adopting self-adaptive threshold segmentation to the image pyramid weighting fusion result graph, respectively calculating segmentation thresholds of different areas by using a local threshold segmentation method, combining the whole with the local, and determining a space-based dim small target candidate point;
and 4, step 4: setting a test condition, establishing space-based dim small target candidate points obtained after threshold segmentation into a tree structure, and pruning the motion trail of each node of the tree structure; and outputting the real track, reserving all candidate points forming the real track on each frame of image, and determining a real target point.
2. The method for detecting dim small targets under space-based background as claimed in claim 1, wherein:
the step1 specifically comprises the following steps:
carrying out pyramid decomposition on an original space-based dim small target image, respectively carrying out Gaussian pyramid decomposition and Laplacian pyramid decomposition on a weight map and the original image, each higher layer of the Gaussian pyramid being obtained from the layer below by Gaussian blurring followed by downsampling by a factor of 2, so that each layer image of the Gaussian pyramid is one quarter the size of the lower-layer image, the image after Gaussian pyramid decomposition being given by the following formula:

$$G_l(i,j)=\sum_{m=-2}^{2}\sum_{n=-2}^{2}w(m,n)\,G_{l-1}(2i+m,\;2j+n)\qquad(1)$$

wherein $G_l$ is the $l$-th layer image of the Gaussian pyramid, $C_l$, $R_l$ are the total number of rows and the total number of columns of the $l$-th layer image, $w(m,n)$ is the value at row $m$, column $n$ of the Gaussian filter template, taking a size of $5\times 5$, $N$ represents the number of layers of the Gaussian pyramid, and $i$, $j$ are the row and column values of the Gaussian pyramid decomposition image;
introducing a Laplacian pyramid to retain image details, the detail information of the original image being recovered after the image is reconstructed and fused, wherein the Laplacian pyramid decomposition method comprises:
carrying out Gaussian blur and downsampling on the $l$-th layer original image data to obtain $G_{l+1}$, and performing upsampling expansion on $G_{l+1}$ to obtain the Laplacian pyramid decomposition image $I_l^*$:

$$I_l^*(i,j)=4\sum_{m=-2}^{2}\sum_{n=-2}^{2}w(m,n)\,G_{l+1}\!\left(\frac{i+m}{2},\;\frac{j+n}{2}\right),\qquad \frac{i+m}{2},\,\frac{j+n}{2}\in Z\qquad(2)$$

wherein $Z$ represents the positive integers, the image $I_l^*$ and the $l$-th layer being of the same size; the $l$-th layer image $G_l$ and $I_l^*$ are then subtracted to obtain the $l$-th layer image $L_l$ containing the detail information:

$$L_l(i,j)=G_l(i,j)-I_l^*(i,j)\qquad(3)$$

wherein $w(x,y)$ is the value at row $x$, column $y$ of the Gaussian filter template.
3. The method for detecting dim small targets under space-based background as claimed in claim 2, wherein:
the step2 specifically comprises the following steps:
performing Laplacian filtering on the original space-based dim small target image and taking the absolute value of the coefficients obtained from the filter response, the obtained absolute response coefficient reflecting the contrast information at each pixel, the discrete form of the second-order partial derivative of the Laplacian operator used for filtering being expressed by the following formula:

$$\nabla^2 f(x,y)=f(x+1,y)+f(x-1,y)+f(x,y+1)+f(x,y-1)-4f(x,y)\qquad(4)$$

wherein $f(x,y)$ represents the image gray value, $\partial f/\partial x$ and $\partial f/\partial y$ are the first partial derivatives, and $\nabla^2 f$ is the second partial derivative of $f$;
entropy is used as an index to evaluate the amount of information contained in an image; in grayscale image evaluation, the entropy $e$ is expressed by the following formula:

$$e=-\sum_{i=0}^{L-1}p_i\log_2 p_i\qquad(5)$$

wherein $p_i$ represents the image histogram, $L$ represents the number of image gray levels, and $i$ is the index of the $i$-th image histogram bin;
measuring the exposure of the space-based dim small target image, and selecting the well-exposed pixel points, expressed by the following formula:

$$P(x,y)=\exp\!\left(-\frac{\left(f(x,y)-0.5\right)^2}{2\sigma^2}\right)\qquad(6)$$

wherein $f(x,y)$ represents the image pixel value;
the linear combination of three information measure factors of image contrast, entropy and exposure is represented by the following formula:
Figure 736029DEST_PATH_IMAGE033
(7)
wherein,
Figure 992698DEST_PATH_IMAGE034
is a serial number of the image,C,S,Pthe contrast, entropy and proper exposure are adopted,
Figure 894795DEST_PATH_IMAGE035
is a constant term
Figure 437772DEST_PATH_IMAGE036
Figure 58109DEST_PATH_IMAGE037
Is the total weight value of the weight value,
Figure 711945DEST_PATH_IMAGE038
in order to be a contrast weight, the contrast ratio,
Figure 468548DEST_PATH_IMAGE039
in order to be the weight of the entropy value,
Figure 713585DEST_PATH_IMAGE040
is an exposure weight value;
respectively calculating the corresponding weight maps $W_k$ of all sequence images, each pixel in the weight maps being accumulated and normalized so that the weights of all weight maps sum to 1 at every spatial position, i.e.

$$\overline{W}_k(x,y)=\left[\sum_{k'=1}^{N}W_{k'}(x,y)\right]^{-1}W_k(x,y)\qquad(8)$$

where N is the number of input images.
4. The method for detecting dim small targets under space-based background as claimed in claim 1, wherein:
the step3 specifically comprises the following steps:
the threshold segmentation process determines small targets by combining local threshold segmentation with global threshold segmentation, the global threshold segmentation being an adaptive threshold segmentation applied to the whole image, whose threshold $T(x,y)$ is represented by the following formulas:

$$T(x,y)=\bar f(x,y)+t\sigma\qquad(9)$$

$$f_1(x,y)=\begin{cases}1, & f(x,y)>T(x,y)\\ 0, & \text{otherwise}\end{cases}\qquad(10)$$

wherein $\sigma$ is the gray-level standard deviation of the image, $t$ is an odd number greater than 3, $\bar f(x,y)$ is the gray value of the image after the filtering process, and $w(m,n)$ is the Gaussian filter template value; the image $f_1(x,y)$ after global threshold segmentation is obtained by formula (10);
the local threshold segmentation divides the image into $N\times N$ regions, and the $i$-th segmentation threshold of the different regions is calculated as follows:

$$T_i=m_i+t\sigma_i\qquad(11)$$

$$f_2(x,y)=\begin{cases}1, & f(x,y)>T_i\\ 0, & \text{otherwise}\end{cases}\qquad(12)$$

wherein $T_i$ is the $i$-th segmentation threshold of the different regions, $\sigma_i$ is the standard deviation of the $i$-th segmented region, $i = 1,2,3,\dots,N\times N$, and $m_i$ is the mean value of the $i$-th segmented region; the image $f_2(x,y)$ after local threshold segmentation is obtained by formula (12), and the same $t$ is used in both formulas (9) and (11); the final candidate points of the overall threshold segmentation are obtained as:

$$f_c(x,y)=f_1(x,y)\cdot f_2(x,y)\qquad(13).$$
5. the method of claim 4, wherein the method comprises the following steps:
the step 4 specifically comprises the following steps:
setting a test condition according to track characteristic information of the motion of the space-based target, establishing the candidate points into a tree structure, and pruning the motion track of each node of the tree structure;
wherein $H_1$: the candidate target point is on the motion trajectory; $H_2$: the candidate target point is not on the motion trajectory;
according to the track motion characteristics of the candidate target points, setting a test condition:
$$\begin{cases}H_1, & d\le\delta_d \ \text{and}\ \theta\le\delta_\theta\\ H_2, & \text{otherwise}\end{cases}\qquad(14)$$

wherein $X_{k_1}$, $X_{k_2}$ and $X_{k_3}$ are points on three different frames in a frame set, with centroid coordinates $(x_{k_1},y_{k_1})$, $(x_{k_2},y_{k_2})$ and $(x_{k_3},y_{k_3})$ and frame indices $k_1$, $k_2$ and $k_3$ respectively:

$$d_{12}=\sqrt{(x_{k_2}-x_{k_1})^2+(y_{k_2}-y_{k_1})^2}\qquad(15)$$

$$d_{23}=\sqrt{(x_{k_3}-x_{k_2})^2+(y_{k_3}-y_{k_2})^2}\qquad(16)$$

$$\theta=\arccos\frac{(X_{k_2}-X_{k_1})\cdot(X_{k_3}-X_{k_2})}{d_{12}\,d_{23}}\qquad(17)$$

wherein the thresholds on the distance and the angle are denoted $\delta_d$ and $\delta_\theta$ respectively, their values being selected according to the imaging conditions;
taking a candidate point in the first frame of the image sequence as the initial point from which the search starts; after the search with the first-frame point as the initial point, performing a second search taking the points in the second-frame search results that do not satisfy the test condition as new initial points, and constructing vectors from the initial point to the points of each subsequent frame, forming a vector-to-vector multi-level hypothesis testing parallel discrimination tree;
in the decision stage, each candidate trajectory score from the search stage is evaluated according to the following formula:

$$\begin{cases}\text{true trajectory}, & \sum_{k\in\Omega}Num(k)\ge T_1\\ \text{suspected trajectory}, & T_2\le\sum_{k\in\Omega}Num(k)<T_1\\ \text{false trajectory}, & \sum_{k\in\Omega}Num(k)<T_2\end{cases}\qquad(18)$$

wherein $\Omega$ is the set of image sequence frames required for the search decision, $Num(k)$ is the evaluation of the point trajectory in the $k$-th frame, and $T_1$, $T_2$ are two thresholds selected according to $|\Omega|$;
Reserving the suspected trajectory, switching to the next frame set for judgment again, and executing the following operations:
Figure 917961DEST_PATH_IMAGE085
(19)
wherein
Figure 965551DEST_PATH_IMAGE086
Figure 337627DEST_PATH_IMAGE087
Recording all real tracks, reserving all candidate points forming the real tracks on each frame of image, and determining the candidate points as real target points; and deleting the false tracks, and eliminating corresponding candidate points on each frame of image to finish the detection of all candidate targets on the image sequence.
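The search stage described in this claim can be illustrated with a minimal Python sketch. The threshold values, the greedy one-continuation-per-frame choice, and the requirement that a surviving track cover every frame are illustrative assumptions; the patent's actual test conditions are given by formulas (14) to (19), which are not reproduced in this text, so the distance and angle tests below are generic stand-ins.

```python
import math

def build_trajectories(frames, d_max=5.0, angle_th=math.radians(15)):
    """Greedy multi-frame trajectory search: an initial vector is built from a
    candidate in the first frame to one in the second, then extended frame by
    frame while each new segment stays within the inter-frame motion limit and
    within the angle threshold of the previous segment."""
    trajectories = []
    for p0 in frames[0]:
        for p1 in frames[1]:
            v_prev = (p1[0] - p0[0], p1[1] - p0[1])
            if not 0.0 < math.hypot(*v_prev) <= d_max:
                continue  # violates the inter-frame motion-distance limits
            track, last = [p0, p1], p1
            for frame in frames[2:]:
                nxt = None
                for q in frame:
                    v = (q[0] - last[0], q[1] - last[1])
                    if not 0.0 < math.hypot(*v) <= d_max:
                        continue
                    da = abs(math.atan2(v[1], v[0]) - math.atan2(v_prev[1], v_prev[0]))
                    if min(da, 2.0 * math.pi - da) <= angle_th:
                        nxt = q
                        break
                if nxt is None:
                    break  # no consistent continuation in this frame
                v_prev = (nxt[0] - last[0], nxt[1] - last[1])
                track.append(nxt)
                last = nxt
            if len(track) == len(frames):  # survived every frame: candidate track
                trajectories.append(track)
    return trajectories

frames = [[(0, 0), (9, 9)], [(1, 0), (3, 7)], [(2, 0)], [(3, 0)]]
print(build_trajectories(frames))  # [[(0, 0), (1, 0), (2, 0), (3, 0)]]
```

Only the linear sequence of points survives both the distance and the angle consistency checks; the scattered noise points never form a complete track.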
6. The method for detecting dim small targets under a space-based background as claimed in claim 5, wherein: if a point in the current k-th frame satisfies the test condition, [a record not recoverable from the source] is made, where L(k) is the check of the point condition for the k-th frame; otherwise, when the condition is not satisfied, Num(k)=0 is recorded; after the search of the current image sequence is completed, all the L(k) scores are recorded as candidate trajectories.
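The per-frame bookkeeping in this claim can be sketched as follows; the boolean pass/fail flags and the retention threshold tau are hypothetical stand-ins for the claim's L(k) evaluation and the thresholds selected in the decision stage.

```python
def score_trajectory(checks, tau=0.8):
    """Accumulate a trajectory score from per-frame pass/fail flags: a frame
    whose point satisfies the test condition contributes 1, otherwise 0; the
    trajectory is kept as a candidate when the pass ratio reaches tau."""
    score = sum(1 for ok in checks if ok)
    return score, score / len(checks) >= tau
```

For example, a trajectory passing the test in 3 of 4 frames scores 3 and is rejected at tau=0.8, while 4 of 5 frames scores 4 and is retained.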
7. The method for detecting dim small targets under a space-based background as claimed in claim 6, wherein the search comprises the following steps:
step1: selecting candidate points in the first frame and the second frame, respectively, as initial points of the search;
step2: searching the subsequent image frames for the first candidate point p_1 that satisfies the search-radius-range condition and the maximum and minimum inter-frame motion distances, and constructing an initial vector v_0 from the initial point p_0 to it; Num(k)=0 denotes that no candidate point in the current k-th frame satisfies the search-radius-range condition, in which case the search continues in the next frame until a point satisfying the search radius range is found;
step3: after the initial vector v_0 is determined, searching the remaining frames for candidate points satisfying the search-radius-range condition, constructing trajectory vectors with them, and determining whether each trajectory vector and v_0 satisfy the check condition in equation (14).
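The check between the initial vector and a trajectory vector, the role played by equation (14), can be sketched as a distance-and-angle consistency test. The threshold values d_th and angle_th are illustrative, since the patent does not reproduce its chosen values in this text.

```python
import math

def vectors_consistent(v0, v, d_th=5.0, angle_th=math.radians(15)):
    """Distance-and-angle consistency test between the initial vector v0 and a
    candidate trajectory vector v: their lengths must differ by at most d_th
    and the included angle must not exceed angle_th."""
    d0, d1 = math.hypot(*v0), math.hypot(*v)
    if d0 == 0.0 or d1 == 0.0 or abs(d1 - d0) > d_th:
        return False
    cos_a = (v0[0] * v[0] + v0[1] * v[1]) / (d0 * d1)
    return math.acos(max(-1.0, min(1.0, cos_a))) <= angle_th
```

A vector nearly parallel to v0 and of similar length passes; a perpendicular vector, or one whose length differs by more than d_th, is rejected.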
8. A system for detecting dim small targets under a space-based background, characterized in that the system comprises:
a decomposition module, which performs pyramid decomposition on the original space-based dim small target image, applying Gaussian pyramid decomposition to the weight map and Laplacian pyramid decomposition to the original image, respectively, to obtain decomposed images;
a fusion module, which performs weighted fusion on the decomposed images by combining the Laplacian pyramid with three information-measure factors, namely image contrast, entropy and exposure; after the weight pyramid of each image is calculated at the different resolutions, image content and image details at different scales are obtained, yielding the image-pyramid weighted-fusion result image;
a candidate-point module, which applies adaptive threshold segmentation to the image-pyramid weighted-fusion result image, calculates segmentation thresholds for different regions using a local threshold segmentation method, combines the whole image with the local images, and determines space-based dim small target candidate points;
a real-target-point module, which sets detection conditions, builds the space-based dim small target candidate points obtained after threshold segmentation into a tree structure, and prunes the motion trajectory at each node of the tree structure; the real tracks are output, all candidate points forming the real tracks are retained on each frame image, and the real target points are determined.
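As a rough illustration of the decomposition and fusion modules, the following NumPy sketch blends Laplacian pyramids of the source images with Gaussian pyramids of their weight maps. A 3x3 box blur stands in for the Gaussian kernel, and the per-image weight maps are taken as given; computing them from contrast, entropy and exposure, as the claim specifies, is not shown.

```python
import numpy as np

def gaussian_down(img):
    # 2x downsample after a 3x3 box blur (a stand-in for a Gaussian kernel)
    k = np.ones((3, 3)) / 9.0
    pad = np.pad(img, 1, mode='edge')
    blur = sum(pad[i:i + img.shape[0], j:j + img.shape[1]] * k[i, j]
               for i in range(3) for j in range(3))
    return blur[::2, ::2]

def upsample(img, shape):
    # nearest-neighbour 2x upsample, cropped to the target shape
    return np.kron(img, np.ones((2, 2)))[:shape[0], :shape[1]]

def fuse(images, weights, levels=3):
    """Weighted Laplacian-pyramid fusion: each source image is decomposed into
    a Laplacian pyramid, each weight map into a Gaussian pyramid, corresponding
    levels are blended, and the result is collapsed back into one image."""
    w_sum = np.sum(weights, axis=0) + 1e-12   # normalize weights per pixel
    weights = [w / w_sum for w in weights]
    fused_levels = None
    for img, w in zip(images, weights):
        gp_i, gp_w = [img], [w]
        for _ in range(levels - 1):
            gp_i.append(gaussian_down(gp_i[-1]))
            gp_w.append(gaussian_down(gp_w[-1]))
        lap = [gp_i[l] - upsample(gp_i[l + 1], gp_i[l].shape)
               for l in range(levels - 1)]
        lap.append(gp_i[-1])                  # top level keeps coarse content
        contrib = [lap[l] * gp_w[l] for l in range(levels)]
        fused_levels = contrib if fused_levels is None else \
            [a + b for a, b in zip(fused_levels, contrib)]
    out = fused_levels[-1]                    # collapse from coarse to fine
    for l in range(levels - 2, -1, -1):
        out = upsample(out, fused_levels[l].shape) + fused_levels[l]
    return out
```

With constant inputs the fusion reduces to a per-pixel weighted average, which is a quick sanity check on the normalization and collapse steps.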
9. A computer-readable storage medium having a computer program stored thereon, characterized in that: when executed by a processor, the program implements the method for detecting dim small targets under a space-based background as claimed in any one of claims 1 to 7.
10. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that: when executing the computer program, the processor implements the method for detecting dim small targets under a space-based background as claimed in any one of claims 1 to 7.
CN202211177614.8A 2022-09-27 2022-09-27 Method for detecting dim small target under space-based background Pending CN115272871A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211177614.8A CN115272871A (en) 2022-09-27 2022-09-27 Method for detecting dim small target under space-based background

Publications (1)

Publication Number Publication Date
CN115272871A true CN115272871A (en) 2022-11-01

Family

ID=83756598

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115578476A (en) * 2022-11-21 2023-01-06 山东省标筑建筑规划设计有限公司 Efficient storage method for urban and rural planning data
CN115861134A (en) * 2023-02-24 2023-03-28 长春理工大学 Star map processing method under space-based background
CN117974460A (en) * 2024-03-29 2024-05-03 深圳中科精工科技有限公司 Image enhancement method, system and storage medium

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109767439A (en) * 2019-01-10 2019-05-17 中国科学院上海技术物理研究所 A kind of multiple dimensioned difference of self-adapting window and the object detection method of bilateral filtering

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Liu Xinlong: "Research on High Dynamic Range Imaging Technology for Visible-Light Cameras", China Master's Theses Full-text Database, Information Science and Technology *
Wu Xiaojun: "High Dynamic Range Image Generation Based on Multi-Exposure Images", China Master's Theses Full-text Database, Information Science and Technology *
Zhu Hanlu: "Research on Key Technologies of Infrared Detection and Recognition of Space-Based Aerial Moving Targets", China Doctoral Dissertations Full-text Database, Engineering Science and Technology I *
Li Mengyang: "Research on Detection Methods for Dim Small Space Targets under Complex Space-Based Background", China Doctoral Dissertations Full-text Database, Engineering Science and Technology I *

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination