CN115272871A - Method for detecting dim small target under space-based background - Google Patents
- Publication number
- CN115272871A CN115272871A CN202211177614.8A CN202211177614A CN115272871A CN 115272871 A CN115272871 A CN 115272871A CN 202211177614 A CN202211177614 A CN 202211177614A CN 115272871 A CN115272871 A CN 115272871A
- Authority
- CN
- China
- Prior art keywords
- image
- pyramid
- frame
- point
- space
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
- 238000000034 method Methods 0.000 title claims abstract description 57
- 230000011218 segmentation Effects 0.000 claims abstract description 53
- 238000000354 decomposition reaction Methods 0.000 claims abstract description 39
- 238000001514 detection method Methods 0.000 claims abstract description 37
- 238000012360 testing method Methods 0.000 claims abstract description 30
- 230000004927 fusion Effects 0.000 claims abstract description 19
- 230000033001 locomotion Effects 0.000 claims description 40
- 238000001914 filtration Methods 0.000 claims description 16
- 238000011156 evaluation Methods 0.000 claims description 12
- 238000013138 pruning Methods 0.000 claims description 10
- 238000004590 computer program Methods 0.000 claims description 9
- 230000008569 process Effects 0.000 claims description 8
- 230000004044 response Effects 0.000 claims description 6
- 238000005070 sampling Methods 0.000 claims description 5
- 230000006870 function Effects 0.000 claims description 3
- 230000009471 action Effects 0.000 description 4
- 238000005516 engineering process Methods 0.000 description 4
- 238000012544 monitoring process Methods 0.000 description 4
- 238000004422 calculation algorithm Methods 0.000 description 2
- 238000004364 calculation method Methods 0.000 description 2
- 238000009795 derivation Methods 0.000 description 2
- 238000012545 processing Methods 0.000 description 2
- 230000009286 beneficial effect Effects 0.000 description 1
- 230000007547 defect Effects 0.000 description 1
- 238000011161 development Methods 0.000 description 1
- 230000018109 developmental process Effects 0.000 description 1
- 238000010586 diagram Methods 0.000 description 1
- 239000012634 fragment Substances 0.000 description 1
- 230000005484 gravity Effects 0.000 description 1
- 238000003331 infrared imaging Methods 0.000 description 1
- 238000012423 maintenance Methods 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 230000008447 perception Effects 0.000 description 1
- 230000000750 progressive effect Effects 0.000 description 1
- 230000001681 protective effect Effects 0.000 description 1
- 230000001629 suppression Effects 0.000 description 1
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/80—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
- G06V10/806—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10032—Satellite or aerial image; Remote sensing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30181—Earth observation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30241—Trajectory
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Health & Medical Sciences (AREA)
- Artificial Intelligence (AREA)
- Computing Systems (AREA)
- Databases & Information Systems (AREA)
- Evolutionary Computation (AREA)
- General Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Software Systems (AREA)
- Image Analysis (AREA)
Abstract
The invention relates to a method for detecting a dim small target under a space-based background, in the technical field of target detection. Pyramid decomposition is performed on an original space-based dim small target image to obtain decomposed images, and the decomposed images are weighted and fused to obtain an image pyramid weighted fusion result image; adaptive threshold segmentation is applied to the image pyramid weighted fusion result image to determine space-based dim small target candidate points; a test condition is set and the real target points are determined. The invention uses an image pyramid to scale the image at multiple scales, enlarging the data set and diversifying the resolution; fusing the multi-scale images expands the dynamic range of the image, improves performance indexes such as overall definition, contrast and gray variance, improves image quality, and provides more information for target detection.
Description
Technical Field
The invention relates to the technical field of target detection, in particular to a method for detecting dim small targets under a space-based background.
Background
The space target space-based monitoring system is the guarantee of space asset safety, is a basic national strategic facility, is an important development direction of future spatial situation perception, is a leading-edge technology in the field of space detection, and has important strategic significance for effective execution of national space missions and maintenance of national safety systems.
Currently, space-based space object detection systems include radar detection, infrared detection, and visible light detection. Radar detection uses electromagnetic waves or laser to detect a target, and has strong anti-interference capability, high positioning accuracy, and the ability to capture small, long-distance space targets; its drawback is that radar detection equipment is heavy and places high demands on the load capacity of the space-based platform. Infrared detection uses infrared imaging technology to detect targets in the shadow region, but suffers from short detection distance, weak target signals, low signal-to-noise ratio, and strong sensitivity to background fluctuation; background noise can sometimes even overwhelm the target, making space targets difficult to detect. Compared with the former two, space-based visible light detection has the following advantages: (1) for a space target operating outside the shadow region, sunlight illumination makes its visible-light signature obvious and easy to detect; moreover, visible light detection technology is mature, and visible-band detection can meet the requirements for detecting various space targets such as satellites, space debris, booster rockets and protective covers; (2) visible light detection can acquire high-resolution images, each frame carrying a large amount of information, the detection distance is long, and multiple targets can be detected simultaneously; (3) a visible light image processing system is relatively low-cost, small, and easy to carry on a space-based platform, which facilitates miniaturization.
The space-based visible light target monitoring system can master and sense the spatial target situation in real time and timely react to dangerous targets when necessary. The importance and significance of researching space-based space target detection technology are as follows: the complex non-uniform noise background in the image shot in the space-based environment is suppressed, the fast and effective detection and high-precision positioning of the small targets with different tracks and low signal-to-noise ratios are realized, the detection capability of a space target monitoring system can be improved, and the space target monitoring level is greatly improved.
Disclosure of Invention
The invention provides a method for detecting dim and small targets under a space-based background, which overcomes the influence of stray light, noise and the like on space target detection in a star map, improves the performance indexes of overall definition, contrast and gray variance of an image, and improves image quality on that basis. A background noise suppression method suitable for space-based images is developed, which eliminates the interference of complex background noise in a star map while retaining the space target information.
It should be noted that, in this document, relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
The invention provides a method for detecting dim small targets under a space-based background, which provides the following technical scheme:
a method for detecting dim small targets in a space-based background, the method comprising the steps of:
step 1: carrying out pyramid decomposition on the original space-based dim small target image, and respectively carrying out Gaussian pyramid decomposition and Laplacian pyramid decomposition on the weight map and the original image to obtain decomposed images;
step 2: weighting and fusing the decomposed images through the Laplacian pyramid combined with the three information measure factors of image contrast, entropy and exposure, and, after calculating the weight pyramid of each image at different resolutions, obtaining image content and image details at different scales to produce the image pyramid weighted fusion result image;
step 3: applying adaptive threshold segmentation to the image pyramid weighted fusion result image, calculating the segmentation thresholds of different regions with a local threshold segmentation method, and combining the global and the local to determine the space-based dim small target candidate points;
step 4: setting a test condition, building the space-based dim small target candidate points obtained after threshold segmentation into a tree structure, and pruning the motion trajectory of each node of the tree structure; outputting the real trajectories, retaining on each frame of image all candidate points that form a real trajectory, and determining the real target points.
Preferably, the step1 specifically comprises:
carrying out pyramid decomposition on an original space-based dark small target image, respectively carrying out Gaussian pyramid decomposition and Laplacian pyramid decomposition on a weight map and the original image, carrying out low-level down-sampling on the Gaussian pyramid to obtain a high-level image, carrying out Gaussian blur on each level, then carrying out 2-level down-sampling on each level to obtain the original data image, wherein the size of each level of image of the Gaussian pyramid is one fourth of that of a lower-level image, and carrying out the image after the Gaussian pyramid decomposition by the following formula:
wherein G is l Is a Gaussian pyramidLayer image, C l 、R l Is as followsThe total number of rows and the total number of columns of the layer image,as a Gaussian filter templatemLine for mobile communication terminalnThe column values are set to the column values,the size of the powder is 5 multiplied by 5,l ev the number of layers of the gaussian pyramid is represented,decomposing the image for a Gaussian pyramidLine for mobile communication terminalThe value of the column;
a Laplacian pyramid is introduced to retain image details, so that the detail information of the original image can be recovered after the images are reconstructed and fused; the Laplacian pyramid decomposition method comprises the following steps:

the $l$-th layer original image data is Gaussian-blurred and down-sampled to obtain $G_{l+1}$, and $G_{l+1}$ is up-sampled and expanded to obtain the Laplacian pyramid decomposition image $I_l^*$:

$$I_l^*(i,j)=4\sum_{m=-2}^{2}\sum_{n=-2}^{2}w(m,n)\,G_{l+1}\!\left(\frac{i+m}{2},\frac{j+n}{2}\right),\qquad\frac{i+m}{2},\frac{j+n}{2}\in Z$$

where $Z$ represents the positive integers and the image $I_l^*$ has the same size as the $l$-th layer. The $l$-th layer image $G_l$ and $I_l^*$ are then subtracted to obtain the $l$-th layer $L_l$ containing the detail information:

$$L_l=\begin{cases}G_l-I_l^{*}, & 0\le l<l_{ev}\\ G_{l_{ev}}, & l=l_{ev}\end{cases}$$

where $w(x,y)$ is the value in row $x$, column $y$ of the Gaussian filter template, and the top layer of the Laplacian pyramid is taken as the $l_{ev}$-th layer Gaussian image $G_{l_{ev}}$.
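The Gaussian and Laplacian pyramid construction described above can be sketched as follows; this is an illustrative Python/NumPy sketch, not the patent's implementation, and it assumes the 5 × 5 template is the standard separable kernel built from [1, 4, 6, 4, 1]/16:

```python
import numpy as np

# Assumed 5x5 separable Gaussian template: w(m, n) = k[m] * k[n], k = [1,4,6,4,1]/16.
KERNEL_1D = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0

def _blur(img):
    """Separable 5x5 Gaussian blur with replicated borders."""
    padded = np.pad(img, 2, mode="edge")
    tmp = sum(KERNEL_1D[k] * padded[:, k:k + img.shape[1]] for k in range(5))
    return sum(KERNEL_1D[k] * tmp[k:k + img.shape[0], :] for k in range(5))

def gaussian_pyramid(img, levels):
    """G_l: blur G_{l-1}, then keep every other row and column (quarter size per layer)."""
    pyr = [img.astype(float)]
    for _ in range(levels):
        pyr.append(_blur(pyr[-1])[::2, ::2])
    return pyr

def expand(img, shape):
    """I_l^*: up-sample G_{l+1} to the size of layer l by zero insertion and a 4x-scaled blur."""
    up = np.zeros(shape)
    up[::2, ::2] = img[:(shape[0] + 1) // 2, :(shape[1] + 1) // 2]
    return 4.0 * _blur(up)

def laplacian_pyramid(img, levels):
    """L_l = G_l - I_l^* for l < levels; the top layer stores the top Gaussian layer itself."""
    g = gaussian_pyramid(img, levels)
    lap = [g[l] - expand(g[l + 1], g[l].shape) for l in range(levels)]
    lap.append(g[-1])
    return lap
```

Because the top layer stores the residual Gaussian image, adding each $L_l$ back to the expanded layer above it reconstructs the original image exactly, which is what makes the later weighted fusion able to restore detail.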
Preferably, the step2 specifically comprises:
the method comprises the following steps of performing Laplace filtering on an original sky-based dim small target image, taking an absolute value of a coefficient obtained through filtering response, reacting contrast information at each pixel by using an obtained absolute value response coefficient, and expressing a discrete form of a Laplace operator second-order partial derivative for filtering through the following formula:
in the formula,which represents the gray-scale value of the image,in the form of the first partial derivative,is composed ofA second partial derivative;
entropy in gray-scale image evaluation is used as an index of how much information an image contains; the entropy $e$ is expressed by the following formula:

$$e=-\sum_{i=0}^{L-1}p_i\log p_i$$

where $p_i$ represents the image histogram (the proportion of pixels with gray value $i$), $L$ represents the number of image gray levels, and $i$ indexes the $i$-th histogram bin;
the exposure of the space-based dim small target image is measured and pixel points with good exposure are selected; $E(x,y)$ is expressed by the following formula:

$$E(x,y)=\exp\left(-\frac{(V(x,y)-0.5)^2}{2\sigma_e^2}\right)$$

where $V(x,y)$ represents the image pixel value normalized to $[0,1]$ and $\sigma_e$ controls the width of the exposure weighting;
the linear combination of the three information measure factors of image contrast, entropy and exposure is expressed by the following formula:

$$W_k=\lambda+\omega_C C_k+\omega_S S_k+\omega_P P_k$$

where $k$ is the serial number of the image, $C$, $S$ and $P$ are the contrast, entropy and proper exposure, $\lambda$ is a constant term, $W_k$ is the total weight value, $\omega_C$ is the contrast weight, $\omega_S$ is the entropy weight, and $\omega_P$ is the exposure weight;

the corresponding weight map of each sequence image is calculated, and each pixel is normalized cumulatively so that the weights of all the weight maps sum to 1 at every spatial position, i.e.

$$\hat{W}_k(x,y)=\frac{W_k(x,y)}{\sum_{n=1}^{N}W_n(x,y)}$$

where N is the number of input images.
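A minimal Python/NumPy sketch of the three information measure factors and the per-pixel weight normalization; pixel values are assumed normalized to [0, 1], the exposure width and the weight values are illustrative assumptions, and the function names are hypothetical:

```python
import numpy as np

def contrast(img):
    """|Laplacian response|: the discrete second derivative reflects contrast at each pixel."""
    p = np.pad(img, 1, mode="edge")
    lap = p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4.0 * img
    return np.abs(lap)

def entropy(img, bins=256):
    """e = -sum p_i log p_i over the gray-level histogram (one scalar per image)."""
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins so log is defined
    return float(-(p * np.log(p)).sum())

def exposure(img, sigma=0.2):
    """E(x, y): Gaussian closeness of each pixel to mid-gray 0.5 (sigma is an assumed width)."""
    return np.exp(-((img - 0.5) ** 2) / (2.0 * sigma ** 2))

def fusion_weights(images, w_c=1.0, w_s=1.0, w_p=1.0, lam=1e-12):
    """Linear combination of the three measures, normalized so the weights sum to 1 per pixel."""
    maps = [lam + w_c * contrast(im) + w_s * entropy(im) + w_p * exposure(im) for im in images]
    total = np.sum(maps, axis=0)
    return [m / total for m in maps]
```

Fusion then proceeds by blending the Laplacian pyramid of each image with the Gaussian pyramid of its normalized weight map, layer by layer, before collapsing the result.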
Preferably, the step3 specifically comprises:
in the threshold segmentation process, a method combining local threshold segmentation and global threshold segmentation is adopted to determine weak small targets. The global threshold segmentation applies an adaptive threshold $T_G$ to the whole image:

$$T_G=\mu+t\sigma$$

where $\mu$ is the gray mean of the image after filtering, $\sigma$ is the gray standard deviation of the image, and $t$ is an odd number greater than 3;

the local threshold segmentation divides the image into N × N regions, and the segmentation threshold of the $i$-th region is calculated separately:

$$T_i=\mu_i+t\sigma_i,\qquad i=1,2,3,\dots,N\times N$$

where $T_i$ is the segmentation threshold of the $i$-th region, $\sigma_i$ is the standard deviation of the $i$-th divided region, and $\mu_i$ is the mean value of the $i$-th divided region;

the same value of $t$ is used in both formulas, and the final candidate points of the overall threshold segmentation are the pixels whose gray value exceeds both the global threshold $T_G$ and the local threshold $T_i$ of the region they fall in.
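The combined global/local segmentation can be sketched as follows, assuming (consistent with the description above) that a candidate pixel must exceed both the global and its regional threshold; the function name and the synthetic test image are illustrative:

```python
import numpy as np

def candidate_points(img, t=5, n=4):
    """Keep pixels above both T_G = mu + t*sigma (global) and T_i = mu_i + t*sigma_i (local)."""
    t_g = img.mean() + t * img.std()               # global adaptive threshold T_G
    local = np.empty_like(img)
    bh, bw = img.shape[0] // n, img.shape[1] // n  # split the image into n x n regions
    for i in range(n):
        for j in range(n):
            block = img[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            local[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw] = block.mean() + t * block.std()
    ys, xs = np.nonzero((img > t_g) & (img > local))
    return list(zip(ys.tolist(), xs.tolist()))
```

On a synthetic frame of Gaussian background noise with one bright pixel, only the bright pixel survives both thresholds, while pixels that are bright only relative to their local region are rejected by the global test.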
preferably, the step 4 specifically includes:
setting a test condition according to the trajectory characteristic information of the motion of the space-based target, building the candidate points into a tree structure, and pruning the motion trajectory of each node of the tree structure;

wherein $H_1$ denotes that the candidate target point is on the motion trajectory, and $H_2$ denotes that the candidate target point is not on the motion trajectory;

according to the trajectory motion characteristics of the candidate target points, the test condition is set as

$$\bigl|\,\|\vec{v}_{12}\|-\|\vec{v}_{23}\|\,\bigr|\le\varepsilon_d\quad\text{and}\quad\theta(\vec{v}_{12},\vec{v}_{23})\le\varepsilon_\theta$$

where $P_{k_1}$, $P_{k_2}$ and $P_{k_3}$ are points on three different frames in a frame set, with centroid coordinates $(x_{k_1},y_{k_1})$, $(x_{k_2},y_{k_2})$ and $(x_{k_3},y_{k_3})$ and frame indices $k_1$, $k_2$ and $k_3$, and $\vec{v}_{12}$ and $\vec{v}_{23}$ represent the inter-frame motion vectors:

$$\vec{v}_{12}=\left(\frac{x_{k_2}-x_{k_1}}{k_2-k_1},\,\frac{y_{k_2}-y_{k_1}}{k_2-k_1}\right),\qquad\vec{v}_{23}=\left(\frac{x_{k_3}-x_{k_2}}{k_3-k_2},\,\frac{y_{k_3}-y_{k_2}}{k_3-k_2}\right)$$

the thresholds for distance and angle are denoted $\varepsilon_d$ and $\varepsilon_\theta$ respectively, and their values are selected according to the motion characteristics of the image sequence;
Taking a candidate point in a first frame in an image sequence as an initial pointStarting to search, and after searching by taking the point in the first frame as an initial point, performing second search by taking the point which does not meet the test condition in the search result of the second frame as a new initial point, and constructing a vector with the points of each subsequent frame by taking the initial point as a starting point to form a multi-stage hypothesis test parallel judgment tree from the vector to the vector;
in the decision stage, the score of each candidate trajectory from the search stage is evaluated according to the following rule:

$$\begin{cases}\Lambda(T)\ge\eta_1, & T\ \text{is judged a real trajectory}\\ \eta_2<\Lambda(T)<\eta_1, & T\ \text{is retained as a suspected trajectory}\\ \Lambda(T)\le\eta_2, & T\ \text{is judged a false trajectory}\end{cases}$$

where $F$ is the set of image sequence frames required for a search decision, $\Lambda(T)$ is the evaluation (score) of the point trajectory $T$ over $F$, and $\eta_1$ and $\eta_2$ are two thresholds selected on the basis of $F$, with $\eta_2<\eta_1$.
Reserving the suspected track, switching to the next frame set for judging again, and executing the following operations:
recording all real tracks, reserving all candidate points forming the real tracks on each frame of image, and determining the candidate points as real target points; and deleting the false tracks, and eliminating corresponding candidate points on each frame of image to finish the detection of all candidate targets on the image sequence.
Preferably, if it is currently the secondRecording if a point in the frame satisfies a test condition,Is a pair ofChecking for point condition, otherwise, recording when not satisfiedAfter completing the search of the current image sequence, allThe score is recorded as a candidate trajectory.
Preferably, the search comprises the steps of:
step1: respectively selecting candidate points in the first frame and the second frame as initial points of searching;
step2: searching the first candidate point satisfying the searching radius range condition, the maximum inter-frame motion distance and the minimum inter-frame motion distance in the subsequent image frameAnd initial pointConstructing an initial vector(ii) a Num (k) =0 denotes the current thThe candidate points which do not meet the condition of the search radius range in the frame are continued to the next frame for searching until the points which meet the search radius range are found;
Step3: in determining an initial vectorThen, candidate points satisfying the condition of searching radius range in the rest frames are searchedConstructing a trajectory vectorDetermine it andwhether the check condition in equation (14) is satisfied.
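The search and pruning above can be illustrated with a toy Python sketch; the point format (frame index, row, column), the threshold values, and the greedy choice of the first consistent extension are assumptions for illustration, not the patent's parameters:

```python
import math

def consistent(p1, p2, p3, eps_d=1.5, eps_theta=0.35):
    """Test condition: per-frame velocity magnitude and direction must agree (hypothesis H1)."""
    (k1, y1, x1), (k2, y2, x2), (k3, y3, x3) = p1, p2, p3
    v12 = ((x2 - x1) / (k2 - k1), (y2 - y1) / (k2 - k1))
    v23 = ((x3 - x2) / (k3 - k2), (y3 - y2) / (k3 - k2))
    if abs(math.hypot(*v12) - math.hypot(*v23)) > eps_d:
        return False                   # inter-frame distances differ too much
    ang = abs(math.atan2(v12[1], v12[0]) - math.atan2(v23[1], v23[0]))
    return min(ang, 2 * math.pi - ang) <= eps_theta

def search_tracks(frames, min_hits=4, **kw):
    """Grow candidate trajectories frame by frame, pruning extensions that fail the test."""
    tracks = []
    for a in frames[0]:
        for b in frames[1]:
            branch = [a, b]            # initial vector from the first two frames
            for pts in frames[2:]:
                nxt = [q for q in pts if consistent(branch[-2], branch[-1], q, **kw)]
                if nxt:
                    branch.append(nxt[0])   # greedy: keep the first consistent candidate
            if len(branch) >= min_hits:     # score = number of consistent points
                tracks.append(branch)
    return tracks
```

A target moving at constant velocity through clutter is kept, while erratic clutter branches fail the velocity-consistency test and are pruned before reaching the score threshold.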
A system for detecting small dark objects in a space-based background, the system comprising:
the decomposition module is used for carrying out pyramid decomposition on the original space-based dim small target image, and carrying out Gaussian pyramid decomposition and Laplacian pyramid decomposition on the weight map and the original image respectively to obtain decomposed images;
the fusion module performs weighted fusion on the decomposed images through a Laplacian pyramid in combination with three information measure factors of image contrast, entropy and exposure, and obtains image contents and image details of different scales after respectively calculating a weight pyramid of each image under different resolutions to obtain an image pyramid weighted fusion result image;
the candidate point module is used for adopting self-adaptive threshold segmentation on the image pyramid weighted fusion result image, respectively calculating segmentation thresholds of different areas by using a local threshold segmentation method, and determining small space-based dim target candidate points by combining the whole part and the local part;
the real target point module is used for setting a test condition, building the space-based dim small target candidate points obtained after threshold segmentation into a tree structure, and pruning the motion trajectory of each node of the tree structure; and outputting the real trajectories, retaining all candidate points forming the real trajectories on each frame of image, and determining the real target points.
A computer-readable storage medium, having stored thereon a computer program for execution by a processor for implementing a method of dim small object detection in a space-based background as claimed in any one of claims 1 to 7.
A computer device comprising a memory storing a computer program and a processor implementing the method of dim small target detection in a space-based context according to any one of claims 1-7 when executing the computer program.
The invention has the following beneficial effects:
the invention adopts the image pyramid to carry out multi-scale scaling on the image, thereby realizing the purposes of increasing the data set and diversifying the resolution, fusing the multi-scale image, expanding the dynamic range of the image, improving the performance indexes such as the integral definition, the contrast, the gray variance and the like of the image, improving the image quality and providing more information for target detection.
The invention adopts a multilevel hypothesis testing method to detect the dim small target with low signal-to-noise ratio with unknown position and speed in the image sequence. According to the method, the motion tracks of a large number of candidate targets in the image sequence are constructed into a tree structure, each frame in the sequence image is pruned by assuming a test condition, the algorithm space complexity and time complexity are reduced, the detection probability is improved, and the number of false alarms is reduced.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the embodiments or the prior art descriptions will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
FIG. 1 is a flow chart of the detection method of the present invention;
FIG. 2 is an acquired original space-based target image;
FIG. 3 is a diagram of the result of pyramid-weighted fusion of images;
FIG. 4 is a graph of threshold segmentation results based on an image pyramid algorithm;
FIG. 5 is a graph of the results of space-based dim small target detection.
Detailed Description
The technical solutions of the present invention will be described clearly and completely with reference to the accompanying drawings, and it should be understood that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the description of the present invention, it should be noted that the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc., indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, and are only for convenience of description and simplicity of description, but do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus, should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
In the description of the present invention, it should be noted that, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "connected" are to be construed broadly and may be, for example, fixedly connected, detachably connected, or integrally connected; can be mechanically or electrically connected; they may be connected directly or indirectly through intervening media, or they may be interconnected between two elements. The specific meanings of the above terms in the present invention can be understood in specific cases to those skilled in the art.
In addition, the technical features involved in the different embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
The present invention will be described in detail with reference to specific examples.
The first embodiment is as follows:
as shown in fig. 1 to 5, the specific optimized technical solution adopted to solve the above technical problems of the present invention is: the invention relates to a method for detecting dim small targets under a space-based background.
A method of detecting dim small targets in a space-based background, the method comprising the steps of:
step 1: performing pyramid decomposition on the original space-based dim small target image, and performing Gaussian pyramid decomposition and Laplacian pyramid decomposition on the weight map and the original image respectively to obtain decomposed images;
step 2: weighting and fusing the decomposed images through the Laplacian pyramid combined with the three information measure factors of image contrast, entropy and exposure, and, after calculating the weight pyramid of each image at different resolutions, obtaining image content and image details at different scales to produce the image pyramid weighted fusion result image;
step 3: applying adaptive threshold segmentation to the image pyramid weighted fusion result image, calculating the segmentation thresholds of different regions with a local threshold segmentation method, and combining the global and the local to determine the space-based dim small target candidate points;
step 4: setting a test condition, building the space-based dim small target candidate points obtained after threshold segmentation into a tree structure, and pruning the motion trajectory of each node of the tree structure; outputting the real trajectories, retaining on each frame of image all candidate points that form a real trajectory, and determining the real target points.
The invention provides a method for detecting dim small targets under a space-based background, which mainly comprises two characteristics that firstly, an image pyramid is utilized to carry out pyramid decomposition on an original image, a Laplacian pyramid of the image is established for the decomposed Gaussian pyramid, the image details are reserved,
and reconstructing the original image by combining the three fusion strategies weighted by the information measure factors.
Secondly, the motion tracks of a large number of candidate targets in the image sequence are constructed into a tree structure by adopting a multi-level hypothesis testing method,
and pruning each frame in the sequence image by assuming a test condition so as to obtain a detection result of the small dark target on the day basis.
The second concrete embodiment:
the second embodiment of the present application differs from the first embodiment only in that:
Step 1 specifically comprises the following steps:

pyramid decomposition is performed on the original space-based dim small target image, with Gaussian pyramid decomposition and Laplacian pyramid decomposition applied to the weight map and the original image respectively. The Gaussian pyramid obtains each higher-layer image from the layer below: the 0-th layer (the original data image) is Gaussian-blurred and then down-sampled by dropping alternate rows and columns to obtain the next pyramid layer, and so on, so that each layer is Gaussian-blurred and down-sampled by a factor of 2 and the size of each layer of the Gaussian pyramid is one quarter of that of the layer below. The image after Gaussian pyramid decomposition is given by the following formula:

$$G_l(i,j)=\sum_{m=-2}^{2}\sum_{n=-2}^{2}w(m,n)\,G_{l-1}(2i+m,\,2j+n),\qquad 1\le l\le l_{ev}$$

where $G_l$ is the $l$-th layer image of the Gaussian pyramid, $C_l$ and $R_l$ are the total number of rows and the total number of columns of the $l$-th layer image, $w(m,n)$ is the value in row $m$, column $n$ of the 5 × 5 Gaussian filter template, $l_{ev}$ represents the number of layers of the Gaussian pyramid, and $G_l(i,j)$ is the value in row $i$, column $j$ of the Gaussian pyramid decomposition image. In the invention, the maximum number of decomposable layers is taken, determined by the size of the original image.
During decomposition of the Gaussian pyramid, the Gaussian filtering loses the high-frequency details of the image. Introducing a Laplacian pyramid retains the image details, so that the detail information of the original image can be restored after image reconstruction and fusion. The Laplacian pyramid decomposition method is as follows:
The l-th layer of original image data is Gaussian-blurred and downsampled to obtain G_{l+1}; G_{l+1} is then upsampled and expanded to obtain the Laplacian pyramid decomposition image I_l^*:
where Z represents a positive integer restricting the sampled coordinates to integer positions. The image I_l^* has the same size as the l-th layer, so the l-th layer image G_l and I_l^* are subtracted to obtain the l-th layer image L_l containing the detail information:
L_l = G_l − I_l^*
where w(x, y) is the value in row x, column y of the Gaussian filter template, and I_l^* is the expanded prediction of G_l obtained from G_{l+1}.
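As a concrete illustration, the Gaussian and Laplacian pyramid construction described above can be sketched in Python with NumPy. This is a minimal sketch, not the patent's implementation: the 5×5 Burt-Adelson kernel, the reflect-padded convolution, and the nearest-neighbour upsampling are assumptions made for illustration.

```python
import numpy as np

def gaussian_kernel5():
    # 5x5 separable kernel, outer product of [1, 4, 6, 4, 1]/16 (assumed template)
    k = np.array([1., 4., 6., 4., 1.]) / 16.0
    return np.outer(k, k)

def blur(img, w):
    # 'same' 2D convolution with reflected borders (simple, not optimized)
    pad = w.shape[0] // 2
    p = np.pad(img, pad, mode="reflect")
    out = np.zeros_like(img, dtype=float)
    for m in range(w.shape[0]):
        for n in range(w.shape[1]):
            out += w[m, n] * p[m:m + img.shape[0], n:n + img.shape[1]]
    return out

def build_pyramids(img, levels):
    """Gaussian pyramid: blur then drop every other row/column.
    Laplacian pyramid: L_l = G_l - expand(G_{l+1})."""
    w = gaussian_kernel5()
    gauss = [img.astype(float)]
    for _ in range(levels):
        g = blur(gauss[-1], w)[::2, ::2]   # each layer is 1/4 the size of the one below
        gauss.append(g)
    lap = []
    for l in range(levels):
        up = np.repeat(np.repeat(gauss[l + 1], 2, axis=0), 2, axis=1)
        up = blur(up, w)[: gauss[l].shape[0], : gauss[l].shape[1]]
        lap.append(gauss[l] - up)          # detail image retained at layer l
    return gauss, lap
```

Reconstruction simply reverses the loop: expand each Gaussian layer, add back the corresponding Laplacian detail image, and the original resolution is recovered.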
The third concrete embodiment:
the difference between the third embodiment and the second embodiment of the present application is only that:
the step2 specifically comprises the following steps:
Laplace filtering is performed on the original space-based dim small target image, and the absolute value of the filter-response coefficient is taken; the resulting absolute response coefficient reflects the contrast information at each pixel. The Laplacian operator used for filtering is expressed in the discrete form derived from the second-order partial derivatives:
∇²f(x, y) = f(x+1, y) + f(x−1, y) + f(x, y+1) + f(x, y−1) − 4f(x, y)
where f(x, y) represents the image gray value, ∂f/∂x is the first partial derivative, and ∂²f/∂x² is the second partial derivative;
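As a quick check of the discrete form above, the 4-neighbour Laplacian can be applied as a convolution stencil; this small NumPy sketch (the stencil layout and border handling are illustrative assumptions) computes the absolute response used as the contrast measure:

```python
import numpy as np

# 4-neighbour discrete Laplacian stencil from the second-order differences
LAPLACIAN = np.array([[0.,  1., 0.],
                      [1., -4., 1.],
                      [0.,  1., 0.]])

def abs_laplacian(img):
    # correlate the stencil over the image; borders are replicated via padding
    p = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for m in range(3):
        for n in range(3):
            out += LAPLACIAN[m, n] * p[m:m + img.shape[0], n:n + img.shape[1]]
    return np.abs(out)   # |response| reflects local contrast at each pixel
```

On a flat image the response is zero everywhere; an isolated bright pixel produces a strong response at itself and its four neighbours, which is exactly the behaviour wanted from a contrast measure.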
In grayscale image evaluation, entropy is used as the index to evaluate the amount of information contained in an image; the entropy e is expressed by the following formula:
e = − Σ_{i=0}^{L−1} p_i log₂ p_i
where p_i represents the image histogram (the probability of gray level i), L represents the number of image gray levels, and i indexes the i-th histogram bin;
The exposure of the space-based dim small target image is measured, and the well-exposed pixel points are selected, as expressed by the following formula:
The weight map is a linear combination of the three information measure factors, image contrast, entropy, and exposure, expressed by the following formula:
W_k = w_C · C + w_S · S + w_P · P + b
where k is the index of the image; C, S, and P are the contrast, entropy, and proper exposure; b is a constant term; W_k is the total weight; and w_C, w_S, and w_P are the contrast, entropy, and exposure weights respectively;
The corresponding weight map W_k of each sequence image is computed, and each pixel is accumulated and normalized so that the weights of all weight maps sum to 1 at every spatial position, i.e. the normalized weight at each pixel is W_k divided by the sum of the N weight maps at that pixel.
Here N is the number of input images; the more input images there are, the more accurate the coefficients of the final weight map, so the fused image retains the brightness and detail information of the original scene more completely. Fig. 2 is a graph of the image pyramid weighted fusion result.
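The weight computation and fusion above can be sketched as follows. This is a single-scale sketch rather than the patent's full pyramid blend, and the window size, histogram bin count, and the Mertens-style well-exposedness constant σ = 0.2 are all assumptions for illustration:

```python
import numpy as np

def contrast_measure(img):
    # |Laplacian| response as the contrast factor C (4-neighbour stencil)
    p = np.pad(img, 1, mode="reflect")
    lap = p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4 * img
    return np.abs(lap)

def entropy_measure(img, win=7, bins=16):
    # local entropy S over a sliding window (window/bins are assumptions)
    half = win // 2
    p = np.pad(img, half, mode="reflect")
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            hist, _ = np.histogram(p[i:i + win, j:j + win],
                                   bins=bins, range=(0.0, 1.0))
            q = hist / hist.sum()
            q = q[q > 0]
            out[i, j] = -(q * np.log2(q)).sum()
    return out

def exposure_measure(img, sigma=0.2):
    # well-exposedness P: prefer pixels near mid-gray (assumed Gaussian form)
    return np.exp(-((img - 0.5) ** 2) / (2 * sigma ** 2))

def fuse(images, wc=1.0, ws=1.0, wp=1.0, eps=1e-12):
    # linear combination of the three measures, normalized so that the
    # weights of all images sum to 1 at every spatial position
    weights = [wc * contrast_measure(im) + ws * entropy_measure(im)
               + wp * exposure_measure(im) + eps for im in images]
    total = np.sum(weights, axis=0)
    weights = [w / total for w in weights]
    return np.sum([w * im for w, im in zip(weights, images)], axis=0)
```

Because the normalized weights form a convex combination at each pixel, the fused value always lies between the per-pixel minimum and maximum of the inputs.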
The fourth concrete embodiment:
the difference between the fourth embodiment and the third embodiment is only that:
the step3 specifically comprises the following steps:
The threshold segmentation process determines small targets by combining local threshold segmentation with global threshold segmentation. The global threshold segmentation applies an adaptive threshold to the whole image, expressed by the following formula:
T_g = μ + t · σ
where μ and σ are the gray-level mean and standard deviation of the image, t is an odd number greater than 3, f(x, y) is the gray value of the image after filtering, and w is the Gaussian filter template row value;
The local threshold segmentation divides the image into N×N regions and computes the segmentation threshold of each region separately:
T_k = μ_k + t · σ_k
where T_k is the segmentation threshold of region k, σ_k is the standard deviation of the k-th divided region, k = 1, 2, 3, …, N×N, and μ_k is the mean value of the k-th divided region;
The same t is used in both equations (9) and (11); the final candidate points of the overall threshold segmentation are the pixels that exceed both the global and the local thresholds:
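A sketch of the combined global and local thresholding follows. The threshold form T = μ + t·σ, the block grid, and t = 5 are reconstructions/assumptions; the patent only states that t is an odd number greater than 3 and that the same t is used in both passes:

```python
import numpy as np

def candidate_points(img, t=5, n=4):
    """Return (row, col) coordinates of pixels exceeding BOTH the global
    adaptive threshold and the local threshold of their n x n region."""
    # global threshold over the whole image
    t_global = img.mean() + t * img.std()
    mask = np.zeros(img.shape, dtype=bool)
    h, w = img.shape
    bh, bw = h // n, w // n
    for bi in range(n):
        for bj in range(n):
            block = img[bi*bh:(bi+1)*bh, bj*bw:(bj+1)*bw]
            t_local = block.mean() + t * block.std()   # same t as the global pass
            mask[bi*bh:(bi+1)*bh, bj*bw:(bj+1)*bw] = (
                (block > t_global) & (block > t_local))
    return np.argwhere(mask)
```

On a synthetic frame of Gaussian background noise with one bright pixel, the double threshold keeps the bright pixel and rejects the noise, which is the intended behaviour for dim small target candidates.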
The fifth concrete embodiment:
the difference between the fifth embodiment and the fourth embodiment is only that:
A hypothesis test condition is set according to the motion characteristics of the space-based target; the candidate points obtained after threshold segmentation are built into a tree structure, and the motion trajectory of each node of the tree structure is pruned. A multi-level hypothesis testing decision tree is established to judge the motion trajectory points of the real space-based dim small target.
Space targets, including in-orbit spacecraft and debris, move in orbit under gravity and follow Kepler's laws of motion; within 5 consecutive frames of exposure time, however, each target is assumed to have its own fixed direction and speed of motion. The method sets the hypothesis test conditions according to the trajectory characteristic information of space-based target motion and builds the series of candidate points into a tree structure; in this process, the motion trajectory of each node of the tree structure is pruned.
The step 4 specifically comprises the following steps:
setting a test condition according to track characteristic information of the motion of the space-based target, establishing the candidate points into a tree structure, and pruning the motion track of each node of the tree structure;
where H1 denotes that the candidate target point is on the motion trajectory, and H2 that the candidate target point is not on the motion trajectory;
according to the trajectory motion characteristics of the candidate target points, the test condition is set:
where the three points lie on three different frames of a frame set, with centroid coordinates and frame indices as given; the inter-point distance and the angle between the trajectory vectors are computed from these coordinates;
the thresholds on the distance and the angle are denoted d_T and θ_T respectively, with selected fixed values;
Usually, a candidate point in the first frame of the image sequence is taken as the initial point to start the search. A target with a high signal-to-noise ratio can be detected after threshold segmentation in every frame of the sequence. A dim target with a low signal-to-noise ratio (for example, below 3), however, is easily submerged by background and noise after threshold segmentation in some frame; it then goes undetected in that frame, and its motion trajectory appears discontinuous over the image sequence. Once the target is lost in a frame, subsequent searches using the initial point as the root node fail. An improved multi-level hypothesis-testing search tree is therefore built to solve this problem: after the search with the first-frame point as initial point, the points in the second frame's search result that do not satisfy the hypothesis test condition are used as new initial points for a second search. In addition, unlike the traditional point-to-point frame-by-frame search, vectors are constructed from the initial point to the points of subsequent frames, forming a vector-to-vector multi-level hypothesis-testing parallel decision tree.
A candidate point in the first frame of the image sequence is taken as the initial point to start the search; after the search with the first-frame point as initial point, the points in the second frame's search result that do not satisfy the test condition are used as new initial points for a second search, and vectors are constructed from each initial point to the points of subsequent frames, forming a vector-to-vector multi-level hypothesis-testing parallel decision tree;
In the decision stage, the score of each candidate trajectory from the search stage is evaluated according to the following formula:
where Φ is the set of image sequence frames required for a search decision, Num is the evaluation score of the point trajectory, and τ₁ and τ₂ are two thresholds selected on the basis of Φ;
Within 5 frames, the characteristics of the target motion trajectory make it easy to identify real target trajectories and eliminate false ones, but a simple classification with a single threshold easily misses some low signal-to-noise-ratio targets. A class of "suspected trajectories" is therefore added: when 3 points in the 5 frames satisfy the hypothesis test condition, the current 5 frames cannot determine with certainty whether the trajectory is real or a false alarm, so the suspected trajectory is retained and judged again in the next frame set. The operations performed are as follows:
All real trajectories are recorded, and all candidate points forming a real trajectory are retained on each frame of the image and determined to be real target points; false trajectories are deleted, and the corresponding candidate points are eliminated from each frame, completing the detection of all candidate targets in the image sequence. Fig. 5 shows the final detection result for the space-based dim small target.
The sixth specific embodiment:
the difference between the sixth embodiment and the fifth embodiment is only that:
If a point in the current k-th frame satisfies the test condition, Num(k) = Num(k) + 1 is recorded, where Num(k) counts the points satisfying the test condition; when the condition is not satisfied, Num(k) is left unchanged. After the search of the current image sequence is completed, the accumulated Num values are recorded as the score of the candidate trajectory.
The seventh specific embodiment:
the seventh embodiment of the present application differs from the sixth embodiment only in that:
the search comprises the following steps:
Step 1: candidate points in the first frame and the second frame are respectively selected as initial points for the search;
Step 2: the subsequent image frames are searched for the first candidate point satisfying the search radius range condition and the maximum and minimum inter-frame motion distances, and an initial vector is constructed from it and the initial point; Num(k) = 0 indicates that no candidate point in the current k-th frame satisfies the search radius range condition, and the search continues to the next frame until a point within the search radius range is found;
Step 3: once the initial vector is determined, the remaining frames are searched for candidate points satisfying the search radius range condition, trajectory vectors are constructed, and whether each satisfies the test condition in equation (14) with respect to the initial vector is judged.
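The three search steps above can be sketched over a 5-frame set as follows. The distance limits, the angle threshold, and the real/suspected/false score cut-offs are assumptions; the patent does not publish its exact values:

```python
import numpy as np

D_MIN, D_MAX = 1.0, 20.0      # min/max inter-frame motion distance (assumed)
THETA_T = np.deg2rad(10.0)    # angle threshold between trajectory vectors (assumed)

def angle_between(v1, v2):
    c = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-12)
    return np.arccos(np.clip(c, -1.0, 1.0))

def search_track(frames):
    """frames: list of 5 arrays of candidate centroids, shape (n_k, 2).
    Returns (track, num) for the best-scoring track rooted in the first frame."""
    best = (None, -1)
    for p0 in frames[0]:
        for p1 in frames[1]:
            step = np.linalg.norm(p1 - p0)
            if not (D_MIN <= step <= D_MAX):
                continue                     # Step 2: initial vector must obey limits
            v0 = p1 - p0
            track, num = [p0, p1], 2
            for k in range(2, len(frames)):
                # Step 3: extend with candidates whose vector from p0 stays
                # aligned with the initial vector (vector-to-vector test)
                hits = [q for q in frames[k]
                        if angle_between(q - p0, v0) < THETA_T
                        and D_MIN <= np.linalg.norm(q - track[-1]) <= D_MAX]
                if hits:
                    track.append(min(hits, key=lambda q: angle_between(q - p0, v0)))
                    num += 1                 # Num: one more point satisfied the test
            if num > best[1]:
                best = (track, num)
    return best

def classify(num, n_frames=5):
    # decision stage: >= 4 hits -> real, 3 -> suspected (re-judged in the next
    # frame set), fewer -> false alarm (cut-offs are assumptions)
    if num >= n_frames - 1:
        return "real"
    return "suspected" if num == 3 else "false"
```

With a linearly moving target plus a stationary clutter point, the clutter fails the inter-frame distance limits while the true track accumulates a full score and is classified as real.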
The eighth embodiment:
the eighth embodiment of the present application differs from the seventh embodiment only in that:
the search comprises the following steps:
Step 1: candidate points in the first frame and the second frame are respectively selected as initial points for the search;
Step 2: the subsequent image frames are searched for the first candidate point satisfying the search radius range condition and the maximum and minimum inter-frame motion distances, and an initial vector is formed from it and the initial point; Num(k) = 0 indicates that no candidate point in the current k-th frame satisfies the search radius range condition, and the search continues until a point within the search radius range is found;
Step 3: once the initial vector is determined, the remaining frames are searched for candidate points satisfying the search radius range condition, trajectory vectors are constructed, and whether each satisfies the test condition in equation (14) with respect to the initial vector is judged.
The ninth concrete embodiment:
the ninth embodiment of the present application differs from the eighth embodiment only in that:
the present invention provides a computer-readable storage medium having stored thereon a computer program for execution by a processor for implementing a method of dim small target detection in a space-based background.
The tenth concrete embodiment:
the embodiment ten of the present application differs from the embodiment nine only in that:
the invention provides computer equipment which comprises a memory and a processor, wherein a computer program is stored in the memory, and when the processor runs the computer program stored in the memory, the processor executes a dim small target detection method under a space-based background.
The above description is only a preferred embodiment of the method for detecting dim small targets under a space-based background; the protection scope of the method is not limited to the above embodiments, and all technical solutions under this idea belong to the protection scope of the invention. It should be noted that modifications and variations that do not depart from the gist of the invention are also intended to be within its scope.
Claims (10)
1. A method for detecting dim small targets under a space-based background, characterized by comprising the following steps:
Step 1: performing pyramid decomposition on the original space-based dim small target image, and performing Gaussian pyramid decomposition and Laplacian pyramid decomposition on the weight map and the original image respectively to obtain decomposed images;
Step 2: performing weighted fusion on the decomposed images by combining the Laplacian pyramid with three information measure factors, image contrast, entropy, and exposure; after calculating the weight pyramid of each image at different resolutions, obtaining image content and image details at different scales to produce the image pyramid weighted fusion result map;
Step 3: applying adaptive threshold segmentation to the image pyramid weighted fusion result map, calculating the segmentation thresholds of different regions with a local threshold segmentation method, combining the global with the local, and determining the space-based dim small target candidate points;
Step 4: setting a test condition, building the space-based dim small target candidate points obtained after threshold segmentation into a tree structure, and pruning the motion trajectory of each node of the tree structure; outputting the real trajectories, retaining on each frame of the image all candidate points that form a real trajectory, and determining the real target points.
2. The method for detecting dim small targets under space-based background as claimed in claim 1, wherein:
the step1 specifically comprises the following steps:
performing pyramid decomposition on the original space-based dim small target image, and performing Gaussian pyramid decomposition and Laplacian pyramid decomposition on the weight map and the original image respectively, each higher layer of the Gaussian pyramid being obtained from the layer below it by Gaussian blurring followed by downsampling by a factor of 2, so that each layer of the Gaussian pyramid is one quarter the size of the layer below it, the Gaussian pyramid decomposition of the image being given by the following formula:
where G_l is the l-th layer image of the Gaussian pyramid; C_l and R_l are the total numbers of rows and columns of the l-th layer image; w(m, n) is the value in row m, column n of the Gaussian filter template, taken to be of size 5×5; N denotes the number of layers of the Gaussian pyramid; and G_l(i, j) is the value in row i, column j of the Gaussian pyramid decomposition image;
introducing a Laplacian pyramid to retain image details, so that the detail information of the original image can be recovered after image reconstruction and fusion, the Laplacian pyramid decomposition comprising the following steps:
performing Gaussian blur and downsampling on the l-th layer of original image data to obtain G_{l+1}, and upsampling and expanding G_{l+1} to obtain the Laplacian pyramid decomposition image I_l^*:
where Z represents a positive integer; the image I_l^* has the same size as the l-th layer, so the l-th layer image G_l and I_l^* are subtracted to obtain the l-th layer image L_l containing the detail information:
3. The method for detecting dim small targets under space-based background as claimed in claim 2, wherein:
the step2 specifically comprises the following steps:
performing Laplace filtering on the original space-based dim small target image and taking the absolute value of the filter-response coefficient, the resulting absolute response coefficient reflecting the contrast information at each pixel, the discrete form of the second-order partial derivatives of the Laplacian operator used for filtering being expressed by the following formula:
∇²f(x, y) = f(x+1, y) + f(x−1, y) + f(x, y+1) + f(x, y−1) − 4f(x, y)
where f(x, y) represents the image gray value, ∂f/∂x is the first partial derivative, and ∂²f/∂x² is the second partial derivative;
entropy being used as an index to evaluate the amount of information contained in an image in grayscale image evaluation, the entropy e in grayscale image evaluation being expressed by the following formula:
e = − Σ_{i=0}^{L−1} p_i log₂ p_i
where p_i represents the image histogram, L represents the number of image gray levels, and i indexes the i-th image histogram bin;
measuring the exposure of the space-based dim small target image and selecting the well-exposed pixel points, as expressed by the following formula:
the weight map being a linear combination of the three information measure factors, image contrast, entropy, and exposure, expressed by the following formula:
W_k = w_C · C + w_S · S + w_P · P + b
where k is the index of the image; C, S, and P are the contrast, entropy, and proper exposure; b is a constant term; W_k is the total weight; and w_C, w_S, and w_P are the contrast, entropy, and exposure weights respectively;
calculating the corresponding weight map W_k of each sequence image, and accumulating and normalizing each pixel in the weight maps so that the weights of all weight maps sum to 1 at every spatial position, namely
where N is the number of input images.
4. The method for detecting dim small targets under space-based background as claimed in claim 1, wherein:
the step3 specifically comprises the following steps:
the threshold segmentation process determining small targets by combining local threshold segmentation with global threshold segmentation, the global threshold segmentation applying an adaptive threshold to the whole image, expressed by the following formula:
T_g = μ + t · σ
where μ and σ are the gray-level mean and standard deviation of the image, t is an odd number greater than 3, f(x, y) is the gray value of the image after filtering, and w is the Gaussian filter template row value;
the local threshold segmentation dividing the image into N×N regions and calculating the segmentation threshold of each region:
T_k = μ_k + t · σ_k
where T_k is the segmentation threshold of region k, σ_k is the standard deviation of the k-th divided region, k = 1, 2, 3, …, N×N, and μ_k is the mean value of the k-th divided region;
the same t being used in both equations (9) and (11), the final candidate points of the overall threshold segmentation being the pixels that exceed both the global and the local thresholds:
5. The method for detecting dim small targets under space-based background as claimed in claim 4, wherein:
the step 4 specifically comprises the following steps:
setting a test condition according to track characteristic information of the motion of the space-based target, establishing the candidate points into a tree structure, and pruning the motion track of each node of the tree structure;
where H1 denotes that the candidate target point is on the motion trajectory, and H2 that the candidate target point is not on the motion trajectory;
setting a test condition according to the trajectory motion characteristics of the candidate target points:
where the three points lie on three different frames of a frame set, with centroid coordinates and frame indices as given, from which the inter-point distance and the angle between the trajectory vectors are computed;
the thresholds on the distance and the angle being denoted d_T and θ_T respectively, with selected fixed values;
taking a candidate point in the first frame of the image sequence as the initial point to start the search; after the search with the first-frame point as initial point, taking the points in the second frame's search result that do not satisfy the test condition as new initial points for a second search, and constructing vectors from each initial point to the points of subsequent frames to form a vector-to-vector multi-level hypothesis-testing parallel decision tree;
in the decision stage, evaluating the score of each candidate trajectory from the search stage according to the following formula:
where Φ is the set of image sequence frames required for a search decision, Num is the evaluation score of the point trajectory, and τ₁ and τ₂ are two thresholds selected on the basis of Φ;
Reserving the suspected trajectory, switching to the next frame set for judgment again, and executing the following operations:
Recording all real tracks, reserving all candidate points forming the real tracks on each frame of image, and determining the candidate points as real target points; and deleting the false tracks, and eliminating corresponding candidate points on each frame of image to finish the detection of all candidate targets on the image sequence.
6. The method for detecting dim small targets under space-based background as claimed in claim 5, wherein: if a point in the current k-th frame satisfies the test condition, Num(k) = Num(k) + 1 is recorded, Num(k) counting the points satisfying the test condition; otherwise Num(k) is left unchanged; after the search of the current image sequence is completed, the accumulated Num values are recorded as the score of the candidate trajectory.
7. The method for detecting dim small targets under space-based background as claimed in claim 6, wherein: the search comprises the following steps:
Step 1: selecting candidate points in the first frame and the second frame respectively as initial points for the search;
Step 2: searching the subsequent image frames for the first candidate point satisfying the search radius range condition and the maximum and minimum inter-frame motion distances, and constructing an initial vector from it and the initial point; Num(k) = 0 indicates that no candidate point in the current k-th frame satisfies the search radius range condition, and the search continues to the next frame until a point within the search radius range is found;
8. A system for detecting dim small targets under a space-based background is characterized in that: the system comprises:
the decomposition module, configured to perform pyramid decomposition on the original space-based dim small target image, performing Gaussian pyramid decomposition and Laplacian pyramid decomposition on the weight map and the original image respectively to obtain decomposed images;
the fusion module, configured to perform weighted fusion on the decomposed images by combining the Laplacian pyramid with three information measure factors, image contrast, entropy, and exposure, and, after calculating the weight pyramid of each image at different resolutions, to obtain image content and image details at different scales, producing the image pyramid weighted fusion result map;
the candidate point module, configured to apply adaptive threshold segmentation to the image pyramid weighted fusion result map, calculate the segmentation thresholds of different regions with a local threshold segmentation method, combine the global with the local, and determine the space-based dim small target candidate points;
the real target point module, configured to set a test condition, build the space-based dim small target candidate points obtained after threshold segmentation into a tree structure, and prune the motion trajectory of each node of the tree structure; and to output the real trajectories, retain on each frame of the image all candidate points that form a real trajectory, and determine the real target points.
9. A computer-readable storage medium having stored thereon a computer program, characterized in that: the program is executed by a processor for implementing a method of dim small target detection in a space based background as claimed in any one of claims 1 to 7.
10. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that: the processor, when executing the computer program, implements a method for detecting dim small objects in a space-based background as claimed in any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211177614.8A CN115272871A (en) | 2022-09-27 | 2022-09-27 | Method for detecting dim small target under space-based background |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211177614.8A CN115272871A (en) | 2022-09-27 | 2022-09-27 | Method for detecting dim small target under space-based background |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115272871A true CN115272871A (en) | 2022-11-01 |
Family
ID=83756598
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211177614.8A Pending CN115272871A (en) | 2022-09-27 | 2022-09-27 | Method for detecting dim small target under space-based background |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115272871A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115578476A (en) * | 2022-11-21 | 2023-01-06 | 山东省标筑建筑规划设计有限公司 | Efficient storage method for urban and rural planning data |
CN115861134A (en) * | 2023-02-24 | 2023-03-28 | 长春理工大学 | Star map processing method under space-based background |
CN117974460A (en) * | 2024-03-29 | 2024-05-03 | 深圳中科精工科技有限公司 | Image enhancement method, system and storage medium |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109767439A (en) * | 2019-01-10 | 2019-05-17 | 中国科学院上海技术物理研究所 | A kind of multiple dimensioned difference of self-adapting window and the object detection method of bilateral filtering |
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109767439A (en) * | 2019-01-10 | 2019-05-17 | 中国科学院上海技术物理研究所 | A kind of multiple dimensioned difference of self-adapting window and the object detection method of bilateral filtering |
Non-Patent Citations (4)
Title |
---|
Liu Xinlong: "Research on High Dynamic Range Imaging Technology for Visible-Light Cameras", 《China Masters' Theses Full-text Database, Information Science and Technology》 *
Wu Xiaojun: "High Dynamic Range Image Generation Based on Multi-Exposure Images", 《China Masters' Theses Full-text Database, Information Science and Technology》 *
Zhu Hanlu: "Research on Key Technologies for Infrared Detection and Recognition of Space-Based Aerial Moving Targets", 《China Doctoral Dissertations Full-text Database, Engineering Science and Technology I》 *
Li Mengyang: "Research on Detection Methods for Dim Small Space Targets under Complex Space-Based Backgrounds", 《China Doctoral Dissertations Full-text Database, Engineering Science and Technology I》 *
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115578476A (en) * | 2022-11-21 | 2023-01-06 | 山东省标筑建筑规划设计有限公司 | Efficient storage method for urban and rural planning data |
CN115578476B (en) * | 2022-11-21 | 2023-03-10 | 山东省标筑建筑规划设计有限公司 | Efficient storage method for urban and rural planning data |
CN115861134A (en) * | 2023-02-24 | 2023-03-28 | 长春理工大学 | Star map processing method under space-based background |
CN117974460A (en) * | 2024-03-29 | 2024-05-03 | 深圳中科精工科技有限公司 | Image enhancement method, system and storage medium |
CN117974460B (en) * | 2024-03-29 | 2024-06-11 | 深圳中科精工科技有限公司 | Image enhancement method, system and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN115272871A (en) | Method for detecting dim small target under space-based background | |
CN110675418B (en) | Target track optimization method based on DS evidence theory | |
CN111709416B (en) | License plate positioning method, device, system and storage medium | |
CN114742799B (en) | Industrial scene unknown type defect segmentation method based on self-supervision heterogeneous network | |
US9911191B2 (en) | State estimation apparatus, state estimation method, and integrated circuit with calculation of likelihood data and estimation of posterior probability distribution data | |
CN111462050B (en) | YOLOv3 improved minimum remote sensing image target detection method and device and storage medium | |
CN114140683A (en) | Aerial image target detection method, equipment and medium | |
CN113140005B (en) | Target object positioning method, device, equipment and storage medium | |
CN112036381B (en) | Visual tracking method, video monitoring method and terminal equipment | |
CN111126278A (en) | Target detection model optimization and acceleration method for few-category scene | |
Fang et al. | Infrared small UAV target detection based on depthwise separable residual dense network and multiscale feature fusion | |
CN115240149A (en) | Three-dimensional point cloud detection and identification method and device, electronic equipment and storage medium | |
CN109919223A (en) | Object detection method and device based on deep neural network | |
CN112287906A (en) | Template matching tracking method and system based on depth feature fusion | |
CN112508803A (en) | Denoising method and device for three-dimensional point cloud data and storage medium | |
CN116758411A (en) | Ship small target detection method based on remote sensing image pixel-by-pixel processing | |
Ren et al. | A lightweight object detection network in low-light conditions based on depthwise separable pyramid network and attention mechanism on embedded platforms | |
CN112560799B (en) | Unmanned aerial vehicle intelligent vehicle target detection method based on adaptive target area search and game and application | |
CN101149803A (en) | Small false alarm rate test estimation method for point source target detection | |
CN113160279A (en) | Method and device for detecting abnormal behaviors of pedestrians in subway environment | |
CN116310832A (en) | Remote sensing image processing method, device, equipment, medium and product | |
CN110047103A (en) | Mixed and disorderly background is removed from image to carry out object detection | |
CN115410102A (en) | SAR image airplane target detection method based on combined attention mechanism | |
RU2000120929A (en) | METHOD FOR PROCESSING SIGNALS FOR DETERMINING THE COORDINATES OF OBJECTS OBSERVED IN A SEQUENCE OF TELEVISION IMAGES, AND A DEVICE FOR ITS IMPLEMENTATION (OPTIONS) | |
CN114463300A (en) | Steel surface defect detection method, electronic device, and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||