CN107316318B - Air target automatic detection method based on multi-subregion background fitting - Google Patents

Air target automatic detection method based on multi-subregion background fitting

Info

Publication number
CN107316318B
Authority
CN
China
Prior art keywords
target
image
value
sigma
average gray
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710384738.6A
Other languages
Chinese (zh)
Other versions
CN107316318A (en)
Inventor
方勇 (Fang Yong)
尹晓琳 (Yin Xiaolin)
丁洋坤 (Ding Yangkun)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hebei Hanguang Heavy Industry Ltd
Original Assignee
Hebei Hanguang Heavy Industry Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hebei Hanguang Heavy Industry Ltd filed Critical Hebei Hanguang Heavy Industry Ltd
Priority to CN201710384738.6A
Publication of CN107316318A
Application granted
Publication of CN107316318B
Legal status: Active
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/20 - Analysis of motion
    • G06T 7/246 - Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T 7/248 - Analysis of motion using feature-based methods, e.g. the tracking of corners or segments, involving reference images or patches
    • G06T 7/10 - Segmentation; Edge detection
    • G06T 7/11 - Region-based segmentation
    • G06T 7/254 - Analysis of motion involving subtraction of images
    • G06T 7/60 - Analysis of geometric attributes
    • G06T 7/66 - Analysis of geometric attributes of image moments or centre of gravity
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10016 - Video; Image sequence
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20021 - Dividing image into blocks, subimages or windows

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an automatic aerial target detection method based on multi-subregion background fitting, which divides an image to be detected into P subregions and calculates the average gray value A(1)-A(P) of each subregion; the average gray values A(1)-A(P) of all the subregions are then sorted from large to small to obtain a new sequence A'(1)-A'(P) of subregion average gray values; according to A'(1)-A'(P), the average gray value of the second-darkest or second-brightest subregion is selected as the segmentation threshold, appropriately reduced or enlarged according to the image contrast, and the image is then binarized to obtain a segmented image; the target is finally located in the segmented image. The invention reduces the generation of false targets while inheriting the advantages of a simple algorithm that is easy to implement in hardware.

Description

Air target automatic detection method based on multi-subregion background fitting
Technical Field
The invention relates to the technical field of automatic aerial target detection of visible light television images or infrared images, in particular to an automatic aerial target detection method based on multi-subregion background fitting.
Background
With the development of information technology, intelligent detection and identification of targets by means of video image processing has advanced greatly. In the military field in particular, automatic detection and tracking of targets can greatly shorten the reaction time of a weapon system, which is very important for improving the performance indexes of the whole system.
Traditional real-time detection methods for aerial targets mainly include target detection based on background difference and aerial target detection based on image row correlation. However, these methods have certain limitations.
The basic idea of the target detection method based on background difference is to subtract the background image from the current frame image to obtain the target image, but the method is only effective when the camera is static and the aerial background is also static. In most cases, however, the system needs to search automatically and the camera is in motion, so this method is not applicable.
The main idea of the aerial target detection method based on image row correlation is to exploit the correlation between adjacent rows of the image: the gray levels of the image are first inverted, then the average gray value of a chosen row is used as a reference and subtracted from all the other rows, thereby removing the image background and leaving the real target. In practice the effect is not ideal, mainly because of interference from the illumination angle, cloud layers, and the like: in most cases the aerial background captured by visible-light or infrared video is not uniform, so when the average value of a randomly selected row is used as the reference, the background is removed poorly and many false targets result. Therefore, this method is also not applicable.
Disclosure of Invention
In view of the above, the invention provides an automatic aerial target detection method based on multi-subregion background fitting. It inherits the simplicity and ease of hardware implementation of the existing algorithms while taking into account interference such as camera motion and cloud layers, and thus arrives at a practical new automatic aerial target detection method that reduces the generation of false targets.
In order to solve the technical problem, the invention is realized as follows:
an automatic aerial target detection method based on multi-subregion background fitting comprises the following steps:
step one, dividing an image to be detected into P sub-regions, wherein P is a set integer;
step two, obtaining the average gray value A(m) of each sub-region, wherein m = 1, 2, …, P;
step three, sorting the average gray values A(1)-A(P) of all the sub-regions from large to small to obtain a new sub-region average gray value sequence A'(m), wherein m = 1, 2, …, P;
step four, when the target is black relative to the background, selecting Th = A'(P-1) × σ as the gray threshold Th, wherein the larger the image contrast is, the smaller the value of σ, and σ < 1; performing binarization segmentation on the image by using the gray threshold Th, setting a pixel value to 255 if it is less than or equal to Th, to obtain a segmented image;
when the target is white relative to the background, selecting Th = A'(2) × σ as the gray threshold Th, wherein the larger the image contrast is, the larger the value of σ, and σ > 1; performing binarization segmentation on the image by using the gray threshold Th, setting a pixel value to 255 if it is greater than or equal to Th, to obtain a segmented image;
and step five, carrying out target positioning by using the segmented image obtained in step four.
Preferably, step five finds the position of the target by iteratively computing the centroid of the image several times.
Preferably, the target obtained in step five is judged using the known minimum target size, and if the number of target bright points is smaller than the minimum target size, the target is considered a false target.
Preferably, step two is: placing a square template at an arbitrary position in each sub-region, and taking the average gray value of the pixels in the template as the average gray value A(m) of the sub-region, wherein m = 1, 2, …, P.
Advantageous effects:
Since the darkest or brightest part of the image may itself be the target, the invention takes the average gray value of the second-darkest or second-brightest sub-region as the segmentation threshold and scales it appropriately according to the image contrast, so that an accurate segmentation threshold is obtained in a targeted way for different images. Moreover, the algorithm of the scheme is very concise and easy to implement in hardware.
Drawings
FIG. 1 is a flow chart of the present invention.
Detailed Description
The invention is described in detail below by way of example with reference to the accompanying drawings.
The basic principle of the invention is to use the contrast difference, namely the gray-level difference, between the target and the background to remove the background accurately in real time and thereby obtain the target.
Step one: divide the whole image into P sub-regions; the larger P is, the more accurately the background is obtained and the better the effect. In this example P = 16 is chosen.
Step two: place a Q × Q square template at an arbitrary position within each sub-region and take the average gray value of the pixels inside the template as the average gray value A(m) of that sub-region, m = 1, 2, …, P. When choosing Q, Q × Q must be smaller than the size of the sub-region, preferably more than 50% of the area of the sub-region; in this preferred embodiment Q = 10.
A(m) = (1 / (Q × Q)) × Σ_{(i,j) ∈ S(m)} f(i,j)
where S(m) is the Q × Q region in the m-th sub-region at the template placement position, and f(i,j) is the gray value of pixel (i,j).
Step three: sort the obtained sub-region average gray values from large to small to obtain a new sub-region average gray value sequence A'(m), m = 1, 2, …, P.
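By way of illustration only, steps one to three might be rendered in Python/NumPy as in the sketch below; the function name, the assumption of a single-channel grayscale image, the square grid of sub-regions, and the placement of the template at the top-left corner of each sub-region are choices made for this example, not taken from the patent:

    import numpy as np

    def subregion_gray_means(img, P=16, Q=10):
        # Steps one to three (sketch): split the image into P sub-regions laid out
        # on a square grid, average a QxQ template placed in each sub-region, and
        # sort the averages from large to small.
        rows = cols = int(np.sqrt(P))            # e.g. P = 16 gives a 4 x 4 grid
        H, W = img.shape
        sh, sw = H // rows, W // cols            # sub-region size
        A = np.empty(P)
        for m in range(P):                       # m runs 0..P-1 here (1..P in the text)
            r, c = divmod(m, cols)
            sub = img[r * sh:(r + 1) * sh, c * sw:(c + 1) * sw]
            A[m] = sub[:Q, :Q].mean()            # QxQ template at an arbitrary position
        A_sorted = np.sort(A)[::-1]              # A'(1) >= A'(2) >= ... >= A'(P)
        return A, A_sorted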
Step four: when the target is black with respect to the background, which is generally the case for a visible-light image, select the second-smallest average gray value, appropriately reduced, as the gray threshold Th, i.e. Th = A'(P-1) × σ with σ < 1. σ compensates for contrast, and its value is chosen on the following principle: when the gray difference between the target and the background is obvious, i.e. the image contrast is large, a relatively smaller σ is chosen, and conversely a larger σ is chosen. Typically σ = 0.9. The whole image is then binarized with the gray threshold Th to obtain a new segmented image:
T(i,j) = 255 if f(i,j) ≤ Th, otherwise T(i,j) = 0
where T (i, j) represents the pixel value of pixel (i, j) in the segmented image.
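A minimal sketch of this black-target case, assuming A_sorted is the descending sequence A'(1)-A'(P) produced by the earlier sketch and σ = 0.9 as suggested in the text:

    import numpy as np

    def segment_dark_target(img, A_sorted, sigma=0.9):
        # Black target on a bright background (typical visible-light case):
        # threshold at the second-smallest sub-region average, reduced by sigma.
        Th = A_sorted[-2] * sigma                # A'(P-1), the 2nd-lowest average
        T = np.where(img <= Th, 255, 0).astype(np.uint8)
        return T, Th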
When the target is white with respect to the background, which is generally the case for an infrared image, select the second-largest average gray value, appropriately enlarged, as the gray threshold, i.e. Th = A'(2) × σ with σ > 1. σ again compensates for contrast: when the gray difference between the target and the background is obvious, i.e. the image contrast is large, a relatively larger σ is chosen, and conversely a smaller σ is chosen. Typically σ = 1.1. The whole image is then binarized with this gray threshold to obtain a new segmented image:
T(i,j) = 255 if f(i,j) ≥ Th, otherwise T(i,j) = 0
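The white-target case differs only in the index chosen and in the direction of the comparison; again a sketch under the same assumptions, with σ = 1.1:

    import numpy as np

    def segment_bright_target(img, A_sorted, sigma=1.1):
        # White target on a dark background (typical infrared case):
        # threshold at the second-largest sub-region average, enlarged by sigma.
        Th = A_sorted[1] * sigma                 # A'(2), the 2nd-highest average
        T = np.where(img >= Th, 255, 0).astype(np.uint8)
        return T, Th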
and fifthly, carrying out target positioning on the image after the segmentation. The position of the target can be generally found by iteratively finding the centroid of the image for a plurality of times. The specific mode is as follows:
X = ( Σ_{i=1..M} Σ_{j=1..N} i × T(i,j) ) / ( Σ_{i=1..M} Σ_{j=1..N} T(i,j) )    (1)
Y = ( Σ_{i=1..M} Σ_{j=1..N} j × T(i,j) ) / ( Σ_{i=1..M} Σ_{j=1..N} T(i,j) )    (2)
assuming that the size of the image is W × H, the initial values of M and N in the above formula may be M ═ W and N ═ H. When the first centroid (X1, Y1) is calculated by using the above equations (1) and (2), M and N are reduced, for example, by 20% by taking (X1, Y1) as the center point, and the centroid (X2, Y2) is obtained again. And so on. After the centroid is obtained for many times, the real position of the target can be accurately obtained, and the centroid can be obtained for three times generally.
Step six: after the position of the target is obtained, judge whether the target is real. This embodiment uses the size of the target for the judgment. Assuming the minimum size of the target is Ws × Hs, take the target position as the center point of a B × B region to be examined and count the number of bright points in it; if the number of bright points is greater than or equal to Ws × Hs, the target is a real target, otherwise it is a false target. The B × B region must be at least larger than Ws × Hs; preferably it can be chosen as 2 times Ws × Hs.
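Step six can then be expressed as a bright-point count around the located position; the sketch below takes the B × B window to cover roughly twice the minimum target area Ws × Hs, which is one reading of the preference stated above:

    import numpy as np

    def is_real_target(T, cx, cy, Ws, Hs):
        # Step six (sketch): count bright points in a BxB region centered on the
        # located position; the target is real only if the count reaches Ws*Hs.
        B = int(np.ceil(np.sqrt(2 * Ws * Hs)))   # BxB covers about twice the minimum area
        H, W = T.shape
        x0 = max(0, int(cx) - B // 2); x1 = min(W, x0 + B)
        y0 = max(0, int(cy) - B // 2); y1 = min(H, y0 + B)
        bright = int(np.count_nonzero(T[y0:y1, x0:x1] == 255))
        return bright >= Ws * Hs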
This completes the flow.
In summary, the above description is only a preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (4)

1. An automatic aerial target detection method based on multi-subregion background fitting is characterized by comprising the following steps:
step one, dividing an image to be detected into P sub-regions, wherein P is a set integer;
step two, obtaining the average gray value A(m) of each sub-region, wherein m = 1, 2, …, P;
step three, sorting the average gray values A(1)-A(P) of all the sub-regions from large to small to obtain a new sub-region average gray value sequence A'(m), wherein m = 1, 2, …, P;
step four, when the target is black relative to the background, selecting Th = A'(P-1) × σ as the gray threshold Th, wherein the larger the image contrast is, the smaller the value of σ, and σ < 1; performing binarization segmentation on the image by using the gray threshold Th, setting a pixel value to 255 if it is less than or equal to Th, to obtain a segmented image;
when the target is white relative to the background, selecting Th = A'(2) × σ as the gray threshold Th, wherein the larger the image contrast is, the larger the value of σ, and σ > 1; performing binarization segmentation on the image by using the gray threshold Th, setting a pixel value to 255 if it is greater than or equal to Th, to obtain a segmented image;
and step five, carrying out target positioning by using the segmented image obtained in step four.
2. The method for automatically detecting an aerial target based on multi-subregion background fitting as claimed in claim 1, wherein in step five the position of the target is found by iteratively computing the centroid of the image several times.
3. The method for automatically detecting an aerial target based on multi-subregion background fitting as claimed in claim 1 or 2, wherein the target obtained in step five is judged using the known minimum size of the target, and if the number of target bright points is smaller than the minimum target size, the target is considered a false target.
4. The method for automatically detecting an aerial target based on multi-subregion background fitting as claimed in claim 1, wherein step two is: placing a square template at an arbitrary position in each sub-region, and taking the average gray value of the pixels in the template as the average gray value A(m) of the sub-region, wherein m = 1, 2, …, P.
CN201710384738.6A 2017-05-26 2017-05-26 Air target automatic detection method based on multi-subregion background fitting Active CN107316318B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710384738.6A CN107316318B (en) 2017-05-26 2017-05-26 Air target automatic detection method based on multi-subregion background fitting

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710384738.6A CN107316318B (en) 2017-05-26 2017-05-26 Air target automatic detection method based on multi-subregion background fitting

Publications (2)

Publication Number Publication Date
CN107316318A CN107316318A (en) 2017-11-03
CN107316318B true CN107316318B (en) 2020-06-02

Family

ID=60182137

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710384738.6A Active CN107316318B (en) 2017-05-26 2017-05-26 Air target automatic detection method based on multi-subregion background fitting

Country Status (1)

Country Link
CN (1) CN107316318B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112581374A (en) * 2019-09-29 2021-03-30 深圳市光鉴科技有限公司 Speckle sub-pixel center extraction method, system, device and medium
CN110887563B (en) * 2019-11-18 2021-10-01 中国科学院上海技术物理研究所 Hyperspectral area array detector bad element detection method
CN111862131B (en) * 2020-07-31 2021-03-19 易思维(杭州)科技有限公司 Adhesive tape edge detection method and application thereof
CN116703741B (en) * 2022-09-27 2024-03-15 荣耀终端有限公司 Image contrast generation method and device and electronic equipment

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0814602B1 (en) * 1996-06-21 2004-04-07 Hewlett-Packard Company, A Delaware Corporation Jitter-form background control for minimizing spurious gray cast in scanned images
CN102855634A (en) * 2011-06-28 2013-01-02 中兴通讯股份有限公司 Image detection method and image detection device
US9123133B1 (en) * 2014-03-26 2015-09-01 National Taipei University Of Technology Method and apparatus for moving object detection based on cerebellar model articulation controller network
CN103955940A (en) * 2014-05-16 2014-07-30 天津重方科技有限公司 Method based on X-ray back scattering image and for detecting objects hidden in human body
CN105761238A (en) * 2015-12-30 2016-07-13 河南科技大学 Method of extracting saliency target through gray statistical data depth information
CN106557549A (en) * 2016-10-24 2017-04-05 珠海格力电器股份有限公司 The method and apparatus of identification destination object

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Pham Ich Quy et al. Using thresholding techniques for object detection in infrared images. Proceedings of the 16th International Conference on Mechatronics - Mechatronika 2014, IEEE, 2015, pp. 1-8. *
Ning Huiying (宁慧英). An improved moving target detection method (一种改进的运动目标检测方法). Fire Control & Command Control (火力与指挥控制), 2012, Vol. 37, No. 6, pp. 124-126. *

Also Published As

Publication number Publication date
CN107316318A (en) 2017-11-03

Similar Documents

Publication Publication Date Title
CN107316318B (en) Air target automatic detection method based on multi-subregion background fitting
US20160078272A1 (en) Method and system for dismount detection in low-resolution uav imagery
CN110097586B (en) Face detection tracking method and device
CN107194317B (en) Violent behavior detection method based on grid clustering analysis
WO2014128688A1 (en) Method, system and software module for foreground extraction
US8433104B2 (en) Image processing method for background removal
US9123141B2 (en) Ghost artifact detection and removal in HDR image processing using multi-level median threshold bitmaps
WO2013102797A1 (en) System and method for detecting targets in maritime surveillance applications
CN106204617A (en) Adapting to image binarization method based on residual image rectangular histogram cyclic shift
TWI729587B (en) Object localization system and method thereof
JP2011165170A (en) Object detection device and program
CN110473255B (en) Ship mooring post positioning method based on multiple grid division
CN111274964A (en) Detection method for analyzing water surface pollutants based on visual saliency of unmanned aerial vehicle
CN107123132A (en) A kind of moving target detecting method of Statistical background model
CN106952236B (en) Fisheye lens shot image distortion correction method based on BP neural network
JP2012023572A (en) White balance coefficient calculating device and program
CN111325073A (en) Monitoring video abnormal behavior detection method based on motion information clustering
CN108241837B (en) Method and device for detecting remnants
Ghahremannezhad et al. Real-time hysteresis foreground detection in video captured by moving cameras
Fatichah et al. Optical flow feature based for fire detection on video data
CN111739039B (en) Rapid centroid positioning method, system and device based on edge extraction
Ishida et al. Shadow detection by three shadow models with features robust to illumination changes
CN108389219B (en) Weak and small target tracking loss re-detection method based on multi-peak judgment
CN113409334A (en) Centroid-based structured light angle point detection method
CN111353991A (en) Target detection method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant