CN107316318A - Aerial target automatic detection method based on multi-subregion background fitting - Google Patents

Aerial target automatic detection method based on multi-subregion background fitting

Info

Publication number
CN107316318A
CN107316318A (application CN201710384738.6A; granted as CN107316318B)
Authority
CN
China
Prior art keywords
target
gray value
average gray
segmentation
subregion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710384738.6A
Other languages
Chinese (zh)
Other versions
CN107316318B (en)
Inventor
方勇
尹晓琳
丁洋坤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hebei Hanguang Heavy Industry Ltd
Original Assignee
Hebei Hanguang Heavy Industry Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hebei Hanguang Heavy Industry Ltd
Priority to CN201710384738.6A
Publication of CN107316318A
Application granted
Publication of CN107316318B
Legal status: Active

Links

Classifications

    • G Physics
    • G06 Computing; calculating or counting
    • G06T Image data processing or generation, in general
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; edge detection
    • G06T 7/11 Region-based segmentation
    • G06T 7/20 Analysis of motion
    • G06T 7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T 7/248 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments, involving reference images or patches
    • G06T 7/254 Analysis of motion involving subtraction of images
    • G06T 7/60 Analysis of geometric attributes
    • G06T 7/66 Analysis of geometric attributes of image moments or centre of gravity
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; image sequence
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20021 Dividing image into blocks, subimages or windows

Abstract

The invention discloses an automatic aerial target detection method based on multi-subregion background fitting. The method divides the image to be detected into P subregions and computes the average gray values A(1)~A(P) of all subregions; the average gray values are then sorted in descending order to obtain a new sequence of subregion average gray values A'(1)~A'(P). From A'(1)~A'(P) the average gray value of the second-darkest or second-brightest subregion is selected as the segmentation threshold and appropriately scaled up or down according to the image contrast; the image is then binarized with this threshold to obtain a segmentation image, on which target localization is performed. The invention reduces the generation of false targets while retaining the advantage of existing algorithms of being concise and easy to implement in hardware.

Description

Aerial target automatic detection method based on multi-subregion background fitting
Technical field
The present invention relates to the technical field of automatic detection of aerial targets in visible-light television images or infrared images, and in particular to an automatic aerial target detection method based on multi-subregion background fitting.
Background art
With the development of information technology, intelligent detection and recognition of targets by means of computer vision has advanced greatly, especially in the military field, where automatic detection and tracking of targets can greatly shorten the reaction time of a weapon system; this is essential to improving the performance indicators of the whole system.
Traditional real-time aerial target detection methods mainly include target detection based on background subtraction and aerial target detection based on image-line correlation. However, these methods all have limitations.
The basic idea of target detection based on background subtraction is to obtain the target image by subtracting a background image from the current frame. This method, however, is effective only when the camera is static and the sky background is also static. In most cases the system needs to search automatically and the camera is therefore in motion, so this method is not applicable.
The main idea of aerial target detection based on image-line correlation is to exploit the correlation between adjacent image lines: the image is first gray-inverted, then the average gray value of a certain line is used as a reference and subtracted from all other lines, so as to remove the image background and obtain the real target. In practice, however, this method does not work well. The main reason is interference from the illumination angle, cloud layers and the like: the sky background in visible-light or infrared video is usually not uniform, so when the average value of an arbitrarily chosen line is used as the reference, the background is not rejected cleanly and many "false" targets are produced. Therefore this method is not applicable either.
Summary of the invention
In view of this, the present invention provides an automatic aerial target detection method based on multi-subregion background fitting. On the one hand it retains the advantage of existing algorithms of being concise and easy to implement in hardware; at the same time, taking into account camera motion and interference such as cloud layers, it provides a more practical novel automatic aerial target detection method that reduces the generation of false targets.
In order to solve the above technical problem, the present invention is realized as follows:
An automatic aerial target detection method based on multi-subregion background fitting, comprising:
Step 1: dividing the image to be detected into P subregions, P being a preset integer;
Step 2: computing the average gray value A(m) of each subregion, m = 1, 2, ..., P;
Step 3: sorting the average gray values A(1)~A(P) of all subregions in descending order to obtain a new sequence of subregion average gray values A'(m), m = 1, 2, ..., P;
Step 4: when the target is "black" relative to the background, choosing Th = A'(P-1) × σ as the gray threshold Th, where σ < 1 and the larger the image contrast, the smaller the value of σ; binarizing the image with the gray threshold Th so that pixels whose gray value is less than or equal to Th are set to 255 (all other pixels to 0), obtaining the segmentation image;
when the target is "white" relative to the background, choosing Th = A'(2) × σ as the gray threshold Th, where σ > 1 and the larger the image contrast, the larger the value of σ; binarizing the image with the gray threshold Th so that pixels whose gray value is greater than or equal to Th are set to 255 (all other pixels to 0), obtaining the segmentation image (an illustrative sketch of steps 1 to 4 is given after step 5);
Step 5: performing target localization on the segmentation image obtained in step 4.
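For illustration only, the following minimal NumPy sketch shows steps 1 to 4 (subregion averaging, sorting, threshold selection and binarization). Everything beyond the patent text is an assumption: the image is a 2-D grayscale array, the P subregions form a square grid, each subregion is averaged as a whole (the Q × Q template variant appears in the detailed description), and the default σ values 0.9 and 1.1 are taken from the preferred embodiment.

    import numpy as np

    def subregion_averages(img, grid=(4, 4)):
        """Steps 1-2: split the image into P = grid[0]*grid[1] subregions
        and return the average gray value A(m) of each."""
        h, w = img.shape
        gh, gw = grid
        avgs = []
        for r in range(gh):
            for c in range(gw):
                block = img[r * h // gh:(r + 1) * h // gh,
                            c * w // gw:(c + 1) * w // gw]
                avgs.append(block.mean())
        return np.array(avgs)

    def segment(img, target_is_black=True, sigma=None):
        """Steps 3-4: sort the averages in descending order, take the
        second-darkest (or second-brightest) one, scale it by sigma and
        binarize the image with the resulting threshold."""
        a_sorted = np.sort(subregion_averages(img))[::-1]    # A'(1) >= ... >= A'(P)
        if target_is_black:
            sigma = 0.9 if sigma is None else sigma          # sigma < 1
            th = a_sorted[-2] * sigma                        # Th = A'(P-1) * sigma
            seg = np.where(img <= th, 255, 0).astype(np.uint8)
        else:
            sigma = 1.1 if sigma is None else sigma          # sigma > 1
            th = a_sorted[1] * sigma                         # Th = A'(2) * sigma
            seg = np.where(img >= th, 255, 0).astype(np.uint8)
        return seg, th

For a visible-light frame one would call, for example, seg, th = segment(frame, target_is_black=True); for an infrared frame, target_is_black=False.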
Preferably, step 5 obtains the position of the target by iteratively computing the image centroid.
Preferably, the target obtained in step 5 is verified using a known minimum target size: if the number of target highlight pixels is smaller than the minimum target size, the target is considered false.
Preferably, step 2 is: for each subregion, placing a square template at an arbitrary position within the subregion and taking the average gray value of the pixels inside the template as the average gray value A(m) of that subregion, m = 1, 2, ..., P.
Beneficial effects:
The present invention considers that the darkest and the brightest parts of the image belong to the target, and therefore takes the average gray value of the second-darkest or second-brightest subregion as the segmentation threshold and scales it appropriately according to the image contrast. For different images this yields a more targeted and more accurate segmentation threshold; compared with segmenting against the average value of an arbitrarily chosen image line as the reference, a better segmentation result is obtained and fewer false targets are produced. Moreover, the algorithm of this scheme is very simple and easy to implement in hardware.
Brief description of the drawings
Fig. 1 is a flow chart of the present invention.
Embodiment
The present invention will now be described in detail with reference to the accompanying drawings and examples.
The general principle of the present invention is to exploit the fact that there is a certain contrast, i.e. a gray-level difference, between the target and the background, and to accurately remove the background in real time by a suitable method so as to obtain the target.
Step 1: the entire image is divided into P subregions. In general, the larger P is, the more accurately the background is obtained and the better the effect. In this embodiment P = 16 is chosen.
Step 2: for each subregion, a Q × Q square template is placed at an arbitrary position inside the subregion, and the average gray value of the pixels inside the template is taken as the average gray value A(m) of that subregion, m = 1, 2, ..., P:
A(m) = (1 / Q²) · Σ f(i, j), the sum running over all pixels (i, j) in S(m),
where S(m) is the Q × Q region of the m-th subregion on which the template is placed and f(i, j) is the gray value of pixel (i, j). Q must be chosen so that the Q × Q template is smaller than the subregion, preferably covering more than 50% of the subregion area; Q = 10 is taken in this preferred embodiment.
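As a short illustration of this template-based averaging (a sketch only, assuming a NumPy grayscale image, a square subregion grid and a template anchored at the top-left corner of each subregion, whereas the patent allows any position inside the subregion):

    import numpy as np

    def template_average(img, grid=(4, 4), q=10):
        """Average gray value A(m) of a Q x Q template placed inside each
        subregion; here the template is anchored at the subregion's
        top-left corner."""
        h, w = img.shape
        gh, gw = grid
        avgs = []
        for r in range(gh):
            for c in range(gw):
                y0, x0 = r * h // gh, c * w // gw              # subregion origin
                patch = img[y0:y0 + q, x0:x0 + q]              # S(m), size Q x Q
                avgs.append(patch.astype(np.float64).mean())   # A(m) = sum f(i,j) / Q^2
        return np.array(avgs)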
Step 3: the subregion average gray values obtained above are sorted in descending order, yielding a new sequence of subregion average gray values A'(m), m = 1, 2, ..., P.
Step 4: when the target is "black" relative to the background, which is generally the case for visible-light images, the second-smallest average gray value is chosen and appropriately reduced to serve as the gray threshold Th, i.e. Th = A'(P-1) × σ with σ < 1. σ compensates for contrast, and its value is chosen according to the following principle: when the gray values of target and background differ markedly, i.e. when the image contrast is large, a relatively small σ is chosen; otherwise a relatively large one. Under normal circumstances σ = 0.9 is chosen. The entire image is then binarized with the above gray threshold Th to obtain the new segmentation image:
T(i, j) = 255 if f(i, j) ≤ Th, and T(i, j) = 0 otherwise,
where T(i, j) denotes the pixel value of pixel (i, j) in the segmentation image.
When the target is "white" relative to the background, which is generally the case for infrared images, the second-largest average gray value is chosen and appropriately enlarged to serve as the gray threshold, i.e. Th = A'(2) × σ with σ > 1. σ compensates for contrast, and its value is chosen according to the following principle: when the gray values of target and background differ markedly, i.e. when the image contrast is large, a relatively large σ is chosen; otherwise a relatively small one. Under normal circumstances σ = 1.1 is chosen. The entire image is then binarized with the above gray threshold to obtain the new segmentation image: T(i, j) = 255 if f(i, j) ≥ Th, and T(i, j) = 0 otherwise.
Step 5: target localization is performed on the segmented image. The position of the target is obtained by iteratively computing the image centroid, as follows. Over an M × N window the centroid is computed as
X = Σ i · T(i, j) / Σ T(i, j)   (1)
Y = Σ j · T(i, j) / Σ T(i, j)   (2)
with both sums taken over all pixels (i, j) of the window. Assuming the image size is W × H, the initial values of M and N in the above formulas are set to M = W, N = H. After the first centroid (X1, Y1) has been computed with formulas (1) and (2), the window is re-centred on (X1, Y1), M and N are reduced, for example by 20%, and the centroid (X2, Y2) is computed again, and so on. After the centroid has been computed several times, typically three times, the true position of the target is obtained accurately.
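A minimal sketch of this iterative centroid refinement is given below, assuming the binary segmentation image T from step 4 as input; the window clipping at the image border is an added assumption, while the 20% shrink per iteration and the three iterations follow the embodiment.

    import numpy as np

    def iterative_centroid(seg, shrink=0.8, iters=3):
        """Iteratively locate the target on the binary segmentation image:
        compute the intensity-weighted centroid over an M x N window,
        re-centre the window on it, shrink M and N and repeat."""
        h, w = seg.shape
        m, n = float(w), float(h)          # initial window covers the whole image
        cx, cy = w / 2.0, h / 2.0
        for _ in range(iters):
            x0, x1 = int(max(cx - m / 2, 0)), int(min(cx + m / 2, w))
            y0, y1 = int(max(cy - n / 2, 0)), int(min(cy + n / 2, h))
            win = seg[y0:y1, x0:x1].astype(np.float64)
            total = win.sum()
            if total == 0:                 # no bright pixels: nothing to locate
                return None
            cols, rows = np.meshgrid(np.arange(x0, x1), np.arange(y0, y1))
            cx = (cols * win).sum() / total    # formula (1): weighted column coordinate
            cy = (rows * win).sum() / total    # formula (2): weighted row coordinate
            m *= shrink                        # e.g. reduce the window by 20%
            n *= shrink
        return cx, cy

The returned (cx, cy) is the refined target position in image coordinates.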
Step 6: after the position of the target has been obtained, the target must be verified as true or false. In this embodiment the verification is based on the target size. Assuming the minimum target size is Ws × Hs, a B × B region centred on the located target position is taken as the region to be tested and the number of bright pixels in it is counted. If the number of bright pixels is greater than or equal to Ws × Hs, the target is real; otherwise it is false. The B × B region must be at least larger than Ws × Hs; preferably it is twice Ws × Hs.
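A sketch of this true/false discrimination, assuming the binary segmentation image and the position returned by the centroid step; the concrete choice B = 2 · max(Ws, Hs) is an assumption, since the patent only requires the B × B region to be larger than Ws × Hs and suggests roughly twice that size.

    import numpy as np

    def is_real_target(seg, pos, ws, hs, b=None):
        """Count bright pixels in a B x B window centred on the located
        position; the target is considered real only if the count is at
        least Ws * Hs."""
        if b is None:
            b = 2 * max(ws, hs)                  # assumed reading of "twice Ws x Hs"
        h, w = seg.shape
        cx, cy = int(round(pos[0])), int(round(pos[1]))
        x0, x1 = max(cx - b // 2, 0), min(cx + b // 2, w)
        y0, y1 = max(cy - b // 2, 0), min(cy + b // 2, h)
        bright = np.count_nonzero(seg[y0:y1, x0:x1])
        return bright >= ws * hs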
This completes the procedure.
In summary, the above is only a preferred embodiment of the present invention and is not intended to limit the scope of protection of the present invention. Any modification, equivalent substitution, improvement and the like made within the spirit and principles of the present invention shall fall within the scope of protection of the present invention.

Claims (4)

1. An automatic aerial target detection method based on multi-subregion background fitting, characterized by comprising:
Step 1: dividing the image to be detected into P subregions, P being a preset integer;
Step 2: computing the average gray value A(m) of each subregion, m = 1, 2, ..., P;
Step 3: sorting the average gray values A(1)~A(P) of all subregions in descending order to obtain a new sequence of subregion average gray values A'(m), m = 1, 2, ..., P;
Step 4: when the target is "black" relative to the background, choosing Th = A'(P-1) × σ as the gray threshold Th, where σ < 1 and the larger the image contrast, the smaller the value of σ; binarizing the image with the gray threshold Th so that pixels whose gray value is less than or equal to Th are set to 255, obtaining the segmentation image;
when the target is "white" relative to the background, choosing Th = A'(2) × σ as the gray threshold Th, where σ > 1 and the larger the image contrast, the larger the value of σ; binarizing the image with the gray threshold Th so that pixels whose gray value is greater than or equal to Th are set to 255, obtaining the segmentation image;
Step 5: performing target localization on the segmentation image obtained in step 4.
2. The automatic aerial target detection method based on multi-subregion background fitting according to claim 1, characterized in that step 5 obtains the position of the target by iteratively computing the image centroid.
3. The automatic aerial target detection method based on multi-subregion background fitting according to claim 1 or 2, characterized in that the target obtained in step 5 is verified using a known minimum target size: if the number of target highlight pixels is smaller than the minimum target size, the target is considered false.
4. The automatic aerial target detection method based on multi-subregion background fitting according to claim 1, characterized in that step 2 is: for each subregion, placing a square template at an arbitrary position within the subregion and taking the average gray value of the pixels inside the template as the average gray value A(m) of that subregion, m = 1, 2, ..., P.
CN201710384738.6A 2017-05-26 2017-05-26 Air target automatic detection method based on multi-subregion background fitting Active CN107316318B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710384738.6A CN107316318B (en) 2017-05-26 2017-05-26 Air target automatic detection method based on multi-subregion background fitting

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710384738.6A CN107316318B (en) 2017-05-26 2017-05-26 Air target automatic detection method based on multi-subregion background fitting

Publications (2)

Publication Number Publication Date
CN107316318A (en) 2017-11-03
CN107316318B CN107316318B (en) 2020-06-02

Family

ID=60182137

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710384738.6A Active CN107316318B (en) 2017-05-26 2017-05-26 Air target automatic detection method based on multi-subregion background fitting

Country Status (1)

Country Link
CN (1) CN107316318B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110887563A (en) * 2019-11-18 2020-03-17 中国科学院上海技术物理研究所 Hyperspectral area array detector bad element detection method
CN111862131A (en) * 2020-07-31 2020-10-30 易思维(杭州)科技有限公司 Adhesive tape edge detection method and application thereof
CN112581374A (en) * 2019-09-29 2021-03-30 深圳市光鉴科技有限公司 Speckle sub-pixel center extraction method, system, device and medium
CN116703741A (en) * 2022-09-27 2023-09-05 荣耀终端有限公司 Image contrast generation method and device and electronic equipment

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5859928A (en) * 1996-06-21 1999-01-12 Hewlett-Packard Company Jitter-form background control for minimizing spurious gray cast in scanned images
CN102855634B (en) * 2011-06-28 2017-03-22 中兴通讯股份有限公司 Image detection method and image detection device
US9123133B1 (en) * 2014-03-26 2015-09-01 National Taipei University Of Technology Method and apparatus for moving object detection based on cerebellar model articulation controller network
CN103955940B (en) * 2014-05-16 2018-01-16 天津重方科技有限公司 A kind of detection method of the human body cache based on X ray backscatter images
CN105761238B (en) * 2015-12-30 2018-11-06 河南科技大学 A method of passing through gray-scale statistical data depth information extraction well-marked target
CN106557549A (en) * 2016-10-24 2017-04-05 珠海格力电器股份有限公司 The method and apparatus of identification destination object

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112581374A (en) * 2019-09-29 2021-03-30 深圳市光鉴科技有限公司 Speckle sub-pixel center extraction method, system, device and medium
CN110887563A (en) * 2019-11-18 2020-03-17 中国科学院上海技术物理研究所 Hyperspectral area array detector bad element detection method
CN110887563B (en) * 2019-11-18 2021-10-01 中国科学院上海技术物理研究所 Hyperspectral area array detector bad element detection method
CN111862131A (en) * 2020-07-31 2020-10-30 易思维(杭州)科技有限公司 Adhesive tape edge detection method and application thereof
CN111862131B (en) * 2020-07-31 2021-03-19 易思维(杭州)科技有限公司 Adhesive tape edge detection method and application thereof
CN116703741A (en) * 2022-09-27 2023-09-05 荣耀终端有限公司 Image contrast generation method and device and electronic equipment
CN116703741B (en) * 2022-09-27 2024-03-15 荣耀终端有限公司 Image contrast generation method and device and electronic equipment

Also Published As

Publication number Publication date
CN107316318B (en) 2020-06-02

Similar Documents

Publication Publication Date Title
Deng et al. Infrared moving point target detection based on spatial–temporal local contrast filter
CN106650665B (en) Face tracking method and device
Bai et al. Enhancement of dim small target through modified top-hat transformation under the condition of heavy clutter
CN107316318A (en) Aerial target automatic testing method based on multiple subarea domain Background fitting
CN101976436B (en) Pixel-level multi-focus image fusion method based on correction of differential image
Park Shape-resolving local thresholding for object detection
CN105469090B (en) Small target detecting method and device in infrared image based on frequency-domain residual
US20150092051A1 (en) Moving object detector
Wang et al. Clutter-adaptive infrared small target detection in infrared maritime scenarios
CN101326549A (en) Method for detecting streaks in digital images
JP2007156655A (en) Variable region detection apparatus and its method
CN111191535B (en) Pedestrian detection model construction method based on deep learning and pedestrian detection method
Zhao et al. Principal curvature for infrared small target detection
US20200302155A1 (en) Face detection and recognition method using light field camera system
US9014426B2 (en) Method and device for the detection of moving objects in a video image sequence
JP5367244B2 (en) Target detection apparatus and target detection method
CN103607558A (en) Video monitoring system, target matching method and apparatus thereof
JP4818285B2 (en) Congestion retention detection system
JP2014048131A (en) Image processing device, method, and program
CN113409334B (en) Centroid-based structured light angle point detection method
US10331977B2 (en) Method for the three-dimensional detection of objects
CN111325073A (en) Monitoring video abnormal behavior detection method based on motion information clustering
CN116703755A (en) Omission risk monitoring system for medical waste refrigeration house
Duncan et al. Relational entropy-based saliency detection in images and videos
JP2009116686A (en) Imaging target detection apparatus and method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant