CN101261736A - Collaborative detection method for multi-source image motive target - Google Patents

Collaborative detection method for multi-source image motive target

Info

Publication number
CN101261736A
Authority
CN
China
Prior art keywords
source image
detection
image
result
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CNA2008100179213A
Other languages
Chinese (zh)
Inventor
张艳宁
郑江滨
郗润平
杨根
张秀伟
孙瑾秋
仝小敏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northwestern Polytechnical University
Original Assignee
Northwestern Polytechnical University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northwestern Polytechnical University filed Critical Northwestern Polytechnical University
Priority to CNA2008100179213A priority Critical patent/CN101261736A/en
Publication of CN101261736A publication Critical patent/CN101261736A/en
Legal status: Pending

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a collaborative detection method for moving targets in multi-source images. The method first performs image registration to unify the coordinate systems of the multi-source images; it then carries out moving-target detection on each multi-source image sequence, evaluates every detection result, and corrects the results according to feedback from the evaluation until a reliable detection output is obtained. Because the feedback correction makes full use of the information in the multi-source images, false targets with low confidence are discarded, reducing the false-alarm rate, while the confidence of real targets is increased, improving the detection rate. By exploiting multispectral information and using a closed feedback loop to correct the results, the method handles problems such as occlusion and shadow well; the average detection rate rises from 88.0 percent in the prior art to 92.3 percent, avoiding false detections and missed detections.

Description

Collaborative detection method for multi-source image motive target
Technical field
The present invention relates to a method for the collaborative detection of moving targets in multi-source images.
Background technology
The document "Long-range target detection based on multi-sensor information fusion" (Infrared Technology, 2006, Vol. 28(12), pp. 695-698) discloses a moving-target detection method based on feature-level fusion. The method first applies an accumulated frame-difference algorithm to obtain the moving regions, then performs moving-target extraction and feature-level fusion. At the fusion stage the algorithm measures the credibility of the detection result of each source image and forms a weighted sum of the results of the different sensors, but the results are not fed back to the processing units, so the multi-sensor information cannot be fully exploited; the average detection rate is only 88.0%.
Summary of the invention
To overcome the low detection accuracy of the prior art, the invention provides a collaborative detection method for moving targets in multi-source images that exploits multispectral information and uses a closed feedback loop to correct the results; it handles problems such as occlusion and shadow well and improves detection accuracy.
The technical solution adopted by the present invention to solve its technical problem is a collaborative detection method for moving targets in multi-source images, characterized by comprising the following steps:
(a) First, template matching is used to match the feature points; the transformation parameters between the images are then estimated from the matched feature points, and the optimal set is chosen as the final transformation parameters, completing image registration;
(b) Several consecutive frames are analyzed to obtain the noise of the imaging system and its distribution density; from the noise density, the high segmentation threshold is obtained by an iterative procedure; then, half of a normal distribution is constructed to obtain the low segmentation threshold; finally, the image is segmented twice, with the high threshold and the low threshold, and further processed with mathematical morphology to obtain the moving targets of the visible image;
(c) First, three successively decreasing adaptive thresholds are designed, so that the current frame is divided into four layers by gray value; for the three layers with higher gray values, a pair of adaptive high and low thresholds is constructed from each layer's gray-level characteristics; the difference images of the three layers are then detected in turn, and neighboring detection results within each layer are merged; finally, the detection results are further processed with mathematical morphology to obtain the moving targets of the infrared image;
(d) The detected visible-image moving targets and infrared-image moving targets are compared against the preset satisfaction level to decide between closed-loop and open-loop operation; results that are to enter the feedback loop are marked; when the detection result of a sensor has entered the feedback loop repeatedly and still cannot be made satisfactory, it should not enter the feedback loop again and the iteration should stop; only when a result judged unsatisfactory by the evaluation has entered the feedback loop few times are the corresponding parameters corrected.
The beneficial effect of the invention is that, by exploiting multispectral information and using a closed feedback loop to correct the results, it handles problems such as occlusion and shadow well; the average detection rate rises from 88.0% in the prior art to 92.3%, avoiding false detections and missed detections.
The present invention is described in detail below with reference to the drawings and an embodiment.
Description of drawings
The accompanying drawing is a flow chart of the collaborative detection method for multi-source image moving targets of the present invention.
Embodiment
Referring to the accompanying drawing.
1. Image registration.
First, the approximate range of the feature points is selected manually; the corner points within this range are then taken as the feature points of the image and are extracted with the Harris operator.
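As an illustration, a minimal Harris response computation over a manually chosen window can be sketched as follows. The patent names only the Harris operator; the 3x3 box window, the constant k = 0.04, and the region-of-interest handling are assumptions of this sketch.

```python
import numpy as np

def harris_response(img, k=0.04):
    """Harris corner response R = det(M) - k * trace(M)^2 per pixel, with
    central-difference gradients and a 3x3 box window for the structure
    tensor M (window size and k are illustrative choices)."""
    img = img.astype(float)
    Ix = np.zeros_like(img)
    Iy = np.zeros_like(img)
    Ix[:, 1:-1] = (img[:, 2:] - img[:, :-2]) / 2.0
    Iy[1:-1, :] = (img[2:, :] - img[:-2, :]) / 2.0

    def box3(a):  # 3x3 box filter via shifted sums
        p = np.pad(a, 1)
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3))

    Sxx, Syy, Sxy = box3(Ix * Ix), box3(Iy * Iy), box3(Ix * Iy)
    return Sxx * Syy - Sxy ** 2 - k * (Sxx + Syy) ** 2

def corners_in_region(img, region, thresh):
    """Corner coordinates inside a manually chosen window
    (rmin, rmax, cmin, cmax), mirroring the manual selection of the
    approximate feature-point range."""
    R = harris_response(img)
    rmin, rmax, cmin, cmax = region
    mask = np.zeros(R.shape, dtype=bool)
    mask[rmin:rmax, cmin:cmax] = True
    return np.argwhere((R > thresh) & mask)
```

A bright square on a dark background, for example, yields responses above the threshold only near its four corners.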
Secondly, feature matching is carried out. Feature-point matching is divided into two steps; the detailed process is as follows:
1. Coarse matching of feature points.
For coarse matching, a (2k+1)×(2k+1) template centered on the feature point is set up for every feature point in F_1 and F_2. A feature point in F_1 is then selected, its template is compared with the template of each control point in F_2, and the correlation of the two templates is computed.
The correlation between feature point (x_i, y_i) in F_1 and feature point (x'_j, y'_j) in F_2 is computed as follows:

$$\mathrm{cor}_{ij}=\frac{\sum_{u=-k}^{k}\sum_{v=-k}^{k}\bigl[f_1(x_i+u,y_i+v)-\bar f_1(x_i,y_i)\bigr]\bigl[f_2(x'_j+u,y'_j+v)-\bar f_2(x'_j,y'_j)\bigr]}{\Bigl\{\sum_{u=-k}^{k}\sum_{v=-k}^{k}\bigl[f_1(x_i+u,y_i+v)-\bar f_1(x_i,y_i)\bigr]^2\,\sum_{u=-k}^{k}\sum_{v=-k}^{k}\bigl[f_2(x'_j+u,y'_j+v)-\bar f_2(x'_j,y'_j)\bigr]^2\Bigr\}^{1/2}}\qquad(1)$$

where f_1 denotes the gray level of the feature point of I_1, f_2 denotes the gray level of the feature point of I_2, and \bar f denotes the mean gray level of all pixels in the template. From the formula above, -1 ≤ cor_ij ≤ 1.
Using formula (1), a threshold T_1 is set and the feature-point pairs whose correlation is greater than T_1 are chosen. This yields the coarsely matched feature-point sets M_1 = {(x_i, y_i) | 0 < i ≤ n} and M_2 = {(x'_i, y'_i) | 0 < i ≤ n}, where (x_i, y_i) in M_1 matches (x'_i, y'_i) in M_2 and n is the number of matched feature-point pairs.
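The coarse-matching step can be sketched directly from formula (1). The function names, the template half-width k, and the best-candidate selection policy are illustrative assumptions; the patent specifies only the correlation measure and the threshold T_1.

```python
import numpy as np

def template_correlation(I1, I2, p1, p2, k=3):
    """Normalized cross-correlation of formula (1) between the
    (2k+1)x(2k+1) templates centered on p1 in I1 and p2 in I2."""
    (x1, y1), (x2, y2) = p1, p2
    t1 = I1[x1 - k:x1 + k + 1, y1 - k:y1 + k + 1].astype(float)
    t2 = I2[x2 - k:x2 + k + 1, y2 - k:y2 + k + 1].astype(float)
    a, b = t1 - t1.mean(), t2 - t2.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def coarse_match(I1, I2, pts1, pts2, T1=0.8, k=3):
    """For each feature point of I1, keep the best-correlated candidate of
    I2 when its correlation exceeds T1, yielding the sets M1 and M2."""
    M1, M2 = [], []
    for p in pts1:
        score, q = max((template_correlation(I1, I2, p, q, k), q)
                       for q in pts2)
        if score > T1:
            M1.append(p)
            M2.append(q)
    return M1, M2
```

On a shifted copy of an image, each feature point correlates perfectly with its translated counterpart, so the correct pairs are recovered.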
2. Fine matching of feature points.
Taking the affine model as an example, the fine-matching step is described.
The formulas of the transformation model are as follows:

$$x=\frac{m_1 x' + m_2 y' + m_3}{m_7 x' + m_8 y' + 1}\qquad(2)$$

$$y=\frac{m_4 x' + m_5 y' + m_6}{m_7 x' + m_8 y' + 1}\qquad(3)$$

Four pairs of matching feature points are selected arbitrarily from M_1 and M_2: (x_1, y_1), (x_2, y_2), (x_3, y_3), (x_4, y_4) and (x'_1, y'_1), (x'_2, y'_2), (x'_3, y'_3), (x'_4, y'_4). From the transformation model, one set of model parameters P_i = (m_1, m_2, m_3, m_4, m_5, m_6, m_7, m_8) is computed.
For every other feature point in M_2, such as (x'_1, y'_1) (whose corresponding point in M_1 is (x_1, y_1)), the corresponding point coordinates under parameters P_i are computed. The template centered on the mapped point is extracted in I_1, and the correlation between it and the template at (x_1, y_1) is computed. If the correlation is greater than a threshold T_2, the point pair is considered consistent with this set of parameters. The number sum_i of feature-point pairs consistent with parameters P_i is counted.
The maximum of formula (4) is selected as the final number of matched feature-point pairs between the images, and the corresponding parameter set P_i is taken as the parameters of the inter-image transformation model. This not only rejects the incorrect matches from the coarse matching, but also yields every parameter of the inter-image transformation model.

$$\mathrm{sum}=\max_i(\mathrm{sum}_i),\qquad 0<i\le C_n^4\qquad(4)$$

The above process is repeated until all combinations of four point pairs in M_1 and M_2 (C_n^4 in total) have been tried; the feature-matching stage is then complete.
2. Visible-image moving-target detection.
Record the set of points whose gray value does not change between the current frame and the first and third preceding frames, obtaining point set 1.
Within point set 1, record the points whose gray value changes in the second preceding frame, obtaining point set 2.
Compute, for each frame, the density ρ of the noise caused by the imaging mechanism.
Set an arbitrary initial high threshold and iterate: when the density ρ′ of the residual noise obtained after binarization is very close to ρ, i.e. |ρ − ρ′| ≤ ξ, where ξ is a very small prespecified number, the threshold obtained at that moment is the high threshold th*.
Build the noise model from half of a normal distribution and solve for the low threshold tl*.
Difference the current frame against the previous frame, and segment the difference image twice, with the high threshold and the low threshold.
Merge neighborhoods according to a neighborhood test, then apply morphological processing to obtain the final detection result.
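The iterative high-threshold search and the double segmentation can be sketched as follows. The unit step size, the initial threshold, and the single-dilation neighborhood merge are simplified stand-ins for the patent's procedure; the low threshold tl* is taken as given rather than derived from the half-normal noise model.

```python
import numpy as np

def high_threshold(diff, rho, xi=0.02, max_iter=256):
    """Iterate the threshold until the density rho' of pixels surviving
    binarization approaches the estimated noise density: |rho - rho'| <= xi."""
    th = diff.max() / 2.0          # arbitrary initial high threshold
    for _ in range(max_iter):
        rho_p = np.mean(diff > th)
        if abs(rho_p - rho) <= xi:
            break
        th += 1.0 if rho_p > rho else -1.0
    return th

def double_threshold_segment(diff, th, tl):
    """Two-pass segmentation: keep low-threshold pixels that touch a
    high-threshold pixel in their 8-neighborhood (one dilation step as a
    stand-in for the neighborhood merging)."""
    strong = diff >= th
    weak = diff >= tl
    grow = strong.copy()
    p = np.pad(strong, 1)
    H, W = strong.shape
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            grow |= p[1 + dr:1 + dr + H, 1 + dc:1 + dc + W]
    return weak & grow
```

Weak responses adjacent to a strong core survive the second pass, while isolated weak responses (likely noise) are discarded.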
3. Infrared-image moving-target detection.
Step 1: compute the three decreasing inter-layer segmentation thresholds θ_1, θ_2, θ_3; the segmentation thresholds are computed with the Otsu method. Also compute the corresponding three pairs of intra-layer high and low segmentation thresholds Th_1, Tl_1, Th_2, Tl_2, Th_3, Tl_3.
High threshold of the high-brightness layer:
Th_1 = θ_1 − θ_2 (5)
Low threshold of the high-brightness layer:
Tl_1 = η_1 (6)
High threshold of the middle-brightness layer:
Th_2 = θ_2 − θ_3 (7)
Low threshold of the middle-brightness layer:
Tl_2 = η_2 (8)
High threshold of the low-brightness layer:
Th_3 = θ_3 (9)
Low threshold of the low-brightness layer:
Tl_3 = η_3 (10)
η_1, η_2, η_3 are the segmentation thresholds of the difference results within the high-, middle-, and low-brightness layers, respectively. The difference result within each layer is the difference between that brightness layer and its gray-level mean; its histogram is clearly bimodal, and the threshold separating the two peaks can be taken as the desired low threshold.
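One plausible reading of step 1 is to apply Otsu's method recursively: a global split gives θ_2, and re-applying Otsu to each half gives θ_1 and θ_3. The patent does not spell out how the three decreasing thresholds are obtained, so this decomposition is an assumption of the sketch.

```python
import numpy as np

def otsu(values, bins=256, vrange=(0, 256)):
    """Otsu threshold: maximize the between-class variance of the histogram."""
    hist, edges = np.histogram(values, bins=bins, range=vrange)
    p = hist.astype(float) / max(hist.sum(), 1)
    centers = (edges[:-1] + edges[1:]) / 2.0
    w0 = np.cumsum(p)                # class-0 weight up to each bin
    m0 = np.cumsum(p * centers)      # class-0 cumulative mean mass
    mt = m0[-1]                      # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        var_b = (mt * w0 - m0) ** 2 / (w0 * (1.0 - w0))
    var_b[~np.isfinite(var_b)] = 0.0
    return centers[np.argmax(var_b)]

def layer_thresholds(frame):
    """theta1 > theta2 > theta3: one global Otsu split, then Otsu again on
    each half (an assumed decomposition, see lead-in)."""
    v = frame.ravel()
    t2 = otsu(v)
    t1 = otsu(v[v > t2])
    t3 = otsu(v[v <= t2])
    return t1, t2, t3
```

On a frame with three well-separated brightness modes, the three thresholds fall between the modes and come out in decreasing order.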
Step 2: for the high-brightness layer:
(a) Difference the pixels of the current frame f_t(x, y) whose gray value is greater than θ_1 against the pixels at the corresponding positions in f_{t−k}(x, y), f_{t−k−1}(x, y) and f_{t−k−2}(x, y), obtaining the difference images f_1(x, y), f_2(x, y) and f_3(x, y);
(b) Binarize f_1(x, y), f_2(x, y) and f_3(x, y) with Th_1 and Tl_1 respectively, obtaining f′_1(x, y), f′_2(x, y), f′_3(x, y) and f″_1(x, y), f″_2(x, y), f″_3(x, y), i.e.

$$f'_i(x,y)=\begin{cases}255 & \text{if } f_i(x,y)\ge Th_1\\ 0 & \text{else}\end{cases},\qquad f''_i(x,y)=\begin{cases}255 & \text{if } f_i(x,y)\ge Tl_1\\ 0 & \text{else}\end{cases}\qquad(11)$$

(c) Eliminate the influence of noise points on the detection result: if the binarized difference images of the current frame against the k-th, (k+1)-th and (k+2)-th preceding frames give the same result, the pixel is target or background; otherwise it is considered a noise point, that is:

$$d'_1(x,y)=\begin{cases}f'_1(x,y) & \text{if } f'_1(x,y)=f'_2(x,y)=f'_3(x,y)\\ 0 & \text{else}\end{cases}\qquad(12)$$

$$d''_1(x,y)=\begin{cases}f''_1(x,y) & \text{if } f''_1(x,y)=f''_2(x,y)=f''_3(x,y)\\ 0 & \text{else}\end{cases}\qquad(13)$$

(d) In the neighborhood of the detection result obtained with the high threshold Th_1, substitute the result detected with the low threshold Tl_1; this completes detection within the high-brightness layer and gives the first "seed region".
Step 3: apply the operations of step 2 to the pixels that lie in the neighborhood of the first "seed region" and belong to the middle-brightness layer, obtaining the second "seed region".
Step 4: apply the operations of step 3 to the pixels that lie in the neighborhood of the second "seed region" and belong to the low-brightness layer, obtaining the third "seed region", i.e. the final raw detection result.
Apply morphological processing to the final detection result to handle fragmentation in the detection.
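Formulas (11)-(13) for a single brightness layer can be sketched as follows. The use of absolute differences and the masking of the layer's pixels are assumptions, since the patent leaves the sign convention of the difference images unspecified.

```python
import numpy as np

def layer_detection(ft, prev_frames, theta, Th, Tl):
    """Formulas (11)-(13) for one brightness layer: difference the layer's
    pixels of the current frame against three earlier frames, binarize with
    the high and low thresholds, and keep only pixels on which all three
    binarized difference images agree."""
    bright = ft > theta                                 # pixels of this layer
    diffs = [np.abs(ft.astype(int) - p.astype(int)) * bright
             for p in prev_frames]                      # f1, f2, f3
    f_hi = [np.where(d >= Th, 255, 0) for d in diffs]   # f'_i, formula (11)
    f_lo = [np.where(d >= Tl, 255, 0) for d in diffs]   # f''_i, formula (11)
    agree = lambda a, b, c: np.where((a == b) & (b == c), a, 0)
    return agree(*f_hi), agree(*f_lo)                   # (12) and (13)
```

A pixel whose three difference images disagree (e.g. because only one earlier frame differs) is suppressed as a noise point, exactly as step (c) requires.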
The detected visible-image moving targets and infrared-image moving targets are compared against the preset satisfaction level to decide between closed-loop and open-loop operation. Results that are to enter the feedback loop are marked; when the detection result of a sensor has entered the feedback loop repeatedly and still cannot be made satisfactory, it should not enter the feedback loop again and the iteration should stop. Only when a result judged unsatisfactory by the evaluation has entered the feedback loop few times are the corresponding parameters corrected.
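The closed-loop control described above can be sketched as generic control logic. Every callback here (detection, evaluation, parameter correction) is a placeholder, since the patent does not specify these procedures concretely; `max_loops` stands in for the unspecified cap on feedback entries.

```python
def collaborative_detect(detect_fns, evaluate, correct, satisfied, max_loops=3):
    """Closed-loop control of step (d): evaluate every sensor's detection
    result, feed unsatisfactory results back for parameter correction, and
    stop feeding a result back once it has entered the loop max_loops times."""
    results = {name: fn() for name, fn in detect_fns.items()}
    loops = {name: 0 for name in detect_fns}     # feedback entry counts
    pending = [n for n in results if evaluate(results[n]) < satisfied]
    while pending:
        name = pending.pop(0)
        if loops[name] >= max_loops:
            continue             # looped too often: give up on this result
        loops[name] += 1
        correct(name)            # correct the corresponding parameters
        results[name] = detect_fns[name]()       # detect again
        if evaluate(results[name]) < satisfied:
            pending.append(name)
    return results
```

A satisfactory result (here the infrared one) bypasses the loop entirely, while an unsatisfactory one is corrected until it passes or exhausts its loop budget.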
Tests show that the average detection rate of the method of the present invention reaches 92.3%.

Claims (1)

1. A collaborative detection method for moving targets in multi-source images, characterized by comprising the following steps:
(a) First, template matching is used to match the feature points; the transformation parameters between the images are then estimated from the matched feature points, and the optimal set is chosen as the final transformation parameters, completing image registration;
(b) Several consecutive frames are analyzed to obtain the noise of the imaging system and its distribution density; from the noise density, the high segmentation threshold is obtained by an iterative procedure; then, half of a normal distribution is constructed to obtain the low segmentation threshold; finally, the image is segmented twice, with the high threshold and the low threshold, and further processed with mathematical morphology to obtain the moving targets of the visible image;
(c) First, three successively decreasing adaptive thresholds are designed, so that the current frame is divided into four layers by gray value; for the three layers with higher gray values, a pair of adaptive high and low thresholds is constructed from each layer's gray-level characteristics; the difference images of the three layers are then detected in turn, and neighboring detection results within each layer are merged; finally, the detection results are further processed with mathematical morphology to obtain the moving targets of the infrared image;
(d) The detected visible-image moving targets and infrared-image moving targets are compared against the preset satisfaction level to decide between closed-loop and open-loop operation; results that are to enter the feedback loop are marked; when the detection result of a sensor has entered the feedback loop repeatedly and still cannot be made satisfactory, it should not enter the feedback loop again and the iteration should stop; only when a result judged unsatisfactory by the evaluation has entered the feedback loop few times are the corresponding parameters corrected.
CNA2008100179213A 2008-04-10 2008-04-10 Collaborative detection method for multi-source image motive target Pending CN101261736A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNA2008100179213A CN101261736A (en) 2008-04-10 2008-04-10 Collaborative detection method for multi-source image motive target

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CNA2008100179213A CN101261736A (en) 2008-04-10 2008-04-10 Collaborative detection method for multi-source image motive target

Publications (1)

Publication Number Publication Date
CN101261736A true CN101261736A (en) 2008-09-10

Family

ID=39962176

Family Applications (1)

Application Number Title Priority Date Filing Date
CNA2008100179213A Pending CN101261736A (en) 2008-04-10 2008-04-10 Collaborative detection method for multi-source image motive target

Country Status (1)

Country Link
CN (1) CN101261736A (en)


Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101493937B (en) * 2009-02-27 2010-10-13 西北工业大学 Method for detecting content reliability of digital picture by utilizing gradient local entropy
WO2010118629A1 (en) * 2009-04-17 2010-10-21 The Hong Kong University Of Science And Technology Method, device and system for facilitating motion estimation and compensation of feature-motion decorrelation
CN102396000A (en) * 2009-04-17 2012-03-28 香港科技大学 Method, device and system for facilitating motion estimation and compensation of feature-motion decorrelation
CN102396000B (en) * 2009-04-17 2013-08-21 香港科技大学 Method, device and system for facilitating motion estimation and compensation of feature-motion decorrelation
US9286691B2 (en) 2009-04-17 2016-03-15 The Hong Kong University Of Science And Technology Motion estimation and compensation of feature-motion decorrelation
CN101794442B (en) * 2010-01-25 2011-11-09 哈尔滨工业大学 Calibration method for extracting illumination-insensitive information from visible images
CN102446354A (en) * 2011-08-29 2012-05-09 北京建筑工程学院 Integral registration method of high-precision multisource ground laser point clouds
CN106462960A (en) * 2014-04-23 2017-02-22 微软技术许可有限责任公司 Collaborative alignment of images
CN105741284A (en) * 2016-01-28 2016-07-06 中国船舶重工集团公司第七一〇研究所 Multi-beam forward-looking sonar target detection method
CN105741284B (en) * 2016-01-28 2018-10-26 中国船舶重工集团公司第七一〇研究所 A kind of multi-beam Forward-looking Sonar object detection method
CN109300148A (en) * 2018-09-19 2019-02-01 西北工业大学 Multi-source image method for registering based on method collaboration
CN109300148B (en) * 2018-09-19 2021-05-18 西北工业大学 Multi-source image registration method based on method cooperation
CN109685078A (en) * 2018-12-17 2019-04-26 浙江大学 Infrared image recognition based on automatic marking
CN109685078B (en) * 2018-12-17 2022-04-05 浙江大学 Infrared image identification method based on automatic annotation
CN117953223A (en) * 2024-03-26 2024-04-30 大连华璟科技有限公司 Animal intelligent detection method and system based on infrared image processing
CN117953223B (en) * 2024-03-26 2024-06-11 大连华璟科技有限公司 Animal intelligent detection method and system based on infrared image processing


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Open date: 20080910