CN102201058B - Cat eye effect object recognition algorithm of active and passive imaging system sharing same aperture - Google Patents

Cat eye effect object recognition algorithm of active and passive imaging system sharing same aperture

Info

Publication number
CN102201058B
CN102201058B (application CN201110124471A)
Authority
CN
China
Prior art keywords
image
algorithm
point
passive
active
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN 201110124471
Other languages
Chinese (zh)
Other versions
CN102201058A (en)
Inventor
李丽
刘丽
党二升
吴磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beihang University
Original Assignee
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University filed Critical Beihang University
Priority to CN 201110124471 priority Critical patent/CN102201058B/en
Publication of CN102201058A publication Critical patent/CN102201058A/en
Application granted granted Critical
Publication of CN102201058B publication Critical patent/CN102201058B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Processing (AREA)

Abstract

The invention provides a cat eye effect target recognition algorithm for an active and passive imaging system sharing the same aperture. The algorithm is suitable for detecting and recognizing optical targets that exhibit the cat eye effect. It improves on cat eye effect recognition algorithms based on grey-scale and shape features: for the shared-aperture active and passive imaging system, a difference operation is performed between the active and passive images after image registration, so that the background is filtered out. The coarse registration algorithm is also improved: the search range for matching feature points in the second image is restricted in both the horizontal and vertical directions, raising the efficiency of the algorithm. The algorithm has good practical value and wide application prospects in the fields of laser imaging and digital image processing.

Description

"Cat eye" effect target recognition algorithm of an active and passive imaging system sharing the same aperture
(1) Technical field
The present invention is a target recognition algorithm for aiming equipment fitted with optoelectronic devices or optical lenses. An active and passive imaging system sharing the same aperture is used to acquire the active and passive images and to recognize "cat eye" effect targets. The invention belongs to the fields of laser imaging and digital image processing.
(2) Background
When a laser is aimed at an optical system containing a lens, the optical system can be approximated as the combination of a lens and a detector focal plane. Laser light entering the field of view is focused by the optics into an "image patch" near the focal plane. This patch acts as a secondary light source: the laser is reflected back through the optics with a certain beam divergence angle and, by the principle of reversibility of light paths, the beam returns along its original path. The result is a well-collimated, energy-concentrated retro-reflection whose intensity greatly exceeds that of ordinary diffuse reflection. This is the "cat eye" effect of an imaging optical system.
The cat eye characteristic of optical systems has high practical value. Laser active detection systems based on the cat eye effect are widely used because of their early-warning advantages, and the shared-aperture active and passive imaging system is one kind of laser active detection system. At present, the laser source of a cat eye effect active detection system is either a continuous-wave (CW) laser or a pulsed laser. Pulsed-source systems lack good dynamic scanning characteristics, whereas CW-source systems scan well and can be mounted in a pan-tilt platform for servo-driven search scanning. A pulsed-source system must capture a video stream of the same scene for subsequent processing; the present algorithm uses a CW-source system and needs to capture only one active/passive image pair of the scene (the active image is formed by light emitted from the active source and reflected back by the target under test; the passive image is formed by the target's own radiation or by reflected natural light). Domestic recognition of cat eye targets mostly uses a CW-source active detection system, captures only a single active image of the scene, and identifies the cat eye target from its grey-scale and shape features. However, that method performs poorly against complex backgrounds and lacks reliability. The present algorithm improves it by adding an active/passive image difference operation after image registration, and finally locks the "cat eye" effect target by shape discrimination.
As shown in Figure 1, the light received by the shared-aperture system is first split into two paths by a dichroic beam splitter (a cold mirror, which transmits infrared light and reflects visible light). The transmitted light is imaged by CCD1 (Charge-Coupled Device) to form the active image; the reflected light is imaged by CCD2 to form the passive image.
(3) Summary of the invention
1. Purpose: recognition methods based on intensity and shape features are unsuited to target recognition against complex backgrounds. The present invention proposes an improved algorithm for the shared-aperture imaging system, namely performing the image difference operation after image registration. The original images processed by the method are an active/passive image pair captured with a continuous laser as the illumination source. The active/passive difference operation filters out the background, and because image registration is applied before differencing, the interference noise produced when subtracted pixels do not correspond to the same scene point is greatly reduced. Recognition of "cat eye" effect targets with this algorithm is therefore more accurate and efficient.
2. Technical scheme: two CCD cameras capture the active and passive images respectively. The image taken by one of the cameras is first horizontally mirrored, both images are then Gaussian filtered, and image registration is performed. Registration comprises feature point detection and feature point matching. For matching, the Euclidean distance between feature vectors is first used as the similarity measure to obtain a coarsely matched feature point set; the highly robust RANSAC (RANdom SAmple Consensus) algorithm then iteratively purifies the matched point pairs, and the transformation between the two images is computed with least squares to obtain the perspective transformation matrix. Using this matrix, one image is mapped onto the matching position of the other, and the image difference operation is performed. Finally the "cat eye" effect target is locked by shape discrimination.
Figure 2 is the flowchart of the algorithm; the specific implementation steps are as follows:
Step 1: horizontal mirror flip and Gaussian filtering;
Step 2: image registration;
Step 3: image transformation;
Step 4: image differencing;
Step 5: image binarization;
Step 6: region labeling;
Step 7: shape feature discrimination;
Step 8: target region locking.
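The intent of Steps 3–5 — once the two images are registered, differencing cancels the shared background so that only the strong cat eye return survives thresholding — can be illustrated on synthetic data. This is a minimal NumPy sketch, not the patented implementation; the mean-plus-3-sigma threshold is an assumed stand-in for the adaptive thresholding of Step 5.

```python
import numpy as np

def difference_and_threshold(active, passive):
    # Steps 4-5 in miniature: difference the registered pair,
    # then keep pixels far above the residual noise level
    diff = np.abs(active.astype(np.int64) - passive.astype(np.int64))
    thresh = diff.mean() + 3 * diff.std()   # assumed stand-in threshold
    return diff > thresh

# synthetic registered pair: identical background, plus a bright
# retro-reflection patch present only in the active (laser) image
rng = np.random.default_rng(0)
background = rng.integers(0, 100, size=(64, 64))
passive = background.copy()
active = background.copy()
active[30:34, 30:34] += 200   # the simulated "cat eye" return
mask = difference_and_threshold(active, passive)
```

Only the 16 pixels of the simulated retro-reflection survive; the shared background cancels exactly because the pair is perfectly registered, which is precisely why registration precedes differencing in this algorithm.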
3. Advantages and effects: the advantage of the present invention is that the active/passive image difference operation is applied after image registration, which filters out the background. Because of placement errors between the two CCDs, directly differencing the active and passive images without registration means that the subtracted pixels do not necessarily correspond to the same scene point. This algorithm overcomes that drawback and improves the accuracy of "cat eye" effect target recognition.
(4) Brief description of the drawings
Figure 1 is a structural schematic of the shared-aperture active and passive imaging system. The system mainly comprises a continuous-wave semiconductor laser (CW LASER), laser transmitting and receiving optics, a dichroic beam splitter, two CCD cameras, a DSP (Digital Signal Processing) unit, and a display module. The laser source is a semiconductor laser that outputs a continuous infrared beam. The optical axes of the CCD1 and CCD2 lenses are mutually orthogonal, and each makes a 45° angle with the dichroic beam splitter. The images are processed and converted by the DSP unit and then shown on the display module.
Figure 2 is the algorithm flowchart.
(5) Embodiment
The present invention uses a continuous laser as the active light source, enabling all-weather target recognition. A dichroic beam splitter transmits infrared light and reflects visible light, and two CCD cameras acquire the active and passive images respectively.
Figure 2 is the flowchart of the algorithm of the present invention; the concrete steps are as follows:
1. Horizontal mirror flip and Gaussian filtering: the two CCDs capture the active and passive images of the same scene. The system geometry makes the CCD1 and CCD2 images horizontal mirror images of each other, so one of the two images must be horizontally flipped, after which a Gaussian filter is applied to both the active and passive images;
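Step 1 amounts to a horizontal flip plus a separable Gaussian convolution. A minimal NumPy sketch (the σ value and the 3σ kernel radius are assumptions for illustration; the patent does not give filter parameters):

```python
import numpy as np

def gaussian_kernel(sigma):
    # normalized 1-D Gaussian, truncated at 3 sigma (assumed radius)
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def gaussian_blur(img, sigma=1.0):
    # separable filtering: convolve every row, then every column
    k = gaussian_kernel(sigma)
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1,
                              img.astype(float))
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, out)

# horizontal mirror flip of one camera's image
mirror_demo = np.zeros((3, 4))
mirror_demo[1, 0] = 1.0
flipped = np.fliplr(mirror_demo)

# Gaussian smoothing of a centered impulse
img = np.zeros((9, 9))
img[4, 4] = 1.0
smooth = gaussian_blur(img, sigma=1.0)
```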
2. Image registration: registration comprises feature point detection and feature point matching. The SIFT (Scale-Invariant Feature Transform) algorithm is used for feature point detection, yielding N_1 and N_2 feature points in images I_1 and I_2 respectively. SIFT is an algorithm that extracts local features: it searches for extreme points in scale space and extracts position, scale, and rotation-invariant quantities, so the SIFT descriptor is robust to illumination change, image rotation, scaling, geometric deformation, blur, and image compression.
For matching, the Euclidean distance between feature vectors is first used as the similarity measure to obtain a coarsely matched feature point set. For a given feature point in image I_1, the two feature points in image I_2 with the nearest and second-nearest Euclidean distances are found; if the nearest distance is less than a certain threshold times the second-nearest distance, the pair is accepted as a match. This coarse matching inevitably contains mismatches, so the RANSAC algorithm is then used to iteratively purify the matched points. RANSAC is a widely used robust model-estimation algorithm. Its main idea is to estimate the model parameters from an initial sample set that is as small as possible, then, under the estimated parameters, add consistent data points to enlarge the initial set; iteration produces the largest consistent data set, from which the model parameters are finally re-estimated. The model parameters finally determined form the perspective transformation matrix H, and the feature point pairs in the consistent set satisfy:
$$\begin{bmatrix} x_i \\ y_i \\ 1 \end{bmatrix} = H \begin{bmatrix} x_i' \\ y_i' \\ 1 \end{bmatrix}$$
where (x_i, y_i) is the coordinate of the i-th (1 ≤ i ≤ N_1) feature point in I_1, and (x_i', y_i') is the coordinate of its corresponding feature point in image I_2. Matched point pairs that are not in the largest consistent set are regarded as mismatches.
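The nearest/second-nearest coarse matching described above can be sketched as follows. This is a minimal NumPy illustration with synthetic descriptors; the 0.8 ratio threshold is an assumed value, since the patent only speaks of "a certain threshold":

```python
import numpy as np

def coarse_match(desc1, desc2, ratio=0.8):
    # accept (i, j) when the nearest Euclidean distance is clearly
    # smaller than the second-nearest one (distance-ratio test)
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)
        order = np.argsort(dists)
        if dists[order[0]] < ratio * dists[order[1]]:
            matches.append((i, int(order[0])))
    return matches

desc1 = np.array([[0.0, 0.0], [5.0, 5.0]])
desc2 = np.array([[0.2, 0.0], [3.0, 3.0], [5.0, 5.0]])
matches = coarse_match(desc1, desc2)
```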
This patent improves the coarse matching step of the registration process. After SIFT feature point detection is applied to the active and passive images, images I_1 and I_2 contain N_1 and N_2 feature points respectively. In the original coarse matching, to decide whether the i-th feature point of I_1 matches some feature point of I_2, the Euclidean distances from point i of I_1 to all N_2 feature points of I_2 must be computed and sorted. Finding the matches for all N_1 feature points of I_1 therefore requires N_1 such passes; when N_1 and N_2 are both large, the computation is very time-consuming and the algorithm is inefficient. For the shared-aperture active and passive imaging system, computational analysis of registered image pairs shows that the row and column pixel coordinates of corresponding points in the two images never differ by more than 50. Hence, if the pixel coordinate of feature point i in I_1 is (a, b), the search for its correspondence in I_2 can be restricted to feature points whose row coordinate lies between a−50 and a+50 and whose column coordinate lies between b−50 and b+50.
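The improved coarse matching restricts each search to a ±50-pixel window around the query point's coordinates. A sketch under the same assumptions as above (synthetic keypoints and descriptors; the window value 50 is the one stated in the patent):

```python
import numpy as np

def coarse_match_windowed(kp1, desc1, kp2, desc2, window=50, ratio=0.8):
    # candidates are limited to keypoints within +/- window pixels
    # in both the row and column directions of the query keypoint
    matches = []
    for i, (p, d) in enumerate(zip(kp1, desc1)):
        near = [j for j, q in enumerate(kp2)
                if abs(q[0] - p[0]) <= window and abs(q[1] - p[1]) <= window]
        if len(near) < 2:
            continue
        dists = {j: float(np.linalg.norm(desc2[j] - d)) for j in near}
        order = sorted(near, key=dists.get)
        if dists[order[0]] < ratio * dists[order[1]]:
            matches.append((i, order[0]))
    return matches

kp1 = [(10, 10)]
desc1 = np.array([[0.0, 0.0]])
# the third keypoint has an identical descriptor but lies about 190
# pixels away, so the window rules it out before any distance is computed
kp2 = [(12, 11), (30, 30), (200, 200)]
desc2 = np.array([[0.1, 0.0], [4.0, 4.0], [0.0, 0.0]])
matches = coarse_match_windowed(kp1, desc1, kp2, desc2)
```

Besides cutting the N_2 candidates down to the handful of points inside the window, the restriction also discards look-alike descriptors from distant parts of the scene, as the example shows.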
3. Image transformation: using the perspective transformation matrix obtained in the feature matching step, one image is mapped onto the matching position of the other.
$$\begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} \sim H \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}$$
This formula maps image I_2 onto the matching position of image I_1: [x y 1]^T are the homogeneous coordinates of a pixel in I_2, and [x' y' 1]^T are the homogeneous coordinates of that pixel after the projective transformation.
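Applying the perspective matrix H to homogeneous pixel coordinates looks like this. The matrix below is a hypothetical pure translation chosen for illustration; a matrix estimated by RANSAC would have the same 3×3 form:

```python
import numpy as np

def warp_points(H, pts):
    # append the homogeneous 1, multiply by H, then divide by the
    # third component to return to pixel coordinates
    pts_h = np.column_stack([pts, np.ones(len(pts))])
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]

H = np.array([[1.0, 0.0, 3.0],    # hypothetical H: translate by (3, -2)
              [0.0, 1.0, -2.0],
              [0.0, 0.0, 1.0]])
out = warp_points(H, np.array([[0.0, 0.0], [10.0, 5.0]]))
```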
4. Image differencing: the image difference operation is performed between image I_1 and the projectively transformed image I_2;
5. Image binarization: an adaptive thresholding method is used to binarize the difference image;
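The patent calls for an adaptive thresholding method without naming one. As an illustrative stand-in, Otsu's automatic threshold (choose the grey level that maximizes the between-class variance of the histogram) can be written in plain NumPy:

```python
import numpy as np

def otsu_threshold(img):
    # exhaustively pick the threshold that maximizes the
    # between-class variance of the grey-level histogram
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    total = hist.sum()
    sum_all = float(np.dot(np.arange(256), hist))
    w0 = sum0 = 0.0
    best_t, best_var = 0, -1.0
    for t in range(256):
        w0 += hist[t]
        sum0 += t * hist[t]
        if w0 == 0 or w0 == total:
            continue
        w1 = total - w0
        m0, m1 = sum0 / w0, (sum_all - sum0) / w1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# toy difference image: dark residual background, one bright patch
diff = np.zeros((8, 8), dtype=np.uint8)
diff[2:4, 2:4] = 220
t = otsu_threshold(diff)
binary = diff > t
```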
6. Region labeling: the binary image contains several connected components, which must be labeled so that their features can be extracted;
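Region labeling of the binary image can be sketched as a flood fill over 4-connected foreground pixels. A minimal pure-Python version; production code would normally use an optimized two-pass or union-find labeler:

```python
def label_regions(binary):
    # flood-fill labeling of 4-connected foreground pixels
    h, w = len(binary), len(binary[0])
    labels = [[0] * w for _ in range(h)]
    current = 0
    for i in range(h):
        for j in range(w):
            if binary[i][j] and labels[i][j] == 0:
                current += 1                      # start a new component
                stack = [(i, j)]
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < h and 0 <= x < w and binary[y][x] and labels[y][x] == 0:
                        labels[y][x] = current
                        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return labels, current

binary = [[1, 1, 0, 0],
          [0, 0, 0, 1],
          [0, 0, 0, 1]]
labels, n = label_regions(binary)
```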
7. Shape feature discrimination: the target region is further locked by the combined criterion

$$\mathrm{Metric} = |1 - \mathrm{Metric}_r| + |1 - \mathrm{Metric}_e|$$

where

$$\mathrm{Metric}_r = \frac{4\pi A}{P^2}$$

denotes the circularity of a segmented region and

$$\mathrm{Metric}_e = \frac{a}{b}$$

denotes its eccentricity; A is the area of the segmented region, P is its perimeter, a is its major axis, and b is its minor axis. The closer the value of Metric is to zero, the closer the candidate region is to a circle;
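A sketch of the shape criterion, assuming the standard circularity measure 4πA/P² and taking the eccentricity term as the major-to-minor axis ratio a/b (both terms equal 1 for an ideal circle, so Metric tends to 0 for circular cat eye returns):

```python
import math

def shape_metric(area, perimeter, major, minor):
    # circularity 4*pi*A/P^2 and axis ratio a/b both equal 1 for a
    # perfect circle, so Metric approaches 0 for circular regions
    metric_r = 4 * math.pi * area / perimeter ** 2
    metric_e = major / minor
    return abs(1 - metric_r) + abs(1 - metric_e)

# ideal circle of radius 5
r = 5.0
m_circle = shape_metric(math.pi * r**2, 2 * math.pi * r, 2 * r, 2 * r)

# elongated blob, roughly an ellipse with semi-axes 10 and 2
m_blob = shape_metric(math.pi * 10 * 2,
                      2 * math.pi * math.sqrt((10**2 + 2**2) / 2),
                      20, 4)
```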
8. Target region locking: the target region is locked through the above procedure.
The advantage of the present invention is that image registration is introduced before the active/passive image difference operation that filters out the background, improving the reliability of "cat eye" effect target recognition. In the registration process the coarse matching algorithm is improved: for the shared-aperture active and passive imaging system, the search range for feature points to be matched in the second image is narrowed in both the row and column directions, raising the efficiency of the algorithm.

Claims (2)

1. A "cat eye" effect target recognition algorithm of an active and passive imaging system sharing the same aperture, in which the active/passive image difference operation is performed after image registration to filter out the background, improving the reliability of cat eye target recognition and allowing cat eye targets to be identified accurately against complex backgrounds, the algorithm being summarized in the following main steps:
(1) horizontally mirror-flipping the image taken by one CCD of the system, then applying Gaussian filtering to the active and passive images;
(2) performing image registration after filtering, which removes the interference noise caused when subtracted active/passive pixels do not correspond to the same scene point;
The SIFT algorithm is used for feature point detection, yielding N_1 and N_2 feature points in images I_1 and I_2 respectively; the Euclidean distance between feature vectors is used as the similarity measure for feature point matching, giving a coarsely matched feature point set; the RANSAC algorithm then iteratively purifies the matched points; the model parameters finally determined form the perspective transformation matrix H, and the feature point pairs in the consistent set satisfy:
$$\begin{bmatrix} x_i \\ y_i \\ 1 \end{bmatrix} = H \begin{bmatrix} x_i' \\ y_i' \\ 1 \end{bmatrix}$$
where (x_i, y_i) is the coordinate of the i-th (1 ≤ i ≤ N_1) feature point in I_1 and (x_i', y_i') is the coordinate of its corresponding feature point in image I_2; matched point pairs not in the largest consistent set are regarded as mismatches;
(3) using the perspective transformation matrix obtained in the feature matching step, mapping one image onto the matching position of the other;
$$\begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} \sim H \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}$$
This formula maps image I_2 onto the matching position of image I_1, where [x y 1]^T are the homogeneous coordinates of a pixel in I_2 and [x' y' 1]^T are the homogeneous coordinates of that pixel after the projective transformation;
(4) performing the image difference operation between image I_1 and the projectively transformed image I_2;
(5) binarizing the difference image with an adaptive thresholding method;
(6) labeling the connected components of the binary image so that their features can be extracted;
(7) finally locking the "cat eye" effect target by the shape criterion.
2. The "cat eye" effect target recognition algorithm of the active and passive imaging system sharing the same aperture as claimed in claim 1, characterized in that the coarse matching algorithm is improved in the image registration process: for the shared-aperture system, the search range for the feature points to be matched in the second image is narrowed in both the row and column directions.
CN 201110124471 2011-05-13 2011-05-13 Cat eye effect object recognition algorithm of active and passive imaging system sharing same aperture Expired - Fee Related CN102201058B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 201110124471 CN102201058B (en) 2011-05-13 2011-05-13 Cat eye effect object recognition algorithm of active and passive imaging system sharing same aperture


Publications (2)

Publication Number Publication Date
CN102201058A CN102201058A (en) 2011-09-28
CN102201058B true CN102201058B (en) 2013-06-05

Family

ID=44661721

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 201110124471 Expired - Fee Related CN102201058B (en) 2011-05-13 2011-05-13 Cat eye effect object recognition algorithm of active and passive imaging system sharing same aperture

Country Status (1)

Country Link
CN (1) CN102201058B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103377472B (en) * 2012-04-13 2016-12-14 富士通株式会社 For removing the method and system of attachment noise
CN102749625A (en) * 2012-06-28 2012-10-24 北京航空航天大学 Range-gating laser-imaging detection method for cat-eye effect target
CN103308029A (en) * 2013-05-17 2013-09-18 北京航空航天大学 Automatic cat eye effect target distance measurement method
CN103488970A (en) * 2013-08-29 2014-01-01 北京理工大学 Cat eye object recognition algorithm
CN105306912B (en) * 2015-12-07 2018-06-26 成都比善科技开发有限公司 Intelligent peephole system based on luminous intensity and apart from detection triggering camera shooting
CN107833212A (en) * 2017-11-01 2018-03-23 国网山东省电力公司电力科学研究院 A kind of image matching method for improving electric transmission line channel perils detecting accuracy rate
CN110072035A (en) * 2018-01-22 2019-07-30 中国科学院上海微系统与信息技术研究所 Dual imaging system
CN109544535B (en) * 2018-11-26 2022-06-24 马杰 Peeping camera detection method and system based on optical filtering characteristics of infrared cut-off filter
CN109738879A (en) * 2019-01-23 2019-05-10 中国科学院微电子研究所 Active laser detection apparatus

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101566693B (en) * 2009-05-06 2012-02-15 北京航空航天大学 System for detecting active imaging and passive imaging of common aperture
CN101976342B (en) * 2010-09-02 2014-06-25 北京航空航天大学 Space-time collaborative determination-based cat eye effect target identification method



Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20130605

Termination date: 20170513
