CN102201058A - Cat eye effect object recognition algorithm of active and passive imaging system sharing same aperture - Google Patents

Cat eye effect object recognition algorithm of active and passive imaging system sharing same aperture

Info

Publication number
CN102201058A
Authority
CN
China
Prior art keywords
image
passive
algorithm
imaging
active
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201110124471XA
Other languages
Chinese (zh)
Other versions
CN102201058B (en)
Inventor
李丽
刘丽
党二升
吴磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beihang University
Original Assignee
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University filed Critical Beihang University
Priority to CN 201110124471 priority Critical patent/CN102201058B/en
Publication of CN102201058A publication Critical patent/CN102201058A/en
Application granted granted Critical
Publication of CN102201058B publication Critical patent/CN102201058B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Processing (AREA)

Abstract

The invention provides a cat eye effect target recognition algorithm for an active and passive imaging system sharing the same aperture. The algorithm is suitable for detecting and recognizing optical targets that exhibit the cat eye effect, and improves on cat eye target recognition algorithms based on gray-scale and shape features. For an active and passive imaging system sharing the same aperture, a difference operation is performed between the active and passive images after image registration so that the background is filtered out. The coarse registration algorithm is also improved: the search range for matching feature points in the second image is narrowed in both the horizontal and vertical directions, which raises the efficiency of the algorithm. The algorithm has good practical value and broad application prospects in the fields of laser imaging and digital image processing.

Description

" opal " effect Target Recognition Algorithms of active imaging and passive imaging of common aperture system
(1) Technical field
The present invention relates to a target recognition algorithm for optoelectronic devices that contain optical lenses or sighting equipment. It uses the active and passive images acquired by an active and passive imaging system sharing the same aperture to identify "cat eye" effect targets, and belongs to the fields of laser imaging and digital image processing.
(2) Background art
When a laser beam illuminates an optical system that contains an optical lens, the optical system can be approximated as the combination of a lens and a detector focal plane. Laser light entering the field of view is focused by the optical system and forms a small "spot" near its focal plane. This spot is equivalent to a light source: the light passes back out through the optical system with a certain beam divergence angle and, by the principle of optical path reversibility, the laser beam returns along its original path. The result is a well-collimated, energy-concentrated retro-reflection whose intensity far exceeds that of ordinary diffuse reflection. This is the "cat eye" effect of an imaging optical system.
The cat eye characteristic of optical systems has high practical value, and laser active detection systems based on the cat eye effect have been widely applied because of their early-warning capability. The active and passive imaging system sharing the same aperture is one kind of laser active detection system. At present, the laser sources of cat eye effect active detection systems take two forms: continuous-wave sources and pulsed sources. Pulsed-laser cat eye active detection systems lack good dynamic scanning characteristics, whereas continuous-wave systems scan well dynamically and can perform follow-up search scanning on a pan-tilt platform. A pulsed-laser system must acquire a video stream of the same scene for subsequent processing, while the present algorithm, used with a continuous-wave cat eye active detection system, only needs to acquire one active image and one passive image of the same scene (the active image is formed by light emitted from the active source and reflected by the target under test; the passive image is formed by the target's own radiation or by reflected natural light). Domestic work on cat eye target recognition mostly adopts continuous-wave cat eye active detection systems, acquires only a single active image of the scene, and determines the cat eye target by a recognition method based on target gray scale and shape features. That method, however, performs poorly against complex backgrounds and lacks reliability. The algorithm of this patent improves it by adding an active/passive image difference operation after image registration and finally locking onto the cat eye effect target by shape discrimination.
As shown in Fig. 1, the active and passive imaging system sharing the same aperture first splits the received light into two paths with a dichroic beamsplitter (a cold mirror, which transmits infrared light and reflects visible light). The transmitted light is imaged by CCD1 (charge-coupled device) to form the active image; the reflected light is imaged by CCD2 to form the passive image.
(3) Summary of the invention
1. Purpose: Addressing the defect that recognition methods based on gray scale and shape features are unsuitable for target recognition against complex backgrounds, the present invention proposes an improved algorithm for the common-aperture imaging system, namely performing the image difference operation after image registration. The original images processed by this method are an active/passive image pair acquired with a continuous-wave laser as the illumination source. The active/passive difference operation filters out the background, and because image registration is applied before the difference operation, the interference noise produced when subtracted pixels do not correspond to the same scene point is greatly reduced. With this algorithm, cat eye effect targets are recognized more accurately and efficiently.
2. Technical scheme: Two CCD cameras acquire the active and passive images respectively. First, the image taken by one of the CCD cameras is mirrored horizontally, both images are then smoothed with a Gaussian filter, and image registration is performed. Image registration consists of two parts, feature point detection and feature point matching. For matching, the Euclidean distance between feature vectors is first used as the similarity measure to obtain a coarsely matched feature point set; the highly robust RANSAC (RANdom SAmple Consensus) algorithm is then applied iteratively to purify the matched point pairs, and the transformation between the two images is computed with least squares, yielding a perspective transformation matrix. Using the perspective transformation matrix obtained in the feature point matching process, one image is mapped onto the matched position of the other, and the image difference operation is carried out. Finally, the cat eye effect target is locked by shape discrimination.
Fig. 2 is the flow chart of the algorithm. The specific implementation steps are as follows (an illustrative code sketch of the full flow is given after the list):
Step 1: horizontal mirror flip and Gaussian filtering;
Step 2: image registration;
Step 3: image transformation;
Step 4: image difference;
Step 5: image binarization;
Step 6: zone marker;
Step 7: shape feature discrimination;
Step 8: target area locking.
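The patent itself contains no source code. Purely as an illustration of how the eight steps fit together, the following Python/OpenCV sketch (an assumed implementation, not the patented one) drives hypothetical helper functions — preprocess, register_images, difference_binarize_and_label and screen_regions_by_shape — whose bodies are sketched under the corresponding steps in the embodiment section below; which camera's image is mirrored and which image plays the role of I1 are likewise assumptions.

```python
import cv2

def recognize_cat_eye_targets(active_img, passive_img):
    """Illustrative end-to-end sketch of the eight steps of Fig. 2 (assumed OpenCV implementation)."""
    # Step 1: horizontal mirror flip of one image, Gaussian filtering of both (sketched under step 1)
    i1, i2 = preprocess(active_img, passive_img)
    # Step 2: SIFT detection, constrained coarse matching, RANSAC purification -> perspective matrix H (step 2)
    H = register_images(i1, i2)
    # Step 3: map image I2 onto the matched position of image I1 (dsize is (width, height))
    i2_warped = cv2.warpPerspective(i2, H, (i1.shape[1], i1.shape[0]))
    # Steps 4-6: active/passive difference, adaptive-threshold binarization, region labeling (steps 4-6)
    labels, stats = difference_binarize_and_label(i1, i2_warped)
    # Steps 7-8: shape-feature discrimination and target-region locking (steps 7-8)
    return screen_regions_by_shape(labels, stats)
```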
3. Advantages and effects: The advantage of the present invention is that the active/passive image difference operation is performed after image registration, which filters out the background. Because of placement errors between the two CCDs, when the active and passive images are differenced directly without registration, the subtracted pixels do not necessarily correspond to the same scene point. This algorithm overcomes that drawback and improves the accuracy of cat eye effect target recognition.
(4) Description of drawings
Fig. 1 is a schematic diagram of the structure of the active and passive imaging system sharing the same aperture. The system mainly consists of a continuous-wave semiconductor laser (CW LASER), laser transmitting and receiving optics, a dichroic beamsplitter, two CCD cameras, a DSP (digital signal processing) unit and a display module. The laser source is a semiconductor laser that outputs a continuous infrared beam. The optical axes of the CCD1 and CCD2 lenses are perpendicular to each other, and each forms a 45° angle with the dichroic beamsplitter. After the image is processed and converted by the DSP unit, it is shown on the display module.
Fig. 2 is the algorithm flow chart.
(5) Embodiment
The present invention uses a continuous-wave laser as the active light source and can therefore perform target recognition around the clock. A dichroic beamsplitter transmits the infrared light and reflects the visible light, and two CCD cameras acquire the active and passive images respectively.
Fig. 2 is the flow chart of the algorithm of the present invention. The concrete steps are as follows:
1. Horizontal mirror flip and Gaussian filtering: the active and passive images of the same scene are obtained by the two CCDs. Because of the system structure, the images formed by CCD1 and CCD2 are horizontal mirror images of each other, so one of the two images must be flipped horizontally; a Gaussian filter is then applied to both the active and passive images;
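A minimal sketch of step 1, assuming OpenCV; the 5×5 Gaussian kernel and the choice to flip the passive rather than the active image are illustrative assumptions not fixed by the patent. This defines the preprocess helper used in the sketch after the step list above.

```python
import cv2

def preprocess(active_img, passive_img, ksize=(5, 5), sigma=0):
    """Step 1: undo the horizontal mirroring between CCD1 and CCD2, then smooth both images."""
    # The two CCDs image the scene as horizontal mirrors of each other, so one image is flipped
    # (flipCode=1 flips around the vertical axis); flipping the passive image is an assumption.
    passive_img = cv2.flip(passive_img, 1)
    # Gaussian filtering suppresses noise before feature detection.
    i1 = cv2.GaussianBlur(active_img, ksize, sigma)
    i2 = cv2.GaussianBlur(passive_img, ksize, sigma)
    return i1, i2
```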
2. Image registration: image registration consists of feature point detection and feature point matching. The SIFT (Scale-Invariant Feature Transform) algorithm is used for feature point detection; N1 and N2 feature points are detected in images I1 and I2 respectively. SIFT is a local-feature extraction algorithm that searches for extrema in scale space and extracts position, scale and rotation-invariant feature quantities, so the SIFT descriptor is robust to illumination change, image rotation, scaling, geometric deformation, blur and image compression.
For feature point matching, the Euclidean distance between feature vectors is first used as the similarity measure to obtain a coarsely matched feature point set. For a given feature point in image I1, the two feature points in image I2 with the smallest and second-smallest Euclidean distances to it are found; if the ratio of the smallest distance to the second-smallest distance is below a certain threshold, the pair is accepted as a match. This coarse matching inevitably contains false matches, so the RANSAC algorithm is applied iteratively to purify the matched points. RANSAC is a widely used robust model-estimation algorithm. Its main idea is: start from as small a sampling set as possible and estimate the model parameters; then, under the estimated parameters, add as many consistent points from the data set as possible to enlarge the initial sampling set; iterate to produce the largest consensus set; finally, re-estimate the model parameters from the largest consensus set. The model parameters determined in this way form the perspective transformation matrix H, and the feature point pairs in the consensus set satisfy:
[x_i  y_i  1]^T = H · [x_i′  y_i′  1]^T
where (x_i, y_i) is the coordinate of the i-th (1 ≤ i ≤ N1) feature point in I1 and (x_i′, y_i′) is the coordinate of its corresponding feature point in image I2. Matched point pairs that are not in the largest consensus set are regarded as false matches.
This patent improves the coarse matching algorithm within the registration process. After SIFT feature point detection on the active and passive images, images I1 and I2 contain N1 and N2 feature points respectively. In the original coarse matching procedure, deciding whether the i-th feature point of image I1 matches some feature point of image I2 requires computing and sorting the Euclidean distances between the i-th point of I1 and all N2 feature points of I2. Determining the matching relationships of all N1 feature points of I1 therefore requires N1 such passes; when N1 and N2 are both large, the computation is very time-consuming and the algorithm efficiency is low. For the active and passive imaging system sharing the same aperture, analysis of registered image pairs shows that the row and column pixel coordinates of corresponding points in the two images differ by no more than 50. Therefore, if the pixel coordinate of feature point i in I1 is (a, b), the search for its corresponding point in I2 can be restricted to feature points whose row coordinate lies between a−50 and a+50 and whose column coordinate lies between b−50 and b+50.
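A sketch of step 2 under the assumption of OpenCV's SIFT implementation, defining the register_images helper used in the pipeline sketch above. Only the ±50 pixel search bound comes from the patent text; the distance-ratio threshold of 0.8 and the RANSAC reprojection threshold of 3.0 are illustrative values.

```python
import cv2
import numpy as np

def register_images(i1, i2, window=50, ratio=0.8):
    """Step 2: SIFT detection, coarse matching restricted to a +/-window pixel
    neighbourhood, nearest/second-nearest ratio test, then RANSAC purification.
    Returns the 3x3 matrix H that maps I2 coordinates onto I1 coordinates."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(i1, None)
    kp2, des2 = sift.detectAndCompute(i2, None)

    pts1, pts2 = [], []
    xy2 = np.array([kp.pt for kp in kp2])          # (N2, 2) feature coordinates in I2
    for i, kp in enumerate(kp1):
        a, b = kp.pt                               # candidate point (a, b) in I1
        # Improved coarse matching: only consider I2 features whose coordinate
        # offsets along both axes are within +/-window of (a, b).
        near = np.where((np.abs(xy2[:, 0] - a) <= window) &
                        (np.abs(xy2[:, 1] - b) <= window))[0]
        if len(near) < 2:
            continue
        d = np.linalg.norm(des2[near] - des1[i], axis=1)   # Euclidean distances
        order = np.argsort(d)
        if d[order[0]] < ratio * d[order[1]]:              # nearest / second-nearest test
            pts1.append(kp.pt)
            pts2.append(tuple(xy2[near[order[0]]]))

    # RANSAC iteratively purifies the coarse matches and estimates H (I2 -> I1).
    H, inliers = cv2.findHomography(np.float32(pts2), np.float32(pts1),
                                    cv2.RANSAC, 3.0)
    return H
```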
3. Image transformation: using the perspective transformation matrix obtained in the feature point matching process, one image is mapped onto the matched position of the other image.
[x′  y′  1]^T ~ H · [x  y  1]^T
This formula maps image I2 onto the matched position of image I1: [x y 1] are the homogeneous coordinates of a pixel in I2, and [x′ y′ 1] are the homogeneous coordinates of that pixel after the projective transformation, i.e. in the frame of I1.
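To make the homogeneous-coordinate mapping concrete, the following sketch (assuming NumPy and OpenCV) applies the formula above to a single pixel and then warps the whole of I2 onto I1; the function names are illustrative.

```python
import cv2
import numpy as np

def map_point(H, x, y):
    """Apply [x' y' 1]^T ~ H [x y 1]^T to one pixel of I2 and dehomogenize."""
    p = H @ np.array([x, y, 1.0])
    return p[0] / p[2], p[1] / p[2]

def warp_passive(i2, H, shape_of_i1):
    """Step 3: map the whole of I2 onto the matched position of I1."""
    h, w = shape_of_i1[:2]
    return cv2.warpPerspective(i2, H, (w, h))     # dsize is (width, height)
```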
4. Image difference: the image difference operation is performed between image I1 and the projectively transformed image I2;
5. Image binarization: an adaptive thresholding method is used to binarize the image obtained from the difference operation;
6. Region labeling: the binary image contains several connected components; to extract their features, region labeling is carried out;
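Steps 4 to 6 in a minimal OpenCV sketch, defining the difference_binarize_and_label helper used in the pipeline sketch above; the adaptive-threshold block size and offset are illustrative parameters, not values taken from the patent.

```python
import cv2

def difference_binarize_and_label(i1, i2_warped, block_size=31, offset=-5):
    """Step 4: absolute difference; step 5: adaptive-threshold binarization;
    step 6: connected-component (region) labeling."""
    diff = cv2.absdiff(i1, i2_warped)
    # A negative offset keeps only pixels brighter than the local mean,
    # which suits the bright retro-reflection left after background subtraction.
    binary = cv2.adaptiveThreshold(diff, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                   cv2.THRESH_BINARY, block_size, offset)
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
    return labels, stats
```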
7. Shape feature discrimination: candidate regions are further screened with the combined criterion
Metric = |1 − Metric_r| + |1 − Metric_e|
where Metric_r denotes the circularity of the segmented region, defined from its area A and perimeter P, and Metric_e denotes the eccentricity of the segmented region, defined from its major-axis length a and minor-axis length b. The closer the value of Metric is to zero, the closer the candidate region is to a circle;
8. Target region locking: the target region is locked through the above procedure.
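The formulas defining Metric_r and Metric_e appear only as images in the original document. The sketch below, covering steps 7 and 8 and defining the screen_regions_by_shape helper used in the pipeline sketch above, therefore assumes the standard circularity 4πA/P² and the major/minor axis ratio a/b as stand-ins — both equal 1 for a circle, so Metric tends to zero for circular regions, consistent with the text — and the locking threshold metric_max and minimum area are likewise assumptions.

```python
import cv2
import numpy as np

def screen_regions_by_shape(labels, stats, metric_max=0.5, min_area=4):
    """Steps 7-8: compute Metric = |1 - Metric_r| + |1 - Metric_e| per labeled
    region and lock the regions whose Metric is close enough to zero.
    Circularity 4*pi*A/P^2 and axis ratio a/b are assumed definitions."""
    targets = []
    for label in range(1, stats.shape[0]):                  # label 0 is the background
        if stats[label, cv2.CC_STAT_AREA] < min_area:        # skip speckle-sized regions
            continue
        mask = ((labels == label) * 255).astype(np.uint8)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
        if not contours:
            continue
        c = max(contours, key=cv2.contourArea)
        A = cv2.contourArea(c)                               # region area
        P = cv2.arcLength(c, True)                           # region perimeter
        if P == 0 or len(c) < 5:                             # fitEllipse needs >= 5 points
            continue
        (cx, cy), (d1, d2), angle = cv2.fitEllipse(c)        # fitted ellipse axis lengths
        a, b = max(d1, d2), min(d1, d2)                      # major and minor axis
        metric_r = 4 * np.pi * A / (P ** 2)                  # assumed circularity
        metric_e = a / b if b > 0 else np.inf                # assumed eccentricity measure
        metric = abs(1 - metric_r) + abs(1 - metric_e)
        if metric < metric_max:                              # closer to 0 -> closer to a circle
            targets.append((label, metric, (cx, cy)))
    return targets
```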
The advantage of the present invention is that image registration is introduced so that the active/passive image difference operation can filter out the background, which improves the reliability of cat eye effect target recognition. Within the registration process, the coarse matching algorithm is improved: for the active and passive imaging system sharing the same aperture, the search range in the second image for feature points to be matched is narrowed in both the row and column directions, which improves the efficiency of the algorithm.

Claims (3)

1. " opal " effect Target Recognition Algorithms of active imaging and passive imaging of common aperture system, lead the filtering of passive image difference computing realization background after adopting image registration techniques, improve the reliability of " opal " effect Target Recognition, can accurate recognition have gone out " opal " effect target in the complex background.Algorithm flow is summarised as following key step:
(1) the image taken by one CCD in the system is flipped horizontally, and Gaussian filtering is then applied to the active and passive images;
(2) image registration is performed after filtering, removing the interference noise caused by subtracted pixels of the active and passive images that do not correspond to the same scene point. Within the registration process, the coarse matching algorithm is improved: for the active and passive imaging system sharing the same aperture, the search range in the second image for feature points to be matched is narrowed in both the row and column directions;
(3) using the perspective transformation matrix obtained in the feature point matching process, one image is mapped onto the matched position of the other image;
(4) the image difference operation is performed between image I1 and the projectively transformed image I2;
(5) an adaptive thresholding method is used to binarize the image obtained from the difference operation;
(6) the binary image contains several connected components; to extract their features, region labeling is carried out;
(7) finally, the cat eye effect target is locked by the shape criterion.
2. " opal " effect Target Recognition Algorithms of active imaging and passive imaging of common aperture as claimed in claim 1 system, it is characterized in that: used image registration techniques before the main passive image difference computing, removed because main passive image pixel subtraction point is not represented the interference noise that same object point brings in the scene.
3. " opal " effect Target Recognition Algorithms of active imaging and passive imaging of common aperture as claimed in claim 1 system, it is characterized in that:, dwindled the hunting zone of second width of cloth image, seeking unique point to be matched from the ranks both direction at the active imaging and passive imaging of common aperture system.
CN 201110124471 2011-05-13 2011-05-13 Cat eye effect object recognition algorithm of active and passive imaging system sharing same aperture Expired - Fee Related CN102201058B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 201110124471 CN102201058B (en) 2011-05-13 2011-05-13 Cat eye effect object recognition algorithm of active and passive imaging system sharing same aperture

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN 201110124471 CN102201058B (en) 2011-05-13 2011-05-13 Cat eye effect object recognition algorithm of active and passive imaging system sharing same aperture

Publications (2)

Publication Number Publication Date
CN102201058A true CN102201058A (en) 2011-09-28
CN102201058B CN102201058B (en) 2013-06-05

Family

ID=44661721

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 201110124471 Expired - Fee Related CN102201058B (en) 2011-05-13 2011-05-13 Cat eye effect object recognition algorithm of active and passive imaging system sharing same aperture

Country Status (1)

Country Link
CN (1) CN102201058B (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102749625A (en) * 2012-06-28 2012-10-24 北京航空航天大学 Range-gating laser-imaging detection method for cat-eye effect target
CN103308029A (en) * 2013-05-17 2013-09-18 北京航空航天大学 Automatic cat eye effect target distance measurement method
CN103377472A (en) * 2012-04-13 2013-10-30 富士通株式会社 Method for removing adhering noise and system
CN103488970A (en) * 2013-08-29 2014-01-01 北京理工大学 Cat eye object recognition algorithm
CN105306912A (en) * 2015-12-07 2016-02-03 成都比善科技开发有限公司 Intelligent cat-eye system triggering shooting based on luminous intensity and distance detection
CN107833212A (en) * 2017-11-01 2018-03-23 国网山东省电力公司电力科学研究院 A kind of image matching method for improving electric transmission line channel perils detecting accuracy rate
CN109544535A (en) * 2018-11-26 2019-03-29 马杰 It is a kind of that camera detection method and system are pried through based on infrared cutoff filter optical filtration characteristic
CN109738879A (en) * 2019-01-23 2019-05-10 中国科学院微电子研究所 Active laser detection apparatus
CN110072035A (en) * 2018-01-22 2019-07-30 中国科学院上海微系统与信息技术研究所 Dual imaging system

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101566693A (en) * 2009-05-06 2009-10-28 北京航空航天大学 System for detecting active imaging and passive imaging of common aperture
CN101976342A (en) * 2010-09-02 2011-02-16 北京航空航天大学 Space-time collaborative determination-based cat eye effect target identification method

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101566693A (en) * 2009-05-06 2009-10-28 北京航空航天大学 System for detecting active imaging and passive imaging of common aperture
CN101976342A (en) * 2010-09-02 2011-02-16 北京航空航天大学 Space-time collaborative determination-based cat eye effect target identification method

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103377472A (en) * 2012-04-13 2013-10-30 富士通株式会社 Method for removing adhering noise and system
CN103377472B (en) * 2012-04-13 2016-12-14 富士通株式会社 For removing the method and system of attachment noise
CN102749625A (en) * 2012-06-28 2012-10-24 北京航空航天大学 Range-gating laser-imaging detection method for cat-eye effect target
CN103308029A (en) * 2013-05-17 2013-09-18 北京航空航天大学 Automatic cat eye effect target distance measurement method
CN103488970A (en) * 2013-08-29 2014-01-01 北京理工大学 Cat eye object recognition algorithm
CN105306912A (en) * 2015-12-07 2016-02-03 成都比善科技开发有限公司 Intelligent cat-eye system triggering shooting based on luminous intensity and distance detection
CN105306912B (en) * 2015-12-07 2018-06-26 成都比善科技开发有限公司 Intelligent peephole system based on luminous intensity and apart from detection triggering camera shooting
CN107833212A (en) * 2017-11-01 2018-03-23 国网山东省电力公司电力科学研究院 A kind of image matching method for improving electric transmission line channel perils detecting accuracy rate
CN110072035A (en) * 2018-01-22 2019-07-30 中国科学院上海微系统与信息技术研究所 Dual imaging system
CN109544535A (en) * 2018-11-26 2019-03-29 马杰 It is a kind of that camera detection method and system are pried through based on infrared cutoff filter optical filtration characteristic
CN109738879A (en) * 2019-01-23 2019-05-10 中国科学院微电子研究所 Active laser detection apparatus

Also Published As

Publication number Publication date
CN102201058B (en) 2013-06-05

Similar Documents

Publication Publication Date Title
CN102201058B (en) Cat eye effect object recognition algorithm of active and passive imaging system sharing same aperture
CN107993258B (en) Image registration method and device
US10909395B2 (en) Object detection apparatus
CN103093191A (en) Object recognition method with three-dimensional point cloud data and digital image data combined
WO2011104706A1 (en) A system and method for providing 3d imaging
CN111046776A (en) Mobile robot traveling path obstacle detection method based on depth camera
CN109341668B (en) Multi-camera measuring method based on refraction projection model and light beam tracking method
Hsu et al. An improvement stereo vision images processing for object distance measurement
KR20160121509A (en) Structured light matching of a set of curves from two cameras
JPH09297849A (en) Vehicle detector
El Bouazzaoui et al. Enhancing rgb-d slam performances considering sensor specifications for indoor localization
Xinmei et al. Passive measurement method of tree height and crown diameter using a smartphone
US20230273357A1 (en) Device and method for image processing
CN110728703B (en) Registration fusion method for visible light image and solar blind ultraviolet light image
Chenchen et al. A camera calibration method for obstacle distance measurement based on monocular vision
Delmas et al. Stereo camera visual odometry for moving urban environments
Cheng et al. Structured light-based shape measurement system
Li et al. Real time obstacle estimation based on dense stereo vision for robotic lawn mowers
CN107610170B (en) Multi-view image refocusing depth acquisition method and system
Jiang et al. Stereo matching based on random speckle projection for dynamic 3D sensing
Fei et al. Obstacle Detection for Agricultural Machinery Vehicle
Bevilacqua et al. Automatic perspective camera calibration based on an incomplete set of chessboard markers
CN113592917A (en) Camera target handover method and handover system
Zhang Target-based calibration of 3D LiDAR and binocular camera on unmanned vehicles
Abramov et al. Algorithms for detecting and tracking of objects with optical markers in 3D space

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20130605

Termination date: 20170513