CN105959514A - Weak target imaging detection device and method - Google Patents

Weak target imaging detection device and method

Info

Publication number
CN105959514A
Authority
CN
China
Prior art keywords
image
point
pixel
polarization
imaging
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610248720.9A
Other languages
Chinese (zh)
Other versions
CN105959514B (en)
Inventor
张振
顾朗朗
梁苍
孙启尧
高红民
陈哲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hohai University HHU
Original Assignee
Hohai University HHU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hohai University HHU filed Critical Hohai University HHU
Priority to CN201610248720.9A priority Critical patent/CN105959514B/en
Publication of CN105959514A publication Critical patent/CN105959514A/en
Application granted granted Critical
Publication of CN105959514B publication Critical patent/CN105959514B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/50Constructional details
    • H04N23/55Optical parts specially adapted for electronic image sensors; Mounting thereof
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/20Scenes; Scene-specific elements in augmented reality scenes
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/50Constructional details
    • H04N23/54Mounting of pick-up tubes, electronic image sensors, deviation or focusing coils
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Signal Processing (AREA)
  • General Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The present invention discloses a weak target imaging detection device and method. Using the intensity difference between the light reflected and scattered by a target and by the background in a specific waveband in the 0° and 90° polarization directions, a dual-channel orthogonal differential imaging method is adopted to realize synchronous spectral-polarization imaging. The hardware module is divided into three parts: an instrument housing, an optical system and an FPGA main control board. The instrument housing connects the optical lenses, the circuit board and the tripod. The optical system has a dual-channel structure and captures two images with different polarization angles and wavebands. The FPGA main control board performs parameter configuration, synchronous acquisition, image buffering and preprocessing for the dual-channel CMOS image sensors. The software module performs, in turn, dual-channel image acquisition, image distortion correction, dual-channel image registration, image differential fusion and image target detection. Compared with existing methods, the weak target imaging detection device and method of the invention have low hardware cost and software complexity, and provide an effective means for detecting moving stealthy targets against a complex ground background.

Description

Weak target imaging detection device and method
Technical field
The present invention relates to an optical imaging detection device and method, and in particular to a weak target imaging detection device and method, belonging to the field of optical imaging.
Background technology
Target detection and recognition technology refers to the high-technology means of making non-contact measurements of fixed or moving targets, accurately obtaining the attribute information of the target and distinguishing genuine targets from decoys. Optical detection, being passive, safe and covert, has developed rapidly and received great attention in recent years. However, modern camouflage coatings make the target and the background approximately "the same in colour and spectrum", so that "stealthy" weak targets in a complex background are difficult to detect effectively with traditional intensity-based means.
Polarization is one of the fundamental characteristics of light. When reflecting or emitting electromagnetic radiation, any target exhibits a polarization characteristic determined by its own properties and by optical laws. In a typical natural environment the degree of polarization of the ground-feature background is low, whereas that of man-made targets is higher. For example, the degree of polarization of vegetation is generally below 0.5%; that of rock, sandstone and bare soil is between 0.5% and 1.5%; that of water surfaces, cement pavement, roofs and the like is generally above 1.5% (the degree of polarization of a water surface in particular reaches 8%–10%); and the degree of polarization of some non-metallic materials and some metal surfaces exceeds 2% (in some cases more than 10%). By imaging the scene under different polarization states, targets and backgrounds that differ in polarization and intensity can be effectively distinguished, so that weak targets can be detected and recognized in a complex background. Polarization imaging detection has therefore received increasing attention in meteorological and environmental research, ocean development, space exploration, biomedicine and military applications.
In polarization detection, the polarization state of the light radiated by a target can be completely described by the four Stokes parameters: the total intensity I of the light wave, the linearly polarized intensity Q in the horizontal/vertical directions, the linearly polarized intensity U in the 45°/135° directions, and the circularly polarized intensity V. In practical applications V is negligible, so the degree of polarization can be written as P = √(Q² + U²)/I and the angle of polarization as θ = 0.5·arctan(U/Q). To obtain this polarization state information, intensity images in at least three different polarization directions must therefore be acquired in order to compute the parameters I, Q and U.
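For illustration, a minimal sketch in Python/NumPy of how these quantities would be computed from three co-registered intensity images; the 0°/45°/90° sampling used here is an assumption for the example, not the configuration prescribed by the invention:

```python
import numpy as np

def stokes_from_three(i0, i45, i90):
    """Linear Stokes parameters from 0/45/90-degree intensity images.

    Sketch only: assumes the three images are co-registered arrays of the
    same shape; the circular component V is neglected as in the text.
    """
    i0, i45, i90 = (np.asarray(a, dtype=np.float64) for a in (i0, i45, i90))
    I = i0 + i90                        # total intensity
    Q = i0 - i90                        # 0/90-degree linear component
    U = 2.0 * i45 - (i0 + i90)          # 45/135-degree linear component
    dolp = np.sqrt(Q**2 + U**2) / np.maximum(I, 1e-9)   # degree of polarization
    aop = 0.5 * np.arctan2(U, Q)        # angle of polarization, radians
    return I, Q, U, dolp, aop
```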
According to these principles, the polarization imaging detection devices currently in use fall into four main categories. (1) Time-division imaging: a single imaging device is used, and images in three polarization directions are obtained by sequentially rotating a polarizer mounted in front of the lens to 0°, 60° and 90°; the structure is simple and easy to realize, but it is only applicable when both the target and the background are static. (2) Beam splitting: a beam splitter and retarders divide the light from a single lens into three identical parts, which pass through polarizers oriented at 0°, 60° and 90° and are projected onto three independent imaging devices; the three polarization images are obtained simultaneously, but the energy incident on each imager is greatly reduced, so the imaging signal-to-noise ratio drops markedly. (3) Division of focal plane: an imaging device made by a special process is used, on which each pixel corresponds to one of the 0°, 60° and 90° polarization directions, arranged in a pattern similar to the RGB Bayer layout of a colour image sensor; simultaneous polarization imaging is achieved without additional beam-splitting optics, which facilitates miniaturization, but the fabrication of division-of-focal-plane devices is complex and they have not been commercialized. (4) Spatial registration: three cameras form a three-channel synchronous imaging system that acquires the 0°, 60° and 90° polarization images respectively, and an image registration algorithm aligns the pixels of the overlapping regions of the three images; the hardware complexity is low, but because the distortion parameters and viewing angles of the three channels are inconsistent, the registration accuracy will be poor unless they are properly corrected, which affects the detection of weak and small targets. For weak-target detection the purpose of polarization imaging is not to obtain degree-of-polarization or angle-of-polarization information, but to enhance the contrast between target and background in real time and efficiently; from this point of view, fusing the multi-channel images through the Stokes equations is not an efficient approach.
The present invention uses the intensity difference between the light reflected and scattered by the target and by the background in a specific waveband in the 0° and 90° polarization directions, and adopts a dual-channel orthogonal differential imaging mode to realize synchronous spectral-polarization imaging. Compared with existing synchronous polarization imaging schemes it has lower hardware cost and software complexity, and provides an effective means for detecting moving stealthy targets against a complex ground background.
Summary of the invention
Aimed at the deficiencies of existing systems for detecting moving stealthy targets against a complex ground background, the present invention provides a weak target imaging detection device and method.
The present invention is achieved through the following technical solutions:
A weak target imaging detection device, composed of three parts: an instrument housing, an optical system and an FPGA main control board, characterized in that: the instrument housing is used to connect the optical lenses, the circuit board and the tripod, and comprises a front housing panel, a rear housing frame and a tripod mounting seat; the optical system adopts a dual-channel structure to obtain two images with different polarization angles and wavebands, channel 1 comprising a 0° linear polarizer, an optical lens, a C-mount lens adapter ring, a filter holder, a 470 nm narrowband filter and a CMOS image sensor, and channel 2 comprising a 90° linear polarizer, an optical lens, a C-mount lens adapter ring, a filter holder, a 630 nm narrowband filter and a CMOS image sensor; the FPGA main control board performs parameter configuration, synchronous acquisition, image buffering and preprocessing for the dual-channel CMOS image sensors and transmits the data to the PC through the USB interface.
The front housing panel measures 100 mm × 50 mm × 5 mm and carries two C-mount lens adapter rings for fixing the optical lenses; the centre distance of the two adapter rings is 50 mm and the thread major diameter is 25.1 mm. The rear housing frame measures 100 mm × 50 mm × 30 mm and is attached to the front panel by twelve Φ3×6 screws around its periphery; a type-B USB interface on its left side connects the FPGA main control board to the PC. The tripod mounting seat is located on the underside of the rear housing frame and connects the tripod head through a central 1/4-20 screw.
The optical lenses of channel 1 and channel 2 have a fixed focal length of 8 mm, an aperture adjustable from F1.4 to F16 and a focusing range of 0.1 m to ∞, and are connected to the two C-mount lens adapter rings on the front panel. The two rotatable linear polarizers are mounted in front of the two optical lenses by adapter rings of size M30.5 × 0.5 mm; a linear polarization calibration board is used to set the polarization directions of the two polarizers to 0° and 90° respectively. The two narrowband filters are mounted directly in front of the CMOS image sensors by the filter holders. Both filters use two-sided mirror-coated material and measure 12 mm × 12 mm × 0.7 mm; their centre wavelengths are 470 nm and 630 nm respectively, the half bandwidth is 20 nm, the peak transmittance is >90%, and the blocking depth is <1%. The CMOS image sensors are 1.3-megapixel 1/2" monochrome area-array sensors with a spectral response range of 400–1050 nm.
The FPGA main control board is built around a single non-volatile FPGA chip; using programmable system-on-chip technology, a 32-bit soft-core Nios II processor and part of its peripherals are integrated into the single chip, and off-chip only a USB 2.0 interface chip and a type-B USB interface are used to communicate with the PC. The Nios II processor controls, through the Avalon bus, the on-chip peripherals: user RAM, user FLASH, the USB controller, the two dual-port RAM controllers corresponding to the two channels, and the image acquisition module. The user RAM serves as the working memory of the Nios II processor; the user FLASH stores the program code executed by the Nios II processor; the USB controller handles the configuration and bus-protocol conversion of the USB 2.0 interface chip; each dual-port RAM is an asynchronous FIFO used to screen and process the valid image-line data and to keep the data synchronized during transmission; the image acquisition module consists of a configuration controller and a timing controller, the configuration controller configuring the internal registers of the CMOS image sensors through the I²C two-wire serial bus (SCLK, SDATA), and the timing controller controlling the synchronous data output DOUT[9:0] of the CMOS image sensors through the clock/status signals STROBE, PIXCLK, L_VALID, F_VALID and the control signals STANDBY, TRIGGER, CLKIN.
The workflow of the FPGA main control board is as follows: after power-up the board first performs system initialization and puts the Nios II processor into a waiting state; after the PC sends a start signal to the board through the USB interface, the Nios II processor writes the registers of the two CMOS image sensors in turn through the configuration controller, sets them to snapshot mode, and configures parameters such as image resolution, exposure time and electronic gain. When configuration is complete, the I²C bus of the configuration controller enters the idle state and the two timing controllers send TRIGGER pulses synchronously. On receiving the TRIGGER pulse, each CMOS image sensor performs an internal reset and then outputs a STROBE pulse whose width marks the pixel integration time. After the STROBE signal falls from 1 to 0, the sensor outputs data DOUT[7:0] normally, together with the synchronization signals F_VALID and L_VALID. When the timing controller receives the returned data and synchronization signals, it first ANDs F_VALID and L_VALID; while the result is high the data are valid and are stored, with the pixel clock as the working clock, into the dual-port RAM at addresses 0 to 1280. When the result falls from high to low, one line of valid data has been transferred; the data in the two dual-port RAMs are then packed into 512-byte packets and output in sequence to the FIFO of the USB 2.0 interface chip, and from there transferred to the PC over the USB cable. After a frame has been transferred, the Nios II processor sets the CMOS image sensors to STANDBY mode through the configuration controller, stops the data output and waits for the next start signal.
A detection method using the weak target imaging detection device, comprising the following five main steps:
(1) Dual-channel image acquisition: after the task starts, the USB ports are first scanned and the specified imaging devices are connected; after the connection is confirmed, control words are sent to the imaging devices to set the imaging parameters, including image resolution, exposure time and electronic gain; after the setting is completed, an acquisition command is sent and the image data are awaited; when the image data of both channels have been transferred, the images are saved in a losslessly compressed bitmap format.
(2) Image distortion correction: Zhang Zhengyou's method is used to calibrate the optical distortion parameters of the imaging system; the nonlinear distortion model considers only the radial distortion of the image:

δ_X = x(k_1 r^2 + k_2 r^4 + k_3 r^6 + ⋯),  δ_Y = y(k_1 r^2 + k_2 r^4 + k_3 r^6 + ⋯)

where δ_X and δ_Y are the distortion values, which depend on the pixel position of the projected point in the image; x, y are the normalized projection values of the image point in the imaging-plane coordinate system obtained from the linear projection model, with r^2 = x^2 + y^2; and k_1, k_2, k_3, … are the radial distortion coefficients. Considering only second-order distortion here, the distorted coordinates are:

x_d = x + δ_X = x + x(k_1 r^2 + k_2 r^4),  y_d = y + δ_Y = y + y(k_1 r^2 + k_2 r^4)

Let (u_d, v_d) and (u, v) be the actual and ideal coordinates of a spatial point in the image coordinate system; their relation is then:

\begin{bmatrix} (u-u_0)r^2 & (u-u_0)r^4 \\ (v-v_0)r^2 & (v-v_0)r^4 \end{bmatrix} \begin{bmatrix} k_1 \\ k_2 \end{bmatrix} = \begin{bmatrix} u_d-u \\ v_d-v \end{bmatrix}

Taking the linear calibration result as the initial parameter values, the following objective function is minimized to estimate the nonlinear parameters:

Σ_{i=1}^{n} Σ_{j=1}^{m} ‖ m_{ij} − m̂(A, k_1, k_2, R_i, t_i, M_j) ‖^2

where m_{ij} is the image point of the j-th calibration-template point on the i-th image, m̂(·) is the projection point computed from the estimated parameters, M_j is the coordinate of the j-th template point in the world coordinate system, m is the number of feature points per image, and n is the number of images. The camera calibration parameters are refined by LM iterative optimization to obtain more accurate radial distortion coefficients, from which the undistorted image coordinates are recovered.
(3) Dual-channel image registration: to align the pixels of the two channel images under different imaging fields of view, wavebands, polarization angles and optical distortions, an image registration algorithm based on SURF feature points is used, comprising the following five sub-steps:
1) Detect SURF feature points: on the basis of the integral image, box filters are used to approximate second-order Gaussian filtering, and the Hessian response is computed for each candidate feature point and for the points around it; if the candidate has the maximum Hessian value, it is taken as a feature point;
2) Generate feature description vectors: using the grayscale information of the feature-point neighbourhood, the first-order Haar wavelet responses of the integral image are computed to obtain the grayscale distribution and produce a 128-dimensional description vector;
3) Two-step feature matching: a coarse matching step based on the nearest/second-nearest distance ratio and a fine matching step based on RANSAC establish a correct one-to-one correspondence between the feature points of the reference image and those of the image to be registered. After the feature vectors of the two images have been generated, the Euclidean distance between SURF description vectors is used as the similarity measure of key points: for each feature point a K-d tree returns the distance d_ND to its nearest-neighbour feature point and the distance d_NND to its second-nearest neighbour; if their ratio is below a threshold ε, the pair formed by the point and its nearest neighbour is retained as a match. Then 4 pairs of initial matching points are randomly selected, the perspective transformation matrix H determined by these 4 pairs is computed, and the goodness of fit of the remaining feature points is measured with this matrix:

‖ (x_i′, y_i′, 1)^T − H (x_i, y_i, 1)^T ‖ ≤ t

where t is a threshold: feature pairs with error not larger than t are inliers of H, and pairs with larger error are outliers. The inlier set is updated iteratively in this way; the k random samplings of RANSAC yield the maximum inlier set, together with the optimized perspective transformation matrix H corresponding to it;
4) Coordinate transform and resampling: the image pixel coordinates are transformed linearly by the estimated perspective transformation matrix H, and the pixel gray values are resampled by bilinear interpolation; bilinear interpolation assumes that the gray level varies linearly within the region enclosed by the four points surrounding the interpolated point, so that the gray value of the interpolated point can be computed by linear interpolation from the gray values of its four neighbouring pixels;
5) Crop the image overlap region: the four boundary points of the transformed image coordinates are examined according to the following formulas to determine the four corner coordinates (X_min, Y_min), (X_min, Y_max), (X_max, Y_min), (X_max, Y_max) of the overlap region after registration:

X_min = max(X_0, X_3), with X_min = 0 if X_min < 0
X_max = min(X_1, X_2, W − 1)
Y_min = max(Y_0, Y_1), with Y_min = 0 if Y_min < 0
Y_max = min(Y_2, Y_3, H − 1)

where W and H are the width and height of the image. The channel images are cropped to the rectangular region bounded by these points, giving the registered 0° and 90° polarization images I(0°) and I(90°);
(4) Image differential fusion: dual-channel orthogonal differential fusion is used, and the resulting orthogonal differential image is:

Q = I(0°) − I(90°)
(5) Image target detection: the system performs morphology-based target detection on the orthogonal differential polarization image, comprising the following three sub-steps:
1) Binarization: the maximum between-class variance method is used to adaptively select a global threshold. The principle is as follows: suppose the image has M gray levels in the range 0 to M−1; a gray value t in this range divides the image into two groups G_0 and G_1, where G_0 contains the pixels with gray values 0 to t and G_1 those with gray values t+1 to M−1. Let N be the total number of pixels and n_i the number of pixels with gray value i; the probability of gray value i is p_i = n_i / N, the class probabilities are ω_0 = Σ_{i=0}^{t} p_i and ω_1 = Σ_{i=t+1}^{M−1} p_i, and the class means are μ_0 = (1/ω_0) Σ_{i=0}^{t} i·p_i and μ_1 = (1/ω_1) Σ_{i=t+1}^{M−1} i·p_i. The between-class variance is then:

σ(t)^2 = ω_0 ω_1 (μ_0 − μ_1)^2

The optimal threshold T is the value of t that maximizes the between-class variance:

T = argmax σ(t)^2,  t ∈ [0, M−1]
2) Opening operation: the opening operation filters out small interfering objects and yields a more accurate target contour. It is defined as erosion followed by dilation. Erosion eliminates irrelevant details in the object, in particular edge points, and shrinks the object boundary inwards; its expression is:

E = X ⊖ B = {(x, y) | B_{x,y} ⊆ X}

where E is the binary image after erosion; B is the structuring element (template), a figure of arbitrary shape composed of 0s and 1s with a defined centre point about which the erosion is performed; and X is the pixel set of the binarized original image. The computation slides the structuring element B over the image domain of X: when its centre coincides with a point (x, y) of the image, the pixels of the structuring element are traversed, and if every pixel of B matches the corresponding pixel in the neighbourhood centred at (x, y), the pixel (x, y) is retained in E; pixels that do not satisfy this condition are removed, which shrinks the boundary. Dilation has the opposite effect: it expands the boundary points of the binarized object contour and can fill holes left in the object after segmentation, making the object complete; its expression is:

S = X ⊕ B = {(x, y) | B_{x,y} ∩ X ≠ ∅}

where S is the set of binary image pixels after dilation, B is the structuring element (template), and X is the image pixel set after binarization. The computation slides B over the image domain of X: when the centre of B moves to a point (x, y) of the image, the pixels of the structuring element are traversed, and if at least one pixel of B coincides with a pixel of X, the pixel (x, y) is retained in S; otherwise it is removed. After the opening operation, the binary image is divided into several connected regions;
3) Connected-domain identification: 8-adjacency is first used to segment the connected domains in the image. An 8-adjacent connected domain is defined such that, for every pixel in the region, at least one of its 8 neighbours in the 8 surrounding directions also belongs to the region; according to this definition, different connected domains in the binary image are given different numeric labels. The pixel perimeter of each connected domain is then extracted and compared with a preset target threshold; if it lies within the threshold interval, the domain is judged to be a candidate target. Finally, the minimum rectangle enclosing the contour of the connected domain is drawn in the image to mark the candidate target, completing the target detection.
The invention has the following advantages:
1. The hardware system is easy to implement: no complicated beam-splitting optics or special image-sensor fabrication process is required.
2. The software computational complexity is low: the complicated camera calibration only needs to be performed once in the laboratory, and image fusion requires no degree-of-polarization computation, only one simple pixel-wise gray-level difference operation.
3. The registration accuracy of the algorithm is high: the nonlinear distortion of the cameras is corrected before registration.
4. The method is applicable to the detection of moving targets.
Brief description of the drawings
Fig. 1 is the hardware/software functional block diagram of the weak target imaging detection system of the present invention.
Fig. 2 is a schematic perspective view of the hardware structure of the weak target imaging detection device of the present invention. Reference numerals: 1 front housing panel; 2 rear housing frame; 3 tripod mounting seat; 4 0° linear polarizer; 5 90° linear polarizer; 6, 7 optical lenses; 8, 9 C-mount lens adapter rings; 10, 11 filter holders; 12 470 nm narrowband filter; 13 630 nm narrowband filter; 14, 15 CMOS image sensors; 16 USB interface.
Fig. 3 is the hardware circuit diagram of the FPGA main control board of the present invention.
Fig. 4 is the software flow chart of the weak target imaging detection method of the present invention.
Detailed description of the invention
The technical solution of the present invention is described in detail below with reference to the accompanying drawings:
The hardware/software functional block diagram of the weak target imaging detection system of the present invention is shown in Fig. 1. The hardware module of the device is divided into three parts: the instrument housing, the optical system and the FPGA main control board. The instrument housing connects the optical lenses, the circuit board and the tripod, and comprises the front housing panel, the rear housing frame and the tripod mounting seat. The optical system adopts a dual-channel structure to obtain two images with different polarization angles and wavebands: channel 1 comprises a 0° linear polarizer, an optical lens, a C-mount lens adapter ring, a filter holder, a 470 nm narrowband filter and a CMOS image sensor; channel 2 comprises a 90° linear polarizer, an optical lens, a C-mount lens adapter ring, a filter holder, a 630 nm narrowband filter and a CMOS image sensor. The FPGA main control board performs parameter configuration, synchronous acquisition, image buffering and preprocessing for the dual-channel CMOS image sensors, and transmits the data to the PC through the USB interface. The software module runs on the PC and performs, in turn, dual-channel image acquisition, image distortion correction, dual-channel image registration, image differential fusion and image target detection.
A schematic perspective view of the hardware structure of the weak target imaging detection device is shown in Fig. 2. The front housing panel 1 measures 100 mm × 50 mm × 5 mm and carries the two C-mount lens adapter rings 8, 9 for fixing the optical lenses; the centre distance of the two adapter rings is 50 mm and the thread major diameter is 25.1 mm. The rear housing frame 2 measures 100 mm × 50 mm × 30 mm and is attached to the front panel by twelve Φ3×6 screws around its periphery; the type-B USB interface 16 on its left side connects the FPGA main control board to the PC. The tripod mounting seat 3 is located on the underside of the rear housing frame and connects the tripod head through a central 1/4-20 screw (1/4 inch outer diameter, 20 threads per inch). The optical lenses 6, 7 have a fixed focal length of 8 mm, an aperture adjustable from F1.4 to F16 and a focusing range of 0.1 m to ∞, and are connected to the C-mount adapter rings 8, 9 on the front panel. The two rotatable linear polarizers 4, 5 are mounted in front of the optical lenses 6, 7 by adapter rings of size M30.5 × 0.5 mm (outer diameter 30.5 mm, pitch 0.5 mm); a linear polarization calibration board is used to set the polarization directions of the two polarizers to 0° and 90° respectively. The two narrowband filters 12, 13 are mounted directly in front of the CMOS image sensors 14, 15 by the filter holders 10, 11. Both filters use two-sided mirror-coated material and measure 12 mm × 12 mm × 0.7 mm; their centre wavelengths are 470 nm and 630 nm respectively, the half bandwidth is 20 nm, the peak transmittance is >90%, and the blocking depth is <1%. The CMOS image sensors 14, 15 are both 1.3-megapixel MT9M001 devices. The MT9M001 is a 1/2" monochrome area-array sensor with a spectral response range of 400–1050 nm; its imaging signal-to-noise ratio and dynamic range are 45 dB and 68.2 dB respectively, reaching the level of a CCD; the 5.2 μm × 5.2 μm pixels provide a high low-light sensitivity of 2.1 V/lux·s; and the continuous capture capability of 1280 × 1024 at 30 fps satisfies the detection requirements of most moving targets.
The hardware circuit diagram of the FPGA main control board of the present invention is shown in Fig. 3. To realize synchronous acquisition and control of the dual-channel CMOS image sensors, the main control board is designed around a single non-volatile FPGA chip; using programmable system-on-chip technology, a 32-bit soft-core Nios II processor and part of its peripherals are integrated into the single chip, and off-chip only a USB 2.0 interface chip and a type-B USB interface are used to communicate with the PC, which greatly increases the integration of the system functions and reduces the system-level cost. The Nios II processor is built as an IP core and controls, through the Avalon bus, the on-chip peripherals: user RAM, user FLASH, the USB controller, the two dual-port RAM controllers corresponding to the two channels, and the image acquisition module. The user RAM serves as the working memory of the Nios II processor; the user FLASH stores the program code executed by the Nios II processor; the USB controller handles the configuration and bus-protocol conversion of the USB 2.0 interface chip; each dual-port RAM is an asynchronous FIFO used to screen and process the valid image-line data and to keep the data synchronized during transmission; the image acquisition module consists of a configuration controller and a timing controller, the configuration controller configuring the internal registers of the CMOS image sensors through the I²C two-wire serial bus (SCLK, SDATA), and the timing controller controlling the synchronous data output DOUT[9:0] of the CMOS image sensors through the clock/status signals STROBE, PIXCLK, L_VALID, F_VALID and the control signals STANDBY, TRIGGER, CLKIN.
In a specific implementation, the FPGA chip is an Altera MAX 10 series device, model 10M08E144ES, manufactured in TSMC's 55 nm embedded NOR flash process, with 8K logic elements, 378 Kbit of embedded SRAM and 172 KB of user flash. Since the maximum pixel array of the CMOS image sensor is 1280 × 1024 with 8-bit quantization, buffering one line of data requires about 10 Kbit of memory; two 10 Kbit blocks are therefore allocated from the embedded SRAM to build the two dual-port RAMs, and the remaining 358 Kbit is allocated to the user RAM. The USB 2.0 interface chip is a Cypress CY7C68013A with an internal FIFO of 4 KB; the peripheral side and the USB side can operate on this FIFO simultaneously, the FIFO can exchange data with external circuitry without intervention of the USB firmware, and the maximum transfer rate is 96 MB/s.
The workflow of the FPGA main control board is as follows. After power-up the board first performs system initialization and puts the Nios II processor into a waiting state. After the PC sends a start signal to the board through the USB interface, the Nios II processor writes the registers of the two CMOS image sensors in turn through the configuration controller, sets them to snapshot mode, and configures parameters such as image resolution, exposure time and electronic gain. When configuration is complete, the I²C bus of the configuration controller enters the idle state and the two timing controllers send TRIGGER pulses synchronously. On receiving the TRIGGER pulse, each CMOS image sensor performs an internal reset and then outputs a STROBE pulse whose width marks the pixel integration time. After the STROBE signal falls from 1 to 0, the sensor outputs data DOUT[7:0] normally, together with the synchronization signals F_VALID and L_VALID. When the timing controller receives the returned data and synchronization signals, it first ANDs F_VALID and L_VALID; while the result is high the data are valid and are stored, with the pixel clock as the working clock, into the dual-port RAM at addresses 0 to 1280. When the result falls from high to low, one line of valid data has been transferred; the data in the two dual-port RAMs are then packed into 512-byte packets and output in sequence to the FIFO of the USB 2.0 interface chip, and from there transferred to the PC over the USB cable. After a frame has been transferred, the Nios II processor sets the CMOS image sensors to STANDBY mode through the configuration controller, stops the data output and waits for the next start signal.
The software flow chart of the weak target imaging detection method of the present invention is shown in Fig. 4. The method comprises the following five main steps:
(1) Channel Image collection.After task starts, the imaging device that first scanning USB port connection are specified;Confirm To imaging device transmission control word to arrange imaging parameters after connection, including image resolution ratio, time of exposure and electron gain etc.; After accomplishing the setting up send acquisition instructions and etc. view data to be received, after twin-channel view data is all transmitted with The bitmap format of lossless compress preserves image.
(2) image distortion correction.For realizing the accuracy registration of Channel Image, need respectively two width images to be distorted Correction.The dual pathways in view of imaging system has independence, and the Zhang Zhengyou plane reference method being designed with classics demarcates imaging The optical distortion parameter of system.Optical distortion is nonlinear, mainly includes radial distortion, tangential distortion, centrifugal distortion and thin Prismatic distortion etc., need to carry out the estimation of distortion parameter with nonlinear model.Wherein radial distortion is the master that image produces error Wanting factor, its model can approximate description be:
&delta; X = x ( k 1 r 2 + k 2 r 4 + k 3 r 6 + ) &delta; Y = y ( k 1 r 2 + k 2 r 4 + k 3 r 6 + ) - - - ( 1 )
Wherein, δXAnd δYBeing distortion value, it is relevant with subpoint location of pixels in the picture.X, y are that picture point is in imaging The normalization projection value obtained according to linear projection model under plane coordinate system,k1、k2、k3Deng for radial distortion Coefficient, the most only considers secondary distortion, and the coordinate after distortion is:
x d = x + &delta; X = x + x ( k 1 r 2 + k 2 r 4 ) y d = y + &delta; Y = y + y ( k 1 r 2 + k 2 r 4 ) - - - ( 2 )
Make (ud,vd), (u v) is respectively actual coordinate and the ideal coordinates that spatial point is corresponding under image coordinate system.Both then Relation is:
( u - u 0 ) r 2 ( u - u 0 ) r 4 ( v - v 0 ) r 2 ( v - v 0 ) r 4 k 1 k 2 = u d - u v d - v - - - ( 3 )
Using linear calibration's result as initial parameter values, bring following object function into and minimize, it is achieved nonlinear parameter Estimate:
&Sigma; i = 1 n &Sigma; j = 1 m | | m i j - m ^ ( A , k 1 , k 2 , R i , t i , M j ) | | 2 - - - ( 4 )
Wherein,Be the jth o'clock of calibrating template on the i-th width image, utilize estimate parameter obtain Subpoint, MjFor calibrating template jth point coordinate figure under world coordinate system, m is each image feature point number, and n is figure As number.Utilize the camera calibration parameter of LM majorization of iterative method gained, finally give more accurate coefficient of radial distortion, and then The distortionless image coordinate of reverse.
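For concreteness, a sketch of this calibration step using OpenCV (an assumption: the invention does not prescribe a library). cv2.calibrateCamera performs the linear initialisation plus LM refinement described above; the chessboard size, square size and file path are placeholders:

```python
import glob
import cv2
import numpy as np

def calibrate_radial(pattern="calib/*.bmp", board=(9, 6), square=20.0):
    """Estimate intrinsics and radial distortion (k1, k2) from chessboard shots."""
    objp = np.zeros((board[0] * board[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2) * square
    obj_pts, img_pts, size = [], [], None
    for path in glob.glob(pattern):
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        ok, corners = cv2.findChessboardCorners(gray, board)
        if not ok:
            continue
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_pts.append(objp)
        img_pts.append(corners)
        size = gray.shape[::-1]
    # Fix tangential terms and k3 so that only k1, k2 are estimated, as in the text.
    flags = cv2.CALIB_ZERO_TANGENT_DIST | cv2.CALIB_FIX_K3
    rms, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts, size, None, None,
                                             flags=flags)
    return K, dist  # dist.ravel()[:2] are k1, k2

def undistort(image, K, dist):
    """Apply the estimated intrinsics/distortion to recover undistorted coordinates."""
    return cv2.undistort(image, K, dist)
```

Each channel is calibrated once in the laboratory and the resulting K, dist are reused for every acquired frame.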
(3) Channel Image registration.Due to dual pathways difference in imaging viewing field, wave band, the angle of polarization and optical distortion, Two width images need to carry out registrating just to make pixel to be fused align.In view of SURF feature point pairs image rotation, translate, contract Put, with noise, there is preferable robustness, have employed a kind of image registration algorithm based on SURF characteristic point, including following five Sub-step:
1) Detect SURF feature points. On the basis of the integral image, box filters are used to approximate second-order Gaussian filtering, and the Hessian response is computed for each candidate feature point and for the points around it; if the candidate has the maximum Hessian value, it is taken as a feature point.
2) Generate feature description vectors. Using the grayscale information of the feature-point neighbourhood, the first-order Haar wavelet responses of the integral image are computed to obtain the grayscale distribution and produce a 128-dimensional description vector.
3) Two-step feature matching. A coarse matching step based on the nearest/second-nearest distance ratio and a fine matching step based on RANSAC establish a correct one-to-one correspondence between the feature points of the reference image and those of the image to be registered. After the feature vectors of the two images have been generated, the Euclidean distance between SURF description vectors is used as the similarity measure of key points: for each feature point a K-d tree returns the distance d_ND to its nearest-neighbour feature point and the distance d_NND to its second-nearest neighbour; if their ratio is below a threshold ε, the pair formed by the point and its nearest neighbour is retained as a match. Then 4 pairs of initial matching points are randomly selected, the perspective transformation matrix H determined by these 4 pairs is computed, and the goodness of fit of the remaining feature points is measured with this matrix:

‖ (x_i′, y_i′, 1)^T − H (x_i, y_i, 1)^T ‖ ≤ t    (5)

where t is a threshold: feature pairs with error not larger than t are inliers of H, and pairs with larger error are outliers. The inlier set is updated iteratively in this way; the k random samplings of RANSAC yield the maximum inlier set, together with the optimized perspective transformation matrix H corresponding to it.
4) coordinate transform and resampling.The coordinate of image pixel is linearly become by the perspective transformation matrix H according to trying to achieve Change, and use bilinear interpolation that the gray value of image pixel is carried out resampling.Bilinear interpolation supposes around interpolated point Grey scale change in the region in four some besieged cities is linear, such that it is able to by linear interpolation method, according to four neighbor pixel Gray value, calculate the gray value of interpolated point.
5) cutting image overlapping region.Four boundary points after converting image coordinate according to following formula differentiate, determine Four boundary point coordinate (X of overlapping region after image registrationmin,Ymin)、(Xmin,Ymax)、(Xmax,Ymin)、(Xmax,Ymax):
X m i n = max ( X 0 , X 3 ) , X min = 0 | X min < 0 X max = min ( X 1 , X 2 , W - 1 ) Y min = max ( Y 0 , Y 1 ) , Y min = 0 | Y min < 0 Y m i n = min ( Y 2 , X 3 , H - 1 ) - - - ( 6 )
Wherein, W, H are width and the height of image.Channel Image is cut out by the rectangular area constituted according to above boundary point Cut, obtain 0 ° and 90 ° of polarization image I (0 °) and I (90 °) of registration.
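A compact sketch of sub-steps 1)–5) with OpenCV is given below as an illustration. SURF requires an opencv-contrib build with the non-free modules enabled (ORB would be a drop-in fallback); the Hessian threshold and ratio ε = 0.7 are placeholder values, and the overlap is cropped from the non-zero area of the warped image rather than from the four boundary points of formula (6):

```python
import cv2
import numpy as np

def register_pair(img_ref, img_mov, ratio=0.7, ransac_thresh=3.0):
    """Align img_mov to img_ref: SURF features, ratio test, RANSAC homography."""
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    k1, d1 = surf.detectAndCompute(img_ref, None)
    k2, d2 = surf.detectAndCompute(img_mov, None)

    # Coarse matching: nearest / second-nearest distance ratio via a K-d tree (FLANN).
    flann = cv2.FlannBasedMatcher(dict(algorithm=1, trees=5), dict(checks=50))
    matches = flann.knnMatch(d2, d1, k=2)
    good = [m for m, n in matches if m.distance < ratio * n.distance]
    assert len(good) >= 4, "not enough matches for a homography"

    # Fine matching: RANSAC estimate of the perspective (homography) matrix H.
    src = np.float32([k2[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, ransac_thresh)

    # Coordinate transform + bilinear resampling, then keep the common overlap.
    h, w = img_ref.shape[:2]
    warped = cv2.warpPerspective(img_mov, H, (w, h), flags=cv2.INTER_LINEAR)
    ys, xs = np.nonzero(warped > 0)
    y0, y1, x0, x1 = ys.min(), ys.max(), xs.min(), xs.max()
    return img_ref[y0:y1 + 1, x0:x1 + 1], warped[y0:y1 + 1, x0:x1 + 1]
```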
(4) image difference merges.Owing to reflection and the scattered light of target and background have aobvious on 0 ° and 90 ° of polarization directions The light-intensity difference write, uses the image co-registration mode of dual pathways orthogonal differential can not only obtain preferable signal noise ratio (snr) of image, and And there is extremely low software complexity.Merging the orthogonal differential graphical representation obtained is:
Q=I (0 °)-I (90 °) (7)
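A minimal sketch of equation (7), assuming the two registered images are 8-bit arrays; the subtraction is done in a signed type to avoid wrap-around, and the rescaled copy is only for display:

```python
import numpy as np

def orthogonal_differential(i0, i90):
    """Q = I(0 deg) - I(90 deg); signed arithmetic, plus an 8-bit copy for viewing."""
    q = i0.astype(np.float32) - i90.astype(np.float32)
    q8 = (255.0 * (q - q.min()) / max(float(np.ptp(q)), 1e-9)).astype(np.uint8)
    return q, q8
```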
(5) image object detection.Mathematical morphology is the mathematical method of the contour structure analyzing geometry and object, main Including expansion, burn into opening operation, closed operation etc..In image processing field for " keeping the basic configuration of object, remove not Correlated characteristic ", can extract for expressing and describing shape useful feature.Generally Morphological scale-space show as a kind of based on The neighborhood operation mode of template, i.e. defines a kind of special neighborhood being referred to as " structural element " or template, to be processed On each pixel of bianry image, its region corresponding with bianry image being carried out certain logical operations, the result obtained is exactly The pixel value of output image.The character of the size of structural element, content and computing all will influence whether the knot of Morphological scale-space Really.System carries out target detection based on morphologic method to orthogonal differential polarization image, has explicit physical meaning, computing effect The feature that rate is high, including image binaryzation, opening operation operation, connected domain identification three sub-steps.
1) binary conversion treatment.Image binaryzation processes the premise being by morphologic filtering, and chooses and suitably split threshold Value is its important step.Here use maximum variance between clusters self adaptation choose global threshold, this algorithm by Otsu in 1979 Propose, be that statistical property based on entire image realizes automatically choosing of threshold value, be the overall situation the most outstanding representative of binaryzation.Calculate The basic thought of method is that the gray scale of image is divided into two groups by the gray value with a certain supposition, when the inter-class variance maximum of two groups, this Gray value is exactly the optimal threshold of image binaryzation.If image has M gray value, span, at 0 M-1, is chosen in this range Gray value t, divides the image into two groups of G0And G1, G0The gray value of the pixel comprised is at 0 t, G1Gray value at t+1 M-1, use N Represent total number of image pixels, niRepresenting the number of the pixel that gray value is i, the probability that the most each gray value i occurs is pi=ni/ N, G0And G1The probability that class occurs isAverage isThen Inter-class variance is:
σ(t)20ω101)2(8) optimal threshold T is exactly the value of the t making inter-class variance maximum, it may be assumed that
T=argmax σ (t)2,t∈[0,M-1] (9)
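A direct implementation of formulas (8)–(9), shown as a sketch; for 8-bit images cv2.threshold with THRESH_OTSU would return the same threshold:

```python
import numpy as np

def otsu_threshold(gray, levels=256):
    """Exhaustive search for the t maximising the between-class variance (8)-(9)."""
    hist = np.bincount(gray.ravel(), minlength=levels).astype(np.float64)
    p = hist / hist.sum()                      # p_i = n_i / N
    best_t, best_var = 0, -1.0
    for t in range(levels - 1):
        w0, w1 = p[:t + 1].sum(), p[t + 1:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(0, t + 1) * p[:t + 1]).sum() / w0
        mu1 = (np.arange(t + 1, levels) * p[t + 1:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2       # sigma(t)^2
        if var > best_var:
            best_t, best_var = t, var
    return best_t

# usage: binary = ((diff_image > otsu_threshold(diff_image)) * 255).astype(np.uint8)
```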
2) opening operation operation.Opening operation operation is for filtering tiny chaff interference and obtaining more accurate objective contour. It is defined as first corroding the process expanded afterwards: the Main Function of corrosion is to eliminate incoherent details, particularly edge in object Point, makes the border of object internally shrink.Its expression formula is as follows:
E = X &CircleTimes; B = { x , y | B x , y &SubsetEqual; X } - - - ( 10 )
Wherein, the bianry image after E represents corrosion;B represents structural element i.e. template, it be made up of 0 or 1 any one Plant the figure of shape, B has a central point, corrodes centered by this puts;X is that original image is after binary conversion treatment The collection of pixels of image.Calculating process is slide construction element B in X image area, when a certain with on X image of its central point Point (x, y) overlap time, traversal structural element in pixel, if each pixel with (x, y) centered by identical bits Put middle corresponding pixel points identical, then (x, y) will be retained in E pixel, for being unsatisfactory for the pixel of condition then Disallowable fall, thus can reach shrink border effect.Expand contrary with the effect of corrosion, its limit to binaryzation contour of object Boundary's point expands, it is possible to the cavity remained in object after filling up segmentation, makes object complete.Its expression formula is as follows:
S = X &CirclePlus; B = { x , y | S x , y &cap; X &NotEqual; &phi; } - - - ( 11 )
Wherein, the set of the bianry image pixel after S represents expansion;B represents structural element i.e. template;X represents process Image pixel set after binary conversion treatment.Calculating process is slide construction element B in X image area, when the central point of B moves on to Certain point on X image (x, time y), the pixel in traversal structural element, if the pixel in structural element B and X image Pixel at least one identical, then just retain that (x, y) pixel is in S, the most just removes this pixel.
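A sketch of the opening operation with OpenCV; the 3 × 3 rectangular structuring element is a placeholder choice, not specified by the text:

```python
import cv2

def open_binary(binary, ksize=3):
    """Opening = erosion followed by dilation, eqs. (10)-(11), with a ksize x ksize template."""
    B = cv2.getStructuringElement(cv2.MORPH_RECT, (ksize, ksize))
    eroded = cv2.erode(binary, B)      # shrink boundaries, drop small clutter
    # equivalent one-liner: cv2.morphologyEx(binary, cv2.MORPH_OPEN, B)
    return cv2.dilate(eroded, B)       # restore the surviving object shapes
```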
3) connected domain identification.After bianry image is carried out opening operation, image is divided into multiple connected region.In order to therefrom Filter out candidate target, need connected domain is split, labelling, and extract feature for target recognition.Connected area segmentation Purpose is target " 1 " value collection of pixels adjacent to each other in a width dot matrix bianry image to be extracted, and is different in image Connected domain insert different digital labellings.Algorithm is generally divided into two classes: a class is local neighborhood algorithm, and basic thought is from office Portion, to overall, check each Connected component one by one, determines one " starting point ", then inserts labelling to surrounding neighbors extension;Separately One class is to local from entirety, first determines different Connected component, then fills out the method for each Connected component area filling Enter labelling.Here use 8 adjoin criterion the connected domain in image is scanned for, labelling.The 8 definition Shi Gai districts adjoining connected domain Each pixel in territory, in 8 neighbors in its all 8 directions, at least a pixel still falls within this region.The company of completing After the segmentation in logical territory and labelling, the pixel girth and the targets threshold set in advance that extract each connected domain respectively contrast, as Fruit is then judged to candidate target in threshold interval, uses the minimum rectangle frame that can surround its connected domain profile to mark in the picture Know and target.

Claims (5)

1. A weak target imaging detection device, composed of three parts: an instrument housing, an optical system and an FPGA main control board, characterized in that: the instrument housing is used to connect the optical lenses, the circuit board and the tripod, and comprises a front housing panel, a rear housing frame and a tripod mounting seat; the optical system adopts a dual-channel structure to obtain two images with different polarization angles and wavebands, channel 1 comprising a 0° linear polarizer, an optical lens, a C-mount lens adapter ring, a filter holder, a 470 nm narrowband filter and a CMOS image sensor, and channel 2 comprising a 90° linear polarizer, an optical lens, a C-mount lens adapter ring, a filter holder, a 630 nm narrowband filter and a CMOS image sensor; the FPGA main control board performs parameter configuration, synchronous acquisition, image buffering and preprocessing for the dual-channel CMOS image sensors and transmits the data to the PC through the USB interface.
2. The weak target imaging detection device according to claim 1, characterized in that: the front housing panel measures 100 mm × 50 mm × 5 mm and carries two C-mount lens adapter rings for fixing the optical lenses, the centre distance of the two adapter rings being 50 mm and the thread major diameter 25.1 mm; the rear housing frame measures 100 mm × 50 mm × 30 mm and is attached to the front panel by twelve Φ3×6 screws around its periphery, with a type-B USB interface on its left side for connecting the FPGA main control board to the PC; the tripod mounting seat is located on the underside of the rear housing frame and connects the tripod head through a central 1/4-20 screw.
3. The weak target imaging detection device according to claim 1, characterized in that: the optical lenses of channel 1 and channel 2 have a fixed focal length of 8 mm, an aperture adjustable from F1.4 to F16 and a focusing range of 0.1 m to ∞, and are connected to the two C-mount lens adapter rings on the front panel; the two rotatable linear polarizers are mounted in front of the two optical lenses by adapter rings of size M30.5 × 0.5 mm; a linear polarization calibration board is used to set the polarization directions of the two polarizers to 0° and 90° respectively; the two narrowband filters are mounted directly in front of the CMOS image sensors by the filter holders; both filters use two-sided mirror-coated material and measure 12 mm × 12 mm × 0.7 mm, their centre wavelengths are 470 nm and 630 nm respectively, the half bandwidth is 20 nm, the peak transmittance is >90%, and the blocking depth is <1%; the CMOS image sensors are 1.3-megapixel 1/2" monochrome area-array sensors with a spectral response range of 400–1050 nm.
4. The weak target imaging detection device according to claim 1, characterized in that: the FPGA main control board is built around a single non-volatile FPGA chip; using programmable system-on-chip technology, a 32-bit soft-core Nios II processor and part of its peripherals are integrated into the single chip, and off-chip only a USB 2.0 interface chip and a type-B USB interface are used to communicate with the PC; the Nios II processor controls, through the Avalon bus, the on-chip peripherals: user RAM, user FLASH, the USB controller, the two dual-port RAM controllers corresponding to the two channels, and the image acquisition module; the user RAM serves as the working memory of the Nios II processor; the user FLASH stores the program code executed by the Nios II processor; the USB controller handles the configuration and bus-protocol conversion of the USB 2.0 interface chip; each dual-port RAM is an asynchronous FIFO used to screen and process the valid image-line data and to keep the data synchronized during transmission; the image acquisition module consists of a configuration controller and a timing controller, the configuration controller configuring the internal registers of the CMOS image sensors through the I²C two-wire serial bus (SCLK, SDATA), and the timing controller controlling the synchronous data output DOUT[9:0] of the CMOS image sensors through the clock/status signals STROBE, PIXCLK, L_VALID, F_VALID and the control signals STANDBY, TRIGGER, CLKIN.
5. weak signal target imaging detection method based on a kind of weak signal target imaging detection device described in claim 1, its feature exists In: include following five key steps:
(1) Channel Image collection, the imaging device that first task scans USB port after starting and connection is specified;Confirm to connect Afterwards to imaging device transmission control word to arrange imaging parameters, including image resolution ratio, time of exposure and electron gain;Complete to set Postpone acquisition instructions of transmission and etc. view data to be received, with lossless pressure after twin-channel view data is all transmitted The bitmap format of contracting preserves image;
(2) image distortion correction, is designed with Zhang Zhengyou method and demarcates the optical distortion parameter of imaging system, nonlinear distortion varying model Only consider the radial distortion of image:
&delta; X = x ( k 1 r 2 + k 2 r 4 + k 3 r 6 + ) &delta; Y = y ( k 1 r 2 + k 2 r 4 + k 3 r 6 + )
Wherein, δXAnd δYBeing distortion value, it is relevant with subpoint location of pixels in the picture, and x, y are that picture point is at imaging plane The normalization projection value obtained according to linear projection model under coordinate system,k1、k2、k3Deng for radial distortion system Number, the most only considers secondary distortion, and the coordinate after distortion is:
x_d = x + \delta_X = x + x\,(k_1 r^2 + k_2 r^4), \qquad y_d = y + \delta_Y = y + y\,(k_1 r^2 + k_2 r^4)
Let (u_d, v_d) and (u, v) denote, respectively, the actual (distorted) and ideal coordinates of the corresponding spatial point in the image coordinate system; the relation between the two is:
\begin{bmatrix} (u - u_0)\,r^2 & (u - u_0)\,r^4 \\ (v - v_0)\,r^2 & (v - v_0)\,r^4 \end{bmatrix} \begin{bmatrix} k_1 \\ k_2 \end{bmatrix} = \begin{bmatrix} u_d - u \\ v_d - v \end{bmatrix}
Taking the linear calibration result as the initial parameter values, the following objective function is minimized to estimate the nonlinear parameters:
\sum_{i=1}^{n} \sum_{j=1}^{m} \left\| m_{ij} - \hat{m}(A, k_1, k_2, R_i, t_i, M_j) \right\|^2
where \hat{m}(A, k_1, k_2, R_i, t_i, M_j) is the projection of the j-th point of the calibration template onto the i-th image computed with the estimated parameters, M_j is the coordinate of the j-th calibration template point in the world coordinate system, m is the number of feature points in each image, and n is the number of images; the camera calibration parameters are refined by Levenberg-Marquardt (LM) iterative optimization to obtain more accurate radial distortion coefficients, from which the undistorted image coordinates are recovered;
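A sketch of this calibration and correction step using OpenCV's implementation of Zhang's method is given below; the checkerboard geometry (9 × 6 inner corners, 25 mm squares) and the file paths are assumptions. cv2.calibrateCamera internally performs the LM refinement of the reprojection error described above.

```python
# Calibrate with Zhang's method (OpenCV) and undistort one channel image.
import glob
import numpy as np
import cv2

PATTERN = (9, 6)                      # inner corners of the calibration template (assumed)
obj_pts = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
obj_pts[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * 25.0  # 25 mm squares

objpoints, imgpoints, size = [], [], None
for path in glob.glob("calib/*.png"):             # calibration images of one channel
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    size = gray.shape[::-1]
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if found:
        objpoints.append(obj_pts)
        imgpoints.append(corners)

# calibrateCamera minimizes sum ||m_ij - m^(A, k, R_i, t_i, M_j)||^2 by LM iteration,
# returning the intrinsic matrix A and the distortion coefficients
rms, A, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, size, None, None)
k1, k2 = dist[0][0], dist[0][1]       # the radial distortion coefficients used above

img = cv2.imread("channel_0deg.png", cv2.IMREAD_GRAYSCALE)
undistorted = cv2.undistort(img, A, dist)         # invert the radial distortion
cv2.imwrite("channel_0deg_undist.png", undistorted)
```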
(3) Dual-channel image registration: used to achieve pixel alignment of the two channel images under different imaging fields of view, wavebands, polarization angles and optical distortions; an image registration algorithm based on SURF feature points is used, comprising the following five sub-steps (a condensed code sketch follows the last sub-step):
1) SURF feature point detection: on the basis of the integral image, box filters are used to approximate second-order Gaussian filtering, and the Hessian response is computed for each candidate feature point and the points around it; a candidate point is taken as a feature point if its Hessian response is the local maximum;
2) Feature descriptor generation: using the gray-level information in the neighborhood of each feature point, the first-order Haar wavelet responses of the integral image are computed to obtain the gray-level distribution, producing a 128-dimensional feature descriptor;
3) Two-step feature point matching: a coarse matching step based on the nearest-neighbor ratio test and a fine matching step based on RANSAC establish correct one-to-one correspondences between the feature points of the reference image and of the image to be registered, characterized in that: after the feature vectors of the two images have been generated, the Euclidean distance between SURF descriptors is first used as the similarity measure between key points in the two images; for each feature point, a K-d tree search yields the distance d_ND to its nearest-neighbor feature point and the distance d_NND to its second-nearest-neighbor feature point, and if their ratio is smaller than a threshold ε, the matching pair formed by this feature point and its nearest neighbor is retained; then 4 pairs of initial matching feature points are selected at random, the perspective transformation matrix H determined by these 4 point pairs is computed, and this matrix is used to measure the matching degree of the remaining feature points:
\left\| \begin{bmatrix} x_i' \\ y_i' \\ 1 \end{bmatrix} - H \begin{bmatrix} x_i \\ y_i \\ 1 \end{bmatrix} \right\| \le t
where t is a threshold; feature point pairs with an error less than or equal to t are inliers of H, and those with an error greater than t are outliers; the inlier set is updated continuously in this way, and after k random samplings of RANSAC the maximum inlier set is obtained, together with the optimized perspective transformation matrix H corresponding to that inlier set;
4) Coordinate transformation and resampling: the pixel coordinates of the image are transformed according to the estimated perspective transformation matrix H, and the gray values of the image pixels are resampled by bilinear interpolation; bilinear interpolation assumes that the gray level varies linearly within the region enclosed by the four points surrounding the interpolated point, so that the gray value of the interpolated point can be computed by linear interpolation from the gray values of its four neighboring pixels;
5) Cropping of the image overlap region: the four boundary points of the transformed image coordinates are evaluated according to the following formulas to determine the four boundary point coordinates (X_min, Y_min), (X_min, Y_max), (X_max, Y_min), (X_max, Y_max) of the overlap region after registration:
X_{\min} = \max(X_0, X_3), \quad X_{\min} := 0 \text{ if } X_{\min} < 0
X_{\max} = \min(X_1, X_2, W - 1)
Y_{\min} = \max(Y_0, Y_1), \quad Y_{\min} := 0 \text{ if } Y_{\min} < 0
Y_{\max} = \min(Y_2, Y_3, H - 1)
where W and H are the width and height of the image; the channel images are cropped to the rectangular region defined by the above boundary points, yielding the registered 0° and 90° polarization images I(0°) and I(90°);
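A condensed sketch of sub-steps 1)-5) with OpenCV is shown below. It assumes an OpenCV build that includes the contrib SURF module (cv2.xfeatures2d); ORB from stock OpenCV could stand in otherwise. The ratio threshold and RANSAC tolerance are illustrative values, and the overlap cropping of sub-step 5) would then be applied from the warped image corners.

```python
# SURF detection, ratio-test matching, RANSAC homography, bilinear resampling.
import numpy as np
import cv2

def register_pair(ref, mov, ratio=0.7, ransac_t=3.0):
    # extended=True gives the 128-dimensional descriptors used in the claim
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400, extended=True)
    k1, d1 = surf.detectAndCompute(ref, None)
    k2, d2 = surf.detectAndCompute(mov, None)

    # coarse matching: FLANN K-d tree nearest / second-nearest neighbours + ratio test
    flann = cv2.FlannBasedMatcher(dict(algorithm=1, trees=5), dict(checks=50))
    good = [p[0] for p in flann.knnMatch(d1, d2, k=2)
            if len(p) == 2 and p[0].distance < ratio * p[1].distance]

    # fine matching: RANSAC repeatedly samples 4 pairs and keeps the largest inlier set
    src = np.float32([k2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, ransac_t)

    # coordinate transform of the moving image into the reference frame,
    # with bilinear resampling of the gray values
    h, w = ref.shape
    warped = cv2.warpPerspective(mov, H, (w, h), flags=cv2.INTER_LINEAR)
    return warped, H
```

The homography is estimated from moving-image points to reference-image points so that warpPerspective maps channel 2 directly into the pixel grid of channel 1.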
(4) Image differential fusion: the two channels are fused by orthogonal differencing, and the resulting orthogonal differential image is expressed as:
Q = I(0°) - I(90°);
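A minimal sketch of this difference follows, assuming the two registered channel images have been saved under placeholder file names; the signed int16 cast simply avoids wrap-around of negative differences before rescaling for display.

```python
# Orthogonal differential fusion Q = I(0°) - I(90°) of the registered images.
import numpy as np
import cv2

i0 = cv2.imread("I_0deg_registered.png", cv2.IMREAD_GRAYSCALE)    # I(0°)
i90 = cv2.imread("I_90deg_registered.png", cv2.IMREAD_GRAYSCALE)  # I(90°)

q = i0.astype(np.int16) - i90.astype(np.int16)                    # signed difference
q_vis = cv2.normalize(q, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
cv2.imwrite("Q_orthogonal_diff.png", q_vis)
```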
(5) Image target detection: the system performs morphology-based target detection on the orthogonal differential polarization image, comprising the following three sub-steps:
1) Binarization: the global threshold is chosen adaptively by the maximum between-class variance (Otsu) method, whose principle is as follows: suppose the image has M gray levels in the range 0 to M-1; a gray value t in this range is chosen to divide the image into two groups G_0 and G_1, where G_0 contains the pixels with gray values in 0 to t and G_1 contains the pixels with gray values in t+1 to M-1; let N denote the total number of image pixels and n_i the number of pixels with gray value i, so that the probability of gray value i is p_i = n_i / N, the occurrence probabilities of classes G_0 and G_1 are \omega_0 = \sum_{i=0}^{t} p_i and \omega_1 = \sum_{i=t+1}^{M-1} p_i = 1 - \omega_0, and their mean gray values are \mu_0 = \sum_{i=0}^{t} i\,p_i / \omega_0 and \mu_1 = \sum_{i=t+1}^{M-1} i\,p_i / \omega_1; the between-class variance is then:
\sigma(t)^2 = \omega_0\,\omega_1\,(\mu_0 - \mu_1)^2
The optimal threshold T is the value of t that maximizes the between-class variance, that is:
T = \arg\max_{t \in [0, M-1]} \sigma(t)^2
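A direct NumPy transcription of this criterion, assuming an 8-bit image (M = 256), is sketched below; in practice cv2.threshold with cv2.THRESH_OTSU computes the same threshold.

```python
# Otsu threshold by exhaustive search over the between-class variance.
import numpy as np

def otsu_threshold(img, levels=256):
    hist = np.bincount(img.ravel(), minlength=levels).astype(np.float64)
    p = hist / hist.sum()                      # p_i = n_i / N
    i = np.arange(levels)
    best_t, best_var = 0, -1.0
    for t in range(levels - 1):
        w0, w1 = p[:t + 1].sum(), p[t + 1:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (i[:t + 1] * p[:t + 1]).sum() / w0
        mu1 = (i[t + 1:] * p[t + 1:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2       # sigma(t)^2 = w0*w1*(mu0 - mu1)^2
        if var > best_var:
            best_t, best_var = t, var
    return best_t                              # T = argmax_t sigma(t)^2
```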
2) Opening operation: the opening operation filters out small interfering objects and yields a more accurate target contour; it is defined as erosion followed by dilation. Erosion eliminates irrelevant details in the object, in particular edge points, and shrinks the object boundary inward; its expression is:
E = X \ominus B = \{(x, y) \mid B_{x,y} \subseteq X\}
where E is the binary image after erosion; B is the structuring element (template), a figure of arbitrary shape composed of 0s and 1s with a central point about which the erosion is performed; X is the pixel set of the original image after binarization. The computation slides the structuring element B over the image domain of X: when its central point coincides with a point (x, y) of the image X, the pixels of the structuring element are traversed, and if every one of them equals the corresponding pixel in the region centered at (x, y), the pixel (x, y) is retained in E; pixels that do not satisfy this condition are discarded, which shrinks the boundary. Dilation has the opposite effect of erosion: it expands the boundary points of the binarized object contour and can fill the holes remaining in the object after segmentation, making the object complete; its expression is:
S = X \oplus B = \{(x, y) \mid B_{x,y} \cap X \neq \varnothing\}
where S is the set of pixels of the binary image after dilation; B is the structuring element (template); X is the set of image pixels after binarization. The computation slides the structuring element B over the image domain of X: when the central point of B moves to a point (x, y) of the image X, the pixels of the structuring element are traversed, and if at least one pixel of the structuring element B matches a pixel of the image X, the pixel (x, y) is retained in S, otherwise it is removed. After the opening operation is applied to the binary image, the image is divided into multiple connected regions;
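The sketch below chains sub-steps 1) and 2) with OpenCV: Otsu binarization followed by an opening (erosion then dilation); the 3 × 3 structuring element and the input file name are assumed choices.

```python
# Otsu binarization of the differential image, then morphological opening.
import cv2

q = cv2.imread("Q_orthogonal_diff.png", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(q, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

B = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))    # structuring element B
eroded = cv2.erode(binary, B)                            # E = X erosion B
opened = cv2.dilate(eroded, B)                           # dilation of E, i.e. the opening
# equivalently: opened = cv2.morphologyEx(binary, cv2.MORPH_OPEN, B)
cv2.imwrite("Q_opened.png", opened)
```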
3) Connected-component identification: the connected components in the image are first segmented using the 8-adjacency criterion, an 8-connected component being defined as a region in which, for every pixel, at least one of the 8 neighboring pixels in its 8 directions also belongs to the region; according to this definition, different connected components in the binary image are assigned different numeric labels. The pixel perimeter of each connected component is then extracted and compared with a preset target threshold, and a component whose perimeter lies within the threshold interval is judged to be a candidate target. Finally, the minimum rectangle that can enclose the contour of the candidate target's connected component is used to mark the candidate target in the image, completing the target detection.
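A sketch of this last sub-step with OpenCV follows; the perimeter interval is an illustrative placeholder for the preset target threshold, and the axis-aligned boundingRect stands in for the minimum enclosing rectangle.

```python
# 8-connected labelling, perimeter gating, and bounding boxes for candidates.
import cv2

PERIMETER_MIN, PERIMETER_MAX = 20, 400        # assumed threshold interval (pixels)

def detect_targets(opened, display):
    # numeric label per 8-connected region, as in the claim
    n_labels, labels = cv2.connectedComponents(opened, connectivity=8)
    # [-2] picks the contour list in both OpenCV 3 and OpenCV 4 return formats
    contours = cv2.findContours(opened, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)[-2]
    boxes = []
    for c in contours:
        perimeter = cv2.arcLength(c, True)               # pixel perimeter of the component
        if PERIMETER_MIN <= perimeter <= PERIMETER_MAX:  # candidate target
            x, y, w, h = cv2.boundingRect(c)             # enclosing rectangle
            boxes.append((x, y, w, h))
            cv2.rectangle(display, (x, y), (x + w, y + h), 255, 1)
    return boxes
```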
CN201610248720.9A 2016-04-20 2016-04-20 Weak target imaging detection device Active CN105959514B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610248720.9A CN105959514B (en) 2016-04-20 2016-04-20 Weak target imaging detection device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610248720.9A CN105959514B (en) 2016-04-20 2016-04-20 Weak target imaging detection device

Publications (2)

Publication Number Publication Date
CN105959514A true CN105959514A (en) 2016-09-21
CN105959514B CN105959514B (en) 2018-09-21

Family

ID=56917746

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610248720.9A Active CN105959514B (en) Weak target imaging detection device

Country Status (1)

Country Link
CN (1) CN105959514B (en)

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106651802A (en) * 2016-12-24 2017-05-10 大连日佳电子有限公司 Machine vision tin soldering location detection method
CN106851071A (en) * 2017-03-27 2017-06-13 远形时空科技(北京)有限公司 Sensor and heat transfer agent processing method
CN108090490A (en) * 2016-11-21 2018-05-29 南京理工大学 A kind of Stealthy Target detecting system and method based on multispectral polarization imaging
CN108181624A (en) * 2017-12-12 2018-06-19 西安交通大学 A kind of Difference Calculation imaging device and method
CN108230316A (en) * 2018-01-08 2018-06-29 浙江大学 A kind of floating harmful influence detection method based on the processing of polarization differential enlarged drawing
CN108320303A (en) * 2017-12-19 2018-07-24 中国人民解放军战略支援部队航天工程大学 A kind of pinhole cameras detection method based on binocular detection
CN109064504A (en) * 2018-08-24 2018-12-21 深圳市商汤科技有限公司 Image processing method, device and computer storage medium
CN109308693A (en) * 2018-08-29 2019-02-05 北京航空航天大学 By the target detection and pose measurement list binocular vision system of a ptz camera building
CN109427044A (en) * 2017-08-25 2019-03-05 瑞昱半导体股份有限公司 Electronic device
CN109859178A (en) * 2019-01-18 2019-06-07 北京航空航天大学 A kind of infrared remote sensing image real-time target detection method based on FPGA
CN109900719A (en) * 2019-03-04 2019-06-18 华中科技大学 A kind of visible detection method of blade surface knife mark
CN109934112A (en) * 2019-02-14 2019-06-25 青岛小鸟看看科技有限公司 A kind of face alignment method and camera
CN110232694A (en) * 2019-06-12 2019-09-13 安徽建筑大学 A kind of infrared polarization thermal imagery threshold segmentation method
CN110832843A (en) * 2017-07-12 2020-02-21 索尼公司 Image forming apparatus, image forming method, and image forming system
CN111161140A (en) * 2018-11-08 2020-05-15 银河水滴科技(北京)有限公司 Method and device for correcting distorted image
CN111242152A (en) * 2018-11-29 2020-06-05 北京易讯理想科技有限公司 Image retrieval method based on target extraction
CN113418864A (en) * 2021-06-03 2021-09-21 奥比中光科技集团股份有限公司 Multispectral image sensor and manufacturing method thereof
CN113933246A (en) * 2021-09-27 2022-01-14 中国人民解放军陆军工程大学 Compact multiband full-polarization imaging device compatible with F bayonet lens
CN113945531A (en) * 2021-10-20 2022-01-18 福州大学 Double-channel imaging gas quantitative detection method
CN115880188A (en) * 2023-02-08 2023-03-31 长春理工大学 Polarization direction statistical image generation method, device and medium
CN117061854A (en) * 2023-10-11 2023-11-14 中国人民解放军战略支援部队航天工程大学 Super-structured surface polarization camera structure for three-dimensional perception of space target

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2294778A (en) * 1993-07-10 1996-05-08 Siemens Plc Improved spectrometer
US5572359A (en) * 1993-07-15 1996-11-05 Nikon Corporation Differential interference microscope apparatus and an observing method using the same apparatus
US7193214B1 (en) * 2005-04-08 2007-03-20 The United States Of America As Represented By The Secretary Of The Army Sensor having differential polarization and a network comprised of several such sensors
CN102297722A (en) * 2011-09-05 2011-12-28 西安交通大学 Double-channel differential polarizing interference imaging spectrometer
CN103604945A (en) * 2013-10-25 2014-02-26 河海大学 Three-channel CMOS synchronous polarization imaging system
CN104103073A (en) * 2014-07-14 2014-10-15 中国人民解放军国防科学技术大学 Infrared polarization image edge detection method
CN204203261U (en) * 2014-11-14 2015-03-11 南昌工程学院 A kind of three light-path CMOS polarization synchronous imaging devices

Cited By (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108090490A (en) * 2016-11-21 2018-05-29 南京理工大学 A kind of Stealthy Target detecting system and method based on multispectral polarization imaging
CN106651802B (en) * 2016-12-24 2019-10-18 大连日佳电子有限公司 Machine vision scolding tin position finding and detection method
CN106651802A (en) * 2016-12-24 2017-05-10 大连日佳电子有限公司 Machine vision tin soldering location detection method
CN106851071A (en) * 2017-03-27 2017-06-13 远形时空科技(北京)有限公司 Sensor and heat transfer agent processing method
CN110832843A (en) * 2017-07-12 2020-02-21 索尼公司 Image forming apparatus, image forming method, and image forming system
US11399144B2 (en) 2017-07-12 2022-07-26 Sony Group Corporation Imaging apparatus, image forming method, and imaging system
CN109427044A (en) * 2017-08-25 2019-03-05 瑞昱半导体股份有限公司 Electronic device
CN109427044B (en) * 2017-08-25 2022-02-25 瑞昱半导体股份有限公司 Electronic device
CN108181624A (en) * 2017-12-12 2018-06-19 西安交通大学 A kind of Difference Calculation imaging device and method
CN108181624B (en) * 2017-12-12 2020-03-17 西安交通大学 Difference calculation imaging device and method
CN108320303A (en) * 2017-12-19 2018-07-24 中国人民解放军战略支援部队航天工程大学 A kind of pinhole cameras detection method based on binocular detection
CN108230316B (en) * 2018-01-08 2020-06-05 浙江大学 Floating hazardous chemical substance detection method based on polarization differential amplification image processing
CN108230316A (en) * 2018-01-08 2018-06-29 浙江大学 A kind of floating harmful influence detection method based on the processing of polarization differential enlarged drawing
CN109064504A (en) * 2018-08-24 2018-12-21 深圳市商汤科技有限公司 Image processing method, device and computer storage medium
CN109308693A (en) * 2018-08-29 2019-02-05 北京航空航天大学 By the target detection and pose measurement list binocular vision system of a ptz camera building
CN111161140A (en) * 2018-11-08 2020-05-15 银河水滴科技(北京)有限公司 Method and device for correcting distorted image
CN111161140B (en) * 2018-11-08 2023-09-19 银河水滴科技(北京)有限公司 Distortion image correction method and device
CN111242152A (en) * 2018-11-29 2020-06-05 北京易讯理想科技有限公司 Image retrieval method based on target extraction
CN109859178A (en) * 2019-01-18 2019-06-07 北京航空航天大学 A kind of infrared remote sensing image real-time target detection method based on FPGA
CN109934112A (en) * 2019-02-14 2019-06-25 青岛小鸟看看科技有限公司 A kind of face alignment method and camera
CN109900719A (en) * 2019-03-04 2019-06-18 华中科技大学 A kind of visible detection method of blade surface knife mark
CN110232694A (en) * 2019-06-12 2019-09-13 安徽建筑大学 A kind of infrared polarization thermal imagery threshold segmentation method
CN110232694B (en) * 2019-06-12 2021-09-07 安徽建筑大学 Infrared polarization thermal image threshold segmentation method
CN113418864A (en) * 2021-06-03 2021-09-21 奥比中光科技集团股份有限公司 Multispectral image sensor and manufacturing method thereof
CN113933246A (en) * 2021-09-27 2022-01-14 中国人民解放军陆军工程大学 Compact multiband full-polarization imaging device compatible with F bayonet lens
CN113933246B (en) * 2021-09-27 2023-11-21 中国人民解放军陆军工程大学 Compact multiband full-polarization imaging device compatible with F-mount lens
CN113945531A (en) * 2021-10-20 2022-01-18 福州大学 Double-channel imaging gas quantitative detection method
CN113945531B (en) * 2021-10-20 2023-10-27 福州大学 Dual-channel imaging gas quantitative detection method
CN115880188A (en) * 2023-02-08 2023-03-31 长春理工大学 Polarization direction statistical image generation method, device and medium
CN115880188B (en) * 2023-02-08 2023-05-19 长春理工大学 Polarization direction statistical image generation method, device and medium
CN117061854A (en) * 2023-10-11 2023-11-14 中国人民解放军战略支援部队航天工程大学 Super-structured surface polarization camera structure for three-dimensional perception of space target

Also Published As

Publication number Publication date
CN105959514B (en) 2018-09-21

Similar Documents

Publication Publication Date Title
CN105959514A (en) Weak target imaging detection device and method
CA3157194C (en) Systems and methods for augmentation of sensor systems and imaging systems with polarization
WO2020259118A1 (en) Method and device for image processing, method and device for training object detection model
EP2375755B1 (en) Apparatus for detecting direction of image pickup device and moving body comprising same
US9373052B2 (en) Shape and photometric invariants recovery from polarisation images
CN104599258B (en) A kind of image split-joint method based on anisotropic character descriptor
US20080063294A1 (en) System and Method for High Performance Image Processing
CN101556696B (en) Depth map real-time acquisition algorithm based on array camera
EP3440627A1 (en) Image dehazing and restoration
CN106327452A (en) Fragmented remote sensing image synthesis method and device for cloudy and rainy region
EP2662833B1 (en) Light source data processing device, method and program
CN104717482A (en) Multi-spectral multi-depth-of-field array shooting method and shooting camera
CN110276831B (en) Method and device for constructing three-dimensional model, equipment and computer-readable storage medium
CN112285710A (en) Multi-source remote sensing reservoir water storage capacity estimation method and device
CN112924028A (en) Light field polarization imaging detection system for sea surface oil spill
CN114544006B (en) Low-altitude remote sensing image correction system and method based on ambient illumination condition
CN205681547U (en) A kind of multichannel polarization and infrared image capturing system
CN113223065B (en) Automatic matching method for SAR satellite image and optical image
US20230368356A1 (en) Image processing device for drone, drone image processing method, and drone image processing processor
WO2021056297A1 (en) Image processing method and device, unmanned aerial vehicle, system and storage medium
CN106530326B (en) Change detecting method based on image texture feature and DSM
US9800796B1 (en) Apparatus and method for low dynamic range and high dynamic range image alignment
CN105628622A (en) Polarization imaging system based on three cameras
CN109118460B (en) Method and system for synchronously processing light-splitting polarization spectrum information
Kamal et al. Resoluting multispectral image using image fusion and CNN model

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant