CN110472658A - Hierarchical fusion and extraction method for multi-source detection of moving targets - Google Patents
Hierarchical fusion and extraction method for multi-source detection of moving targets
- Publication number
- CN110472658A CN110472658A CN201910602605.0A CN201910602605A CN110472658A CN 110472658 A CN110472658 A CN 110472658A CN 201910602605 A CN201910602605 A CN 201910602605A CN 110472658 A CN110472658 A CN 110472658A
- Authority
- CN
- China
- Prior art keywords
- image
- target
- point
- tested
- value
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
- 238000000034 method Methods 0.000 title claims abstract description 80
- 230000004927 fusion Effects 0.000 title claims abstract description 37
- 238000001514 detection method Methods 0.000 title claims abstract description 24
- 238000002156 mixing Methods 0.000 claims abstract description 50
- 238000001228 spectrum Methods 0.000 claims abstract description 44
- 230000008447 perception Effects 0.000 claims abstract description 6
- 230000003313 weakening effect Effects 0.000 claims abstract description 4
- 230000002045 lasting effect Effects 0.000 claims abstract description 3
- 238000004422 calculation algorithm Methods 0.000 claims description 32
- 239000011159 matrix material Substances 0.000 claims description 26
- 230000006870 function Effects 0.000 claims description 21
- 238000004364 calculation method Methods 0.000 claims description 20
- 230000008569 process Effects 0.000 claims description 20
- 238000005096 rolling process Methods 0.000 claims description 17
- 230000011218 segmentation Effects 0.000 claims description 16
- 238000010606 normalization Methods 0.000 claims description 14
- 238000012545 processing Methods 0.000 claims description 14
- 238000001914 filtration Methods 0.000 claims description 13
- 230000009466 transformation Effects 0.000 claims description 12
- 238000004458 analytical method Methods 0.000 claims description 9
- 230000000877 morphologic effect Effects 0.000 claims description 9
- 238000003709 image segmentation Methods 0.000 claims description 8
- 238000005520 cutting process Methods 0.000 claims description 4
- 239000000203 mixture Substances 0.000 claims description 4
- 230000005855 radiation Effects 0.000 claims description 4
- 239000000654 additive Substances 0.000 claims description 3
- 230000000996 additive effect Effects 0.000 claims description 3
- 238000010276 construction Methods 0.000 claims description 2
- 238000011156 evaluation Methods 0.000 claims 1
- 230000008901 benefit Effects 0.000 description 7
- 230000008859 change Effects 0.000 description 7
- 230000003595 spectral effect Effects 0.000 description 7
- 238000010586 diagram Methods 0.000 description 4
- 230000010339 dilation Effects 0.000 description 4
- 238000005516 engineering process Methods 0.000 description 4
- 238000003384 imaging method Methods 0.000 description 4
- 230000000694 effects Effects 0.000 description 3
- 230000002708 enhancing effect Effects 0.000 description 3
- 230000003628 erosive effect Effects 0.000 description 3
- 230000033001 locomotion Effects 0.000 description 3
- 230000003287 optical effect Effects 0.000 description 3
- 239000000523 sample Substances 0.000 description 3
- 238000013519 translation Methods 0.000 description 3
- 230000007797 corrosion Effects 0.000 description 2
- 238000005260 corrosion Methods 0.000 description 2
- 238000013461 design Methods 0.000 description 2
- 238000013507 mapping Methods 0.000 description 2
- 238000005192 partition Methods 0.000 description 2
- 238000003909 pattern recognition Methods 0.000 description 2
- 238000011160 research Methods 0.000 description 2
- 238000003860 storage Methods 0.000 description 2
- 230000003321 amplification Effects 0.000 description 1
- 230000002238 attenuated effect Effects 0.000 description 1
- 238000005311 autocorrelation function Methods 0.000 description 1
- 230000009286 beneficial effect Effects 0.000 description 1
- 230000015556 catabolic process Effects 0.000 description 1
- 238000012512 characterization method Methods 0.000 description 1
- 238000000701 chemical imaging Methods 0.000 description 1
- 238000006243 chemical reaction Methods 0.000 description 1
- 230000008878 coupling Effects 0.000 description 1
- 238000010168 coupling process Methods 0.000 description 1
- 238000005859 coupling reaction Methods 0.000 description 1
- 238000011161 development Methods 0.000 description 1
- 230000018109 developmental process Effects 0.000 description 1
- 230000004069 differentiation Effects 0.000 description 1
- 238000009826 distribution Methods 0.000 description 1
- 238000000605 extraction Methods 0.000 description 1
- 239000000945 filler Substances 0.000 description 1
- 238000005286 illumination Methods 0.000 description 1
- 230000006872 improvement Effects 0.000 description 1
- 230000002401 inhibitory effect Effects 0.000 description 1
- 230000005764 inhibitory process Effects 0.000 description 1
- 230000010354 integration Effects 0.000 description 1
- 230000000873 masking effect Effects 0.000 description 1
- 239000000463 material Substances 0.000 description 1
- 238000002844 melting Methods 0.000 description 1
- 230000008018 melting Effects 0.000 description 1
- 238000003199 nucleic acid amplification method Methods 0.000 description 1
- 238000005457 optimization Methods 0.000 description 1
- 238000007500 overflow downdraw method Methods 0.000 description 1
- 238000003672 processing method Methods 0.000 description 1
- 230000002829 reductive effect Effects 0.000 description 1
- 230000008439 repair process Effects 0.000 description 1
- 238000005070 sampling Methods 0.000 description 1
- 238000012216 screening Methods 0.000 description 1
- 238000000638 solvent extraction Methods 0.000 description 1
- 238000004611 spectroscopical analysis Methods 0.000 description 1
- 238000012549 training Methods 0.000 description 1
- 230000032258 transport Effects 0.000 description 1
- 238000005303 weighing Methods 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/253—Fusion techniques of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/40—Image enhancement or restoration using histogram techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/136—Segmentation; Edge detection involving thresholding
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/155—Segmentation; Edge detection involving morphological operators
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10032—Satellite or aerial image; Remote sensing
- G06T2207/10036—Multispectral image; Hyperspectral image
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Data Mining & Analysis (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- General Engineering & Computer Science (AREA)
- Image Analysis (AREA)
Abstract
The invention belongs to the field of multi-source data hierarchical fusion and extraction based on multi-source sensors, and in particular relates to a hierarchical fusion and extraction method for multi-source detection of moving targets. The visible light image and the infrared image are first registered and fused to obtain a first-layer fused image. After the first-layer fused image is registered with the hyperspectral image, the registered image pixels are attenuated according to terrain classification regions to obtain a second-layer fused image. Target detection is then performed on the second-layer fused image to obtain the position of the target in the image; the target is perceived, its longitude and latitude in the real environment are obtained, and the attitude of the aircraft is adjusted to track the target, realizing continuous detection and perception of the target. The invention combines multiple image sources: image fusion effectively merges their signal characteristics and removes redundant duplicate data, increasing the accuracy of target detection and improving detection efficiency.
Description
Technical field
The invention belongs to the field of multi-source data hierarchical fusion and extraction based on multi-source sensors, and in particular relates to a hierarchical fusion and extraction method for multi-source detection of moving targets.
Background technique
With the progress and development of technology, the mass budget of payloads has increased markedly, which means more sensor elements can be carried. The computing and information storage capabilities of payloads have also improved significantly, so more complex calculations can be executed on board. The payload of a spacecraft often carries several detection devices, such as visible light sensors, infrared sensors, and hyperspectral sensors, which respectively produce visible light images, infrared images, and hyperspectral images.
Target detection is the prerequisite for target tracking in images, so establishing a fast, accurate, and effective target detection method is a key problem. Image recognition applies the methods and techniques of pattern recognition to the image domain. Pattern recognition processes and analyses the various forms of information that characterize things or phenomena, and uses a computer to describe, recognize, classify, and explain them. Applied to images, this realizes something akin to intelligent perception of the sensed scene. The main idea of image recognition is to establish a library of the features of known things: features collected from an unfamiliar image are compared with the information in the known feature library, and any case exceeding a certain similarity threshold is considered a detected and identified target.
Summary of the invention
The purpose of the present invention is to provide a hierarchical fusion and extraction method for multi-source detection of moving targets.
The purpose of the present invention is realized by the following technical solution, comprising the following steps:
Step 1: read the images input by the multi-source image sensors;
Step 2: register the visible light image and the infrared image, and fuse the registered images to obtain the first-layer fused image;
Step 3: register the first-layer fused image with the hyperspectral image, and attenuate the registered image pixels according to terrain classification regions to obtain the second-layer fused image;
Step 4: perform target detection on the second-layer fused image to obtain the position of the target in the image; perceive the target, obtain its longitude and latitude in the real environment, and adjust the attitude of the aircraft to track the target, realizing continuous detection and perception of the target.
The present invention may also include:
The image registration method in Step 2 and Step 3 is specifically:
Step 2.1: extract the image edge contour to obtain the edge contour image of the original image.
The contour is extracted from the image using the phase congruency algorithm, whose phase congruency function is:

PC(x) = max_{φ̄(x)∈[0,2π]} [ Σ_n A_n cos(φ_n(x) − φ̄(x)) ] / Σ_n A_n

where A_n is the amplitude on scale n and φ_n(x) is the phase of the n-th Fourier component at x; φ̄(x) denotes the value at which PC(x) attains its maximum at x, namely the weighted mean of the local phase angles of the Fourier components.
Step 2.2: establish feature corner points with scale, position, and direction information in the edge contour image. The specific method is:
Step 2.2.1: construct the nonlinear scale space so that the feature corners carry scale information.
Gaussian filtering is applied to the edge contour image to obtain the image grey-level histogram and the contrast factor k. After converting a group of evolution times, the additive operator splitting (AOS) algorithm is used to obtain all levels of the nonlinear filtered image:

L^{i+1} = (E − t_i · Σ_l A_l(L^i))^{−1} L^i

where A_l denotes the conductance matrix of image I in each dimension l; t_i is the evolution time, and each construction of the nonlinear scale space uses only one group of evolution times; E is the identity matrix.
Step 2.2.2: detect the feature corners to obtain corner position information.
A local window is moved point by point over the edge contour image in the nonlinear scale space, and an operation on the pixel values inside the window determines whether the centre point is a corner.
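The window operation is not specified further in the source; as a hedged sketch, a Harris-style corner response (an assumed concrete choice, not the patent's stated measure) scores each pixel from the gradient products summed over the sliding window:

```python
import numpy as np

def corner_response(img, k=0.04, win=3):
    """Harris-style corner response via a sliding local window.

    The source only says a window is moved point by point and the pixel
    values inside are examined; Harris's det - k*trace^2 measure is one
    common concrete choice and is assumed here."""
    img = img.astype(float)
    Iy, Ix = np.gradient(img)              # central-difference gradients
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy
    h, w = img.shape
    r = win // 2
    R = np.zeros_like(img)
    for y in range(r, h - r):
        for x in range(r, w - r):
            # Sum the gradient products inside the local window.
            sxx = Ixx[y - r:y + r + 1, x - r:x + r + 1].sum()
            syy = Iyy[y - r:y + r + 1, x - r:x + r + 1].sum()
            sxy = Ixy[y - r:y + r + 1, x - r:x + r + 1].sum()
            det = sxx * syy - sxy * sxy
            trace = sxx + syy
            R[y, x] = det - k * trace * trace   # corner strength
    return R
```

Pixels where the response is a large positive local maximum are taken as corner candidates; flat regions score near zero and edges score negative.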
Step 2.2.3: calculate the direction information of the feature corners.
Let the image coordinates of feature corner p(i) be (x(i), y(i)). Two points p(i−k) and p(i+k) are selected in its neighbourhood so that both are at distance k from p(i). Let T be the tangent at p(i); the principal direction of feature corner p(i) is the angle θ_feature between the tangent T and the positive x-axis, calculated as:

θ_feature = arctan[ (y(i+k) − y(i−k)) / (x(i+k) − x(i−k)) ]
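As an illustrative sketch (an assumption, not the patent's implementation), the principal direction can be computed from the chord through the two equidistant neighbours; `atan2` is used here so the quadrant is resolved:

```python
import math

def corner_direction(points, i, k):
    """Principal direction of corner p(i): the angle between the
    (approximated) tangent at p(i) and the positive x-axis, estimated
    from the chord through the contour neighbours p(i-k) and p(i+k)."""
    (x0, y0) = points[i - k]
    (x1, y1) = points[i + k]
    return math.atan2(y1 - y0, x1 - x0)
```

For contour points lying on the line y = x this yields an angle of 45°.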
Step 2.3: establish the shape description matrix.
Let the feature point set be P = {p1, p2, ..., pn}, pi ∈ R². Taking a feature point p(i) as the origin, a polar coordinate system is established in the r × r neighbourhood centred on p(i). The 360° range is divided into 12 equal sectors, and five concentric circles of successively increasing radius are drawn, giving 60 sub-regions. The number of feature points falling in each sub-region is counted to calculate the shape histogram hi of point pi; the shape histogram hi of each feature point is its shape context descriptor.
The shape histogram hi of each feature point is calculated as:

hi(k) = #{ q ≠ pi : (q − pi) ∈ bin(k) }

where # denotes counting the number of feature points in the k-th region (k = 1, 2, ..., 60).
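The 60-bin shape histogram can be sketched as follows; equal radial subdivisions of r are an assumption, since the source elides the exact circle radii:

```python
import numpy as np

def shape_context(points, i, r, n_sectors=12, n_rings=5):
    """60-bin shape histogram of points[i]: a polar grid of 12 angular
    sectors and 5 concentric rings centred on the point. Ring radii are
    equal subdivisions of r (an assumption; the source elides them)."""
    p = points[i]
    hist = np.zeros(n_sectors * n_rings, dtype=int)
    ring_edges = r * (np.arange(1, n_rings + 1) / n_rings)
    for j, q in enumerate(points):
        if j == i:
            continue
        d = np.hypot(q[0] - p[0], q[1] - p[1])
        if d == 0 or d > r:
            continue  # only other points inside the r-neighbourhood count
        ang = np.arctan2(q[1] - p[1], q[0] - p[0]) % (2 * np.pi)
        sector = min(int(ang / (2 * np.pi / n_sectors)), n_sectors - 1)
        ring = min(int(np.searchsorted(ring_edges, d)), n_rings - 1)
        hist[ring * n_sectors + sector] += 1
    return hist
```

Each of the 60 entries counts the neighbours falling in one (ring, sector) cell, matching hi(k) above.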
Step 2.4: match the feature corners of the two images to complete the image registration.
The nearest-neighbour and second-nearest-neighbour feature points are searched for using the Euclidean distance:

D = sqrt( Σ_{i=0}^{59} (a_i − b_i)² )

where a_i is the i-th element of the shape context descriptor R(a0, a1, ..., a59) of an arbitrary feature point of the reference image, and b_i is the i-th element of the shape context descriptor I(b0, b1, ..., b59) of an arbitrary feature point of the image to be registered.
Let p be any feature point in one image, and let i and j be its nearest-neighbour and second-nearest-neighbour feature points in the image to be registered, with Euclidean distances D_ip and D_jp from p respectively. The operation threshold is set to the ratio D_ip / D_jp; when this ratio is smaller than a certain value, p and i are considered a correctly matched pair, otherwise the match fails.
The method of fusing the registered visible light image and infrared image in Step 2 is specifically:
Step 3.1: perform region segmentation on the registered infrared image to separate the suspicious regions and the background region of the infrared image; a suspicious region is a bright highlight region with strong infrared radiation.
Step 3.2: apply the dual-tree complex wavelet transform to the registered infrared image and to the visible light image respectively, obtaining the low-frequency and high-frequency information of each image; the basic information of an image corresponds to the low-frequency part of the wavelet transform result, and the detail information corresponds to the high-frequency part.
Step 3.3: combine the segmentation result with the wavelet transform result to obtain the low-frequency fused image and the high-frequency fused image respectively.
Step 3.4: apply the inverse dual-tree complex wavelet transform to the low-frequency and high-frequency fused images to obtain the first-layer fused image.
The method of performing target detection on the second-layer fused image in Step 4 to obtain the position information of the target in the image is specifically:
Step 4.1: filter the second-layer fused image.
A window matrix is established and scanned pixel by pixel over the two-dimensional image, and the value at the centre of the window matrix is replaced by the average of the values of the points inside the window:

g(x, y) = (1/M) Σ_{(s,t)∈S} f(s, t)

where f(x, y) is the second-layer fused image to be processed; g(x, y) is the second-layer fused image after filtering; S is the set of neighbourhood coordinate points centred on point (x, y); and M is the number of coordinates in the set.
Step 4.2: process the filtered second-layer fused image with the moving-average image thresholding method to obtain a binary map.
Let z_{k+1} denote the grey level of the point encountered at step k + 1 of the scanning sequence; the moving average at the new point is:

m(k+1) = m(k) + (1/n)(z_{k+1} − z_{k+1−n})

where n is the number of points used when calculating the average grey level, with initial value m(1) = z_1 / n.
The moving average is calculated at every point of the image, so the segmentation is executed with:

g(x, y) = 1 if f(x, y) > K · m_xy, and 0 otherwise

where K is a constant in the range [0, 1] and m_xy is the moving average of the input image at (x, y).
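The moving-average update and threshold test can be sketched as follows; the zigzag (boustrophedon) scan order is an assumption, since the source only says "scanning sequence", and n and K are example values:

```python
import numpy as np

def moving_average_threshold(img, n=20, K=0.5):
    """Moving-average thresholding: scan pixels in a zigzag order, keep a
    running average m of the last n grey levels, and mark a pixel 1 when
    it exceeds K times the local moving average."""
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint8)
    m = 0.0
    history = []
    for y in range(h):
        # Alternate scan direction row by row (assumed zigzag order).
        xs = range(w) if y % 2 == 0 else range(w - 1, -1, -1)
        for x in xs:
            z = float(img[y, x])
            history.append(z)
            if len(history) > n:
                # Incremental update: add new sample, drop the oldest.
                m += (z - history.pop(0)) / n
            else:
                m = sum(history) / n       # start-up: m(k) = (z_1+..+z_k)/n
            out[y, x] = 1 if z > K * m else 0
    return out
```

Because the threshold follows the local average, the method adapts to slow illumination changes across the image.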
Step 4.3: delete regions of the binary image whose area is smaller than the target, removing the interference of irrelevant information.
Step 4.4: process the cleaned binary image using image morphology.
Step 4.5: establish the cutting function and cut the targets out of the full image after morphological processing to obtain the target images to be tested.
In the image I after morphological processing, the background is black with value 0 and the target parts to be tested are white with value 1. Starting the search from coordinate (0, 0), the first point whose pixel value is 1 is found; starting from this point, all connected points with value 1 are found and named the set T1. Among the coordinates of the points in T1, the maximum and minimum abscissae x_{1max} and x_{1min} and the maximum and minimum ordinates y_{1max} and y_{1min} are found; the cut target image to be tested is then the sub-image of I bounded by [x_{1min}, x_{1max}] × [y_{1min}, y_{1max}]. Proceeding in the same way, all targets to be tested are found, yielding all the target images to be tested.
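The connected-set search and bounding-box cut can be sketched with a BFS flood fill (4-connectivity is an assumption; the source does not state the connectivity):

```python
from collections import deque
import numpy as np

def cut_targets(binary):
    """Scan a 0/1 image, flood-fill each white component from its first
    encountered pixel, and return the cropped bounding-box sub-image of
    every target, in scan order."""
    img = binary.copy()          # working copy; found pixels are cleared
    h, w = img.shape
    crops = []
    for y in range(h):
        for x in range(w):
            if img[y, x] == 1:
                comp, q = [], deque([(y, x)])   # BFS collecting the set T
                img[y, x] = 0
                while q:
                    cy, cx = q.popleft()
                    comp.append((cy, cx))
                    for ny, nx in ((cy-1, cx), (cy+1, cx),
                                   (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and img[ny, nx] == 1:
                            img[ny, nx] = 0
                            q.append((ny, nx))
                ys = [c[0] for c in comp]
                xs = [c[1] for c in comp]
                crops.append(binary[min(ys):max(ys)+1, min(xs):max(xs)+1])
    return crops
```

Each crop is exactly the [x_min, x_max] × [y_min, y_max] sub-image described above.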
Step 4.6: find the main symmetry axis of each target image to be tested using principal component analysis, and obtain the angle θ_test between the main symmetry axis of the target image and the x-axis.
The coordinates of each point in a target image to be tested are two-dimensional; these points form an n_test-row, 2-column matrix X_test, where n_test is the number of points in the target image. The covariance matrix C_test of X_test is calculated, and then the eigenvector V_test = (x_v, y_v) of C_test is computed; the angle between the main symmetry axis of the target image and the x-axis is then:

θ_test = arctan(y_v / x_v)
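This PCA step can be sketched directly with NumPy: the covariance of the n×2 point matrix is taken and the angle follows from the leading eigenvector (`atan2` is used for quadrant safety, a minor deviation from the plain arctan above):

```python
import numpy as np

def main_axis_angle(points):
    """Angle between the main symmetry axis of a 2-D point set and the
    x-axis, from the leading eigenvector of the 2x2 covariance matrix."""
    X = np.asarray(points, dtype=float)     # n x 2 matrix of coordinates
    C = np.cov(X, rowvar=False)             # 2 x 2 covariance matrix
    vals, vecs = np.linalg.eigh(C)          # eigenvalues in ascending order
    xv, yv = vecs[:, np.argmax(vals)]       # eigenvector of largest value
    return float(np.arctan2(yv, xv))
```

Note the axis is only defined modulo 180°, since an eigenvector and its negation describe the same axis.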
Step 4.7: perform image direction normalization by rotating the target image to be tested through the angle θ_test and removing the newly produced black borders.
Step 4.8: perform image size normalization, changing the size of the direction-normalized target image to the template size.
Step 4.9: match the direction- and size-normalized target image one by one against the images in the template library, with a similarity threshold T; when the degree of similarity exceeds this threshold, the image is recognized as the target.
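The similarity measure of step 4.9 is not named in the source; normalised cross-correlation is assumed in this sketch:

```python
import numpy as np

def similarity(a, b):
    """Normalised cross-correlation between a size-normalised candidate
    and a template; 1.0 means identical up to brightness offset/scale."""
    a = a.astype(float).ravel() - a.mean()
    b = b.astype(float).ravel() - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def recognise(candidate, templates, T=0.9):
    """Indices of templates whose similarity exceeds threshold T
    (the matching rule of step 4.9; NCC and T=0.9 are assumptions)."""
    return [i for i, t in enumerate(templates)
            if similarity(candidate, t) > T]
```

Because both signals are mean-centred and normalised, the score is insensitive to uniform brightness differences between candidate and template.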
The method of obtaining the low-frequency fused image in Step 3.3 is as follows: according to the location information of the suspicious regions and the background region from the infrared image segmentation, the visible light image is split at the same locations. For the suspicious regions of the low-frequency part, a selection rule determines the l-layer low-frequency fusion coefficient C_F^l from the l-layer infrared low-frequency coefficient C_ir^l and the l-layer visible light low-frequency coefficient C_vis^l.
For the background region of the low-frequency parts of the infrared and visible light images, the local deviation method is used. The larger the local deviation, the greater the grey-value variation of the corresponding pixels in the region, the higher the relative contrast, and the more information the region carries. Pixels with large local deviation are therefore given larger weight in the fusion:

C_F^l = ω_ir · C_ir^l + ω_vis · C_vis^l
where ω_ir is the infrared image weight and ω_vis is the visible light image weight. The visible light image weight ω_vis is calculated from the local deviations of the two images and the correlation coefficient, and

ω_ir = 1 − ω_vis

where σ_vis and σ_ir are the local deviations of the visible light image and the infrared image respectively, and r is the correlation coefficient region. The local deviation of an image region of size M × N is calculated as

σ = sqrt( (1/(M·N)) Σ_{i=1}^{M} Σ_{j=1}^{N} (I(i, j) − Ī)² )

applied to the visible light and infrared images respectively. The correlation coefficient r is calculated as

r = Σ_i Σ_j (I_ir(i, j) − Ī_ir)(I_vis(i, j) − Ī_vis) / sqrt( Σ_i Σ_j (I_ir(i, j) − Ī_ir)² · Σ_i Σ_j (I_vis(i, j) − Ī_vis)² )

where the image size is M × N, Ī_vis denotes the average grey value of the visible light image, Ī_ir denotes the average grey value of the infrared image, I_ir(i, j) represents the infrared image, and I_vis(i, j) represents the visible light image.
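A sketch of the background-region weighting under a stated assumption: the source gives ω_ir = 1 − ω_vis but elides the ω_vis formula, so ω_vis = σ_vis / (σ_vis + σ_ir) is assumed here (it satisfies the stated relation and favours the image with the larger local deviation):

```python
import numpy as np

def local_weights(vis, ir, y, x, r=4):
    """Fusion weights at (y, x) from the local deviations of the two
    images. ASSUMPTION: w_vis = s_vis / (s_vis + s_ir); the source only
    states w_ir = 1 - w_vis and that larger deviation gets more weight."""
    win_v = vis[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]
    win_i = ir[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]
    s_vis, s_ir = win_v.std(), win_i.std()      # local deviations
    if s_vis + s_ir == 0:
        return 0.5, 0.5                         # flat in both: equal split
    w_vis = s_vis / (s_vis + s_ir)
    return w_vis, 1.0 - w_vis

def fuse_background(vis, ir, r=4):
    """Weighted low-frequency fusion of the background region: pixels
    with larger local deviation (more detail) receive larger weight."""
    out = np.zeros(vis.shape, dtype=float)
    for y in range(vis.shape[0]):
        for x in range(vis.shape[1]):
            wv, wi = local_weights(vis, ir, y, x, r)
            out[y, x] = wv * vis[y, x] + wi * ir[y, x]
    return out
```

Where one image is locally flat, its weight drops to zero and the fused pixel comes entirely from the more detailed source.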
The beneficial effects of the present invention are:
The present invention uses visible light, infrared, and hyperspectral images simultaneously, combining the high resolution of visible light images, the high target contrast of infrared images, and the ability of hyperspectral images to distinguish artificial objects from natural ones. Target detection and localization are accurate, and the influence of atmospheric activity on target detection is effectively reduced. By combining multiple image sources, image fusion effectively merges their signal characteristics and removes redundant duplicate information, effectively increasing the accuracy of target detection and improving detection efficiency. The invention can also realize tracking of the target and prediction of position information such as heading, speed, and longitude and latitude.
Description of the drawings
Fig. 1 is the overall flow diagram of the invention.
Fig. 2 is the registration and fusion flow diagram of the infrared and visible light images of the invention.
Fig. 3 is the fusion flow diagram of the first-layer fused image of the invention with the hyperspectral image.
Fig. 4 is the target detection and perception flow diagram of the invention.
Specific embodiment
The present invention is described further with reference to the accompanying drawing.
The invention belongs to the field of multi-source data hierarchical fusion and extraction based on multi-source sensors. The payload of a spacecraft often carries several detection devices, such as visible light sensors, infrared sensors, and hyperspectral sensors, which respectively produce visible light images, infrared images, and hyperspectral images. The invention specifically involves multi-source image fusion and technologies such as image recognition and target perception.
In the present invention the image data originate from multiple sensors, so the same target position often appears in multiple images. However, the images from each kind of sensor carry different information emphases, and similar background information across different images causes data redundancy. Therefore, before executing image processing techniques such as image segmentation, target detection, and target perception, the images are fused: the bulk information is integrated and screened to remove redundancy, leaving the valid data from the images of the different sources. Image fusion technology comprises image registration and image data fusion.
The purpose of the present invention is to use multi-source image sensors while accounting for image distortion caused by factors such as aircraft attitude maneuvers, spatial attitude disturbance, and platform vibration, and for image target masking caused by conditions such as atmospheric motion interference and severe sea states. By fully exploiting the differing sensitivities of different sensors to different signal characteristics, multi-layer image registration and fusion bring the image to the required clarity and target salience, and to a certain degree reveal occluded targets. On the basis of the fused image, a target detection algorithm is designed to achieve multi-region, multi-source target detection; combined with aircraft attitude and orbit information and attitude maneuvers, this achieves discovery, tracking, and position prediction of targets within a certain span of time and space.
The implementation of the purpose of the present invention:
Step 1: read the images input by the multi-source image sensors;
Step 2: register the visible light image and the infrared image, and fuse the registered images to obtain the first-layer fused image;
Step 3: register the first-layer fused image with the hyperspectral image, and attenuate the registered image pixels according to terrain classification regions to obtain the second-layer fused image;
Step 4: perform target detection on the second-layer fused image to obtain the position of the target in the image; perceive the target, obtain its longitude and latitude in the real environment, and adjust the attitude of the aircraft to track the target, realizing continuous detection and perception of the target.
1) First-layer fusion: an infrared and visible light image registration and fusion algorithm is established, exploiting the high resolution of visible light images (though they are vulnerable to atmospheric effects) and the high target contrast of infrared images (though their environmental resolution is low), so that in the fused image the target is prominent and the surroundings retain a certain amount of detail.
2) Second-layer fusion: a registration and fusion algorithm for the hyperspectral image and the first-layer fused image is established, achieving the purpose of highlighting the target and weakening the influence of irrelevant information.
3) Image recognition is performed on the second-layer fused image, establishing a target detection algorithm with a certain practical applicability and feasibility.
4) Target perception: on the basis of the target discovered in step 3), combining aircraft attitude and orbit information, Earth rotation, the sub-satellite point position vector, the optical axis direction vector, and similar information, the Earth longitude and latitude of the target are roughly located, and the target trajectory is predicted on the basis of the target track, realizing perception and discovery.
Through the above steps, the payload can achieve the ability to independently discover, identify, and track targets.
The advantages of the present invention are mainly threefold. First, the image sources on the aircraft are global, all-weather, wide-ranging, and highly timely. Second, the invention uses visible light, infrared, and hyperspectral images simultaneously, combining the high resolution of visible light images, the high target contrast of infrared images, and the ability of hyperspectral images to distinguish artificial objects from natural ones, substantially improving the target detection rate and accuracy. Third, the invention makes full use of the platform's advantages and can realize a certain degree of target tracking and position prediction (heading, speed, longitude and latitude).
First, because of problems such as aircraft platform vibration and focusing deviation, image registration must be considered before fusing the infrared and visible light images. Since the target contour is central in both the infrared and visible light images, a data matching and fusion method based on shape descriptors is chosen. Image registration in the invention uses a fast approximate nearest neighbour algorithm, considering that this algorithm is widely used for images of this kind and has high stability and universality. On this basis, to further enhance the robustness of the algorithm of the invention, RANSAC is used to screen the matched feature points, rejecting erroneous points and leaving the optimal set.
Considering the imaging characteristics of high-altitude images, where imaging is easily affected by atmospheric activity, and considering spacecraft orbit constraints, such as the imaging optical axis forming an angle with the plumb line and causing image distortion, the image region segmentation method of the present invention is based on image saliency, and the image fusion algorithm is likewise based on the segmentation result. Considering the imaging characteristics of infrared images, where a long-wave infrared image captures the long-wave radiation emitted by the object itself and the infrared radiation of the target region is usually higher than that of the environment, we use saliency enhancement as the image processing method: the infrared target region is enhanced and the background information weakened, which also suppresses noise to a certain extent. This is combined with the dual-tree complex wavelet algorithm, which can effectively separate the high-frequency and low-frequency information of the image; considering the imaging characteristics and image quality of the infrared image and the visible image, different fusion strategies are applied to the high-frequency and low-frequency parts. This completes the first-layer image fusion.
Second, hyperspectral images have an irreplaceable advantage in the analysis of ground objects. The present invention uses a frequency-spectrum similarity classification method to improve the accuracy of ground-object target detection. After distinguishing whether a ground object is natural or man-made, another layer of image fusion is performed, weakening interfering image information such as non-man-made objects and providing an effective basis and effective support for the final target discrimination. This completes the second-layer image fusion.
Finally, based on the final fused image, in which redundant information has been effectively weakened and the target is highlighted, the present invention includes an image recognition algorithm based on template matching that can continuously mark the target's position and track, predict the target's motion to a certain degree (e.g., longitude and latitude, heading, speed), guide aircraft attitude changes, and guarantee sustained autonomous identification and tracking of the target.
The heterologous sensor data used in the present invention come from a visible-light sensor and an infrared sensor. The characteristics of the images acquired by the two sensors are listed in Table 1 below. The payload is mounted on the aircraft platform, and when images are acquired, atmospheric motion, the reflection and scattering of ambient light, aircraft pose adjustment, platform disturbance and vibration, platform stability, optoelectronic platform assembly and installation, platform performance, and many other factors all contribute to differences in image position, orientation, scale, and shape. This method therefore aims to reduce the influence of external factors on the images as much as possible while maximally highlighting the advantages of the data acquired by the two sensors.
Table 1: characteristics of the images acquired by the visible-light sensor and the infrared sensor
The present invention includes an image registration process. First, the phase congruency algorithm (Phase Congruency) is used to extract contours from the image. The phase congruency function is defined as:

PC(x) = Σn An cos(φn(x) − φ̄(x)) / Σn An

where An is the amplitude on scale n, φn(x) is the phase of the n-th Fourier component at x, and φ̄(x) is the weighted average of the local phase angles of the Fourier components where PC(x) attains its maximum at x. The edge-contour image of the original image obtained in this way is used in the subsequent processing.
The present invention includes a feature-point construction method. The nonlinear scale space is built as follows: the input image is Gaussian-filtered, the image gray-level histogram is then computed, and the contrast factor k is obtained. After converting a set of evolution times, the additive operator splitting algorithm is used to obtain all levels of the nonlinearly filtered image:

L(i+1) = (E − (t(i+1) − t(i)) Σl Al(L(i)))⁻¹ L(i)

where Al is the conductance matrix of image I in dimension l, t(i) is the evolution time (only one set of evolution times is used each time the nonlinear scale space is constructed), and E is the identity matrix.
The corner detection method is as follows: a local window is moved over the image point by point, and the pixel values inside the window are operated on to determine whether the point is a corner. The gray-level change produced by translating the local window C by (u, v) is:

E(u, v) = Σ(x,y)∈C w(x, y)[I(x + u, y + v) − I(x, y)]²

where I(x, y) is the gray value of the image at point (x, y) and w(x, y) denotes a Gaussian weighting function. To find the (u, v) that makes E(u, v) as large as possible, a Taylor expansion of the above formula gives:

E(u, v) ≈ Σ(x,y)∈C w(x, y)[u·Ix + v·Iy]²

which can then be written in matrix form:

E(u, v) ≈ (u, v) M (u, v)ᵀ,  M = Σ(x,y)∈C w(x, y) [[Ix², IxIy], [IxIy, Iy²]]

where Ix and Iy are the gradient components of the image gray level in the x and y directions.
The local autocorrelation function E(u, v) can then be approximated by the elliptic function:

E(u, v) ≈ Au² + 2Cuv + Bv²

Points of equal value around the point form an elliptic curve, and points on the ellipse have the same degree of correlation with the center point. The eigenvalues λ1, λ2 of the second-order matrix M are the lengths of the ellipse's major and minor axes and characterize the rate of gray-level change along the two principal directions. When both eigenvalues λ1, λ2 are large and comparable, the point is a corner; when one eigenvalue is small and the other large, it is an edge; when both are very small, it is a flat region. To make corner detection scale-invariant, the corner detection algorithm is embedded into the nonlinear scale space described above, so that feature points carry both scale and location information; the corner response function is obtained as:

where σi,S is the scale factor and the remaining terms are the second-order differentials and partial derivatives of the gray-level change in the x and y directions. Points that satisfy the corner response function are corners.
Each corner is then given a suitable direction to make it a feature point. Let the coordinates of feature corner p(i) in the image be (x(i), y(i)). Two points p(i − k) and p(i + k) are selected in its neighborhood, both at distance k from point p(i). Let T be the tangent at point p(i); the principal direction of feature corner p(i) is the angle θFeature between the tangent T and the positive x-axis, computed as follows:

Characterizing corners with a principal direction in this way gives them rotational invariance, so they can be used well in the feature-corner matching problem between infrared and visible images; such corners are hereinafter referred to as feature points.
The present invention includes a shape-descriptor generation algorithm. Let the feature point set be P = {p1, p2, ..., pn}, pi ∈ R². Taking a feature point p(i) as origin, a polar coordinate system is established in the r × r neighborhood centered on point p(i); 360° is divided into 12 equal sectors, and five concentric circles of successively increasing radius are drawn, yielding 60 small regions. The number of feature points in each small region is counted and the shape histogram hi of point pi is computed; hi is defined as:

hi(k) = #{q ≠ pi : (q − pi) ∈ bin(k)}

where # denotes counting the number of feature points in the k-th region (k = 1, 2, ..., 60). The shape histogram of each feature point is that feature point's shape-context descriptor.
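The 12-sector, 5-ring histogram can be sketched directly from its definition. This is a minimal illustration with uniformly spaced rings out to an assumed maximum radius `r_max`; `shape_context` is an illustrative name.

```python
import numpy as np

def shape_context(points, i, r_max=1.0):
    """60-bin shape histogram of point i: 12 angular sectors (30 deg each)
    times 5 concentric rings out to radius r_max; bin index = sector*5 + ring."""
    points = np.asarray(points, dtype=float)
    p = points[i]
    h = np.zeros(60, dtype=int)
    for j, q in enumerate(points):
        if j == i:
            continue
        d = q - p
        rho = float(np.hypot(d[0], d[1]))
        if rho == 0.0 or rho > r_max:
            continue  # points outside the neighborhood are not counted
        ang = float(np.arctan2(d[1], d[0])) % (2.0 * np.pi)
        a = min(int(ang / (2.0 * np.pi / 12.0)), 11)   # angular sector
        rbin = min(int(rho / (r_max / 5.0)), 4)        # concentric ring
        h[a * 5 + rbin] += 1
    return h
```

Each feature point's 60-bin histogram is then compared against those of the other image during matching.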
The present invention includes a matching process for the feature point sets. The nearest and second-nearest neighbors are searched with the fast approximate nearest-neighbor algorithm: for each feature point, its nearest and second-nearest neighbors are found using the Euclidean distance, defined as:

D = sqrt(Σi (ai − bi)²)

where ai is the i-th element of the shape-context descriptor R(a0, a1, ..., a59) of an arbitrary feature point in the reference image, and bi is the i-th element of the shape-context descriptor I(b0, b1, ..., b59) of an arbitrary feature point in the image to be registered. The concrete steps of the algorithm are: let p be any feature point in the infrared image, and let i and j be its nearest and second-nearest feature points in the image to be registered; their Euclidean distances to feature point p are Dip and Djp respectively. The operation threshold Dip/Djp is formed; when this ratio is below a certain value, p and i are regarded as a correctly matched pair of feature points, otherwise the match fails.
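The nearest/second-nearest ratio test above can be sketched as follows (a brute-force stand-in for the fast approximate nearest-neighbor search; `ratio_match` and the 0.7 threshold are illustrative assumptions).

```python
import numpy as np

def ratio_match(desc_a, desc_b, thresh=0.7):
    """Accept a match only when D_nearest / D_second_nearest < thresh."""
    matches = []
    for i, a in enumerate(desc_a):
        d = np.linalg.norm(desc_b - a, axis=1)  # Euclidean distance to all
        order = np.argsort(d)
        j, j2 = order[0], order[1]
        if d[j2] > 0 and d[j] / d[j2] < thresh:
            matches.append((i, int(j)))
    return matches
```

A match whose nearest and second-nearest candidates are nearly equidistant is ambiguous and is rejected, which is exactly the purpose of the threshold on Dip/Djp.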
The RANSAC algorithm is selected to further enhance the robustness of the inventive algorithm: matched feature points are screened, erroneous points rejected, and the optimal set retained. The algorithm substitutes the location parameters of all best-matched feature points into the image-space projective transformation model and obtains the image projective transformation relation by the direct linear transformation algorithm; the registration parameters of the images are exactly the affine transformation relation between the infrared image and the visible image. This completes the registration process between the infrared and visible images.
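The hypothesize-and-verify loop at the heart of RANSAC can be sketched as follows. Note the simplification: the patent fits a projective model via direct linear transformation, whereas this sketch uses a pure-translation model with a single-pair hypothesis, purely to show how erroneous matches are rejected and the consensus set retained; `ransac_translation` and its parameters are illustrative.

```python
import numpy as np

def ransac_translation(src, dst, iters=100, tol=1.0, seed=0):
    """Hypothesize t = dst[i] - src[i] from one random correspondence, count
    points whose residual is below tol, and keep the largest consensus set."""
    rng = np.random.default_rng(seed)
    best = []
    for _ in range(iters):
        i = int(rng.integers(len(src)))
        t = dst[i] - src[i]                         # model from minimal sample
        err = np.linalg.norm(dst - (src + t), axis=1)
        inliers = np.nonzero(err < tol)[0]
        if len(inliers) > len(best):
            best = [int(k) for k in inliers]
    return sorted(best)
```

Gross mismatches never join the consensus set of a correct hypothesis, so they are screened out before the final transformation is estimated from the inliers.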
The present invention includes an image fusion process, summarized as follows. Region segmentation is first performed on the infrared image, separating the highlighted regions of the infrared image from the background region; the result of the infrared region segmentation is then used as the basis for applying the corresponding mapping to the visible image. The dual-tree complex wavelet transform is applied to the infrared image and the visible image separately, yielding the low-frequency and high-frequency information of each image: the basic information of the image corresponds to the low-frequency part of the wavelet transform result, and the detail information of the image corresponds to the high-frequency part. The segmentation result and the wavelet transform result are considered jointly. When processing the low-frequency information, the highlighted regions and the background region are fused with different strategies, according to the difference of the information they reflect, the mission requirements, and the actual conditions. When processing the high-frequency information, since the high-frequency information mainly reflects the detail features of the image, weights are not assigned uniformly to every region; instead, the fusion strategy assigns weights according to the richness of the detail information. The specific steps of the above process are as follows:
Step 3.1: perform region segmentation on the registered infrared image, separating the suspicious regions of the infrared image from the background region; a suspicious region is a bright, highlighted region with strong infrared radiation.

Step 3.2: apply the dual-tree complex wavelet transform to the registered infrared image and the visible image separately, obtaining the low-frequency and high-frequency information of the images; the basic information of the image corresponds to the low-frequency part of the wavelet transform result, and the detail information of the image corresponds to the high-frequency part.

Step 3.3: fuse the segmentation result with the wavelet transform result, obtaining the low-frequency fused image and the high-frequency fused image respectively.
To select the highlighted regions of the infrared image, the registered infrared image is first processed with saliency enhancement; after processing, the thermal target information of the infrared image is enhanced and the background information blurred, so that the contrast of the entire infrared image increases. The saliency-enhancement algorithm is mainly based on the histogram of the image. The saliency of pixel Ic in image I is defined as:

S(Ic) = Σi Dis(Ic, Ii)

where Dis(Ic, Ii) = ||Ic − Ii|| is the color distance of Ic, indicating the difference in color between Ic and Ii. The above formula can be rewritten as:

S(Ic) = Σj fj |ac − aj|

where ac is the gray value of pixel Ic, n is the total number of gray levels in the image, and fj is the probability of aj occurring in the image. The image saliency is computed in this way, yielding the saliency map Isal.
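The rewritten histogram form of the saliency can be sketched directly: each gray level's saliency is its frequency-weighted distance to all other gray levels, so all pixels sharing a level share one value. `histogram_saliency` is an illustrative name.

```python
import numpy as np

def histogram_saliency(img):
    """Histogram-contrast saliency: S(a_c) = sum_j f_j * |a_c - a_j|,
    computed once per gray level and broadcast back to the image."""
    img = np.asarray(img)
    levels, counts = np.unique(img, return_counts=True)
    freq = counts / img.size                       # f_j for each level a_j
    table = {int(a): float(np.sum(freq * np.abs(levels - a))) for a in levels}
    return np.vectorize(lambda v: table[int(v)])(img)
```

A rare bright pixel (e.g., a hot target against a uniform background) receives a much higher saliency value than the dominant background level, which is the intended enhancement.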
The present invention includes a region segmentation algorithm. Each pixel in the image is represented by a mixture of K Gaussian mixture models, where k ∈ {1, 2, ..., K}. One class of pixels in the image corresponds to the target Gaussian mixture model, the other to the background Gaussian mixture model. The image Gibbs energy function of the region segmentation algorithm is therefore:

E(α, k, θ, z) = U(α, k, θ, z) + V(α, z)

where z is the pixel value and α ∈ {0, 1}: when α is 1 the corresponding pixel belongs to the background, and when α is 0 the corresponding pixel belongs to the target. U is the region term and V the boundary term. The parameters of the region term U are:

θ = {π(α, k), μ(α, k), Σ(α, k), α = 0, 1, k = 1...K}

The region term distinguishes pixels in the target region from pixels in the background region; once the parameter θ has been determined by learning, the Gibbs region energy term is determined.
The boundary term V is computed as:

V(α, z) = γ Σ(m,n)∈C [αn ≠ αm] exp(−β ||zm − zn||²)

where γ is an empirical value obtained by training, C denotes the set of adjacent pixel pairs, and the function [αn ≠ αm] takes only the values 1 or 0: when αn ≠ αm, [αn ≠ αm] = 1; when αn = αm, [αn ≠ αm] = 0.

β = (2⟨||zm − zn||²⟩)⁻¹, where ⟨·⟩ denotes the mathematical expectation over the sample. β adapts the boundary term to the contrast of the image, whether high or low. The image is segmented with the max-flow/min-cut algorithm; after each segmentation, the Gaussian mixture model parameters are optimized, and the iteration repeats until the energy function is minimized, completing the image segmentation. When this image segmentation method is used in the present invention, the saliency map Isal described above serves as the initialization value of the segmentation algorithm: it calibrates the highlighted regions and background region of the image, the iterative segmentation proceeds from the calibrated regions, and the segmentation result is obtained.
The present invention includes an image fusion rule based on the dual-tree complex wavelet transform (DTCWT). The dual-tree complex wavelet function is defined as:

ψ(x) = ψh(x) + jψg(x)

where ψh(x) and ψg(x) are real wavelets. After the two-dimensional DTCWT, the image is decomposed into two low-frequency wavelet coefficients and high-frequency coefficients in six directions (±15°, ±45°, ±75°).
For the low-frequency part, the region segmentation method described above is first applied to the infrared image, segmenting it into the suspicious region and the remaining region and recording the location information; the visible image is then segmented according to the same location information (if the resolutions differ, the location information is normalized first and then used as coefficients).

According to the location information of the suspicious region and the background region obtained from the infrared image segmentation, the visible image is split at the same locations. For the suspicious region of the low-frequency parts of the infrared and visible images, the following rule is used:

where the three coefficients are, respectively, the layer-l low-frequency coefficients of the fused image, the infrared image, and the visible image.
For the background region of the low-frequency parts of the infrared and visible images, the local-variance method is used. The larger the local variance, the more each pixel's gray value varies within the region and the higher the relative pixel contrast, so the region can be considered to carry more information. Based on this analysis, pixels with large local variance are given greater weight in the image fusion; the rule is as follows:
where ωir is the infrared image weight and ωvis the visible image weight. The infrared image weight ωir and the visible image weight ωvis are computed as:

ωir = 1 − ωvis

where σvis and σir are the local variances of the visible image and the infrared image respectively, and r is the correlation-coefficient region. The local variance σvis of the visible image and the local variance σir of the infrared image are computed as:

The correlation coefficient r is computed as:

where the image size is M × N, Īvis denotes the average gray value of the visible image, Īir denotes the average gray value of the infrared image, Iir(i, j) represents the infrared image, and Ivis(i, j) represents the visible image.
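The variance-driven weighting can be sketched as follows. The patent's full rule also involves the correlation coefficient over region r, whose exact combination is not fully recoverable from the text; this sketch uses a simpler normalized-variance weight as an assumed stand-in, keeping the stated constraint ωir = 1 − ωvis. `local_variance_weights` is an illustrative name.

```python
import numpy as np

def local_variance_weights(ir, vis, r=1):
    """Per-pixel fusion weights from local variance over an r-neighborhood:
    the image that varies more locally gets the larger low-frequency weight,
    and the two weights sum to 1."""
    def local_var(a):
        a = np.asarray(a, dtype=float)
        v = np.zeros_like(a)
        h, w = a.shape
        for y in range(h):
            for x in range(w):
                win = a[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]
                v[y, x] = win.var()
        return v
    s_ir, s_vis = local_var(ir), local_var(vis)
    total = s_ir + s_vis
    # where both variances are zero (no information either way), split evenly
    w_vis = np.where(total > 0, s_vis / np.where(total > 0, total, 1.0), 0.5)
    return 1.0 - w_vis, w_vis
```

In a region where the visible image carries all the detail and the infrared image is flat, the visible weight approaches 1, as the rule above intends.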
For the high-frequency part, the image is divided by the image segmentation method above into n regions, denoted A = {a1, a2, ..., an}, each region corresponding to its own region weight. The present invention provides the following weighted high-frequency fusion rule:

where a value Cl,θ > 1 is set to amplify the high-frequency coefficients, the aim being to highlight the contrast of the detail parts of the image. Since doing so would equally amplify the noise in the image, a binary matrix Ml,θ is added: Ml,θ = 1 only where the coefficient exceeds the threshold and is not an isolated point, so that amplification acts only on contiguous patches of high-frequency coefficient pixels and noise is removed. A contraction function is applied with the aim of reducing the influence of noise on the high-frequency information. During actual fusion, the concave-convex variation of edges may distort the fusion result, so a unit vector is computed from the high-frequency coefficients of the DTCWT of the infrared and visible images and used to improve the original high-frequency coefficients; the high-frequency coefficients of the fused image are then rewritten as:

where the fusion rule f takes the image (infrared or visible) with the larger region weight as the reference, computes the mean of the high-frequency coefficients of that image's region ri, and uses this mean as the corresponding high-frequency coefficient of the fused image. Sl,θ is obtained from the highlighted-region segmentation result by first applying a dilation operation and then a two-dimensional mean filter, the aim being to guarantee that the detail information of each small region of the fused image is more significant. The region weight is:

where Hl,θ(x, y) is the high-frequency coefficient, l is the level, θ is the directional subband, and |riθ| is the size of region riθ.
The present invention includes a method of layer-wise hyperspectral ground-object feature extraction and fusion of the result into the main image, as shown in Figure 3.

First, hyperspectral remote sensing is generally understood to mean remote sensing with spectral resolution on the order of 10⁻²λ. Hyperspectral images have many bands, narrow spectral ranges, and continuous spectra: a single pixel in the image contains tens or even hundreds of bands, each spanning less than 10 nm. Remote-sensing information can therefore be analyzed in the spectral dimension: the reflectance-spectrum features of different objects are analyzed, calibrated, and stored in an information base, and the spectral data of a target are matched and identified against the information base, so that labels are attached to image objects and ground-object identification is realized.
Ground objects are distinguished from the frequency-domain perspective: the complete spectrum corresponding to each pixel of the hyperspectral image is treated as a sequence signal, and the test-area image is classified with the frequency-spectrum similarity classification method (FSSM). Because hyperspectral data are discrete, the discrete Fourier transform (DFT) can be used for the analysis: the DFT compresses the signal, suppresses noise and the Hughes phenomenon to a large extent, and yields the signal's frequency spectrum, making it possible to effectively extract the frequency spectra of the main peaks and troughs of different objects' spectral curves at different wavelengths and to retain the effective information on the spectral curve.
First, the one-dimensional discrete Fourier transform is used to transform the spectral signal to the frequency domain and obtain the frequency spectrum. The spectral sequence corresponding to each pixel of the HSI image is treated as a one-dimensional discrete signal f(n), and the DFT may be defined as:

F(k) = Σn f(n) e^(−j2πkn/N)

where

P(k) = R²(k) + I²(k)

Fphase = arctan(I(k)/R(k))

In these formulas |F(k)|, P(k), and Fphase are respectively the amplitude spectrum, the energy spectrum, and the phase spectrum of the pixel's spectral sequence; R(k) and I(k) are the real and imaginary parts of F(k); k is the index of the DFT; N is the length of the discrete sampled data; n is the discrete sample index, i.e., the band number of the corresponding hyperspectral data; and f(n) is the pixel's reflectance value at each band, i.e., the ground-spectrum reflectance value.
Next, the difference between the frequency spectra of the target spectrum and the reference spectrum is computed as a distance, which then measures the spectral similarity used for classification; the calculation formula is as follows:

where Ftar(i) and Fref(i) are the frequency spectra of the target and reference spectral curves respectively, and Ns is the number of low-order harmonics participating in the calculation. The reference spectrum can be a laboratory spectrum, a field-measured spectrum, or a pixel spectrum extracted from the image. When a field-measured spectrum is used as the reference spectrum, atmospheric correction of the remote-sensing image is required in order to eliminate the influence of the atmosphere on the spectrum.
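The per-pixel DFT features and a low-order spectral distance can be sketched as follows. The distance here is assumed to be a Euclidean distance over the first Ns amplitude-spectrum harmonics, since the exact formula is not reproduced in the text; `pixel_spectrum_features` and `spectral_distance` are illustrative names.

```python
import numpy as np

def pixel_spectrum_features(refl):
    """Amplitude |F(k)|, energy P(k) = R^2 + I^2, and phase of the DFT of one
    pixel's band-wise reflectance sequence f(n)."""
    F = np.fft.fft(np.asarray(refl, dtype=float))
    R, I = F.real, F.imag
    return np.abs(F), R ** 2 + I ** 2, np.arctan2(I, R)

def spectral_distance(f_tar, f_ref, n_s):
    """Distance between the low-order amplitude spectra of the target and the
    reference spectral curves, over the first n_s harmonics."""
    amp_t, _, _ = pixel_spectrum_features(f_tar)
    amp_r, _, _ = pixel_spectrum_features(f_ref)
    return float(np.linalg.norm(amp_t[:n_s] - amp_r[:n_s]))
```

Restricting the comparison to low-order harmonics keeps the dominant peak/trough structure of the spectral curve while discarding high-frequency noise, which is the stated motivation for working in the frequency domain.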
The regions segmented in the first-layer fusion are considered, with attention to keeping the method simple and effective, i.e., minimizing the algorithm's demands on hardware computing and storage resources. The materials of man-made targets are distinguished from natural materials such as water, rock, and vegetation; relative to the target, every natural object is simply background. Therefore only the natural-background part identified in the hyperspectral image needs to be handled: its influence in the first-layer image is weakened by a weighting method, reducing the contribution of non-target information to a minimum. The specific image registration method is the same as in the first-layer fusion.
The present invention includes a moving-target detection and tracking algorithm, as shown in Figure 4.

Target detection function:

The final image is first filtered, the purpose being to remove noise. The mean filtering process builds a window matrix and scans it pixel by pixel over the two-dimensional image; the value at the matrix center is replaced by the average of the values in the window matrix, which can be expressed as:

where f(x, y) is the second-layer fused image to be processed; g(x, y) is the second-layer fused image after filtering; S is the set of coordinate points in the neighborhood centered at point (x, y); and M is the number of coordinates in the set.
The image is then processed with the moving-average thresholding method, turning the grayscale image into a binary image. The basic idea is to compute a moving average along the scan lines of the image, executed line by line in a zigzag pattern, which reduces illumination bias. Let z(k+1) denote the point encountered at step k + 1 of the scanning sequence. The moving-average gray level at the new point is given by:

where nGray is the number of points used when computing the average gray level, with initial value m(1) = z1/nGray. The moving average is computed at every point of the image, and segmentation is then performed with:

where K is a constant in the range [0, 1] and mxy is the moving average of the input image at (x, y). Usually we take n to be 5 times the target width and K = 0.5. This threshold-selection method effectively avoids the influence of uneven shadow distribution on the binary image and helps extract the target image.
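The zigzag moving-average threshold can be sketched as follows (assuming the segmentation rule sets a pixel to 1 when it exceeds K times the running average of the last n gray values; `moving_average_threshold` is an illustrative name).

```python
import numpy as np

def moving_average_threshold(img, n, K=0.5):
    """Scan the image in a zigzag (boustrophedon) order, keep a running
    average m of the last n gray values, and output 1 where img > K * m."""
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint8)
    vals = []
    for y in range(h):
        # alternate scan direction each row to reduce illumination bias
        xs = range(w) if y % 2 == 0 else range(w - 1, -1, -1)
        for x in xs:
            vals.append(img[y, x])
            if len(vals) > n:
                vals.pop(0)
            m = sum(vals) / len(vals)
            out[y, x] = 1 if img[y, x] > K * m else 0
    return out
```

Because the average is local to the scan, a bright target still stands out even when the background illumination drifts slowly across the image.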
Considering the hardware computing capability and the allowed per-frame processing time, the image can be scaled appropriately to improve the computation speed. Only image reduction is considered, which does not involve magnification-distortion problems, so only simplicity needs to be considered when choosing the method; the present invention therefore selects nearest-neighbor interpolation.
Then the small regions unrelated to the target are deleted: in the binary image, regions whose area is significantly smaller than the target are removed, thereby deleting the interference of irrelevant information in the image.
The image is processed with mathematical morphology functions, so that the processed image object exhibits its most essential shape features. Two basic morphological operations are introduced first. Erosion is defined as:

A ⊖ S = {z | (S)z ⊆ A}

where A ⊖ S denotes the erosion of A by S. In detail, the structuring element S is moved in the image plane of A; if S is completely contained in A when the origin of S is translated to point z, then the set of all such points z is the erosion of A by S. Erosion shrinks the boundary of an object and can break thin connections in the image object.

Dilation is defined as:

A ⊕ S = {z | (Ŝ)z ∩ A ≠ ∅}

where A ⊕ S denotes the dilation of A by S. In detail, the structuring element S is moved over the whole image plane of A; if the reflection Ŝ of S about its own origin, translated to point z, has a common intersection with A, i.e., overlaps A in at least one pixel, then the set of all such points z is the dilation of A by S. Dilation expands the boundary of an object and can bridge gaps.

Opening is erosion of image A by structuring element S followed by dilation, written:

A ∘ S = (A ⊖ S) ⊕ S

Opening can eliminate small objects, separate objects at thin connections, and smooth the boundaries of large objects without changing their area.

Closing is dilation of image A by structuring element S followed by erosion, written:

A • S = (A ⊕ S) ⊖ S

Closing can fill small holes inside objects and connect adjacent objects, smoothing boundaries without obviously changing the object's area or shape.
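The four operations can be sketched on binary images as follows (a minimal illustration with a square structuring element; `erode`, `dilate`, `opening`, and `closing` are illustrative names).

```python
import numpy as np

def erode(a, s=3):
    """Binary erosion with an s x s square: keep a pixel only if the whole
    structuring element fits inside the foreground (border pixels drop out)."""
    r = s // 2
    h, w = a.shape
    out = np.zeros_like(a)
    for y in range(r, h - r):
        for x in range(r, w - r):
            out[y, x] = a[y - r:y + r + 1, x - r:x + r + 1].all()
    return out

def dilate(a, s=3):
    """Binary dilation: set a pixel if the element overlaps any foreground."""
    r = s // 2
    h, w = a.shape
    out = np.zeros_like(a)
    for y in range(h):
        for x in range(w):
            win = a[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]
            out[y, x] = win.any()
    return out

def opening(a, s=3):   # erosion then dilation: removes small objects
    return dilate(erode(a, s), s)

def closing(a, s=3):   # dilation then erosion: fills small holes
    return erode(dilate(a, s), s)
```

Opening removes an isolated speck while preserving a solid block; closing fills a one-pixel hole inside a block, matching the descriptions above.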
The present invention can select the corresponding morphological algorithm according to the actual situation, so as to finally find the suspected target.
A cropping function is established to cut the target out of the full image and obtain the target image to be tested. The method is as follows. According to the preceding processing, the background part of image I is black with value 0, and the target part to be tested is white with value 1. Starting from image coordinate (0, 0), the first point whose coordinate pixel value is 1 is found; starting from this point, all points with pixel value 1 connected to it are found, and all such points are named set T1. Among the coordinates of the points in T1, the maximum value x1max and minimum value x1min of the abscissa and the maximum value y1max and minimum value y1min of the ordinate are found; the cropped target image to be tested is then the sub-image over x1min < x < x1max, y1min < y < y1max. Proceeding in the same way, all targets to be tested are found, yielding all target images to be tested.
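The connected-component cropping above can be sketched as a flood fill plus a bounding box (4-connectivity assumed; `crop_targets` is an illustrative name).

```python
import numpy as np
from collections import deque

def crop_targets(binimg):
    """Flood-fill each white (1) connected component in scan order and return
    the bounding-box crop of each component."""
    h, w = binimg.shape
    seen = np.zeros((h, w), dtype=bool)
    crops = []
    for y in range(h):
        for x in range(w):
            if binimg[y, x] == 1 and not seen[y, x]:
                q = deque([(y, x)]); seen[y, x] = True
                pts = []
                while q:
                    cy, cx = q.popleft()
                    pts.append((cy, cx))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = cy + dy, cx + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and binimg[ny, nx] == 1 and not seen[ny, nx]):
                            seen[ny, nx] = True
                            q.append((ny, nx))
                ys = [p[0] for p in pts]; xs = [p[1] for p in pts]
                crops.append(binimg[min(ys):max(ys) + 1, min(xs):max(xs) + 1])
    return crops
```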
Considering that the man-made targets we identify usually have symmetry, principal component analysis is used to find the main symmetry axis of the image to be tested and its angle θ with the x-axis. Principal component analysis finds the direction of maximum spread in N-dimensional data. The coordinates of each point in the target image to be tested are two-dimensional; these points form an nTest-row, 2-column matrix XTest, where nTest is the number of points in the target image information to be tested. The covariance matrix CTest of XTest is computed, followed by the eigenvector VTest = (xv, yv) of the covariance matrix CTest; the angle θTest between the main symmetry axis of the target image to be tested and the x-axis is then:

Image direction normalization is then performed: the image is rotated by the angle θTest, and the newly produced black border is removed by cropping again.
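The covariance-eigenvector computation of the main axis angle can be sketched as follows (`principal_axis_angle` is an illustrative name; the eigenvector's sign is ambiguous, so the angle is meaningful modulo π).

```python
import numpy as np

def principal_axis_angle(binimg):
    """Angle between the main axis of the white (1) pixels and the x-axis,
    from the eigenvector of the largest eigenvalue of their covariance."""
    ys, xs = np.nonzero(binimg)
    pts = np.stack([xs, ys], axis=1).astype(float)   # (x, y) per white pixel
    pts -= pts.mean(axis=0)
    cov = np.cov(pts.T)
    vals, vecs = np.linalg.eigh(cov)
    v = vecs[:, int(np.argmax(vals))]                # dominant direction
    return float(np.arctan2(v[1], v[0]))
```

Rotating the image by the negative of this angle aligns the main symmetry axis with the x-axis, normalizing orientation before template matching.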
Image size normalization is then performed: the image size is changed to the template size. A template library is established, and the templates in the library are of size M × N.
A target discrimination function is established:

The processed image to be tested is matched one by one against the images in the template library, and a similarity threshold T is set. When the degree of similarity exceeds this threshold, the image is recognized as a target. The specific matching steps are as follows:

1) Set the similarity threshold T.

2) Compute H as follows. Given the image to be tested A and a template image B, judge whether the test-image pixel value A(x, y) equals the template-image pixel value B(x, y). If they are equal, H = H + 1; if not, move on to the next point, where x ∈ (0, M), y ∈ (0, N). This yields the final value of H.

3) If H > T, a target is judged to have been found; if H < T, the next template is substituted and step 2) repeated; when all templates have been tried, it is judged that no target has been found.
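The pixel-agreement count H and the threshold decision can be sketched as follows (`match_score` and `is_target` are illustrative names; images are assumed already normalized to the template size M × N).

```python
import numpy as np

def match_score(test_img, template):
    """H: number of pixel positions where the two binary images agree."""
    return int(np.sum(np.asarray(test_img) == np.asarray(template)))

def is_target(test_img, templates, T):
    """Recognize a target when some template agrees on more than T pixels."""
    return any(match_score(test_img, t) > T for t in templates)
```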
Target perception:
The target detection function yields the target position, i.e. its coordinates in the image.
(1) From the payload position, the longitude and latitude of the target are determined.
(2) From the position coordinates of the target in the image, the attitude adjustment angle is determined, keeping the target in the central region of the image.
These two steps together achieve the purpose of target perception.
The above is only a preferred embodiment of the present invention and is not intended to limit the invention; those skilled in the art may make various modifications and variations to the invention. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.
Claims (7)
1. A hierarchical fusion and extraction method for multi-source detection of a moving target, characterized by comprising the following steps:
Step 1: reading the images input by the multi-source image sensors;
Step 2: registering the visible-light image with the infrared image, and fusing the registered images to obtain a first-layer fused image;
Step 3: registering the first-layer fused image with the hyperspectral image, and applying weakening processing to the pixels of the registered image according to the terrain-classification regions to obtain a second-layer fused image;
Step 4: performing target detection on the second-layer fused image to obtain the position of the target in the image, performing target perception to obtain the longitude and latitude of the target in the real environment, and adjusting the attitude of the aircraft to track the target, thereby achieving persistent detection and perception of the target.
2. The hierarchical fusion and extraction method for multi-source detection of a moving target according to claim 1, characterized in that the image registration method in step 2 and step 3 is specifically:
Step 2.1: extracting the image edge contour to obtain the edge contour image of the original image;
the contour is extracted from the image with the phase congruency algorithm, whose phase congruency function is:
PC(x) = Σ_n A_n cos(φ_n(x) − φ̄(x)) / Σ_n A_n
where A_n is the amplitude on scale n; φ_n(x) is the phase of the n-th Fourier component at x; and φ̄(x) is the amplitude-weighted mean of the local phase angles of the Fourier components, the value at which PC(x) attains its maximum at x;
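The phase congruency measure at one location can be sketched directly from its definition: components with aligned phases give PC near 1, scattered phases give a low value. An illustrative sketch with made-up amplitudes and phases, not the patent's 2-D implementation.

```python
import numpy as np

def phase_congruency(amplitudes, phases):
    """PC = sum_n A_n * cos(phi_n - phi_bar) / sum_n A_n at a single location x."""
    A = np.asarray(amplitudes, dtype=float)
    phi = np.asarray(phases, dtype=float)
    # amplitude-weighted mean phase angle phi_bar, via the mean phasor
    phi_bar = np.arctan2(np.sum(A * np.sin(phi)), np.sum(A * np.cos(phi)))
    return float(np.sum(A * np.cos(phi - phi_bar)) / (np.sum(A) + 1e-12))

# all components in phase -> congruency ~1; opposed phases -> congruency ~0
pc_aligned = phase_congruency([1.0, 2.0, 3.0], [0.5, 0.5, 0.5])
pc_mixed = phase_congruency([1.0, 1.0], [0.0, np.pi])
```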
Step 2.2: establishing feature corner points carrying scale, position and orientation information in the edge contour image, specifically:
Step 2.2.1: constructing a nonlinear scale space so that the feature corner points carry scale information;
Gaussian filtering is applied to the edge contour image to obtain the image grey-level histogram and the contrast factor k; after conversion to a set of calculation times, all information layers of the nonlinearly filtered image are obtained with the additive operator splitting (AOS) algorithm:
L^(i+1) = (1/m) Σ_{l=1}^{m} (E − m·τ_i·A_l(L^i))^(−1) L^i
where A_l is the conductance matrix of image I in dimension l at the different scales; t_i is defined as the calculation time (with step τ_i = t_{i+1} − t_i), and each group of calculation times builds one nonlinear scale space; E is the identity matrix;
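The linear solve at the heart of such semi-implicit diffusion schemes can be sketched in one dimension: a step solves (E − τA)·L_new = L, where A is the conductance-weighted Laplacian. A simplified 1-D sketch, not the patent's full 2-D AOS scheme; the conductance values and step size are illustrative.

```python
import numpy as np

def semi_implicit_step(L, conductance, tau):
    """One semi-implicit diffusion step: solve (E - tau*A) L_new = L, where A is
    the 1-D conductance-weighted Laplacian (zero row sums, so mass is conserved)."""
    n = len(L)
    A = np.zeros((n, n))
    for i in range(n):
        for j in (i - 1, i + 1):
            if 0 <= j < n:
                g = 0.5 * (conductance[i] + conductance[j])  # edge conductance
                A[i, j] += g
                A[i, i] -= g
    E = np.eye(n)
    return np.linalg.solve(E - tau * A, L)

signal = np.array([0.0, 0.0, 10.0, 0.0, 0.0])        # a single sharp peak
smoothed = semi_implicit_step(signal, conductance=np.ones(5), tau=1.0)
```

Unlike an explicit scheme, this step is stable for any τ, which is why AOS can take large time steps.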
Step 2.2.2: detecting feature corner points to obtain their position information;
a local window is moved point by point over the edge contour image in the nonlinear scale space, and the pixel values inside the window are operated on to decide whether the point is a corner;
Step 2.2.3: calculating the orientation information of the feature corner points;
let the image coordinates of feature corner point p(i) be (x(i), y(i)); two points p(i−k) and p(i+k) are selected in its neighbourhood so that both are at distance k from p(i); T is the tangent line at p(i), and the principal direction of feature corner point p(i) is the angle θ_feature between the tangent line T and the positive x-axis, calculated as:
θ_feature = arctan( (y(i+k) − y(i−k)) / (x(i+k) − x(i−k)) )
Step 2.3: establishing the shape description matrix;
let the feature point set be P = {p_1, p_2, ... p_n}, p_i ∈ R². With a certain feature point p_i as origin, a polar coordinate system is established in the r × r neighbourhood centred on p_i; 360° is divided into 12 equal sectors, and five concentric circles are then drawn at successively increasing radii, giving 60 small regions. The number of feature points falling in each small region is counted to compute the shape histogram h_i of point p_i; the shape histogram h_i of each feature point is exactly its shape context descriptor;
the shape histogram h_i of each feature point is computed as:
h_i(k) = #{ q ≠ p_i : (q − p_i) ∈ bin(k) }
where # denotes counting the number of feature points falling in the k-th region (k = 1, 2, ... 60);
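The 60-bin (12 angular × 5 radial) histogram can be sketched as follows. The patent does not reproduce the exact ring radii, so the log-spaced edges below are an assumption; point data are illustrative.

```python
import numpy as np

def shape_context(points, i, r):
    """60-bin (12 angle x 5 radius) histogram of the other points around points[i].
    Ring edges are assumed log-spaced; the patent's exact radii are not given here."""
    p = np.asarray(points, dtype=float)
    d = np.delete(p - p[i], i, axis=0)               # offsets q - p_i, q != p_i
    dist = np.hypot(d[:, 0], d[:, 1])
    ang = np.arctan2(d[:, 1], d[:, 0]) % (2 * np.pi)
    inside = dist <= r                               # only points in the neighbourhood
    # angular bin 0..11 (epsilon guards float values sitting exactly on a bin edge)
    a_bin = np.minimum(np.floor(ang[inside] * 12 / (2 * np.pi) + 1e-9).astype(int), 11)
    edges = r * np.array([1 / 16, 1 / 8, 1 / 4, 1 / 2, 1.0])   # assumed 5 rings
    r_bin = np.minimum(np.searchsorted(edges, dist[inside]), 4)
    h = np.zeros(60, dtype=int)
    for a, rr in zip(a_bin, r_bin):
        h[rr * 12 + a] += 1
    return h

pts = [(0, 0), (1, 0), (0, 1), (-1, 0), (0, -1)]     # 4 neighbours at distance 1
h = shape_context(pts, 0, r=2.0)
```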
Step 2.4: matching the feature corner points of the two images to complete image registration;
the nearest-neighbour and second-nearest-neighbour feature points are searched using the Euclidean distance:
d(a, b) = sqrt( Σ_{i=0}^{59} (a_i − b_i)² )
where a_i is the i-th element of the shape context descriptor R(a_0, a_1, ... a_59) of an arbitrary feature point of the reference image, and b_i is the i-th element of the shape context descriptor I(b_0, b_1, ... b_59) of an arbitrary feature point of the image to be registered;
if p is any feature point in one image, and i, j are its nearest-neighbour and second-nearest-neighbour feature points in the image to be registered, then their Euclidean distances to feature point p are D_ip and D_jp respectively; the ratio D_ip / D_jp is taken as the decision quantity: when this ratio is below a given threshold, p and i are considered a correctly matched pair of feature points; otherwise the match fails.
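The nearest/second-nearest ratio test of step 2.4 can be sketched as follows; the descriptor values and the 0.8 threshold are illustrative, not from the patent.

```python
import numpy as np

def euclidean(a, b):
    """Euclidean distance between two shape-context descriptors."""
    return float(np.sqrt(np.sum((np.asarray(a, float) - np.asarray(b, float)) ** 2)))

def ratio_match(desc_p, candidates, threshold=0.8):
    """Accept the nearest candidate only when the distance ratio
    D_ip / D_jp (nearest over second nearest) falls below the threshold."""
    d = sorted((euclidean(desc_p, c), idx) for idx, c in enumerate(candidates))
    (d_i, i), (d_j, _) = d[0], d[1]
    if d_j > 0 and d_i / d_j < threshold:
        return i
    return None            # ambiguous match, rejected

p = [1.0, 0.0, 0.0]
cands = [[1.0, 0.1, 0.0], [5.0, 5.0, 5.0], [4.0, 0.0, 1.0]]
best = ratio_match(p, cands)   # candidate 0 is far closer than the runner-up
```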
3. The hierarchical fusion and extraction method for multi-source detection of a moving target according to claim 1 or 2, characterized in that the method for fusing the registered visible-light image and infrared image in step 2 is specifically:
Step 3.1: performing region segmentation on the registered infrared image to separate the suspicious region and the background region of the infrared image; the suspicious region is the bright, highlighted region of strong infrared radiation;
Step 3.2: applying the dual-tree complex wavelet transform to the registered infrared image and visible-light image respectively to obtain the low-frequency and high-frequency information of each image; the basic information of an image corresponds to the low-frequency part of the wavelet transform result, and the detail information of an image corresponds to the high-frequency part;
Step 3.3: fusing the segmentation result with the wavelet transform result to obtain a low-frequency fused image and a high-frequency fused image respectively;
Step 3.4: applying the inverse dual-tree complex wavelet transform to the low-frequency and high-frequency fused images to obtain the first-layer fused image.
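The decompose / fuse-per-band / reconstruct pipeline of steps 3.2-3.4 can be sketched with a single-level Haar transform standing in for the dual-tree complex wavelet transform, and a max-magnitude rule for the high bands; both substitutions are ours and purely illustrative.

```python
import numpy as np

def haar2d(img):
    """Single-level 2-D Haar transform: (low, (h, v, d)) sub-bands; even dims assumed."""
    p00, p10 = img[0::2, 0::2], img[1::2, 0::2]
    p01, p11 = img[0::2, 1::2], img[1::2, 1::2]
    a = (p00 + p10 + p01 + p11) / 4
    h = (p00 - p10 + p01 - p11) / 4
    v = (p00 + p10 - p01 - p11) / 4
    d = (p00 - p10 - p01 + p11) / 4
    return a, (h, v, d)

def ihaar2d(a, hvd):
    """Exact inverse of haar2d."""
    h, v, d = hvd
    out = np.zeros((2 * a.shape[0], 2 * a.shape[1]))
    out[0::2, 0::2] = a + h + v + d
    out[1::2, 0::2] = a - h + v - d
    out[0::2, 1::2] = a + h - v - d
    out[1::2, 1::2] = a - h - v + d
    return out

def fuse(ir, vis):
    """Low band: average the sources; high bands: keep the larger-magnitude coefficient."""
    a1, hs1 = haar2d(ir)
    a2, hs2 = haar2d(vis)
    a = (a1 + a2) / 2
    hs = tuple(np.where(np.abs(x) >= np.abs(y), x, y) for x, y in zip(hs1, hs2))
    return ihaar2d(a, hs)

img_ir = np.arange(16, dtype=float).reshape(4, 4)
fused = fuse(img_ir, img_ir)   # fusing an image with itself reproduces it
```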
4. The hierarchical fusion and extraction method for multi-source detection of a moving target according to claim 1 or 2, characterized in that the method in step 4 for performing target detection on the second-layer fused image and obtaining the position of the target in the image is specifically:
Step 4.1: filtering the second-layer fused image;
a window matrix is built and scanned over the two-dimensional image pixel by pixel; the value at the matrix centre is replaced by the average of all point values inside the window matrix, expressed as:
g(x, y) = (1/M) Σ_{(i, j) ∈ S} f(i, j)
where f(x, y) is the second-layer fused image to be processed; g(x, y) is the second-layer fused image after filtering; S is the set of coordinate points in the neighbourhood centred on point (x, y); and M is the total number of coordinates in the set;
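The window-average filter of step 4.1 can be sketched as follows; the 3 × 3 window, edge padding, and test image are illustrative choices of ours.

```python
import numpy as np

def mean_filter(f, k=3):
    """k x k neighbourhood average: g(x, y) = (1/M) * sum of f over the window S,
    with M = k*k and edge replication at the borders."""
    pad = k // 2
    fp = np.pad(f.astype(float), pad, mode='edge')
    g = np.zeros_like(f, dtype=float)
    for dy in range(k):            # accumulate each shifted copy of the image
        for dx in range(k):
            g += fp[dy:dy + f.shape[0], dx:dx + f.shape[1]]
    return g / (k * k)

img = np.zeros((5, 5))
img[2, 2] = 9.0                    # a single bright spike
smoothed = mean_filter(img)        # spike spreads to 9 pixels of value 1
```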
Step 4.2: processing the filtered second-layer fused image with the moving-average image thresholding method to obtain a binary image;
let z_{k+1} denote the grey level of the point encountered at step k + 1 of the scanning sequence; the moving-average grey level at the new point is:
m(k + 1) = m(k) + (1/n)(z_{k+1} − z_{k+1−n})
where n is the number of points used when calculating the average grey level, with initial value m(1) = z_1/n;
the moving average is computed at every point of the image, so segmentation is performed with:
g(x, y) = 1 if f(x, y) > K·m_xy, and 0 otherwise
where K is a constant in the range [0, 1] and m_xy is the moving average of the input image at (x, y);
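Step 4.2 can be sketched as follows; the row-by-row scan order, window length n and constant K are illustrative (note K = 1.5 here for a clearer toy demo, whereas the claim restricts K to [0, 1]), and the first few pixels show a start-up transient while the average warms up.

```python
import numpy as np

def moving_average_threshold(image, n=4, K=0.5):
    """Scan the image row by row, maintain the moving average m of the last n grey
    levels via m(k+1) = m(k) + (z_{k+1} - z_{k+1-n})/n, and output 1 where z > K*m."""
    z = image.astype(float).ravel()
    out = np.zeros_like(z)
    m = 0.0
    for k, zk in enumerate(z):
        z_old = z[k - n] if k >= n else 0.0   # value leaving the window
        m = m + (zk - z_old) / n              # incremental moving-average update
        out[k] = 1.0 if zk > K * m else 0.0
    return out.reshape(image.shape)

img = np.array([[10, 10, 10, 200],
                [10, 10, 200, 10]])
binary = moving_average_threshold(img, n=4, K=1.5)   # bright pixels stand out
```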
Step 4.3: deleting from the binary image the regions whose area is smaller than that of a target, removing the interference of irrelevant information;
Step 4.4: processing the binary image, after removal of the irrelevant interference, with image morphology;
Step 4.5: establishing a cutting function and cutting the targets out of the full image after morphological processing to obtain the target images to be tested;
in the image I after morphological processing, the background part is black with value 0 and the target parts to be tested are white with value 1; the search starts from image coordinate (0, 0) and finds the first point whose pixel value is 1; starting from this point, all points with pixel value 1 connected to it are found and named the set T_1; among the coordinates of the points in T_1, the maximum value x_1max and minimum value x_1min of the abscissa, and the maximum value y_1max and minimum value y_1min of the ordinate, are found; the cropped target image to be tested is then the region x_1min < x < x_1max, y_1min < y < y_1max; proceeding in the same way, all targets to be tested are found, yielding all the target images to be tested;
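The connected-region search and bounding-box cut of step 4.5 can be sketched as follows; 4-connectivity, the breadth-first traversal, and the test mask are our illustrative choices.

```python
import numpy as np
from collections import deque

def cut_targets(binary):
    """Find each 4-connected region of 1-pixels and return its bounding-box crop."""
    img = np.asarray(binary)
    seen = np.zeros_like(img, dtype=bool)
    crops = []
    H, W = img.shape
    for y in range(H):
        for x in range(W):
            if img[y, x] == 1 and not seen[y, x]:
                q, pts = deque([(y, x)]), []     # BFS over the set T of 1-pixels
                seen[y, x] = True
                while q:
                    cy, cx = q.popleft()
                    pts.append((cy, cx))
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)):
                        if 0 <= ny < H and 0 <= nx < W and img[ny, nx] == 1 and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
                ys = [p[0] for p in pts]
                xs = [p[1] for p in pts]
                crops.append(img[min(ys):max(ys) + 1, min(xs):max(xs) + 1])
    return crops

mask = np.array([[1, 1, 0, 0],
                 [1, 1, 0, 0],
                 [0, 0, 0, 1]])
targets = cut_targets(mask)    # one 2x2 target and one 1x1 target
```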
Step 4.6: using principal component analysis to find the main symmetry axis of the target image to be tested and obtain the angle θ_test between this axis and the x-axis;
each point in the target image to be tested has a two-dimensional coordinate; these points form an n_test × 2 matrix X_test, where n_test is the number of points in the target image to be tested; the covariance matrix C_test of X_test is computed, and its eigenvector V_test = (x_v, y_v) is obtained; the angle θ_test between the main symmetry axis of the target image to be tested and the x-axis is then:
θ_test = arctan(y_v / x_v)
Step 4.7: performing image orientation normalization, rotating the target image to be tested by the angle θ_test and removing the newly produced black border;
Step 4.8: performing image size normalization, scaling the orientation-normalized target image to be tested to the template size;
Step 4.9: matching the orientation- and size-normalized target images to be tested one by one against the images in the template library, with a similarity threshold T; when the degree of similarity exceeds this threshold, the image is recognized as a target.
5. The hierarchical fusion and extraction method for multi-source detection of a moving target according to claim 3, characterized in that the method in step 4 for performing target detection on the second-layer fused image and obtaining the position of the target in the image is specifically:
Step 4.1: filtering the second-layer fused image;
a window matrix is built and scanned over the two-dimensional image pixel by pixel; the value at the matrix centre is replaced by the average of all point values inside the window matrix, expressed as:
g(x, y) = (1/M) Σ_{(i, j) ∈ S} f(i, j)
where f(x, y) is the second-layer fused image to be processed; g(x, y) is the second-layer fused image after filtering; S is the set of coordinate points in the neighbourhood centred on point (x, y); and M is the total number of coordinates in the set;
Step 4.2: processing the filtered second-layer fused image with the moving-average image thresholding method to obtain a binary image;
let z_{k+1} denote the grey level of the point encountered at step k + 1 of the scanning sequence; the moving-average grey level at the new point is:
m(k + 1) = m(k) + (1/n)(z_{k+1} − z_{k+1−n})
where n is the number of points used when calculating the average grey level, with initial value m(1) = z_1/n;
the moving average is computed at every point of the image, so segmentation is performed with:
g(x, y) = 1 if f(x, y) > K·m_xy, and 0 otherwise
where K is a constant in the range [0, 1] and m_xy is the moving average of the input image at (x, y);
Step 4.3: deleting from the binary image the regions whose area is smaller than that of a target, removing the interference of irrelevant information;
Step 4.4: processing the binary image, after removal of the irrelevant interference, with image morphology;
Step 4.5: establishing a cutting function and cutting the targets out of the full image after morphological processing to obtain the target images to be tested;
in the image I after morphological processing, the background part is black with value 0 and the target parts to be tested are white with value 1; the search starts from image coordinate (0, 0) and finds the first point whose pixel value is 1; starting from this point, all points with pixel value 1 connected to it are found and named the set T_1; among the coordinates of the points in T_1, the maximum value x_1max and minimum value x_1min of the abscissa, and the maximum value y_1max and minimum value y_1min of the ordinate, are found; the cropped target image to be tested is then the region x_1min < x < x_1max, y_1min < y < y_1max; proceeding in the same way, all targets to be tested are found, yielding all the target images to be tested;
Step 4.6: using principal component analysis to find the main symmetry axis of the target image to be tested and obtain the angle θ_test between this axis and the x-axis;
each point in the target image to be tested has a two-dimensional coordinate; these points form an n_test × 2 matrix X_test, where n_test is the number of points in the target image to be tested; the covariance matrix C_test of X_test is computed, and its eigenvector V_test = (x_v, y_v) is obtained; the angle θ_test between the main symmetry axis of the target image to be tested and the x-axis is then:
θ_test = arctan(y_v / x_v)
Step 4.7: performing image orientation normalization, rotating the target image to be tested by the angle θ_test and removing the newly produced black border;
Step 4.8: performing image size normalization, scaling the orientation-normalized target image to be tested to the template size;
Step 4.9: matching the orientation- and size-normalized target images to be tested one by one against the images in the template library, with a similarity threshold T; when the degree of similarity exceeds this threshold, the image is recognized as a target.
6. The hierarchical fusion and extraction method for multi-source detection of a moving target according to claim 3, characterized in that the method for obtaining the low-frequency fused image in step 3.3 is: according to the position information of the suspicious region and background region segmented from the infrared image, the visible-light image is segmented according to the same position information; for the suspicious region of the low-frequency parts of the infrared and visible-light images, the following rule is applied, in which C_F^l is the l-th-layer low-frequency coefficient of the fused image, C_ir^l is the l-th-layer low-frequency coefficient of the infrared image, and C_vis^l is the l-th-layer low-frequency coefficient of the visible-light image;
for the background region of the low-frequency parts of the infrared and visible-light images, the local-variance method is used: the larger the local variance, the greater the grey-value variation of each corresponding pixel in that region, the higher the relative pixel contrast of that region, and the more information the region carries; pixels with large local-variance values are given larger weights in the image fusion, with the rule:
ω_ir = 1 − ω_vis
where ω_ir is the infrared image weight and ω_vis is the visible-light image weight, computed from σ_vis and σ_ir, which are respectively the local variances of the visible-light image and the infrared image over the correlation-coefficient region r; the correlation coefficient r is calculated as:
r = Σ_i Σ_j (I_ir(i, j) − Ī_ir)(I_vis(i, j) − Ī_vis) / sqrt( Σ_i Σ_j (I_ir(i, j) − Ī_ir)² · Σ_i Σ_j (I_vis(i, j) − Ī_vis)² )
where the image size is M × N, Ī_vis denotes the average grey value of the visible-light image, Ī_ir denotes the average grey value of the infrared image, I_ir(i, j) represents the infrared image, and I_vis(i, j) represents the visible-light image.
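The background-region weighting can be sketched as follows. The patent states only ω_ir = 1 − ω_vis; the variance-ratio form ω_vis = σ_vis / (σ_vis + σ_ir) used below is our assumption, and the toy patches are illustrative.

```python
import numpy as np

def local_variance(patch):
    """Local variation measure: standard deviation of the grey values in the region."""
    return float(np.std(patch))

def correlation(ir, vis):
    """Normalised correlation coefficient r between two equally sized images."""
    a = ir - ir.mean()
    b = vis - vis.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def fuse_background(ir, vis):
    """Variance-weighted low-frequency fusion. Assumed weight form:
    w_vis = sigma_vis / (sigma_vis + sigma_ir); the patent gives only w_ir = 1 - w_vis."""
    s_ir, s_vis = local_variance(ir), local_variance(vis)
    w_vis = s_vis / (s_vis + s_ir) if (s_vis + s_ir) > 0 else 0.5
    w_ir = 1.0 - w_vis
    return w_ir * ir + w_vis * vis

ir = np.array([[5.0, 5.0], [5.0, 5.0]])       # flat IR patch, sigma = 0
vis = np.array([[0.0, 10.0], [10.0, 0.0]])    # detailed visible patch
fused = fuse_background(ir, vis)              # all weight goes to the visible patch
```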
7. The hierarchical fusion and extraction method for multi-source detection of a moving target according to claim 5, characterized in that the method for obtaining the low-frequency fused image in step 3.3 is: according to the position information of the suspicious region and background region segmented from the infrared image, the visible-light image is segmented according to the same position information; for the suspicious region of the low-frequency parts of the infrared and visible-light images, the following rule is applied, in which C_F^l is the l-th-layer low-frequency coefficient of the fused image, C_ir^l is the l-th-layer low-frequency coefficient of the infrared image, and C_vis^l is the l-th-layer low-frequency coefficient of the visible-light image;
for the background region of the low-frequency parts of the infrared and visible-light images, the local-variance method is used: the larger the local variance, the greater the grey-value variation of each corresponding pixel in that region, the higher the relative pixel contrast of that region, and the more information the region carries; pixels with large local-variance values are given larger weights in the image fusion, with the rule:
ω_ir = 1 − ω_vis
where ω_ir is the infrared image weight and ω_vis is the visible-light image weight, computed from σ_vis and σ_ir, which are respectively the local variances of the visible-light image and the infrared image over the correlation-coefficient region r; the correlation coefficient r is calculated as:
r = Σ_i Σ_j (I_ir(i, j) − Ī_ir)(I_vis(i, j) − Ī_vis) / sqrt( Σ_i Σ_j (I_ir(i, j) − Ī_ir)² · Σ_i Σ_j (I_vis(i, j) − Ī_vis)² )
where the image size is M × N, Ī_vis denotes the average grey value of the visible-light image, Ī_ir denotes the average grey value of the infrared image, I_ir(i, j) represents the infrared image, and I_vis(i, j) represents the visible-light image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910602605.0A CN110472658B (en) | 2019-07-05 | 2019-07-05 | Hierarchical fusion and extraction method for multi-source detection of moving target |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110472658A true CN110472658A (en) | 2019-11-19 |
CN110472658B CN110472658B (en) | 2023-02-14 |
Family
ID=68506839
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910602605.0A Active CN110472658B (en) | 2019-07-05 | 2019-07-05 | Hierarchical fusion and extraction method for multi-source detection of moving target |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110472658B (en) |
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111626230A (en) * | 2020-05-29 | 2020-09-04 | 合肥工业大学 | Vehicle logo identification method and system based on feature enhancement |
CN111667517A (en) * | 2020-06-05 | 2020-09-15 | 北京环境特性研究所 | Infrared polarization information fusion method and device based on wavelet packet transformation |
CN111815689A (en) * | 2020-06-30 | 2020-10-23 | 杭州科度科技有限公司 | Semi-automatic labeling method, equipment, medium and device |
CN112669360A (en) * | 2020-11-30 | 2021-04-16 | 西安电子科技大学 | Multi-source image registration method based on non-closed multi-dimensional contour feature sequence |
WO2021098081A1 (en) * | 2019-11-22 | 2021-05-27 | 大连理工大学 | Trajectory feature alignment-based multispectral stereo camera self-calibration algorithm |
CN113191965A (en) * | 2021-04-14 | 2021-07-30 | 浙江大华技术股份有限公司 | Image noise reduction method, device and computer storage medium |
CN113303905A (en) * | 2021-05-26 | 2021-08-27 | 中南大学湘雅二医院 | Interventional operation simulation method based on video image feedback |
CN113781315A (en) * | 2021-07-21 | 2021-12-10 | 武汉市异方体科技有限公司 | Multi-view-angle-based homologous sensor data fusion filtering method |
CN114153001A (en) * | 2021-12-30 | 2022-03-08 | 同方威视技术股份有限公司 | Inspection system and inspection method for inspecting frozen goods in goods |
CN116503756A (en) * | 2023-05-25 | 2023-07-28 | 数字太空(北京)科技股份公司 | Method for establishing surface texture reference surface based on ground control point database |
CN116862916A (en) * | 2023-09-05 | 2023-10-10 | 常熟理工学院 | Production detection method and system based on image processing |
CN117994624A (en) * | 2024-04-03 | 2024-05-07 | 聊城大学 | Target identification method based on visible light and hyperspectral image information fusion |
CN111815689B (en) * | 2020-06-30 | 2024-06-04 | 杭州科度科技有限公司 | Semi-automatic labeling method, equipment, medium and device |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1932882A (en) * | 2006-10-19 | 2007-03-21 | 上海交通大学 | Infared and visible light sequential image feature level fusing method based on target detection |
US20090147238A1 (en) * | 2007-03-27 | 2009-06-11 | Markov Vladimir B | Integrated multi-sensor survailance and tracking system |
CN101546428A (en) * | 2009-05-07 | 2009-09-30 | 西北工业大学 | Image fusion of sequence infrared and visible light based on region segmentation |
CN105321172A (en) * | 2015-08-31 | 2016-02-10 | 哈尔滨工业大学 | SAR, infrared and visible light image fusion method |
CN106485740A (en) * | 2016-10-12 | 2017-03-08 | 武汉大学 | A kind of combination point of safes and the multidate SAR image registration method of characteristic point |
CN108198157A (en) * | 2017-12-22 | 2018-06-22 | 湖南源信光电科技股份有限公司 | Heterologous image interfusion method based on well-marked target extracted region and NSST |
CN109558848A (en) * | 2018-11-30 | 2019-04-02 | 湖南华诺星空电子技术有限公司 | A kind of unmanned plane life detection method based on Multi-source Information Fusion |
Non-Patent Citations (4)
Title |
---|
ZENG XIANGJIN等: "Fusion research of visible and infrared images based on IHS transform and regional variance wavelet transform", 《2018 10TH INTERNATIONAL CONFERENCE ON INTELLIGENT HUMAN-MACHINE SYSTEMS AND CYBERNETICS》 * |
张文娜: "Research on Multi-source Image Fusion Technology", 《China Master's Theses Full-text Database, Information Science and Technology Series》 * |
张筱晗等: "Research and Progress of Hyperspectral Image Fusion Algorithms", 《Ship Electronic Engineering》 * |
郭庆乐: "Change Detection and Trend Analysis of Multi-temporal Remote Sensing Images", 《China Master's Theses Full-text Database, Information Science and Technology Series》 * |
Cited By (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2021098081A1 (en) * | 2019-11-22 | 2021-05-27 | 大连理工大学 | Trajectory feature alignment-based multispectral stereo camera self-calibration algorithm |
US11575873B2 (en) | 2019-11-22 | 2023-02-07 | Dalian University Of Technology | Multispectral stereo camera self-calibration algorithm based on track feature registration |
CN111626230A (en) * | 2020-05-29 | 2020-09-04 | 合肥工业大学 | Vehicle logo identification method and system based on feature enhancement |
CN111626230B (en) * | 2020-05-29 | 2023-04-14 | 合肥工业大学 | Vehicle logo identification method and system based on feature enhancement |
CN111667517A (en) * | 2020-06-05 | 2020-09-15 | 北京环境特性研究所 | Infrared polarization information fusion method and device based on wavelet packet transformation |
CN111815689A (en) * | 2020-06-30 | 2020-10-23 | 杭州科度科技有限公司 | Semi-automatic labeling method, equipment, medium and device |
CN111815689B (en) * | 2020-06-30 | 2024-06-04 | 杭州科度科技有限公司 | Semi-automatic labeling method, equipment, medium and device |
CN112669360A (en) * | 2020-11-30 | 2021-04-16 | 西安电子科技大学 | Multi-source image registration method based on non-closed multi-dimensional contour feature sequence |
CN112669360B (en) * | 2020-11-30 | 2023-03-10 | 西安电子科技大学 | Multi-source image registration method based on non-closed multi-dimensional contour feature sequence |
CN113191965B (en) * | 2021-04-14 | 2022-08-09 | 浙江大华技术股份有限公司 | Image noise reduction method, device and computer storage medium |
CN113191965A (en) * | 2021-04-14 | 2021-07-30 | 浙江大华技术股份有限公司 | Image noise reduction method, device and computer storage medium |
CN113303905B (en) * | 2021-05-26 | 2022-07-01 | 中南大学湘雅二医院 | Interventional operation simulation method based on video image feedback |
CN113303905A (en) * | 2021-05-26 | 2021-08-27 | 中南大学湘雅二医院 | Interventional operation simulation method based on video image feedback |
CN113781315A (en) * | 2021-07-21 | 2021-12-10 | 武汉市异方体科技有限公司 | Multi-view-angle-based homologous sensor data fusion filtering method |
CN114153001A (en) * | 2021-12-30 | 2022-03-08 | 同方威视技术股份有限公司 | Inspection system and inspection method for inspecting frozen goods in goods |
CN114153001B (en) * | 2021-12-30 | 2024-02-06 | 同方威视技术股份有限公司 | Inspection system and inspection method for inspecting frozen products in goods |
CN116503756A (en) * | 2023-05-25 | 2023-07-28 | 数字太空(北京)科技股份公司 | Method for establishing surface texture reference surface based on ground control point database |
CN116503756B (en) * | 2023-05-25 | 2024-01-12 | 数字太空(北京)科技股份公司 | Method for establishing surface texture reference surface based on ground control point database |
CN116862916A (en) * | 2023-09-05 | 2023-10-10 | 常熟理工学院 | Production detection method and system based on image processing |
CN116862916B (en) * | 2023-09-05 | 2023-11-07 | 常熟理工学院 | Production detection method and system based on image processing |
CN117994624A (en) * | 2024-04-03 | 2024-05-07 | 聊城大学 | Target identification method based on visible light and hyperspectral image information fusion |
CN117994624B (en) * | 2024-04-03 | 2024-06-11 | 聊城大学 | Target identification method based on visible light and hyperspectral image information fusion |
Also Published As
Publication number | Publication date |
---|---|
CN110472658B (en) | 2023-02-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110472658A (en) | Hierarchical fusion and extraction method for multi-source detection of a moving target | |
Zakeri et al. | Image based techniques for crack detection, classification and quantification in asphalt pavement: a review | |
CN108460341B (en) | Optical remote sensing image target detection method based on integrated depth convolution network | |
Rani et al. | Road Identification Through Efficient Edge Segmentation Based on Morphological Operations. | |
Rudol et al. | Human body detection and geolocalization for UAV search and rescue missions using color and thermal imagery | |
Guo et al. | Relevance of airborne lidar and multispectral image data for urban scene classification using Random Forests | |
Mallet et al. | A marked point process for modeling lidar waveforms | |
Teodoro et al. | Comparison of performance of object-based image analysis techniques available in open source software (Spring and Orfeo Toolbox/Monteverdi) considering very high spatial resolution data | |
Hormese et al. | Automated road extraction from high resolution satellite images | |
Spröhnle et al. | Object-based analysis and fusion of optical and SAR satellite data for dwelling detection in refugee camps | |
Jiang et al. | An optimized deep neural network detecting small and narrow rectangular objects in Google Earth images | |
CN114821358A (en) | Optical remote sensing image marine ship target extraction and identification method | |
CN106169086B (en) | High-resolution optical image under navigation data auxiliary damages method for extracting roads | |
Khudov et al. | The method for determining informative zones on images from on-board surveillance systems | |
Avudaiamma et al. | Automatic building extraction from VHR satellite image | |
Han et al. | An unsupervised algorithm for change detection in hyperspectral remote sensing data using synthetically fused images and derivative spectral profiles | |
Sirmacek et al. | Building detection using local Gabor features in very high resolution satellite images | |
Majidi et al. | Aerial tracking of elongated objects in rural environments | |
Sadjadi et al. | Combining Hyperspectral and LiDAR Data for Building Extraction using Machine Learning Technique. | |
Mojeddifar et al. | Integration of support vector machines for hydrothermal alteration mapping using ASTER data–case study: the northwestern part of the Kerman Cenozoic Magmatic Arc, Iran | |
Mohammadzadeh et al. | A self-organizing fuzzy segmentation (SOFS) method for road detection from high resolution satellite images | |
Pravalika et al. | Bridge Detection using Satellite Images | |
Wang et al. | Enhancing small object detection in remote sensing imagery with advanced generative adversarial networks | |
CN106156771A (en) | A kind of meter reading Region detection algorithms based on multi-feature fusion | |
Rishitha et al. | A Comprehensive Study on Bridge Detection and Extraction Techniques |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||