CN105205825A - Multi-resolution infrared and visible light scene matching method based on NSCT domain - Google Patents

Multi-resolution infrared and visible light scene matching method based on NSCT domain

Info

Publication number
CN105205825A
Authority
CN
China
Prior art keywords
infrared
visible light
image
reference image
target image
Prior art date
Legal status
Granted
Application number
CN201510635880.4A
Other languages
Chinese (zh)
Other versions
CN105205825B (en
Inventor
刘刚
刘中华
张丹
史恒亮
郑林涛
刘森
赵旭辉
Current Assignee
TIANJIN HUAGUOREN CARTOON CREATION Co.,Ltd.
Original Assignee
Henan University of Science and Technology
Priority date
Filing date
Publication date
Application filed by Henan University of Science and Technology filed Critical Henan University of Science and Technology
Priority to CN201510635880.4A priority Critical patent/CN105205825B/en
Publication of CN105205825A publication Critical patent/CN105205825A/en
Application granted granted Critical
Publication of CN105205825B publication Critical patent/CN105205825B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/30 Noise filtering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/22 Matching criteria, e.g. proximity measures

Abstract

The invention discloses a multi-resolution infrared and visible light scene matching method based on the non-subsampled contourlet transform (NSCT) domain. In the method, a phase congruency transform is first applied to the infrared target image and the visible light reference image; the NSCT is then performed on both images; a population is randomly generated with the row-column coordinates of pixels in the coarsest-scale low-frequency subimage of the visible light reference image as the individual encoding; with the Krawtchouk invariant-moment correlation coefficient between the visible light reference image and the infrared target image as the similarity measure, the matching position at that scale is found by searching the infrared low-frequency image at the corresponding scale through the selection, crossover and mutation operations of a genetic search; the search at each finer scale is then restricted to a neighborhood of the position matched at the previous scale, and the position obtained at scale 1 is the matching position of the infrared target image in the visible light reference image at full resolution. The method offers high matching accuracy, fast matching speed and good robustness, can resist rotational geometric distortion of the sensed image, and provides a practical way to match infrared and visible light scenes.

Description

Multi-resolution infrared and visible light scene matching method based on the NSCT domain
Technical field
The present invention relates to a scene matching method, applicable to scene matching aided navigation in which the visible light image serves as the reference image and the infrared image serves as the sensed (real-time) image.
Background art
Scene matching aided navigation is an auxiliary navigation technique that relies on image matching to position an aircraft precisely. Although downward-looking scene matching in which both the reference image and the real-time image are visible light images has been applied in precision-guided weapons, the optical sensor is easily affected by severe weather conditions, so an ideal high-quality image often cannot be obtained. An infrared imaging sensor, by contrast, is not affected by natural factors such as cloud, fog and nighttime and has all-weather imaging capability. Therefore, developing downward-looking scene-matching precision-guided weapons whose reference map is a visible light image and whose real-time image is an infrared image has become one of the main directions of development.
Summary of the invention
The main purpose of the present invention is to disclose a multi-resolution infrared and visible light scene matching method based on the NSCT (Non-Subsampled Contourlet Transform) domain, improving both the matching accuracy and the matching speed of the images.
The present invention is realized through the following technical solutions and measures.
The present invention proposes a multi-resolution infrared and visible light scene matching method based on the NSCT domain, comprising the following steps:
Step 1: apply a phase congruency transform to the infrared target image and to the visible light reference image respectively;
Step 2: perform the non-subsampled contourlet transform on the phase-congruency-transformed infrared target image and visible light reference image respectively, obtaining a multi-scale infrared target image and a multi-scale visible light reference image;
Step 3: randomly generate a population, taking the row-column coordinates of pixels in the coarsest-scale low-frequency image of the visible light reference image as the individual encoding;
Step 4: taking the Krawtchouk invariant-moment correlation coefficient between the infrared target image and the low-frequency part of the visible light reference image at the corresponding scale as the fitness function, iterate an optimized new population through the selection, crossover and mutation operations of the genetic search;
Step 5: when the maximum number of iterations is reached or the given accuracy is satisfied, obtain the matching position of the infrared target image in the visible light reference image at scale S from the individual with the highest fitness in the optimized population;
Step 6: on scale S-1 of the visible light reference image, search within a neighborhood of the matching position found at scale S: randomly generate a population taking the row-column coordinates of pixels in this neighborhood as the individual encoding, and obtain the matching position of the infrared target image in the visible light reference image at this scale according to the methods of steps 4 and 5;
Step 7: repeat step 6 until the scale of the visible light reference image reaches 1; the matching position then obtained is the final matching position of the infrared target image at full resolution of the visible light reference image.
Preferably, the Krawtchouk invariant moments are of orders 0-3.
Preferably, the fitness function is:
$$F=\frac{\sum_{n}\sum_{m}\bigl(Q^{T}_{nm}-\bar{Q}^{T}\bigr)\bigl(Q^{R}_{nm}-\bar{Q}^{R}\bigr)}{\sqrt{\sum_{n}\sum_{m}\bigl(Q^{T}_{nm}-\bar{Q}^{T}\bigr)^{2}\sum_{n}\sum_{m}\bigl(Q^{R}_{nm}-\bar{Q}^{R}\bigr)^{2}}}$$
where $Q^{T}_{nm}$ is the Krawtchouk invariant moment of the infrared target image and $\bar{Q}^{T}$ is its mean; $Q^{R}_{nm}$ is the Krawtchouk invariant moment of the region of the visible light reference image that is centred at the individual's position and has the same size as the infrared target image, and $\bar{Q}^{R}$ is its mean; n and m are the two degree-of-freedom orders of the Krawtchouk invariant moments.
Preferably, the crossover between individuals in the population uses the following formulas:
$$X_{1}'=\alpha X_{1}+(1-\alpha)X_{2},\qquad X_{2}'=\alpha X_{2}+(1-\alpha)X_{1},\qquad \alpha=\frac{f(X_{1})}{f(X_{1})+f(X_{2})}$$
where $X_{1}$ and $X_{2}$ are two individuals in the population, $\alpha$ and $1-\alpha$ are the weight coefficients, and $f(X_{1})$, $f(X_{2})$ are the fitness values of the two individuals at the current iteration.
Preferably, the mutation of an individual in the population uses the following formulas:
$$X'=\begin{cases}X+g(t,\,UB-X)\\X-g(t,\,X-LB)\end{cases}\qquad g(t,y)=y\bigl(1-r^{(1-t/G)^{b}}\bigr)$$
where t is the current iteration number, UB and LB are the upper and lower bounds of the individual encoding in the current population, r is a random number in the range [0, 1], G is the maximum number of iterations, and b is a system parameter.
Preferably, G is 100 and the value range of b is 2-5.
Preferably, the pyramid filter of the non-subsampled contourlet transform is 9-7, the directional filter is pkva, the number of decomposition scales is 3, and the numbers of directional subbands are 16, 8 and 4 respectively.
Compared with the prior art, the present invention has at least the following advantages and beneficial effects. The phase congruency transform first weakens the differences in grey level and contrast between the infrared and visible light images; Krawtchouk invariant moments are then used as the matching feature, and a genetic search strategy is adopted with the Krawtchouk invariant-moment correlation coefficient between the visible light reference image and the infrared target image as the search fitness criterion and similarity measure. The population is continually updated from the coarse scales to the fine scales, gradually yielding the exact matching position of the infrared target on the visible light low-frequency image, thereby realizing multi-resolution infrared and visible light scene matching in the non-subsampled contourlet domain. The method of the present invention not only has higher matching accuracy and faster matching speed, but also good robustness; it can resist rotational geometric distortion of the sensed image, and thus provides a practical solution to the problem of infrared and visible light scene matching.
Brief description of the drawings
Fig. 1 is the flow chart of the multi-resolution infrared and visible light scene matching method based on the NSCT domain of the present invention.
Fig. 2 is the visible light reference image used in the experiments of the present invention.
Fig. 3 is the infrared target image used in the experiments of the present invention.
Fig. 4 shows three infrared target images extracted from Fig. 3; Fig. 4A is infrared target 1, Fig. 4B is infrared target 2, and Fig. 4C is infrared target 3.
Fig. 5 shows the matching results of the three infrared target images of Fig. 4 in the visible light reference image.
Detailed description of the embodiments
The present invention is described below with reference to the drawings and embodiments. Fig. 1 is the flow chart of the present invention. With reference to Fig. 1, the multi-resolution infrared and visible light scene matching method based on the NSCT domain of the present invention comprises the following steps.
Step 1: apply a phase congruency transform to the infrared target image and to the visible light reference image respectively.
There are often large differences in grey level and contrast between infrared and visible light images, which makes it difficult to extract and describe common matching features. The phase congruency transform reflects the phase information of an image, that is, phase congruency features can also describe the image; it is invariant to local illumination and contrast, so it can be used to weaken the influence of differences in grey level and contrast between the images and provides a unified measure of feature significance for multi-modal images.
The phase congruency transform of a two-dimensional image I is:
$$PC(x,y)=\frac{\sum_{o}\sum_{n} W_{o}(x,y)\,\bigl\lfloor A_{no}(x,y)\,\Delta\Phi_{no}(x,y)-T\bigr\rfloor}{\sum_{o}\sum_{n} A_{no}(x,y)+\varepsilon}\qquad(1)$$
In formula (1), $A_{no}(x,y)$ is the amplitude of image I at filter scale n and orientation o; $W_{o}(x,y)$ is the filter band weighting factor; T is the estimated noise threshold, and only the part of the local energy exceeding T contributes to the phase congruency; $\varepsilon$ is a small constant that prevents the denominator from being zero; $\Delta\Phi_{no}(x,y)$ is the phase deviation (phase shift) function; $\lfloor\cdot\rfloor$ keeps the enclosed quantity when it is positive and sets it to zero otherwise.
In the present invention, before the NSCT, the two-dimensional phase congruency transform is applied to the infrared and visible light images to reduce the influence of grey level and contrast differences on matching feature extraction and description. The transform not only preserves the edge features of the original image and suppresses part of the image noise, but also retains the region information near the edge points.
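For illustration, the following is a minimal sketch of a simplified phase congruency transform in the spirit of Eq. (1), computed with a small log-Gabor filter bank. The filter-bank parameters (numbers of scales and orientations, minimum wavelength, bandwidth constants) and the noise threshold T are illustrative assumptions, and the weighting factor W and the exact phase-deviation measure of Eq. (1) are omitted for brevity; this is not the patent's exact implementation.

```python
import numpy as np

def logGabor_bank(rows, cols, n_scales=4, n_orient=6,
                  min_wavelength=3.0, mult=2.1, sigma_f=0.55, sigma_theta=0.6):
    """Build a small log-Gabor filter bank in the frequency domain."""
    y, x = np.mgrid[-rows // 2:rows - rows // 2, -cols // 2:cols - cols // 2]
    radius = np.sqrt((x / cols) ** 2 + (y / rows) ** 2)
    radius[rows // 2, cols // 2] = 1.0                 # avoid log(0) at DC
    theta = np.arctan2(-y, x)
    filters = []
    for s in range(n_scales):
        f0 = 1.0 / (min_wavelength * mult ** s)        # centre frequency of scale s
        radial = np.exp(-(np.log(radius / f0)) ** 2 / (2 * np.log(sigma_f) ** 2))
        radial[rows // 2, cols // 2] = 0.0
        row = []
        for o in range(n_orient):
            angle = o * np.pi / n_orient
            dtheta = np.arctan2(np.sin(theta - angle), np.cos(theta - angle))
            angular = np.exp(-dtheta ** 2 / (2 * sigma_theta ** 2))
            row.append(np.fft.ifftshift(radial * angular))
        filters.append(row)
    return filters

def phase_congruency(img, T=0.1, eps=1e-4):
    """Simplified phase congruency: local energy over total amplitude."""
    img = img.astype(np.float64)
    rows, cols = img.shape
    IMG = np.fft.fft2(img)
    filters = logGabor_bank(rows, cols)
    energy_sum = np.zeros((rows, cols))
    amplitude_sum = np.zeros((rows, cols))
    for o in range(len(filters[0])):
        even, odd = np.zeros_like(img), np.zeros_like(img)
        for s in range(len(filters)):
            resp = np.fft.ifft2(IMG * filters[s][o])   # complex (analytic-like) response
            even += resp.real                          # even (cosine) part
            odd += resp.imag                           # odd (sine) part
            amplitude_sum += np.abs(resp)
        energy_sum += np.maximum(np.sqrt(even ** 2 + odd ** 2) - T, 0.0)
    return energy_sum / (amplitude_sum + eps)
```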
Step 2: perform the non-subsampled contourlet transform on the phase-congruency-transformed infrared target image and visible light reference image respectively, obtaining a multi-scale infrared target image and a multi-scale visible light reference image.
The NSCT decomposes an image into band-pass directional subband images at each scale. A non-subsampled Laplacian pyramid (LP) is used for the multi-resolution decomposition of the NSCT. Each level of the LP decomposition produces a low-pass component of the previous-level signal and a band-pass component obtained as the difference between the previous-level signal and its low-pass component. The next level of the multi-resolution decomposition is then carried out iteratively on the low-pass component. A non-subsampled directional filter bank (DFB) divides the spectrum of the band-pass image produced by the LP decomposition into wedge-shaped frequency subbands, completing the directional decomposition at a given scale.
Performing image matching in the NSCT domain greatly reduces the search space and improves matching efficiency. In the present invention, the pyramid filter of the NSCT is "9-7", the directional filter is "pkva", the number of decomposition scales L is 3, and the numbers of directional subbands are [16, 8, 4].
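There is no NSCT implementation in the common Python scientific libraries, so the sketch below substitutes a non-subsampled Gaussian/Laplacian-style pyramid purely to illustrate how each image would be decomposed into per-scale low-frequency images that keep full resolution; the 9-7 pyramid filter, the pkva directional filter and the directional subbands of the real NSCT are not reproduced, and the function name and parameters are assumptions made for this sketch.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def nonsubsampled_pyramid(img, n_scales=3, base_sigma=1.0):
    """Return (lowpass_images, bandpass_images), finest scale first, coarsest last.

    Every image keeps the full resolution of the input (no downsampling),
    mimicking the shift-invariance that makes the NSCT attractive for matching."""
    img = img.astype(np.float64)
    lows, bands = [], []
    current = img
    for s in range(n_scales):
        low = gaussian_filter(current, sigma=base_sigma * 2 ** s)
        bands.append(current - low)   # band-pass detail at scale s
        lows.append(low)              # low-frequency image at scale s
        current = low
    return lows, bands

# lows[-1] plays the role of the coarsest-scale low-frequency image on which
# the genetic search is initialised.
```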
Step 3: in the non-subsampled contourlet domain, randomly generate a population taking the row-column coordinates of pixels in the coarsest-scale low-frequency image of the visible light reference image as the individual encoding; the individuals in the population are encoded as decimal (real-valued) numbers.
The low-frequency image is the image corresponding to the low-pass component at each scale after the NSCT decomposition of the visible light reference image.
Step 4: taking the Krawtchouk invariant-moment correlation coefficient between the infrared target image and the low-frequency part of the visible light reference image at the corresponding scale as the fitness function, iterate an optimized new population through the selection, crossover and mutation operations of the genetic search.
If the population has been initialized at the coarsest scale, the corresponding scale refers to the coarsest scale, i.e. the Krawtchouk invariant-moment correlation coefficient between the infrared target image and the coarsest-scale low-frequency part of the visible light reference image is used as the fitness function, and an optimized new population is obtained iteratively through the selection, crossover and mutation operations of the genetic search. Once step 5 has been carried out, the corresponding scale used when applying the methods of steps 4 and 5 within step 6 is scale S-1.
In scene matching applications, the choice of feature space is a key issue. Image moments, as feature descriptors, can express the global characteristics of an image shape while also providing different types of geometric information. Krawtchouk invariant moments have the advantage of being able to extract local features from any image region of interest; the Krawtchouk moment invariants constructed from them retain the basic properties of Krawtchouk moments and have good invariance to translation, rotation and scale. The orthogonality of Krawtchouk moments eliminates the information redundancy of traditional moments when describing an image and also avoids the discretization error that traditional continuous moments introduce for digital images, so they fully meet the requirements of image target feature extraction as feature vectors. The (n, m)-order Krawtchouk moment of an image f(x, y) can be expressed as:
$$Q_{nm}=\sum_{x=0}^{N-1}\sum_{y=0}^{M-1}\bar{K}_{n}(x;p_{1},N-1)\,\bar{K}_{m}(y;p_{2},M-1)\,f(x,y)\qquad(2)$$
where $\bar{K}_{n}(x;p,N)$ is the normalized (weighted) Krawtchouk polynomial:
$$\bar{K}_{n}(x;p,N)=K_{n}(x;p,N)\sqrt{\frac{w(x;p,N)}{\rho(n;p,N)}}\qquad(3)$$
$$w(x;p,N)=\binom{N}{x}p^{x}(1-p)^{N-x}\qquad(4)$$
$$\rho(n;p,N)=(-1)^{n}\left(\frac{1-p}{p}\right)^{n}\frac{n!}{(-N)_{n}}\qquad(5)$$
Here N and M are the numbers of rows and columns of the image, x, n = 0, 1, 2, ..., N with N > 0, and p, p1, p2 are tuning parameters whose values vary between 0 and 1. The Krawtchouk polynomial $K_{n}(x;p,N)={}_{2}F_{1}(-n,-x;-N;1/p)$ is given by the hypergeometric function:
$${}_{2}F_{1}(a,b;c;z)=\sum_{k=0}^{\infty}\frac{(a)_{k}(b)_{k}}{(c)_{k}}\,\frac{z^{k}}{k!}\qquad(6)$$
where $(a)_{k}$ is the Pochhammer symbol:
$$(a)_{k}=a(a+1)\cdots(a+k-1)=\frac{\Gamma(a+k)}{\Gamma(a)}\qquad(7)$$
The (i, j)-order geometric moment is:
$$m_{ij}=\sum_{x=0}^{N-1}\sum_{y=0}^{M-1}x^{i}\,y^{j}\,f(x,y)\qquad(8)$$
and $a_{k,n,p}$, the coefficient of $x^{k}$ in $K_{n}(x;p,N)=\sum_{k}a_{k,n,p}\,x^{k}$, can be computed from the hypergeometric function.
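As a concrete illustration of Eqs. (2)-(8), the following sketch computes the weighted Krawtchouk polynomials and the (n, m)-order Krawtchouk moments of a grey image; the choice p1 = p2 = 0.5 and the maximum order 3 are illustrative assumptions, not values mandated by the patent.

```python
import math
import numpy as np
from scipy.special import gammaln

def pochhammer(a, k):
    """Rising factorial (a)_k = a(a+1)...(a+k-1), with (a)_0 = 1 (Eq. (7))."""
    out = 1.0
    for i in range(k):
        out *= a + i
    return out

def krawtchouk_poly(n, x, p, N):
    """K_n(x; p, N) = 2F1(-n, -x; -N; 1/p), a terminating series (Eq. (6))."""
    return sum(pochhammer(-n, k) * pochhammer(-x, k) * (1.0 / p) ** k
               / (pochhammer(-N, k) * math.factorial(k))
               for k in range(n + 1))

def weighted_krawtchouk(n, p, N):
    """Vector of the weighted polynomial of Eq. (3) over x = 0..N."""
    xs = np.arange(N + 1)
    # log of the binomial weight w(x; p, N) of Eq. (4), kept in log form for stability
    log_w = (gammaln(N + 1) - gammaln(xs + 1) - gammaln(N - xs + 1)
             + xs * np.log(p) + (N - xs) * np.log(1 - p))
    # norm rho(n; p, N) of Eq. (5)
    rho = (-1) ** n * ((1 - p) / p) ** n * math.factorial(n) / pochhammer(-N, n)
    K = np.array([krawtchouk_poly(n, x, p, N) for x in xs])
    return K * np.sqrt(np.exp(log_w) / rho)

def krawtchouk_moments(img, order=3, p1=0.5, p2=0.5):
    """Matrix of Q_nm for n, m = 0..order of a grey image f(x, y) (Eq. (2))."""
    img = img.astype(np.float64)
    N, M = img.shape[0] - 1, img.shape[1] - 1
    Kx = np.stack([weighted_krawtchouk(n, p1, N) for n in range(order + 1)])
    Ky = np.stack([weighted_krawtchouk(m, p2, M) for m in range(order + 1)])
    return Kx @ img @ Ky.T
```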
To construct affine-invariant Krawtchouk moments that are invariant to translation, rotation and scale, the image f(x, y) is shifted to its centroid, its principal axis is rotated onto the x-axis, and the translated and rotated image is scale-normalized to obtain a normalized image. Computing the geometric moments of the normalized image yields geometric moment invariants $\nu_{nm}$ that are invariant to translation, rotation and scale. Taking N as the smaller of the numbers of rows and columns of the image, the expression is:
$$\nu_{nm}=\frac{1}{m_{00}^{(n+m)/2+1}}\sum_{x}\sum_{y}\bigl[(x-\bar{x})\cos\theta+(y-\bar{y})\sin\theta\bigr]^{n}\bigl[(y-\bar{y})\cos\theta-(x-\bar{x})\sin\theta\bigr]^{m}f(x,y)\qquad(9)$$
where:
$$\bar{x}=\frac{m_{10}}{m_{00}},\qquad\bar{y}=\frac{m_{01}}{m_{00}}\qquad(10)$$
$$\theta=\frac{1}{2}\arctan\frac{2\mu_{11}}{\mu_{20}-\mu_{02}}\qquad(11)$$
$$\mu_{pq}=\sum_{x}\sum_{y}(x-\bar{x})^{p}(y-\bar{y})^{q}f(x,y)\qquad(12)$$
The Krawtchouk invariant moments of the present invention are then expressed as:
$$\tilde{Q}_{nm}=\bigl[\rho(n;p_{1},N-1)\,\rho(m;p_{2},M-1)\bigr]^{-1/2}\sum_{i=0}^{n}\sum_{j=0}^{m}a_{i,n,p_{1}}\,a_{j,m,p_{2}}\,\nu_{ij}\qquad(13)$$
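A minimal sketch of the normalized geometric moments of Eqs. (9)-(12) is given below; combining these values with the Krawtchouk polynomial coefficients as in Eq. (13) would then yield the invariant moments. The function name and the maximum order are assumptions made for the sketch.

```python
import numpy as np

def normalized_geometric_moments(img, max_order=3):
    """Translation/rotation/scale-normalised geometric moments nu_ij (Eqs. (9)-(12))."""
    img = img.astype(np.float64)
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]].astype(np.float64)
    m00 = img.sum()
    xbar, ybar = (xs * img).sum() / m00, (ys * img).sum() / m00   # centroid, Eq. (10)
    mu11 = ((xs - xbar) * (ys - ybar) * img).sum()                # central moments, Eq. (12)
    mu20 = ((xs - xbar) ** 2 * img).sum()
    mu02 = ((ys - ybar) ** 2 * img).sum()
    theta = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)               # principal axis, Eq. (11)
    # rotate the centred coordinates so the principal axis becomes the x-axis
    xr = (xs - xbar) * np.cos(theta) + (ys - ybar) * np.sin(theta)
    yr = (ys - ybar) * np.cos(theta) - (xs - xbar) * np.sin(theta)
    nu = np.zeros((max_order + 1, max_order + 1))
    for i in range(max_order + 1):
        for j in range(max_order + 1):
            # scale normalisation by m00^((i+j)/2 + 1), Eq. (9)
            nu[i, j] = (xr ** i * yr ** j * img).sum() / m00 ** ((i + j) / 2.0 + 1.0)
    return nu
```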
The present invention uses Krawtchouk invariant moments of orders 0-3. The fitness function of a population individual obtained from the Krawtchouk invariant moments is then:
$$F=\frac{\sum_{n}\sum_{m}\bigl(Q^{T}_{nm}-\bar{Q}^{T}\bigr)\bigl(Q^{R}_{nm}-\bar{Q}^{R}\bigr)}{\sqrt{\sum_{n}\sum_{m}\bigl(Q^{T}_{nm}-\bar{Q}^{T}\bigr)^{2}\sum_{n}\sum_{m}\bigl(Q^{R}_{nm}-\bar{Q}^{R}\bigr)^{2}}}\qquad(14)$$
In formula (14), $Q^{T}_{nm}$ is the (n, m)-order Krawtchouk invariant moment of the real-time infrared target image and $\bar{Q}^{T}$ is its mean; $Q^{R}_{nm}$ is the (n, m)-order Krawtchouk invariant moment of the region of the visible light reference image that is centred at the individual's position and has the same size as the real-time infrared target image, and $\bar{Q}^{R}$ is its mean. For each individual in the population, the Krawtchouk invariant-moment correlation coefficient determines its fitness.
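The sketch below evaluates the fitness of Eq. (14) for one individual. For brevity it reuses the plain Krawtchouk moments sketched earlier rather than the invariant moments of Eq. (13); the helper extract_window, the off-image return value of -1 and the (row, col) centre encoding are assumptions of this sketch.

```python
import numpy as np

def extract_window(reference, row, col, shape):
    """Crop a window of the given shape centred at (row, col); None if it falls outside."""
    h, w = shape
    top, left = row - h // 2, col - w // 2
    if top < 0 or left < 0 or top + h > reference.shape[0] or left + w > reference.shape[1]:
        return None
    return reference[top:top + h, left:left + w]

def fitness(individual, reference, target_moments, target_shape):
    """Correlation coefficient of Eq. (14) between target and reference-window moments."""
    row, col = individual
    window = extract_window(reference, row, col, target_shape)
    if window is None:
        return -1.0                                   # individual falls off the image
    q_t = target_moments.ravel()
    q_r = krawtchouk_moments(window).ravel()
    q_t, q_r = q_t - q_t.mean(), q_r - q_r.mean()     # subtract the means of Q^T and Q^R
    denom = np.sqrt((q_t ** 2).sum() * (q_r ** 2).sum())
    return float((q_t * q_r).sum() / denom) if denom > 0 else -1.0
```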
The crossover between two real-coded individuals can be described by formula (15):
$$X_{1}'=\alpha X_{1}+(1-\alpha)X_{2},\qquad X_{2}'=\alpha X_{2}+(1-\alpha)X_{1}\qquad(15)$$
where $\alpha$ and $1-\alpha$ are weight coefficients determined by formula (16):
$$\alpha=\frac{f(X_{1})}{f(X_{1})+f(X_{2})}\qquad(16)$$
Here $f(X_{1})$ and $f(X_{2})$ are the fitness values of the two individuals at the current iteration; such a crossover makes the result closer to the individual with the larger fitness.
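The following is a sketch of one plausible reading of Eqs. (15)-(16), a fitness-weighted arithmetic crossover; the exact weighting used by the patent may differ, and clamping the fitness values to be non-negative is an assumption added here to keep the weight in [0, 1].

```python
import numpy as np

def crossover(x1, x2, f1, f2):
    """Fitness-weighted arithmetic crossover of two real-coded individuals.

    x1, x2 : arrays of (row, col) coordinates; f1, f2 : their fitness values."""
    x1, x2 = np.asarray(x1, dtype=float), np.asarray(x2, dtype=float)
    f1, f2 = max(f1, 0.0), max(f2, 0.0)            # keep the weight in [0, 1]
    alpha = f1 / (f1 + f2) if (f1 + f2) > 0 else 0.5
    child1 = alpha * x1 + (1.0 - alpha) * x2       # leans toward x1 when f1 > f2
    child2 = alpha * x2 + (1.0 - alpha) * x1
    return child1, child2
```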
Non-uniform mutation is adopted for individual mutation. Let t be the current iteration number and X the individual selected for mutation; the mutation process can then be described by formula (17):
$$X'=\begin{cases}X+g(t,\,UB-X)\\X-g(t,\,X-LB)\end{cases}\qquad(17)$$
where UB and LB are the upper and lower bounds of the individual encoding in the current population. The function g(t, y) returns a value in (0, y); as the number of iterations increases, the probability that its value tends to zero also increases. The expression for g(t, y) is:
$$g(t,y)=y\bigl(1-r^{(1-t/G)^{b}}\bigr)\qquad(18)$$
where r is a random number in the range [0, 1], G is the maximum number of iterations and b is a system parameter, generally taken as 2-5; in the present invention G is 100 and b is 3.
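The following sketch implements the non-uniform mutation of Eqs. (17)-(18); the 50/50 choice between moving toward the upper and the lower bound is an assumption, as the patent does not state how that direction is selected.

```python
import numpy as np

def g(t, y, G=100, b=3.0, rng=None):
    """Eq. (18): a random step in (0, y) that decays toward zero as t approaches G."""
    rng = np.random.default_rng() if rng is None else rng
    r = rng.random()                                   # random number in [0, 1]
    return y * (1.0 - r ** ((1.0 - t / G) ** b))

def mutate(x, t, lower, upper, G=100, b=3.0, rng=None):
    """Eq. (17): non-uniform mutation of a real-coded (row, col) individual."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x, dtype=float).copy()
    for i in range(x.size):
        if rng.random() < 0.5:
            x[i] += g(t, upper[i] - x[i], G, b, rng)   # move toward the upper bound UB
        else:
            x[i] -= g(t, x[i] - lower[i], G, b, rng)   # move toward the lower bound LB
    return x
```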
The initial probabilities of the selection, crossover and mutation operators of the genetic search are set to 0.9, 0.6 and 0.2 respectively, and 100 iterations are used as the termination condition of the genetic search at each resolution.
Step 5: when the maximum number of iterations is reached or the given accuracy is satisfied, obtain the matching position of the infrared target image in the visible light reference image at scale S from the individual with the highest fitness in the optimized population.
Step 6: on scale S-1 of the visible light reference image, search within a neighborhood of the matching position found at scale S: randomly generate a population taking the row-column coordinates of pixels in this neighborhood as the individual encoding, and obtain the matching position of the infrared target image in the visible light reference image at this scale according to the methods of steps 4 and 5.
Step 7: repeat step 6 until the scale of the visible light reference image reaches 1; the matching position then obtained is the final matching position of the infrared target image at full resolution of the visible light reference image.
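Putting the pieces together, the following sketch runs the coarse-to-fine search of Steps 3-7 using the helper functions sketched above (phase_congruency, nonsubsampled_pyramid, krawtchouk_moments, fitness, crossover, mutate). The population size, neighbourhood radius and the greatly simplified genetic-algorithm bookkeeping are assumptions of this sketch, and a real NSCT would replace the pyramid stand-in.

```python
import numpy as np

def match(target, reference, n_scales=3, pop_size=40, generations=100,
          neighborhood=8, seed=0):
    """Coarse-to-fine genetic search; returns the (row, col) centre of the match."""
    rng = np.random.default_rng(seed)
    t_lows, _ = nonsubsampled_pyramid(phase_congruency(target), n_scales)
    r_lows, _ = nonsubsampled_pyramid(phase_congruency(reference), n_scales)
    best = None
    for s in reversed(range(n_scales)):              # coarsest scale first (Step 3)
        t_low, r_low = t_lows[s], r_lows[s]
        h, w = t_low.shape
        t_moments = krawtchouk_moments(t_low)
        lo_b = (h // 2, w // 2)                      # valid centre positions
        hi_b = (r_low.shape[0] - h // 2 - 1, r_low.shape[1] - w // 2 - 1)
        if best is None:                             # whole reference image at the coarsest scale
            rows = rng.integers(lo_b[0], hi_b[0] + 1, pop_size)
            cols = rng.integers(lo_b[1], hi_b[1] + 1, pop_size)
        else:                                        # Step 6: neighbourhood of the coarser match
            rows = np.clip(best[0] + rng.integers(-neighborhood, neighborhood + 1,
                                                  pop_size), lo_b[0], hi_b[0])
            cols = np.clip(best[1] + rng.integers(-neighborhood, neighborhood + 1,
                                                  pop_size), lo_b[1], hi_b[1])
        pop = np.stack([rows, cols], axis=1).astype(float)
        for t in range(generations):                 # Steps 4-5: genetic search
            scores = np.array([fitness((int(r), int(c)), r_low, t_moments, (h, w))
                               for r, c in pop])
            order = np.argsort(scores)[::-1]         # fittest individuals first
            pop, scores = pop[order], scores[order]
            parents = pop[: pop_size // 2]           # truncation selection
            children = []
            for i in range(0, len(parents) - 1, 2):
                c1, c2 = crossover(parents[i], parents[i + 1], scores[i], scores[i + 1])
                children.append(mutate(c1, t, lo_b, hi_b, rng=rng))
                children.append(mutate(c2, t, lo_b, hi_b, rng=rng))
            pop = np.vstack([parents, np.array(children)])[:pop_size]
        best = (int(pop[0, 0]), int(pop[0, 1]))      # fittest evaluated individual
    return best                                      # match at the finest scale (full resolution)
```

Because the decomposition is non-subsampled, every scale keeps the full image resolution, so the coordinates found at a coarse scale can be reused directly as the centre of the neighbourhood search at the next finer scale.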
The effect of the present invention is verified below by simulation experiments.
Experiment 1: a matching experiment is carried out with an aerial visible light image and an aerial infrared image. The reference image is the visible light image shown in Fig. 2, the sensed image is the infrared image shown in Fig. 3, and the image size is 640 pixels × 720 pixels. As shown in Fig. 4, three infrared targets of size 128 pixels × 128 pixels are extracted from the infrared image of Fig. 3; Fig. 4A is infrared target 1, Fig. 4B is infrared target 2, and Fig. 4C is infrared target 3.
The matching results of the three infrared targets of Fig. 4 in the reference image are shown in Fig. 5 and Table 1. The three boxes in Fig. 5 mark the matching positions of the three infrared targets. It can be seen from Fig. 5 and Table 1 that, although the contrast of the infrared image differs greatly from that of the visible light image, the method of the present invention is insensitive to the contrast difference between the images and obtains good matching results.
The present invention introduces Krawtchouk moment invariants to extract the matching features of the images in the NSCT domain, uses the genetic algorithm as the search strategy, and takes the Krawtchouk invariant-moment correlation coefficient between the visible light reference image and the infrared target image as the fitness criterion and similarity measure of the genetic search. A coarse match is first obtained on the low-resolution low-frequency images at the coarse scales of the transform domain; the match is then refined on the higher-resolution low-frequency images at the finer scales according to the coarse matching result, and the matching of the infrared and visible light images at full resolution is finally achieved. The method of the invention has high matching accuracy and good stability, and because the search space is transformed into the sparser NSCT domain, the matching speed is fast.
Further, in Experiment 2 noise is added to the infrared image, and in Experiment 3 the test is repeated after a rotational geometric transform of the infrared image. The experimental procedures are not repeated here. The results show that the method of the present invention still obtains accurate matching results in both cases, which illustrates that the phase congruency transform has a good ability to suppress image noise and that the Krawtchouk invariant moments are superior as matching features.
The present invention can also be widely applied in other embodiments, and the scope of protection of the present invention is not limited by the embodiments but is defined by the scope of protection of the claims. Any person skilled in the art can make various changes and modifications without departing from the technical idea of the present invention, and such changes and modifications still fall within the scope of protection of the technical solution of the present invention.

Claims (7)

1. A multi-resolution infrared and visible light scene matching method based on the NSCT domain, characterized in that it comprises the following steps:
Step 1: apply a phase congruency transform to the infrared target image and to the visible light reference image respectively;
Step 2: perform the non-subsampled contourlet transform on the phase-congruency-transformed infrared target image and visible light reference image respectively, obtaining a multi-scale infrared target image and a multi-scale visible light reference image;
Step 3: randomly generate a population, taking the row-column coordinates of pixels in the coarsest-scale low-frequency image of the visible light reference image as the individual encoding;
Step 4: taking the Krawtchouk invariant-moment correlation coefficient between the infrared target image and the low-frequency part of the visible light reference image at the corresponding scale as the fitness function, iterate an optimized new population through the selection, crossover and mutation operations of the genetic search;
Step 5: when the maximum number of iterations is reached or the given accuracy is satisfied, obtain the matching position of the infrared target image in the visible light reference image at scale S from the individual with the highest fitness in the optimized population;
Step 6: on scale S-1 of the visible light reference image, search within a neighborhood of the matching position found at scale S: randomly generate a population taking the row-column coordinates of pixels in this neighborhood as the individual encoding, and obtain the matching position of the infrared target image in the visible light reference image at this scale according to the methods of steps 4 and 5;
Step 7: repeat step 6 until the scale of the visible light reference image reaches 1; the matching position then obtained is the final matching position of the infrared target image at full resolution of the visible light reference image.
2. The multi-resolution infrared and visible light scene matching method based on the NSCT domain according to claim 1, characterized in that the Krawtchouk invariant moments are of orders 0-3.
3. The multi-resolution infrared and visible light scene matching method based on the NSCT domain according to claim 1, characterized in that the fitness function is:
$$F=\frac{\sum_{n}\sum_{m}\bigl(Q^{T}_{nm}-\bar{Q}^{T}\bigr)\bigl(Q^{R}_{nm}-\bar{Q}^{R}\bigr)}{\sqrt{\sum_{n}\sum_{m}\bigl(Q^{T}_{nm}-\bar{Q}^{T}\bigr)^{2}\sum_{n}\sum_{m}\bigl(Q^{R}_{nm}-\bar{Q}^{R}\bigr)^{2}}}$$
where $Q^{T}_{nm}$ is the Krawtchouk invariant moment of the infrared target image and $\bar{Q}^{T}$ is its mean; $Q^{R}_{nm}$ is the Krawtchouk invariant moment of the region of the visible light reference image that is centred at the individual's position and has the same size as the infrared target image, and $\bar{Q}^{R}$ is its mean; n and m are the two degree-of-freedom orders of the Krawtchouk invariant moments.
4. The multi-resolution infrared and visible light scene matching method based on the NSCT domain according to claim 3, characterized in that the crossover between individuals in the population uses the following formulas:
$$X_{1}'=\alpha X_{1}+(1-\alpha)X_{2},\qquad X_{2}'=\alpha X_{2}+(1-\alpha)X_{1},\qquad \alpha=\frac{f(X_{1})}{f(X_{1})+f(X_{2})}$$
where $X_{1}$ and $X_{2}$ are two individuals in the population, $\alpha$ and $1-\alpha$ are the weight coefficients, and $f(X_{1})$, $f(X_{2})$ are their fitness values at the current iteration.
5. The multi-resolution infrared and visible light scene matching method based on the NSCT domain according to claim 1, characterized in that the mutation of an individual in the population uses the following formulas:
$$X'=\begin{cases}X+g(t,\,UB-X)\\X-g(t,\,X-LB)\end{cases}\qquad g(t,y)=y\bigl(1-r^{(1-t/G)^{b}}\bigr)$$
where t is the current iteration number, UB and LB are the upper and lower bounds of the individual encoding in the current population, r is a random number in the range [0, 1], G is the maximum number of iterations, and b is a system parameter.
6. The multi-resolution infrared and visible light scene matching method based on the NSCT domain according to claim 5, characterized in that G is 100 and the value range of b is 2-5.
7. The multi-resolution infrared and visible light scene matching method based on the NSCT domain according to claim 1, characterized in that the pyramid filter of the non-subsampled contourlet transform is 9-7, the directional filter is pkva, the number of decomposition scales is 3, and the numbers of directional subbands are 16, 8 and 4 respectively.
CN201510635880.4A 2015-09-30 2015-09-30 Multi-resolution infrared and visible light scene matching method based on NSCT domain Active CN105205825B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510635880.4A CN105205825B (en) 2015-09-30 2015-09-30 Multi-resolution infrared and visible light scene matching method based on NSCT domain

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510635880.4A CN105205825B (en) 2015-09-30 2015-09-30 Multi-resolution infrared and visible light scene matching method based on NSCT domain

Publications (2)

Publication Number Publication Date
CN105205825A true CN105205825A (en) 2015-12-30
CN105205825B CN105205825B (en) 2018-06-29

Family

ID=54953486

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510635880.4A Active CN105205825B (en) 2015-09-30 2015-09-30 Multi-resolution infrared and visible light scene matching method based on NSCT domain

Country Status (1)

Country Link
CN (1) CN105205825B (en)


Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102521818A (en) * 2011-12-05 2012-06-27 西北工业大学 Fusion method of SAR (Synthetic Aperture Radar) images and visible light images on the basis of NSCT (Non Subsampled Contourlet Transform)

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
吴一全 et al.: "Image retrieval using NSCT and Krawtchouk moments", Geomatics and Information Science of Wuhan University *
廉蔺 et al.: "Automatic registration algorithm for infrared and visible images based on optimal edge mapping", Acta Automatica Sinica *
王成栋 et al.: "Adaptive pseudo-parallel genetic algorithm based on real-number coding", Journal of Xi'an Jiaotong University *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110728713A (en) * 2018-07-16 2020-01-24 Oppo广东移动通信有限公司 Test method and test system
CN110728713B (en) * 2018-07-16 2022-09-30 Oppo广东移动通信有限公司 Test method and test system
CN110992407A (en) * 2019-11-07 2020-04-10 武汉多谱多勒科技有限公司 Infrared and visible light image matching method
CN110992407B (en) * 2019-11-07 2023-10-27 武汉多谱多勒科技有限公司 Infrared and visible light image matching method
CN111462196A (en) * 2020-03-03 2020-07-28 中国电子科技集团公司第二十八研究所 Remote sensing image matching method based on cuckoo search and Krawtchouk moment invariant

Also Published As

Publication number Publication date
CN105205825B (en) 2018-06-29


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20191106

Address after: 471000 room 207, F3, Yanhuang science and Technology Park, No. 333, Kaiyuan Avenue, Luolong District, Luoyang City, Henan Province

Patentee after: Luoyang Rixin Intelligent Technology Co., Ltd

Address before: 471000 No. 263 Kaiyuan Road, Luolong District, Henan, Luoyang

Patentee before: Henan University of Science and Technology

TR01 Transfer of patent right

Effective date of registration: 20201124

Address after: No.11, no.13-956-960, 961-1, - 2, Nanma Road, Heping District, Tianjin

Patentee after: TIANJIN HUAGUOREN CARTOON CREATION Co.,Ltd.

Address before: 471000 room 207, F3, Yanhuang science and Technology Park, No. 333, Kaiyuan Avenue, Luolong District, Luoyang City, Henan Province

Patentee before: Luoyang Rixin Intelligent Technology Co.,Ltd.