CN112308887B - Multi-source image sequence real-time registration method - Google Patents
Multi-source image sequence real-time registration method Download PDFInfo
- Publication number
- CN112308887B CN112308887B CN202011069504.0A CN202011069504A CN112308887B CN 112308887 B CN112308887 B CN 112308887B CN 202011069504 A CN202011069504 A CN 202011069504A CN 112308887 B CN112308887 B CN 112308887B
- Authority
- CN
- China
- Prior art keywords
- image
- feature
- registration
- registered
- pairs
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
- 238000000034 method Methods 0.000 title claims abstract description 39
- 238000005070 sampling Methods 0.000 claims abstract description 29
- 230000009466 transformation Effects 0.000 claims abstract description 27
- 230000008569 process Effects 0.000 claims abstract description 11
- 238000001514 detection method Methods 0.000 claims abstract description 9
- 239000011159 matrix material Substances 0.000 claims description 3
- 238000000605 extraction Methods 0.000 abstract description 5
- 230000000007 visual effect Effects 0.000 description 14
- 230000000694 effects Effects 0.000 description 11
- 238000005516 engineering process Methods 0.000 description 9
- 238000012545 processing Methods 0.000 description 6
- 238000010586 diagram Methods 0.000 description 4
- 238000011156 evaluation Methods 0.000 description 3
- 238000013461 design Methods 0.000 description 2
- 238000011161 development Methods 0.000 description 2
- 230000018109 developmental process Effects 0.000 description 2
- 230000004927 fusion Effects 0.000 description 2
- 230000004888 barrier function Effects 0.000 description 1
- 230000009286 beneficial effect Effects 0.000 description 1
- 230000008901 benefit Effects 0.000 description 1
- 230000009977 dual effect Effects 0.000 description 1
- 238000002474 experimental method Methods 0.000 description 1
- 230000036541 health Effects 0.000 description 1
- 238000003709 image segmentation Methods 0.000 description 1
- 230000006872 improvement Effects 0.000 description 1
- 230000035800 maturation Effects 0.000 description 1
- 230000035515 penetration Effects 0.000 description 1
- 230000002265 prevention Effects 0.000 description 1
- 230000002123 temporal effect Effects 0.000 description 1
- 238000012360 testing method Methods 0.000 description 1
- 238000012795 verification Methods 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Processing (AREA)
Abstract
The invention discloses a multi-source image sequence real-time registration method comprising the following steps: input two groups of single-source image sequences, one serving as the reference image sequence and the other as the image sequence to be registered; sample both sequences simultaneously at regular intervals using an interval-sampling method to obtain a set of calibration frame image pairs; register the current sampling pair to be processed with a feature-based image registration algorithm, and update the camera parameters with the resulting registration parameters; sequentially apply projection transformation to the image pairs to be registered that lie after the current calibration frame pair and before the next calibration frame pair, yielding a series of registered target image sequence pairs; select the next calibration frame image pair as the current sampling pair to be processed, and repeat until all images to be registered have been registered. The method eliminates feature point detection and extraction, feature description, feature matching and registration-relation solving for the sequence images that follow each calibration frame image, and achieves high registration accuracy.
Description
[ field of technology ]
The invention belongs to the technical field of image processing, and particularly relates to a multi-source image sequence real-time registration method.
[ background Art ]
With the development of emerging high-tech industries such as computer vision, image processing technology has entered a new stage. Current image processing technology mainly comprises image segmentation, image fusion, image restoration, image matching, image registration and the like; as soon as a new technique appears, it is applied in fields as diverse as spaceflight, medical health, weather prediction, agricultural and forestry management, and land-resource surveying. As these technologies develop and mature, their applications have been further refined, for example face-recognition payment, searches for criminal suspects, diagnosis of cell canceration, weather prediction, and statistics on national grain yield. With the expansion and deepening of these application fields, problems such as increased processing difficulty and reduced efficiency arise. In many cases a single image cannot supply the required amount of information, so several images must be combined to obtain more comprehensive and complete information, which in turn requires operations such as image registration, image fusion and image stitching. Image registration is an important foundation on which these applications depend, and it is one of the fastest-developing image processing techniques of recent years.
According to the image information used during registration, image registration methods fall into two major directions: region-based algorithms and feature-based algorithms. Feature-based image registration offers high registration accuracy and a relatively short registration time. However, the current classical feature-based registration algorithms still cannot meet real-time requirements.
[ invention ]
The invention aims to provide a multi-source image sequence real-time registration method which eliminates feature point detection and extraction, feature description, feature matching and registration-relation solving for the sequence images that follow each calibration frame image, and which achieves high registration accuracy.
The invention adopts the following technical scheme: a feature-based multi-source image sequence real-time registration method comprises the following steps:
s1, inputting two groups of single-source image sequences, wherein one group of single-source image sequences is a reference image sequence, and the other group of single-source image sequences is an image sequence to be registered; wherein, the two groups of single-source image sequences are acquired under the same time sequence.
Simultaneously sampling the two groups of single-source image sequences at intervals by using an interval sampling method to obtain a plurality of sampling pairs, and combining the sampling pairs sequentially in time order to obtain a calibration frame image pair set; the first image pair in the calibration frame image pair set is selected as the current sampling pair to be processed.
And S2, registering the current sampling pair to be processed by using a feature-based image registration algorithm to obtain registration parameters, and updating the camera parameters with the registration parameters.
And S3, sequentially performing projection transformation on the image pairs to be registered after the current calibration frame image pair and before the next calibration frame image pair by using the latest camera parameters obtained in the step S2 to obtain a series of registered target image sequence pairs, and completing the registration of the current calibration frame image.
And S4, selecting the next calibration frame image pair as the current sampling pair to be processed, and sequentially repeating the steps S2 to S3 to complete the registration of the next calibration frame image.
And S5, repeating the step S4 until all the images to be registered are registered.
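As a minimal illustration of the control flow of steps S1 to S5 (a sketch, not the patented implementation), the following Python fragment shows how full feature-based registration runs only on the calibration frames, while every other frame reuses the most recent parameters; `register_pair` and `warp` are hypothetical stand-ins for the step-S2 registration and the step-S3 projection transformation, and the interval length is an assumed parameter:

```python
def interval_sample_indices(n_frames, interval):
    # Calibration frames are taken every `interval` frames, starting at frame 0.
    return list(range(0, n_frames, interval))

def register_sequences(ref_seq, mov_seq, interval, register_pair, warp):
    """Register mov_seq to ref_seq frame by frame (steps S1-S5 sketch).

    register_pair(ref, mov) -> transform params   (step S2, hypothetical helper)
    warp(mov, params)       -> registered image   (step S3, hypothetical helper)
    """
    calib = set(interval_sample_indices(len(mov_seq), interval))
    params = None
    out = []
    for i, (ref, mov) in enumerate(zip(ref_seq, mov_seq)):
        if i in calib:                 # full feature-based registration (S2)
            params = register_pair(ref, mov)
        out.append(warp(mov, params))  # reuse the latest parameters (S3)
    return out
```

The expensive feature pipeline therefore runs once per interval rather than once per frame, which is the source of the claimed speed-up.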
Further, the step S2 specifically includes:
step S21: and (3) detecting characteristic points of the reference image and the image to be registered in the current sampling graph pair to be processed obtained in the step (S1) by using the same characteristic point detection algorithm, and extracting the position and gray characteristic information of the two images.
Step S22: selecting a certain feature description Fu Suanfa, and constructing feature descriptors for all feature points of the reference image detected in the step S21 to form a feature set of the reference image; and constructing feature descriptors for all feature points of the image to be registered detected in the step S21 by using the same feature description Fu Suanfa to form a feature set of the image to be registered.
Step S23: and selecting a certain feature point matching algorithm, and carrying out matching association on the feature set of the reference image and feature descriptors in the feature set of the image to be registered to obtain matched feature pairs.
Step S24: and determining registration parameters according to the geometrical relationship between the matched feature pairs.
Step S25: and updating camera parameters by using the registration parameters obtained in the step S24, and performing projection transformation on the image to be registered in the current calibration frame image based on the latest camera parameters to obtain a registered target image of the current calibration frame image.
Further, the plurality of sampling pairs are combined sequentially in time order in the following manner: the two images sampled at the same time instant form one pair, and the different pairs are arranged in chronological order.
Further, the specific process of solving the registration parameters in the step S24 is as follows: an image transformation model is adopted, and the transformation parameters of the model are obtained from the positional correspondence between the matched feature points, yielding a transformation matrix, i.e. the registration parameters.
Further, the image transformation model is chosen to be a similarity transformation model.
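The patent does not specify the solver, but one standard way to obtain the transformation matrix of a similarity model from matched feature pairs is linear least squares: a similarity transform x' = a·x − b·y + tx, y' = b·x + a·y + ty is linear in the four unknowns (a, b, tx, ty), so the matched point correspondences can be stacked into an overdetermined system. A sketch under that assumption:

```python
import numpy as np

def fit_similarity(src, dst):
    """Least-squares similarity transform (uniform scale + rotation +
    translation) mapping src points onto dst points.
    src, dst: (N, 2) arrays of matched feature coordinates.
    Returns the 3x3 homogeneous transformation matrix."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    n = len(src)
    # Unknown vector is [a, b, tx, ty] with a = s*cos(theta), b = s*sin(theta).
    A = np.zeros((2 * n, 4))
    A[0::2] = np.column_stack([src[:, 0], -src[:, 1], np.ones(n), np.zeros(n)])
    A[1::2] = np.column_stack([src[:, 1],  src[:, 0], np.zeros(n), np.ones(n)])
    rhs = dst.reshape(-1)                       # [x0', y0', x1', y1', ...]
    a, b, tx, ty = np.linalg.lstsq(A, rhs, rcond=None)[0]
    return np.array([[a, -b, tx],
                     [b,  a, ty],
                     [0., 0., 1.]])
```

Two correspondences suffice in principle; using all matches averages out localization noise. In practice a robust estimator such as RANSAC is often wrapped around this solver to reject mismatches, though the patent text does not mandate one.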
The beneficial effects of the invention are as follows: the registration relationship obtained from a calibration frame image pair is reused to register the subsequent images acquired within a short time, which eliminates feature point detection and extraction, feature description, feature matching and registration-relation solving for the sequence images that follow the calibration frame image; registration accuracy remains high, and efficient real-time registration of video or image sequences is achieved.
[ description of the drawings ]
FIG. 1 is a schematic diagram of the present invention;
FIG. 2 is a flow chart of the present invention;
FIG. 3 is a diagram of ORB feature matching effect in an embodiment of the invention;
3a ORB feature matching effect graph of scene 1;
3b ORB feature matching effect graph of scene 2;
3c ORB feature matching effect graph of scene 3;
FIG. 4 illustrates the image sequence matching effect of ORB features in an embodiment of the invention;
4a. Visual images of image 1 before and after registration;
4b. Visual images of image 2 before and after registration;
4c. Visual images of image 3 before and after registration;
4d. Visual images of image 4 before and after registration;
4e. Visual images of image 5 before and after registration;
4f. Visual images of image 6 before and after registration;
4g. Visual images of image 7 before and after registration;
4h. Visual images of image 8 before and after registration;
4i. Visual images of image 9 before and after registration;
4j. Visual images of image 10 before and after registration;
4k. Visual images of image 11 before and after registration;
4l. Visual images of image 12 before and after registration;
wherein, the left image is a vision image before registration, and the right image is a vision image after registration.
[ detailed description ] of the invention
The invention will be described in detail below with reference to the drawings and the detailed description.
The method is applied to the registration of infrared and visible light images; the data set used is a self-acquired unmanned aerial vehicle (UAV) aerial remote sensing data set of ground scenes, referred to as the UAV data set.
The invention discloses a feature-based multi-source image sequence real-time registration method, which is shown in fig. 1 and 2 and comprises the following steps:
s1, inputting two groups of single-source image sequences, wherein one group of single-source image sequences is a reference image sequence, and the other group of single-source image sequences is an image sequence to be registered; wherein, the two groups of single-source image sequences are acquired under the same time sequence.
Simultaneously sampling the two groups of single-source image sequences at intervals by using an interval sampling method to obtain a plurality of sampling pairs, and combining them sequentially in time order, i.e. the two images sampled at the same time instant form one pair and the different pairs are arranged in chronological order, to obtain the calibration frame image pair set; each sampling pair specifically comprises the reference image and the image to be registered acquired at the same moment from the reference image sequence and the image sequence to be registered; the first image pair in the calibration frame image pair set is selected as the current sampling pair to be processed.
Step S2, registering the current sampling pair to be processed by using a feature-based image registration algorithm, and updating the camera parameters with the obtained registration parameters;
the specific process of the feature-based image registration algorithm is as follows:
step S21: and (3) detecting characteristic points of the reference image and the image to be registered in the current sampling graph pair to be processed obtained in the step (S1) by using the same characteristic point detection algorithm, and extracting the position and gray characteristic information of the two images.
Step S22: selecting a certain feature description Fu Suanfa, and constructing feature descriptors for all feature points of the reference image detected in the step S21 to form a feature set of the reference image; and constructing feature descriptors for all feature points of the image to be registered detected in the step S21 by using the same feature description Fu Suanfa to form a feature set of the image to be registered.
Step S23: and selecting a certain feature point matching algorithm, and carrying out matching association on the feature set of the reference image and feature descriptors in the feature set of the image to be registered to obtain matched feature pairs.
Step S24: and determining registration parameters according to the geometrical relationship between the matched feature pairs. Specifically, an image transformation model is selected by using priori knowledge of camera position relation or experimental test mode, and transformation parameters of the model are obtained through the position corresponding relation between the matched characteristic points, so as to obtain a transformation matrix, namely registration parameters.
Step S25: and updating the camera parameters by using the registration parameters obtained in the step S24. And carrying out projection transformation on the image to be registered in the current calibration frame image based on the latest camera parameters to obtain a registered target image of the current calibration frame image.
Step S3, sequentially performing projection transformation on the image pairs to be registered after the current calibration frame image and before the next calibration frame image by using the latest camera parameters obtained in the step S25 to obtain a series of registered target image pairs, and completing registration of the current calibration frame image;
step S4, selecting a next calibration frame image, and sequentially repeating the step S2 and the step S3 to finish registration of the next calibration frame image;
and S5, repeating the step S4 until all the images to be registered are registered.
The method is applied to the registration of infrared and visible light images; the data set used is a self-acquired UAV aerial remote sensing data set of ground scenes, referred to as the UAV data set. The data set contains 12 scenes and 1000 images; the spatial resolution of the images is 1280×960 pixels and the temporal resolution is 25 frames/second. The images are acquired by four sensors, which respectively capture the visible red band, the visible green band, the visible red-edge band and the near-infrared band. Because the four cameras are located at different positions, there are slight differences in the scene ranges covered by the four images, i.e. the same object appears at different spatial positions in different images. In this situation, one image is usually selected as the reference image, and the other three groups of images are mapped into the space of the reference image so as to register the four images. The process of steps S1 to S5 above is performed for each group of image sequence pairs.
When step S1 is executed, the infrared image sequence serves as the reference image sequence, and the visible red band, visible green band and visible red-edge band image sequences serve as the image sequences to be registered. The sampling interval is 250 frames: frames 1, 251, 501, …, i.e. frame 250×i+1, of the reference image sequence and of each sequence to be registered are paired, and these frame pairs together form the set of calibration frame image pairs.
The execution of steps S21 to S23 involves choosing a feature point detection algorithm, a feature description algorithm and a feature matching algorithm. Many independently designed and improved algorithms exist for each function, and in practice the algorithm for each module can be freely selected and combined according to application requirements, while some mature algorithms, such as the classical SURF, FAST and ORB feature algorithms, provide a complete pipeline from feature point detection to feature matching. In this embodiment, the ORB feature algorithm is selected to obtain the feature matching result between the reference image and the image to be registered in each calibration frame image pair.
Executing step S24 involves choosing the transformation model. Common transformation models include translation, rigid-body, similarity, affine, projective and nonlinear transformations. In this embodiment, the front-back offset between the cameras and their vertically distributed positions mainly produce an isometric transformation plus uniform scaling between the image to be registered and the reference image; accordingly, a similarity transformation model is selected.
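To make the chosen model concrete, a similarity transform combines uniform scaling s, rotation θ and translation (tx, ty) into one 3×3 homogeneous matrix, and step S3's projection transformation amounts to applying that matrix to image coordinates. A sketch (the matrix layout is the standard one, not taken verbatim from the patent):

```python
import numpy as np

def similarity_matrix(scale, theta, tx, ty):
    """3x3 homogeneous similarity transform: uniform scaling and rotation
    by theta (radians), followed by translation (tx, ty)."""
    c, s = scale * np.cos(theta), scale * np.sin(theta)
    return np.array([[c, -s, tx],
                     [s,  c, ty],
                     [0., 0., 1.]])

def transform_points(M, pts):
    """Apply a 3x3 homogeneous transform M to an (N, 2) array of points."""
    pts = np.asarray(pts, float)
    ones = np.ones((len(pts), 1))
    out = np.hstack([pts, ones]) @ M.T
    return out[:, :2] / out[:, 2:3]   # divide out the homogeneous coordinate
```

With four degrees of freedom, the similarity model is more constrained than an affine or projective model, which is why it suits cameras that differ only by offset and uniform scale, as described above.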
To verify the effectiveness of the method of the present invention, evaluations were made both subjectively and objectively. Subjective evaluation is the human eye's judgment of the registration result; the objective evaluation indices are registration time and registration accuracy (RMSE), where a shorter registration time and a lower RMSE indicate a better registration result.
In a feature-based image registration algorithm, the quality of the feature matches strongly influences the experimental result, so a matching experiment and its results are presented. Fig. 3 shows the ORB feature matching results for 3 selected scenes; in each of Figs. 3a, 3b and 3c, from top to bottom and from left to right, the pairs shown are red band image-infrared image, green band image-infrared image, and red-edge band image-infrared image. Visually, the matching accuracy is high and the features are essentially matched correctly. It is noteworthy that the feature matching pairs generated between the infrared image and the visible images of different bands in the same scene differ considerably, which is mainly caused by the differences in gray values.
To verify the real-time performance of the image registration, one scene is selected from the data set and the registration of a short image sequence in that scene is displayed. Fig. 4 shows the registration of a short image sequence by the ORB algorithm in this scene; each group of image sequences contains 12 frames, shown in Figs. 4a to 4l, and the sequence records a white truck entering the frame from the top of the left-hand road, moving slowly, and finally about to leave the frame. In the implementation, the first group of images is set as the calibration frame image pair and, after the feature point detection, feature extraction and matching, and registration-parameter solving of step S2, the remaining 11 groups of images are registered using the registration parameters of the first group.
The left image of each panel in Fig. 4 is the overlay of the unregistered visible light images; obvious ghosting, blurring and similar problems are present, which shows that the spatial positions of the images of the 3 bands are not fully consistent. Taking the infrared image as the reference image, the visible light images of the 3 bands are registered, and overlaying the resulting registered target images yields the right image of each panel in Fig. 4. It can be seen that after the first frame pair passes through the feature-based registration algorithm of step S2, the ghosting and blurring are eliminated, which fully demonstrates the good matching performance of the ORB algorithm and the suitability of the selected transformation model. The subsequent series of images are spatially transformed in step S3, and the resulting registrations likewise remove visual artifacts such as ghosting and blurring, giving images with rich detail and clear boundaries. Subjectively, therefore, the invention achieves a good registration result in this embodiment.
In order to facilitate the verification from the objective aspect, the method is applied to 10 scenes, and the average registration time and the average registration precision of each image in all scenes are counted. The results are shown in Table 1.
TABLE 1 ORB feature based image sequence registration time and precision
Scene | Time (ms) | Precision (pixel) |
---|---|---|
1 | 7.332 | 0.1579 |
2 | 7.238 | 0.1043 |
3 | 7.625 | 0.1624 |
4 | 7.236 | 0.1862 |
5 | 7.584 | 0.1551 |
6 | 7.856 | 0.0959 |
7 | 7.356 | 0.0935 |
8 | 8.040 | 0.0475 |
9 | 7.792 | 0.1981 |
10 | 8.460 | 0.0735 |
As can be seen from the data in Table 1, the average registration time across all scenes lies between 7.236 ms and 8.460 ms. In this embodiment the temporal resolution of the image sequences to be registered is 25 frames/second, so to keep up with real-time acquisition the average registration time per frame must not exceed 40 ms (1000 ms / 25). The average registration time in all 10 scenes is well below 40 ms, which shows that the method fully satisfies the real-time requirement of image registration. In addition, the registration accuracy (RMSE) is below 1 pixel in every scene, i.e. sub-pixel accuracy is achieved. Objectively, therefore, the method fully satisfies the dual requirements of registration time and accuracy.
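The arithmetic behind this real-time claim can be checked directly from Table 1; the following small sketch uses only the figures reported above:

```python
# Per-frame time budget for a 25 frames/second sequence, and the measured
# average registration times (ms) and RMSE values (pixels) from Table 1.
fps = 25
budget_ms = 1000 / fps   # 40 ms available per frame

times_ms = [7.332, 7.238, 7.625, 7.236, 7.584, 7.856, 7.356, 8.040, 7.792, 8.460]
rmse_px  = [0.1579, 0.1043, 0.1624, 0.1862, 0.1551, 0.0959, 0.0935, 0.0475, 0.1981, 0.0735]

real_time = all(t <= budget_ms for t in times_ms)   # every scene within budget
sub_pixel = all(e < 1.0 for e in rmse_px)           # sub-pixel RMSE in every scene
```

Both checks pass: every scene finishes in roughly a fifth of the 40 ms budget while staying below 0.2 pixel RMSE.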
The invention provides a feature-based multi-source image sequence real-time registration technique. Based on the observation that, under normal conditions, the relative positions of the cameras acquiring the multi-source images remain fixed over a short time, it registers the subsequently acquired images using the registration relationship obtained from a calibration frame image pair. This strategy removes feature point detection and extraction, feature description, feature matching and registration-relation solving for the sequence images that follow each calibration frame image. Hence, while retaining the high registration accuracy of a feature-based registration algorithm, the registration time of the image sequence is greatly reduced and the registration efficiency is improved.
Claims (4)
1. The characteristic-based multi-source image sequence real-time registration method is characterized by comprising the following steps of:
s1, inputting two groups of single-source image sequences, wherein one group of single-source image sequences is a reference image sequence, and the other group of single-source image sequences is an image sequence to be registered; wherein, the two groups of single-source image sequences are obtained under the same time sequence;
simultaneously sampling the two groups of single-source image sequences at intervals by using an interval sampling method to obtain a plurality of sampling pairs, and combining the sampling pairs sequentially in time order to obtain a calibration frame image pair set; selecting the first image pair in the calibration frame image pair set as the current sampling pair to be processed;
S2, registering the current sampling pair to be processed by using a feature-based image registration algorithm to obtain registration parameters, and updating the camera parameters with the registration parameters;
step S3, sequentially performing projection transformation on the image pairs to be registered after the current calibration frame image pair and before the next calibration frame image pair by using the latest camera parameters obtained in the step S2 to obtain a series of registered target image sequence pairs, and completing the registration of the current calibration frame image;
step S4, selecting the next calibration frame image pair as the current sampling pair to be processed, and sequentially repeating the steps S2-S3 to complete the registration of the next calibration frame image;
step S5, repeating the step S4 until all the images to be registered are registered;
the step S2 specifically comprises the following steps:
step S21: the same feature point detection algorithm is utilized to detect feature points of the reference image and the image to be registered in the current sampling graph pair to be processed obtained in the step S1, and position and gray feature information of the two images are extracted;
step S22: selecting a feature descriptor algorithm, and constructing feature descriptors for all feature points of the reference image detected in step S21 to form the feature set of the reference image; constructing feature descriptors with the same feature descriptor algorithm for all feature points of the image to be registered detected in step S21 to form the feature set of the image to be registered;
step S23: selecting a certain feature point matching algorithm, and carrying out matching association on feature descriptors in a feature set of the reference image and a feature set of the image to be registered to obtain matched feature pairs;
step S24: determining registration parameters according to the geometrical relationship between the matched feature pairs;
step S25: and updating camera parameters by using the registration parameters obtained in the step S24, and performing projection transformation on the image to be registered in the current calibration frame image based on the latest camera parameters to obtain a registered target image of the current calibration frame image.
2. The method for real-time registration of a feature-based multi-source image sequence according to claim 1, wherein the plurality of sampling pairs are combined sequentially in time order in the following manner: the two images sampled at the same time instant form one pair, and the different pairs are arranged in chronological order.
3. The method for real-time registration of feature-based multi-source image sequences according to claim 2, wherein the specific process of solving the registration parameters in step S24 is as follows: an image transformation model is adopted, and the transformation parameters of the model are obtained from the positional correspondence between the matched feature points, yielding a transformation matrix, i.e. the registration parameters.
4. A method of real-time registration of a feature-based multi-source image sequence according to claim 3, wherein the image transformation model is chosen to be a similarity transformation model.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011069504.0A CN112308887B (en) | 2020-09-30 | 2020-09-30 | Multi-source image sequence real-time registration method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011069504.0A CN112308887B (en) | 2020-09-30 | 2020-09-30 | Multi-source image sequence real-time registration method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112308887A CN112308887A (en) | 2021-02-02 |
CN112308887B true CN112308887B (en) | 2024-03-22 |
Family
ID=74488708
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011069504.0A Active CN112308887B (en) | 2020-09-30 | 2020-09-30 | Multi-source image sequence real-time registration method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112308887B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113052260A (en) * | 2021-04-21 | 2021-06-29 | 合肥中科类脑智能技术有限公司 | Transformer substation foreign matter identification method and system based on image registration and target detection |
CN113610906B (en) * | 2021-08-06 | 2023-07-18 | 山西大学 | Multi-parallax image sequence registration method based on fusion image guidance |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101248671A (en) * | 2005-09-29 | 2008-08-20 | Samsung Electronics Co., Ltd. | Method of estimating a disparity vector, and apparatus for encoding and decoding multi-view pictures
CN101765863A (en) * | 2006-12-19 | 2010-06-30 | Koninklijke Philips Electronics N.V. | Temporal registration of medical data
CN101937565A (en) * | 2010-09-16 | 2011-01-05 | Shanghai Jiao Tong University | Dynamic image registration method based on moving-target trajectories
CN106327532A (en) * | 2016-08-31 | 2017-01-11 | Beijing Tianrui Kongjian Technology Co., Ltd. | Three-dimensional registration method for a single image
CN107194960A (en) * | 2017-05-22 | 2017-09-22 | Institute of Agricultural Resources and Regional Planning, Chinese Academy of Agricultural Sciences | Registration method for hyperspectral images
CN108294768A (en) * | 2017-12-29 | 2018-07-20 | Huazhong University of Science and Technology | X-ray angiocardiography subtraction method and system based on multi-parameter registration of sequence images
CN109146930A (en) * | 2018-09-20 | 2019-01-04 | Changzhou Campus of Hohai University | Infrared and visible light image registration method for power machine room equipment
CN109785370A (en) * | 2018-12-12 | 2019-05-21 | Nanjing Institute of Technology | Weak-texture image registration method based on a spatio-temporal sequence model
CN109829853A (en) * | 2019-01-18 | 2019-05-31 | University of Electronic Science and Technology of China | Unmanned aerial vehicle image stitching method
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8611692B2 (en) * | 2011-09-26 | 2013-12-17 | Northrop Grumman Systems Corporation | Automated image registration with varied amounts of a priori information using a minimum entropy method |
- 2020-09-30: Application CN202011069504.0A filed in China (CN); granted as patent CN112308887B, status: Active
Non-Patent Citations (4)
Title |
---|
A fast and high accuracy registration method for multi-source images; Yang, Fengbao; OPTIK; 2015-12-02; Vol. 126, No. 21; full text *
Review of remote sensing image registration techniques; Yu, Xian-chuan; Optics and Precision Engineering; 2014-06-05; Vol. 21, No. 11; full text *
Sequence image registration based on wavelet decomposition and improved multiple constraints; Xu, Zhigang; Chinese Journal of Scientific Instrument; 2011-10-31; Vol. 32, No. 10; full text *
Multispectral and panchromatic image fusion algorithm based on a local spatial linear recovery model; Zhang, Yifan; Journal of Northwestern Polytechnical University; 2008-02-28; Vol. 26, No. 1; full text *
Also Published As
Publication number | Publication date |
---|---|
CN112308887A (en) | 2021-02-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20220044375A1 (en) | Saliency Map Enhancement-Based Infrared and Visible Light Fusion Method | |
Mou et al. | A relation-augmented fully convolutional network for semantic segmentation in aerial scenes | |
CN111209810B (en) | Boundary frame segmentation supervision deep neural network architecture for accurately detecting pedestrians in real time through visible light and infrared images | |
Luo et al. | Thermal infrared image colorization for nighttime driving scenes with top-down guided attention | |
CN109685045B (en) | Moving target video tracking method and system | |
CN103810475B (en) | A kind of object recognition methods and device | Object recognition method and device | |
CN112308887B (en) | Multi-source image sequence real-time registration method | |
CN112257526B (en) | Action recognition method based on feature interactive learning and terminal equipment | |
CN105809626A (en) | Video image stitching method with adaptive light compensation | |
CN102495998B (en) | Static object detection method based on visual selective attention computation module | |
CN111310633A (en) | Parallel space-time attention pedestrian re-identification method based on video | |
CN112950502B (en) | Image processing method and device, electronic equipment and storage medium | |
CN103729620B (en) | Multi-view pedestrian detection method based on a multi-view Bayesian network | |
CN105354856A (en) | Human matching and positioning method and system based on MSER and ORB | |
US11636582B1 (en) | Stitching quality evaluation method and system and redundancy reduction method and system for low-altitude unmanned aerial vehicle remote sensing images | |
CN105374051B (en) | Anti-camera-shake video moving-object detection method for intelligent mobile terminals | |
CN112613568A (en) | Target identification method and device based on visible light and infrared multispectral image sequence | |
CN110390657A (en) | Image fusion method | |
CN103632131B (en) | Apparatus and method for extracting object | |
CN116188859A (en) | Tea disease unmanned aerial vehicle remote sensing monitoring method based on superdivision and detection network | |
CN111833384B (en) | Method and device for rapidly registering visible light and infrared images | |
CN103927517B (en) | Motion detection method based on human body global feature histogram entropies | |
CN110430400B (en) | Ground plane area detection method of binocular movable camera | |
CN112396637A (en) | Dynamic behavior identification method and system based on 3D neural network | |
CN108491796A (en) | Time-domain periodic point-target detection method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |