CN109146001B - Multi-view ISAR image fusion method - Google Patents
Multi-view ISAR image fusion method
- Publication number: CN109146001B (application CN201811071197.2A / CN201811071197A)
- Authority
- CN
- China
- Prior art keywords: ISAR, image, nth, superpixel, rigid transformation
- Legal status: Active (the legal status is an assumption and is not a legal conclusion)
Classifications
- G06F18/253 — Pattern recognition; Analysing; Fusion techniques of extracted features
- G06F18/23 — Pattern recognition; Analysing; Clustering techniques
- G06V10/267 — Image preprocessing; Segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds
Abstract
The invention discloses a multi-view ISAR image fusion method, which mainly solves the prior-art problems of redundant extracted feature points, complex processing, and heavy computation. The scheme is as follows: perform simple linear iterative clustering (SLIC) segmentation on N ISAR images to obtain superpixel coordinates X, Y and brightness information L; set a brightness threshold and retain the superpixel information whose L exceeds the threshold; select the first ISAR image as the reference image, and establish a rigid transformation relation between the nth ISAR image and the reference image from the retained parameters to obtain a transformation matrix B_n; set a cost function J_n between the nth ISAR image and the reference image; solve for the rigid transformation matrix B_n' that minimizes J_n, and compute its inverse matrix A_n; transform the nth ISAR image into the reference-image coordinate system according to A_n, and superpose all transformed ISAR images with the reference image to obtain the fused image. The extracted feature points are compact and the computation is small; the method can be used for three-dimensional image reconstruction, target recognition, and attitude estimation.
Description
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to an image fusion method which can be used for three-dimensional image reconstruction, target recognition and attitude estimation.
Background
Digital image fusion is a basic problem in computer vision, with wide application in three-dimensional image reconstruction, target recognition, pose estimation, and other areas. Digital image registration is the preprocessing stage of digital image fusion: registration matches and overlaps different images of the same target acquired at different imaging angles to generate a new interpretation of the target that cannot be obtained from any single image. Research on the digital image registration problem therefore has important significance and value for advancing multi-view image fusion technology.
At present, an ISAR image fusion method for a spatial target mainly includes two methods:
one class, represented at the signal-processing level by the RELAX algorithm, assumes distributions for the systematic error and the additive environmental noise, extracts scattering points, and matches and fuses the feature points; the shortcoming of this method is that it requires knowledge of ISAR imaging signal processing, the process is complex, and the computation is heavy;
the other class, represented at the information-processing level by the scale-invariant feature transform (SIFT) algorithm, extracts feature points for matching and fusion; however, this method processes only the intensity level of the image and does not use its structural information, so the extracted feature points are redundant and the computational load of the algorithm is large.
Disclosure of Invention
The invention aims to provide a multi-view ISAR image fusion method aiming at the defects of the prior art, so as to accurately extract effective characteristic points required by fusion, reduce the amount of calculation and improve the fusion efficiency.
The technical idea of the invention is as follows: perform superpixel simple linear iterative clustering SLIC segmentation on a series of ISAR images; convert the RGB colour space into the CIELab colour space to form three-dimensional feature vectors of the X, Y coordinates and brightness; construct a cost function from these three-dimensional feature vectors; solve for the minimum of the cost function to obtain the rigid rotation matrices between the series of ISAR images; and finally register and fuse all the ISAR images into one image. The implementation scheme comprises the following steps:
(1) performing superpixel simple linear iterative clustering SLIC segmentation on a series of N ISAR images to obtain superpixel coordinates X, Y and brightness information L; setting a brightness threshold according to the size of the ISAR image, and retaining the superpixel coordinates X, Y and brightness information L for which L exceeds the threshold, where N is greater than or equal to 2;
(2) selecting the first ISAR image from the series of N ISAR images as the reference image, and establishing a rigid transformation relation between the nth ISAR image and the reference image using the superpixel coordinates X, Y retained in (1) to obtain a transformation matrix B_n, n = 2, 3, ..., N;
(3) Setting a cost function J_n between the nth ISAR image and the reference image using the superpixel coordinates X, Y and brightness information L retained in (1) and the rigid transformation relation established in (2), with brightness weight

α = 2*(l/M)

where the first feature set is the retained superpixel-centre position and brightness information of the nth ISAR image, the second is the superpixel-centre coordinates and brightness information of the nth ISAR image obtained by the rigid transformation in step (2), α is the weight of the brightness information L, l is the side length of the image, M is the number of superpixels preset for the superpixel simple linear iterative clustering SLIC segmentation, p = 1, 2, ..., W_1, k = 1, 2, ..., W_n, W_1 is the number of superpixels retained for the reference image, W_n is the number of superpixels retained for the nth ISAR image, and * denotes multiplication;
(4) Solving, by the particle swarm optimization PSO algorithm, for the rigid transformation matrix B_n' that minimizes the cost function J_n, and solving for the inverse matrix A_n of the rigid transformation matrix B_n';
(5) Taking the reference image as the standard, transforming the nth ISAR image into the reference-image coordinate system according to the inverse matrix A_n by two-dimensional affine transformation with bicubic interpolation, and superposing all transformed ISAR images with the reference image by pixel summation to obtain the final fused image.
Compared with the prior art, the invention has the advantages that:
firstly, the ISAR images are segmented with superpixel simple linear iterative clustering SLIC, which takes the structural information of the image into account, helps to obtain the key structural information of the space target, and lays a foundation for subsequently obtaining an accurate matched fusion image;
secondly, the cost function is solved by particle swarm optimization PSO, which is fast and accurate, so that the registration and fusion of a series of two-dimensional ISAR images can be realized accurately.
Drawings
FIG. 1 is a flow chart of an implementation of the present invention;
FIG. 2 is an ISAR image obtained by simulation according to the present invention;
FIG. 3 is a segmentation graph obtained by performing superpixel simple linear iterative clustering segmentation on an ISAR image in the present invention;
fig. 4 is a multi-view image fusion diagram obtained by simulation of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, the implementation steps of the invention are as follows:
step 1, segmenting the ISAR image.
Image segmentation methods fall into two categories, boundary-based and region-based; the main methods include grey-threshold segmentation, boundary segmentation, texture segmentation, region growing, and the like. The invention adopts superpixel simple linear iterative clustering SLIC segmentation, which groups pixels by the similarity of features between them and replaces a large number of pixels with a small number of superpixels to express picture characteristics, greatly reducing the complexity of subsequent image processing. It is implemented as follows:
(1.1) performing superpixel simple linear iterative clustering SLIC segmentation on a series of N ISAR images:
(1.1a) reading a pixel matrix of each ISAR image;
(1.1b) initializing a clustering center;
(1.1c) reselecting the clustering center in the 3 x 3 neighborhood of the clustering center;
(1.1d) distributing a class label to each pixel point in the neighborhood around each cluster center;
(1.1e) for each pixel point, calculating the distance between the pixel point and the seed point, and taking the seed point corresponding to the minimum value as the clustering center of the pixel point;
(1.1f) iterating steps (1.1d) and (1.1e) until the clustering centre of each pixel point no longer changes, so as to obtain the superpixel coordinates X, Y and brightness information L, where N is greater than or equal to 2;
(1.2) setting a brightness threshold according to the size of the ISAR image, and retaining the superpixel coordinates X, Y and brightness information L for which L exceeds the threshold.
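As an illustration of step 1, the grid initialisation, joint spatial-plus-brightness assignment, and centre update can be sketched as below. This is a toy single-channel stand-in for full SLIC, not the patent's implementation; the function names, the compactness parameter, and the cluster count are illustrative assumptions.

```python
import numpy as np

def slic_like_centers(img, n_seg=16, n_iter=10, compactness=0.1):
    """Toy SLIC-style clustering on a single-channel image.

    Mirrors steps (1.1b)-(1.1f): centres initialised on a regular grid,
    pixels reassigned by a joint spatial + brightness distance, centres
    updated, iterated until (approximately) stable."""
    h, w = img.shape
    side = int(round(np.sqrt(n_seg)))
    cx, cy = np.meshgrid(np.linspace(0, w - 1, side), np.linspace(0, h - 1, side))
    cx, cy = cx.ravel(), cy.ravel()
    centers = np.stack([cx, cy, img[cy.astype(int), cx.astype(int)]], axis=1)
    X, Y = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([X.ravel(), Y.ravel(), img.ravel()], axis=1).astype(float)
    for _ in range(n_iter):
        spatial = ((pix[:, None, :2] - centers[None, :, :2]) ** 2).sum(-1)
        bright = (pix[:, None, 2] - centers[None, :, 2]) ** 2
        label = (spatial + bright / compactness).argmin(axis=1)
        for j in range(len(centers)):
            m = label == j
            if m.any():
                centers[j] = pix[m].mean(axis=0)
    return centers  # columns: X, Y, L (superpixel centre coords + mean brightness)

def keep_bright(centers, thresh):
    """Step (1.2): retain only superpixel centres whose brightness L exceeds the threshold."""
    return centers[centers[:, 2] > thresh]
```

A bright target on a dark background yields a small set of retained centres, which is the compact feature set the later registration steps operate on.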
Step 2, establishing a rigid transformation relation between the images according to the segmentation result of step 1 to obtain a transformation matrix.
Transformations in computer graphics comprise two-dimensional and three-dimensional rotation transformations; the rigid transformation relation established by the invention is a rotation about an arbitrary point in the two-dimensional case. It is implemented as follows:
(2.1) selecting the first ISAR image from the series of N ISAR images as the reference image, and establishing the rigid transformation relation between the nth ISAR image and the reference image using the superpixel coordinates X, Y and brightness information L retained in step 1, expressed as follows:

x~ = x*cosθ_n - y*sinθ_n + x_n0
y~ = x*sinθ_n + y*cosθ_n + y_n0
L~ = L

where (x, y, L)^T is a superpixel-centre coordinate and brightness vector of the reference image obtained in step 1, θ_n is the rotation angle of the nth ISAR image relative to the reference image, x_n0 is the translation abscissa, y_n0 is the translation ordinate, (x~, y~, L~)^T is the corresponding superpixel-centre coordinate and brightness vector of the nth ISAR image obtained by the rigid transformation, and T is the transposition operation of the matrix;
(2.2) From the rigid transformation relation of (2.1), obtain the rigid transformation matrix B_n in homogeneous form:

B_n = [ cosθ_n   -sinθ_n   x_n0 ]
      [ sinθ_n    cosθ_n   y_n0 ]
      [   0          0       1  ]

where θ_n is the rotation angle of the nth ISAR image relative to the reference image, x_n0 is the translation abscissa, and y_n0 is the translation ordinate.
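The homogeneous form above can be sketched directly. The 3x3 layout is the standard way to combine a 2-D rotation θ_n with a translation (x_n0, y_n0) in one matrix, so that B_n is invertible and its inverse A_n (used in step 5) is just a matrix inverse; the function names here are illustrative.

```python
import numpy as np

def rigid_matrix(theta, x0, y0):
    """Homogeneous 2-D rigid transform B_n: rotation by theta, then
    translation (x0, y0), as in step (2.2)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x0],
                     [s,  c, y0],
                     [0.0, 0.0, 1.0]])

def apply_rigid(B, pts):
    """Map superpixel centres (x, y) through B; brightness L is unchanged
    by a rigid transform, so only the coordinates are mapped."""
    xy1 = np.column_stack([pts, np.ones(len(pts))])
    return (B @ xy1.T).T[:, :2]
```

For example, a 90-degree rotation with translation (1, 2) sends the point (1, 0) to (1, 3), and applying the inverse matrix recovers the original point, which is exactly the A_n = B_n^-1 relationship used later.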
Step 3, setting a cost function between the nth ISAR image and the reference image.
The specific form of a cost function depends on the particular problem; cost functions are typically used for parameter estimation and are functions of the difference between an estimate and the true value. The cost function of this example is set from the superpixel-centre coordinates and brightness information retained in step 1 and the superpixel-centre coordinates and brightness information of the nth ISAR image obtained by the rigid transformation in step 2. It is implemented as follows:

(3.1) Weight the brightness component of each superpixel-centre feature vector by

α = 2*(l/M)

where the first feature set is the superpixel-centre position and brightness information of the nth ISAR image retained in step 1, the second is the superpixel-centre coordinates and brightness information obtained by the rigid transformation in step 2, α is the weight of the brightness information L, l is the side length of the image, M is the number of superpixels preset for the superpixel simple linear iterative clustering SLIC segmentation, and * denotes multiplication;
(3.2) Set the cost function J_n between the nth ISAR image and the reference image according to the result of (3.1), where p = 1, 2, ..., W_1, k = 1, 2, ..., W_n, W_1 is the number of superpixels retained for the reference image, and W_n is the number of superpixels retained for the nth ISAR image.
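The cost of step 3 can be sketched as follows. The patent text does not reproduce the closed form of J_n, so two points here are assumptions: d_pk is taken as the Euclidean distance between the weighted feature vectors (X, Y, α*L), and J_n as the sum over one set of centres of the distance to the nearest centre in the other set.

```python
import numpy as np

def cost_Jn(kept_nth, transformed_ref, alpha):
    """Sketch of the cost J_n between the retained superpixel centres of
    the nth image and the rigidly transformed reference centres.

    Each centre is the feature vector (X, Y, alpha*L); d_pk is the
    Euclidean distance between feature vectors, and J_n sums each
    centre's distance to its nearest counterpart (an assumed reading)."""
    w = np.array([1.0, 1.0, alpha])  # alpha = 2*(l/M) weights the brightness
    d = np.linalg.norm(kept_nth[:, None, :] * w - transformed_ref[None, :, :] * w, axis=2)
    return d.min(axis=1).sum()
```

Under this reading, J_n is zero when the transformed reference centres coincide with the nth image's centres and grows with misalignment, which is what the PSO step minimizes.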
Step 4, utilizing the cost function to optimize the rigid transformation matrix B_n.
Methods for optimizing the rigid transformation matrix, i.e., for optimizing the cost function, fall into two classes: constrained and unconstrained optimization. The invention adopts particle swarm optimization PSO, a method for unconstrained problems, to solve for the rigid transformation matrix B_n' that minimizes the cost function J_n. It is implemented as follows:
(4.1) initializing the position and velocity of the particles;
(4.2) taking the cost function J_n as the fitness, calculating the fitness of each particle;
(4.3) obtaining individual historical optimal positions of the particles and historical optimal positions of the group;
(4.4) updating the position and the speed of the particles by combining the individual historical optimal position and the historical optimal position of the group;
(4.5) iterating steps (4.2)-(4.4) and selecting the minimum fitness value to obtain the three-dimensional rigid transformation matrix B_n' that minimizes the cost function J_n, i.e., the optimized three-dimensional rigid transformation matrix B_n'.
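Steps (4.1)-(4.5) can be sketched as a bare-bones PSO over box bounds. The inertia and acceleration hyper-parameters (w, c1, c2) are common textbook defaults, not values from the patent; in the invention the search space would be the three parameters (θ_n, x_n0, y_n0) and f the cost J_n.

```python
import numpy as np

def pso_min(f, bounds, n_particles=30, n_iter=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimiser mirroring steps (4.1)-(4.5):
    initialise positions/velocities, evaluate fitness f, track personal
    and global bests, update velocities and positions, iterate."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    dim = len(lo)
    x = rng.uniform(lo, hi, (n_particles, dim))          # (4.1) positions
    v = np.zeros_like(x)                                 # (4.1) velocities
    pbest, pval = x.copy(), np.array([f(p) for p in x])  # (4.2)-(4.3) personal bests
    g = pbest[pval.argmin()].copy()                      # (4.3) global best
    for _ in range(n_iter):                              # (4.5) iterate
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)  # (4.4) update
        x = np.clip(x + v, lo, hi)
        val = np.array([f(p) for p in x])
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()
```

On a simple test function such as the 3-D sphere f(p) = Σp², the swarm converges close to the minimiser within the default budget, which illustrates why PSO suits the low-dimensional (θ_n, x_n0, y_n0) search here.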
Step 5, performing ISAR image fusion according to the optimized three-dimensional rigid transformation matrix B_n'.
(5.1) Invert the optimized three-dimensional rigid transformation matrix B_n' to obtain the inverse matrix A_n of the three-dimensional rigid transformation matrix;
(5.2) Transform the nth ISAR image into the reference-image coordinate system according to the inverse matrix A_n by interpolation: among existing interpolation methods, the invention adopts, but is not limited to, two-dimensional affine transformation with bicubic interpolation;
(5.3) Superpose all the transformed ISAR images with the reference image by pixel summation to obtain the final fused image.
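Steps (5.1)-(5.3) can be sketched as an inverse-mapping warp followed by pixel summation. Nearest-neighbour sampling stands in for the patent's bicubic interpolation to keep the sketch dependency-free, and the pull-back convention (M maps each output pixel to its source coordinate) is an implementation choice: if A_n maps nth-image coordinates into the reference frame, the matrix passed here is its inverse.

```python
import numpy as np

def warp_pull(img, M):
    """Resample img onto the reference grid: the homogeneous 3x3 matrix M
    maps each output pixel coordinate (x, y, 1) to its source coordinate
    in img (pull-back), with nearest-neighbour sampling."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src = M @ np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    sx = np.rint(src[0]).astype(int)
    sy = np.rint(src[1]).astype(int)
    ok = (sx >= 0) & (sx < w) & (sy >= 0) & (sy < h)  # ignore out-of-frame pixels
    out = np.zeros(h * w)
    out[ok] = img.ravel()[sy[ok] * w + sx[ok]]
    return out.reshape(h, w)

def fuse(reference, others, mats):
    """Step (5.3): superpose the reference image and all warped images
    by pixel summation."""
    fused = reference.astype(float).copy()
    for img, M in zip(others, mats):
        fused += warp_pull(img, M)
    return fused
```

With the identity matrix the warp returns the image unchanged, so fusing an image with itself simply doubles every pixel, which matches the plain pixel-summation rule of (5.3).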
The effects of the present invention can be further demonstrated by the following simulation experiments.
Simulation conditions:
The invention uses MATLAB software to simulate the ISAR system; the parameters are shown in Table 1:
TABLE 1 ISAR System principal parameters
The superpixel simple linear iterative clustering SLIC segmentation adopted by the simulation of the invention has the parameters shown in Table 2:
TABLE 2 superpixel simple Linear iterative clustering SLIC segmentation principal parameters
The particle swarm optimization PSO adopted by the simulation of the invention has the parameters shown in the table 3:
TABLE 3 PSO Primary parameters of particle swarm optimization Algorithm
(II) simulation content and result:
simulation 1: the ISAR system was simulated according to the simulation parameters in table 1 to obtain 6 ISAR simulation graphs "tiangong-one" at different viewing angles, as shown in fig. 2. As can be seen from fig. 2, the simulated 6 ISAR simulation diagrams at different viewing angles have different features: the position of the Tiangong I in the image is different, the posture angle is different, and the strong scattering part is different, so that the requirement of multiple visual angles is met.
Simulation 2: the 6 ISAR simulation images obtained in simulation 1 were each subjected to superpixel simple linear iterative clustering SLIC segmentation according to the simulation parameters in Table 2; the results are shown in fig. 3. As can be seen from fig. 3, the segmentation results show that the invention takes the structural information of the image into account to obtain the key structural information of the space target, laying a foundation for accurately obtaining the matched fusion image subsequently.
simulation 3: optimizing a cost function J by using a particle swarm optimization PSO according to the simulation parameters of the table 3nThe method of the invention is used for multi-view image fusion, and the final result obtained by fusing the ISAR simulation graphs of 'Tiangong I' under 6 different views obtained in the simulation 1 is shown in figure 4. As can be seen from fig. 4, the final fusion map performs image registration well on the 6 maps obtained in simulation 1, and the strong scattering site features of the 6 maps are fused, so that a fusion effect is achieved.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.
Claims (4)
1. A method for realizing multi-view ISAR image fusion is characterized by comprising the following steps:
(1) performing superpixel simple linear iterative clustering SLIC segmentation on a series of N ISAR images to obtain superpixel coordinates X, Y and brightness information L; setting a brightness threshold according to the size of the ISAR image, and retaining the superpixel coordinates X, Y and brightness information L for which L exceeds the threshold, where N is greater than or equal to 2;
(2) selecting the first ISAR image from the series of N ISAR images as the reference image, and establishing a rigid transformation relation between the nth ISAR image and the reference image using the superpixel coordinates X, Y and brightness information L retained in (1) to obtain a transformation matrix B_n, n = 2, 3, ..., N, represented as follows:

B_n = [ cosθ_n   -sinθ_n   x_n0 ]
      [ sinθ_n    cosθ_n   y_n0 ]
      [   0          0       1  ]

where θ_n is the rotation angle of the nth ISAR image relative to the reference image, x_n0 is the translation abscissa, and y_n0 is the translation ordinate;
(3) setting a cost function J_n between the nth ISAR image and the reference image using the superpixel coordinates X, Y and brightness information L retained in (1) and the rigid transformation relation established in (2), with brightness weight

α = 2*(l/M)

where the first feature set is the retained superpixel-centre position and brightness information of the nth ISAR image, the second is the superpixel-centre coordinates and brightness information of the nth ISAR image obtained by the rigid transformation in (2), d_pk denotes the point-matching loss, α is the weight of the brightness information L, l is the side length of the image, M is the number of superpixels preset for the superpixel simple linear iterative clustering SLIC segmentation, p = 1, 2, ..., W_1, k = 1, 2, ..., W_n, W_1 is the number of superpixels retained for the reference image, W_n is the number of superpixels retained for the nth ISAR image, and * denotes multiplication;
(4) solving, by the particle swarm optimization PSO algorithm, for the rigid transformation matrix B_n' that minimizes the cost function J_n, and solving for the inverse matrix A_n of the rigid transformation matrix B_n';
(5) taking the reference image as the standard, transforming the nth ISAR image into the reference-image coordinate system according to the inverse matrix A_n by two-dimensional affine transformation with bicubic interpolation, and superposing all transformed ISAR images with the reference image by pixel summation to obtain the final fused image.
2. The method of claim 1, wherein the superpixel simple linear iterative clustering SLIC segmentation is performed on the series of N ISAR images in (1) by:
(1a) reading a pixel matrix of each ISAR image;
(1b) initializing a clustering center;
(1c) reselecting a clustering center in a 3 x 3 neighborhood of the clustering center;
(1d) distributing a class label to each pixel point in the neighborhood around each clustering center;
(1e) for each pixel point, calculating the distance between the pixel point and the seed point, and taking the seed point corresponding to the minimum value as the clustering center of the pixel point;
(1f) iterating steps (1d) and (1e) until the clustering centre of each pixel point no longer changes.
3. The method of claim 1, wherein the rigid transformation relation between the nth ISAR image and the reference image established in (2) is expressed as follows:

x~ = x*cosθ_n - y*sinθ_n + x_n0
y~ = x*sinθ_n + y*cosθ_n + y_n0
L~ = L

where (x, y, L)^T is a superpixel-centre coordinate and brightness vector of the reference image obtained in (1), θ_n is the rotation angle of the nth ISAR image relative to the reference image, x_n0 is the translation abscissa, y_n0 is the translation ordinate, (x~, y~, L~)^T is the corresponding superpixel-centre coordinate and brightness vector of the nth ISAR image obtained by the rigid transformation, and T is the transposition operation of the matrix.
4. The method of claim 1, wherein in (4) the particle swarm optimization PSO algorithm solves for the three-dimensional rigid transformation matrix B_n' that minimizes the cost function J_n, implemented as follows:
(4a) initializing the position and speed of the particles;
(4b) taking the cost function J_n as the fitness, calculating the fitness of each particle;
(4c) acquiring individual historical optimal positions of the particles and historical optimal positions of the groups;
(4d) updating the position and the speed of the particles by combining the individual historical optimal position and the historical optimal position of the group;
(4e) iterating steps (4b)-(4d) and selecting the minimum fitness value to obtain the three-dimensional rigid transformation matrix B_n' that minimizes the cost function J_n.
Priority Applications (1)
- CN201811071197.2A — priority and filing date 2018-09-14 — Multi-view ISAR image fusion method
Publications (2)
- CN109146001A, published 2019-01-04
- CN109146001B, granted 2021-09-10
Family: ID=64825147 (one family application, CN201811071197.2A, filed 2018-09-14, status Active)
Legal Events
- PB01: Publication
- SE01: Entry into force of request for substantive examination
- GR01: Patent grant