CN105488778A - Multi-viewpoint image fusion method based on block SPCA - Google Patents
- Publication number
- CN105488778A (application CN201510819511.0A)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4038—Image mosaicing, e.g. composing plane images from plane sub-images
Abstract
The invention discloses a multi-viewpoint image fusion method based on block SPCA (steerable principal component analysis). The method comprises the following steps: (1) dividing a color image into single-channel images, segmenting each single-channel image into independent sub-blocks, and performing SPCA on each sub-block to obtain a principal component analysis image; (2) calculating, from the principal component analysis images, the pixel-level transformation relation between two adjacent multi-viewpoint color images, and registering the two images according to this relation; (3) performing wavelet-based multi-viewpoint image fusion on the two registered images to obtain the fused image. The method uses SPCA to refine the image processing, adds the necessary registration in the image preprocessing stage, and shows advantages in both image fusion performance and computational complexity.
Description
Technical field
The present invention relates to the field of multi-viewpoint video acquisition, and in particular to a multi-viewpoint image fusion method based on block SPCA.
Background technology
Traditional image fusion is divided into pixel-level fusion, feature-level fusion and decision-level fusion. These categories focus on the processing level of the image but ignore the characteristics of the methods themselves. In practical applications, for convenience of description and study, people prefer to name the various methods after the means used in the fusion process.
Researchers have therefore proposed image fusion based on wavelet transform, pyramid decomposition, IHS space, morphology, statistics, neural networks, PCA, and so on. Some methods that are used less often but give fairly good results have also been proposed, such as the mixed-pixel decomposition method, the multi-wavelength digital holography method, the gradient method and the fuzzy-technique method. In addition, multi-viewpoint images, with their unique advantages, play an irreplaceable role in practical applications.
The storage and transport of multi-viewpoint images is a difficult problem, so realizing fusion compression of multi-viewpoint images has very important practical significance. Because multi-viewpoint images have high resolution and high information content, reducing their dimensionality in advance, for the convenience of subsequent processing, is also a focus of research. Meanwhile, since displacement and angle transformations exist between different images, the images need to be registered before fusion compression and then processed according to the specific situation.
In real life, for a certain scene, a group of observation images can be obtained from different moments or angles; the whole of these images is called a multi-viewpoint image. Through multi-viewpoint image technology people realize target tracking, face recognition, gesture estimation and multi-user interaction. Multi-viewpoint video, developed on the basis of multi-viewpoint images, can present the real scene to the observer more vividly and with a stronger sense of reality. However, as the number of cameras increases, the data volume of multi-viewpoint video grows exponentially, which brings great difficulty to the storage and transmission of video data. Efficient compression of multi-viewpoint video has therefore become a problem that must be solved.
At present, the compression of multi-viewpoint video is mainly centered on redundancy removal, but from the viewing mode of the human eye, compression centered on fusion is closer to human visual characteristics. Because of the derived relation between multi-viewpoint video and multi-viewpoint images, the fusion of multi-viewpoint video is, in the end, still the fusion of multi-viewpoint images. Multi-viewpoint image fusion requires that the information contained in each original viewpoint image be merged as completely as possible into a single new image, so that people can obtain a more vivid and intuitive understanding of the original scene from the newly generated image. From the viewpoint of sets, if an image I of size r × c is regarded as a finite data set, then the pixel value I(i, j) of every pixel (i, j), i ∈ {1, 2, ..., r}, j ∈ {1, 2, ..., c}, is an element of this set, i.e. I = {I(i, j) | 1 ≤ i ≤ r, 1 ≤ j ≤ c}.
When the image is very large, the set I contains many data points. When each element of I is regarded as a one-dimensional statistical feature, the dimensionality of I is inevitably very high. Practice shows that directly fusing an image set I of very high feature dimension with traditional methods, although it may give fairly good results, generally has high computational complexity, which is undesirable for applications that pursue efficiency. So, before fusion processing, it is preferable to reduce the dimensionality of the image on the premise of not losing the main information of the original large-size, high-resolution image. Because of the reduced number of data points, the processed image brings great convenience to the subsequent computation.
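Concretely, a small NumPy sketch (the image size here is purely illustrative) shows how large this feature dimension becomes when an r × c image is flattened into the set of its pixel values:

```python
import numpy as np

# A modest 480 x 640 gray-level image already yields a
# 307200-dimensional feature vector when flattened row by row.
r, c = 480, 640
image = np.zeros((r, c), dtype=np.uint8)  # placeholder pixel data

vector = image.reshape(-1)  # each I(i, j) becomes one element of the flat set
print(vector.shape[0])      # N = r * c = 307200
```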
Multi-viewpoint images, with their unique advantages, play an irreplaceable role in practical applications, but their storage and transport is a difficult problem, so realizing multi-viewpoint image fusion has strong practical value.
Because multi-viewpoint images have high resolution and high information content, the images to be fused are first reduced in dimensionality for the convenience of the subsequent fusion computation. Meanwhile, since translation, rotation, enlargement and reduction transformations exist between different images, efficient registration is needed to remove the influence of these transformations.
Summary of the invention
The present invention is based on SPCA (steerable principal component analysis [Cédric Vonesch, Frédéric Stauber, and Michael Unser. Steerable PCA for Rotation-Invariant Image Recognition. SIAM J. Imaging Sciences, Vol. 8, No. 3, pp. 1857–1873]), combined with image transformation and registration techniques, to perform fusion processing on multi-viewpoint images, and proposes a multi-viewpoint image fusion method based on block SPCA to meet the demands of multi-viewpoint image fusion.
A multi-viewpoint image fusion method based on block SPCA, characterized by comprising:
(1) dividing a color image into individual color-channel images, segmenting each color-channel image into independent sub-blocks, and performing SPCA on each sub-block to obtain a principal component analysis image;
(2) calculating, from the principal component analysis images, the pixel-level transformation relation between two adjacent multi-viewpoint color images, and registering the two adjacent multi-viewpoint color images according to the transformation relation;
the transformation relation comprises translation, scaling and angle transformation;
(3) performing wavelet-based multi-viewpoint image fusion on the two registered adjacent multi-viewpoint color images to obtain the fused image.
In step (1), because the images to be fused are generally large, performing the fusion (matrix) operation directly involves a very large amount of computation. By segmenting the large image to be fused into independent sub-blocks before applying the SPCA dimensionality reduction, the total computational complexity of the fusion decreases even though the number of individual computations increases. In this way the accuracy of the whole-image method is retained while the computational complexity is greatly reduced.
Because of changes of illumination, position, etc. at shooting time, the pixels of two adjacent multi-viewpoint images differ considerably in color and position; the positional changes include translation, scaling, rotation, etc. These relations directly affect registration, so in step (2) a matrix describing them is obtained first; registration of adjacent images with this position information fixed then achieves higher precision.
Although some regions of two adjacent images show the same scene, because of the different shooting viewpoints the brightness and color information they show still differ greatly, so the pixel values of the corresponding parts of the two images cannot simply be weighted and averaged.
Preferably, in order to make the pixels of the fused image transition smoothly at the boundary, in step (3), before fusion, weighted interpolation is performed with a 3 × 3 operator S, starting from the junction of the two registered adjacent multi-viewpoint color images.
The present invention adopts wavelet-based multi-viewpoint image fusion: in step (3), an N-level wavelet decomposition is performed on the color images during fusion, giving (3N+1) different frequency bands, comprising 3N high-frequency sub-images and 1 low-frequency sub-image.
In step (3), for the high-frequency part, the coefficient whose wavelet decomposition coefficient has the larger absolute value in the two color images is taken directly as the decomposition coefficient of the fused image.
In step (3), for the low-frequency part, a matching-degree threshold T is set, the local deviation matching degree of the two color images to be fused is calculated at each point, and the corresponding fusion strategy is adopted according to the relation between the local deviation matching degree and the threshold T.
After this processing is completed, the desired wavelet-transform-based fused image is obtained.
The block-SPCA multi-viewpoint image fusion method of the present invention uses SPCA to refine the image processing, takes the spatial correlation of two-dimensional data into account, and works well for gray-level image dimensionality reduction. Because displacement differences exist between the different viewpoints of conventional multi-viewpoint images, the necessary registration and (steerable) projective transformation operations are added in the image preprocessing stage. The method of the invention therefore shows advantages in both image fusion performance and computational complexity.
Embodiment
The present invention is described in detail below with reference to a specific embodiment.
The multi-viewpoint image fusion method of this embodiment, based on block SPCA (steerable principal component analysis), comprises the following steps:
(1) SPCA blocking of single-channel images: because the images to be fused are generally large, performing the fusion (matrix) operation directly involves a very large amount of computation. By segmenting the large image to be fused into independent small sub-images before applying the SPCA dimensionality reduction, the total computational complexity of the fusion decreases even though the number of individual computations increases. In this way the accuracy of the whole-image method is retained while the computational complexity is greatly reduced.
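The blocking step can be sketched as follows (the block size is chosen only for illustration; the method itself requires identically sized sub-blocks):

```python
import numpy as np

def split_into_blocks(channel, block_h, block_w):
    """Split a single-channel image into equal-sized sub-blocks A_i.

    Assumes the image dimensions are exact multiples of the block size,
    since the method requires identically sized blocks.
    """
    h, w = channel.shape
    assert h % block_h == 0 and w % block_w == 0
    return (channel
            .reshape(h // block_h, block_h, w // block_w, block_w)
            .swapaxes(1, 2)
            .reshape(-1, block_h, block_w))

channel = np.arange(64, dtype=float).reshape(8, 8)  # toy 8 x 8 channel image
blocks = split_into_blocks(channel, 4, 4)
print(blocks.shape)  # n = 4 blocks of size 4 x 4 -> (4, 4, 4)
```

Each SPCA dimensionality reduction then operates on one sub-block at a time instead of on the whole image matrix.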
PCA is a classical manifold-learning method that reduces the amount of computation by extracting principal features from sample data. Suppose there is a sample set S = {I_1, I_2, ..., I_K} of K images, and every image I_k (k = 1, 2, ..., K) has size r × c. First, all pixels of I_k are rearranged row by row into a row vector s_k of size 1 × N, where N = r × c. Then all the s_k form a new matrix A = [s_1 s_2 ... s_K]^T. Singular value decomposition is applied to A to obtain the eigenvectors α corresponding to the d largest eigenvalues of A; finally, projecting each s_k onto these vectors yields a d-dimensional representation of the original image. This is the PCA method for gray-level images.
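The gray-level PCA just described can be sketched in NumPy (toy data; the target dimension d is an arbitrary illustrative choice):

```python
import numpy as np

def pca_project(images, d):
    """Project K flattened r x c images onto their d leading principal axes.

    images: array of shape (K, r, c); returns the (K, d) representations.
    """
    K = images.shape[0]
    A = images.reshape(K, -1)            # each row s_k is one image: A = [s_1 ... s_K]^T
    A_centered = A - A.mean(axis=0)      # center the sample set
    # The rows of Vt from the SVD of A are the eigenvectors of A^T A.
    _, _, Vt = np.linalg.svd(A_centered, full_matrices=False)
    components = Vt[:d]                  # eigenvectors of the d largest eigenvalues
    return A_centered @ components.T     # d-dimensional statement of each image

rng = np.random.default_rng(0)
imgs = rng.standard_normal((10, 6, 8))   # K = 10 toy images of size 6 x 8
low_dim = pca_project(imgs, d=3)
print(low_dim.shape)  # (10, 3)
```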
The SPCA adopted by the present invention is an improvement of PCA: whereas PCA starts from one-dimensional vectors, SPCA processes the data as two-dimensional matrices. Because of this difference, SPCA largely preserves the spatial correlation between the data and is insensitive to changes of the image such as translation and rotation. In theory, SPCA could replace PCA for processing every single-channel image to be fused. However, because the images to be fused are generally large, the matrix operations of direct fusion involve a very large amount of computation. By segmenting the large image into independent small sub-images before applying SPCA, the total computational complexity decreases even though the number of computations appears to increase; the accuracy of the whole-image method is retained while the computational cost is greatly reduced. In the concrete operation, the color image is divided into individual color-channel images, each channel image A is divided into n sub-blocks of identical size, the i-th sub-block being denoted A_i, i ∈ {1, 2, ..., n}. The steps of the block-SPCA method are then as follows:
1) compute the mean Ā of the image sample set A_1, A_2, ..., A_n;
2) for all i ∈ {1, 2, ..., n}, compute the centered blocks Ã_i = A_i − Ā;
3) let L_0 ← (E_d, O)^T, where E_d is the d × d identity matrix, d is the dimension to which the image is expected to be reduced, and O is a zero matrix;
4) let k = 0 and initialize the image root-mean-square error after SPCA inverse transformation, RMSE(k) ← ∞;
5) from M_R = Σ_i Ã_i^T L_k L_k^T Ã_i, compute the eigenvectors corresponding to the d largest eigenvalues of the matrix M_R, with L_0 serving as L_k when k = 0;
6) let k = k + 1 and assign these eigenvectors to R_k;
7) from M_L = Σ_i Ã_i R_k R_k^T Ã_i^T, compute the eigenvectors corresponding to the d largest eigenvalues of the matrix M_L;
8) assign these eigenvectors to L_k;
9) compute RMSE(k) = ( (1/n) Σ_i || Ã_i − L_k L_k^T Ã_i R_k R_k^T ||_F^2 )^(1/2), where || · ||_F denotes the F-norm of a matrix;
10) if RMSE(k−1) − RMSE(k) ≤ η, go to 11); otherwise jump back to 5) and continue, where η is a preset threshold;
11) assign the transformation matrices L_{r×d} and R_{c×d}, i.e. L ← L_k and R ← R_k respectively;
12) for each i ∈ {1, 2, ..., n}, compute by the formula B_i = L^T A_i R the gray projection of the original gray image after transformation (i.e. the principal component analysis image) and return it.
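A minimal sketch of such a two-sided blockwise projection, assuming a standard GLRAM-style alternating update (the update formulas and convergence check here are illustrative reconstructions, not the patented implementation):

```python
import numpy as np

def block_spca(blocks, d, eta=1e-6, max_iter=50):
    """Two-sided projection of image blocks A_i onto d x d subspaces.

    blocks: array (n, r, c). Returns the projections B_i = L^T A_i R of
    shape (n, d, d) together with the transforms L (r x d) and R (c x d).
    """
    n, r, c = blocks.shape
    A = blocks - blocks.mean(axis=0)         # centered blocks, as in step 2)
    L = np.eye(r)[:, :d]                     # L_0 = (E_d, O)^T
    prev_err = np.inf
    for _ in range(max_iter):
        M_R = sum(Ai.T @ L @ L.T @ Ai for Ai in A)
        _, vecs = np.linalg.eigh(M_R)        # eigh returns ascending eigenvalues
        R = vecs[:, -d:]                     # eigenvectors of d largest eigenvalues
        M_L = sum(Ai @ R @ R.T @ Ai.T for Ai in A)
        _, vecs = np.linalg.eigh(M_L)
        L = vecs[:, -d:]
        err = np.sqrt(sum(np.linalg.norm(Ai - L @ L.T @ Ai @ R @ R.T, 'fro') ** 2
                          for Ai in A) / n)  # RMSE of the reconstruction
        if prev_err - err <= eta:            # step 10): stop when gain <= eta
            break
        prev_err = err
    B = np.stack([L.T @ Ai @ R for Ai in blocks])  # step 12): gray projections
    return B, L, R

rng = np.random.default_rng(1)
blocks = rng.standard_normal((6, 8, 8))      # n = 6 toy blocks of size 8 x 8
B, L, R = block_spca(blocks, d=3)
print(B.shape, L.shape, R.shape)  # (6, 3, 3) (8, 3) (8, 3)
```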
(2) Image registration: because of changes of illumination, position, etc. at shooting time, the pixels of two adjacent multi-viewpoint images differ considerably in color and position; the positional changes include translation, scaling, rotation, etc. These relations directly affect registration, so the matrix describing them is obtained first; registration of adjacent images with this position information fixed then achieves higher precision.
Suppose there are two adjacent viewpoint images I1 and I2 to be fused. Then the transformation from any point (x1, y1) in I1 to the corresponding point (x2, y2) in I2 can be described by the transformation formula [7]
λ (x2, y2, 1)^T = H (x1, y1, 1)^T (2)
where λ is the scale of the transformation and a–h are the coefficients of the associated transformation. The homography matrix H is defined as
H = | a b c |
    | d e f |
    | g h 1 |
From formula (2):
λ·x2 = a·x1 + b·y1 + c, λ·y2 = d·x1 + e·y1 + f, λ = g·x1 + h·y1 + 1 (3)
For convenience of processing, when the scale is unchanged, only the horizontal and vertical displacement between pixels is considered. Then a = e = 1, λ = 1, b = d = g = h = 0, and at the same time
x2 = x1 + c, y2 = y1 + f (4)
where c and f represent the horizontal and vertical displacement of the pixels respectively.
Registration can then be carried out according to formula (4).
In image registration, coarse registration considers only the translation between the pixels of adjacent images. When m × n represents the size of Image1 and Image2, their overlapping regions can be roughly calculated as (1:(m−f), 1:(n−c)) and ((1+f):m, (1+c):n).
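Under the translation-only model of formula (4), the overlap computation can be sketched directly (integer displacements f and c are assumed; 0-based indexing is used here instead of the text's 1-based form):

```python
import numpy as np

def overlap_regions(shape, f, c):
    """Overlapping slices of two m x n images related by the pure
    translation x2 = x1 + c, y2 = y1 + f of formula (4)."""
    m, n = shape
    region1 = (slice(0, m - f), slice(0, n - c))  # (1:(m-f), 1:(n-c)) in 1-based form
    region2 = (slice(f, m), slice(c, n))          # ((1+f):m, (1+c):n)
    return region1, region2

# Toy check: build img2 as img1 shifted by (f, c); the two regions coincide.
f, c = 2, 1                                       # vertical / horizontal displacement
img1 = np.arange(30.0).reshape(5, 6)
img2 = np.zeros_like(img1)
img2[f:, c:] = img1[:5 - f, :6 - c]

r1, r2 = overlap_regions(img1.shape, f, c)
print(np.array_equal(img1[r1], img2[r2]))  # True
```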
(3) Image fusion: although some regions of two adjacent images show the same scene, because of the different shooting viewpoints the brightness and color information they show still differ greatly, so the pixel values of the corresponding parts of the two images cannot simply be weighted and averaged.
In order to make the pixels of the fused image transition smoothly at the boundary, weighted interpolation is performed, starting from the junction of the images, with a 3 × 3 operator S.
The present invention fuses the overlap region with the idea of the wavelet transform: first an N-level wavelet decomposition is performed on the images, giving (3N+1) different frequency bands comprising 3N high-frequency sub-images and 1 low-frequency sub-image. For the high-frequency part, the coefficient whose wavelet decomposition coefficient has the larger absolute value in the two source images is taken directly as the decomposition coefficient of the fused image. For the low-frequency part the processing rules are somewhat more complex; the concrete steps are as follows:
1) Notation:
C(I) denotes the coefficient matrix of the wavelet low-frequency component of image I;
P = (m, n) denotes the spatial position of a wavelet coefficient;
C(I, P) then denotes the value of the element at position (m, n) of the low-frequency coefficient matrix of image I;
2) a small region Q is selected centered at P; u(I, P) denotes the mean of C(I) over Q centered at P, and G(I, P) is the local deviation saliency of C(I) in Q, satisfying
G(I, P) = Σ_{q∈Q} W(q) · (C(I, q) − u(I, P))^2 (6)
where W(q) is a weight whose value decreases the farther q is from P;
3) compute the local deviation saliencies G(I_1, P) and G(I_2, P) of images I_1 and I_2 respectively according to formula (6), then calculate their local deviation matching degree at point P:
M_2(P) = 2 Σ_{q∈Q} W(q) · (C(I_1, q) − u(I_1, P)) · (C(I_2, q) − u(I_2, P)) / (G(I_1, P) + G(I_2, P)) (7)
4) set a matching-degree threshold T; when M_2(P) < T, the fusion strategy is to select the low-frequency coefficient of the image whose local deviation saliency is larger; when M_2(P) ≥ T, the fusion strategy is the weighted-average strategy over the two coefficients.
After the above processing is completed, the desired wavelet-transform-based fused image is obtained.
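A minimal single-level sketch of this wavelet fusion scheme (a hand-written Haar transform stands in for the N-level decomposition; the 3 × 3 uniform window, the threshold T and the plain average are illustrative choices, not the patent's exact parameters):

```python
import numpy as np

def haar2(x):
    """One-level 2-D Haar decomposition into LL and (LH, HL, HH) bands."""
    a = (x[0::2, :] + x[1::2, :]) / 2.0
    d = (x[0::2, :] - x[1::2, :]) / 2.0
    return ((a[:, 0::2] + a[:, 1::2]) / 2.0,
            ((a[:, 0::2] - a[:, 1::2]) / 2.0,
             (d[:, 0::2] + d[:, 1::2]) / 2.0,
             (d[:, 0::2] - d[:, 1::2]) / 2.0))

def ihaar2(ll, bands):
    """Exact inverse of haar2."""
    lh, hl, hh = bands
    a = np.empty((ll.shape[0], ll.shape[1] * 2)); d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    x = np.empty((a.shape[0] * 2, a.shape[1]))
    x[0::2, :], x[1::2, :] = a + d, a - d
    return x

def box_mean(x):
    """3 x 3 neighborhood mean with edge padding (uniform weights W(q))."""
    p = np.pad(x, 1, mode='edge')
    h, w = x.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def wavelet_fuse(img1, img2, T=0.75, eps=1e-12):
    ll1, hi1 = haar2(img1)
    ll2, hi2 = haar2(img2)
    # High frequencies: keep the coefficient of larger absolute value.
    hi = tuple(np.where(np.abs(b1) >= np.abs(b2), b1, b2)
               for b1, b2 in zip(hi1, hi2))
    # Low frequencies: local deviation saliency G and matching degree M2.
    u1, u2 = box_mean(ll1), box_mean(ll2)
    G1 = box_mean((ll1 - u1) ** 2)
    G2 = box_mean((ll2 - u2) ** 2)
    M2 = 2.0 * box_mean((ll1 - u1) * (ll2 - u2)) / (G1 + G2 + eps)
    select = np.where(G1 >= G2, ll1, ll2)   # pick the more salient source
    ll = np.where(M2 < T, select, (ll1 + ll2) / 2.0)
    return ihaar2(ll, hi)

rng = np.random.default_rng(2)
img1 = rng.standard_normal((8, 8))
fused = wavelet_fuse(img1, img1)    # fusing an image with itself returns it
print(np.allclose(fused, img1))     # True
```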
Claims (6)
1. A multi-viewpoint image fusion method based on block SPCA, characterized by comprising:
(1) dividing a color image into individual color-channel images, segmenting each color-channel image into independent sub-blocks, and performing SPCA on each sub-block to obtain a principal component analysis image;
(2) calculating, from the principal component analysis images, the pixel-level transformation relation between two adjacent multi-viewpoint color images, and registering the two adjacent multi-viewpoint color images according to the transformation relation;
(3) performing wavelet-based multi-viewpoint image fusion on the two registered adjacent multi-viewpoint color images to obtain the fused image.
2. The multi-viewpoint image fusion method based on block SPCA according to claim 1, characterized in that the transformation relation comprises translation, scaling and angle transformation.
3. The multi-viewpoint image fusion method based on block SPCA according to claim 2, characterized in that in step (3), before fusion, weighted interpolation is performed with a 3 × 3 operator S, starting from the junction of the two registered adjacent multi-viewpoint color images.
4. The multi-viewpoint image fusion method based on block SPCA according to claim 3, characterized in that in step (3), an N-level wavelet decomposition is performed on the color images during fusion, giving (3N+1) different frequency bands comprising 3N high-frequency sub-images and 1 low-frequency sub-image.
5. The multi-viewpoint image fusion method based on block SPCA according to claim 4, characterized in that in step (3), for the high-frequency part, the coefficient whose wavelet decomposition coefficient has the larger absolute value in the two color images is taken directly as the decomposition coefficient of the fused image.
6. The multi-viewpoint image fusion method based on block SPCA according to claim 5, characterized in that in step (3), for the low-frequency part, a matching-degree threshold T is set, the local deviation matching degree of the two color images to be fused is calculated at each point, and the corresponding fusion strategy is adopted according to the relation between the local deviation matching degree and the threshold T.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510819511.0A CN105488778A (en) | 2015-11-23 | 2015-11-23 | Multi-viewpoint image fusion method based on block SPCA |
Publications (1)
Publication Number | Publication Date |
---|---|
CN105488778A true CN105488778A (en) | 2016-04-13 |
Family
ID=55675744
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510819511.0A Pending CN105488778A (en) | 2015-11-23 | 2015-11-23 | Multi-viewpoint image fusion method based on block SPCA |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105488778A (en) |
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101883287A (en) * | 2010-07-14 | 2010-11-10 | 清华大学深圳研究生院 | Method for multi-viewpoint video coding side information integration |
CN103455991A (en) * | 2013-08-22 | 2013-12-18 | 西北大学 | Multi-focus image fusion method |
Non-Patent Citations (2)
Title |
---|
CÉDRIC VONESCH ET AL: "Steerable PCA for Rotation-Invariant Image Recognition", SIAM J. Imaging Sciences * |
WU Yuanchang et al.: "Block GPCA and multi-viewpoint image fusion", Journal of Harbin Engineering University * |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115761411A (en) * | 2022-11-24 | 2023-03-07 | 北京的卢铭视科技有限公司 | Model training method, living body detection method, electronic device, and storage medium |
CN115761411B (en) * | 2022-11-24 | 2023-09-01 | 北京的卢铭视科技有限公司 | Model training method, living body detection method, electronic device, and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20160413 |