CN105488778A - Multi-viewpoint image fusion method based on block SPCA - Google Patents

Multi-viewpoint image fusion method based on block SPCA

Info

Publication number
CN105488778A
CN105488778A CN201510819511.0A
Authority
CN
China
Prior art keywords
image
spca
images
fusion method
block
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201510819511.0A
Other languages
Chinese (zh)
Inventor
厉晓华
赵磊
方伟杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN201510819511.0A priority Critical patent/CN105488778A/en
Publication of CN105488778A publication Critical patent/CN105488778A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038Image mosaicing, e.g. composing plane images from plane sub-images

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a multi-viewpoint image fusion method based on block SPCA (steerable principal component analysis). The method comprises the following steps: (1) dividing a color image into single-channel images, segmenting each single-channel image into independent small image blocks, and performing SPCA on each block to obtain a principal component analysis image; (2) calculating, from the principal component analysis images, the pixel-level transformation relationship between two adjacent multi-viewpoint color images, and registering the two images according to this relationship; and (3) performing wavelet-based multi-viewpoint image fusion on the two registered adjacent color images to obtain the fused image. The method uses SPCA to refine the image processing pipeline and adds the necessary registration in the preprocessing stage, and it therefore shows advantages in both image fusion performance and computational complexity.

Description

Multi-view image fusion method based on block SPCA
Technical field
The present invention relates to the field of multi-view video acquisition, and in particular to a multi-view image fusion method based on block SPCA.
Background technology
Traditional image fusion is divided into pixel-level fusion, feature-level fusion and decision-level fusion. This classification focuses on the processing level of the image but ignores the characteristics of the fusion methods themselves. In practice, for convenience of description and study, people tend to name methods after the technique used in the fusion process.
Researchers have therefore proposed image fusion methods based on the wavelet transform, pyramid decomposition, the IHS color space, morphology, statistics, neural networks, PCA, and so on. Other methods that are used less widely but give good results have also been proposed, such as digital holography, mixed-pixel decomposition, multi-wavelength gradient methods and fuzzy techniques. In addition, multi-view images, with their unique advantages, play an irreplaceable role in practical applications.
The storage and transmission of multi-view images is a difficult problem, so fusion-based compression of multi-view images has important practical significance. Because multi-view images have high resolution and carry a large amount of information, reducing their dimensionality in advance, for the convenience of subsequent processing, is also a research focus. At the same time, since displacement and angular transformations exist between different images, the images must first be registered and then processed according to the specific situation.
In real life, a group of images of a scene observed at different times or from different angles is collectively called a multi-view image. Multi-view image techniques are used for target tracking, face recognition, gesture estimation and multi-user interaction. Multi-view video, developed on the basis of multi-view images, can present a real scene to the observer more vividly and realistically. However, as the number of cameras increases, the data volume of multi-view video grows exponentially, which makes storing and transmitting the video data very difficult. Efficient compression of multi-view video has therefore become a problem that must be solved.
At present, the compression of multi-view video is mainly centered on redundancy removal, but from the way the human eye views a scene, fusion-centered compression is closer to human visual perception. Because multi-view video is derived from multi-view images, fusing multi-view video ultimately comes down to fusing multi-view images. Multi-view image fusion requires the information contained in each original viewpoint image to be merged as completely as possible into a single new image, so that the new image gives a more vivid and intuitive understanding of the original scene. From the viewpoint of set theory, an image I of size r×c can be regarded as a finite data set: each pixel (i, j), i ∈ {1, 2, …, r}, j ∈ {1, 2, …, c}, has a pixel value I(i, j) that is an element of this set, i.e. I = {I(i, j) | 1 ≤ i ≤ r, 1 ≤ j ≤ c}.
When the image is large, the set I contains many data points. If each element of I is regarded as a one-dimensional statistical feature, the dimensionality of I is inevitably very high. Practice shows that directly fusing an image set I of very high feature dimensionality with traditional methods may give reasonable results, but the computational complexity is generally high, which is unacceptable for applications that pursue efficiency. Therefore, before fusion, it is preferable to reduce the dimensionality of the image while preserving the main information of the original large, high-resolution image; the reduced number of data points greatly simplifies subsequent processing.
Multi-view images play an irreplaceable role in practical applications, but their storage and transmission is a difficult problem, so realizing multi-view image fusion has strong practical value.
Because multi-view images have high resolution and high information content, the images to be fused are first reduced in dimensionality for the convenience of subsequent fusion computation. At the same time, since translation, rotation, enlargement and reduction exist between different images, efficient registration is needed to remove the influence of these transformations.
Summary of the invention
The present invention is based on SPCA (steerable principal component analysis [Cédric Vonesch, Frédéric Stauber, and Michael Unser. Steerable PCA for Rotation-Invariant Image Recognition. SIAM J. Imaging Sciences, Vol. 8, No. 3, pp. 1857–1873]), combined with image transformation and registration techniques, to perform fusion processing on multi-view images. A multi-view image fusion method based on block SPCA is proposed to meet the demand for multi-view image fusion.
A multi-view image fusion method based on block SPCA, characterized by comprising:
(1) dividing a color image into single-channel images, segmenting each single-channel image into independent sub-image blocks, and performing SPCA on each sub-image block to obtain a principal component analysis image;
(2) calculating, from the principal component analysis images, the pixel-level transformation relationship between two adjacent multi-view color images, and registering the two adjacent multi-view color images according to said transformation relationship;
said transformation relationship comprises translation, scaling and angular transformation;
(3) performing wavelet-based multi-view image fusion on the two registered adjacent multi-view color images to obtain the fused image.
In step (1), because the images to be fused are generally large, directly performing the fusion (matrix) operations is computationally expensive. Segmenting a large image into independent sub-image blocks before applying SPCA dimensionality reduction increases the number of individual computations on the surface, but lowers the total computational complexity of the fusion. In this way the accuracy of whole-image methods is retained while the computational cost is greatly reduced.
Because of changes in illumination, position and so on during capture, the pixels of two adjacent multi-view images differ considerably in color and position; the positional changes include translation, scaling and rotation, and they directly affect registration. Therefore, in step (2), a matrix representing these transformations is obtained first, and the adjacent images are registered under this model, which yields higher registration precision.
Although some regions of two adjacent images show the same scene, their brightness and color information still differ greatly because of the different shooting viewpoints, so the pixel values of the corresponding parts of the two images cannot simply be combined by a weighted average.
Preferably, so that the pixels of the fused image transition smoothly at the boundary, in step (3), before fusion, weighted interpolation is applied with a 3×3 operator S, starting from the junction of the two registered adjacent multi-view color images, where S is:
$$S = \begin{pmatrix} 1/8 & 1/8 & 1/8 \\ 1/8 & 0 & 1/8 \\ 1/8 & 1/8 & 1/8 \end{pmatrix}$$
The present invention adopts wavelet-based multi-view image fusion: in step (3), an N-level wavelet decomposition is applied to each color image during fusion, yielding 3N+1 different frequency bands, comprising 3N high-frequency sub-images and 1 low-frequency sub-image.
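The N-level decomposition can be illustrated with a plain-NumPy sketch. The patent does not fix a particular wavelet, so the Haar filters below are an assumption, and `haar_decompose` is an illustrative name:

```python
import numpy as np

def haar_decompose(img, N):
    """N-level 2-D Haar decomposition: returns 3N high-frequency
    sub-images (LH, HL, HH per level) followed by 1 low-frequency
    sub-image, i.e. 3N+1 bands in total."""
    bands = []
    low = np.asarray(img, dtype=float)
    for _ in range(N):
        avg = (low[0::2, :] + low[1::2, :]) / 2   # row-pair average
        dif = (low[0::2, :] - low[1::2, :]) / 2   # row-pair detail
        LL = (avg[:, 0::2] + avg[:, 1::2]) / 2
        LH = (avg[:, 0::2] - avg[:, 1::2]) / 2
        HL = (dif[:, 0::2] + dif[:, 1::2]) / 2
        HH = (dif[:, 0::2] - dif[:, 1::2]) / 2
        bands += [LH, HL, HH]                     # 3 high-frequency bands
        low = LL                                  # recurse on the low band
    bands.append(low)                             # 1 low-frequency band
    return bands

bands = haar_decompose(np.random.rand(16, 16), N=2)
```

For N = 2 on a 16×16 image this yields 7 bands, the last being a 4×4 low-frequency sub-image.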
In step (3), for the high-frequency part, the wavelet decomposition coefficient with the larger absolute value in the two color images is taken directly as the decomposition coefficient of the fused image.
In step (3), for the low-frequency part, a matching-degree threshold T is set; the local deviation matching degree of the two color images to be fused is calculated at each point, and the corresponding fusion strategy is adopted according to the relation between the local deviation matching degree and the threshold T.
After this processing, the desired wavelet-based fused image is obtained.
The block-SPCA multi-view image fusion method of the present invention uses SPCA to refine the image processing pipeline; it takes the spatial correlation of two-dimensional data into account and works well for gray-level image dimensionality reduction. Because displacement differences exist between the different viewpoints of conventional multi-view images, the necessary registration and (steerable) projective transformation operations are added in the image preprocessing stage. The method therefore shows advantages in both image fusion performance and computational complexity.
Embodiment
The present invention is described in detail below with reference to a specific embodiment.
The multi-view image fusion method of this embodiment, based on block SPCA (steerable principal component analysis), comprises the following steps:
(1) Blockwise SPCA of single-channel images: because the images to be fused are generally large, directly performing the fusion (matrix) operations is computationally expensive. Segmenting a large image into independent small blocks before applying SPCA dimensionality reduction increases the number of individual computations on the surface, but lowers the total computational complexity of the fusion, retaining the accuracy of whole-image methods while greatly reducing the computational cost.
PCA is a traditional manifold-learning method that reduces the amount of computation by extracting principal features from sample data. Suppose there is a sample set S = {I_1, I_2, …, I_K} of K images, each image I_k (k = 1, 2, …, K) of size r×c. First, all pixels of I_k are rearranged row by row into a row vector s_k of size 1×N, where N = r×c. All the s_k then form a new matrix A = [s_1 s_2 … s_K]^T. A singular value decomposition of A yields the eigenvectors α corresponding to the d largest eigenvalues of A. Finally, each s_k is projected onto these vectors, giving a d-dimensional representation of the original image. This is the PCA method for gray-level images.
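The baseline PCA just described can be sketched in a few lines of NumPy. The centering step is an assumption (the description above does not state it), and `gray_pca` is a hypothetical helper name:

```python
import numpy as np

def gray_pca(images, d):
    """PCA for gray-level images as described above: flatten each r x c
    image to a 1 x N row (N = r*c), stack the rows into the matrix A,
    take the d leading right singular vectors of A, and project."""
    A = np.stack([np.asarray(im, dtype=float).ravel() for im in images])
    A = A - A.mean(axis=0)                 # centering (assumed, not stated above)
    _, _, Vt = np.linalg.svd(A, full_matrices=False)
    alpha = Vt[:d].T                       # N x d projection basis
    return A @ alpha                       # K x d representation of the set

Z = gray_pca([np.random.rand(6, 5) for _ in range(8)], d=3)
```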
The SPCA adopted by the present invention is an improvement of PCA: whereas PCA starts from one-dimensional vectors, SPCA processes the data as two-dimensional matrices. Because of this difference, SPCA largely preserves the spatial correlation between the data and is insensitive to changes of the image such as translation and rotation. In theory, SPCA could replace PCA for processing all single-channel images to be fused, but since these images are generally large, the matrix operations of direct fusion are quite expensive. Segmenting the large image to be fused into independent small blocks before applying SPCA increases the number of computations on the surface, but reduces the total computational complexity, retaining the accuracy of whole-image methods while greatly lowering the computational cost. In concrete terms, the color image is divided into single-channel images, each single-channel image A is divided into n sub-image blocks of identical size, with A_i, i ∈ {1, 2, …, n}, denoting the i-th block; the block-SPCA method then proceeds as follows:
1) compute the mean of the image sample set A_1, A_2, …, A_n: $\bar{A} = \frac{1}{n}\sum_{i=1}^{n} A_i$;
2) for all i ∈ {1, 2, …, n}, compute the centered blocks $\tilde{A}_i = A_i - \bar{A}$;
3) let $R_0 \leftarrow (E_d, O)^T$, where $E_d$ is the d×d identity matrix, d is the dimension to which the image is to be reduced, and O is a zero matrix;
4) let k = 0 and initialize the image root-mean-square error on SPCA reconstruction, RMSE(k) ← ∞;
5) compute the eigenvectors corresponding to the d largest eigenvalues of the matrix $M_R = \sum_{i=1}^{n} \tilde{A}_i R_k R_k^T \tilde{A}_i^T$ (for k = 0 this yields $L_0$);
6) let k = k + 1 and assign the eigenvector matrix just computed to $L_k$;
7) compute the eigenvectors corresponding to the d largest eigenvalues of the matrix $M_L = \sum_{i=1}^{n} \tilde{A}_i^T L_k L_k^T \tilde{A}_i$;
8) assign the eigenvector matrix just computed to $R_k$;
9) calculate $\mathrm{RMSE}(k) = \sqrt{\frac{1}{n}\sum_{i=1}^{n} \big\| \tilde{A}_i - L_k L_k^T \tilde{A}_i R_k R_k^T \big\|_F^2}$, where $\|\cdot\|_F$ denotes the Frobenius norm of a matrix;
10) if RMSE(k−1) − RMSE(k) ≤ η, go to 11); otherwise jump back to 5) and continue, where η is a preset threshold;
11) assign the transformation matrices $L_{r\times d}$ and $R_{c\times d}$: L ← L_k, R ← R_k;
12) for each i ∈ {1, 2, …, n}, compute the gray projection of the transformed original gray image (i.e. the principal component analysis image) by $B_i = L^T \tilde{A}_i R$ and return it.
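Steps 1)–12) amount to an alternating bilateral eigen-decomposition (a GLRAM-style 2-D PCA). Below is a minimal NumPy sketch under that reading; the function name, tolerance `eta` and iteration cap are illustrative, not from the patent:

```python
import numpy as np

def block_spca(blocks, d, eta=1e-8, max_iter=100):
    """Alternate between the left transform L (step 5) and the right
    transform R (step 7) until the RMSE of step 9 stops improving."""
    A = np.stack([np.asarray(b, dtype=float) for b in blocks])  # n x r x c
    A_c = A - A.mean(axis=0)                # steps 1)-2): centered blocks
    r, c = A_c.shape[1:]
    R = np.eye(c, d)                        # step 3): R0 = (E_d, O)^T
    prev = np.inf                           # step 4): RMSE <- infinity
    for _ in range(max_iter):
        M_R = sum(Ai @ R @ R.T @ Ai.T for Ai in A_c)       # step 5)
        vals, vecs = np.linalg.eigh(M_R)
        L = vecs[:, np.argsort(vals)[::-1][:d]]            # step 6)
        M_L = sum(Ai.T @ L @ L.T @ Ai for Ai in A_c)       # step 7)
        vals, vecs = np.linalg.eigh(M_L)
        R = vecs[:, np.argsort(vals)[::-1][:d]]            # step 8)
        rec = np.stack([L @ L.T @ Ai @ R @ R.T for Ai in A_c])
        rmse = np.sqrt(np.mean(np.sum((A_c - rec) ** 2, axis=(1, 2))))  # step 9)
        if prev - rmse <= eta:              # step 10): convergence test
            break
        prev = rmse
    # steps 11)-12): final transforms and the d x d projections B_i
    return L, R, [L.T @ Ai @ R for Ai in A_c]

L, R, proj = block_spca([np.random.rand(8, 8) for _ in range(10)], d=3)
```

Each 8×8 block is thus reduced to a 3×3 projection, from which an approximation of the block can be rebuilt as $L B_i R^T$.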
(2) Image registration: because of changes in illumination, position and so on during capture, the pixels of two adjacent multi-view images differ considerably in color and position; the positional changes include translation, scaling and rotation, and they directly affect registration. A matrix representing these transformations is obtained first, and the adjacent images are then registered under this model, which yields higher registration precision.
Suppose there are two adjacent viewpoint images I1 and I2 to be fused. The transformation relationship between any point (x1, y1) in I1 and the corresponding point (x2, y2) in I2 can be described [7] by the following formula:
$$\lambda \begin{pmatrix} x_2 \\ y_2 \\ 1 \end{pmatrix} = \begin{pmatrix} a & b & c \\ d & e & f \\ g & h & 1 \end{pmatrix} \begin{pmatrix} x_1 \\ y_1 \\ 1 \end{pmatrix} \quad (1)$$
where λ is the scale of the transformation and a–h are the coefficients of the associated transformation. The homography matrix H is defined as
$$H = \begin{pmatrix} a & b & c \\ d & e & f \\ g & h & 1 \end{pmatrix} \quad (2)$$
From formulas (1) and (2):
$$x_2 = \frac{a x_1 + b y_1 + c}{\lambda (g x_1 + h y_1 + 1)}, \qquad y_2 = \frac{d x_1 + e y_1 + f}{\lambda (g x_1 + h y_1 + 1)} \quad (3)$$
For convenience of processing, when the scale is unchanged, only the horizontal and vertical displacement between pixels is considered. Then a = e = 1, λ = 1, b = d = g = h = 0, and
$$x_2 = x_1 + c, \qquad y_2 = y_1 + f \quad (4)$$
where c and f represent the horizontal and vertical displacement of the pixels, respectively.
Registration can then be carried out according to formula (4).
In image registration, coarse registration considers only the translation between the pixels of adjacent images. If both images are of size m×n, the overlapping regions of Image1 and Image2 can be roughly computed as (1:(m−f), 1:(n−c)) and ((1+f):m, (1+c):n), respectively.
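Coarse registration under the translation-only model of formula (4) requires an estimate of (c, f). The patent does not specify the estimator; phase correlation is one assumed choice, sketched below together with the overlap regions above:

```python
import numpy as np

def estimate_shift(img1, img2):
    """Estimate the vertical/horizontal displacement (f, c) of formula (4)
    by phase correlation (an assumed estimator; the patent fixes only the
    translation model, not how c and f are found)."""
    F1, F2 = np.fft.fft2(img1), np.fft.fft2(img2)
    cross = F2 * np.conj(F1)
    cross = cross / (np.abs(cross) + 1e-12)    # whiten the spectrum
    corr = np.fft.ifft2(cross).real
    f, c = np.unravel_index(np.argmax(corr), corr.shape)
    return int(f), int(c)

def overlap_regions(shape, f, c):
    """Overlap of Image1 and Image2 for x2 = x1 + c, y2 = y1 + f:
    (1:(m-f), 1:(n-c)) in Image1 and ((1+f):m, (1+c):n) in Image2."""
    m, n = shape
    return (slice(0, m - f), slice(0, n - c)), (slice(f, m), slice(c, n))

img1 = np.random.rand(32, 32)
img2 = np.roll(img1, shift=(3, 5), axis=(0, 1))   # Image2 shifted by f=3, c=5
f, c = estimate_shift(img1, img2)
```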
(3) Image fusion: although some regions of two adjacent images show the same scene, their brightness and color information still differ greatly because of the different shooting viewpoints, so the pixel values of the corresponding parts of the two images cannot simply be combined by a weighted average.
So that the pixels of the fused image transition smoothly at the boundary, weighted interpolation is applied, starting from the junction of the two images, with a 3×3 operator S, where S is:
$$S = \begin{pmatrix} 1/8 & 1/8 & 1/8 \\ 1/8 & 0 & 1/8 \\ 1/8 & 1/8 & 1/8 \end{pmatrix} \quad (5)$$
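A sketch of the boundary interpolation with the operator S of formula (5). The seam-column list and the helper name `smooth_seam` are illustrative; the patent only defines S itself:

```python
import numpy as np

# Operator S of formula (5): center weight 0, eight neighbours 1/8 each,
# so a seam pixel is replaced by the mean of its 8 neighbours.
S = np.array([[1.0, 1.0, 1.0],
              [1.0, 0.0, 1.0],
              [1.0, 1.0, 1.0]]) / 8.0

def smooth_seam(img, seam_cols):
    """Weighted interpolation with S over the given seam columns
    (interior rows only, so the 3x3 window stays inside the image)."""
    out = np.asarray(img, dtype=float).copy()
    for y in range(1, img.shape[0] - 1):
        for x in seam_cols:
            out[y, x] = np.sum(img[y - 1:y + 2, x - 1:x + 2] * S)
    return out

img = np.full((5, 5), 7.0)
out = smooth_seam(img, seam_cols=[2])
```

Since the weights of S sum to 1, a region that is already uniform is left unchanged; only genuine brightness jumps across the seam are averaged out.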
The present invention uses the wavelet transform to fuse the overlap region. First an N-level wavelet decomposition is applied to the image, yielding 3N+1 different frequency bands: 3N high-frequency sub-images and 1 low-frequency sub-image. For the high-frequency part, the wavelet decomposition coefficient with the larger absolute value in the two source images is taken directly as the decomposition coefficient of the fused image. For the low-frequency part the rule is somewhat more complex; the concrete steps are as follows:
1) Notation:
C(I) denotes the coefficient matrix of the low-frequency wavelet component of image I;
p = (m, n) denotes the spatial position of a wavelet coefficient;
C(I, p) then denotes the value of the element at position (m, n) of the low-frequency coefficient matrix.
2) A small region Q is selected, centered at p; u(I, p) denotes the mean of C(I) over Q, and G(I, p) is the local deviation salience of C(I) in Q, satisfying
$$G(I, p) = \sum_{q \in Q} w(q)\, |C(I, q) - u(I, p)|^2 \quad (6)$$
where w(q) is a weight that decreases the further q is from p.
3) According to formula (6), the local deviation saliences G(I_1, p) and G(I_2, p) of images I_1 and I_2 are computed, and then their local deviation matching degree at point p:
$$M_2(p) = \frac{2 \sum_{q \in Q} w(q)\, |C(I_1, q) - u(I_1, p)|\, |C(I_2, q) - u(I_2, p)|}{G(I_1, p) + G(I_2, p)} \quad (7)$$
4) A matching-degree threshold T is set. When M_2(p) < T, the fusion strategy is
$$C(I_f, p) = \begin{cases} C(I_1, p), & C(I_1, p) \ge C(I_2, p) \\ C(I_2, p), & C(I_1, p) < C(I_2, p) \end{cases} \quad (8)$$
When M_2(p) ≥ T, the fusion strategy is the weighted-average strategy
$$C(I_f, p) = \begin{cases} W_{\max} C(I_1, p) + W_{\min} C(I_2, p), & G(I_1, p) \ge G(I_2, p) \\ W_{\max} C(I_2, p) + W_{\min} C(I_1, p), & G(I_1, p) < G(I_2, p) \end{cases} \quad (9)$$
where
$$W_{\min} = 0.5 - 0.5\,\frac{1 - M_2(p)}{1 - T}, \qquad W_{\max} = 1 - W_{\min} \quad (10)$$
After the above processing is complete, the desired wavelet-based fused image is obtained.
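The low-frequency rule of formulas (6)–(10) can be sketched directly on two low-frequency coefficient matrices. Assumptions in this sketch: a 3×3 region Q, uniform weights w(q) = 1 (the patent lets w decay with distance from p), and an illustrative threshold T:

```python
import numpy as np

def fuse_lowpass(C1, C2, T=0.75):
    """Fuse two low-frequency coefficient matrices with the match-degree
    rule: pick the larger coefficient where the regions match poorly
    (Eq. 8), otherwise take the weighted average of Eqs. (9)-(10)."""
    m, n = C1.shape
    Cf = np.empty((m, n))
    for i in range(m):
        for j in range(n):
            ys = slice(max(i - 1, 0), min(i + 2, m))
            xs = slice(max(j - 1, 0), min(j + 2, n))
            q1, q2 = C1[ys, xs], C2[ys, xs]
            d1, d2 = np.abs(q1 - q1.mean()), np.abs(q2 - q2.mean())
            G1, G2 = np.sum(d1 ** 2), np.sum(d2 ** 2)        # Eq. (6), w = 1
            M2 = 2 * np.sum(d1 * d2) / (G1 + G2 + 1e-12)     # Eq. (7)
            if M2 < T:                                       # Eq. (8)
                Cf[i, j] = max(C1[i, j], C2[i, j])
            else:                                            # Eqs. (9)-(10)
                w_min = 0.5 - 0.5 * (1 - M2) / (1 - T)
                w_max = 1 - w_min
                if G1 >= G2:
                    Cf[i, j] = w_max * C1[i, j] + w_min * C2[i, j]
                else:
                    Cf[i, j] = w_max * C2[i, j] + w_min * C1[i, j]
    return Cf

C = np.random.rand(8, 8)
fused = fuse_lowpass(C, C)
```

As a sanity check, fusing a coefficient matrix with itself gives M_2(p) = 1 everywhere, so the weighted average of Eqs. (9)–(10) reproduces the input unchanged.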

Claims (6)

1. A multi-view image fusion method based on block SPCA, characterized by comprising:
(1) dividing a color image into single-channel images, segmenting each single-channel image into independent sub-image blocks, and performing SPCA on each sub-image block to obtain a principal component analysis image;
(2) calculating, from the principal component analysis images, the pixel-level transformation relationship between two adjacent multi-view color images, and registering the two adjacent multi-view color images according to said transformation relationship;
(3) performing wavelet-based multi-view image fusion on the two registered adjacent multi-view color images to obtain the fused image.
2. The multi-view image fusion method based on block SPCA of claim 1, characterized in that said transformation relationship comprises translation, scaling and angular transformation.
3. The multi-view image fusion method based on block SPCA of claim 2, characterized in that in step (3), before fusion, weighted interpolation is applied with a 3×3 operator S, starting from the junction of the two registered adjacent multi-view color images, where S is:
$$S = \begin{pmatrix} 1/8 & 1/8 & 1/8 \\ 1/8 & 0 & 1/8 \\ 1/8 & 1/8 & 1/8 \end{pmatrix}$$
4. The multi-view image fusion method based on block SPCA of claim 3, characterized in that in step (3), an N-level wavelet decomposition is applied to each color image during fusion, yielding 3N+1 different frequency bands, comprising 3N high-frequency sub-images and 1 low-frequency sub-image.
5. The multi-view image fusion method based on block SPCA of claim 4, characterized in that in step (3), for the high-frequency part, the wavelet decomposition coefficient with the larger absolute value in the two color images is taken directly as the decomposition coefficient of the fused image.
6. The multi-view image fusion method based on block SPCA of claim 5, characterized in that in step (3), for the low-frequency part, a matching-degree threshold T is set, the local deviation matching degree of the two color images to be fused is calculated at each point, and the corresponding fusion strategy is adopted according to the relation between the local deviation matching degree and the threshold T.
CN201510819511.0A 2015-11-23 2015-11-23 Multi-viewpoint image fusion method based on block SPCA Pending CN105488778A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510819511.0A CN105488778A (en) 2015-11-23 2015-11-23 Multi-viewpoint image fusion method based on block SPCA


Publications (1)

Publication Number Publication Date
CN105488778A true CN105488778A (en) 2016-04-13

Family

ID=55675744

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510819511.0A Pending CN105488778A (en) 2015-11-23 2015-11-23 Multi-viewpoint image fusion method based on block SPCA

Country Status (1)

Country Link
CN (1) CN105488778A (en)


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101883287A (en) * 2010-07-14 2010-11-10 清华大学深圳研究生院 Method for multi-viewpoint video coding side information integration
CN103455991A (en) * 2013-08-22 2013-12-18 西北大学 Multi-focus image fusion method


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Cédric Vonesch et al.: "Steerable PCA for Rotation-Invariant Image Recognition", SIAM J. Imaging Sciences *
吴远昌 et al.: "分块GPCA和多视点图像融合" (Blockwise GPCA and multi-view image fusion), Journal of Harbin Engineering University (哈尔滨工程大学学报) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115761411A (en) * 2022-11-24 2023-03-07 北京的卢铭视科技有限公司 Model training method, living body detection method, electronic device, and storage medium
CN115761411B (en) * 2022-11-24 2023-09-01 北京的卢铭视科技有限公司 Model training method, living body detection method, electronic device, and storage medium

Similar Documents

Publication Publication Date Title
Mitrokhin et al. EV-IMO: Motion segmentation dataset and learning pipeline for event cameras
Sun et al. Multi-view to novel view: Synthesizing novel views with self-learned confidence
Tateno et al. Distortion-aware convolutional filters for dense prediction in panoramic images
CN102903096B (en) Monocular video based object depth extraction method
Bian et al. Auto-rectify network for unsupervised indoor depth estimation
CN105654492A (en) Robust real-time three-dimensional (3D) reconstruction method based on consumer camera
Nakajima et al. Fast and accurate semantic mapping through geometric-based incremental segmentation
CN108009985B (en) Video splicing method based on graph cut
CN111783582A (en) Unsupervised monocular depth estimation algorithm based on deep learning
Aleotti et al. Learning end-to-end scene flow by distilling single tasks knowledge
CN111127522B (en) Depth optical flow prediction method, device, equipment and medium based on monocular camera
CN102360504A (en) Self-adaptation virtual and actual three-dimensional registration method based on multiple natural characteristics
CN104517317A (en) Three-dimensional reconstruction method of vehicle-borne infrared images
CN114036969A (en) 3D human body action recognition algorithm under multi-view condition
Zhang et al. Depth map prediction from a single image with generative adversarial nets
CN108491752A (en) A kind of hand gestures method of estimation based on hand Segmentation convolutional network
Yin et al. Novel view synthesis for large-scale scene using adversarial loss
CN105488778A (en) Multi-viewpoint image fusion method based on block SPCA
Kim et al. FPGA implementation of stereoscopic image proceesing architecture base on the gray-scale projection
Cheng et al. Understanding depth map progressively: Adaptive distance interval separation for monocular 3d object detection
CN112116653B (en) Object posture estimation method for multiple RGB pictures
CN115272450A (en) Target positioning method based on panoramic segmentation
Kitt et al. Trinocular optical flow estimation for intelligent vehicle applications
Lee et al. Globally consistent video depth and pose estimation with efficient test-time training
Shoman et al. Illumination invariant camera localization using synthetic images

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20160413