WO2019214568A1 - Depth-based light field splicing method - Google Patents
Depth-based light field splicing method
- Publication number
- WO2019214568A1 WO2019214568A1 PCT/CN2019/085643 CN2019085643W WO2019214568A1 WO 2019214568 A1 WO2019214568 A1 WO 2019214568A1 CN 2019085643 W CN2019085643 W CN 2019085643W WO 2019214568 A1 WO2019214568 A1 WO 2019214568A1
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- light field
- depth
- feature point
- transformation matrix
- point pairs
- Prior art date
Links
- 238000000034 method Methods 0.000 title claims abstract description 43
- 239000011159 matrix material Substances 0.000 claims abstract description 81
- 230000009466 transformation Effects 0.000 claims abstract description 51
- 238000012216 screening Methods 0.000 claims abstract description 8
- 230000003287 optical effect Effects 0.000 claims description 8
- 238000004364 calculation method Methods 0.000 claims description 3
- 238000000605 extraction Methods 0.000 claims description 3
- 238000005070 sampling Methods 0.000 claims description 3
- 238000001514 detection method Methods 0.000 claims description 2
- 238000001914 filtration Methods 0.000 claims description 2
- 238000013507 mapping Methods 0.000 abstract description 2
- 230000004927 fusion Effects 0.000 description 4
- 238000005457 optimization Methods 0.000 description 3
- 238000012217 deletion Methods 0.000 description 2
- 230000037430 deletion Effects 0.000 description 2
- 238000003384 imaging method Methods 0.000 description 2
- 238000007781 pre-processing Methods 0.000 description 2
- 238000012545 processing Methods 0.000 description 2
- 230000009286 beneficial effect Effects 0.000 description 1
- 238000010586 diagram Methods 0.000 description 1
- 230000000694 effects Effects 0.000 description 1
- 238000005192 partition Methods 0.000 description 1
- 230000005855 radiation Effects 0.000 description 1
- 230000000717 retained effect Effects 0.000 description 1
- 238000012360 testing method Methods 0.000 description 1
- 239000013598 vector Substances 0.000 description 1
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4038—Image mosaicing, e.g. composing plane images from plane sub-images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
- G06F18/232—Non-hierarchical techniques
- G06F18/2321—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
- G06F18/23213—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/162—Segmentation; Edge detection involving graph-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
- G06T7/337—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving reference images or patches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10052—Images from lightfield camera
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
Definitions
- the invention relates to the field of computer vision and digital image processing, and in particular to a depth-based light field splicing method.
- light field imaging records light information from all directions by adding a microlens array between the main lens and the sensor, thereby capturing a complete optical radiation field.
- as the resolution of optical sensors continues to rise and light field cameras become increasingly commercialized, the practical value of light field imaging technology keeps growing.
- a microlens-array-based light field camera can record the spatial position information and direction information of a three-dimensional scene at the same time; the light field data collected by such a camera therefore has a wide range of applications, such as refocusing, depth estimation, and saliency detection.
- however, the field of view of a handheld plenoptic camera is small.
- a light field splicing method that does not depend on a particular light field capture setup can expand the field of view of the light field camera.
- existing light field splicing methods mainly use feature extraction and matching to compute the transformation matrix between adjacent input light fields, perform light field registration, and find the optimal stitching seam by constructing an energy loss function over the overlap region of the light fields, thereby realizing light field fusion. This approach has limitations: although it can splice light fields with small parallax, once the parallax of the input light field data changes greatly, it produces errors such as misalignment and ghosting, and correct stitching results cannot be obtained.
- another approach reduces the influence of parallax on the splicing result by applying a parallax-tolerant image stitching method. This improves splicing accuracy to a certain extent, but cannot completely offset the influence of parallax, and stitching each sub-aperture image of the light field separately introduces inconsistency of the whole light field across the spatial and angular domains.
- to address this, the present invention provides a depth-based light field splicing method that eliminates the misalignment and ghosting caused by large parallax changes and realizes accurate parallax-tolerant light field splicing.
- the invention discloses a depth-based light field splicing method, comprising the following steps:
- A1 inputting a light field to be spliced and a sub-aperture image of the light field, and performing a light field depth estimation on the sub-aperture image of the light field to obtain a depth map of the light field;
- A2 extracting feature points in the subaperture image of the light field, matching the feature points to obtain feature point pairs, and screening the feature point pairs to obtain matching feature point pairs;
- A3 4D meshing the light field to be spliced, and predicting the global homography transformation matrix according to the matching feature point pairs; establishing a weight matrix according to the depth and position relationship between the feature points and the grid center points;
- then the global homography transformation matrix and the weight matrix are used to predict the optimal homography transformation matrix of each grid, and the light field is mapped according to the optimal homography transformation matrix of each grid in the light field;
- A4 The light field is fused to obtain the result of the light field splicing.
- step A2 specifically includes:
- A21 extracting feature points in the subaperture image of the light field, and matching the feature points to obtain feature point pairs;
- A22 performing feature clustering on the depth map to obtain a depth level map of the light field
- A23 Grouping the feature point pairs according to the depth level map, respectively filtering the feature point pairs of each group, and combining the selected feature point pairs to obtain matching feature point pairs.
- step A3 specifically includes:
- A31 4D meshing the light field to be spliced
- step A32 determining whether each of the grids after 4D meshing contains different depth layers, and if so, subdividing the grid again by depth layer; otherwise, directly proceeding to step A33;
- A33 predicting a global homography transformation matrix according to matching feature point pairs
- A34 establishing a weight matrix according to the depth and position relationship between the feature point and the center of the grid point;
- A35 predicting an optimal homography transformation matrix of each grid according to a global homography transformation matrix and a weight matrix
- A36 The light field is mapped according to the optimal homography transformation matrix of each grid in the light field.
- the invention has the beneficial effect that the depth-based light field splicing method of the present invention uses the optimal homography transformation matrix of each grid, instead of a single global homography transformation matrix, to map the light field.
- the optimal homography transformation matrix of each grid is predicted from the global homography transformation matrix combined with a weight matrix established from the depth map, which solves the misalignment and ghosting caused by large parallax changes and realizes an accurate parallax-tolerant light field splicing method; this further achieves more accurate large-parallax light field splicing, ensures the consistency of the spatial and angular domains of the spliced light field, and thereby expands the viewing angle of the light field.
- the depth level map is obtained by feature-clustering the depth map, the feature point pairs are grouped according to the depth level map, and the feature point pairs of each group are screened separately, avoiding the erroneous deletion of feature point pairs caused by parallax; this ensures that sufficient valid matching feature point pairs are obtained, lays a good foundation for the subsequent prediction of the global homography transformation matrix and the optimal homography transformation matrix of each grid, and further improves the accuracy of light field registration.
- FIG. 1 is a schematic flow chart of a depth-based light field splicing method according to a preferred embodiment of the present invention.
- a preferred embodiment of the present invention discloses a depth-based light field splicing method, including the following steps:
- A1 inputting a light field to be spliced and a sub-aperture image of the light field, and performing a light field depth estimation on the sub-aperture image of the light field to obtain a depth map of the light field;
- the light field to be spliced is input, and the sub-aperture images of the light field are obtained by decoding and pre-processing; the depth map of the light field is then obtained by applying light field depth estimation to the sub-aperture images.
- L r (x, y, u, v) is the reference light field
- L w (x, y, u, v) is the light field to be spliced
- S r (u 0 , v 0 ) and S w (u 0 , v 0 ) are subaperture images of the light field at the viewing angle (u 0 , v 0 ).
- the depth map of the light field obtained using the light field depth estimation method is denoted D(x, y).
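The two-plane parameterization above can be illustrated with a small sketch: fixing the angular coordinates (u0, v0) of L(x, y, u, v) yields the sub-aperture image S(u0, v0). The toy light field, its dimensions, and the helper name `subaperture_image` are illustrative assumptions, not part of the patent:

```python
# Sketch of the light field L(x, y, u, v) and sub-aperture image extraction.

def make_toy_light_field(nx, ny, nu, nv):
    """Build a toy 4D light field L[x][y][u][v] with a known value pattern."""
    return [[[[x + 10 * y + 100 * u + 1000 * v
               for v in range(nv)]
              for u in range(nu)]
             for y in range(ny)]
            for x in range(nx)]

def subaperture_image(L, u0, v0):
    """Fix the angular coordinates (u0, v0) to obtain the sub-aperture
    image S(u0, v0)(x, y), i.e. the view from one lenslet direction."""
    return [[L[x][y][u0][v0] for y in range(len(L[0]))]
            for x in range(len(L))]

L = make_toy_light_field(3, 2, 2, 2)
S = subaperture_image(L, 1, 0)   # S[x][y] == x + 10*y + 100
```

Depth estimation would then be run on such sub-aperture views to produce D(x, y).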
- A2 extracting feature points in the subaperture image of the light field, matching the feature points to obtain feature point pairs, and screening the feature point pairs to obtain matching feature point pairs;
- step A2 includes the following steps:
- A21 extracting feature points in the subaperture image of the light field, and matching the feature points to obtain feature point pairs;
- the feature points in the sub-aperture images of the light field are extracted using the SIFT feature extraction method, and the feature points are matched to obtain coarsely matched feature point pairs, namely:
- S r (u 0 , v 0 ) and S w (u 0 , v 0 ) are the sub-aperture images of the light fields at the viewing angle (u 0 , v 0 ), and Ω F is the set of feature point pairs extracted by SIFT.
- A22 performing feature clustering on the depth map to obtain a depth level map of the light field
- the depth map is layered using feature clustering: the main depth layers are retained and small depth variations are discarded, since the details of the depth map obtained by the depth estimation algorithm may be inaccurate.
- the depth level map D l of the light field is obtained using the k-means feature clustering method as follows:
- S i is the i-th depth layer where the pixel is located, and is generated by clustering:
- D(x, y) is the depth map obtained by the optical field depth estimation method
- ⁇ i is the cluster center
- K is the number of clusters (corresponding to the number of depth layers in the depth level map)
- D l (x, y) is the depth map obtained.
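As a rough illustration of step A22, a minimal 1D k-means over scalar depth values can stand in for the patent's feature clustering of D(x, y) into the depth level map D l; the data, initialization, and function names below are assumptions for illustration only:

```python
# Illustrative 1D k-means clustering of depth values into K depth layers.

def kmeans_depth_layers(depths, k, iters=20):
    """Cluster scalar depth values into k layers; return (centers, labels)."""
    # spread initial centers across the sorted depth range (an assumed scheme)
    centers = sorted(depths)[:: max(1, len(depths) // k)][:k]
    labels = [0] * len(depths)
    for _ in range(iters):
        # assignment step: each depth goes to its nearest cluster center
        labels = [min(range(k), key=lambda i: abs(d - centers[i])) for d in depths]
        # update step: each center becomes the mean of its members
        for i in range(k):
            members = [d for d, l in zip(depths, labels) if l == i]
            if members:
                centers[i] = sum(members) / len(members)
    return centers, labels

# two well-separated depth layers: near (~1.0) and far (~5.0)
depths = [1.0, 1.1, 0.9, 5.0, 5.2, 4.8]
centers, labels = kmeans_depth_layers(depths, 2)
```

Pixels sharing a label would form one depth layer S i in D l(x, y), with small within-layer depth variation discarded.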
- A23 Group the feature point pairs according to the depth level map, then screen the feature point pairs of each group separately, and merge the screened feature point pairs to obtain matching feature point pairs.
- the coarsely matched feature point pairs are screened according to the depth level map.
- the main steps are: group the feature point pairs according to the depth level map D l (x, y), then screen each group of feature point pairs using the random sample consensus (RANSAC) algorithm to eliminate outliers, and finally merge the screened feature point pairs of each group to obtain the final valid matching feature point pairs, namely:
- RANSAC: random sample consensus
- P is a pair of feature points to be screened
- S i is the i-th depth layer where the pixel is located
- K is the number of clusters
- ⁇ F is the set of feature point pairs extracted by SIFT
- ⁇ r is the feature point after the screening A collection of pairs.
- A3 4D meshing the light field to be spliced, predicting the global homography transformation matrix according to the matching feature point pairs; and establishing a weight matrix according to the depth and position relationship between the feature point and the grid center point;
- then the global homography transformation matrix and the weight matrix are used to predict the optimal homography transformation matrix of each grid, and the light field is mapped according to the optimal homography transformation matrix of each grid in the light field;
- step A3 includes the following steps:
- A31 4D meshing the light field to be spliced
- the input light field is divided into a regular four-dimensional grid to increase the degrees of freedom in the light field registration process.
- step A32 determining whether each of the grids after 4D meshing contains different depth layers, and if so, subdividing the grid again by depth layer; otherwise, directly proceeding to step A33;
- A33 predicting a global homography transformation matrix according to matching feature point pairs
- A34 establishing a weight matrix according to the depth and position relationship between the feature points and the grid center points;
- the weight matrix w i is:
- ⁇ and ⁇ are proportional coefficients
- ⁇ [0,1] is the minimum threshold of the weight matrix wi
- (x*,y*) is the position coordinate of the center point of the mesh
- (x i ,y i ) is the feature point
- D l is the depth hierarchy in step A2.
- the D l in the w i formula of the weight matrix may also be calculated by using the depth map D in step A1, that is, the weight matrix w i is:
- ⁇ and ⁇ are proportional coefficients
- ⁇ [0,1] is the minimum threshold of the weight matrix wi
- (x*,y*) is the position coordinate of the center point of the mesh
- (x i ,y i ) is the feature point
- D is the depth map of the light field in step A1.
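A hedged sketch of one possible weight of this shape is below. The Gaussian form, the use of alpha and beta as scale factors, the floor eta, and the function name are assumptions chosen for illustration, since the patent gives its formula only symbolically:

```python
# Assumed illustrative form of a per-grid weight w_i: high for feature points
# near the grid center in both position and depth, floored at eta in [0, 1].
import math

def grid_weight(center, feat, depth_center, depth_feat,
                alpha=1.0, beta=1.0, eta=0.1):
    """center/feat: (x, y) positions; depth_*: depths from D or D_l."""
    dist2 = (center[0] - feat[0]) ** 2 + (center[1] - feat[1]) ** 2
    ddiff = abs(depth_center - depth_feat)
    # never falls below the minimum threshold eta
    return max(math.exp(-(alpha * dist2 + beta * ddiff)), eta)

w_close = grid_weight((5, 5), (5, 5), 2.0, 2.0)    # same position, same depth
w_far   = grid_weight((5, 5), (50, 50), 2.0, 9.0)  # distant position and depth
```

The floor eta keeps distant features weakly influential, so each grid's local homography stays anchored to the global one.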
- A35 predicting an optimal homography transformation matrix of each grid according to a global homography transformation matrix and a weight matrix
- the method for predicting the optimal homography transformation matrix of each grid by the depth-based light field motion model is as follows:
- w i is a weight matrix, which is related to the depth and position of the feature point and the grid;
- ⁇ is the 5-dimensional light field global homography transformation matrix;
- the matrix A ∈ R 4N×25 is obtained by matrix transformation:
- each matching feature point pair contributes four linearly independent rows to A ∈ R 4N×25 , so at least six matching feature point pairs are needed; to enhance robustness, more matching feature point pairs can be used.
- A36 The light field is mapped according to the optimal homography transformation matrix of each grid in the light field.
- each grid is mapped:
- M is the grid obtained by dividing the input light field, M' is the mapped grid, and the mapping of each grid uses the optimal homography transformation matrix of the light field corresponding to that grid.
- the light field is mapped according to the optimal homography transformation matrix of each light field grid; for pixel coverage areas caused by parallax, the pixel with the smallest depth is selected as the final pixel value of the covered position, according to the depth map obtained by light field depth estimation in step A1 or the depth level map obtained by feature clustering in step A2.
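The smallest-depth rule for parallax-induced pixel coverage can be sketched as follows; the candidate-tuple format and names are illustrative assumptions:

```python
# Sketch of resolving parallax-induced pixel coverage: when several mapped
# pixels land on the same target position, keep the one with the smallest
# depth (the front-most scene point occludes the others).

def resolve_coverage(candidates):
    """candidates: list of (target_x, target_y, depth, value).
    Returns {(x, y): value}, keeping the smallest-depth value per position."""
    chosen = {}
    for x, y, depth, value in candidates:
        key = (x, y)
        if key not in chosen or depth < chosen[key][0]:
            chosen[key] = (depth, value)
    return {k: v for k, (_, v) in chosen.items()}

mapped = [
    (3, 4, 5.0, 'background'),
    (3, 4, 1.0, 'foreground'),   # nearer point: should win at (3, 4)
    (7, 2, 2.0, 'only'),
]
image = resolve_coverage(mapped)
```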
- A4 The light field is fused to obtain the light field splicing result.
- the light field is fused using a 4D graph cut method to obtain the light field splicing result.
- the 4D graph cut is a four-dimensional multi-resolution graph cut.
- the four-dimensional graph cut specifically maps the entire 4D light field into a weighted undirected graph and then computes an energy optimization function to find the optimal dividing line, ensuring the continuity of the entire light field in both space and angle; in the energy optimization function, p' denotes the neighbors of pixel p in both the spatial and angular dimensions.
- the multi-resolution graph cut specifically: first down-samples the light field data in spatial resolution, then performs a graph cut to obtain a dividing line at low resolution, limits the high-resolution graph cut region according to the low-resolution dividing line, and finally performs the graph cut at high resolution to obtain the optimal seam.
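As a loose illustration of the seam-finding idea only, a 2D dynamic-programming seam over a made-up cost map is shown below; the patent's actual method is a 4D multi-resolution graph cut, which this sketch does not implement:

```python
# Simplified stand-in for "find the cheapest dividing line": a 2D
# dynamic-programming seam over a per-pixel difference-cost map.

def dp_seam(cost):
    """Return one column index per row forming a minimal top-to-bottom seam,
    moving at most one column left/right between consecutive rows."""
    rows, cols = len(cost), len(cost[0])
    acc = [cost[0][:]]                      # accumulated minimal cost per cell
    for r in range(1, rows):
        acc.append([cost[r][c] + min(acc[r - 1][max(c - 1, 0):min(c + 2, cols)])
                    for c in range(cols)])
    # backtrack from the cheapest bottom cell
    c = min(range(cols), key=lambda j: acc[-1][j])
    seam = [c]
    for r in range(rows - 2, -1, -1):
        lo = max(seam[-1] - 1, 0)
        window = acc[r][lo:min(seam[-1] + 2, cols)]
        seam.append(lo + window.index(min(window)))
    return seam[::-1]

# low cost down column 1: the seam should follow it
cost = [[9, 0, 9],
        [9, 0, 9],
        [9, 0, 9]]
seam = dp_seam(cost)
```

The multi-resolution idea would run such a search on a down-sampled cost map first, then refine only near the coarse seam at full resolution.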
- the light field splicing method of the present invention uses a local homography transformation matrix per grid in place of the global homography transformation matrix, which significantly improves the flexibility of the light field registration process, achieving more accurate light field splicing in detailed regions.
- the misalignment and ghosting in the splicing result caused by large parallax changes are thereby eliminated, and a precise parallax-tolerant light field splicing method is realized.
- the depth map estimated from the light field camera's own light field data guides the screening of the feature point pairs, avoiding the erroneous deletion of feature point pairs due to parallax and thereby ensuring sufficient valid matching feature point pairs.
- the graph cut algorithm is used to find the optimal seam for light field fusion, further correcting small misalignments generated during the splicing process to achieve more accurate light field stitching.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- General Engineering & Computer Science (AREA)
- Life Sciences & Earth Sciences (AREA)
- Probability & Statistics with Applications (AREA)
- Image Processing (AREA)
Abstract
Description
Claims (10)
- A depth-based light field splicing method, characterized by comprising the following steps: A1: inputting a light field to be spliced and sub-aperture images of the light field, and performing light field depth estimation on the sub-aperture images of the light field to obtain a depth map of the light field; A2: extracting feature points in the sub-aperture images of the light field, matching the feature points to obtain feature point pairs, and screening the feature point pairs to obtain matching feature point pairs; A3: 4D meshing the light field to be spliced, and predicting a global homography transformation matrix according to the matching feature point pairs; establishing a weight matrix according to the depth and position relationship between the feature points and the grid center points; then predicting an optimal homography transformation matrix of each grid according to the global homography transformation matrix and the weight matrix, and mapping the light field according to the optimal homography transformation matrix of each grid in the light field; A4: fusing the light field to obtain a light field splicing result.
- The depth-based light field splicing method according to claim 1, characterized in that step A2 specifically comprises: A21: extracting feature points in the sub-aperture images of the light field, and matching the feature points to obtain feature point pairs; A22: performing feature clustering on the depth map to obtain a depth level map of the light field; A23: grouping the feature point pairs according to the depth level map, screening the feature point pairs of each group separately, and merging the screened feature point pairs to obtain matching feature point pairs.
- The depth-based light field splicing method according to claim 2, characterized in that step A21 specifically comprises: extracting feature points in the sub-aperture images of the light field using the SIFT feature extraction method, and matching the feature points to obtain coarsely matched feature point pairs, wherein S r(u 0,v 0) and S w(u 0,v 0) are the sub-aperture images of the light fields at the viewing angle (u 0,v 0), and Ω F is the set of feature point pairs extracted by SIFT; preferably, step A23 specifically comprises: grouping the feature point pairs according to the depth level map D l(x,y), then screening each group of feature point pairs using the random sample consensus (RANSAC) algorithm to eliminate outliers, and finally merging the screened feature point pairs of each group to obtain the final valid matching feature point pairs, wherein P is a pair of feature points to be screened, S i is the i-th depth layer where the pixel is located, K is the number of clusters, Ω F is the set of feature point pairs extracted by SIFT, and Ω r is the set of feature point pairs after screening.
- The depth-based light field splicing method according to claim 1, characterized in that the global homography transformation matrix H predicted from the matching feature point pairs in step A3 is calculated as P'=HP, where P(u,v,x,y,1) and P'(u',v',x',y',1) are the matching feature point pairs obtained in step A2.
- The depth-based light field splicing method according to any one of claims 2 to 5, characterized in that, when mapping the light field according to the optimal homography transformation matrix of each grid in step A3, for pixel coverage areas caused by parallax, the pixel with the smallest depth is selected as the final pixel value of the covered position according to the depth map in step A1 or the depth level map in step A2.
- The depth-based light field splicing method according to claim 1, characterized in that step A3 specifically comprises: A31: 4D meshing the light field to be spliced; A32: determining whether each grid after 4D meshing contains different depth layers; if so, subdividing the grid again by depth layer; otherwise, proceeding directly to step A33; A33: predicting a global homography transformation matrix according to the matching feature point pairs; A34: establishing a weight matrix according to the depth and position relationship between the feature points and the grid center points; A35: predicting an optimal homography transformation matrix of each grid according to the global homography transformation matrix and the weight matrix; A36: mapping the light field according to the optimal homography transformation matrix of each grid in the light field.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810428591.0 | 2018-05-07 | ||
CN201810428591.0A CN108921781B (zh) | 2018-05-07 | 2018-05-07 | Depth-based light field splicing method |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2019214568A1 true WO2019214568A1 (zh) | 2019-11-14 |
Family
ID=64402352
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/085643 WO2019214568A1 (zh) | 2018-05-07 | 2019-05-06 | 一种基于深度的光场拼接方法 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN108921781B (zh) |
WO (1) | WO2019214568A1 (zh) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111340701A (zh) * | 2020-02-24 | 2020-06-26 | Nanjing University of Aeronautics and Astronautics | Circuit board image stitching method based on clustering-based screening of matching points |
CN111507904A (zh) * | 2020-04-22 | 2020-08-07 | Huazhong University of Science and Technology | Image stitching method and device for microscopic printed patterns |
CN111882487A (zh) * | 2020-07-17 | 2020-11-03 | Beijing Information Science and Technology University | Large-field-of-view light field data fusion method based on bi-planar translation transformation |
CN113191369A (zh) * | 2021-04-09 | 2021-07-30 | Xi'an University of Technology | Feature point detection method based on light field angular-domain variation matrix |
CN113506214A (zh) * | 2021-05-24 | 2021-10-15 | Nanjing LES Information Technology Co., Ltd. | Multi-channel video image stitching method |
CN116934591A (zh) * | 2023-06-28 | 2023-10-24 | Shenzhen Biyunxiang Electronics Co., Ltd. | Image stitching method, apparatus, device and storage medium based on multi-scale feature extraction |
CN117221466A (zh) * | 2023-11-09 | 2023-12-12 | Beijing Zhihui Yunzhou Technology Co., Ltd. | Video stitching method and system based on grid transformation |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108921781B (zh) * | 2018-05-07 | 2020-10-02 | Graduate School At Shenzhen, Tsinghua University | Depth-based light field splicing method |
CN110084749B (zh) * | 2019-04-17 | 2023-03-31 | Graduate School At Shenzhen, Tsinghua University | Stitching method for light field images with inconsistent focal lengths |
CN110264403A (zh) * | 2019-06-13 | 2019-09-20 | University of Science and Technology of China | De-artifact image stitching method based on image depth layering |
CN110930310B (zh) * | 2019-12-09 | 2023-04-07 | University of Science and Technology of China | Panoramic image stitching method |
CN111161143A (zh) * | 2019-12-16 | 2020-05-15 | Capital Medical University | Surgical-field panoramic stitching method assisted by optical positioning technology |
CN112465704B (zh) * | 2020-12-07 | 2024-02-06 | Tsinghua Shenzhen International Graduate School | Globally-locally adaptively optimized panoramic light field stitching method |
CN114373153B (zh) * | 2022-01-12 | 2022-12-27 | Beijing Zhuohe Technology Co., Ltd. | Video imaging optimization system and method based on multi-scale array cameras |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106873301A (zh) * | 2017-04-21 | 2017-06-20 | Beijing Institute of Technology | System and method for imaging the region behind a distant small aperture based on an array camera |
CN106886979A (zh) * | 2017-03-30 | 2017-06-23 | Shenzhen Institute Of Future Media Technology | Image stitching apparatus and image stitching method |
CN107403423A (zh) * | 2017-08-02 | 2017-11-28 | Graduate School At Shenzhen, Tsinghua University | Synthetic-aperture de-occlusion method for a light field camera |
CN107578376A (zh) * | 2017-08-29 | 2018-01-12 | Beijing University of Posts and Telecommunications | Image stitching method based on quad-partitioning of clustered feature points and local transformation matrices |
CN108921781A (zh) * | 2018-05-07 | 2018-11-30 | Graduate School At Shenzhen, Tsinghua University | Depth-based light field splicing method |
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CA2369710C (en) * | 2002-01-30 | 2006-09-19 | Anup Basu | Method and apparatus for high resolution 3d scanning of objects having voids |
CN101394573B (zh) * | 2008-10-30 | 2010-06-16 | Tsinghua University | Panorama generation method and system based on feature matching |
CN101923709B (zh) * | 2009-06-16 | 2013-06-26 | NEC (China) Co., Ltd. | Image stitching method and device |
CN102833487B (zh) * | 2012-08-08 | 2015-01-28 | Institute of Automation, Chinese Academy of Sciences | Light field imaging apparatus and method for visual computing |
US9332243B2 (en) * | 2012-10-17 | 2016-05-03 | DotProduct LLC | Handheld portable optical scanner and method of using |
US8978984B2 (en) * | 2013-02-28 | 2015-03-17 | Hand Held Products, Inc. | Indicia reading terminals and methods for decoding decodable indicia employing light field imaging |
CN106791869B (zh) * | 2016-12-21 | 2019-08-27 | University of Science and Technology of China | Fast motion search method based on the relative positions of light field sub-aperture images |
CN106526867B (zh) * | 2017-01-22 | 2018-10-30 | NetEase (Hangzhou) Network Co., Ltd. | Display control method and device for image frames, and head-mounted display device |
CN107295264B (zh) * | 2017-08-01 | 2019-09-06 | Graduate School At Shenzhen, Tsinghua University | Light field data compression method based on homography transformation |
CN107909578A (zh) * | 2017-10-30 | 2018-04-13 | University of Shanghai for Science and Technology | Light field image refocusing method based on a hexagonal stitching algorithm |
- 2018-05-07: CN application CN201810428591.0A granted as patent CN108921781B (zh), status Active
- 2019-05-06: WO application PCT/CN2019/085643 published as WO2019214568A1 (zh), status Application Filing
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106886979A (zh) * | 2017-03-30 | 2017-06-23 | Shenzhen Institute Of Future Media Technology | Image stitching apparatus and image stitching method |
CN106873301A (zh) * | 2017-04-21 | 2017-06-20 | Beijing Institute of Technology | System and method for imaging the region behind a distant small aperture based on an array camera |
CN107403423A (zh) * | 2017-08-02 | 2017-11-28 | Graduate School At Shenzhen, Tsinghua University | Synthetic-aperture de-occlusion method for a light field camera |
CN107578376A (zh) * | 2017-08-29 | 2018-01-12 | Beijing University of Posts and Telecommunications | Image stitching method based on quad-partitioning of clustered feature points and local transformation matrices |
CN108921781A (zh) * | 2018-05-07 | 2018-11-30 | Graduate School At Shenzhen, Tsinghua University | Depth-based light field splicing method |
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111340701A (zh) * | 2020-02-24 | 2020-06-26 | Nanjing University of Aeronautics and Astronautics | Circuit board image stitching method based on clustering-based screening of matching points |
CN111340701B (zh) * | 2020-02-24 | 2022-06-28 | Nanjing University of Aeronautics and Astronautics | Circuit board image stitching method based on clustering-based screening of matching points |
CN111507904A (zh) * | 2020-04-22 | 2020-08-07 | Huazhong University of Science and Technology | Image stitching method and device for microscopic printed patterns |
CN111507904B (zh) * | 2020-04-22 | 2023-06-02 | Huazhong University of Science and Technology | Image stitching method and device for microscopic printed patterns |
CN111882487A (zh) * | 2020-07-17 | 2020-11-03 | Beijing Information Science and Technology University | Large-field-of-view light field data fusion method based on bi-planar translation transformation |
CN113191369A (zh) * | 2021-04-09 | 2021-07-30 | Xi'an University of Technology | Feature point detection method based on light field angular-domain variation matrix |
CN113191369B (zh) * | 2021-04-09 | 2024-02-09 | Xi'an University of Technology | Feature point detection method based on light field angular-domain variation matrix |
CN113506214A (zh) * | 2021-05-24 | 2021-10-15 | Nanjing LES Information Technology Co., Ltd. | Multi-channel video image stitching method |
CN113506214B (zh) * | 2021-05-24 | 2023-07-21 | Nanjing LES Information Technology Co., Ltd. | Multi-channel video image stitching method |
CN116934591A (zh) * | 2023-06-28 | 2023-10-24 | Shenzhen Biyunxiang Electronics Co., Ltd. | Image stitching method, apparatus, device and storage medium based on multi-scale feature extraction |
CN117221466A (zh) * | 2023-11-09 | 2023-12-12 | Beijing Zhihui Yunzhou Technology Co., Ltd. | Video stitching method and system based on grid transformation |
CN117221466B (zh) * | 2023-11-09 | 2024-01-23 | Beijing Zhihui Yunzhou Technology Co., Ltd. | Video stitching method and system based on grid transformation |
Also Published As
Publication number | Publication date |
---|---|
CN108921781A (zh) | 2018-11-30 |
CN108921781B (zh) | 2020-10-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2019214568A1 (zh) | Depth-based light field splicing method | |
CN112435325B (zh) | Dense UAV scene reconstruction method based on VI-SLAM and a depth estimation network | |
CN107659774B (zh) | Video imaging system and video processing method based on a multi-scale camera array | |
CA3121440C (en) | Assembly body change detection method, device and medium based on attention mechanism | |
CN108074218B (zh) | Image super-resolution method and device based on a light field acquisition device | |
CN109064410B (zh) | Superpixel-based light field image stitching method | |
CN111343367B (zh) | Gigapixel virtual reality video acquisition device, system and method | |
CN109816708B (zh) | Building texture extraction method based on oblique aerial images | |
CN107316275A (zh) | Optical-flow-assisted large-scale microscopic image stitching algorithm | |
CN115205489A (zh) | Three-dimensional reconstruction method, system and device for large scenes | |
CN109118544B (zh) | Synthetic aperture imaging method based on perspective transformation | |
CN107341815B (zh) | Violent motion detection method based on multi-view stereo vision scene flow | |
JP6174104B2 (ja) | Method, apparatus and system for generating indoor 2D floor plans | |
CN105005964A (zh) | Fast panorama generation method for geographic scenes based on video sequence images | |
CN115953535A (zh) | Three-dimensional reconstruction method and apparatus, computing device, and storage medium | |
CN111860651B (zh) | Semi-dense map construction method for mobile robots based on monocular vision | |
CN108663026A (zh) | Vibration measurement method | |
WO2021035627A1 (zh) | Method and apparatus for obtaining a depth map, and computer storage medium | |
CN112465704A (zh) | Globally-locally adaptively optimized panoramic light field stitching method | |
CN115619623A (zh) | Parallel fisheye camera image stitching method based on moving least squares transformation | |
Gao et al. | Multi-source data-based 3D digital preservation of largescale ancient chinese architecture: A case report | |
CN113436130A (zh) | Unstructured light field intelligent sensing system and device | |
CN116132610A (zh) | Video stitching method and system for a fully mechanized mining face | |
CN108090930A (zh) | Obstacle visual detection system and method based on a binocular stereo camera | |
Zhou et al. | Video stabilization and completion using two cameras |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 19800597; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 19800597; Country of ref document: EP; Kind code of ref document: A1 |
| 32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 01.04.2021) |