CN106548494A - Video image depth extraction method based on a scene sample library - Google Patents
Video image depth extraction method based on a scene sample library Download PDF Info
- Publication number
- CN106548494A CN106548494A CN201610847113.4A CN201610847113A CN106548494A CN 106548494 A CN106548494 A CN 106548494A CN 201610847113 A CN201610847113 A CN 201610847113A CN 106548494 A CN106548494 A CN 106548494A
- Authority
- CN
- China
- Prior art keywords
- depth
- picture
- image
- scene
- candidate
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
Landscapes
- Image Processing (AREA)
- Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
Abstract
The invention discloses a video image depth extraction method based on a scene sample library, which mainly comprises five parts: building the depth scene library, picture feature extraction, depth picture fusion, foreground target depth estimation, and global depth-map optimization. Using the similarity between ordinary RGB pictures, the invention transfers the depth values in existing depth pictures to the input target picture. In actual film and television production, multiple scene libraries can be built from the depth pictures of similar scenes and used to generate the depth map of the input target picture. In this process the manual depth adjustment required can be reduced to a minimum, improving work efficiency; the method also has the advantages of high accuracy and short processing time.
Description
Technical field
The invention belongs to the technical field of video image processing, and in particular relates to a video image depth extraction method based on a scene sample library.
Background technology
Three-dimensional (3D) video has become an irresistible technological wave: because of the visual impact it brings and the realism of its scenes, it is favored by a large number of professionals in film, television, advertising and related industries. However, 3D video content is still scarce, so converting the many existing two-dimensional videos into 3D video is receiving more and more attention. The most important method is to first compute the depth information of the original two-dimensional video, then obtain virtual-view video images through DIBR (Depth Image Based Rendering) technology, and finally synthesize the corresponding 3D video.
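As a rough illustration of the DIBR step, the sketch below shifts each pixel horizontally by a disparity proportional to its inverse depth to synthesize a virtual view. The baseline and focal-length values are illustrative assumptions, not parameters from the invention, and hole filling is omitted:

```python
import numpy as np

def dibr_shift(image, depth, baseline=0.05, focal=500.0):
    """Naive DIBR sketch: synthesize a virtual view by shifting each pixel
    horizontally by disparity = baseline * focal / depth.
    Disoccluded pixels (holes) are left at 0; a real DIBR pipeline fills them."""
    h, w = depth.shape
    out = np.zeros_like(image)
    disparity = (baseline * focal / np.maximum(depth, 1e-6)).astype(int)
    for y in range(h):
        for x in range(w):
            nx = x - disparity[y, x]
            if 0 <= nx < w:
                out[y, nx] = image[y, x]
    return out
```

Nearer pixels (smaller depth) receive a larger shift, which is what produces the parallax between the two views of a stereo pair.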
The sources of existing 3D video are broadly divided into hardware implementations and software implementations. The hardware approach shoots directly with stereo cameras from multiple angles simultaneously. The software approach converts existing two-dimensional video into stereoscopic 3D video by software, mainly in the following two ways:
(1) Using video or image editing software, every frame of the video is segmented manually, the front-to-back relationship of each object in the frame is determined one by one, and corresponding depth values are assigned. The depth map obtained in this way has high precision and good quality, but because the video images must be segmented and assigned depth frame by frame, it consumes a great deal of labor and its efficiency is low.
(2) In contrast to fully manual assignment, fully automatic 2D-to-3D conversion analyzes the received two-dimensional video information, computes a depth map, and then synthesizes the 3D video with the DIBR method. Many 3D televisions now have a device based on this conversion technology built in, so the two-dimensional video signal received by the television can conveniently be converted into a 3D video signal and watched in real time. However, the depth map computed in real time is worse than the one obtained with method (1); foreground/background confusion sometimes occurs, causing visual discomfort.
At present, the domestic technology and methods for reconstructing 3D video are still at an initial stage, with a considerable gap to those abroad, and suffer from a series of problems: low automation, long production cycles, high cost and low efficiency. Estimating and recovering the depth map corresponding to a two-dimensional image has therefore become the key to the whole 2D-to-3D conversion. Recovering depth information from an ordinary two-dimensional image presents the following difficulties:
1. Without additional depth information, a computer cannot estimate the depth levels and relationships of an image from a single picture.
2. When processing video, how can the temporal consistency of the depth map be guaranteed, and how can jumps between frames be avoided?
Summary of the invention
In view of the above, the present invention provides a video image depth extraction method based on a scene sample library. It focuses on a software method for extracting a depth image from an ordinary two-dimensional image; in this process the manual depth adjustment required can be reduced to a minimum, improving work efficiency, with the advantages of high accuracy and short processing time.
A video image depth extraction method based on a scene sample library comprises the following steps:
(1) Build an image library; the image library contains a large number of RGB images and their corresponding depth maps, the RGB images coming from multiple scenes.
(2) Perform feature extraction on the input image and the RGB images in the image library to obtain the GIST feature vector and optical-flow (optical flow) feature vector of each image, then compute the similarity value between the input image and each RGB image in the image library.
(3) Choose from the image library several RGB images with the smallest similarity value to the input image as candidate pictures, ensuring that the candidate pictures each come from different scenes, then fuse the depth maps corresponding to the candidate pictures at pixel level into one depth picture U.
(4) Recover the foreground target in depth picture U to obtain depth picture D*.
(5) With depth picture D* as the initial value, minimize the following objective function to obtain the depth map D corresponding to the input image:

E(D) = Σi [ Et(Di) + α·Es(Di) + β·Ep(Di) ]

wherein Di is the depth value of the i-th pixel in depth map D, Et(Di) is the data term for Di, Es(Di) is the spatial smoothness term for Di, Ep(Di) is the image-library depth-prior term for Di, α and β are preset constants, i is a natural number with 1 ≤ i ≤ N, and N is the total number of pixels in depth map D.
Each scene corresponds to an independent shot, and the RGB images of the same scene are all captured from its corresponding shot.
In step (2), the similarity value between the input image and each RGB image in the image library is computed by the following formula:
Similarity = (1-ω)·||G1-G2|| + ω·||F1-F2||
wherein Similarity is the similarity value between the input image and the RGB image, G1 and F1 are respectively the GIST feature vector and optical-flow feature vector of the input image, G2 and F2 are respectively the GIST feature vector and optical-flow feature vector of the RGB image, and ω is a preset weight coefficient.
Preferably, before choosing candidate pictures, step (3) merges the RGB images of similar scenes in the image library into one scene library, and then ensures when choosing candidate pictures that no two candidate pictures come from the same scene library; this makes the subsequently generated depth image more accurate and richer in detail.
In step (3), the depth maps corresponding to the candidate pictures are fused at pixel level into one depth picture U using the SIFT Flow algorithm.
The detailed process of recovering the foreground target in depth picture U in step (4) is as follows. First, extract the foreground target of the input image and binarize the foreground and background of the input image to obtain the foreground template M of the input image. Then, outline the foreground target region in depth picture U based on foreground template M. Finally, revise the depth value of all pixels in the foreground target region of depth picture U to the depth value of the lowest point of the foreground target region, thereby obtaining depth picture D*.
The expression of the data term Et(Di) is as follows:

Et(Di) = Σj wi^(j) [ φ(Di − ψj(C^(j))i) + γ·( φ(∇xDi − ∇xψj(C^(j))i) + φ(∇yDi − ∇yψj(C^(j))i) ) ]

wherein Ci^(j) denotes the depth value of the i-th pixel of the depth map corresponding to the j-th candidate picture, ∇x and ∇y are respectively the gradient operators in the X direction and Y direction, ψj is the fusion function of the SIFT Flow algorithm for the depth map of the j-th candidate picture, γ is a preset constant, φ(t) = √(t² + ε²) where t is the variable and ε = 10⁻⁴, wi^(j) is the weight corresponding to depth value Ci^(j), j is a natural number with 1 ≤ j ≤ K, and K is the number of candidate pictures.
The expression of the spatial smoothness term Es(Di) is as follows:

Es(Di) = Sx,i·φ(∇xDi) + Sy,i·φ(∇yDi)

wherein φ(t) = √(t² + ε²) where t is the variable and ε = 10⁻⁴, ∇x and ∇y are respectively the gradient operators in the X direction and Y direction, and Sx,i and Sy,i are edge-aware weights computed from Li, the color value of the i-th pixel of the input image.
The expression of the image-library depth-prior term Ep(Di) is as follows:

Ep(Di) = φ(Di − Pi)

wherein φ(t) = √(t² + ε²) where t is the variable and ε = 10⁻⁴, and Pi is the average depth value of the i-th pixel over all depth maps in the image library.
Using the similarity between ordinary RGB pictures, the present invention transfers the depth values in existing depth pictures to the input target picture. In actual film and television production, multiple scene libraries can be built from the depth pictures of similar scenes to generate the depth map of the input target picture. In this process the manual depth adjustment required can be reduced to a minimum, improving work efficiency, with high accuracy and short processing time.
Description of the drawings
Fig. 1 is a schematic flowchart of the steps of the method of the invention.
Specific embodiment
To describe the present invention more specifically, the technical scheme of the invention is described in detail below with reference to the drawings and a specific embodiment.
The video image depth extraction method based on a scene sample library of the present invention mainly comprises five parts: building the depth scene library, picture feature extraction, depth picture fusion, foreground target depth estimation, and global depth-map optimization, as shown in Fig. 1. The detailed process is as follows:
1. Build a depth image library for different scenes.
Before depth estimation is performed on the target image, an image library needs to be built according to the different scenes. In film and television production, we first define an independent shot as a scene. All video is stored in the database in the form of picture frames, and the database contains both the ordinary two-dimensional RGB images and the depth map corresponding to every image. In the subsequent processing, feature extraction will be carried out on this RGB information. In general each shot can serve as one group of scenes; if, in the subsequent processing, the scenes similar to the input image are chosen for training, good results can be achieved.
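A minimal sketch of such a scene-indexed library, assuming one scene per shot and in-memory storage (a production system would persist the frames in a database as described above):

```python
from collections import defaultdict

class SceneLibrary:
    """Depth image library grouped by scene, one scene per independent shot.
    Each entry pairs an ordinary RGB frame with its corresponding depth map."""

    def __init__(self):
        self._scenes = defaultdict(list)

    def add(self, scene_id, rgb, depth):
        # Store an (RGB frame, depth map) pair under its shot/scene id.
        self._scenes[scene_id].append((rgb, depth))

    def entries(self):
        # Iterate over all stored (scene_id, rgb, depth) triples.
        for scene_id, pairs in self._scenes.items():
            for rgb, depth in pairs:
                yield scene_id, rgb, depth
```

Grouping by shot is what later allows the method to require that candidate pictures come from different scene libraries.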
2. Compute feature values by feature extraction algorithms.
2.1. Use GIST features and optical-flow features (optical flow) to compute the corresponding feature value of every picture in the image database.
2.2. Compute the GIST feature and optical-flow feature values of the input target picture.
2.3. Compute the similarity value between the input target picture and each picture in the image database using the following formula:
Similarity score = (1-ω)·||G1-G2|| + ω·||F1-F2||
wherein G1 and G2 are the GIST feature vectors of the two pictures, F1 and F2 are the optical-flow feature vectors of the two pictures, and ω is a weight that can be adjusted as needed.
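The similarity formula of step 2.3 can be sketched as follows, assuming the GIST and optical-flow feature vectors have already been extracted by separate routines:

```python
import numpy as np

def similarity(g1, f1, g2, f2, omega=0.5):
    """Similarity score = (1 - omega)*||G1 - G2|| + omega*||F1 - F2||.
    Lower scores mean more similar pictures; omega trades GIST appearance
    similarity against optical-flow motion similarity."""
    g1, f1, g2, f2 = (np.asarray(v, dtype=float) for v in (g1, f1, g2, f2))
    return (1.0 - omega) * np.linalg.norm(g1 - g2) \
        + omega * np.linalg.norm(f1 - f2)
```

Scoring the input picture against every library picture with this function gives the ranking used in step 3.1.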
3. Transfer depth from existing depth pictures to the target picture.
3.1. Rank the pictures by their similarity score from step 2 and choose the 10 closest ordinary pictures as candidate pictures.
3.2. From the depth image library, select the depth images Ci (i = 1...k) corresponding to the candidate pictures, ensuring that every candidate picture comes from a different scene library. In actual operation, to achieve a better effect, similar scenes can be grouped into one scene library; if each similar scene contributes a candidate picture, the generated depth image will be more accurate and richer in detail.
3.3. Fuse the depth map of each candidate picture at pixel level into one picture D using the SIFT Flow method. The picture D generated at this point is a rough map; subsequent steps will further optimize and smooth it.
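Steps 3.1-3.3 can be sketched as below. The one-candidate-per-scene-library constraint follows step 3.2; the fusion in `fuse_depths` is a simple weighted average standing in for the SIFT-Flow-warped fusion of step 3.3, so the candidate depth maps are assumed to be already warped to the target picture:

```python
import numpy as np

def select_candidates(scores, k=10):
    """scores: list of (similarity, scene_id, index), lower = more similar.
    Pick the k most similar entries, at most one per scene library."""
    chosen, seen = [], set()
    for sim, scene, idx in sorted(scores, key=lambda t: t[0]):
        if scene in seen:
            continue            # skip a second candidate from the same scene
        seen.add(scene)
        chosen.append(idx)
        if len(chosen) == k:
            break
    return chosen

def fuse_depths(depths, weights):
    """Pixel-level fusion of candidate depth maps into one rough map.
    A weighted average stands in for the SIFT-Flow-based fusion."""
    depths = np.stack(depths).astype(float)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return np.tensordot(w, depths, axes=1)
```

A real implementation would weight each candidate by its similarity score and warp it with SIFT Flow before averaging.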
4. Recover the depth of the foreground target.
The foreground target is what people pay attention to in a picture, typically some moving objects. Steps 2-3 above largely determine the overall depth of the background, but some foreground objects, particularly moving objects, are often the most critical part of the whole video.
4.1. Extract the foreground target from a group of consecutive RGB images by background-difference methods such as moving object segmentation and Gaussian mixture models, and binarize the foreground target picture to obtain a template, denoted M. Some moving-object detection methods for dynamic backgrounds also apply to this step; the accuracy of object detection and the consistency between frames will affect the accuracy of the final depth map. To improve accuracy, we introduce an appropriate amount of manual interaction, as in step 4.2.
4.2. To guarantee the quality of the final depth map, step 4.1 can also be replaced by manual interaction, with the boundary of the foreground target determined manually. Even so, many man-hours can still be saved and labor efficiency raised.
4.3. Apply the mask M obtained in steps 4.1-4.2 to the picture D generated in step 3.3. The depth of all pixels of D inside M is set to the depth of the lowest point of M, where the object contacts the ground, so that the whole foreground target is given this depth.
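Step 4.3 can be sketched as follows, assuming the binary template M from steps 4.1-4.2 and the rough depth picture from step 3.3 are given as NumPy arrays; the bottom-most masked pixel stands in for the ground-contact point:

```python
import numpy as np

def flatten_foreground(depth, mask):
    """Assign the whole foreground region the depth of its lowest point
    (where the object touches the ground), as in step 4.3.
    depth: 2-D depth picture; mask: boolean template M of the same shape."""
    out = depth.copy()
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return out                       # no foreground pixels, nothing to do
    bottom = ys.argmax()                 # bottom-most foreground pixel
    out[mask] = depth[ys[bottom], xs[bottom]]
    return out
```

Flattening the foreground to a single depth avoids the patchy depth values that pixel-wise transfer tends to leave inside a moving object.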
5. Optimize the obtained depth image.
In step 5 we use the depth maps Ci (i = 1...k) of the whole group of candidate pictures to optimize each generated depth map.
5.1. The steps above have only completed the depth transfer and obtained a rough depth map. The depth image generated in step 4 is further optimized with the following objective:

E(D) = −log(P(D|L)) = Σi [ Et(Di) + α·Es(Di) + β·Ep(Di) ]

wherein L is the input picture and D is the depth map of the target image. The optimization seeks to reduce −log(P(D|L)) to a minimum, that is, to minimize the difference between D and each of the candidate pictures. Et is the data term, Es is the spatial smoothness term, Ep is the database prior depth term, and α and β are constants. The objective function is made as small as possible through successive iterations.
The term Et in step 5.1 measures the difference between the target picture and the candidate pictures and can be computed by the following formula:

Et(Di) = Σj wi^(j) [ φ(Di − ψj(C^(j))i) + γ·( φ(∇xDi − ∇xψj(C^(j))i) + φ(∇yDi − ∇yψj(C^(j))i) ) ]

wherein K is the number of candidate pictures, w is the weight of each candidate picture, γ is a weight-coefficient constant, and ψj is the fusion function of SIFT Flow in step 3.3: SIFT Flow first finds the correspondence between the two pictures, then the pixels at the corresponding positions in the candidate picture are moved into the target depth map; this whole process is defined as the fusion function, and every candidate picture has its own fusion function. The SIFT Flow algorithm comes from: Ce Liu, Jenny Yuen, Antonio Torralba, et al. SIFT Flow: Dense Correspondence across Different Scenes [M]// Computer Vision - ECCV 2008. Springer Berlin Heidelberg, 2008: 28-42.
The spatial smoothness term Es is composed of the gradients in the x and y directions and is computed by the following formula:

Es(Di) = Sx,i·φ(∇xDi) + Sy,i·φ(∇yDi)

wherein Sx,i and Sy,i are edge-aware weights computed from the color gradients of the input image.
The following formula computes the database prior term, wherein Pi is the average depth value of the i-th pixel over all pictures in the database:
Ep(Di) = φ(Di − Pi)
5.2. Take the depth image finally obtained in step 4 as the initial value. In practice, the parameters are adjusted continuously through each iteration until the maximum number of iterations is reached or E(D) attains its minimum; the D value obtained at that point is the optimized depth map.
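A simplified sketch of the objective E(D) of step 5.1, assuming the candidate depth maps have already been warped by their SIFT Flow fusion functions and using uniform rather than edge-aware smoothness weights; the values of alpha, beta and gamma are illustrative, not taken from the patent:

```python
import numpy as np

EPS = 1e-4

def phi(t):
    # Robust penalty phi(t) = sqrt(t^2 + eps^2), shared by all three terms.
    return np.sqrt(t * t + EPS * EPS)

def energy(D, candidates, weights, prior, alpha=10.0, beta=0.5, gamma=1.0):
    """Score a depth map D against E(D) = sum_i [Et + alpha*Es + beta*Ep].
    candidates: list of already-warped candidate depth maps C^(j);
    weights: per-candidate confidences w^(j); prior: average depth map P."""
    Et = sum(
        w * (phi(D - C).sum()
             + gamma * (phi(np.diff(D - C, axis=1)).sum()
                        + phi(np.diff(D - C, axis=0)).sum()))
        for C, w in zip(candidates, weights))
    # Uniform smoothness weights; the patent uses edge-aware weights Sx, Sy.
    Es = phi(np.diff(D, axis=1)).sum() + phi(np.diff(D, axis=0)).sum()
    Ep = phi(D - prior).sum()
    return Et + alpha * Es + beta * Ep
```

An optimizer (e.g. gradient descent or iteratively reweighted least squares) would then iterate D from the step-4 initial value until E(D) stops decreasing, as described in step 5.2.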
The above description of the embodiment is intended to help those skilled in the art understand and apply the present invention. Persons skilled in the art can obviously make various modifications to the above embodiment with ease, and apply the general principles described herein to other embodiments without inventive effort. Therefore, the invention is not restricted to the above embodiment; improvements and modifications made to the present invention by those skilled in the art according to this disclosure should all fall within the protection scope of the present invention.
Claims (9)
1. A video image depth extraction method based on a scene sample library, comprising the following steps:
(1) building an image library, the image library containing a large number of RGB images and their corresponding depth maps, the RGB images coming from multiple scenes;
(2) performing feature extraction on the input image and the RGB images in the image library to obtain the GIST feature vectors and optical-flow feature vectors of these images, then computing the similarity value between the input image and each RGB image in the image library;
(3) choosing from the image library several RGB images with the smallest similarity value to the input image as candidate pictures, ensuring that the candidate pictures each come from different scenes, then fusing the depth maps corresponding to the candidate pictures at pixel level into one depth picture U;
(4) recovering the foreground target in depth picture U to obtain depth picture D*;
(5) with depth picture D* as the initial value, minimizing the following objective function to obtain the depth map D corresponding to the input image:

E(D) = Σi [ Et(Di) + α·Es(Di) + β·Ep(Di) ]

wherein Di is the depth value of the i-th pixel in depth map D, Et(Di) is the data term for Di, Es(Di) is the spatial smoothness term for Di, Ep(Di) is the image-library depth-prior term for Di, α and β are preset constants, i is a natural number with 1 ≤ i ≤ N, and N is the total number of pixels in depth map D.
2. The video image depth extraction method according to claim 1, characterized in that each scene corresponds to an independent shot, and the RGB images of the same scene are captured from its corresponding shot.
3. The video image depth extraction method according to claim 1, characterized in that in step (2) the similarity value between the input image and each RGB image in the image library is computed by the following formula:
Similarity = (1-ω)·||G1-G2|| + ω·||F1-F2||
wherein Similarity is the similarity value between the input image and the RGB image, G1 and F1 are respectively the GIST feature vector and optical-flow feature vector of the input image, G2 and F2 are respectively the GIST feature vector and optical-flow feature vector of the RGB image, and ω is a preset weight coefficient.
4. The video image depth extraction method according to claim 1, characterized in that, before choosing candidate pictures, step (3) merges the RGB images of similar scenes in the image library into one scene library, and then ensures when choosing candidate pictures that no two candidate pictures come from the same scene library.
5. The video image depth extraction method according to claim 1, characterized in that in step (3) the depth maps corresponding to the candidate pictures are fused at pixel level into one depth picture U using the SIFT Flow algorithm.
6. The video image depth extraction method according to claim 1, characterized in that the detailed process of recovering the foreground target in depth picture U in step (4) is: first, extracting the foreground target of the input image and binarizing the foreground and background of the input image to obtain the foreground template M of the input image; then, outlining the foreground target region in depth picture U based on foreground template M; finally, revising the depth value of all pixels of the foreground target region in depth picture U to the depth value of the lowest point of the foreground target region, thereby obtaining depth picture D*.
7. The video image depth extraction method according to claim 1, characterized in that the expression of the data term Et(Di) is as follows:

Et(Di) = Σj wi^(j) [ φ(Di − ψj(C^(j))i) + γ·( φ(∇xDi − ∇xψj(C^(j))i) + φ(∇yDi − ∇yψj(C^(j))i) ) ]

wherein Ci^(j) denotes the depth value of the i-th pixel of the depth map corresponding to the j-th candidate picture, ∇x and ∇y are respectively the gradient operators in the X direction and Y direction, ψj is the fusion function of the SIFT Flow algorithm for the depth map of the j-th candidate picture, γ is a preset constant, φ(t) = √(t² + ε²) where t is the variable and ε = 10⁻⁴, wi^(j) is the weight corresponding to depth value Ci^(j), j is a natural number with 1 ≤ j ≤ K, and K is the number of candidate pictures.
8. The video image depth extraction method according to claim 1, characterized in that the expression of the spatial smoothness term Es(Di) is as follows:
Es(Di) = Sx,i·φ(∇xDi) + Sy,i·φ(∇yDi)
wherein φ(t) = √(t² + ε²) where t is the variable and ε = 10⁻⁴, ∇x and ∇y are respectively the gradient operators in the X direction and Y direction, and Sx,i and Sy,i are weights computed from Li, the color value of the i-th pixel of the input image.
9. The video image depth extraction method according to claim 1, characterized in that the expression of the image-library depth-prior term Ep(Di) is as follows:
Ep(Di) = φ(Di − Pi)
wherein φ(t) = √(t² + ε²) where t is the variable and ε = 10⁻⁴, and Pi is the average depth value of the i-th pixel over all depth maps in the image library.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610847113.4A CN106548494A (en) | 2016-09-26 | 2016-09-26 | A kind of video image depth extraction method based on scene Sample Storehouse |
PCT/CN2016/109982 WO2018053952A1 (en) | 2016-09-26 | 2016-12-14 | Video image depth extraction method based on scene sample library |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610847113.4A CN106548494A (en) | 2016-09-26 | 2016-09-26 | A kind of video image depth extraction method based on scene Sample Storehouse |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106548494A true CN106548494A (en) | 2017-03-29 |
Family
ID=58369410
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610847113.4A Pending CN106548494A (en) | 2016-09-26 | 2016-09-26 | A kind of video image depth extraction method based on scene Sample Storehouse |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN106548494A (en) |
WO (1) | WO2018053952A1 (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108492364A (en) * | 2018-03-27 | 2018-09-04 | 百度在线网络技术(北京)有限公司 | The method and apparatus for generating model for generating image |
CN110322499A (en) * | 2019-07-09 | 2019-10-11 | 浙江科技学院 | A kind of monocular image depth estimation method based on multilayer feature |
CN112446822A (en) * | 2021-01-29 | 2021-03-05 | 聚时科技(江苏)有限公司 | Method for generating contaminated container number picture |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110674837A (en) * | 2019-08-15 | 2020-01-10 | 深圳壹账通智能科技有限公司 | Video similarity obtaining method and device, computer equipment and storage medium |
US10902607B1 (en) * | 2019-12-06 | 2021-01-26 | Black Sesame International Holding Limited | Fast instance segmentation |
CN112637614B (en) * | 2020-11-27 | 2023-04-21 | 深圳市创成微电子有限公司 | Network direct broadcast video processing method, processor, device and readable storage medium |
CN112560998A (en) * | 2021-01-19 | 2021-03-26 | 德鲁动力科技(成都)有限公司 | Amplification method of few sample data for target detection |
CN112967365B (en) * | 2021-02-05 | 2022-03-15 | 浙江大学 | Depth map generation method based on user perception optimization |
CN115496863B (en) * | 2022-11-01 | 2023-03-21 | 之江实验室 | Short video generation method and system for scene interaction of movie and television intelligent creation |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103024420A (en) * | 2013-01-17 | 2013-04-03 | 宁波工程学院 | 2D-3D (two-dimension to three-dimension) conversion method for single images in RGBD (red, green and blue plus depth) data depth migration |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101223552A (en) * | 2005-08-17 | 2008-07-16 | Nxp股份有限公司 | Video processing method and device for depth extraction |
CN101815225B (en) * | 2009-02-25 | 2014-07-30 | 三星电子株式会社 | Method for generating depth map and device thereof |
CN103716615B (en) * | 2014-01-09 | 2015-06-17 | 西安电子科技大学 | 2D video three-dimensional method based on sample learning and depth image transmission |
- 2016
  - 2016-09-26 CN CN201610847113.4A patent/CN106548494A/en active Pending
  - 2016-12-14 WO PCT/CN2016/109982 patent/WO2018053952A1/en active Application Filing
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103024420A (en) * | 2013-01-17 | 2013-04-03 | 宁波工程学院 | 2D-3D (two-dimension to three-dimension) conversion method for single images in RGBD (red, green and blue plus depth) data depth migration |
Non-Patent Citations (3)
Title |
---|
KEVIN KARSCH 等: "DepthTransfer: Depth Extraction from Video Using Non-Parametric Sampling", 《IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE》 * |
XIAN, Jin et al.: "Animation Design and Production" (《动漫设计与制作》), 30 April 2007 *
ZHU, Yao et al.: "Single-image depth estimation based on non-parametric sampling" (基于非参数化采样的单幅图像深度估计), Application Research of Computers (《计算机应用研究》) *
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108492364A (en) * | 2018-03-27 | 2018-09-04 | 百度在线网络技术(北京)有限公司 | The method and apparatus for generating model for generating image |
CN110322499A (en) * | 2019-07-09 | 2019-10-11 | 浙江科技学院 | A kind of monocular image depth estimation method based on multilayer feature |
CN110322499B (en) * | 2019-07-09 | 2021-04-09 | 浙江科技学院 | Monocular image depth estimation method based on multilayer characteristics |
CN112446822A (en) * | 2021-01-29 | 2021-03-05 | 聚时科技(江苏)有限公司 | Method for generating contaminated container number picture |
CN112446822B (en) * | 2021-01-29 | 2021-07-30 | 聚时科技(江苏)有限公司 | Method for generating contaminated container number picture |
Also Published As
Publication number | Publication date |
---|---|
WO2018053952A1 (en) | 2018-03-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106548494A (en) | A kind of video image depth extraction method based on scene Sample Storehouse | |
US9438878B2 (en) | Method of converting 2D video to 3D video using 3D object models | |
JP6561216B2 (en) | Generating intermediate views using optical flow | |
US20140009462A1 (en) | Systems and methods for improving overall quality of three-dimensional content by altering parallax budget or compensating for moving objects | |
CN112434709A (en) | Aerial survey method and system based on real-time dense three-dimensional point cloud and DSM of unmanned aerial vehicle | |
Kawai et al. | Diminished reality considering background structures | |
CN102609950B (en) | Two-dimensional video depth map generation process | |
Zhang et al. | Personal photograph enhancement using internet photo collections | |
CN107274337A (en) | A kind of image split-joint method based on improvement light stream | |
CN105005964A (en) | Video sequence image based method for rapidly generating panorama of geographic scene | |
US20150195510A1 (en) | Method of integrating binocular stereo video scenes with maintaining time consistency | |
CN111553845A (en) | Rapid image splicing method based on optimized three-dimensional reconstruction | |
CN116563459A (en) | Text-driven immersive open scene neural rendering and mixing enhancement method | |
CN116977596A (en) | Three-dimensional modeling system and method based on multi-view images | |
US20110149039A1 (en) | Device and method for producing new 3-d video representation from 2-d video | |
Zhang et al. | Refilming with depth-inferred videos | |
CN107330856B (en) | Panoramic imaging method based on projective transformation and thin plate spline | |
CN101945299A (en) | Camera-equipment-array based dynamic scene depth restoring method | |
Gava et al. | Dense scene reconstruction from spherical light fields | |
Chugunov et al. | Shakes on a plane: Unsupervised depth estimation from unstabilized photography | |
Lin et al. | Iterative feedback estimation of depth and radiance from defocused images | |
Zhang et al. | Coherent video generation for multiple hand-held cameras with dynamic foreground | |
Liu et al. | Fog effect for photography using stereo vision | |
CN111695525B (en) | 360-degree clothing fitting display method and device | |
CN115578260A (en) | Attention method and system for direction decoupling for image super-resolution |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20170329 |