CN106331729B - Correlation-based adaptive-equalization frame rate up-conversion method for three-dimensional video - Google Patents


Info

Publication number
CN106331729B
CN106331729B (application numbers CN201610804697.7A, CN201610804697A)
Authority
CN
China
Prior art keywords
block
pixel
edge
depth map
motion vector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201610804697.7A
Other languages
Chinese (zh)
Other versions
CN106331729A (en)
Inventor
刘琚 (Liu Ju)
肖依凡 (Xiao Yifan)
曲爱喜 (Qu Aixi)
郭志鑫 (Guo Zhixin)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong University
Original Assignee
Shandong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong University filed Critical Shandong University
Priority to CN201610804697.7A priority Critical patent/CN106331729B/en
Publication of CN106331729A publication Critical patent/CN106331729A/en
Application granted granted Critical
Publication of CN106331729B publication Critical patent/CN106331729B/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/597Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106Processing image signals
    • H04N13/139Format conversion, e.g. of frame-rate or size

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The present invention exploits the correlation between the depth map and the texture map. Depth-map blocks are classified as edge blocks or flat blocks, and the motion vector of each macroblock is obtained, according to its label, with a block-matching criterion that incorporates texture gradient information. Motion-vector post-processing and adaptive interpolation are then applied to each type of macroblock, producing the depth-map interpolated frame and the texture-map interpolated frame simultaneously. Compared with traditional frame rate up-conversion methods, the invention strengthens the processing of depth-map edges: the interpolated depth map has better edge characteristics, and the quality of the interpolated texture map is also higher.

Description

Correlation-based adaptive-equalization frame rate up-conversion method for three-dimensional video
Technical field
The present invention relates to a frame rate up-conversion technique for three-dimensional video, and belongs to the field of image and multimedia signal processing.
Technical background
Free-viewpoint television presents a stereoscopic viewing effect to the audience through multiple views and has therefore developed rapidly. Because of transmission-bandwidth limits, current 3D video research and applications generally adopt the multi-view video plus depth (MVD) format. In three-dimensional video, a virtual view is rendered from a real-viewpoint texture map combined with its corresponding depth map through depth-image-based rendering (DIBR); the depth map is not viewed directly and serves only for synthesizing virtual views.
Frame rate up-conversion can break through the limitation of network transmission bandwidth: the frame rate is multiplied at the receiving end, improving video fluency and thus viewing quality. In essence, frame rate up-conversion is a linear interpolation process based on the preceding and following frames. Simple interpolation schemes such as frame repetition and frame averaging do not improve motion smoothness, so motion vectors are brought into the conversion: the intermediate frame is interpolated along the object's motion trajectory to obtain a smoother video. Motion-compensated frame rate up-conversion comprises three main steps: motion estimation, motion-vector post-processing, and motion-compensated interpolation.
In free-viewpoint television, the texture map is the image actually watched by the viewer, while the depth map supplements the texture map with depth information and can be used to synthesize the texture maps of other viewpoints. When frame rate up-conversion is applied to the texture maps of a viewpoint, the associated depth maps must undergo up-conversion by the same factor. A depth map represents the different depth levels of a scene with different gray values: the closer an object, the smaller its depth value, which converts to a larger image gray value. Places where the depth-map gray value changes are usually the boundaries between different objects in the scene, which we call edges. Depth-map edges play a very important role in DIBR and, to a large extent, determine the quality of the synthesized virtual view. Therefore, if the depth map is up-converted with the same method as the texture map, the image blur caused by motion mismatches and smoothing becomes edge blur and edge errors in the depth map. Correctly preserving edges is thus the key issue in depth-map frame rate up-conversion.
Summary of the invention
The present invention exploits the correlation between the depth map and the texture map: depth-map blocks are classified as edge blocks or flat blocks; the motion vector of each macroblock is obtained, according to its label, with a block-matching criterion that incorporates texture gradient information; motion-vector post-processing and adaptive interpolation are applied to each type of macroblock; and the depth-map and texture-map interpolated frames are obtained simultaneously. Compared with traditional frame rate up-conversion methods, the invention strengthens the processing of depth-map edges: the interpolated depth map has better edge characteristics and the quality of the interpolated texture map is also higher.
The technical solution adopted by the invention is as follows:
A correlation-based adaptive-equalization frame rate up-conversion method for three-dimensional video, characterized in that the method comprises the following steps:
Step 1: extract the depth map and texture map of the current moment and the next moment, and save each texture map with its depth map as an image pair;
Step 2: mark the depth-map blocks as edge blocks or flat blocks, and classify the pixels of each edge block using k-means clustering;
Step 3: perform motion estimation according to the pixel classification, using a block-matching criterion with an added texture weight for the block-matching search; apply conventional UMHS block-matching motion estimation to flat blocks and quadtree UMHS block-matching motion estimation to edge blocks, obtaining the original motion vectors;
Step 4: apply adaptive motion-vector post-processing to the pixels of each class; detect and repair outliers separately for flat blocks and edge blocks, obtaining accurate motion vectors;
Step 5: perform adaptive interpolation based on motion compensation;
Step 6: perform hole-filling interpolation based on edge judgment: classify the hole pixels and use edge-judgment-based hole filling to obtain the final interpolated texture map and depth map;
Step 7: synthesize the texture-map sequence and depth-map sequence respectively and output them.
Preferably, in step 2 the depth map is divided into image blocks of equal size, the variance of the pixel depth values within each block is computed, blocks whose variance exceeds a threshold are marked as edge blocks, and blocks below the threshold are marked as flat blocks. The macroblocks marked as edge blocks are then separated into foreground and background: k-means clustering is applied to the pixels of each edge macroblock, the pixel class with the smaller gray values is marked as background pixels, and the class with the larger gray values is marked as foreground pixels.
Preferably, in step 3 the sum of absolute differences with an added texture-gradient weight (TSAD) is used as the block-matching criterion, increasing the weight of texture. According to the edge/flat classification obtained in the previous step, conventional UMHS block-matching motion estimation is applied to macroblocks marked as flat blocks, and quadtree UMHS block-matching motion estimation is applied to macroblocks marked as edge blocks, obtaining the original motion vectors.
Preferably, in step 4 motion-vector post-processing is applied to flat blocks and edge blocks separately. For a flat block, its motion vector is first compared with those of the flat blocks in its eight-neighborhood: whether the current block's motion vector is abnormal is judged by comparison with the average motion vector, and if it is abnormal it is corrected by mean substitution. For an edge block, sub-macroblocks containing a single depth layer are first corrected with the eight-neighborhood mean of the same depth layer; sub-macroblocks containing both foreground and background are then corrected with the eight-neighborhood means of the background pixels and of the foreground pixels separately, yielding accurate motion vectors.
Preferably, in step 6 hole-filling interpolation based on edge judgment is performed: the gray values of four pixels equidistant from a hole pixel in the depth map are compared pairwise, and if the gray-value difference of any two of them exceeds a threshold, the hole is considered to lie in an edge region and backward-compensated interpolation is applied to it; otherwise the hole is considered to lie in a flat region and its pixel value is obtained directly as the average of its neighborhood pixels, yielding a complete interpolated frame. Hole filling is applied to the texture map with the same labels, yielding the texture-map interpolated frame.
The present invention exploits the correlation between the depth map and the texture map: the depth map is classified into edge blocks and flat blocks, and this classification is used in motion estimation, motion-vector post-processing, and motion-compensated interpolation. Pixel-level motion-vector estimation and smoothing are applied to edge pixel blocks, so the resulting motion vectors are more accurate, the edges of the compensated depth map are sharper, and the quality of the texture map is also higher.
Brief description of the drawings
Fig. 1 illustrates doubling the frame rate of the depth maps and texture maps of a three-dimensional video simultaneously.
Fig. 2 is the overall flow chart of the method of the present invention.
Fig. 3 shows the labeling result of macroblocks containing edge pixels in a depth map.
Fig. 4 is a schematic diagram of k-means clustering of a depth-map edge block.
Fig. 5 is a schematic diagram of the k-means clustering process for an edge macroblock.
Fig. 6 is a schematic diagram of the texture gradient of a texture-map macroblock.
Fig. 7 is a schematic diagram of quadtree motion estimation.
Fig. 8 is a schematic diagram of hole-type judgment.
Fig. 9 shows experimental results of the present invention: (a) the 32nd texture frame of Beergarden; (b) the 32nd texture frame of Beergarden synthesized by the present invention; (c) the 32nd depth map of Beergarden; (d) the 32nd depth map of Beergarden synthesized by the present invention.
Specific embodiment
The overall procedure of the proposed frame rate up-conversion method for three-dimensional video is shown in Fig. 1. First, the depth map is classified into edge and flat regions. According to the classification result, edge-adaptive motion estimation between the current frame and the following frame of the texture map yields the initial motion vectors. Motion-vector post-processing based on depth information then yields the optimal motion vectors. Finally, hole blocks are judged for edge regions and motion-compensated interpolation is completed. This achieves frame rate up-conversion of both the depth map and the texture map, effectively reduces erroneous interpolation at depth-map edges, and attains high-quality reconstruction.
The present invention is further detailed below in combination with specific embodiments (but not limited to these examples) and the accompanying drawings.
(1) Reading in video frames
(1) Read in the video frames. The t-th frame of the texture map is taken as the current frame, denoted f_t, with corresponding depth map d_t. The next original frame is taken as the reference frame and, since it occupies position t+2 in the doubled-rate sequence, is denoted f_{t+2}, with corresponding depth map d_{t+2}. The texture map to be interpolated is denoted f_{t+1} and the corresponding depth map d_{t+1}. Each texture frame with its corresponding depth map is treated as an image pair.
(2) image preprocessing
(1) Depth-map edge block marking:
Different objects in the same scene have different motion directions, and block-matching motion search easily places an object boundary inside a single search block, so that pixels that originally moved in different directions are assigned the same motion direction. A depth map shows an obvious change of depth value where two objects meet, so detecting depth-value variation can separate search blocks containing edge pixels from those without edges. The degree of depth variation within a block is measured by the variance σ of the depth values. As shown in formula 1, the depth map is divided into image blocks of equal size, and l(p_i) is the label of each block. If the variance of the depth values p_i in the current block exceeds a threshold Th_σ, the block is labeled an edge block, denoted 1; otherwise it is labeled a flat block, denoted 0, as shown in formula 2, where mbSize is the macroblock size and μ is the average depth value. Fig. 3 shows the detected depth-variation regions; the macroblocks shown in boxes are edge blocks.
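The variance-based edge/flat labeling described above can be sketched as follows. This is a toy illustration, not the patented implementation; the block size `mb_size=4` and threshold `th_var=100.0` are assumed values standing in for mbSize and Th_σ:

```python
# Illustrative sketch: label fixed-size depth-map blocks as edge (1) or
# flat (0) by thresholding the variance of the depth values in each block.

def label_blocks(depth, mb_size=4, th_var=100.0):
    """depth: 2-D list of depth values. Returns a grid of block labels:
    1 = edge block (high variance), 0 = flat block (low variance)."""
    h, w = len(depth), len(depth[0])
    labels = []
    for by in range(0, h, mb_size):
        row = []
        for bx in range(0, w, mb_size):
            vals = [depth[y][x]
                    for y in range(by, min(by + mb_size, h))
                    for x in range(bx, min(bx + mb_size, w))]
            mu = sum(vals) / len(vals)                       # block mean
            var = sum((v - mu) ** 2 for v in vals) / len(vals)
            row.append(1 if var > th_var else 0)
        labels.append(row)
    return labels
```

A block straddling a depth boundary gets a large variance and is labeled 1; uniform blocks are labeled 0.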
(2) Foreground-background separation:
In a depth map, an image block marked as an edge generally contains pixels from two depth layers; pixels with large gray values are considered foreground pixels and pixels with small gray values background pixels. K-means clustering is applied to the two kinds of pixels to separate foreground from background. The k-means clustering process is shown in Fig. 4. The depth-map image blocks are scanned progressively, and clustering starts at each macroblock labeled as an edge block: two starting points are chosen arbitrarily and the cluster centers are updated using the gray-value difference as the distance, until all pixels in the edge block are divided into two classes, where the class with the smaller gray values is labeled background points and the class with the larger gray values foreground points, as shown in Fig. 3.
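The two-class clustering can be sketched with plain Lloyd iterations on the scalar gray values. This is a hedged illustration; the min/max initialization and the iteration cap are assumptions, since the description only says two starting points are chosen arbitrarily:

```python
# Sketch of two-class k-means on the gray values of one edge block:
# the brighter cluster becomes foreground, the darker one background.

def split_foreground_background(pixels, iters=20):
    """pixels: flat list of depth/gray values from one edge block.
    Returns one label per pixel: 'fg' (brighter cluster) or 'bg'."""
    c_bg, c_fg = min(pixels), max(pixels)          # assumed initialization
    for _ in range(iters):
        bg = [p for p in pixels if abs(p - c_bg) <= abs(p - c_fg)]
        fg = [p for p in pixels if abs(p - c_bg) > abs(p - c_fg)]
        new_bg = sum(bg) / len(bg) if bg else c_bg  # updated centers
        new_fg = sum(fg) / len(fg) if fg else c_fg
        if new_bg == c_bg and new_fg == c_fg:       # converged
            break
        c_bg, c_fg = new_bg, new_fg
    return ['fg' if abs(p - c_fg) < abs(p - c_bg) else 'bg' for p in pixels]
```

For a block with two well-separated depth layers this converges in a couple of iterations.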
(3) Motion estimation based on depth-map edges
(1) Texture-enhanced matching criterion:
The present invention performs block-matching search with the fast motion estimation method Unsymmetrical-Cross Multi-Hexagon Search (UMHS), a hybrid block-matching search that is fast and not prone to falling into local minima. The texture-enhancement-based sum of absolute differences (TSAD) is used as the motion-estimation matching criterion: the sum of absolute differences between texture-map blocks serves as the main cost function, to which the texture weight of the current block (see Fig. 6) is added; formulas 3 to 5 give the final block-search criterion:
where
TSAD = SAD + γ·SAD_Texture (formula 5)
Here (x, y) is the pixel to be matched in the current texture frame f_t; SAD_Texture, the sum of absolute differences of the texture information, is formed from the absolute values of diagonal pixel differences in the reference texture frame; p_i denotes the current macroblock, p_iy and p_ix are its height and width, m is the length and width of the macroblock in the reference picture, and v is the optimal motion vector that minimizes the TSAD.
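The texture-enhanced cost of formula 5 can be sketched as follows. This is illustrative only: the diagonal-difference gradient and the value γ = 0.5 are assumptions consistent with, but not specified by, the description:

```python
# Sketch of TSAD = SAD + gamma * SAD_Texture for two square blocks.

def gradient(block):
    """Diagonal absolute-difference texture map of a square block."""
    n = len(block)
    return [[abs(block[y][x] - block[y + 1][x + 1])
             for x in range(n - 1)] for y in range(n - 1)]

def sad(a, b):
    """Sum of absolute differences between two same-size 2-D blocks."""
    return sum(abs(pa - pb) for ra, rb in zip(a, b) for pa, pb in zip(ra, rb))

def tsad(cur, ref, gamma=0.5):
    """Plain SAD plus gamma-weighted SAD over the texture-gradient maps."""
    return sad(cur, ref) + gamma * sad(gradient(cur), gradient(ref))
```

Two blocks with identical pixels but different local gradients would still incur a nonzero cost, which is the point of the texture term.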
(2) Quadtree motion estimation of edge blocks
For the blocks marked as edge blocks in the depth map, the corresponding blocks are found in the texture map. The current texture-map macroblock is decomposed by a quadtree, and motion estimation is performed separately for the four sub-macroblocks of each macroblock, still using the TSAD matching cost function. The best matching block of each sub-block is found, determining the optimal motion vector of each sub-block within the macroblock, as shown in Fig. 7.
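The quadtree step can be illustrated as below. For brevity this sketch uses exhaustive search instead of the UMHS pattern and plain SAD instead of TSAD; the macroblock size and search range are assumed values:

```python
# Sketch: split one edge macroblock into four sub-blocks and estimate a
# separate motion vector for each sub-block by block matching.

def get_block(frame, y, x, n):
    """n-by-n sub-block of a 2-D list at top-left corner (y, x)."""
    return [row[x:x + n] for row in frame[y:y + n]]

def sad(a, b):
    return sum(abs(p - q) for ra, rb in zip(a, b) for p, q in zip(ra, rb))

def subblock_mv(cur, ref, y, x, n, rng=2):
    """Best (dy, dx) for the n-by-n block at (y, x) within +/- rng."""
    blk = get_block(cur, y, x, n)
    best = (None, float('inf'))
    for dy in range(-rng, rng + 1):
        for dx in range(-rng, rng + 1):
            ry, rx = y + dy, x + dx
            if 0 <= ry <= len(ref) - n and 0 <= rx <= len(ref[0]) - n:
                c = sad(blk, get_block(ref, ry, rx, n))
                if c < best[1]:
                    best = ((dy, dx), c)
    return best[0]

def quadtree_mvs(cur, ref, y, x, mb=4):
    """Four motion vectors, one per quadrant of the macroblock at (y, x)."""
    h = mb // 2
    return [subblock_mv(cur, ref, y + oy, x + ox, h)
            for oy in (0, h) for ox in (0, h)]
```

Each quadrant can thus follow a different object, which is what the per-sub-block vectors buy at depth edges.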
(4) Motion-vector post-processing based on the depth map
(1) Flat-block vector post-processing
A. Detecting abnormal motion vectors
Determine the flat blocks in the current flat block's eight-neighborhood and compute the average motion vector of the current block and the surrounding flat blocks, as shown in formula 6, where (x, y) is the current flat block, p_i is a block in its eight-neighborhood, and l(p_i) = 1 indicates a flat block. If the difference D_c between the current block's vector and the average motion vector is greater than the mean difference D_ave between the surrounding flat blocks' vectors and the average motion vector, the current block is judged to be abnormal, as shown in the formula.
B. Correcting abnormal vectors
For an abnormal flat-region macroblock, the unreliable motion vector is corrected with the SAD-weighted average of the flat blocks in the surrounding neighborhood, as shown in formula 10, where ω_τ(p_i) is the SAD weight of a neighborhood block, as shown in formula 11.
Here p_j is a pixel of the surrounding neighborhood N_m(P), l(p_i) is the label of the current block, and v(p_i) is the original motion vector of the current block.
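The flat-block post-processing can be sketched as follows. The inverse-SAD weighting is an assumption standing in for the weight ω_τ(p_i) of formula 11; the rest follows the outlier test described above:

```python
# Sketch: flag a flat block's vector as abnormal when its distance to the
# neighborhood mean exceeds the neighbors' own mean distance, then replace
# it by a weighted average of the neighbor vectors.

def dist(a, b):
    """L1 distance between two motion vectors."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def postprocess_flat(v_cur, neighbors):
    """neighbors: list of (motion_vector, sad) pairs from flat 8-neighbors.
    Returns v_cur if reliable, else a weighted mean of neighbor vectors."""
    vs = [v for v, _ in neighbors]
    v_avg = (sum(v[0] for v in vs) / len(vs), sum(v[1] for v in vs) / len(vs))
    d_cur = dist(v_cur, v_avg)                           # D_c
    d_ave = sum(dist(v, v_avg) for v in vs) / len(vs)    # D_ave
    if d_cur <= d_ave:
        return v_cur                                     # vector kept
    w = [1.0 / (1.0 + s) for _, s in neighbors]          # assumed weights
    tw = sum(w)
    return (sum(wi * v[0] for wi, v in zip(w, vs)) / tw,
            sum(wi * v[1] for wi, v in zip(w, vs)) / tw)
```

A vector far from a tight neighborhood consensus is pulled back onto it; a vector inside the consensus spread is left alone.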
(2) Edge-block vector post-processing
Because edge blocks use quadtree motion estimation, the current edge block contains four motion vectors. Using the foreground/background pixel labels obtained during preprocessing, each edge macroblock is handled as follows. If a sub-macroblock is entirely foreground or entirely background, the vectors of the flat blocks in the macroblock's eight-neighborhood that lie in the same depth layer are averaged, and the mean vector is assigned to all pixels of that sub-macroblock. For a sub-macroblock containing both background and foreground pixels, the foreground pixels of all sub-macroblocks are assigned the mean vector of the all-foreground macroblocks in the sub-macroblock's eight-neighborhood; likewise, the background pixels are assigned the mean vector of the all-background macroblocks in the eight-neighborhood.
(5) Interpolation based on motion compensation
(1) Overlapped-block interpolation
If one and only one motion vector points to an interpolated pixel, the current pixel is obtained by forward compensation, i.e. λ1 = 1 and λ2 = 0 in formula 12. If several motion vectors point to the interpolated pixel, the current pixel is represented by the average over the macroblocks whose TSAD between the pixel's block and the following frame is smallest. Here f_{t+1}(x, y) denotes the texture-map interpolated frame, f_t and f_{t+2} are the current and reference texture frames; D_{t+1}(x, y) denotes the depth-map interpolated frame, D_t and D_{t+2} are the current and reference depth maps; and v_x and v_y are the horizontal and vertical components of the obtained motion vector.
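The compensation of formula 12 can be sketched as below, assuming the vector v spans f_t to f_{t+2}, so the interpolated frame sits at half the displacement; λ1/λ2 select forward (1, 0) or backward (0, 1) compensation as in the text. The integer halving of the vector is a simplification:

```python
# Sketch of formula 12: fetch the interpolated pixel from the current
# frame (forward) and/or the reference frame (backward), weighted by
# lam1 and lam2.

def compensate(ft, ft2, x, y, v, lam1, lam2):
    """ft, ft2: 2-D frames; (x, y): interpolated pixel; v = (vx, vy)."""
    vx, vy = v
    fwd = ft[y - vy // 2][x - vx // 2]      # forward-compensated sample
    bwd = ft2[y + vy // 2][x + vx // 2]     # backward-compensated sample
    return lam1 * fwd + lam2 * bwd
```

With (λ1, λ2) = (1, 0) the pixel comes entirely from f_t along the vector; with (0, 1) it comes entirely from f_{t+2}.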
(2) Hole-type decision
For an interpolated pixel to which no motion vector points, first determine whether it lies in a flat region or at an edge. Since the extent of a hole usually does not exceed the width of one search macroblock, a fast edge judgment can be made from the points one macroblock length m above, below, left and right of the hole point (Fig. 8), as shown in formula 13, where d(a1, b1) and d(a2, b2) denote the depth values of any two of the four points. If the depth difference between any two of the four points exceeds the threshold Th_d, the hole point is considered to lie at an edge; otherwise it is considered to lie in a flat region.
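The hole-type test of formula 13 can be sketched as follows; the distance m and threshold th_d are assumed parameter values:

```python
# Sketch: classify a hole pixel by comparing the depth values of the four
# points at distance m above, below, left and right of it, pairwise.

from itertools import combinations

def hole_on_edge(depth, x, y, m=4, th_d=20):
    """True if any pairwise depth difference among the four surrounding
    points exceeds th_d, i.e. the hole lies in an edge region."""
    pts = [depth[y - m][x], depth[y + m][x], depth[y][x - m], depth[y][x + m]]
    return any(abs(a - b) > th_d for a, b in combinations(pts, 2))
```

A hole straddling a foreground/background boundary trips the test; a hole inside a uniform depth region does not.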
(3) Hole-point motion-compensated interpolation
Since holes in edge regions are produced by relative motion between foreground and background, the occluded pixel exists in only one of the two frames; backward-compensated interpolation is therefore used for hole pixels in edge regions, i.e. λ1 = 0 and λ2 = 1 in formula 12. For a hole point in a flat region, the pixel value is obtained directly by averaging the non-hole points in its eight-neighborhood, as shown in formula 14, where p(x, y) is the current hole pixel, n is the number of non-hole pixels in the eight-neighborhood, and p_i are their pixel values.
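The flat-hole fill of formula 14 can be sketched as below, with holes marked as None in a toy image representation:

```python
# Sketch of formula 14: fill a flat-region hole with the average of the
# non-hole pixels in its 8-neighborhood.

def fill_flat_hole(img, x, y):
    """img: 2-D list where holes are None. Returns the fill value for
    the hole at (x, y), averaging its valid 8-neighbors."""
    vals = [img[y + dy][x + dx]
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if not (dy == 0 and dx == 0) and img[y + dy][x + dx] is not None]
    return sum(vals) / len(vals)
```

Because the hole was judged flat, the neighborhood average is a safe estimate and introduces no edge blur.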
Two groups of stereoscopic video sequences, Beergarden (512*384) and BookArrival (512*384), were selected for the experiments, and the proposed method was compared with frame rate up-conversion based on trilateral filtering and with frame rate up-conversion based on full search. The evaluation criteria are the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM); larger values indicate better interpolated-frame quality. The results are shown in Table 1: the interpolated frames obtained by the present application have better quality than those of the other two frame rate up-conversion schemes and effectively address problems such as holes.
Table 1

Claims (4)

1. A correlation-based adaptive-equalization frame rate up-conversion method for three-dimensional video, characterized in that the method comprises the following steps:
Step 1: extract the depth map and texture map of the current moment and the next moment, and save each texture map with its depth map as an image pair;
Step 2: divide the depth map into image blocks of equal size, mark each image block as an edge block or a flat block, and classify each pixel of the edge blocks as foreground or background using k-means clustering;
Step 3: perform motion estimation according to the pixel classification, using a block-matching criterion with an added texture weight for the block-matching search; apply conventional UMHS block-matching motion estimation to flat blocks and quadtree UMHS block-matching motion estimation to edge blocks, obtaining the original motion vectors;
Step 4: apply adaptive motion-vector post-processing to the pixels of each class, detecting and repairing outliers separately for flat blocks and edge blocks. For a flat block, its motion vector is first compared with those of the flat blocks in its eight-neighborhood: whether the current block's motion vector is abnormal is judged by comparison with the average motion vector, and if abnormal it is corrected by mean substitution. For an edge block, sub-macroblocks containing a single depth layer are first corrected with the eight-neighborhood mean of the same depth layer, and sub-macroblocks containing both foreground and background are corrected with the eight-neighborhood means of the background pixels and of the foreground pixels separately, obtaining accurate motion vectors;
Step 5: perform adaptive interpolation based on motion compensation;
Step 6: perform hole-filling interpolation based on edge judgment: classify the hole pixels and use edge-judgment-based hole filling to obtain the final interpolated texture map and depth map;
Step 7: synthesize the texture-map sequence and depth-map sequence respectively and output them.
2. The correlation-based adaptive-equalization frame rate up-conversion method for three-dimensional video according to claim 1, characterized in that: in step 2 the depth map is divided into image blocks of equal size, the variance of the pixel depth values in each block is computed, blocks whose variance exceeds a threshold are marked as edge blocks and blocks below the threshold as flat blocks; the macroblocks marked as edge blocks are separated into foreground and background by k-means clustering of their pixels, where the pixel class with the smaller gray values is marked as background pixels and the class with the larger gray values as foreground pixels.
3. The correlation-based adaptive-equalization frame rate up-conversion method for three-dimensional video according to claim 1, characterized in that: in step 3 the sum of absolute differences with an added texture-gradient weight (TSAD) is used as the block-matching criterion, increasing the weight of texture; according to the edge/flat classification obtained in step 2, conventional UMHS block-matching motion estimation is applied to macroblocks marked as flat blocks and quadtree UMHS block-matching motion estimation to macroblocks marked as edge blocks, obtaining the original motion vectors.
4. The correlation-based adaptive-equalization frame rate up-conversion method for three-dimensional video according to claim 1, characterized in that: in step 6, hole-filling interpolation based on edge judgment is used: the depth values of four pixels equidistant from a hole pixel in the depth map are compared pairwise, and if the difference of any two of them exceeds a threshold the hole is considered to lie in an edge region and backward-compensated interpolation is applied to it; otherwise the hole is considered to lie in a flat region and its pixel value is obtained directly as the average of its neighborhood pixels, yielding a complete interpolated frame; hole filling is applied to the texture map with the same labels, yielding the texture-map interpolated frame.
CN201610804697.7A 2016-09-06 2016-09-06 Correlation-based adaptive-equalization frame rate up-conversion method for three-dimensional video Expired - Fee Related CN106331729B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610804697.7A CN106331729B (en) 2016-09-06 2016-09-06 Correlation-based adaptive-equalization frame rate up-conversion method for three-dimensional video

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610804697.7A CN106331729B (en) 2016-09-06 2016-09-06 Correlation-based adaptive-equalization frame rate up-conversion method for three-dimensional video

Publications (2)

Publication Number Publication Date
CN106331729A CN106331729A (en) 2017-01-11
CN106331729B true CN106331729B (en) 2019-04-16

Family

ID=57787480

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610804697.7A Expired - Fee Related CN106331729B (en) 2016-09-06 2016-09-06 Correlation-based adaptive-equalization frame rate up-conversion method for three-dimensional video

Country Status (1)

Country Link
CN (1) CN106331729B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3646604B1 (en) * 2017-06-30 2024-10-16 InterDigital VC Holdings, Inc. Weighted to spherically uniform psnr for 360-degree video quality evaluation using cubemap-based projections
CN110650346B (en) * 2019-09-26 2022-04-22 西安邮电大学 3D-HEVC depth map motion estimation parallel implementation method and structure

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120176536A1 (en) * 2011-01-12 2012-07-12 Avi Levy Adaptive Frame Rate Conversion
CN103905812A (en) * 2014-03-27 2014-07-02 北京工业大学 Texture/depth combination up-sampling method
CN103997653A (en) * 2014-05-12 2014-08-20 上海大学 Depth video encoding method based on edges and oriented toward virtual visual rendering
CN104754359B (en) * 2015-01-26 2017-07-21 清华大学深圳研究生院 A kind of depth map encoding distortion prediction method of Two Dimensional Free viewpoint video

Also Published As

Publication number Publication date
CN106331729A (en) 2017-01-11

Similar Documents

Publication Publication Date Title
US9269153B2 (en) Spatio-temporal confidence maps
CN101682794B (en) Method, apparatus and system for processing depth-related information
US7742657B2 (en) Method for synthesizing intermediate image using mesh based on multi-view square camera structure and device using the same and computer-readable medium having thereon program performing function embodying the same
CN101640809B (en) Depth extraction method of merging motion information and geometric information
CN102075779B (en) Intermediate view synthesizing method based on block matching disparity estimation
US20090129667A1 (en) Device and method for estimatiming depth map, and method for generating intermediate image and method for encoding multi-view video using the same
CN109690620A (en) Threedimensional model generating means and threedimensional model generation method
CN102263957B (en) Search-window adaptive parallax estimation method
CN106210449A (en) The frame rate up-conversion method for estimating of a kind of Multi-information acquisition and system
US9661307B1 (en) Depth map generation using motion cues for conversion of monoscopic visual content to stereoscopic 3D
Wu et al. A novel method for semi-automatic 2D to 3D video conversion
WO2013095248A1 (en) Method and processor for 3d scene representation
CN106331729B (en) Correlation-based adaptive-equalization frame rate up-conversion method for three-dimensional video
Lin et al. A 2D to 3D conversion scheme based on depth cues analysis for MPEG videos
Lee et al. High-Resolution Depth Map Generation by Applying Stereo Matching Based on Initial Depth Informaton
US20130176388A1 (en) Method and device for providing temporally consistent disparity estimations
Shih et al. A depth refinement algorithm for multi-view video synthesis
Jung et al. Superpixel matching-based depth propagation for 2D-to-3D conversion with joint bilateral filtering
Lin et al. Semi-automatic 2D-to-3D video conversion based on depth propagation from key-frames
Lin et al. Sprite generation for hole filling in depth image-based rendering
Lü et al. Virtual view synthesis for multi-view 3D display
Choi Hierarchical block-based disparity estimation
Cai et al. Image-guided depth propagation using superpixel matching and adaptive autoregressive model
JP4208142B2 (en) Hidden region interpolation method for free viewpoint images
Zhang et al. An adaptive object-based reconstruction of intermediate views from stereoscopic images

Legal Events

Date Code Title Description
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20190416