CN106331729A - Method of adaptively compensating stereo video frame rate up conversion based on correlation - Google Patents


Info

Publication number
CN106331729A
CN106331729A (application CN201610804697.7A)
Authority
CN
China
Prior art keywords
block
pixel
edge
depth map
motion vector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610804697.7A
Other languages
Chinese (zh)
Other versions
CN106331729B (en)
Inventor
刘琚
肖依凡
曲爱喜
郭志鑫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong University
Original Assignee
Shandong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong University filed Critical Shandong University
Priority to CN201610804697.7A priority Critical patent/CN106331729B/en
Publication of CN106331729A publication Critical patent/CN106331729A/en
Application granted granted Critical
Publication of CN106331729B publication Critical patent/CN106331729B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/597Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106Processing image signals
    • H04N13/139Format conversion, e.g. of frame-rate or size

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

Using the correlation between the depth map and the texture map, the depth map is classified into edge blocks and flat blocks; a block motion matching criterion augmented with texture gradient information yields the motion vectors of the different macroblocks; and, according to the labels, motion vector post-processing and adaptive interpolation are applied to the different macroblocks separately, producing the depth map interpolated frame and the texture map interpolated frame simultaneously. Compared with conventional frame rate up-conversion methods, the method of the invention strengthens the processing of depth map edges, so the interpolated depth map has better edge features and the interpolated texture map has better quality.

Description

A correlation-based adaptive-compensation stereo video frame rate up-conversion method
Technical field
The present invention relates to a frame rate up-conversion technique for stereo video, and belongs to the field of image and multimedia signal processing.
Technical background
Free-viewpoint television presents a three-dimensional stereoscopic viewing effect to the audience through multiple views, and has therefore developed rapidly. Because of transmission bandwidth limits, current 3D video research and applications generally use the multi-view video plus depth (MVD) format. In 3D video, a virtual view is obtained by depth-image-based rendering (DIBR), which combines the texture map of a real viewpoint with its corresponding depth map; the depth map is not viewed directly and serves only to synthesize virtual views.
Frame rate up-conversion can break through the restriction of network transmission bandwidth: the video frame rate is multiplied at the receiving end, which improves video fluency and viewing quality. In essence, frame rate up-conversion is a linear interpolation process based on the previous and following frames. Simple interpolation such as frame repetition and frame averaging does not improve motion smoothness, so motion vectors are taken into account: a smoother video is obtained by inserting intermediate frames along the motion trajectories of objects. Motion-compensated frame rate up-conversion comprises three key steps: motion estimation, motion vector post-processing, and motion-compensated interpolation.
In free-viewpoint television, the texture map is the image actually viewed by the audience, while the depth map, which supplements the texture map with depth information, can be used to synthesize the texture maps of other viewpoints. When frame rate up-conversion is applied to the texture map of a viewpoint, its associated depth map must correspondingly be up-converted by the same factor. The depth map represents the different depth levels of the scene with different gray values: the closer the distance, the smaller the depth value, which corresponds to a larger gray value when expressed as image intensity. Places where the depth map's gray value changes are usually the boundaries between different objects in the scene, which we call edges. Depth map edges play a highly important role in DIBR, because they determine the quality of the virtual view synthesized at a given depth. Therefore, if the depth map is up-converted with the same method as the texture map, the image blurring caused by motion matching errors and smoothing becomes edge blurring and edge errors in the depth map. Correctly preserving the edges thus becomes the key issue of depth map frame rate up-conversion.
Summary of the invention
The present invention exploits the correlation between the depth map and the texture map: the depth map is classified into edge blocks and flat blocks, a block matching criterion augmented with texture gradient information is adopted to obtain the motion vectors of the different macroblocks, and, according to the labels, motion vector post-processing and adaptive interpolation are applied to the different macroblocks separately, yielding the depth map interpolated frame and the texture map interpolated frame simultaneously. Compared with conventional frame rate up-conversion methods, the invention strengthens the processing of depth map edges; the interpolated depth map has better edge characteristics, and the interpolated texture map has better quality.
The technical solution adopted by the present invention is as follows:
A correlation-based adaptive-compensation stereo video frame rate up-conversion method, characterized in that the method comprises the following steps:
Step 1: extract the depth maps and texture maps of the current moment and the next moment, and save them as image pairs;
Step 2: perform edge block labelling on the depth map, and apply pixel classification based on k-means clustering to the edge blocks;
Step 3: perform motion estimation according to the pixel classification, using a block matching criterion with added texture weights for the block matching search; apply conventional UMHS block matching motion estimation and quadtree UMHS block matching motion estimation to the different block types respectively, obtaining the original motion vectors;
Step 4: apply adaptive motion vector post-processing to the differently classified pixels, performing outlier detection and repair on the flat blocks and edge blocks respectively, to obtain accurate motion vectors;
Step 5: perform adaptive motion-compensated interpolation;
Step 6: perform edge-decision-based hole-filling interpolation: classify the hole points and use the edge-decision-based hole-filling interpolation to obtain the finally inserted texture map and depth map;
Step 7: synthesize the texture sequence and the depth map sequence respectively and output them.
Preferably, in step 2 the depth map is divided into image blocks of equal size, and the variance of the pixel depth values in each block is computed; blocks whose variance exceeds a threshold are labelled edge blocks, and blocks below the threshold are labelled flat blocks. K-means clustering is used to separate foreground and background in the macroblocks labelled as edge: the pixels of each edge macroblock are clustered by k-means, the pixel class with the smaller gray values is marked as background pixels, and the pixel class with the larger gray values is marked as foreground pixels.
Preferably, in step 3 the sum of absolute differences with added texture gradient weights (TSAD) is adopted as the block matching criterion, increasing the proportion of texture. According to the edge block / flat block classification obtained in the previous step, macroblocks labelled as flat undergo conventional UMHS block matching motion estimation, and macroblocks labelled as edge undergo quadtree UMHS block matching motion estimation, yielding the original motion vectors.
Preferably, in step 4 motion vector post-processing is applied to the flat blocks and edge blocks separately. For a flat block, the differences between its motion vector and those of the flat blocks in its eight-neighborhood are first compared; whether the current block's motion vector is abnormal is judged by comparison with the mean motion vector, and if it is abnormal it is corrected by mean substitution. For an edge block, the sub-macroblocks containing only a single depth layer are first corrected with the eight-neighborhood mean of the same depth layer; then, for sub-macroblocks containing both foreground and background, eight-neighborhood mean correction is applied to the background pixels and the foreground pixels separately, yielding accurate motion vectors.
Preferably, in step 6 edge-decision-based hole-filling interpolation is performed: the gray values of four pixels equidistant from a hole pixel in the depth map are compared pairwise; if the gray value difference of any two of them exceeds a threshold, the hole is considered to lie in an edge region and is filled by backward motion-compensated interpolation; otherwise the hole is considered to lie in a flat region, and its pixel value is obtained directly as the neighborhood pixel mean, yielding a complete interpolated frame. The texture map is hole-filled with the same labels, yielding the texture map interpolated frame.
The present invention exploits the correlation between the depth map and the texture map: the depth map is classified into edge blocks and flat blocks, and this classification information is applied to motion estimation, motion vector post-processing, and motion-compensated interpolation. Pixel-level motion vector estimation and smoothing are applied to edge pixel blocks, so the obtained motion vectors are more accurate, the depth map edges obtained after compensation are clearer, and the quality of the texture map is higher.
Brief description of the drawings
Fig. 1 illustrates the simultaneous two-fold frame rate up-conversion of the depth map and the texture map in a stereo video.
Fig. 2 is the overall flow chart of the present method.
Fig. 3 shows the labelling result for the macroblocks containing edge pixels in the depth map.
Fig. 4 is a schematic diagram of k-means clustering of depth map edge blocks.
Fig. 5 is a schematic diagram of the k-means clustering process for edge macroblocks.
Fig. 6 is a schematic diagram of texture map macroblock texture gradients.
Fig. 7 is a schematic diagram of quadtree motion estimation.
Fig. 8 is a schematic diagram of the hole type decision.
Fig. 9 shows experimental results: (a) Beergarden frame 32 texture map; (b) Beergarden frame 32 texture map synthesized by the invention; (c) Beergarden frame 32 depth map; (d) Beergarden frame 32 depth map synthesized by the invention.
Detailed description of the invention
The overall flow of the stereo video frame rate up-conversion method proposed by the invention is shown in Fig. 1. First, edge classification is performed on the depth map; according to the classification result, edge-based adaptive motion estimation between the current frame and the following frame of the texture map yields the initial motion vectors; motion vector post-processing based on the depth information then yields the optimal motion vectors; finally, motion-compensated interpolation with an edge-region decision is applied to the hole blocks. This achieves frame rate up-conversion of both the depth map and the texture map, effectively reduces mis-interpolation at depth map edges, and attains high-quality reconstruction.
The invention is described in further detail below with reference to a specific embodiment (but not limited to this example) and the accompanying drawings.
(1) Reading in the video frames
(1) Read in the video frames. Save texture map frame t as the current frame, denoted f_t, with its corresponding depth map denoted d_t; frame t+2 serves as the reference frame, denoted f_{t+2}, with its corresponding depth map denoted d_{t+2}; the interpolated texture map is denoted f_{t+1} and the interpolated depth map d_{t+1}. Each texture map frame together with its corresponding depth map is recorded as one image pair.
(2) Image preprocessing
(1) depth map edge block labelling:
Different objects in the same scene have different motion directions, and block-matching motion search easily places objects at their boundaries into the same search block, giving pixels that originally had different motion directions the same motion direction. The depth map shows an obvious depth value change at the boundary between two objects, so detecting depth value changes can distinguish the search blocks containing edge pixels from the search blocks without edges. The degree of depth change within a block is represented by the depth variance σ². The depth map is divided into image blocks of equal size, and l(p_i) is the label of each block: if the variance of the depth values p_i in the current block exceeds a threshold Th_σ, the block is labelled an edge block (marked 1); otherwise it is labelled a flat block (marked 0), as shown in formulas 1 and 2:

σ² = (1 / mbSize²) · Σ (p_i − μ)²   (formula 1)
l(p_i) = 1 if σ² > Th_σ, else 0   (formula 2)

where mbSize is the macroblock size and μ is the mean depth value. Fig. 3 shows the detected depth change regions; the macroblocks marked with boxes are edge blocks.
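The variance test of formulas 1 and 2 can be sketched in a few lines of pure Python. This is an illustrative sketch, not the patent's implementation: the block size, the threshold `th_var`, and the function name `label_blocks` are our assumptions.

```python
def label_blocks(depth, mb_size=8, th_var=25.0):
    """Label each mb_size x mb_size depth block: 1 = edge block, 0 = flat block."""
    h, w = len(depth), len(depth[0])
    labels = {}
    for by in range(0, h, mb_size):
        for bx in range(0, w, mb_size):
            pix = [depth[y][x]
                   for y in range(by, min(by + mb_size, h))
                   for x in range(bx, min(bx + mb_size, w))]
            mu = sum(pix) / len(pix)                         # mean depth value
            var = sum((p - mu) ** 2 for p in pix) / len(pix)  # depth variance
            labels[(by // mb_size, bx // mb_size)] = 1 if var > th_var else 0
    return labels

# Tiny example: an 8x8 depth map, left half background (50), right half foreground (200).
depth = [[50] * 4 + [200] * 4 for _ in range(8)]
print(label_blocks(depth))   # the single 8x8 block straddles the edge -> {(0, 0): 1}
```

A block lying entirely in one depth layer has near-zero variance and is labelled flat; any block straddling an object boundary trips the threshold.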
(2) Foreground/background separation:
For the depth map, the image blocks labelled as edge generally contain pixels from two depth layers; pixels with large gray values are taken as foreground pixels and pixels with small gray values as background pixels. K-means clustering is applied to the two kinds of pixels to separate foreground from background. The k-means clustering process is shown in Fig. 4. The depth map image blocks are scanned progressively, and clustering starts at each macroblock labelled as edge: two starting points are chosen arbitrarily, and the gray difference is used as the distance to update the cluster centers until all pixels in the edge block are divided into two classes; the class with the small gray values is labelled background pixels and the class with the large gray values foreground pixels, as shown in Fig. 3.
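Since the clustering operates on scalar gray values with k = 2, it reduces to a one-dimensional two-means. A minimal sketch under that reading (the starting-center choice and iteration count are our assumptions, not values from the patent):

```python
def kmeans2_gray(pixels, iters=10):
    """Two-class 1-D k-means on gray values: 0 = background (dark), 1 = foreground (bright)."""
    c_bg, c_fg = min(pixels), max(pixels)        # two arbitrary starting centers
    for _ in range(iters):
        bg = [p for p in pixels if abs(p - c_bg) <= abs(p - c_fg)]
        fg = [p for p in pixels if abs(p - c_bg) > abs(p - c_fg)]
        if bg:
            c_bg = sum(bg) / len(bg)             # update cluster centers
        if fg:
            c_fg = sum(fg) / len(fg)
    return [0 if abs(p - c_bg) <= abs(p - c_fg) else 1 for p in pixels]

block = [52, 48, 55, 198, 203, 50, 201, 197]     # an edge block spanning two depth layers
print(kmeans2_gray(block))                       # [0, 0, 0, 1, 1, 0, 1, 1]
```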
(3) Motion estimation based on depth map edges
(1) Texture-enhanced matching criterion:
The invention uses the fast motion estimation method Unsymmetrical-Cross Multi-Hexagon Search (UMHS) for the block matching search. It is a hybrid block matching search with a fast search speed that is not easily trapped in local minima. The texture enhancement-based sum of absolute differences (TSAD) serves as the motion estimation matching criterion: the inter-block sum of absolute differences of the texture maps is the main cost function, and the texture weight of the current block (see Fig. 6) is added to form the final block search criterion, as shown in formulas 3 to 5:

SAD(v) = Σ_{(x,y)∈p_i} | f_t(x, y) − f_{t+2}(x + v_x, y + v_y) |   (formula 3)
SAD_Texture = Σ_{(x,y)∈p_i} | f_t(x, y) − f_t(x + 1, y + 1) |   (formula 4)
TSAD = SAD + γ · SAD_Texture   (formula 5)

where (x, y) is a pixel to be matched in the current texture frame f_t; SAD_Texture, the sum of absolute values of the diagonal pixel differences, is also called the sum of absolute differences of the texture information; p_i denotes the current macroblock, with p_iy and p_ix its height and width; m is the side length of the search area in the reference picture; and v is the motion vector at which TSAD is minimal.
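The TSAD cost can be sketched as follows. Note the assumptions: the diagonal-difference form of the texture term is our reading of "diagonal pixel differences", and the weight `gamma` is illustrative (the patent does not give a value here).

```python
def sad(cur, ref):
    """Plain sum of absolute differences between two equal-size blocks."""
    return sum(abs(a - b) for ra, rb in zip(cur, ref) for a, b in zip(ra, rb))

def sad_texture(cur):
    """Texture term: absolute differences of diagonally adjacent pixels in the current block."""
    return sum(abs(cur[y][x] - cur[y + 1][x + 1])
               for y in range(len(cur) - 1) for x in range(len(cur[0]) - 1))

def tsad(cur, ref, gamma=0.5):
    """Formula 5: TSAD = SAD + gamma * SAD_Texture."""
    return sad(cur, ref) + gamma * sad_texture(cur)

cur = [[10, 10], [10, 30]]
ref = [[12, 10], [10, 30]]
print(tsad(cur, ref))   # SAD = 2, SAD_Texture = |10 - 30| = 20, so TSAD = 2 + 0.5*20 = 12.0
```

Adding the texture term biases the search toward candidates whose match is reliable in textured regions, which is where block matching is most informative.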
(2) Quadtree motion estimation of edge blocks
For the blocks labelled as edge in the depth map, the corresponding blocks are located in the texture map. The current texture macroblock is decomposed by a quadtree, and motion estimation is performed on each of the four sub-macroblocks under the macroblock separately, still using the TSAD matching cost function; the best matching block of each sub-block is found, thereby determining the optimal motion vector of each sub-block in the macroblock, as shown in Fig. 7.
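The quadtree decomposition itself is a simple four-way split of the macroblock; each resulting sub-macroblock then runs its own matching search. A sketch of the split (function name and the 4×4 toy block are ours):

```python
def quadtree_split(block):
    """Split an N x N macroblock into its four quadtree sub-macroblocks."""
    n = len(block) // 2
    return [[row[:n] for row in block[:n]],   # top-left
            [row[n:] for row in block[:n]],   # top-right
            [row[:n] for row in block[n:]],   # bottom-left
            [row[n:] for row in block[n:]]]   # bottom-right

mb = [[ 1,  2,  3,  4],
      [ 5,  6,  7,  8],
      [ 9, 10, 11, 12],
      [13, 14, 15, 16]]
print(quadtree_split(mb)[1])   # top-right sub-macroblock: [[3, 4], [7, 8]]
```

Each of the four sub-blocks would then be matched independently against the reference frame by minimizing the TSAD cost, yielding up to four distinct motion vectors per edge macroblock.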
(4) Motion vector post-processing based on the depth map
(1) Flat block vector post-processing
A. Judging abnormal motion vectors
Determine the flat blocks in the current flat block's eight-neighborhood, and compute the mean motion vector of the current block and the surrounding flat blocks as shown in formula 6, where (x, y) is the current flat block, p_i is a block in its eight-neighborhood, and l(p_i) = 1 indicates a flat block. If the difference D_c between the current block's motion vector and the mean motion vector is greater than the mean difference D_ave between the surrounding flat blocks' vectors and the mean motion vector, the current block is judged to be abnormal.
B. Outlier correction
For an abnormal flat-region macroblock, the unreliable motion vector is corrected with the SAD-weighted average of the surrounding flat blocks in the neighborhood, as shown in formula 10, where ω_τ(p_i) is the SAD weight of a neighborhood block, as shown in formula 11, p_j is a pixel in the surrounding neighborhood N_m(P), l(p_i) is the current block's label, and v(p_i) is the current block's original motion vector.
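The outlier test and correction can be sketched as follows. This is a simplified sketch: it uses plain mean substitution (as the summary describes) rather than the SAD-weighted average of formulas 10–11, and the L1 distance and the D_c > D_ave reading of the abnormality test are our assumptions.

```python
def correct_flat_vector(v_cur, neigh):
    """Mean-substitution post-processing of a flat block's motion vector.

    neigh: motion vectors of the flat blocks in the eight-neighborhood.
    v_cur is judged abnormal when its distance D_c to the neighborhood mean
    exceeds the neighbors' own mean distance D_ave to that mean.
    """
    n = len(neigh)
    mean = (sum(v[0] for v in neigh) / n, sum(v[1] for v in neigh) / n)
    dist = lambda v: abs(v[0] - mean[0]) + abs(v[1] - mean[1])   # L1 distance
    d_ave = sum(dist(v) for v in neigh) / n
    return mean if dist(v_cur) > d_ave else v_cur                # replace only outliers

# An outlier (9, -7) among consistent neighbors is replaced by the neighborhood mean.
print(correct_flat_vector((9, -7), [(1, 0), (1, 1), (0, 1), (1, 0)]))   # (0.75, 0.5)
```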
(2) Edge block vector post-processing
Since quadtree motion estimation is used for the edge blocks, the current edge block contains four motion vectors. Using the foreground/background pixels labelled in the preprocessing, for each edge macroblock: if a sub-macroblock is entirely foreground or entirely background, the complete flat blocks at the same depth as the sub-macroblock in the current macroblock's eight-neighborhood are vector-averaged, and the mean vector is assigned to all pixels in that sub-macroblock. For sub-macroblocks containing both background pixels and foreground pixels, the foreground pixels of all sub-macroblocks take the vector average of the all-foreground macroblocks in the sub-macroblock's eight-neighborhood, and the mean vector is assigned to the foreground pixels; likewise, the background pixels of all sub-macroblocks take the vector average of the all-background macroblocks in the sub-macroblock's eight-neighborhood, and the mean vector is assigned to the background pixels.
(5) Motion-compensated interpolation
(1) Overlapped block interpolation
If there is one and only one motion vector pointing to the interpolation pixel, the current pixel position is obtained by forward compensation, i.e. λ₁ = 1 and λ₂ = 0 in formula 12. If multiple motion vectors point to the interpolation pixel, the current pixel is expressed as the average of the pixel in the macroblock with the minimum TSAD value and the corresponding pixel in the following frame, in which case λ₁ = λ₂ = 1/2. Here f_{t+1}(x, y) denotes the texture map interpolated frame, f_t and f_{t+2} are the texture map current frame and reference frame respectively; D_{t+1}(x, y) denotes the depth map interpolated frame, D_t and D_{t+2} are the depth map current frame and reference frame respectively; and v_x and v_y are the horizontal and vertical components of the obtained motion vector.
f_{t+1}(x, y) = λ₁ · f_t(x − ½v_x, y − ½v_y) + λ₂ · f_{t+2}(x + ½v_x, y + ½v_y)   (formula 12)
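Formula 12 for a single pixel can be sketched directly (a simplification: integer motion vectors with even components are assumed, so the half-vector offsets stay on the pixel grid; the function name is ours):

```python
def interp_pixel(f_t, f_t2, x, y, vx, vy, lam1=0.5, lam2=0.5):
    """Formula 12: blend the motion-shifted samples of the current and reference frames."""
    return (lam1 * f_t[y - vy // 2][x - vx // 2]
            + lam2 * f_t2[y + vy // 2][x + vx // 2])

f_t  = [[0, 10, 20, 30]]    # current frame (one row; horizontal motion only)
f_t2 = [[20, 30, 40, 50]]   # reference frame: the same ramp shifted by the motion
print(interp_pixel(f_t, f_t2, x=2, y=0, vx=2, vy=0))   # 0.5*f_t[0][1] + 0.5*f_t2[0][3] = 30.0
```

Setting (λ₁, λ₂) to (1, 0), (½, ½) or (0, 1) reproduces the forward, bidirectional, and backward cases described in the text.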
(2) Hole type decision
For interpolation pixels to which no motion vector points, it is first determined whether the pixel is a flat pixel or an edge pixel. Since the extent of a hole usually does not exceed the width of one search macroblock, a fast edge decision can be made with the four points one block length m away from the hole point, above, below, to the left and to the right (Fig. 8), as shown in formula 13: d(a₁, b₁) and d(a₂, b₂) denote the depth values of any two of the four points. If the depth difference between any two of the four points exceeds the threshold Th_d, the current point is considered a hole point at an edge; otherwise the current point is considered to lie in a flat region.
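The pairwise test of formula 13 can be sketched as follows (the probe points and pairwise comparison follow the text; the function name and the toy depth map are ours):

```python
def hole_type(depth, x, y, m, th_d):
    """Classify a hole pixel as 'edge' or 'flat' from the four points one block
    length m away (up, down, left, right), compared pairwise per formula 13."""
    pts = [depth[y - m][x], depth[y + m][x], depth[y][x - m], depth[y][x + m]]
    for i in range(4):
        for j in range(i + 1, 4):
            if abs(pts[i] - pts[j]) > th_d:
                return "edge"          # some pair straddles a depth discontinuity
    return "flat"

depth = [[50] * 4 + [200] for _ in range(5)]   # 5x5 map with a depth step at the last column
print(hole_type(depth, x=2, y=2, m=2, th_d=30))   # the right probe hits the step -> "edge"
```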
(3) Hole point motion-compensated interpolation
Since the holes in edge regions are produced by relative motion between foreground and background, and the hole pixels do not exist in the previous frame, backward motion-compensated interpolation is used for the pixels in edge regions, i.e. λ₁ = 0 and λ₂ = 1 in formula 12. For flat hole points, the hole pixel value is obtained directly by averaging the non-hole points in the eight-neighborhood, as shown in formula 14, where p(x, y) is the current hole pixel, n is the number of non-hole pixels in the eight-neighborhood, and the p_i are the corresponding pixel values:

p(x, y) = (1/n) · Σ_{i=1..n} p_i   (formula 14)
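The flat-hole average of formula 14 is a direct neighborhood mean; a sketch (the `holes` set representation is our assumption for marking unfilled pixels):

```python
def fill_flat_hole(frame, holes, x, y):
    """Formula 14: a flat hole pixel is the mean of the non-hole pixels
    in its eight-neighborhood."""
    vals = [frame[y + dy][x + dx]
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if (dy, dx) != (0, 0) and (x + dx, y + dy) not in holes]
    return sum(vals) / len(vals)

frame = [[10, 20, 30],
         [40,  0, 60],
         [70, 80, 90]]
print(fill_flat_hole(frame, holes={(1, 1)}, x=1, y=1))   # (10+20+30+40+60+70+80+90)/8 = 50.0
```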
Two stereo video sequences, Beergarden (512×384) and BookArrival (512×384), were selected for testing and compared with a frame rate up-conversion method based on trilateral filtering and a frame rate up-conversion method based on full search. The evaluation criteria are the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM); larger values indicate better interpolated frame quality. The results are shown in Table 1: the interpolated frames obtained by this application are of better quality than those of the other two frame rate up-conversion methods, and problems such as holes are effectively resolved.
Table 1

Claims (5)

1. A correlation-based adaptive-compensation stereo video frame rate up-conversion method, characterized in that the method comprises the following steps:
Step 1: extract the depth maps and texture maps of the current moment and the next moment, and save them as image pairs;
Step 2: perform edge block labelling on the depth map, and apply pixel classification based on k-means clustering to the edge blocks;
Step 3: perform motion estimation according to the pixel classification, using a block matching criterion with added texture weights for the block matching search; apply conventional UMHS block matching motion estimation and quadtree UMHS block matching motion estimation to the different block types respectively, obtaining the original motion vectors;
Step 4: apply adaptive motion vector post-processing to the differently classified pixels, performing outlier detection and repair on the flat blocks and edge blocks respectively, to obtain accurate motion vectors;
Step 5: perform adaptive motion-compensated interpolation;
Step 6: perform edge-decision-based hole-filling interpolation: classify the hole points and use the edge-decision-based hole-filling interpolation to obtain the finally inserted texture map and depth map;
Step 7: synthesize the texture sequence and the depth map sequence respectively and output them.
2. The correlation-based adaptive-compensation stereo video frame rate up-conversion method according to claim 1, characterized in that: in step 2 the depth map is divided into image blocks of equal size, and the variance of the pixel depth values in each block is computed; blocks whose variance exceeds a threshold are labelled edge blocks, and blocks below the threshold are labelled flat blocks; k-means clustering is used to separate foreground and background in the macroblocks labelled as edge: the pixels of each edge macroblock are clustered by k-means, the pixel class with the smaller gray values is marked as background pixels, and the pixel class with the larger gray values is marked as foreground pixels.
3. The correlation-based adaptive-compensation stereo video frame rate up-conversion method according to claim 1, characterized in that: in step 3 the sum of absolute differences with added texture gradient weights (TSAD) is adopted as the block matching criterion, increasing the proportion of texture; according to the edge block / flat block classification obtained in the previous step, macroblocks labelled as flat undergo conventional UMHS block matching motion estimation, and macroblocks labelled as edge undergo quadtree UMHS block matching motion estimation, yielding the original motion vectors.
4. The correlation-based adaptive-compensation stereo video frame rate up-conversion method according to claim 1, characterized in that: in step 4 motion vector post-processing is applied to the flat blocks and edge blocks separately; for a flat block, the differences between its motion vector and those of the flat blocks in its eight-neighborhood are first compared, whether the current block's motion vector is abnormal is judged by comparison with the mean motion vector, and an abnormal motion vector is corrected by mean substitution; for an edge block, the sub-macroblocks containing only a single depth layer are first corrected with the eight-neighborhood mean of the same depth layer, and then, for sub-macroblocks containing both foreground and background, eight-neighborhood mean correction is applied to the background pixels and the foreground pixels separately, yielding accurate motion vectors.
5. The correlation-based adaptive-compensation stereo video frame rate up-conversion method according to claim 1, characterized in that: in step 6 edge-decision-based hole-filling interpolation is used: the depth values of four pixels equidistant from a hole pixel in the depth map are compared pairwise; if the gray value difference of any two of them exceeds a threshold, the hole is considered to lie in an edge region and is filled by backward motion-compensated interpolation; otherwise the hole is considered to lie in a flat region, and its pixel value is obtained directly as the neighborhood pixel mean, yielding a complete interpolated frame; the texture map is hole-filled with the same labels, yielding the texture map interpolated frame.
CN201610804697.7A 2016-09-06 2016-09-06 Correlation-based adaptive-compensation stereo video frame rate up-conversion method Active CN106331729B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610804697.7A CN106331729B (en) 2016-09-06 2016-09-06 Correlation-based adaptive-compensation stereo video frame rate up-conversion method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610804697.7A CN106331729B (en) 2016-09-06 2016-09-06 Correlation-based adaptive-compensation stereo video frame rate up-conversion method

Publications (2)

Publication Number Publication Date
CN106331729A true CN106331729A (en) 2017-01-11
CN106331729B CN106331729B (en) 2019-04-16

Family

ID=57787480

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610804697.7A Active CN106331729B (en) 2016-09-06 2016-09-06 Correlation-based adaptive-compensation stereo video frame rate up-conversion method

Country Status (1)

Country Link
CN (1) CN106331729B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110650346A (en) * 2019-09-26 2020-01-03 西安邮电大学 3D-HEVC depth map motion estimation parallel implementation method and structure
CN110870317A (en) * 2017-06-30 2020-03-06 Vid拓展公司 Weighted-to-spherical homogeneous PSNR for 360 degree video quality assessment using cube map based projection

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120176536A1 (en) * 2011-01-12 2012-07-12 Avi Levy Adaptive Frame Rate Conversion
CN103905812A (en) * 2014-03-27 2014-07-02 北京工业大学 Texture/depth combination up-sampling method
CN103997653A (en) * 2014-05-12 2014-08-20 上海大学 Depth video encoding method based on edges and oriented toward virtual visual rendering
CN104754359A (en) * 2015-01-26 2015-07-01 清华大学深圳研究生院 Depth map coding distortion forecasting method for two-dimensional free viewpoint video

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120176536A1 (en) * 2011-01-12 2012-07-12 Avi Levy Adaptive Frame Rate Conversion
CN103905812A (en) * 2014-03-27 2014-07-02 北京工业大学 Texture/depth combination up-sampling method
CN103997653A (en) * 2014-05-12 2014-08-20 上海大学 Depth video encoding method based on edges and oriented toward virtual visual rendering
CN104754359A (en) * 2015-01-26 2015-07-01 清华大学深圳研究生院 Depth map coding distortion forecasting method for two-dimensional free viewpoint video

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
AIXI QU ET AL.: "A FRAME RATE UP-CONVERSION METHOD WITH QUADRUPLE MOTION VECTOR POST-PROCESSING", 《2016 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP)》 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110870317A (en) * 2017-06-30 2020-03-06 Vid拓展公司 Weighted-to-spherical homogeneous PSNR for 360 degree video quality assessment using cube map based projection
CN110870317B (en) * 2017-06-30 2023-05-23 Vid拓展公司 Method and apparatus for encoding 360 degree video content
CN110650346A (en) * 2019-09-26 2020-01-03 西安邮电大学 3D-HEVC depth map motion estimation parallel implementation method and structure
CN110650346B (en) * 2019-09-26 2022-04-22 西安邮电大学 3D-HEVC depth map motion estimation parallel implementation method and structure

Also Published As

Publication number Publication date
CN106331729B (en) 2019-04-16

Similar Documents

Publication Publication Date Title
US9269153B2 (en) Spatio-temporal confidence maps
CN102075779B (en) Intermediate view synthesizing method based on block matching disparity estimation
CN104219533B (en) A kind of bi-directional motion estimation method and up-conversion method of video frame rate and system
CN106210449B (en) Multi-information fusion frame rate up-conversion motion estimation method and system
US20140098089A1 (en) Image processing device, image processing method, and program
CN103369208B (en) Self adaptation interlace-removing method and device
CN101682794A (en) Method, apparatus and system for processing depth-related information
CN102665086A (en) Method for obtaining parallax by using region-based local stereo matching
CN102263957B (en) Search-window adaptive parallax estimation method
CN102254348A (en) Block matching parallax estimation-based middle view synthesizing method
CN102163334A (en) Method for extracting video object under dynamic background based on fisher linear discriminant analysis
CN103051857B (en) Motion compensation-based 1/4 pixel precision video image deinterlacing method
Wu et al. A novel method for semi-automatic 2D to 3D video conversion
CN104065946A (en) Cavity filling method based on image sequence
CN106447718B (en) A kind of 2D turns 3D depth estimation method
Lie et al. Key-frame-based background sprite generation for hole filling in depth image-based rendering
CN109493373A (en) A kind of solid matching method based on binocular stereo vision
US20130083993A1 (en) Image processing device, image processing method, and program
CN102761765B (en) Deep and repaid frame inserting method for three-dimensional video
CN103260032B (en) A kind of frame per second method for improving of stereoscopic video depth map sequence
CN106331729A (en) Method of adaptively compensating stereo video frame rate up conversion based on correlation
CN108668135A (en) A kind of three-dimensional video-frequency B hiding frames error methods based on human eye perception
Jin et al. Pixel-level view synthesis distortion estimation for 3D video coding
Lin et al. A 2D to 3D conversion scheme based on depth cues analysis for MPEG videos
CN102547343B (en) Stereoscopic image processing method, stereoscopic image processing device and display unit

Legal Events

Date Code Title Description
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant