CN105828106A - Non-integral multiple frame rate improving method based on motion information - Google Patents


Info

Publication number
CN105828106A
Authority
CN
China
Prior art keywords
interframe
frame
interleave
degree
integral multiple
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610235788.3A
Other languages
Chinese (zh)
Other versions
CN105828106B (en)
Inventor
刘琚
曲爱喜
肖依凡
杨本亚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SUZHOU RESEARCH INSTITUTE SHANDONG UNIVERSITY
Original Assignee
SUZHOU RESEARCH INSTITUTE SHANDONG UNIVERSITY
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SUZHOU RESEARCH INSTITUTE SHANDONG UNIVERSITY filed Critical SUZHOU RESEARCH INSTITUTE SHANDONG UNIVERSITY
Priority to CN201610235788.3A priority Critical patent/CN105828106B/en
Publication of CN105828106A publication Critical patent/CN105828106A/en
Application granted granted Critical
Publication of CN105828106B publication Critical patent/CN105828106B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/2343Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H04N21/234381Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements by altering the temporal resolution, e.g. decreasing the frame rate by frame skipping
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/4402Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N21/440281Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display by altering the temporal resolution, e.g. by frame skipping

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Television Systems (AREA)

Abstract

The invention provides a non-integer-multiple frame rate up-conversion method based on motion information. First, inter-frame correlation is used to judge whether a scene change occurs within a basic group; second, inter-frame motion speed and inter-frame correlation are combined to judge accurately how fast objects are moving, yielding the positions at which frames are to be inserted; finally, motion compensation is used to generate the interpolated frames at those positions, achieving non-integer-multiple frame rate up-conversion. The method breaks the restriction of integer-multiple frame rate up-conversion, makes full use of inter-frame motion information, and inserts frames where motion is fastest, so that the up-converted video plays more smoothly and the visual effect is enhanced; the interpolated frames are obtained by motion compensation, which reduces motion judder and blur. Finally, the superiority and feasibility of the method relative to traditional methods are analyzed and verified from three aspects: theory, subjective evaluation, and objective evaluation.

Description

A non-integer-multiple frame rate up-conversion method based on motion information
Technical field
The present invention relates to a method for converting the frame rate of a video signal using inter-frame motion information, and belongs to the field of multimedia processing.
Background technology
In multimedia processing, frame rate conversion is often required because of factors such as limited network bandwidth, the performance of the digital signal processors that implement encoding and decoding, and the need for interoperability between different multimedia products. For example: (1) when channel bandwidth is limited, the transmission frame rate must be reduced at the encoder (for example to 10 or 15 frames per second) so that only part of the video content is transmitted; the complete video can then be recovered at the decoder by frame rate up-conversion, which improves the utilization of the channel bandwidth while keeping the video quality acceptable to viewers; (2) interoperability between different multimedia products requires conversion between video formats with different frame rates. Film sources are played at 24 frames per second, the NTSC (National Television System Committee) television standard uses 30 frames per second, and liquid crystal displays need to be driven at 60 frames per second. To play a film source on a television set with high visual quality, the frame rate of the source must therefore be converted, that is, the frame rate must be raised.
Most current research on frame rate conversion concerns doubling the frame rate, that is, integer-multiple frame rate up-conversion. Such up-conversion is mostly intended to raise the frame rate in order to improve visual quality and increase the realism and smoothness of the video. In practice, however, frame rate conversion requires mutual conversion between a variety of frame rates, not necessarily a factor of two; in many cases a non-integer-multiple up-conversion is needed, that is, the up-conversion factor k (with source frame rate M frames/second and target frame rate N frames/second, k = N/M) is a fraction. For instance, in the bandwidth-limited example above, the 15 frames/second video at the decoder needs to be converted to 24 frames/second (k = 8/5); to play a film source on a liquid crystal display, 24 frames/second must be converted to 60 frames/second (k = 5/2). Moreover, in a two-times up-conversion all frames can be treated identically and the temporal distribution of inserted frames is fully symmetric, whereas in a non-integer-multiple up-conversion the temporal distribution becomes correspondingly more complicated. Non-integer-multiple frame rate up-conversion therefore faces two problems: one is determining the insertion positions, and the other is obtaining the interpolated frame at each insertion position. Most existing non-integer-multiple up-conversion methods determine the insertion positions at uniform intervals and obtain the interpolated frame by frame repetition or by a linear combination of the two adjacent frames. These methods are easy to implement in hardware, but they have unavoidable shortcomings: they shift the original sequence in time and change the motion trajectory of objects, which causes visible motion jumps.
Summary of the invention
In order to realize conversion between a variety of video formats, the invention provides a non-integer-multiple frame rate up-conversion method that removes the restriction to two-times up-conversion and is therefore more widely applicable. To solve the motion jump problem that traditional methods cause in non-integer-multiple frame rate conversion, the method of the invention is based on motion information: it makes full use of inter-frame motion information to determine the positions where frames should be inserted, inserting frames where motion is fast, so that the up-converted video moves more smoothly and the visual effect is enhanced; at the same time, the frames to be inserted are obtained by motion compensation, which reduces motion judder and blur.
In the invention, every M frames form one basic group, within which N − M frames are reconstructed, realizing a non-integer-multiple up-conversion of the video frame rate from M frames/second to N frames/second (N > M, N/M a fraction). First, the inter-frame correlation is used to judge whether a scene change occurs within the basic group; next, inter-frame motion speed and inter-frame correlation are combined to judge how fast objects are moving, yielding the N − M positions at which frames are to be inserted; finally, motion compensation is used to obtain the interpolated frame at each insertion position, so that non-integer-multiple frame rate up-conversion is achieved.
The technical solution of the present invention is as follows:
A non-integer-multiple frame rate up-conversion method based on motion information, characterized in that the method comprises the following steps:
Step 1: read the original video and split it into frames;
Step 2: compute in turn the correlation between each pair of adjacent frames in a basic group, i.e. the inter-frame correlation, and sort the results;
Step 3: perform scene detection within the basic group; at a scene change, obtain the interpolated frame by repeating the preceding frame;
Step 4: obtain in turn the block motion vectors between corresponding macroblocks of each pair of adjacent frames in the basic group and the block correlation between those corresponding macroblocks, combine the block motion vectors with the block correlation to obtain the inter-frame motion speed, and sort the results;
Step 5: first judge the object motion speed accurately from the inter-frame correlation and the inter-frame motion speed; then use the similarity between adjacent inter-frame motion speeds to judge the accuracy of the motion speeds obtained in Step 4; finally choose the insertion positions in the intervals where the motion is fast and the estimated speed is reliable;
Step 6: based on the block motion vectors between corresponding macroblocks obtained in Step 4, perform motion-compensated interpolation at each insertion position;
Step 7: combine the interpolated frames with the original frames to synthesize the high-frame-rate video.
Preferably, in Step 2 the absolute value of the correlation coefficient between each pair of adjacent frames in a basic group is taken in turn as the inter-frame correlation, sorted from small to large, and the corresponding frame positions are retained for use in Step 5.
Preferably, in Step 3 whether a scene change occurs within the basic group is judged from the relationship between the inter-frame correlation and a preset threshold.
Preferably, in Step 4 the diamond search method is used for motion estimation to obtain the motion vector of each macroblock, and the absolute value of the correlation coefficient between corresponding macroblocks is used as their block correlation; then a truncation method is applied to discard the vectors of macroblocks with high correlation, and the magnitudes of the remaining macroblock vectors are averaged to give the inter-frame motion speed; finally the inter-frame motion speeds are sorted from large to small and the corresponding frame positions are retained for use in Step 5.
Preferably, in Step 5 the sorted inter-frame correlations and inter-frame motion speeds from Steps 2 and 4 are combined to place insertions where the motion is fast; if the number of insertions so determined is smaller than the total number required, the similarity between adjacent inter-frame motion speeds is then used, and insertions are placed in intervals where the similarity is high and the speed is large, which ensures that the insertion positions lie where the motion trajectory changes quickly.
Preferably, in Step 6 bidirectional motion compensation is used to interpolate at the insertion positions obtained above, which reduces holes and overlap in the interpolated frames.
The method locates the insertion positions accurately and improves the quality of the interpolated frames, so that on the basis of a successful frame rate conversion the visual effect is enhanced, motion jumps are reduced, and the up-converted video moves more coherently.
Accompanying drawing explanation
Fig. 1 is the flow chart of the invention;
Fig. 2 is a schematic diagram of the diamond search templates;
Fig. 3 is a schematic diagram of bidirectional overlapped block motion compensation;
Fig. 4 is a schematic diagram of non-integer-multiple frame rate up-conversion by traditional method 1;
Fig. 5 is a schematic diagram of non-integer-multiple frame rate up-conversion by traditional method 2;
Fig. 6 is a schematic diagram of the defect of frame repetition;
Fig. 7 compares PSNR and SSIM for the simulation results on the Akiyo video.
Detailed description of the invention
The invention proposes a non-integer-multiple frame rate up-conversion method based on motion information. Every M consecutive frames of the original video form a basic group. First, scene detection is performed between adjacent frames within the basic group, and at a scene change the interpolated frame is obtained by repeating the preceding frame. Block motion estimation and block correlation are then combined, and a truncated-mean method is used to obtain the inter-frame motion speed. The insertion positions are determined from the inter-frame motion speed together with the inter-frame correlation; the remaining insertion positions are then determined from the similarity between adjacent inter-frame motion speeds and the magnitudes of the speeds. Finally, overlapped block bidirectional motion compensation is used to obtain the interpolated frame at each insertion position, achieving non-integer-multiple frame rate up-conversion while solving the hole and overlap problems in the interpolated frames; because the inserted frames lie on the original motion trajectory, motion blur and judder are eliminated and the visual effect after up-conversion is improved.
The overall flow of the method of the invention is shown in Fig. 1. Taking as an example an up-conversion from M frames/second to N frames/second (N > M, N/M a fraction), the specific steps are as follows:
(1) Read the video; every M frames read in form one basic group, and all frame operations are carried out in units of basic groups.
(2) Obtain the inter-frame correlation within the basic group: compute in turn the absolute value of the inter-frame correlation coefficient |ρ_t| as the inter-frame correlation γ_t, sort the values from small to large, and retain the corresponding frame positions. The inter-frame correlation is given by formula 1:
γ_t = |ρ_t| = |E[f_t·f_{t+1}] − E[f_t]·E[f_{t+1}]| / sqrt(D[f_t]·D[f_{t+1}])   (formula 1)
Note: f_t and f_{t+1} denote two adjacent frames in a basic group, and E[·] and D[·] denote the mean and the variance of the frame pixel values respectively.
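For illustration only (the following code is an editorial sketch, not part of the patent), formula 1 can be evaluated directly with NumPy; the function name interframe_correlation is ours:

    import numpy as np

    def interframe_correlation(f_t, f_t1):
        """Inter-frame correlation gamma_t = |rho_t| between two adjacent frames (formula 1)."""
        a = f_t.astype(np.float64).ravel()
        b = f_t1.astype(np.float64).ravel()
        cov = (a * b).mean() - a.mean() * b.mean()        # E[f_t f_{t+1}] - E[f_t] E[f_{t+1}]
        denom = np.sqrt(a.var() * b.var())                 # sqrt(D[f_t] D[f_{t+1}])
        return abs(cov / denom) if denom > 0 else 1.0      # constant frames treated as fully correlated

Sorting these values from small to large while remembering their frame indices gives the ordering used later in step (5).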
(3) Scene detection: whether a scene change occurs between two frames is judged from the relationship between the inter-frame correlation and a preset threshold.
The inter-frame correlation γ_t takes values in the range 0–1. γ_t between 0 and 0.3 means the two adjacent frames are weakly correlated, i.e. a scene change has probably occurred; γ_t between 0.3 and 0.9 means the correlation is ordinary, most likely because the motion between the two frames is large; γ_t between 0.9 and 1.0 means the two frames are highly correlated, most likely because the motion between them is small or absent. The threshold is therefore set to T = 0.3: when the correlation γ_t between frames f_t and f_{t+1} satisfies γ_t ≤ T, a scene change is considered to have occurred between them; the position is marked as an insertion position and the interpolated frame f_{t+1/2} is obtained by repeating the preceding frame, as in formula 2:
f_{t+1/2} = f_t   (formula 2)
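A minimal sketch of this scene-change test and the frame-repetition interpolation of formula 2, again with our own naming and using NumPy's corrcoef for the correlation:

    import numpy as np

    T = 0.3  # scene-change threshold on the inter-frame correlation

    def scene_change_interpolation(f_t, f_t1):
        """Return (is_scene_change, interpolated_frame). At a scene change the interpolated
        frame is simply a copy of the preceding frame (formula 2)."""
        gamma = abs(np.corrcoef(f_t.ravel().astype(float), f_t1.ravel().astype(float))[0, 1])
        if gamma <= T:
            return True, f_t.copy()
        return False, None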
(4) Obtain the inter-frame motion speed: obtain in turn the block motion vectors between corresponding macroblocks of adjacent frames in the basic group and the block correlation between the corresponding macroblocks, combine the two to obtain the inter-frame motion speed, and sort the results. This comprises the following sub-steps (an illustrative sketch of sub-steps a–c follows sub-step d below):
a. Perform motion estimation on the adjacent frames f_t and f_{t+1}, using the diamond search method (DS), which performs well in block-matching search. Diamond search uses a coarse-to-fine pair of templates: the large diamond (Fig. 2(a)) has a large search step and is used for coarse positioning so that the search does not get trapped in a local optimum, while the small diamond (Fig. 2(b)) is used for fine positioning to obtain a more accurate match. The DS block-matching algorithm yields the block motion vectors between corresponding macroblocks of the two adjacent frames.
b. Partition the adjacent frames f_t and f_{t+1} into the same macroblocks as in sub-step a, and compute the absolute value of the correlation coefficient between corresponding macroblocks as the block correlation; the formula and its meaning are the same as for the inter-frame correlation.
c. Combine the block motion vectors between corresponding macroblocks with the block correlation: use a truncation method to discard the motion vectors of macroblocks whose correlation exceeds 0.9, and average the magnitudes of the remaining block vectors to obtain the motion speed v_t between the adjacent frames f_t and f_{t+1}, t = 1, 2, ..., M−1.
d. Repeat the above three sub-steps for all M frames of the basic group to obtain the M−1 inter-frame motion speeds, then sort them from large to small and retain the corresponding frame positions.
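Purely as an illustration of sub-steps a–c, the sketch below implements integer-pixel diamond-search block matching, the co-located block correlation, and the truncated-mean motion speed for one pair of adjacent frames; the block size of 16 and all function names are our own choices, and the patent's reference implementation may differ in detail:

    import numpy as np

    N_BLK = 16  # macroblock size

    # Large and small diamond search patterns (offsets relative to the current centre).
    LDSP = [(0, 0), (-2, 0), (2, 0), (0, -2), (0, 2), (-1, -1), (-1, 1), (1, -1), (1, 1)]
    SDSP = [(0, 0), (-1, 0), (1, 0), (0, -1), (0, 1)]

    def _sad(block, ref, y, x):
        """Sum of absolute differences against the patch of `ref` at (y, x); inf if out of bounds."""
        h, w = ref.shape
        n = block.shape[0]
        if y < 0 or x < 0 or y + n > h or x + n > w:
            return np.inf
        return float(np.abs(block - ref[y:y + n, x:x + n]).sum())

    def diamond_search(block, ref, by, bx):
        """Sub-step a: diamond search; returns the motion vector (dy, dx) of `block` anchored at (by, bx)."""
        cy, cx = by, bx
        while True:                                   # coarse stage with the large diamond
            costs = [_sad(block, ref, cy + dy, cx + dx) for dy, dx in LDSP]
            best = int(np.argmin(costs))
            if best == 0:                             # best point is the centre -> switch templates
                break
            cy, cx = cy + LDSP[best][0], cx + LDSP[best][1]
        costs = [_sad(block, ref, cy + dy, cx + dx) for dy, dx in SDSP]   # fine stage, small diamond
        best = int(np.argmin(costs))
        return cy + SDSP[best][0] - by, cx + SDSP[best][1] - bx

    def interframe_motion_speed(f_t, f_t1, corr_cut=0.9):
        """Sub-steps b and c: block correlations, truncation at `corr_cut`, truncated-mean speed v_t."""
        f_t = f_t.astype(np.float64)
        f_t1 = f_t1.astype(np.float64)
        h, w = f_t.shape
        magnitudes = []
        for by in range(0, h - N_BLK + 1, N_BLK):
            for bx in range(0, w - N_BLK + 1, N_BLK):
                blk_t = f_t[by:by + N_BLK, bx:bx + N_BLK]
                dy, dx = diamond_search(blk_t, f_t1, by, bx)
                blk_t1 = f_t1[by:by + N_BLK, bx:bx + N_BLK]
                c = np.corrcoef(blk_t.ravel(), blk_t1.ravel())[0, 1]
                if not np.isnan(c) and abs(c) > corr_cut:
                    continue                          # discard nearly static, highly correlated blocks
                magnitudes.append(np.hypot(dy, dx))
        return float(np.mean(magnitudes)) if magnitudes else 0.0

Calling interframe_motion_speed for each adjacent pair of the basic group and sorting the M−1 results from large to small corresponds to sub-step d.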
(5) Determine all insertion positions: to improve the accuracy of the insertion positions, they are determined in two stages. First, combining the sorted inter-frame correlations and inter-frame motion speeds, the intervals where the inter-frame speed is large and the inter-frame correlation is comparatively low are chosen as insertion positions. Then it is checked whether the number of insertions has reached the required N − M; if so, the next step follows directly, otherwise further insertion positions are chosen: the similarity S between adjacent inter-frame motion speeds is computed as in formula 3, and of the two adjacent intervals with motion speeds v_i and v_{i+1} whose similarity S_{i,i+1} is large, the interval with the larger speed is chosen for insertion, until the required number of insertion positions is reached.
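Formula 3 is not reproduced in the text above, so the following sketch assumes, purely for illustration, that the similarity of two adjacent motion speeds is their min/max ratio; the two-stage selection otherwise follows the description, and the thresholds gamma_hi and top_frac are arbitrary choices of ours:

    import numpy as np

    def choose_insertion_positions(gammas, speeds, num_needed, gamma_hi=0.9, top_frac=0.5):
        """Two-stage selection of insertion positions within one basic group.

        gammas[t] and speeds[t] describe the interval between frames t and t+1 (t = 0..M-2).
        Returns the indices of the intervals chosen for frame insertion."""
        order = np.argsort(speeds)[::-1]              # intervals sorted by speed, fastest first
        # Stage 1: fast intervals whose correlation is comparatively low.
        chosen = [t for t in order[:max(1, int(top_frac * len(order)))] if gammas[t] < gamma_hi]
        chosen = chosen[:num_needed]
        # Stage 2: if not enough, use the similarity of adjacent speeds (assumed: min/max ratio).
        if len(chosen) < num_needed:
            sims = []
            for i in range(len(speeds) - 1):
                hi = max(speeds[i], speeds[i + 1])
                sims.append((min(speeds[i], speeds[i + 1]) / hi if hi > 0 else 1.0, i))
            for s, i in sorted(sims, reverse=True):   # most similar neighbouring pairs first
                cand = i if speeds[i] >= speeds[i + 1] else i + 1
                if cand not in chosen:
                    chosen.append(cand)
                if len(chosen) == num_needed:
                    break
        return sorted(chosen)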
(6) Motion-compensated interpolation: interpolation is carried out at the insertion positions obtained above. To deal effectively with the blocking artefacts and holes introduced by block-based motion estimation, the overlapped block bidirectional motion compensation commonly used in two-times frame rate up-conversion is adopted, as shown in Fig. 3: the solid lines denote the original N×N blocks, B is the current block and N1–N8 are its eight neighbouring blocks; the dashed boxes are the (N+2w)×(N+2w) extension blocks formed by extending each of the nine blocks by a width w in the up, down, left and right directions.
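For orientation, a much simplified, non-overlapped version of bidirectional motion-compensated interpolation is sketched below: each block is averaged from its two motion-aligned patches and written at the half-way position. The overlapped extension-block weighting of Fig. 3 and proper hole handling are omitted, so this is not the patent's compensation scheme, only an editorial sketch:

    import numpy as np

    N_BLK = 16

    def interpolate_midframe(f_t, f_t1, motion_vectors):
        """Build the frame half-way between f_t and f_{t+1}.

        motion_vectors[(by, bx)] = (dy, dx) is the vector of the block of f_t anchored at
        (by, bx), pointing towards its match in f_{t+1} (e.g. from the diamond-search sketch)."""
        f_t = f_t.astype(np.float64)
        f_t1 = f_t1.astype(np.float64)
        h, w = f_t.shape
        out = np.zeros((h, w), dtype=np.float64)
        filled = np.zeros((h, w), dtype=bool)
        for (by, bx), (dy, dx) in motion_vectors.items():
            ty, tx = by + dy // 2, bx + dx // 2                    # half-way position of the block
            if ty < 0 or tx < 0 or ty + N_BLK > h or tx + N_BLK > w:
                continue
            if by + dy < 0 or bx + dx < 0 or by + dy + N_BLK > h or bx + dx + N_BLK > w:
                continue
            out[ty:ty + N_BLK, tx:tx + N_BLK] = (
                0.5 * f_t[by:by + N_BLK, bx:bx + N_BLK]
                + 0.5 * f_t1[by + dy:by + dy + N_BLK, bx + dx:bx + dx + N_BLK])
            filled[ty:ty + N_BLK, tx:tx + N_BLK] = True
        out[~filled] = 0.5 * f_t[~filled] + 0.5 * f_t1[~filled]    # crude fill for remaining holes
        return out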
Depending on the relationship between the original frame rate M and the target frame rate N, the insertion falls into two cases (a small helper computing the counts of both cases follows case b below):
a. When M < N < 2·M, e.g. M = 15 and N = 24: a 15 frames/second video is to be raised to 24 frames/second, so 9 insertion positions must be found. The positions are found as described above, and then one frame is inserted at each position using the interpolation commonly used in two-times up-conversion.
b. When 2·M < N, e.g. M = 24 and N = 60: a 24 frames/second video is to be raised to 60 frames/second, so 36 frames must be inserted. In this case, 12 (N − 2·M) insertion positions are still found as described above; two interpolated frames are inserted at each of these positions using the interpolation commonly used in three-times up-conversion, and one interpolated frame is inserted between every other pair of adjacent frames using the interpolation commonly used in two-times up-conversion.
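A small helper, ours rather than the patent's, returning the counts implied by the two cases (it assumes M < N < 3·M, matching the examples above):

    def insertion_plan(M, N):
        """(number of selected positions, frames inserted at each selected position,
        frames inserted in every other adjacent interval)."""
        if N < 2 * M:
            return N - M, 1, 0          # case a: N - M positions, one frame each
        return N - 2 * M, 2, 1          # case b: N - 2M positions get two frames, the rest one

    print(insertion_plan(15, 24))   # (9, 1, 0)
    print(insertion_plan(24, 60))   # (12, 2, 1)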
Below, the superiority and feasibility of the invention relative to traditional non-integer-multiple frame rate up-conversion methods are shown from three aspects: theoretical analysis, subjective analysis and objective analysis.
First, the superiority and necessity of the invention relative to traditional methods are analysed with respect to two points: the choice of insertion positions and the generation of interpolated frames. Two traditional methods, 1 and 2, are involved; their operation is as follows:
1) Traditional method 1: the insertion positions are determined from the weight parameter α of each frame in the basic group, and the interpolated frames are obtained by frame repetition. In this method every frame is assigned the same weight parameter, as in formula (4).
Implementation: each of the M input frames in turn is processed as follows. If the weight α of the current frame satisfies α ≥ 1, the current frame is inserted by frame repetition, 1 is subtracted from its weight parameter, and the remainder is added to the weight parameter of the next frame; conversely, when α < 1, the current frame is output as an output frame and its weight parameter is added to the weight parameter of the next frame. In this way N output frames are produced per unit time.
Example: 24 frames/second to 30 frames/second, i.e. every 4 input frames become 5 output frames, with every 4 frames forming one basic group; the insertion pattern is shown in Fig. 4.
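Formula (4) is not legible in this text, so the sketch below assumes that every input frame is assigned the identical weight α = N/M − 1, which reproduces the 4-to-5 pattern of Fig. 4; apart from that assumption it follows the accumulate-and-carry rule described above:

    def traditional_method_1(frames, M, N):
        """Frame-repetition up-conversion driven by an accumulated per-frame weight."""
        alpha0 = N / M - 1.0            # ASSUMPTION: identical weight per frame (formula 4 not legible)
        out, carry = [], 0.0
        for f in frames:
            weight = alpha0 + carry
            if weight >= 1.0:
                out.extend([f, f])      # current frame is repeated (frame-duplication insertion)
                carry = weight - 1.0
            else:
                out.append(f)           # current frame is passed through unchanged
                carry = weight
        return out

    # 24 -> 30 fps: every group of 4 input frames yields 5 output frames.
    print(len(traditional_method_1(list(range(24)), 24, 30)))   # 30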
2) Traditional method 2: the insertion positions are determined from the weight parameter α of each frame in the basic group, and the interpolated frame is obtained by a linear combination of the two adjacent frames. In this method the weight parameter of each frame is assigned adaptively, as in formula (5), with
k = N/M.
Example: 24 frames/second to 30 frames/second, i.e. every 4 input frames become 5 output frames, N = 5, M = 4, with every 4 frames forming one basic group, so k = 5/4; the time-domain distribution parameters are {0, 1/5, 2/5, 3/5, 0, 1/5, 2/5, ...}, and the insertion pattern is shown in Fig. 5.
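Again for illustration only, a generic adjacent-frame linear-combination up-converter: each output frame is placed at its ideal source time and blended from the two nearest input frames; the exact time-domain distribution parameters of formula (5) may differ from the simple fractional offsets used here:

    import numpy as np

    def traditional_method_2(frames, M, N):
        """Up-convert by linear combination of the two nearest input frames (traditional method 2)."""
        frames = [np.asarray(f, dtype=np.float64) for f in frames]
        last = len(frames) - 1
        out = []
        for j in range(len(frames) * N // M):
            pos = min(j * M / N, float(last))      # ideal source-time position of output frame j
            i = min(int(pos), last - 1)            # left neighbour
            w = pos - i                            # fractional offset = blending weight
            out.append((1.0 - w) * frames[i] + w * frames[i + 1])
        return out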
Theory analysis:
Comparative analysis of the method of the invention and the two traditional methods above: whether in choosing the insertion positions or in generating the interpolated frames, the traditional methods take no account of the motion information of moving objects. As Fig. 6 shows, when an input frame is simply repeated, the true motion position of the image is misrepresented at every insertion. Whether method 1 or method 2 is used, the interpolated frames change the true trajectory of the object's motion, so after up-conversion the motion appears irregular or even momentarily frozen, which is the motion judder commonly observed. The method of the invention, by contrast, makes full use of the motion information both to choose the insertion positions and to obtain the interpolated frames by motion compensation, which guarantees that the interpolated frames lie on the original motion trajectory and effectively avoids the motion judder and motion jumps produced by traditional methods after frame rate up-conversion.
Subjective analysis:
Experimental videos: the source videos are in AVI format: the color video (resolution 384×288, original frame rate 24 frames/second), the news video (resolution 352×288, original frame rate 15 frames/second), the foreman video (resolution 352×288, original frame rate 25 frames/second) and the vtext video (resolution 1920×1080, original frame rate 30 frames/second).
Experimental content: the method of the invention and the two traditional methods were each used to apply non-integer-multiple frame rate up-conversion to the four videos of different frame rates above. Color video: 24 frames/second raised to 30 frames/second; news video: 15 frames/second raised to 25 frames/second; foreman video: 25 frames/second raised to 30 frames/second; vtext video: 30 frames/second raised to 48 frames/second. The videos were then played on the same player and scored by the assessors.
Video quality assessors: 15 university students with some background in image processing but no prior exposure to video sequence quality assessment.
Subjective evaluation method: the classical scoring approach was used, in which the videos obtained by the three methods are compared and scored on a scale of 1 to 5, where 1 means the video is worst and 5 means it is best, and fractional scores are allowed. To account for the particular characteristics of the method of the invention and of the traditional methods, scores were given separately against two criteria, picture sharpness and motion continuity between frames; the scores were then averaged to obtain the subjective score of each video under each method.
The final subjective scores of the different videos are given in Table 1. The method of the invention shows a clear advantage in motion continuity, but because block-based motion compensation is used for interpolation, blocking artefacts sometimes appear, so the picture sharpness is occasionally slightly below that of the traditional methods. The quality of the up-converted video could therefore be further improved by increasing the accuracy of the block-motion-compensation-based up-conversion.
Table 1
Objective analysis:
The full-reference assessment measures PSNR and SSIM are used for comparison. First, a source sequence for a fractional up-conversion is constructed: the first 120 frames of the standard YUV test sequence Akiyo (original frame rate 30 frames/second) are randomly decimated to obtain a 96-frame source sequence at 24 frames/second. This sequence is then up-converted (24 frames/second to 30 frames/second, a factor of 5/4) by the method of the invention and by traditional methods 1 and 2, giving 120 frames in each case. Finally the 120 frames obtained by each method are compared objectively with the original 120 frames in terms of PSNR and SSIM. As Figs. 7(a) and (b) show, the method of the invention increases both the PSNR and the SSIM values compared with the traditional methods, which fully demonstrates the feasibility and superiority of the invention.
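As a reminder of the metrics used (not taken from the patent), PSNR for 8-bit frames can be computed directly with NumPy, and SSIM is provided by scikit-image:

    import numpy as np
    from skimage.metrics import structural_similarity

    def psnr(ref, test, peak=255.0):
        """Peak signal-to-noise ratio between a reference frame and a test frame (8-bit by default)."""
        mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
        return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

    def ssim(ref, test, peak=255.0):
        """Structural similarity index, delegated to scikit-image."""
        return structural_similarity(ref, test, data_range=peak)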

Claims (6)

1. A non-integer-multiple frame rate up-conversion method based on motion information, characterized in that the method comprises the following steps:
Step 1: read the original video and split it into frames;
Step 2: compute in turn the correlation between each pair of adjacent frames in a basic group, i.e. the inter-frame correlation, and sort the results;
Step 3: perform scene detection within the basic group; at a scene change, obtain the interpolated frame by repeating the preceding frame;
Step 4: obtain in turn the block motion vectors between corresponding macroblocks of each pair of adjacent frames in the basic group and the block correlation between those corresponding macroblocks, combine the block motion vectors with the block correlation to obtain the inter-frame motion speed, and sort the results;
Step 5: first judge the object motion speed accurately from the inter-frame correlation and the inter-frame motion speed; then use the similarity between adjacent inter-frame motion speeds to judge the accuracy of the motion speeds obtained in Step 4; finally choose the insertion positions in the intervals where the motion is fast and the estimated speed is reliable;
Step 6: based on the block motion vectors between corresponding macroblocks obtained in Step 4, perform motion-compensated interpolation at each insertion position;
Step 7: combine the interpolated frames with the original frames to synthesize the high-frame-rate video.
2. The non-integer-multiple frame rate up-conversion method based on motion information of claim 1, characterized in that: in Step 2 the absolute value of the correlation coefficient between each pair of adjacent frames in a basic group is taken in turn as the inter-frame correlation, sorted from small to large, and the corresponding frame positions are retained for use in Step 5.
3. The non-integer-multiple frame rate up-conversion method based on motion information of claim 1, characterized in that: in Step 3 whether a scene change occurs within the basic group is judged from the relationship between the inter-frame correlation and a preset threshold.
4. The non-integer-multiple frame rate up-conversion method based on motion information of claim 1, characterized in that: in Step 4 the diamond search method is used for motion estimation to obtain the motion vector of each macroblock, and the absolute value of the correlation coefficient between corresponding macroblocks is used as their block correlation; then a truncation method is applied to discard the vectors of macroblocks with high correlation, and the magnitudes of the remaining macroblock vectors are averaged to give the inter-frame motion speed; finally the inter-frame motion speeds are sorted from large to small and the corresponding frame positions are retained for use in Step 5.
5. The non-integer-multiple frame rate up-conversion method based on motion information of claim 1, characterized in that: in Step 5 the sorted inter-frame correlations and inter-frame motion speeds from Steps 2 and 4 are combined to place insertions where the motion is fast; if the number of insertions so determined is smaller than the total number required, the similarity between adjacent inter-frame motion speeds is then used, and insertions are placed in intervals where the similarity is high and the speed is large, which ensures that the insertion positions lie where the motion trajectory changes quickly.
6. The non-integer-multiple frame rate up-conversion method based on motion information of claim 1, characterized in that: in Step 6 bidirectional motion compensation is used to interpolate at the insertion positions obtained above, which reduces holes and overlap in the interpolated frames.
CN201610235788.3A 2016-04-15 2016-04-15 A kind of non-integral multiple frame per second method for improving based on motion information Expired - Fee Related CN105828106B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610235788.3A CN105828106B (en) 2016-04-15 2016-04-15 A kind of non-integral multiple frame per second method for improving based on motion information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610235788.3A CN105828106B (en) 2016-04-15 2016-04-15 A kind of non-integral multiple frame per second method for improving based on motion information

Publications (2)

Publication Number Publication Date
CN105828106A true CN105828106A (en) 2016-08-03
CN105828106B CN105828106B (en) 2019-01-04

Family

ID=56526937

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610235788.3A Expired - Fee Related CN105828106B (en) 2016-04-15 2016-04-15 A kind of non-integral multiple frame per second method for improving based on motion information

Country Status (1)

Country Link
CN (1) CN105828106B (en)

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108200446A (en) * 2018-01-12 2018-06-22 北京蜜枝科技有限公司 Multimedia interactive system and method on the line of virtual image
CN110149555A (en) * 2018-08-14 2019-08-20 腾讯科技(深圳)有限公司 Method for processing video frequency and video receiving apparatus
CN110198412A (en) * 2019-05-31 2019-09-03 维沃移动通信有限公司 A kind of video recording method and electronic equipment
CN111083417A (en) * 2019-12-10 2020-04-28 Oppo广东移动通信有限公司 Image processing method and related product
CN111263193A (en) * 2020-01-21 2020-06-09 北京三体云联科技有限公司 Video frame up-down sampling method and device, and video live broadcasting method and system
CN111641829A (en) * 2020-05-16 2020-09-08 Oppo广东移动通信有限公司 Video processing method, device, system, storage medium and electronic equipment
CN111885336A (en) * 2020-06-19 2020-11-03 成都东方盛行电子有限责任公司 Non-frame-coding rate conversion method under frame mode
WO2021011804A1 (en) * 2019-07-17 2021-01-21 Home Box Office, Inc. Video frame pulldown based on frame analysis
CN112584232A (en) * 2019-09-30 2021-03-30 北京金山云网络技术有限公司 Video frame insertion method and device and server
CN112584196A (en) * 2019-09-30 2021-03-30 北京金山云网络技术有限公司 Video frame insertion method and device and server
CN112788337A (en) * 2020-12-28 2021-05-11 深圳创维-Rgb电子有限公司 Video automatic motion compensation method, device, equipment and storage medium
CN114938461A (en) * 2022-04-01 2022-08-23 网宿科技股份有限公司 Video processing method, device and equipment and readable storage medium
CN116366886A (en) * 2023-02-27 2023-06-30 泰德网聚(北京)科技股份有限公司 Video quick editing system based on smoothing processing
WO2023174123A1 (en) * 2022-03-14 2023-09-21 维沃移动通信有限公司 Display control chip, display panel, and related device, method and apparatus
WO2024131035A1 (en) * 2022-12-21 2024-06-27 上海哔哩哔哩科技有限公司 Video frame interpolation method and apparatus

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080181312A1 (en) * 2006-12-25 2008-07-31 Hitachi Ltd. Television receiver apparatus and a frame-rate converting method for the same
CN102523439A (en) * 2011-12-07 2012-06-27 天津天地伟业物联网技术有限公司 Video frame rate improving system and frame rate improving method
CN102665061A (en) * 2012-04-27 2012-09-12 中山大学 Motion vector processing-based frame rate up-conversion method and device
EP2701386A1 (en) * 2012-08-21 2014-02-26 MediaTek, Inc Video processing apparatus and method
CN105100807A (en) * 2015-08-28 2015-11-25 山东大学 Motion vector post-processing based frame rate up-conversion method

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080181312A1 (en) * 2006-12-25 2008-07-31 Hitachi Ltd. Television receiver apparatus and a frame-rate converting method for the same
CN102523439A (en) * 2011-12-07 2012-06-27 天津天地伟业物联网技术有限公司 Video frame rate improving system and frame rate improving method
CN102665061A (en) * 2012-04-27 2012-09-12 中山大学 Motion vector processing-based frame rate up-conversion method and device
EP2701386A1 (en) * 2012-08-21 2014-02-26 MediaTek, Inc Video processing apparatus and method
CN105100807A (en) * 2015-08-28 2015-11-25 山东大学 Motion vector post-processing based frame rate up-conversion method

Cited By (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108200446A (en) * 2018-01-12 2018-06-22 北京蜜枝科技有限公司 Multimedia interactive system and method on the line of virtual image
CN110149555A (en) * 2018-08-14 2019-08-20 腾讯科技(深圳)有限公司 Method for processing video frequency and video receiving apparatus
CN110198412A (en) * 2019-05-31 2019-09-03 维沃移动通信有限公司 A kind of video recording method and electronic equipment
CN110198412B (en) * 2019-05-31 2020-09-18 维沃移动通信有限公司 Video recording method and electronic equipment
US11711490B2 (en) 2019-07-17 2023-07-25 Home Box Office, Inc. Video frame pulldown based on frame analysis
US11303847B2 (en) 2019-07-17 2022-04-12 Home Box Office, Inc. Video frame pulldown based on frame analysis
WO2021011804A1 (en) * 2019-07-17 2021-01-21 Home Box Office, Inc. Video frame pulldown based on frame analysis
CN112584232A (en) * 2019-09-30 2021-03-30 北京金山云网络技术有限公司 Video frame insertion method and device and server
CN112584196A (en) * 2019-09-30 2021-03-30 北京金山云网络技术有限公司 Video frame insertion method and device and server
CN111083417B (en) * 2019-12-10 2021-10-19 Oppo广东移动通信有限公司 Image processing method and related product
CN111083417A (en) * 2019-12-10 2020-04-28 Oppo广东移动通信有限公司 Image processing method and related product
CN111263193A (en) * 2020-01-21 2020-06-09 北京三体云联科技有限公司 Video frame up-down sampling method and device, and video live broadcasting method and system
CN111263193B (en) * 2020-01-21 2022-06-17 北京世纪好未来教育科技有限公司 Video frame up-down sampling method and device, and video live broadcasting method and system
CN111641829B (en) * 2020-05-16 2022-07-22 Oppo广东移动通信有限公司 Video processing method, device and system, storage medium and electronic equipment
CN111641829A (en) * 2020-05-16 2020-09-08 Oppo广东移动通信有限公司 Video processing method, device, system, storage medium and electronic equipment
CN111885336B (en) * 2020-06-19 2022-03-29 成都东方盛行电子有限责任公司 Non-frame-coding rate conversion method under frame mode
CN111885336A (en) * 2020-06-19 2020-11-03 成都东方盛行电子有限责任公司 Non-frame-coding rate conversion method under frame mode
WO2022143078A1 (en) * 2020-12-28 2022-07-07 深圳创维-Rgb电子有限公司 Video automatic motion compensation method, apparatus, and device, and storage medium
CN112788337A (en) * 2020-12-28 2021-05-11 深圳创维-Rgb电子有限公司 Video automatic motion compensation method, device, equipment and storage medium
WO2023174123A1 (en) * 2022-03-14 2023-09-21 维沃移动通信有限公司 Display control chip, display panel, and related device, method and apparatus
CN114938461A (en) * 2022-04-01 2022-08-23 网宿科技股份有限公司 Video processing method, device and equipment and readable storage medium
WO2024131035A1 (en) * 2022-12-21 2024-06-27 上海哔哩哔哩科技有限公司 Video frame interpolation method and apparatus
CN116366886A (en) * 2023-02-27 2023-06-30 泰德网聚(北京)科技股份有限公司 Video quick editing system based on smoothing processing
CN116366886B (en) * 2023-02-27 2024-03-19 泰德网聚(北京)科技股份有限公司 Video quick editing system based on smoothing processing

Also Published As

Publication number Publication date
CN105828106B (en) 2019-01-04

Similar Documents

Publication Publication Date Title
CN105828106A (en) Non-integral multiple frame rate improving method based on motion information
US9148622B2 (en) Halo reduction in frame-rate-conversion using hybrid bi-directional motion vectors for occlusion/disocclusion detection
KR101519941B1 (en) Multi-level bidirectional motion estimation method and device
CN104219533B (en) A kind of bi-directional motion estimation method and up-conversion method of video frame rate and system
CN103402098B (en) A kind of video frame interpolation method based on image interpolation
US8243194B2 (en) Method and apparatus for frame interpolation
CN101754047B (en) Method for detection of film mode or camera mode
CN1414787A (en) Device and method for using adaptive moving compensation conversion frame and/or semi-frame speed
CN103051857B (en) Motion compensation-based 1/4 pixel precision video image deinterlacing method
KR20050089886A (en) Background motion vector detection
DE102019218316A1 (en) 3D RENDER-TO-VIDEO ENCODER PIPELINE FOR IMPROVED VISUAL QUALITY AND LOW LATENCY
US20170094306A1 (en) Method of acquiring neighboring disparity vectors for multi-texture and multi-depth video
Heinrich et al. Optimization of hierarchical 3DRS motion estimators for picture rate conversion
TWI490819B (en) Image processing method and apparatus thereof
Luo et al. A fast motion estimation algorithm based on adaptive pattern and search priority
US20110129156A1 (en) Block-Edge Detecting Method and Associated Device
CN101483790B (en) Movie mode video signal detection method
KR20170080496A (en) A method and device for frame rate conversion
US20140126639A1 (en) Motion Estimation Method
CN110446107B (en) Video frame rate up-conversion method suitable for scaling motion and brightness change
CN104580978A (en) Video detecting and processing method and video detecting and processing device
WO2016199418A1 (en) Frame rate conversion system
TWI243600B (en) Selected area comparison method with high-operational efficient
CN108282653B (en) Motion compensation de-interlacing method and system based on motion estimation of bipolar field
JP2007510213A (en) Improved motion vector field for tracking small fast moving objects

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20190104

Termination date: 20190415