CN107295214A - Interpolated frame localization method and device - Google Patents
- Publication number
- CN107295214A
- Authority
- CN
- China
- Prior art keywords
- frame
- video
- measured
- interpolated frame
- interpolated
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/14—Picture signal circuitry for video frequency region
- H04N5/144—Movement detection
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
- H04N21/44008—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
Abstract
The invention provides an interpolated frame localization method and device, relating to the technical field of multimedia information. The interpolated frame localization method includes: extracting Markov features from a pre-built training set sample and building a classification model with an ensemble classifier; computing the Markov statistical features of the video set Γ of the video under test to obtain a decision result; merging the middle frame of every three consecutive frames detected as interpolated in the decision result into the set Ξ of interpolated frames detected in each round, where the number of middle frames is denoted N_up, the positions of all detected interpolated frames are denoted ψ, and ψ is updated to the union of ψ and Ξ; T_s is the playback time of the updated Γ, T_s = Σ/f, where Σ is the number of video frames in Γ and f is the frame rate of the video under test; judging whether N_up < T_s holds; if so, outputting the position set ψ of the interpolated frames; if not, updating Γ to the difference set of Γ and Ξ and recomputing the Markov statistical features of Γ. Accurate localization of interpolated frames in motion-compensated up-converted video is thereby achieved.
Description
Technical field
The present invention relates to the technical field of multimedia information, and in particular to an interpolated frame localization method and device.
Background technology
With the development of technology and the popularization of video capture equipment, digital video has gradually enriched our daily lives, and many people share self-shot videos on social platforms. However, the shooting quality of existing video capture equipment (for example, camcorders and smartphones) is often unsatisfactory: the video frame rate does not reach high-definition standards, so users must process their videos with video editing software before uploading them. Most video editing software produces high-frame-rate video that conforms to the laws of motion by a method of "forgery", i.e., frame-rate up-conversion.
In this video "forgery" process, common frame-rate up-conversion methods fall into two classes (see Fig. 1). The first class is frame repetition or frame averaging: the video frame inserted between two original frames is either the preceding frame of the two or the pixel-wise average of the two frames. Up-converted video produced by this class of methods usually exhibits ghosting or motion judder that is easily perceived by the human eye, resulting in a poor viewing experience. The second class is motion-compensated frame-rate up-conversion. This approach overcomes the negative effects of the simple up-conversion methods: it extracts the motion trajectories of moving objects across two consecutive frames and generates new video frames along those trajectories. Frames produced in this way better conform to the motion of objects in the video and are therefore closer to the original frames.
However, there is as yet no reliable technique for locating interpolated frames generated by motion-compensated frame-rate up-conversion. An interpolated frame generated by motion compensation must take full account of the trajectories of moving objects within the frame: the trajectories are obtained with various motion estimation techniques, and a video frame that conforms to the motion is then generated on the trajectory by a motion compensation strategy. The structural similarity between such a generated frame and the original frames falls below the 99.5% threshold, so existing judgment methods cannot accurately locate these interpolated frames. In summary, there is currently no effective solution to the problem of locating interpolated frames in motion-compensated up-converted video.
Summary of the invention
In view of this, the purpose of the embodiments of the present invention is to provide an interpolated frame localization method and device that improve the accuracy of locating interpolated frames in motion-compensated up-converted video by extracting Markov features and the like.
In a first aspect, an embodiment of the invention provides an interpolated frame localization method, including:

extracting Markov features from a pre-built training set sample, and inputting the Markov features into an ensemble classifier to build a classification model;

computing the Markov statistical features of the video set Γ of the video under test, obtaining a decision result from those features and the classification model, and updating the video set Γ;

merging the middle frame of every three consecutive frames detected as interpolated in the decision result into the set Ξ of interpolated frames detected in each round, where the number of middle frames is denoted N_up, the positions of all detected interpolated frames are denoted ψ, ψ is updated to the union of ψ and Ξ, and T_s is the playback time of the updated Γ, T_s = Σ/f, where Σ is the number of video frames in Γ and f is the frame rate of the video under test;

judging whether N_up < T_s holds;

if so, outputting the position set ψ of the interpolated frames;

if not, updating Γ to the difference set of Γ and Ξ and recomputing the Markov statistical features of Γ.
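The steps of the first aspect form one iterative loop. A minimal sketch in Python follows, with a caller-supplied detection function standing in for the Markov-feature classification; all names here are illustrative, not from the patent:

```python
def locate_interpolated_frames(frames, f, detect_interpolated, max_rounds=10):
    """Iterative localization loop of the first aspect (sketch).

    frames              : indices of all frames of the video under test (initial Γ)
    f                   : frame rate of the video under test
    detect_interpolated : function Γ -> set Ξ of middle frames detected as
                          interpolated this round (stands in for the
                          Markov-feature + classifier decision)
    max_rounds          : cap on executions of the judgment (e.g. 10)
    """
    gamma = list(frames)          # Γ starts as all frames
    psi = set()                   # ψ starts as the empty set
    for _ in range(max_rounds):
        xi = detect_interpolated(gamma)
        if not xi:                # no interpolated frame left in Γ
            break
        n_up = len(xi)            # N_up: number of middle frames this round
        psi |= xi                 # ψ := ψ ∪ Ξ
        gamma = [fr for fr in gamma if fr not in xi]   # Γ := Γ \ Ξ
        t_s = len(gamma) / f      # T_s = Σ / f on the updated Γ
        if n_up < t_s:            # stopping condition: output ψ
            break
    return psi
```

The detection function is re-applied to the shrinking set Γ each round, which mirrors the "recompute the Markov statistical features of Γ" branch.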
With reference to the first aspect, an embodiment of the invention provides a first possible implementation of the first aspect, wherein computing the Markov statistical features of the video set Γ of the video under test includes:

from the second frame of the video set of the video under test up to the second-to-last frame, computing the time frame difference matrix TFDM of each frame from the video sequence F(x, y, t), where x, y and t denote the spatial coordinates and the time scale respectively, the TFDM of the first frame and of the last frame being taken as the difference of the first two frames and of the last two frames respectively;

computing the spatial difference matrices of the space-time frame difference matrix STFDM along eight directions, truncating each spatial difference matrix with a preset threshold, modeling each of the eight directions with a first-order Markov process, and computing the empirical transition matrix of each direction;

extracting the final Markov statistical feature from the empirical transition matrices of all directions.
With reference to the first aspect, an embodiment of the invention provides a second possible implementation of the first aspect, wherein the method further includes:

the initial value of the video set Γ of the video under test is all frames of the video under test;

the initial value of the per-round position set ψ of interpolated frames is the empty set;

the initial value of the set Ξ of all detected interpolated frames is the empty set;

the initial value of the number of middle frames N_up is 0;

the initial value of the playback time T_s of the updated Γ is 0.
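The initial values of this implementation can be grouped into a single state record; a sketch with hypothetical names:

```python
from dataclasses import dataclass, field

@dataclass
class LocalizerState:
    """Initial state of one localization run (second implementation)."""
    gamma: list                              # Γ: initially all frames of the video
    psi: set = field(default_factory=set)    # ψ: located positions, empty set
    xi: set = field(default_factory=set)     # Ξ: frames detected this round, empty set
    n_up: int = 0                            # N_up: number of middle frames
    t_s: float = 0.0                         # T_s: playback time of the updated Γ

# e.g. a 450-frame video under test
state = LocalizerState(gamma=list(range(450)))
```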
With reference to the second possible implementation of the first aspect, an embodiment of the invention provides a third possible implementation of the first aspect, wherein the method further includes:

when no interpolated frame exists in the video set Γ of the video under test, outputting an empty position set of interpolated frames;

when interpolated frames exist in the video set Γ of the video under test and the number of executions of the judgment exceeds a preset threshold, outputting the position set ψ of the interpolated frames.
With reference to the third possible implementation of the first aspect, an embodiment of the invention provides a fourth possible implementation of the first aspect, wherein the method further includes:

when ΣS_tp > 0, setting the evaluation index F_1 from the detected positive samples S_p of interpolated frames, the true positive samples S_tp, and the false positive samples S_fp;

when ΣS_tp = 0, setting the evaluation index F_1 to 0.
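The exact F_1 formula is not reproduced in this text. Assuming the conventional F1-score over the true positives S_tp, false positives S_fp, and actual positives S_p, a sketch:

```python
def f1_score(s_p, s_tp, s_fp):
    """Evaluation index F_1 for interpolated-frame detection (assumed to be
    the standard F1-score; the patent's own formula is not reproduced here).

    s_p  : number of actual interpolated frames (positives)
    s_tp : true positives (interpolated frames correctly located)
    s_fp : false positives (original frames wrongly flagged as interpolated)
    """
    if s_tp == 0:                       # the patent sets F_1 = 0 when sum(S_tp) = 0
        return 0.0
    precision = s_tp / (s_tp + s_fp)    # fraction of flagged frames that are correct
    recall = s_tp / s_p                 # fraction of interpolated frames found
    return 2 * precision * recall / (precision + recall)
```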
In a second aspect, an embodiment of the invention provides an interpolated frame localization device, including:

a classification model building module, for extracting Markov features from a pre-built training set sample and inputting the Markov features into an ensemble classifier to build a classification model;

a video update module, for computing the Markov statistical features of the video set Γ of the video under test, obtaining a decision result from those features and the classification model, and updating the video set Γ;

a locating module, for merging the middle frame of every three consecutive frames detected as interpolated in the decision result into the set Ξ of interpolated frames detected in each round, where the number of middle frames is denoted N_up, the positions of all detected interpolated frames are denoted ψ, ψ is updated to the union of ψ and Ξ, and T_s is the playback time of the updated Γ, T_s = Σ/f, where Σ is the number of video frames in Γ and f is the frame rate of the video under test;

a judging module, for judging whether N_up < T_s holds;

an affirmative execution module, for outputting the position set ψ of the interpolated frames if so;

a negative execution module, for updating Γ to the difference set of Γ and Ξ and recomputing the Markov statistical features of the video set Γ if not.
With reference to the second aspect, an embodiment of the invention provides a first possible implementation of the second aspect, wherein the locating module includes:

a time difference matrix calculation unit, for computing, from the second frame of the video set of the video under test up to the second-to-last frame, the time frame difference matrix TFDM of each frame from the video sequence F(x, y, t), where x, y and t denote the spatial coordinates and the time scale respectively, the TFDM of the first frame and of the last frame being taken as the difference of the first two frames and of the last two frames respectively;

a spatial difference matrix calculation unit, for computing the spatial difference matrices of the space-time frame difference matrix STFDM along eight directions, truncating each spatial difference matrix with a preset threshold, modeling each of the eight directions with a first-order Markov process, and computing the empirical transition matrix of each direction;

a position locating unit, for extracting the final Markov statistical feature from the empirical transition matrices of all directions.
With reference to the second aspect, an embodiment of the invention provides a second possible implementation of the second aspect, further including:

a video set initial value setting module, for setting the initial value of the video set Γ of the video under test to all frames of the video under test;

a position initial value setting module, for setting the initial value of the per-round position set ψ of interpolated frames to the empty set;

an interpolated frame initial value setting module, for setting the initial value of the set Ξ of all detected interpolated frames to the empty set;

a middle frame initial value setting module, for setting the initial value of the number of middle frames N_up to 0;

a playback time initial value setting module, for setting the initial value of the playback time T_s of the updated Γ to 0.
With reference to the second possible implementation of the second aspect, an embodiment of the invention provides a third possible implementation of the second aspect, further including:

a no-interpolated-frame module, for outputting an empty position set of interpolated frames when no interpolated frame exists in the video set Γ of the video under test;

a repeated execution module, for outputting the position set ψ of the interpolated frames when interpolated frames exist in the video set Γ of the video under test and the number of executions of the judgment exceeds a preset threshold.
With reference to the third possible implementation of the second aspect, an embodiment of the invention provides a fourth possible implementation of the second aspect, further including:

a positive-domain evaluation module, for setting the evaluation index F_1 from the detected positive samples S_p of interpolated frames, the true positive samples S_tp, and the false positive samples S_fp when ΣS_tp > 0;

a zero-value evaluation module, for setting the evaluation index F_1 to 0 when ΣS_tp = 0.
In the interpolated frame localization method and device provided by the embodiments of the present invention, the interpolated frame localization method includes: first, extracting Markov features from a pre-built training set sample and inputting them into an ensemble classifier to build a classification model; then computing the Markov statistical features of the video set Γ of the video under test, obtaining a decision result from those features and the classification model, and updating Γ; then merging the middle frame of every three consecutive frames detected as interpolated in the decision result into the set Ξ of interpolated frames detected in each round, where the number of middle frames is denoted N_up, the positions of all detected interpolated frames are denoted ψ, ψ is updated to the union of ψ and Ξ, and T_s = Σ/f is the playback time of the updated Γ, Σ being the number of video frames in Γ and f the frame rate of the video under test; and then judging whether N_up < T_s holds. If this condition is satisfied, the position set ψ of the interpolated frames is output; ψ carries the location information of the interpolated frames introduced by motion-compensated up-conversion. If the condition is not satisfied, Γ is updated to the difference set of Γ and Ξ, the Markov statistical features of Γ are recomputed, and the above judgment is repeated. Through these operations, the location information of the interpolated frames in a high-frame-rate video produced by motion-compensated frame-rate up-conversion is effectively extracted.
Other features and advantages of the present invention will be set forth in the following description, and will in part become apparent from the description or be understood by practicing the invention. The objectives and other advantages of the invention are realized and attained by the structure particularly pointed out in the description, the claims, and the accompanying drawings.

To make the above objectives, features and advantages of the present invention more apparent, preferred embodiments are described in detail below in conjunction with the accompanying drawings.
Brief description of the drawings
To illustrate the specific embodiments of the invention or the technical solutions of the prior art more clearly, the drawings required in the description of the specific embodiments or of the prior art are briefly introduced below. Obviously, the drawings described below show some embodiments of the invention, and those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 shows a connection diagram of an interpolated frame localization method in the prior art;

Fig. 2 shows a connection diagram of the interpolated frame localization method provided by an embodiment of the invention;

Fig. 3 shows a structural block diagram of the interpolated frame localization device provided by an embodiment of the invention;

Fig. 4 shows a structural connection diagram of the interpolated frame localization device provided by an embodiment of the invention.
Reference numerals: 1 - classification model building module; 2 - video update module; 3 - locating module; 4 - judging module; 5 - affirmative execution module; 6 - negative execution module; 31 - time difference matrix calculation unit; 32 - spatial difference matrix calculation unit; 33 - position locating unit.
Embodiment
The technical solutions in the embodiments of the present invention are described below clearly and completely in conjunction with the accompanying drawings. Obviously, the described embodiments are only a part of the embodiments of the invention, not all of them. The components of the embodiments, as generally described and illustrated in the drawings herein, may be arranged and designed in a variety of configurations. Therefore, the following detailed description of the embodiments provided in the drawings is not intended to limit the scope of the claimed invention, but merely represents selected embodiments of the invention. All other embodiments obtained by those skilled in the art on the basis of the embodiments of the invention without creative effort fall within the scope of protection of the invention.
At present, during frame-rate up-conversion, an interpolated frame generated by motion compensation usually takes into account the trajectories of moving objects within the frame: the trajectories are obtained with various motion estimation techniques, and a video frame that conforms to the motion, namely the interpolated frame, is then generated on the trajectory by a motion compensation strategy. However, existing judgment methods cannot accurately locate such interpolated frames.
Based on this, the embodiments of the present invention provide an interpolated frame localization method and device, described below through embodiments.
Embodiment 1
First, the theoretical basis of locating interpolated frames in motion-compensated up-converted video is explained. Motion-compensated up-conversion uses the motion of objects within frames to obtain real motion trajectories, and applies a motion compensation strategy to obtain the interpolated frames; this process depends on the correctness of the obtained trajectories. However, the motion of objects within frames is complex and non-rigid, and complete, real trajectories are difficult to obtain. As a result, the pixel values in the moving-object regions of an interpolated frame differ from the real pixel values, which changes the correlation between pixels to a certain extent. The present invention uses this change of correlation to locate the positions of the interpolated frames.
Referring to Fig. 2, the interpolated frame localization method proposed by this embodiment specifically includes the following steps:

Step S101: extract Markov features from a pre-built training set sample, so as to distinguish the correlation characteristics between pixels in the moving-object regions of original frames and of interpolated frames, and input the Markov features into an ensemble classifier to build a classification model. A common ensemble classifier is the Ensemble Classifier, which uses support vector machines to capture the difference between original frames and interpolated frames.
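As a rough illustration of Step S101, the sketch below trains a small bagged ensemble of linear SVMs on pre-extracted Markov feature vectors. scikit-learn and every name here are assumptions for illustration, not the patent's implementation:

```python
import numpy as np
from sklearn.svm import LinearSVC

def build_classification_model(features, labels, n_members=10, seed=0):
    """Train a simple bagged ensemble of linear SVMs (sketch).

    features : (n_samples, n_dims) NumPy array of Markov feature vectors
    labels   : (n_samples,) NumPy array; 1 = interpolated frame, 0 = original
    """
    rng = np.random.default_rng(seed)
    members = []
    n = len(features)
    for _ in range(n_members):
        while True:                          # bootstrap sample with both classes
            idx = rng.integers(0, n, size=n)
            if len(set(labels[idx])) > 1:
                break
        members.append(LinearSVC().fit(features[idx], labels[idx]))
    return members

def predict(members, features):
    """Majority vote of the ensemble members (the 'decision result')."""
    votes = np.stack([m.predict(features) for m in members])
    return (votes.mean(axis=0) >= 0.5).astype(int)
```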
Step S102: compute the Markov statistical features of the video set Γ of the video under test, obtain a decision result from those features and the classification model, and update the video set Γ. Updating Γ means restricting it to the range from the first detected frame to the last detected frame in the decision result.
Step S103: merge the middle frame of every three consecutive frames detected as interpolated in the decision result into the set Ξ of interpolated frames detected in each round, where the number of middle frames is denoted N_up, the positions of all detected interpolated frames are denoted ψ, ψ is updated to the union of ψ and Ξ, and T_s is the playback time of the updated Γ, T_s = Σ/f, where Σ is the number of video frames in Γ and f is the frame rate of the video under test.
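One round of Step S103 can be sketched as follows; the stride-2 three-frame windows reflect the one-frame overlap between consecutive sliding windows described in this embodiment, and the function name and signature are illustrative:

```python
def merge_round(frames, window_decisions, psi, f):
    """One round of Step S103 (hypothetical signature).

    frames           : frame indices currently in the video set Γ
    window_decisions : one classifier decision per sliding window of three
                       consecutive frames (stride 2, so consecutive windows
                       share exactly one frame); True = detected as interpolated
    psi              : position set ψ of interpolated frames found so far
    f                : frame rate of the video under test
    """
    windows = [frames[i:i + 3] for i in range(0, len(frames) - 2, 2)]
    xi = {w[1] for w, d in zip(windows, window_decisions) if d}  # middle frames -> Ξ
    n_up = len(xi)                                     # N_up
    psi = psi | xi                                     # ψ := ψ ∪ Ξ
    remaining = [fr for fr in frames if fr not in xi]  # Γ := Γ \ Ξ
    t_s = len(remaining) / f                           # T_s = Σ / f
    return xi, n_up, psi, remaining, t_s
```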
Step S104: judge whether N_up < T_s holds.
Step S105: if so, output the position set ψ of the interpolated frames.
Step S106: if not, update Γ to the difference set of Γ and Ξ, and recompute the Markov statistical features of the video set Γ of the video under test.
Here, some additional remarks are necessary. (1) The range of the video set of the video under test is: all frames of the candidate video serve as the initial estimate, and the range is then gradually narrowed to span from the first detected interpolated frame to the last detected interpolated frame. The range thus contains all interpolated frames together with their left and right neighbouring reference frames, which also avoids terminating prematurely on spliced video. (2) An interpolated frame is generally taken to be the middle frame of each sliding window of three consecutive checked frames, with consecutive sliding windows overlapping by one frame, because the Markov statistical features are extracted from every three consecutive frames of the video under test. (3) When no interpolated frame exists in the video set Γ of the video under test, the output position set of interpolated frames is empty; when interpolated frames exist in Γ and the number of executions of the judgment N_up < T_s exceeds a preset threshold (for example, 10), the output position set of interpolated frames is ψ. This setting effectively reduces the execution time of the localization method and thereby improves the efficiency of interpolated frame localization.
Here, computing the Markov statistical features of the video set Γ of the video under test in step S102 specifically includes:

(1) From the second frame of the video set of the video under test up to the second-to-last frame, compute the time frame difference matrix TFDM of each frame from the video sequence F(x, y, t), where x, y and t denote the spatial coordinates and the time scale respectively; the TFDM of the first frame and of the last frame is taken as the difference of the first two frames and of the last two frames respectively.

(2) Compute the spatial difference matrices of the space-time frame difference matrix STFDM along eight directions, truncate each spatial difference matrix with a preset threshold, model each of the eight directions with a first-order Markov process, and compute the empirical transition matrix of each direction.

(3) Extract the final Markov statistical feature from the empirical transition matrices of all directions. Note that when the preset truncation threshold is 3, the feature dimensionality of each matrix is (2·3+1)² = 49, i.e., each of the two empirical feature matrices has dimensionality 49, so the total feature dimensionality of the present invention is 98.
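The exact difference formulas are not reproduced in this text, so the following sketch makes explicit assumptions: TFDM is taken as the difference of each frame's two temporal neighbours clipped to [-3, 3], and the eight per-direction empirical transition matrices are averaged into two 49-dimensional groups to give the 98-dimensional feature:

```python
import numpy as np

T = 3  # truncation threshold; feature dim per matrix is (2*T + 1)**2 = 49

# eight scan directions (dy, dx); even indices -> group 1, odd -> group 2
DIRS = [(0, 1), (0, -1), (1, 0), (-1, 0), (1, 1), (-1, -1), (1, -1), (-1, 1)]

def shift_pairs(m, dy, dx):
    """Pairs (m[y, x], m[y+dy, x+dx]) over all valid positions."""
    h, w = m.shape
    y0, y1 = max(0, -dy), h - max(0, dy)
    x0, x1 = max(0, -dx), w - max(0, dx)
    return m[y0:y1, x0:x1], m[y0 + dy:y1 + dy, x0 + dx:x1 + dx]

def transition_matrix(src, dst):
    """Empirical first-order Markov transition matrix of a truncated
    difference matrix: rows = current value, columns = next value."""
    tm = np.zeros((2 * T + 1, 2 * T + 1))
    for a, b in zip(src.ravel(), dst.ravel()):
        tm[a + T, b + T] += 1
    rows = tm.sum(axis=1, keepdims=True)
    return tm / np.maximum(rows, 1)

def markov_features(video):
    """98-dim Markov statistical feature of a (frames, H, W) clip (sketch)."""
    v = video.astype(np.int32)
    tfdm = np.empty_like(v)
    tfdm[1:-1] = v[2:] - v[:-2]      # interior frames: neighbour difference
    tfdm[0] = v[1] - v[0]            # first frame: difference of first two
    tfdm[-1] = v[-1] - v[-2]         # last frame: difference of last two
    tfdm = np.clip(tfdm, -T, T)      # truncate with threshold T
    g1 = np.zeros((2 * T + 1, 2 * T + 1))
    g2 = np.zeros_like(g1)
    for frame in tfdm:
        for k, d in enumerate(DIRS):
            src, dst = shift_pairs(frame, *d)
            tm = transition_matrix(src, dst)
            if k % 2 == 0:
                g1 += tm
            else:
                g2 += tm
    g1 /= len(tfdm) * 4              # average over frames and four directions
    g2 /= len(tfdm) * 4
    return np.concatenate([g1.ravel(), g2.ravel()])  # 49 + 49 = 98 dims
```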
In addition, the interpolated frame localization method sets each initial value: the initial value of the video set Γ of the video under test is all frames of the video under test, the initial value of the per-round position set ψ of interpolated frames is the empty set, the initial value of the set Ξ of all detected interpolated frames is the empty set, the initial value of the number of middle frames N_up is 0, and the initial value of the playback time T_s of the updated Γ is 0. Setting these initial values gives the interpolated frame localization a unified starting point and thereby improves the accuracy of the test.
In addition, for testing, 25 different video groups were evaluated, with an average video length of 450 frames. The videos were randomly divided into two classes: 50% were used as the training sample set to train the Ensemble Classifier, and the remaining 50% were used for testing. The interpolated frame localization method also includes: when ΣS_tp > 0, setting the evaluation index F_1 from the detected positive samples S_p of interpolated frames, the true positive samples S_tp, and the false positive samples S_fp; and when ΣS_tp = 0, setting the evaluation index F_1 to 0. The method was compared with currently common video editing software (for example, MSU, YUVsoft, MVTools2, Respeedr), both when the video under test had not undergone compression and when it was H.264-compressed, giving the following comparison tables 1 and 2.
Contrast table 1
Contrast table 2
In summary, the interpolated frame localization method provided by this embodiment includes: first, extracting Markov features from a pre-built training set sample and inputting them into an ensemble classifier to build a classification model; then computing the Markov statistical features of the video set Γ of the video under test, obtaining a decision result from those features and the classification model, and updating Γ; next, merging the middle frame of every three consecutive frames detected as interpolated in the decision result into the set Ξ of interpolated frames detected in each round, where the number of middle frames is denoted N_up, the positions of all detected interpolated frames are denoted ψ, ψ is updated to the union of ψ and Ξ, and T_s = Σ/f is the playback time of the updated Γ, Σ being the number of video frames in Γ and f the frame rate of the video under test; and then judging whether N_up < T_s holds. If the result of the judgment is yes, the position set ψ of the interpolated frames is output; if the result is no, Γ is updated to the difference set of Γ and Ξ, and the Markov statistical features of the video set Γ are recomputed. Through the above steps, the localization of motion-compensated frame-rate up-conversion is realized, and the location information of the interpolated frames in the high-frame-rate video is accurately extracted.
Embodiment 2
Referring to Fig. 3 and Fig. 4, the interpolated frame localization device provided by this embodiment includes, connected in sequence: a classification model building module 1, a video update module 2, a locating module 3, a judging module 4, an affirmative execution module 5 and a negative execution module 6. In use, the classification model building module 1 extracts Markov features from a pre-built training set sample and inputs them into an ensemble classifier to build a classification model; the video update module 2 computes the Markov statistical features of the video set Γ of the video under test, obtains a decision result from those features and the classification model, and updates the video set Γ; the locating module 3 merges the middle frame of every three consecutive frames detected as interpolated in the decision result into the set Ξ of interpolated frames detected in each round, where the number of middle frames is denoted N_up, the positions of all detected interpolated frames are denoted ψ, ψ is updated to the union of ψ and Ξ, and T_s = Σ/f is the playback time of the updated Γ, Σ being the number of video frames in Γ and f the frame rate of the video under test; the judging module 4 judges whether N_up < T_s holds; the affirmative execution module 5 outputs the position set ψ of the interpolated frames if so; and the negative execution module 6 updates Γ to the difference set of Γ and Ξ and recomputes the Markov statistical features of the video set Γ if not.
The locating module 3 comprises a temporal difference matrix calculation unit 31, a spatial difference matrix calculation unit 32 and a position locating unit 33. In use, the temporal difference matrix calculation unit 31 calculates, from the second frame of the video set of the video under test up to the second-to-last frame, the temporal frame difference matrix TFDM of each frame, where F(x, y, t) is the video sequence and x, y and t denote the spatial coordinates and the time scale respectively; the TFDM of the first frame and of the last frame is equal to the difference of the first two frames and of the last two frames respectively. The spatial difference matrix calculation unit 32 calculates the spatial difference matrices of the spatio-temporal difference matrix STFDM along eight directions, the eight directions being {←, →, ↓, ↑, ↘, ↖, ↗, ↙}; each spatial difference matrix is truncated with a preset threshold, a first-order Markov process is used for modelling along each of the eight directions, and the empirical matrices M_t^→, M_t^←, M_t^↑, M_t^↓, M_t^↘, M_t^↖, M_t^↗ and M_t^↙ of the respective directions are calculated. The position locating unit 33 extracts the final Markov statistical features from the empirical matrices of the respective directions.
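The feature extraction performed by units 31–33 can be sketched as follows. This is a hedged reconstruction: the exact TFDM formula appears only as an image in the source, so the sketch assumes the temporal difference is the absolute difference of the two temporally adjacent frames, and the truncation threshold T = 3 is an arbitrary illustrative choice, not a value fixed by the patent.

```python
import numpy as np

T = 3  # truncation threshold (illustrative; the patent leaves it preset)

def tfdm(prev_frame, next_frame):
    # Temporal frame difference matrix; assumed here to be the absolute
    # difference of the two temporally adjacent frames.
    return np.abs(prev_frame.astype(int) - next_frame.astype(int))

def empirical_matrix(d, shift):
    # First-order Markov empirical (transition) matrix of the truncated
    # difference matrix d along one direction, given as a (dy, dx) shift.
    d = np.clip(d, -T, T).astype(int)
    dy, dx = shift
    H, W = d.shape
    cur = d[max(0, -dy):H - max(0, dy), max(0, -dx):W - max(0, dx)]
    nxt = d[max(0, dy):H - max(0, -dy), max(0, dx):W - max(0, -dx)]
    m = np.zeros((2 * T + 1, 2 * T + 1))
    np.add.at(m, (cur.ravel() + T, nxt.ravel() + T), 1)  # count transitions
    rows = m.sum(axis=1, keepdims=True)
    return np.divide(m, rows, out=np.zeros_like(m), where=rows > 0)

# the eight directions {←, →, ↓, ↑, ↘, ↖, ↗, ↙} as (dy, dx) shifts
DIRECTIONS = {'→': (0, 1), '←': (0, -1), '↓': (1, 0), '↑': (-1, 0),
              '↘': (1, 1), '↖': (-1, -1), '↗': (-1, 1), '↙': (1, -1)}

def markov_features(diff):
    # Concatenate the eight directional empirical matrices into one vector.
    return np.concatenate([empirical_matrix(diff, s).ravel()
                           for s in DIRECTIONS.values()])
```

Each empirical matrix is (2T+1) × (2T+1), so with T = 3 the concatenated feature vector has 8 × 49 = 392 entries per difference matrix.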
In addition, the interpolated frame localization apparatus further comprises: a video set initial value setting module, configured to set the initial value of the video set Γ of the video under test to all frames of the video under test; a position initial value setting module, configured to set the initial value of the position set ψ of the interpolated frames in each round to the empty set; an interpolated frame initial value setting module, configured to set the initial value of the set Ξ of all detected interpolated frames to the empty set; an intermediate frame initial value setting module, configured to set the initial value of the number N_up of the intermediate frames to 0; and a playback time initial value setting module, configured to set the initial value of the playback time T_s of the updated Γ to 0.
In addition, the interpolated frame localization apparatus further comprises a no-interpolated-frame module and a repetition module. The no-interpolated-frame module outputs an empty position set of interpolated frames when there is no interpolated frame in the video set Γ of the video under test; the repetition module outputs the position set ψ of the interpolated frames when there are interpolated frames in the video set Γ of the video under test and the number of times the judgement has been performed exceeds a preset threshold.
In addition, the interpolated frame localization apparatus further comprises a positive-domain evaluation module and a zero-value evaluation module. In use, the positive-domain evaluation module sets the evaluation index F1 when ΣS_tp > 0, where S_p is the number of positive interpolated-frame samples, S_tp the number of true positive samples of the interpolated frames and S_fp the number of false positive samples of the interpolated frames; the zero-value evaluation module sets the evaluation index F1 to 0 when ΣS_tp = 0.
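The exact formula of the evaluation index appears only as a figure in the source, but the three sample counts it is built from (positives S_p, true positives S_tp, false positives S_fp) match the conventional F1 score. The following sketch assumes that conventional definition:

```python
def evaluation_index(s_p, s_tp, s_fp):
    # F1-style evaluation index: 0 when there are no true positives
    # (the ΣS_tp = 0 case handled by the zero-value evaluation module),
    # otherwise the harmonic mean of precision and recall.
    if s_tp == 0:
        return 0.0
    precision = s_tp / (s_tp + s_fp)   # fraction of detections that are correct
    recall = s_tp / s_p                # fraction of true interpolated frames found
    return 2 * precision * recall / (precision + recall)
```

For example, 8 true positives out of 10 actual interpolated frames with 2 false positives gives precision = recall = 0.8, hence F1 = 0.8.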
In summary, the interpolated frame localization apparatus provided by the present embodiment comprises a classification model construction module 1, a video update module 2, a locating module 3, a judgement module 4, an affirmative execution module 5 and a negative execution module 6, connected in sequence. In use, the classification model construction module 1 extracts Markov features from a pre-built training set sample and inputs them into an ensemble classifier to build a classification model; the video update module 2 calculates the Markov statistical features of the video set Γ of the video under test, obtains a decision result using those features and the classification model, and updates the video set Γ; the locating module 3 merges the intermediate frame of every three consecutive frames detected as interpolated frames in the decision result into the set Ξ of interpolated frames detected in the current round, wherein the number of the intermediate frames is denoted N_up, the positions of all detected interpolated frames are denoted ψ, ψ is updated to the union of ψ and Ξ, and T_s = Σ/f is the playback time of the updated Γ, where Σ is the number of video frames in the video set Γ and f is the frame rate of the video under test; the judgement module 4 judges whether N_up < T_s is satisfied; if so, the affirmative execution module 5 outputs the position set ψ of the interpolated frames; if not, the negative execution module 6 updates Γ to the difference set of Γ and Ξ and recalculates the Markov statistical features of the video set Γ. Through the arrangement of the above modules, the problem of accurately locating interpolated frames in a video subjected to motion-compensated frame-rate up-conversion is effectively solved.
Finally, it should be noted that the embodiments described above are merely specific embodiments of the present invention, intended to illustrate rather than limit its technical solutions, and the scope of protection of the present invention is not limited thereto. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that, within the technical scope disclosed by the present invention, the technical solutions described in the foregoing embodiments may still be modified or readily changed, or some of their technical features may be equivalently substituted; such modifications, changes or substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention, and shall all be covered within the scope of protection of the present invention. Therefore, the scope of protection of the present invention shall be defined by the scope of the claims.
Claims (10)
1. An interpolated frame localization method, characterized by comprising:
extracting Markov features from a pre-built training set sample, and inputting the Markov features into an ensemble classifier to build a classification model;
calculating the Markov statistical features of a video set Γ of a video under test, obtaining a decision result using the Markov statistical features of the video set Γ of the video under test and the classification model, and updating the video set Γ of the video under test;
merging the intermediate frame of every three consecutive frames detected as interpolated frames in the decision result into the set Ξ of interpolated frames detected in each round of the loop, wherein the number of the intermediate frames is denoted N_up, the positions of all detected interpolated frames are denoted ψ, ψ is updated to the union of ψ and Ξ, and T_s is the playback time of the updated Γ, T_s = Σ/f, where Σ is the number of video frames in the video set Γ of the video under test and f is the frame rate of the video under test;
judging whether N_up < T_s is satisfied;
if so, outputting the position set ψ of the interpolated frames;
if not, updating Γ to the difference set of Γ and Ξ, and recalculating the Markov statistical features of the video set Γ of the video under test.
2. The interpolated frame localization method according to claim 1, characterized in that calculating the Markov statistical features of the video set Γ of the video under test comprises:
calculating, from the second frame of the video set of the video under test up to the second-to-last frame, the temporal frame difference matrix TFDM of each frame, where F(x, y, t) is the video sequence and x, y and t denote the spatial coordinates and the time scale respectively, the TFDM of the first frame and of the last frame being equal to the difference of the first two frames and of the last two frames respectively;
calculating the spatial difference matrices of the spatio-temporal difference matrix STFDM along eight directions, the eight directions being {←, →, ↓, ↑, ↘, ↖, ↗, ↙} respectively, truncating each spatial difference matrix with a preset threshold, modelling along the eight directions with a first-order Markov process, and calculating the empirical matrices M_t^→, M_t^←, M_t^↑, M_t^↓, M_t^↘, M_t^↖, M_t^↗ and M_t^↙ of the respective directions; and
extracting the final Markov statistical features from the empirical matrices of the respective directions.
3. The interpolated frame localization method according to claim 1, characterized in that the method further comprises:
setting the initial value of the video set Γ of the video under test to all frames of the video under test;
setting the initial value of the position set ψ of the interpolated frames in each round to the empty set;
setting the initial value of the set Ξ of all detected interpolated frames to the empty set;
setting the initial value of the number N_up of the intermediate frames to 0; and
setting the initial value of the playback time T_s of the updated Γ to 0.
4. The interpolated frame localization method according to claim 3, characterized in that the method further comprises:
outputting an empty position set of interpolated frames when there is no interpolated frame in the video set Γ of the video under test; and
outputting the position set ψ of the interpolated frames when there are interpolated frames in the video set Γ of the video under test and the number of times the judgement has been performed exceeds a preset threshold.
5. The interpolated frame localization method according to claim 4, characterized in that the method further comprises:
setting an evaluation index F1 when ΣS_tp > 0, where S_p is the number of positive interpolated-frame samples, S_tp the number of true positive samples of the interpolated frames and S_fp the number of false positive samples of the interpolated frames; and
setting the evaluation index F1 to 0 when ΣS_tp = 0.
6. An interpolated frame localization apparatus, characterized by comprising:
a classification model construction module, configured to extract Markov features from a pre-built training set sample and input the Markov features into an ensemble classifier to build a classification model;
a video update module, configured to calculate the Markov statistical features of a video set Γ of a video under test, obtain a decision result using the Markov statistical features of the video set Γ of the video under test and the classification model, and update the video set Γ of the video under test;
a locating module, configured to merge the intermediate frame of every three consecutive frames detected as interpolated frames in the decision result into the set Ξ of interpolated frames detected in each round of the loop, wherein the number of the intermediate frames is denoted N_up, the positions of all detected interpolated frames are denoted ψ, ψ is updated to the union of ψ and Ξ, and T_s is the playback time of the updated Γ, T_s = Σ/f, where Σ is the number of video frames in the video set Γ of the video under test and f is the frame rate of the video under test;
a judgement module, configured to judge whether N_up < T_s is satisfied;
an affirmative execution module, configured to output the position set ψ of the interpolated frames if it is; and
a negative execution module, configured to, if it is not, update Γ to the difference set of Γ and Ξ and recalculate the Markov statistical features of the video set Γ of the video under test.
7. The interpolated frame localization apparatus according to claim 6, characterized in that the locating module comprises:
a temporal difference matrix calculation unit, configured to calculate, from the second frame of the video set of the video under test up to the second-to-last frame, the temporal frame difference matrix TFDM of each frame, where F(x, y, t) is the video sequence and x, y and t denote the spatial coordinates and the time scale respectively, the TFDM of the first frame and of the last frame being equal to the difference of the first two frames and of the last two frames respectively;
a spatial difference matrix calculation unit, configured to calculate the spatial difference matrices of the spatio-temporal difference matrix STFDM along eight directions, the eight directions being {←, →, ↓, ↑, ↘, ↖, ↗, ↙} respectively, truncate each spatial difference matrix with a preset threshold, model along the eight directions with a first-order Markov process, and calculate the empirical matrices M_t^→, M_t^←, M_t^↑, M_t^↓, M_t^↘, M_t^↖, M_t^↗ and M_t^↙ of the respective directions; and
a position locating unit, configured to extract the final Markov statistical features from the empirical matrices of the respective directions.
8. The interpolated frame localization apparatus according to claim 6, characterized by further comprising:
a video set initial value setting module, configured to set the initial value of the video set Γ of the video under test to all frames of the video under test;
a position initial value setting module, configured to set the initial value of the position set ψ of the interpolated frames in each round to the empty set;
an interpolated frame initial value setting module, configured to set the initial value of the set Ξ of all detected interpolated frames to the empty set;
an intermediate frame initial value setting module, configured to set the initial value of the number N_up of the intermediate frames to 0; and
a playback time initial value setting module, configured to set the initial value of the playback time T_s of the updated Γ to 0.
9. The interpolated frame localization apparatus according to claim 8, characterized by further comprising:
a no-interpolated-frame module, configured to output an empty position set of interpolated frames when there is no interpolated frame in the video set Γ of the video under test; and
a repetition module, configured to output the position set ψ of the interpolated frames when there are interpolated frames in the video set Γ of the video under test and the number of times the judgement has been performed exceeds a preset threshold.
10. The interpolated frame localization apparatus according to claim 9, characterized by further comprising:
a positive-domain evaluation module, configured to set an evaluation index F1 when ΣS_tp > 0, where S_p is the number of positive interpolated-frame samples, S_tp the number of true positive samples of the interpolated frames and S_fp the number of false positive samples of the interpolated frames; and
a zero-value evaluation module, configured to set the evaluation index F1 to 0 when ΣS_tp = 0.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710679927.6A CN107295214B (en) | 2017-08-09 | 2017-08-09 | Interpolated frame localization method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107295214A true CN107295214A (en) | 2017-10-24 |
CN107295214B CN107295214B (en) | 2019-12-03 |
Family
ID=60105746
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710679927.6A Active CN107295214B (en) | 2017-08-09 | 2017-08-09 | Interpolated frame localization method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107295214B (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1450816A (en) * | 2003-04-22 | 2003-10-22 | 上海大学 | Stereo video stream coder/decoder and stereo video coding/decoding system |
CN1592398A (en) * | 2003-03-28 | 2005-03-09 | 株式会社东芝 | Method of generating frame interpolation and image display system therefor |
EP2214137A2 (en) * | 2009-01-29 | 2010-08-04 | Vestel Elektronik Sanayi ve Ticaret A.S. | A method and apparatus for frame interpolation |
CN103702128A (en) * | 2013-12-24 | 2014-04-02 | 浙江工商大学 | Interpolation frame generating method applied to up-conversion of video frame rate |
CN104079950A (en) * | 2014-07-04 | 2014-10-01 | 福建天晴数码有限公司 | Video output processing method, device and system and video receiving processing method, device and system |
CN106031144A (en) * | 2014-03-28 | 2016-10-12 | 华为技术有限公司 | Method and device for generating a motion-compensated video frame |
CN106230611A (en) * | 2015-06-02 | 2016-12-14 | 杜比实验室特许公司 | There is intelligence retransmit and system for monitoring quality in the service of interpolation |
CN106331723A (en) * | 2016-08-18 | 2017-01-11 | 上海交通大学 | Video frame rate up-conversion method and system based on motion region segmentation |
Non-Patent Citations (2)
Title |
---|
Yang Yue, Gao Xinbo, Li Jinxiu, "A Motion-Adaptive Frame Rate Up-Conversion Algorithm", Journal of Image and Graphics * |
Xue Chongchong, "Research on Motion Estimation Algorithms and Prediction Search Starting Points in Video Compression", CNKI Outstanding Master's Theses Full-text Database * |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110049205A (en) * | 2019-04-26 | 2019-07-23 | 湖南科技大学 | The detection method that video motion compensation frame interpolation based on Chebyshev matrix is distorted |
CN111263193A (en) * | 2020-01-21 | 2020-06-09 | 北京三体云联科技有限公司 | Video frame up-down sampling method and device, and video live broadcasting method and system |
CN111263193B (en) * | 2020-01-21 | 2022-06-17 | 北京世纪好未来教育科技有限公司 | Video frame up-down sampling method and device, and video live broadcasting method and system |
Also Published As
Publication number | Publication date |
---|---|
CN107295214B (en) | 2019-12-03 |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |
| CP03 | Change of name, title or address | |
Address after: 410000 room 801, accelerator production workshop, building B1, Haichuang science and Technology Industrial Park, No. 627 Lugu Avenue, Changsha high tech Development Zone, Changsha City, Hunan Province Patentee after: Hunan Xingtian Electronic Technology Co.,Ltd. Address before: 410000 room 801, accelerator production workshop, building B1, Haichuang science and Technology Industrial Park, No. 627 Lugu Avenue, high tech Development Zone, Changsha, Hunan Patentee before: HUNAN XING TIAN ELECTRONIC TECHNOLOGY Co.,Ltd. |