CN111885337B - Multi-frame-rate video efficient editing method - Google Patents

Multi-frame-rate video efficient editing method

Info

Publication number: CN111885337B (application CN202010565009.2A)
Authority: CN (China)
Legal status: Active (granted)
Other versions: CN111885337A (application publication)
Other languages: Chinese (zh)
Inventors: 马萧萧, 周熙, 孟宪林
Assignee (original and current): Chengdu Dongfangshengxing Electronics Co ltd

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00: Television systems
    • H04N 7/01: Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level
    • H04N 7/0127: Conversion of standards processed at pixel level by changing the field or frame frequency of the incoming video signal, e.g. frame rate converter
    • H04N 5/00: Details of television systems
    • H04N 5/222: Studio circuitry; Studio devices; Studio equipment
    • H04N 5/262: Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects

Abstract

The invention discloses an efficient multi-frame-rate video editing method comprising the following steps: acquire the frame rate files of the source video and the target video; calculate each target frame from the serial numbers of its two reference frames; and perform frame rate conversion with the calculated target frames. Calculating the target frame includes calculating the nth frame of the target video in field mode and calculating the nth frame of the target video in frame mode, so frame rates can be converted in either mode. The invention solves the problems of low efficiency and stuttering pictures that occur when a non-linear editing system edits multiple files with different frame rates.

Description

Multi-frame-rate video efficient editing method
Technical Field
The invention relates to the field of high-definition video processing, and in particular to an efficient multi-frame-rate video editing method.
Background
The television we watch is displayed at 25 frames per second (in the PAL system used in China), i.e. 25 images per second; because of the persistence of vision, the human eye does not perceive flicker. Each frame is scanned as two fields: in a picture tube, the electron beam sweeps horizontally, line by line, from top to bottom; the first field scans the odd lines and the second field scans the even lines. This is what is commonly called interlaced scanning, and one frame is complete after both fields have been scanned. With a field frequency of 50 Hz and a frame frequency of 25 Hz, the odd field and the even field scan the same frame image, and two adjacent frame images differ unless the picture is still. The computer monitors we use follow the same scanning scheme as television picture tubes. When processing video with Premiere, different settings are made depending on the purpose. If the output is a computer-only format such as MPG or WMV, fields and frames need not be managed explicitly: the encoder handles them automatically. If the output is a format for watching on television, or the DV AVI format used on a computer, the 'field' setting should be set to field-first, otherwise subtitle flicker easily occurs; the frame rate is set automatically when the PAL or NTSC system is selected, and for some DV clips it can also be set manually to frame (progressive) mode.
In existing multi-frame-rate editing workflows, the footage brought into a non-linear editing (NLE) system is shot on mobile phones, cameras and drones, at frame rates such as 25, 30 or 60 frames per second; existing non-linear editors are inefficient and produce stuttering pictures when editing files with different frame rates together.
Disclosure of Invention
The invention aims to overcome the shortcomings of the prior art by providing an efficient multi-frame-rate video editing method.
This aim is achieved through the following technical solution. An efficient multi-frame-rate video editing method comprises the following steps:
S1, acquiring frame rate file information of the source video and the target video;
S2, calculating target frames according to the target frame rate file information of the target video;
S3, performing frame rate conversion with the calculated target frames.
Step S2 includes calculating the nth frame of the target video in field mode and calculating the nth frame of the target video in frame mode.
The calculation of the nth frame of the target video in field mode specifically includes calculating the target top field pixel value and calculating the target bottom field pixel value.
Calculating the target top field pixel value in step S2 includes the following sub-steps:
S2011, calculating the serial numbers of the two reference fields of the top field, denoted RefNo1 and RefNo2;
S2012, calculating the weight of the RefNo1 field by the formula:
R1=RefNo2-((2*n-2)*k+1);
wherein k represents the frame rate ratio of the target video to the source video;
S2013, judging whether the RefNo1 field is a bottom field; if yes, executing step S2014, otherwise executing step S2015;
S2014, converting the RefNo1 field data SrcField1 into a top field to obtain SrcField1', and calculating the target top field pixel value by the formula:
DstField1=R1*SrcField1’+(1-R1)*SrcField2;
S2015, converting the RefNo2 field data SrcField2 into a top field to obtain SrcField2', and calculating the target top field pixel value by the formula:
DstField1=R1*SrcField1+(1-R1)*SrcField2’;
wherein DstField1 represents the target top field pixel value and R1 represents the weight of the RefNo1 field.
The formulas for step S2011 are:
RefNo1=floor((2*n-2)*k+1);
RefNo2=RefNo1+1.
Calculating the target bottom field pixel value in step S2 includes the following sub-steps:
S2021, calculating the serial numbers of the two reference fields of the bottom field, denoted RefNo3 and RefNo4;
S2022, calculating the weight of the RefNo3 field by the formula:
R2=RefNo4-((2*n-1)*k+1);
wherein R2 represents the weight of the RefNo3 field;
S2023, judging whether the RefNo3 field is a bottom field; if yes, executing step S2024, otherwise executing step S2025;
S2024, converting the RefNo4 field data SrcField4 into a bottom field to obtain SrcField4', and calculating the target bottom field pixel value by the formula:
DstField2=R2*SrcField3+(1-R2)*SrcField4’;
S2025, converting the RefNo3 field data SrcField3 into a bottom field to obtain SrcField3', and calculating the target bottom field pixel value by the formula:
DstField2=R2*SrcField3’+(1-R2)*SrcField4;
wherein DstField2 represents the target bottom field pixel value and R2 represents the weight of the RefNo3 field.
The formulas for step S2021 are:
RefNo3=floor((2*n-1)*k+1);
RefNo4=RefNo3+1.
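The serial-number and weight formulas above can be sketched in a few lines. This is an illustration only, with two reading choices that the patent leaves open stated as assumptions: k is taken as the source rate divided by the target rate (only that reading keeps the position (2n-2)*k+1 inside the source field sequence), and RefNo4 is taken as RefNo3+1, symmetric with RefNo2=RefNo1+1, which keeps the weight R2 within [0, 1].

```python
import math

def field_refs(n, k):
    """Reference field serial numbers and weights for target frame n (1-based).

    Returns ((RefNo1, RefNo2, R1), (RefNo3, RefNo4, R2)) for the target
    top and bottom fields. Assumes k = source rate / target rate.
    """
    result = []
    for pos in ((2 * n - 2) * k + 1, (2 * n - 1) * k + 1):
        lo = math.floor(pos)               # RefNo1 / RefNo3
        hi = lo + 1                        # RefNo2 / RefNo4 (taken as lo + 1)
        result.append((lo, hi, hi - pos))  # weight R1 / R2 of the earlier field
    return tuple(result)
```

For example, converting a 25 fps source to 50 fps (k = 0.5), the bottom field of target frame 1 blends source fields 1 and 2 equally (weight 0.5), while its top field is exactly source field 1 (weight 1.0).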
The calculation of the nth frame of the target video in frame mode specifically includes the following steps:
S2031, calculating the serial numbers of the two reference frames of the target frame, denoted RefNo5 and RefNo6;
S2032, calculating the weight of the RefNo5 frame by the formula:
R3=RefNo6-((n-1)*k+1);
wherein R3 represents the weight of the RefNo5 frame;
S2033, calculating the target frame from the reference frame serial numbers obtained in step S2031 and the weight obtained in step S2032.
The formulas of step S2031 for the serial numbers of the two reference frames are:
RefNo5=floor((n-1)*k+1);
RefNo6=RefNo5+1.
Step S2033 specifically includes the following sub-steps:
S20331, fetching the frame data for the two serial numbers RefNo5 and RefNo6 calculated in step S2031, denoted SrcFrame1 and SrcFrame2 respectively;
S20332, calculating the target frame from the two reference frames and the weight of the RefNo5 frame by the formula:
DstFrame=R3*SrcFrame1+(1-R3)*SrcFrame2;
wherein DstFrame represents the target frame and R3 represents the weight of the RefNo5 frame.
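The frame-mode serial numbers and weight (steps S2031 and S2032) reduce to a single position calculation. The sketch below assumes, as the mapping formula requires, that k is the source frame rate divided by the target frame rate; the patent's wording on the ratio direction is ambiguous in translation.

```python
import math

def frame_refs(n, k):
    """Serial numbers RefNo5, RefNo6 and weight R3 for target frame n (1-based).

    Assumes k = source frame rate / target frame rate, so target frame n
    corresponds to position (n - 1) * k + 1 on the source frame timeline.
    """
    pos = (n - 1) * k + 1
    ref_no5 = math.floor(pos)
    ref_no6 = ref_no5 + 1
    r3 = ref_no6 - pos   # weight of the RefNo5 frame, in (0, 1]
    return ref_no5, ref_no6, r3
```

Converting 25 fps to 50 fps (k = 0.5), target frame 2 lies halfway between source frames 1 and 2, so R3 = 0.5 and the two frames are blended equally.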
The beneficial effects of the invention are:
(1) it solves the problems of low efficiency and stuttering pictures when a non-linear editing system edits files of multiple frame rates;
(2) it provides the calculation of the nth frame of the target video in both field mode and frame mode, so frame rates can be converted in either mode.
Drawings
FIG. 1 is a block flow diagram of the method of the present invention;
FIG. 2 is a block diagram illustrating a method for calculating the nth frame of a target video in field mode according to the present invention;
fig. 3 is a flow chart of a method for calculating an nth frame of a target video in a frame mode according to the present invention.
Detailed Description
To make the technical features, objects and effects of the invention easier to understand, embodiments of the invention are described below with reference to the accompanying drawings; the scope of the invention is, however, not limited to what follows.
The specific principle flow of this embodiment is as follows:
as shown in fig. 1, a method for efficiently editing a multi-frame rate video includes the following steps:
s1, acquiring frame rate file information of the source video and the target video;
s2, calculating a target frame according to the target frame file information of the target video;
and S3, performing frame rate conversion through the calculated target frame.
Wherein the step S2 includes the calculation of the nth frame of the target video in the field mode and the calculation of the nth frame of the target video in the frame mode.
(1) Field mode:
The target frame calculation principle and flow are shown in fig. 2.
The calculation of the nth frame of the target video in field mode specifically includes calculating the target top field pixel value and calculating the target bottom field pixel value.
Calculating the target top field pixel value in step S2 includes the following sub-steps:
S2011, calculating the serial numbers of the two reference fields of the top field, denoted RefNo1 and RefNo2;
S2012, calculating the weight of the RefNo1 field by the formula:
R1=RefNo2-((2*n-2)*k+1);
wherein k represents the frame rate ratio of the target video to the source video;
S2013, judging whether the RefNo1 field is a bottom field; if yes, executing step S2014, otherwise executing step S2015;
S2014, converting the RefNo1 field data SrcField1 into a top field to obtain SrcField1', and calculating the target top field pixel value by the formula:
DstField1=R1*SrcField1’+(1-R1)*SrcField2;
S2015, converting the RefNo2 field data SrcField2 into a top field to obtain SrcField2', and calculating the target top field pixel value by the formula:
DstField1=R1*SrcField1+(1-R1)*SrcField2’;
wherein DstField1 represents the target top field pixel value and R1 represents the weight of the RefNo1 field.
The formulas for step S2011 are:
RefNo1=floor((2*n-2)*k+1);
RefNo2=RefNo1+1.
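Steps S2011 to S2015 can be sketched as follows. Several details are assumptions, since the patent does not fix them: source fields are held in a 0-based Python list whose odd serial numbers (1, 3, 5, ...) are top fields, k is taken as source rate / target rate, and the bottom-to-top parity conversion is illustrated with a one-line vertical shift (np.roll) purely as a placeholder.

```python
import math
import numpy as np

def target_top_field(src_fields, n, k):
    """Steps S2011 to S2015 as a sketch: blend two source fields into DstField1.

    src_fields holds field 1 at index 0, field 2 at index 1, and so on;
    odd serial numbers are assumed to be top fields.
    """
    pos = (2 * n - 2) * k + 1
    ref_no1 = math.floor(pos)                    # RefNo1
    ref_no2 = ref_no1 + 1                        # RefNo2
    r1 = ref_no2 - pos                           # R1
    f1 = src_fields[ref_no1 - 1].astype(float)   # SrcField1
    f2 = src_fields[ref_no2 - 1].astype(float)   # SrcField2
    if ref_no1 % 2 == 0:                         # S2013: RefNo1 is a bottom field
        f1 = np.roll(f1, 1, axis=0)              # S2014: SrcField1' (illustrative shift)
    else:                                        # S2015: then RefNo2 is the bottom field
        f2 = np.roll(f2, 1, axis=0)              # SrcField2' (illustrative shift)
    return r1 * f1 + (1 - r1) * f2               # DstField1
```

The target bottom field (steps S2021 to S2025) follows the same pattern with position (2n-1)*k+1 and the parity test inverted.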
Calculating the target bottom field pixel value in step S2 includes the following sub-steps:
S2021, calculating the serial numbers of the two reference fields of the bottom field, denoted RefNo3 and RefNo4;
S2022, calculating the weight of the RefNo3 field by the formula:
R2=RefNo4-((2*n-1)*k+1);
wherein R2 represents the weight of the RefNo3 field;
S2023, judging whether the RefNo3 field is a bottom field; if yes, executing step S2024, otherwise executing step S2025;
S2024, converting the RefNo4 field data SrcField4 into a bottom field to obtain SrcField4', and calculating the target bottom field pixel value by the formula:
DstField2=R2*SrcField3+(1-R2)*SrcField4’;
S2025, converting the RefNo3 field data SrcField3 into a bottom field to obtain SrcField3', and calculating the target bottom field pixel value by the formula:
DstField2=R2*SrcField3’+(1-R2)*SrcField4;
wherein DstField2 represents the target bottom field pixel value and R2 represents the weight of the RefNo3 field.
The formulas for step S2021 are:
RefNo3=floor((2*n-1)*k+1);
RefNo4=RefNo3+1.
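The patent computes DstField1 and DstField2 separately but leaves their assembly into the target frame implicit; for interlaced material the usual final step is to weave the two fields together line by line. A sketch under that assumption, with each field an H/2 x W array:

```python
import numpy as np

def weave_fields(top, bottom):
    """Interleave a top field and a bottom field (each H/2 x W) into an
    H x W progressive frame: top-field lines land on rows 0, 2, 4, ...
    and bottom-field lines on rows 1, 3, 5, ...
    """
    h, w = top.shape
    frame = np.empty((2 * h, w), dtype=top.dtype)
    frame[0::2] = top
    frame[1::2] = bottom
    return frame
```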
(2) Frame mode:
The specific calculation principle and flow are shown in fig. 3.
The calculation of the nth frame of the target video in frame mode specifically includes the following steps:
S2031, calculating the serial numbers of the two reference frames of the target frame, denoted RefNo5 and RefNo6;
S2032, calculating the weight of the RefNo5 frame by the formula:
R3=RefNo6-((n-1)*k+1);
wherein R3 represents the weight of the RefNo5 frame;
S2033, calculating the target frame from the reference frame serial numbers obtained in step S2031 and the weight obtained in step S2032.
The formulas of step S2031 for the serial numbers of the two reference frames are:
RefNo5=floor((n-1)*k+1);
RefNo6=RefNo5+1.
Step S2033 specifically includes the following sub-steps:
S20331, fetching the frame data for the two serial numbers RefNo5 and RefNo6 calculated in step S2031, denoted SrcFrame1 and SrcFrame2 respectively;
S20332, calculating the target frame from the two reference frames and the weight of the RefNo5 frame by the formula:
DstFrame=R3*SrcFrame1+(1-R3)*SrcFrame2;
wherein DstFrame represents the target frame and R3 represents the weight of the RefNo5 frame.
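Putting steps S2031 through S20332 together over a whole clip, a frame-mode conversion pass might look like the sketch below. Assumptions not fixed by the patent: k is taken as source fps / target fps, the output length is rounded from the rate ratio, and RefNo6 is clamped at the last source frame so the final target frames stay in range.

```python
import math
import numpy as np

def convert_frame_rate(src_frames, src_fps, dst_fps):
    """Frame-mode conversion over a whole clip (steps S2031 to S20332)."""
    k = src_fps / dst_fps                            # assumed meaning of k
    n_out = int(round(len(src_frames) * dst_fps / src_fps))
    out = []
    for n in range(1, n_out + 1):
        pos = (n - 1) * k + 1                        # position on the source timeline
        ref_no5 = math.floor(pos)                    # RefNo5
        ref_no6 = min(ref_no5 + 1, len(src_frames))  # RefNo6, clamped at clip end
        r3 = (ref_no5 + 1) - pos                     # R3, weight of the RefNo5 frame
        f1 = src_frames[ref_no5 - 1].astype(float)   # SrcFrame1
        f2 = src_frames[ref_no6 - 1].astype(float)   # SrcFrame2
        out.append(r3 * f1 + (1 - r3) * f2)          # DstFrame
    return out
```

With a two-frame 25 fps clip converted to 50 fps, the second output frame is the equal blend of the two source frames, matching R3 = 0.5 from step S2032.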
The foregoing describes preferred embodiments of the invention. The invention is not limited to the precise form disclosed herein; various other combinations, modifications and environments falling within the scope of the inventive concept, whether described above or apparent to those skilled in the relevant art, remain possible. Modifications and variations may be effected by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (1)

1. A multi-frame rate video efficient editing method is characterized by comprising the following steps:
s1, acquiring frame rate file information of the source video and the target video;
s2, calculating the target frame according to the target frame rate file information of the target video;
s3, performing frame rate conversion on the target frame obtained by calculation;
wherein the step S2 includes the calculation of the nth frame of the target video in the field mode and the calculation of the nth frame of the target video in the frame mode;
the calculation of the nth frame of the target video in the field mode specifically comprises calculating a target top field pixel value and calculating a target bottom field pixel value;
calculating the target top field pixel value in field mode comprises the following sub-steps:
S2011, calculating the serial numbers of the two reference fields of the top field, denoted RefNo1 and RefNo2; the formulas for step S2011 are:
RefNo1=floor((2*n-2)*k+1);
RefNo2=RefNo1+1;
s2012, calculating the weight of the RefNo1 field, wherein the specific calculation formula is as follows:
R1=RefNo2-((2*n-2)*k+1);
wherein k represents the frame rate ratio of the target video to the source video;
s2013, judging whether the RefNo1 field is a bottom field; if yes, executing step S2014, otherwise, executing step S2015;
S2014, converting the RefNo1 field data SrcField1 into a top field to obtain SrcField1', and calculating the target top field pixel value by the formula:
DstField1=R1*SrcField1’+(1-R1)*SrcField2;
S2015, converting the RefNo2 field data SrcField2 into a top field to obtain SrcField2', and calculating the target top field pixel value by the formula:
DstField1=R1*SrcField1+(1-R1)*SrcField2’;
wherein DstField1 represents the target top field pixel value and R1 represents the weight of the RefNo1 field; calculating the target bottom field pixel value in field mode comprises the following sub-steps:
S2021, calculating the serial numbers of the two reference fields of the bottom field, denoted RefNo3 and RefNo4; the formulas for step S2021 are:
RefNo3=floor((2*n-1)*k+1);
RefNo4=RefNo3+1;
s2022, calculating the weight of the RefNo3 field, wherein the specific calculation formula is as follows:
R2=RefNo4-((2*n-1)*k+1);
wherein R2 represents the weight of the RefNo3 field;
s2023, judging whether the RefNo3 field is a bottom field; if yes, go to step S2024, otherwise go to step S2025;
S2024, converting the RefNo4 field data SrcField4 into a bottom field to obtain SrcField4', and calculating the target bottom field pixel value by the formula:
DstField2=R2*SrcField3+(1-R2)*SrcField4’;
S2025, converting the RefNo3 field data SrcField3 into a bottom field to obtain SrcField3', and calculating the target bottom field pixel value by the formula:
DstField2=R2*SrcField3’+(1-R2)*SrcField4;
wherein DstField2 represents the target bottom field pixel value and R2 represents the weight of the RefNo3 field;
the calculation of the nth frame of the target video in the frame mode specifically comprises the following steps:
S2031, calculating the serial numbers of the two reference frames of the target frame, denoted RefNo5 and RefNo6; the formulas of step S2031 are:
RefNo5=floor((n-1)*k+1);
RefNo6=RefNo5+1;
s2032, calculating the weight of the RefNo5 frame, wherein the specific calculation formula is as follows:
R3=RefNo6-((n-1)*k+1);
wherein R3 represents the weight of the RefNo5 frame;
S2033, calculating the target frame from the reference frame serial numbers obtained in step S2031 and the weight obtained in step S2032;
the step S2033 specifically includes the following substeps:
S20331, fetching the frame data for the two serial numbers RefNo5 and RefNo6 calculated in step S2031, denoted SrcFrame1 and SrcFrame2 respectively;
s20332, calculating the target frame by the two frame numbers and the weight of the RefNo5 frame, wherein the specific calculation formula is as follows:
DstFrame=R3*SrcFrame1+(1-R3)*SrcFrame2;
wherein DstFrame represents the target frame and R3 represents the weight of the RefNo5 frame.
Priority Applications (1)

Application Number: CN202010565009.2A; Priority date: 2020-06-19; Filing date: 2020-06-19; Title: Multi-frame-rate video efficient editing method; granted as CN111885337B (Active)

Publications (2)

CN111885337A, published 2020-11-03 (application publication)
CN111885337B, published 2022-03-29 (granted patent)

Family

Family ID: 73156536
Family Applications (1): CN202010565009.2A, filed 2020-06-19, CN111885337B (Active)
Country Status (1): CN

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110876060A (en) * 2018-08-31 2020-03-10 网宿科技股份有限公司 Code rate adjusting method and device in coding process

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20070074781A (en) * 2006-01-10 2007-07-18 삼성전자주식회사 Frame rate converter
EP2106136A1 (en) * 2008-03-28 2009-09-30 Sony Corporation Motion compensated temporal interpolation for frame rate conversion of video signals
CN102131058B (en) * 2011-04-12 2013-04-17 上海理滋芯片设计有限公司 Speed conversion processing module and method of high definition digital video frame
CN105578207A (en) * 2015-12-18 2016-05-11 无锡天脉聚源传媒科技有限公司 Video frame rate conversion method and device
CN106488227B (en) * 2016-10-12 2019-03-15 广东中星电子有限公司 A kind of video reference frame management method and system
CN107222758A (en) * 2016-12-16 2017-09-29 深圳市万佳安物联科技股份有限公司 A kind of method for converting video frame rate and device
CN110248132B (en) * 2019-05-31 2020-12-01 成都东方盛行电子有限责任公司 Video frame rate interpolation method
CN111263193B (en) * 2020-01-21 2022-06-17 北京世纪好未来教育科技有限公司 Video frame up-down sampling method and device, and video live broadcasting method and system




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant