CN111885335A - Ultrahigh-definition down-conversion rendering method - Google Patents

Ultrahigh-definition down-conversion rendering method

Info

Publication number
CN111885335A
CN111885335A (application CN202010564525.3A)
Authority
CN
China
Prior art keywords
frame
field
conversion
mode
curframe
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010564525.3A
Other languages
Chinese (zh)
Other versions
CN111885335B (en)
Inventor
马萧萧
刘科材
莫海燕
康佳星
孟宪林
唐雄
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Dongfangshengxing Electronics Co., Ltd.
Original Assignee
Chengdu Dongfangshengxing Electronics Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Dongfangshengxing Electronics Co., Ltd.
Priority to CN202010564525.3A
Publication of CN111885335A
Application granted
Publication of CN111885335B
Legal status: Active
Anticipated expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 Television systems
    • H04N 7/01 Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level
    • H04N 7/0135 Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level involving interpolation processes
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 Details of television systems
    • H04N 5/14 Picture signal circuitry for video frequency region
    • H04N 5/21 Circuitry for suppressing or minimising disturbance, e.g. moiré or halo

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Television Systems (AREA)

Abstract

The invention discloses an ultra-high-definition down-conversion rendering method, which comprises the following steps: collecting a current frame CurFrame, the previous frame PreFrame and the next frame NextFrame in a source video sequence; selecting a mode from among a frame-to-frame mode, a frame-to-field mode, a field-to-frame mode and a field-to-field mode; and performing down-conversion according to the selected mode. The invention corrects the bottom field and the top field with a de-interlacing algorithm, down-converts video in the different modes with an interpolation algorithm, and vertically filters the down-converted frames, thereby solving problems such as motion-picture jitter and picture discontinuity when ultra-high-definition (4K) material is edited in a high-definition mode.

Description

Ultrahigh-definition down-conversion rendering method
Technical Field
The invention relates to the field of high-definition video processing, and in particular to an ultrahigh-definition down-conversion rendering method.
Background
The conversion of a high-definition signal into a standard-definition signal is called down-conversion and is performed by a down converter. Under the current Chinese television standard, when a high-definition signal is down-converted to standard definition, only the line frequency and the aspect ratio need to be converted, because the field frequency remains unchanged. The process is as follows: the high-definition interlaced signal is first converted into a progressive signal, pixel conversion is then completed by interpolation, and the result is finally converted back into an interlaced signal in the 576i/50 format. After down-conversion, the image retains good sharpness, color saturation and definition, and the picture appears finer and softer.
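Purely as an illustration of the conventional pipeline just described, a minimal sketch follows. The line-averaging de-interlacer, the choice of interpolation kernel and the 1920×1080 to 720×576 raster sizes are assumptions of this sketch, not details taken from the patent or from any broadcast standard.

```python
# Sketch of the conventional HD -> SD down-conversion pipeline: de-interlace,
# convert pixels by interpolation, then read the result out as two fields.
import numpy as np
import cv2


def deinterlace_line_average(frame: np.ndarray) -> np.ndarray:
    """Keep the top-field lines and rebuild the bottom-field lines by
    averaging the top-field lines above and below (simple intra-field bob)."""
    prog = frame.astype(np.float32).copy()
    top = prog[0::2]
    below = np.vstack([top[1:], top[-1:]])        # top-field line below, edge clamped
    prog[1::2] = (top + below) / 2.0
    return prog


def hd_1080i_to_sd_576i(frame_1080i: np.ndarray):
    """Return (top_field, bottom_field) of one 576i/50 output frame."""
    prog = deinterlace_line_average(frame_1080i)                      # interlaced -> progressive
    sd = cv2.resize(prog, (720, 576), interpolation=cv2.INTER_AREA)   # pixel conversion by interpolation
    sd = np.clip(sd, 0, 255).astype(np.uint8)
    return sd[0::2], sd[1::2]                                         # interlaced read-out: top / bottom field
```

INTER_AREA is used here only because it averages source pixels when shrinking, which limits aliasing; any other interpolation kernel could be substituted.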
However, in the ultra-high-definition (4K) mode, when certain special pictures, such as relatively thin horizontal or vertical stripes, are converted from high definition to standard definition, phenomena such as motion-picture jitter and picture discontinuity are often produced.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and to provide an ultra-high-definition down-conversion rendering method.
The aim of the invention is achieved through the following technical solution: an ultra-high-definition down-conversion rendering method comprises the following steps:
S1, collecting a current frame CurFrame, the previous frame PreFrame of the current frame and the next frame NextFrame of the current frame in a source video sequence;
S2, selecting a mode, wherein the modes comprise a frame-to-frame mode, a frame-to-field mode, a field-to-frame mode and a field-to-field mode;
S3, performing down-conversion according to the mode selected in step S2.
The frame-to-frame mode down-conversion comprises the sub-steps of:
s3011, inputting a CurFrame;
s3012, performing down-conversion on the CurFrame through an interpolation algorithm to obtain a down-converted frame ScaleFrame 1;
s3013, vertically filtering the ScaleFrame1 to obtain a vertically filtered frame FilterFrame 1.
The frame-to-field mode down-conversion includes calculating the top-field frame and calculating the bottom-field frame; wherein calculating the top-field frame comprises the following sub-steps:
S30211, inputting the CurFrame, wherein the frame TempFrame1 actually used for down-conversion is equal to the CurFrame;
S30212, down-converting the TempFrame1 through an interpolation algorithm to obtain a down-converted frame ScaleFrame2;
S30213, vertically filtering the ScaleFrame2 to obtain a vertically filtered frame FilterFrame2, wherein the FilterFrame2 is the top field;
calculating the bottom-field frame comprises the following sub-steps:
S30221, inputting the CurFrame and the NextFrame, and calculating the linear interpolation of the two input frames to obtain the frame TempFrame2 actually used for down-conversion;
S30222, down-converting the TempFrame2 through an interpolation algorithm to obtain a down-converted frame ScaleFrame3;
S30223, vertically filtering the ScaleFrame3 to obtain a vertically filtered frame FilterFrame3, wherein the FilterFrame3 is the bottom field.
The field-to-frame mode down-conversion comprises the sub-steps of:
S3031, inputting the CurFrame, the PreFrame and the NextFrame, and de-interlacing the CurFrame through a de-interlacing algorithm to obtain a de-interlaced progressive frame ProgressiveFrame1;
S3032, performing down-conversion on the de-interlaced frame ProgressiveFrame1 through an interpolation algorithm to obtain a ScaleFrame4;
S3033, vertically filtering the ScaleFrame4 to obtain a vertically filtered frame FilterFrame4.
The field-to-field mode down-conversion comprises the sub-steps of:
S3041, inputting the CurFrame, the PreFrame and the NextFrame, and de-interlacing the CurFrame through a de-interlacing algorithm, correcting the bottom field when calculating the top field and correcting the top field when calculating the bottom field, to obtain a de-interlaced progressive frame ProgressiveFrame2;
S3042, performing down-conversion on the de-interlaced frame ProgressiveFrame2 through an interpolation algorithm to obtain a ScaleFrame5;
S3043, vertically filtering the ScaleFrame5 to obtain a FilterFrame5; when the top field is calculated, the result is the top field of the FilterFrame5, and when the bottom field is calculated, the result is the bottom field of the FilterFrame5.
The invention has the following beneficial effects: the bottom field and the top field are corrected by a de-interlacing algorithm, video in the different modes is down-converted by an interpolation algorithm, and the down-converted frames are vertically filtered, thereby solving problems such as motion-picture jitter and picture discontinuity when ultra-high-definition (4K) material is edited in a high-definition mode.
Drawings
FIG. 1 is a block flow diagram of the method of the present invention;
FIG. 2 is a flowchart of step S3 of the method according to the present invention.
Detailed Description
In order to more clearly understand the technical features, objects and effects of the present invention, the embodiments of the present invention will be described with reference to the accompanying drawings, but the scope of the present invention is not limited to the following.
As shown in FIG. 1, an ultra-high-definition down-conversion rendering method includes the following steps:
S1, collecting a current frame CurFrame, the previous frame PreFrame of the current frame and the next frame NextFrame of the current frame in a source video sequence;
S2, selecting a mode, wherein the modes comprise a frame-to-frame mode, a frame-to-field mode, a field-to-frame mode and a field-to-field mode;
S3, performing down-conversion according to the mode selected in step S2.
As shown in FIG. 2, the frame-to-frame mode down-conversion includes the sub-steps of:
S3011, inputting a CurFrame;
S3012, performing down-conversion on the CurFrame through an interpolation algorithm to obtain a down-converted frame ScaleFrame1;
S3013, vertically filtering the ScaleFrame1 to obtain a vertically filtered frame FilterFrame1.
The frame-to-field mode down-conversion includes calculating the top-field frame and calculating the bottom-field frame; wherein calculating the top-field frame comprises the following sub-steps:
S30211, inputting the CurFrame, wherein the frame TempFrame1 actually used for down-conversion is equal to the CurFrame;
S30212, down-converting the TempFrame1 through an interpolation algorithm to obtain a down-converted frame ScaleFrame2;
S30213, vertically filtering the ScaleFrame2 to obtain a vertically filtered frame FilterFrame2, wherein the FilterFrame2 is the top field;
calculating the bottom-field frame comprises the following sub-steps:
S30221, inputting the CurFrame and the NextFrame, and calculating the linear interpolation of the two input frames to obtain the frame TempFrame2 actually used for down-conversion;
S30222, down-converting the TempFrame2 through an interpolation algorithm to obtain a down-converted frame ScaleFrame3;
S30223, vertically filtering the ScaleFrame3 to obtain a vertically filtered frame FilterFrame3, wherein the FilterFrame3 is the bottom field.
The field-to-frame mode down-conversion comprises the sub-steps of:
S3031, inputting the CurFrame, the PreFrame and the NextFrame, and de-interlacing the CurFrame through a de-interlacing algorithm to obtain a de-interlaced progressive frame ProgressiveFrame1;
S3032, performing down-conversion on the de-interlaced frame ProgressiveFrame1 through an interpolation algorithm to obtain a ScaleFrame4;
S3033, vertically filtering the ScaleFrame4 to obtain a vertically filtered frame FilterFrame4.
The field-to-field mode down-conversion comprises the sub-steps of:
S3041, inputting the CurFrame, the PreFrame and the NextFrame, and de-interlacing the CurFrame through a de-interlacing algorithm, correcting the bottom field when calculating the top field and correcting the top field when calculating the bottom field, to obtain a de-interlaced progressive frame ProgressiveFrame2;
S3042, performing down-conversion on the de-interlaced frame ProgressiveFrame2 through an interpolation algorithm to obtain a ScaleFrame5;
S3043, vertically filtering the ScaleFrame5 to obtain a FilterFrame5; when the top field is calculated, the result is the top field of the FilterFrame5, and when the bottom field is calculated, the result is the bottom field of the FilterFrame5.
The foregoing describes preferred embodiments of the invention. It is to be understood that the invention is not limited to the precise forms disclosed herein, and that various other combinations, modifications and environments falling within the scope of the inventive concept, whether described above or apparent to those skilled in the relevant art, may be resorted to. Modifications and variations may be effected by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (5)

1. An ultra-high-definition down-conversion rendering method, characterized by comprising the following steps:
S1, collecting a current frame CurFrame, the previous frame PreFrame of the current frame and the next frame NextFrame of the current frame in a source video sequence;
S2, selecting a mode, wherein the modes comprise a frame-to-frame mode, a frame-to-field mode, a field-to-frame mode and a field-to-field mode;
S3, performing down-conversion according to the mode selected in step S2.
2. The ultra-high-definition down-conversion rendering method of claim 1, wherein the frame-to-frame mode down-conversion comprises the sub-steps of:
S3011, inputting a CurFrame;
S3012, performing down-conversion on the CurFrame through an interpolation algorithm to obtain a down-converted frame ScaleFrame1;
S3013, vertically filtering the ScaleFrame1 to obtain a vertically filtered frame FilterFrame1.
3. The ultra-high-definition down-conversion rendering method of claim 1, wherein the frame-to-field mode down-conversion comprises calculating the top-field frame and calculating the bottom-field frame; wherein calculating the top-field frame comprises the following sub-steps:
S30211, inputting the CurFrame, wherein the frame TempFrame1 actually used for down-conversion is equal to the CurFrame;
S30212, down-converting the TempFrame1 through an interpolation algorithm to obtain a down-converted frame ScaleFrame2;
S30213, vertically filtering the ScaleFrame2 to obtain a vertically filtered frame FilterFrame2, wherein the FilterFrame2 is the top field;
calculating the bottom-field frame comprises the following sub-steps:
S30221, inputting the CurFrame and the NextFrame, and calculating the linear interpolation of the two input frames to obtain the frame TempFrame2 actually used for down-conversion;
S30222, down-converting the TempFrame2 through an interpolation algorithm to obtain a down-converted frame ScaleFrame3;
S30223, vertically filtering the ScaleFrame3 to obtain a vertically filtered frame FilterFrame3, wherein the FilterFrame3 is the bottom field.
4. The ultra-high-definition down-conversion rendering method according to claim 1, wherein the field-to-frame mode down-conversion includes the sub-steps of:
S3031, inputting the CurFrame, the PreFrame and the NextFrame, and de-interlacing the CurFrame through a de-interlacing algorithm to obtain a de-interlaced progressive frame ProgressiveFrame1;
S3032, performing down-conversion on the de-interlaced frame ProgressiveFrame1 through an interpolation algorithm to obtain a ScaleFrame4;
S3033, vertically filtering the ScaleFrame4 to obtain a vertically filtered frame FilterFrame4.
5. The ultra-high-definition down-conversion rendering method of claim 1, wherein the field-to-field mode down-conversion comprises the sub-steps of:
S3041, inputting the CurFrame, the PreFrame and the NextFrame, and de-interlacing the CurFrame through a de-interlacing algorithm, correcting the bottom field when calculating the top field and correcting the top field when calculating the bottom field, to obtain a de-interlaced progressive frame ProgressiveFrame2;
S3042, performing down-conversion on the de-interlaced frame ProgressiveFrame2 through an interpolation algorithm to obtain a ScaleFrame5;
S3043, vertically filtering the ScaleFrame5 to obtain a FilterFrame5; when the top field is calculated, the result is the top field of the FilterFrame5, and when the bottom field is calculated, the result is the bottom field of the FilterFrame5.
CN202010564525.3A 2020-06-19 2020-06-19 Ultrahigh-definition down-conversion rendering method Active CN111885335B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010564525.3A CN111885335B (en) 2020-06-19 2020-06-19 Ultrahigh-definition down-conversion rendering method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010564525.3A CN111885335B (en) 2020-06-19 2020-06-19 Ultrahigh-definition down-conversion rendering method

Publications (2)

Publication Number Publication Date
CN111885335A (en) 2020-11-03
CN111885335B (en) 2022-03-29

Family

ID=73158299

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010564525.3A Active CN111885335B (en) 2020-06-19 2020-06-19 Ultrahigh-definition down-conversion rendering method

Country Status (1)

Country Link
CN (1) CN111885335B (en)

Patent Citations (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1221259A (en) * 1997-12-02 1999-06-30 大宇电子株式会社 Method and apparatus for encoding mode signals for use in binary shape coder
CN107071404A (en) * 2001-11-21 2017-08-18 谷歌技术控股有限责任公司 The method and apparatus encoded to the image sequence with multiple images
US20060146188A1 (en) * 2002-04-15 2006-07-06 Microsoft Corporation Methods and Apparatuses for Facilitating Processing of Interlaced Video Images for Progressive Video Displays
CN1738431A (en) * 2005-09-08 2006-02-22 上海广电(集团)有限公司中央研究院 Frame field self-adaptive detection method
CN101536500A (en) * 2006-11-08 2009-09-16 马维尔国际贸易有限公司 Advanced deinterlacer for high-definition and standard-definition video
CN101662681A (en) * 2008-04-11 2010-03-03 特克特朗尼克国际销售有限责任公司 A method of determining field dominance in a sequence of video frames
US20100157146A1 (en) * 2008-12-23 2010-06-24 Samsung Electronics Co., Ltd. Apparatus and method for converting image in an image processing system
CN102131058A * 2011-04-12 2011-07-20 上海理滋芯片设计有限公司 Frame rate conversion processing module and method for high-definition digital video frames
WO2013163751A1 (en) * 2012-04-30 2013-11-07 Mcmaster University De-interlacing and frame rate upconversion for high definition video
CN103763501A (en) * 2014-01-14 2014-04-30 合一网络技术(北京)有限公司 Self-adaptive video de-interlacing algorithm and device thereof
CN105657317A (en) * 2014-11-14 2016-06-08 澜起科技(上海)有限公司 Interlaced video motion detection method and system in video de-interlacing
CN105208313A (en) * 2015-09-25 2015-12-30 上海兆芯集成电路有限公司 Display device and video signal processing method
US20170150095A1 (en) * 2015-11-25 2017-05-25 Samsung Electronics Co., Ltd. Apparatus and method for frame rate conversion
CN108134938A * 2016-12-01 2018-06-08 中兴通讯股份有限公司 Video scanning mode detection and correction method, and video playback method and device
CN108495073A * 2018-03-29 2018-09-04 福州瑞芯微电子股份有限公司 Image frame/field detection method, storage medium and computer
CN109327734A * 2018-11-27 2019-02-12 成都索贝数码科技股份有限公司 Method for down-converting HDR video to SDR video based on dynamic light metering
CN109688360A * 2018-12-07 2019-04-26 成都东方盛行电子有限责任公司 Interlaced video scaling and sampling method
CN110166798A * 2019-05-31 2019-08-23 成都东方盛行电子有限责任公司 Down-conversion method and device based on 4K HDR editing
CN110248132A * 2019-05-31 2019-09-17 成都东方盛行电子有限责任公司 Video frame rate interpolation method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
王国忠 (Wang Guozhong): "高清与标清视频之间的上下变换" [Up- and down-conversion between high-definition and standard-definition video], 《电视技术》 [Video Engineering] *

Also Published As

Publication number Publication date
CN111885335B (en) 2022-03-29

Similar Documents

Publication Publication Date Title
US6473460B1 (en) Method and apparatus for calculating motion vectors
CN1229983C (en) Method for processing video signal and video processing unit
TW200534214A (en) Adaptive display controller
US5844617A (en) Method and apparatus for enhancing the vertical resolution of a television signal having degraded vertical chrominance transitions
JP2001320679A (en) Device and method for concealing interpolation artifact in video interlaced to progressive scan converter
US6208382B1 (en) Color video processing system and method
US7006147B2 (en) Method and system for MPEG chroma de-interlacing
JP2008236812A (en) Apparatus for performing format conversion
KR20050000956A (en) Apparatus for converting video format
US20070024747A1 (en) Video processing apparatus and method
JP4933209B2 (en) Video processing device
US20070040943A1 (en) Digital noise reduction apparatus and method and video signal processing apparatus
US20050018767A1 (en) Apparatus and method for detecting film mode
US6898243B1 (en) Apparatus and methods for down-conversion video de-interlacing
JP2002057993A (en) Interlace.progressive converter, interlace.progressive conversion method and recording medium
CN111885335B (en) Ultrahigh-definition down-conversion rendering method
US7129989B2 (en) Four-field motion adaptive de-interlacing
US20060033839A1 (en) De-interlacing method
EP1646236A2 (en) System and method for display of 50 Hz video at 60 Hz
JP4801678B2 (en) Color difference signal IP conversion method
US8243814B2 (en) Combing artifacts detection apparatus and combing artifacts detection method
WO2010092631A1 (en) Video processing device
US7423692B2 (en) De-interlace method and method for generating de-interlace algorithm
Zhang et al. An efficient motion adaptive deinterlacing algorithm using improved edge-based line average interpolation
Ouyang et al. Advanced motion search and adaptation techniques for deinterlacing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant