CN113261276A - Deinterlacing interpolation method, device and system, video processing method and storage medium - Google Patents


Info

Publication number: CN113261276A (application CN201980082524.0A)
Authority: CN (China)
Prior art keywords: interpolation, edge, point, weight, interpolated
Legal status: Granted; Active
Other languages: Chinese (zh)
Other versions: CN113261276B (en)
Inventors: 王伙荣, 肖佳, 张强强
Current and original assignee: Xian Novastar Electronic Technology Co Ltd
Application filed by Xian Novastar Electronic Technology Co Ltd; publication of CN113261276A; application granted; publication of CN113261276B

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/01Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level


Abstract

The embodiment of the invention discloses a de-interlacing interpolation method, a de-interlacing interpolation device, a de-interlacing interpolation system, a video processing method and a storage medium. The de-interlacing interpolation method comprises the following steps: carrying out motion detection based on interlaced video data to obtain a motion weight of a point to be interpolated; performing static interpolation to obtain a static interpolation result of the point to be interpolated; carrying out dynamic interpolation to obtain a dynamic interpolation result of the point to be interpolated; and superposing the static interpolation result and the dynamic interpolation result based on the motion weight to obtain the pixel value of the point to be interpolated. The motion weight of the point to be interpolated is obtained through motion detection, and the static interpolation result and the dynamic interpolation result of the point to be interpolated are superposed using that weight to finally obtain the pixel value of the point to be interpolated, so that the de-interlacing processing effect is improved and interlaced video signals are effectively de-jittered even under severe jitter.

Description

Deinterlacing interpolation method, device and system, video processing method and storage medium
Technical Field
The present invention relates to the field of image interpolation and video processing technologies, and in particular, to a de-interlacing interpolation method, a de-interlacing interpolation apparatus, a de-interlacing interpolation system, a video processing method, and a storage medium.
Background
Traditional analog television systems generally adopt interlaced scanning to reduce bandwidth. With the development of high-definition digital television, the artifacts caused by interlaced scanning, such as line crawling, picture flickering, and edge blurring and jagging when images move rapidly, have become more and more prominent. Nevertheless, for economic reasons the traditional analog television system will continue to exist for some time, so de-interlacing has become a key part of video processing systems.
Common de-interlacing algorithms include field combination (Weaving), field discarding, line doubling (Line Doubling), intra-field interpolation (Intra-Field Interpolation), field blending (Field Blending) and motion adaptive de-interlacing (Motion Adaptive De-interlacing). However, these related methods still suffer from aliasing after low-angle edge processing and from unsmooth playback of the processed video.
Disclosure of Invention
Embodiments of the present application provide a de-interlacing interpolation method, a de-interlacing interpolation apparatus, a de-interlacing interpolation system, a video processing method, and a storage medium, so as to achieve an enhancement of a de-interlacing processing effect.
In one aspect, a de-interlacing interpolation method provided in an embodiment of the present application includes: carrying out motion detection based on interlaced video data to obtain a motion weight of a point to be interpolated; performing static interpolation to obtain a static interpolation result of the point to be interpolated; carrying out dynamic interpolation to obtain a dynamic interpolation result of the point to be interpolated; and superposing the static interpolation result and the dynamic interpolation result based on the motion weight to obtain the pixel value of the point to be interpolated.
In an embodiment of the present application, the performing motion detection based on interlaced video data to obtain a motion weight of a point to be interpolated includes: forming a first filtering template from pixel point data of the first two fields, and forming a second filtering template from pixel point data of the current field and the next field; subtracting the second filtering template from the first filtering template and taking the absolute value to obtain a motion weight intermediate variable of the point to be interpolated; and obtaining the motion weight of the point to be interpolated based on the motion weight intermediate variable and the motion weight, computed in the previous field, of the point at the same pixel position as the point to be interpolated.
In an embodiment of the present application, the obtaining a static interpolation result of the point to be interpolated by performing static interpolation includes: and interpolating by using an interleaving method to obtain the static interpolation result.
In an embodiment of the present application, the performing dynamic interpolation to obtain a dynamic interpolation result of the point to be interpolated includes: carrying out edge detection to obtain a plurality of groups of edge detection values; calculating to obtain a weight for edge interpolation in the designated direction and a weight set for local weighted edge interpolation based on the plurality of groups of edge detection values; based on the weight value for the edge interpolation in the appointed direction, carrying out the edge interpolation in the appointed direction to obtain an edge interpolation result in the appointed direction; based on the weight set for local weighted edge interpolation and the multiple groups of edge detection values, local weighted edge interpolation is carried out to obtain a local weighted edge interpolation result; and summing the edge interpolation result in the appointed direction and the local weighted edge interpolation result to average to obtain the dynamic interpolation result.
In an embodiment of the present application, the performing edge detection to obtain a plurality of sets of edge detection values includes: constructing a filtering template taking the point to be interpolated as a template center; and filtering by a plurality of edge detection operators in the constructed filtering template to obtain the plurality of groups of edge detection values, wherein the filtering templates adopted by the filtering by the plurality of edge detection operators are different.
In an embodiment of the application, the calculating, based on the plurality of sets of edge detection values, a weight for edge interpolation in the specified direction and a weight set for local weighted edge interpolation includes: calculating the sum of the transverse edge detection value and the longitudinal edge detection value of each group of edge detection values in the multiple groups of edge detection values to obtain the weight set for local weighted edge interpolation; and calculating the sum of all transverse edge detection values and all longitudinal edge detection values in the multiple groups of edge detection values to obtain the weight for edge interpolation in the specified direction.
In an embodiment of the application, the performing, based on the weight for edge interpolation in the designated direction, edge interpolation in the designated direction to obtain an edge interpolation result in the designated direction includes: and summing the pixel values of the six neighborhood pixel points of the point to be interpolated, and multiplying the sum by the weight value for the edge interpolation in the specified direction to obtain an edge interpolation result in the specified direction, wherein the edge interpolation in the specified direction is vertical and diagonal interpolation.
In an embodiment of the present application, the performing local weighted edge interpolation based on the weight set for local weighted edge interpolation and the plurality of sets of edge detection values to obtain a local weighted edge interpolation result includes: determining pixel point pairs for interpolation based on the ratio and the positive and negative of the transverse edge detection value and the longitudinal edge detection value of each group of edge detection values in the multiple groups of edge detection values; obtaining a local weighting calculation result corresponding to a group of edge detection values based on the pixel value of the interpolation pixel point pair and a corresponding local weighting edge weight in the local weighting edge interpolation weight set; and summing a plurality of local weighting calculation results respectively corresponding to the plurality of groups of edge detection values to obtain a local weighting edge interpolation result.
In an embodiment of the application, the obtaining of the pixel value of the point to be interpolated by superimposing the static interpolation result and the dynamic interpolation result based on the motion weight includes: calculating the pixel value of the point to be interpolated by adopting the following formula:
F(i,j)=Static_value*(1-Motion_weight_value_current_field)+Motion_value*Motion_weight_value_current_field
wherein F(i, j) represents the point to be interpolated, Motion_weight_value_current_field represents the motion weight of the point to be interpolated, Static_value represents the static interpolation result, and Motion_value represents the dynamic interpolation result.
On the other hand, an embodiment of the present application provides a de-interlacing interpolation apparatus, which can perform any one of the de-interlacing interpolation methods described above, and includes: the motion detection module is used for carrying out motion detection based on interlaced video data to obtain a motion weight of a point to be interpolated; the static interpolation module is used for carrying out static interpolation to obtain a static interpolation result of the point to be interpolated; the dynamic interpolation module is used for carrying out dynamic interpolation to obtain a dynamic interpolation result of the point to be interpolated; and the interpolation result superposition module is used for superposing the static interpolation result and the dynamic interpolation result based on the motion weight to obtain the pixel value of the point to be interpolated.
In one embodiment of the present application, the motion detection module comprises: the filtering template forming unit is used for forming a first filtering template by using the pixel point data of the first two fields and forming a second filtering template by using the pixel point data of the current field and the pixel point data of the next field; the intermediate variable calculation unit is used for carrying out difference on the first filtering template and the second filtering template and taking an absolute value to obtain a motion weight intermediate variable of the point to be interpolated; and the motion weight calculation unit is used for obtaining the motion weight of the point to be interpolated based on the motion weight of the point to be interpolated in the previous field which is positioned at the same pixel point position as the point to be interpolated in the current field and the motion weight intermediate variable.
In one embodiment of the present application, the dynamic interpolation module includes: the edge detection unit is used for carrying out edge detection to obtain a plurality of groups of edge detection values; a weight calculation unit, configured to calculate, based on the multiple groups of edge detection values, a weight for edge interpolation in the specified direction and a weight set for local weighted edge interpolation; the designated direction edge interpolation unit is used for carrying out designated direction edge interpolation based on the weight for the designated direction edge interpolation to obtain a designated direction edge interpolation result; a local weighted edge interpolation unit, configured to perform local weighted edge interpolation based on the weight set for local weighted edge interpolation and the plurality of sets of edge detection values to obtain a local weighted edge interpolation result; and the result averaging unit is used for summing and averaging the edge interpolation result in the designated direction and the local weighted edge interpolation result to obtain the dynamic interpolation result.
In another aspect, an embodiment of the present application provides a de-interlacing interpolation system, including: a processor and a memory; wherein the memory stores instructions for execution by the processor and the processor is configured to execute the instructions to implement any of the de-interlacing interpolation methods described above.
In another aspect, an embodiment of the present application provides a storage medium, where the storage medium is a non-volatile memory and stores program code, and when the program code is executed by a computer, the method implements any one of the foregoing de-interlacing interpolation methods.
In addition, a video processing method provided by the embodiment of the present application includes: receiving an interlaced video signal; based on any de-interlacing interpolation method, de-interlacing processing is carried out on the interlaced video signal to obtain a de-interlaced video signal; and outputting the de-interlaced video signal.
According to the embodiments of the application, the motion weight of the point to be interpolated is obtained through motion detection, static interpolation and dynamic interpolation are combined, and the motion weight obtained through motion detection is used to superpose the static interpolation result and the dynamic interpolation result to finally obtain the pixel value of the point to be interpolated. When applied to de-interlacing of interlaced video signals, one or more of the following technical effects can be achieved: (i) severe shaking of interlaced video signals can be effectively removed, the contrast of image edges is increased, and images are displayed clearly; (ii) the motion trail of a moving object can be judged correctly, so that the probability of interpolating in a wrong direction is reduced to a minimum; (iii) interlaced video with high noise can be processed; and (iv) the speed of object motion can be detected, so that the video plays smoothly.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings without creative efforts.
Fig. 1A is a flowchart of a de-interlacing interpolation method according to a first embodiment of the present application.
Fig. 1B is a flowchart illustrating sub-steps of step S15 in fig. 1A.
Fig. 2 is a schematic diagram of a filtering template for calculating motion weight intermediate variables used in step S11 in fig. 1A.
Fig. 3 is a schematic diagram of the interleaving interpolation used in step S13 in fig. 1A.
Fig. 4 is a schematic diagram of a filtering template used in sub-step S151 in fig. 1B.
Fig. 5 is a schematic diagram of an interpolation template used in sub-step S155 in fig. 1B.
Fig. 6 is a schematic diagram of an interpolation template used in sub-step S157 in fig. 1B.
Fig. 7 is a block diagram of a de-interlacing interpolation apparatus according to a second embodiment of the present application.
Fig. 8A is a schematic diagram of the elements of the motion detection module in fig. 7.
Fig. 8B is a schematic diagram of the units of the dynamic interpolation module in fig. 7.
Fig. 8C is a schematic diagram of the sub-units of the edge detection unit in fig. 8B.
Fig. 8D is a schematic diagram of the sub-units of the weight calculation unit in fig. 8B.
Fig. 8E is a diagram of a sub-unit of the local weighted edge interpolation unit 757 in fig. 8B.
Fig. 9 is a schematic structural diagram of a de-interlacing interpolation system according to a third embodiment of the present application.
Fig. 10 is a schematic diagram of a storage medium according to a fourth embodiment of the present application.
Fig. 11 is a flowchart of a video processing method according to a fifth embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
[First embodiment]
As shown in fig. 1A and fig. 1B, a de-interlacing interpolation method provided in the first embodiment of the present application includes the following steps:
S11, motion detection step
In the motion detection (Motion detection) of this embodiment, the motion condition of the interpolation point in the current field is estimated using the image data of four fields: the first two fields, the current field and the next field. Specifically, a 3 x 3 filtering template is formed from pixel data of the first two fields, and another 3 x 3 filtering template is formed from pixel data of the current field and the next field; the pixel data in the two 3 x 3 templates are subtracted and the absolute value is taken; the result is then averaged with the motion weight, calculated in the previous field, of the point at the same pixel position as the point to be interpolated of the current field, giving the motion weight of the point to be interpolated of the current field.
More specifically, as shown in fig. 2, i represents the pixel row number, j the pixel column number, t the current field, (t-2) and (t-1) the first two fields, and (t+1) the next field; the motion weight intermediate variable Motion_weight_value of the point F(i, j) to be interpolated satisfies the following formula:
Motion_weight_value = abs(Template_1 - Template_2)

where Template_1 is the 3 x 3 filtering template formed from fields (t-2) and (t-1), Template_2 is the 3 x 3 filtering template formed from fields t and (t+1), and the subtraction and absolute value are taken element by element.
Accordingly, the motion weight Motion_weight_value_current_field of the current-field point F(i, j) to be interpolated satisfies the following formula:
Motion_weight_value_current_field=(SUM(Motion_weight_value)+Motion_weight_value_front_field)/2
where abs denotes the absolute-value function, SUM denotes summation over the elements of Motion_weight_value, and Motion_weight_value_front_field denotes the motion weight, calculated in the previous field, of the point at the same pixel position as the point F(i, j) to be interpolated in the current field.
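As an illustration only, a minimal Python sketch of this motion-detection computation is given below; it follows the two formulas above, and any scaling of the resulting weight to the [0, 1] range needed by the superposition in step S17 is an assumption the text leaves open.

```python
import numpy as np

def motion_weight(template_prev, template_cur, prev_field_weight):
    """Step S11: motion weight of the current-field point to be interpolated.

    template_prev     -- 3x3 block built from fields (t-2) and (t-1)
    template_cur      -- 3x3 block built from fields t and (t+1)
    prev_field_weight -- motion weight computed in the previous field for
                         the same pixel position
    """
    # Motion_weight_value: element-wise absolute difference of the templates
    mw_intermediate = np.abs(template_prev.astype(np.int32)
                             - template_cur.astype(np.int32))
    # Motion_weight_value_current_field =
    #     (SUM(Motion_weight_value) + Motion_weight_value_front_field) / 2
    return (int(mw_intermediate.sum()) + prev_field_weight) / 2.0
```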
S13, static interpolation step
The static interpolation of this embodiment is, for example, interleaving (Weaving) interpolation. Interleaving interpolation copies and fills the pixel lines of the image data of the preceding and following odd and even fields; that is, the odd and even fields of the same frame are directly combined into one progressive frame. For a static image, the inter-field copying of interleaving interpolation can completely restore an effective image and avoid interpolating in a wrong direction; moreover, interleaving interpolation handles pseudo-interlaced signals well, and if the cadence detection (Cadence detection) is accurate, a high-definition image can be restored.
More specifically, fig. 3 shows a schematic diagram of the principle of interleaving interpolation, and the static interpolation result Static_value of the point F(i, j) to be interpolated in this embodiment can be obtained through interleaving interpolation.
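By way of illustration, a minimal sketch of interleaving interpolation is shown below, assuming two complementary field buffers of equal size; the Static_value of F(i, j) is then simply the pixel copied from the complementary field.

```python
import numpy as np

def weave(field_top, field_bottom):
    """Weaving (static) interpolation: merge the odd and even fields of
    one frame into a progressive frame by copying lines."""
    h, w = field_top.shape
    frame = np.empty((2 * h, w), dtype=field_top.dtype)
    frame[0::2] = field_top     # top field fills the even frame lines
    frame[1::2] = field_bottom  # bottom field fills the odd frame lines
    return frame
```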
S15, the dynamic interpolation step, for example, includes the following substeps S151-S159:
S151, edge detection sub-step
In the object edge detection of this embodiment, Sobel operator filtering is, for example, used within the field, which can determine the edge direction and angle simultaneously; seven Sobel operators are used for filtering. The filtering template is, for example, the 5 x 5 template shown in fig. 4 (with the point F(i, j, t) to be interpolated as the template center), where F(i-1, j-1, t), F(i-1, j, t), F(i-1, j+1, t), F(i+1, j-1, t), F(i+1, j, t), F(i+1, j+1, t) and F(i, j, t) are the centers of the respective 3 x 3 Sobel operator templates.
In fig. 4, the Sobel operator filtering of F(i, j, t) differs from the other six Sobel operator filters, specifically as follows:
The transverse filtering template of the Sobel operator adopted by F(i, j, t) is

    [  1   2   1 ]
    [  0   0   0 ]
    [ -1  -2  -1 ]

and the longitudinal filtering template is

    [  2   0  -2 ]
    [  0   0   0 ]
    [  2   0  -2 ]

where the middle row is zero because line i is the missing line of the current field.
Accordingly, the output transverse edge detection value is:
sobel_x(i, j, t) = (F(i-1, j-1, t) + 2F(i-1, j, t) + F(i-1, j+1, t)) - (F(i+1, j-1, t) + 2F(i+1, j, t) + F(i+1, j+1, t)), and the output longitudinal edge detection value is:
sobel_y(i, j, t) = 2(F(i-1, j-1, t) + F(i+1, j-1, t)) - 2(F(i-1, j+1, t) + F(i+1, j+1, t));
As for the other six Sobel operator filters, the adopted transverse filtering template of the Sobel operator is

    [  1   2   1 ]
    [  0   0   0 ]
    [ -1  -2  -1 ]

and the longitudinal filtering template is

    [  1   0  -1 ]
    [  2   0  -2 ]
    [  1   0  -1 ]

with each 3 x 3 template applied over existing lines of the field, i.e. with a vertical spacing of two frame lines between template rows.
Correspondingly, six transverse edge detection values sobel_x(i-1, j-1, t), sobel_x(i-1, j, t), sobel_x(i-1, j+1, t), sobel_x(i+1, j-1, t), sobel_x(i+1, j, t) and sobel_x(i+1, j+1, t) and six longitudinal edge detection values sobel_y(i-1, j-1, t), sobel_y(i-1, j, t), sobel_y(i-1, j+1, t), sobel_y(i+1, j-1, t), sobel_y(i+1, j, t) and sobel_y(i+1, j+1, t) are obtained. Taking F(i-1, j-1, t) as an example, the corresponding transverse edge detection value is:
sobel_x(i-1, j-1, t) = (F(i-3, j-2, t) + 2F(i-3, j-1, t) + F(i-3, j, t)) - (F(i+1, j-2, t) + 2F(i+1, j-1, t) + F(i+1, j, t)), and its corresponding longitudinal edge detection value is:
sobel_y(i-1, j-1, t) = (F(i-3, j-2, t) + 2F(i-1, j-2, t) + F(i+1, j-2, t)) - (F(i-3, j, t) + 2F(i-1, j, t) + F(i+1, j, t)).
In summary, the edge detection sub-step S151 of this embodiment yields seven groups of edge detection values, each consisting of a transverse value and a longitudinal value: [sobel_x(i-1, j-1, t), sobel_y(i-1, j-1, t)], [sobel_x(i-1, j, t), sobel_y(i-1, j, t)], [sobel_x(i-1, j+1, t), sobel_y(i-1, j+1, t)], [sobel_x(i+1, j-1, t), sobel_y(i+1, j-1, t)], [sobel_x(i+1, j, t), sobel_y(i+1, j, t)], [sobel_x(i+1, j+1, t), sobel_y(i+1, j+1, t)] and [sobel_x(i, j, t), sobel_y(i, j, t)]. The ratio of the absolute value of sobel_x to the absolute value of sobel_y indicates the direction of the current interpolation edge: the larger the ratio, the closer the edge is to the transverse direction, and the smaller the ratio, the closer it is to the longitudinal direction; the signs of sobel_x and sobel_y determine the quadrant in which the current interpolation edge lies. Furthermore, in the edge detection sub-step S151 the filtering template used by the Sobel operator of F(i, j, t) differs from the templates used by the other six Sobel operator filters; that is, sub-step S151 filters with several different edge detection templates. Taking Sobel filtering as an example, one edge detection template consists of a transverse filtering template and a longitudinal filtering template, and two edge detection templates may differ, for instance, only in their longitudinal templates while sharing the same transverse template. In addition, it can be understood that other edge detection operators may be adopted in place of the Sobel operator.
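The following Python sketch reproduces the seven operators above; it assumes F is a frame-sized array in which only the current field's lines are valid and line i is the missing line, and it leaves image borders unguarded.

```python
import numpy as np

W = np.array([1, 2, 1])  # the common [1 2 1] smoothing weights

def sobel_off_centre(F, r, c):
    """One of the six off-centre operators; its 3x3 kernel spans the
    existing field lines r-2, r and r+2."""
    top = F[r - 2, c - 1:c + 2].astype(np.int64)
    mid = F[r,     c - 1:c + 2].astype(np.int64)
    bot = F[r + 2, c - 1:c + 2].astype(np.int64)
    sx = int(W @ top) - int(W @ bot)                        # transverse value
    sy = int(W @ np.array([top[0], mid[0], bot[0]])) - \
         int(W @ np.array([top[2], mid[2], bot[2]]))        # longitudinal value
    return sx, sy

def sobel_centre(F, i, j):
    """The operator centred on the missing line i; the middle-row taps
    are zero because that line does not exist in the field."""
    top = F[i - 1, j - 1:j + 2].astype(np.int64)
    bot = F[i + 1, j - 1:j + 2].astype(np.int64)
    sx = int(W @ top) - int(W @ bot)
    sy = 2 * (int(top[0]) + int(bot[0])) - 2 * (int(top[2]) + int(bot[2]))
    return sx, sy

def seven_edge_values(F, i, j):
    """Sub-step S151: the seven (sobel_x, sobel_y) groups around F(i, j)."""
    centres = [(i - 1, j - 1), (i - 1, j), (i - 1, j + 1),
               (i + 1, j - 1), (i + 1, j), (i + 1, j + 1)]
    values = [sobel_off_centre(F, r, c) for r, c in centres]
    values.append(sobel_centre(F, i, j))
    return values
```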
S153, weight calculation substep
Specifically, each of the seven Sobel operator filters executed in the edge detection sub-step S151 outputs a group of edge detection values, namely a transverse edge detection value sobel_x and a longitudinal edge detection value sobel_y, from which the weights required for edge interpolation can be calculated. Specifically, the high bits of the sum of sobel_x and sobel_y in each group may be taken as a weight, where taking the high bits means keeping a certain number of bits of the sum above the binary point; the specific number of bits may be determined according to actual needs.
For example, for the seven groups of edge detection values obtained in sub-step S151, the high bits of sobel_x + sobel_y are taken within each group, and the seven resulting values form the weight set for local weighted edge interpolation; in addition, the high bits of the sum of all sobel_x and sobel_y values across the seven groups are taken as another weight, which serves as the weight for edge interpolation in the designated direction.
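A small sketch of this weight calculation follows; the bit width kept after "taking the high bits" is not specified in the text, so the shift amount here is a placeholder.

```python
def interpolation_weights(edge_values, shift=4):
    """Sub-step S153: weights from the seven (sobel_x, sobel_y) groups.

    'Taking the high bits' is modelled as an arithmetic right shift by
    `shift` bits (a placeholder value).
    """
    # One weight per group: high bits of sobel_x + sobel_y
    local_weights = [(sx + sy) >> shift for sx, sy in edge_values]
    # One overall weight: high bits of the sum over all seven groups
    direction_weight = sum(sx + sy for sx, sy in edge_values) >> shift
    return local_weights, direction_weight
```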
S155, designated-direction edge interpolation sub-step
Referring to fig. 5, it shows the interpolation template used in the designated-direction edge interpolation sub-step S155 of this embodiment. In this embodiment, the designated-direction edge interpolation is vertical and diagonal interpolation; this is done to increase the weight of the vertical and diagonal directions and keep more information in those directions, because the point to be interpolated is most strongly related to the pixel values of its eight surrounding neighbours.
Specifically, taking fig. 5 as an example, the designated-direction edge interpolation result is calculated as follows: the pixel values F(i-1, j-1, t), F(i-1, j, t), F(i-1, j+1, t), F(i+1, j-1, t), F(i+1, j, t) and F(i+1, j+1, t) of the six neighbourhood pixel points of the point F(i, j) to be interpolated are summed, and the sum is multiplied by the weight for edge interpolation in the designated direction obtained in sub-step S153, thereby obtaining the designated-direction edge interpolation result Result1, i.e. the vertical and diagonal interpolation result of this embodiment.
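A sketch of sub-step S155 under these definitions is shown below; any normalisation (e.g. a factor on the order of 1/6 to keep Result1 in pixel range) is assumed to be folded into the designated-direction weight, as the text does not spell it out.

```python
def designated_direction_interp(F, i, j, direction_weight):
    """Sub-step S155: sum the six vertical/diagonal neighbours of the
    point F(i, j) and scale by the designated-direction weight."""
    neighbours = (int(F[i - 1, j - 1]) + int(F[i - 1, j]) + int(F[i - 1, j + 1]) +
                  int(F[i + 1, j - 1]) + int(F[i + 1, j]) + int(F[i + 1, j + 1]))
    return neighbours * direction_weight  # Result1
```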
S157, local weighted edge interpolation sub-step
Specifically, in this embodiment, the local weighted edge interpolation is performed by using the weight set for local weighted edge interpolation obtained in the weight calculation substep S153 and a plurality of sets of edge detection values, for example, seven sets of edge detection values, obtained in the edge detection substep S151, and combining a preset-size interpolation template.
Taking fig. 6 as an example, the interpolation template has a size of 18 x 2, where 0-17 number the pixels of the lines above and below the point F(i, j) to be interpolated, centered on that point; the pixel point pairs used for interpolation can be determined from the ratio and the signs of sobel_x and sobel_y in each of the seven groups of edge detection values.
For example, if the ratio of sobel_x to sobel_y of F(i, j, t) (i.e. the ratio of sobel_x(i, j, t) to sobel_y(i, j, t)) corresponds to 45 degrees, the edge direction is diagonal; the signs of sobel_x and sobel_y are exclusive-ORed, and if the result is positive the arrow points to the right (as shown in fig. 6), otherwise the result is negative and the arrow points to the left. Thus, the group of edge detection values of F(i, j, t) (i.e. sobel_x(i, j, t) and sobel_y(i, j, t)) ultimately contributes one local weighted calculation result: the corresponding weight in the weight set for local weighted edge interpolation is multiplied by the average value of F(9) and F(7) (i.e. multiplied by (F(9) + F(7))/2), where F(9) and F(7) are the pixel values, such as luminance values, of pixel point 9 and pixel point 7 in fig. 6.
Therefore, in the local weighted edge interpolation sub-step S157, seven local weighted calculation results are obtained using the weight set for local weighted edge interpolation and all seven groups of edge detection values, and the seven local weighted calculation results are then summed to give the local weighted edge interpolation result Result2.
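A sketch of sub-step S157 is given below. The text only illustrates the 45-degree case (pixels 9 and 7 of fig. 6), so the mapping from (ratio, signs) to a pixel pair here is a simplification: the fallback vertical pair and the mirrored pair for a left-leaning edge are placeholders.

```python
def pixel_pair_for_edge(sx, sy):
    """Pick the interpolation pixel pair (indices into the 18-pixel
    template of fig. 6) for one (sobel_x, sobel_y) group."""
    if sy != 0 and 0.75 < abs(sx / sy) < 1.25:    # ratio near 1: ~45-degree edge
        leans_right = (sx >= 0) == (sy >= 0)      # sign XOR picks the lean
        return (9, 7) if leans_right else (7, 9)  # left-lean pair is a placeholder
    return (4, 13)  # placeholder: vertical pair above/below the centre

def local_weighted_interp(template, edge_values, local_weights):
    """Sub-step S157: template is the 18-pixel array of fig. 6."""
    result2 = 0.0
    for (sx, sy), weight in zip(edge_values, local_weights):
        a, b = pixel_pair_for_edge(sx, sy)
        result2 += weight * (int(template[a]) + int(template[b])) / 2.0
    return result2  # Result2
```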
S159, result averaging substep
Specifically, the designated-direction edge interpolation result Result1 obtained in sub-step S155 and the local weighted edge interpolation result Result2 obtained in sub-step S157 are summed and averaged, giving the dynamic interpolation result Motion_value = (Result1 + Result2)/2.
S17, interpolation result superposition step
Specifically, the motion weight Motion_weight_value_current_field of the point F(i, j) to be interpolated obtained in the motion detection step S11 is used to superimpose the static interpolation result Static_value obtained in the static interpolation step S13 and the dynamic interpolation result Motion_value obtained in the dynamic interpolation step S15, so as to obtain the pixel value of the point F(i, j) to be interpolated, for example, using the following formula:
F(i,j)=Static_value*(1-Motion_weight_value_current_field)+Motion_value*Motion_weight_value_current_field
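A one-line sketch of this superposition; the clamp assumes the motion weight has been normalised to [0, 1], which the formula implies but the text does not state.

```python
def superpose(static_value, motion_value, motion_weight):
    """Step S17: blend the static and dynamic interpolation results."""
    w = min(max(motion_weight, 0.0), 1.0)  # assumed normalisation to [0, 1]
    return static_value * (1.0 - w) + motion_value * w  # pixel value of F(i, j)
```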
Verification shows that the de-interlacing interpolation method provided by this embodiment markedly improves low-angle interpolation, and performs better than the motion adaptive and edge adaptive de-interlacing interpolation algorithms in commercial use at present.
[Second embodiment]
As shown in fig. 7, a second embodiment of the present application provides a de-interlacing interpolation apparatus 70, including: a motion detection module 71, a static interpolation module 73, a dynamic interpolation module 75, and an interpolation result superposition module 77.
Specifically, the motion detection module 71 is configured to perform motion detection based on interlaced video data to obtain a motion weight of a point to be interpolated; the static interpolation module 73 is configured to perform static interpolation to obtain a static interpolation result of the point to be interpolated; the dynamic interpolation module 75 is configured to perform dynamic interpolation to obtain a dynamic interpolation result of the point to be interpolated; and an interpolation result superposition module 77, configured to superpose the static interpolation result and the dynamic interpolation result based on the motion weight to obtain a pixel value of the point to be interpolated.
For the detailed functional details of the motion detection module 71, the static interpolation module 73, the dynamic interpolation module 75, and the interpolation result superposition module 77, reference may be made to the related descriptions of steps S11, S13, S15, and S17 in the foregoing first embodiment, and no further description is provided here. Further, it is noted that the motion detection module 71, the static interpolation module 73, the dynamic interpolation module 75 and the interpolation result superposition module 77 may be software modules stored in a non-volatile memory and executed by one or more processors to perform the relevant operations to perform the steps S11, S13, S15 and S17 in the foregoing first embodiment.
Referring to fig. 8A, the motion detection module 71 may be further divided to include: a filtering template composition unit 711, an intermediate variable calculation unit 713, and a motion weight calculation unit 715. The filtering template forming unit 711 is configured to form a first filtering template by using the pixel point data of the first two fields, and form a second filtering template by using the pixel point data of the current field and the pixel point data of the next field, for example; the intermediate variable calculation unit 713 is configured to, for example, perform a difference between the first filtering template and the second filtering template and take an absolute value to obtain a motion weight intermediate variable of the point to be interpolated; and the motion weight calculation unit 715 is configured to obtain the motion weight of the to-be-interpolated point based on the motion weight of the to-be-interpolated point in the previous field at the same pixel point position as the to-be-interpolated point in the current field and the motion weight intermediate variable. For the details of the functions of the filtering template component unit 711, the intermediate variable calculating unit 713, and the motion weight calculating unit 715, reference may be made to the detailed description of step S11 in the first embodiment, which is not repeated herein. It should be noted that the filtering template component unit 711, the intermediate variable calculating unit 713 and the motion weight calculating unit 715 may be software modules, which are stored in a non-volatile memory and are executed by one or more processors to perform the relevant operations to perform the step S11 in the first embodiment.
Referring to fig. 8B, the dynamic interpolation module 75 may be further divided to include: an edge detection unit 751, a weight calculation unit 753, a designated direction edge interpolation unit 755, a local weighted edge interpolation unit 757, and a result averaging unit 759. The edge detection unit 751 is configured to perform edge detection to obtain a plurality of sets of edge detection values; the weight calculation unit 753 is configured to calculate, based on the plurality of sets of edge detection values, a weight for edge interpolation in the designated direction and a weight set for local weighted edge interpolation; the designated direction edge interpolation unit 755 is configured to perform designated direction edge interpolation, for example, based on the weight for designated direction edge interpolation, to obtain a designated direction edge interpolation result; the local weighted edge interpolation unit 757 is configured to, for example, perform local weighted edge interpolation based on the weight set for local weighted edge interpolation and the plurality of sets of edge detection values to obtain a local weighted edge interpolation result; and the result averaging unit 759 is, for example, configured to sum and average the designated direction edge interpolation result and the local weighted edge interpolation result to obtain the dynamic interpolation result. As for the specific functional details of the edge detection unit 751, the weight calculation unit 753, the designated direction edge interpolation unit 755, the local weighted edge interpolation unit 757, and the result averaging unit 759, reference may be made to the related descriptions of the sub-steps S151, S153, S155, S157, and S159 of step S15 in the foregoing first embodiment, and no further description is given here. Further, it is to be noted that these units may be software modules that are stored in a non-volatile memory and executed by one or more processors to perform the related operations of the sub-steps S151, S153, S155, S157, and S159 in step S15 of the foregoing first embodiment.
Referring to fig. 8C, the edge detection unit 751 may be further divided to include: a filtering template construction subunit 7511 and an edge detection operator filtering subunit 7513. The filtering template construction subunit 7511 is, for example, configured to construct a filtering template with the point to be interpolated as a template center; and an edge detection operator filtering subunit 7513, for example, configured to perform multiple edge detection operator filtering in the constructed filtering template to obtain the multiple sets of edge detection values, where the multiple edge detection operator filtering employs different filtering templates. As for the specific functional details of the filtering template construction subunit 7511 and the edge detection operator filtering subunit 7513, reference may be made to the related detailed description of the sub-step S151 in the foregoing first embodiment, and further description is omitted here. Furthermore, it is worth noting that the filtering template construction subunit 7511 and the edge detection operator filtering subunit 7513 may be software modules, stored in a non-volatile memory and executed by one or more processors to perform the relevant operations to perform the sub-step S151 in the aforementioned first embodiment.
Referring to fig. 8D, the weight calculation unit 753 may be further divided to include: a first calculation subunit 7531 and a second calculation subunit 7533. The first calculating subunit 7531 is configured to calculate a sum of a lateral edge detection value and a longitudinal edge detection value of each of the plurality of sets of edge detection values, to obtain the weight set for local weighted edge interpolation; and a second calculation subunit 7533, for example, configured to calculate a sum of all the lateral edge detection values and all the longitudinal edge detection values in the plurality of sets of edge detection values to obtain the weight for edge interpolation in the designated direction. For the specific functional details of the first calculating subunit 7531 and the second calculating subunit 7533, reference may be made to the detailed description of the sub-step S153 in the foregoing first embodiment, which is not repeated herein. Furthermore, it is to be noted that the first calculation subunit 7531 and the second calculation subunit 7533 can be software modules stored in a non-volatile memory and executed by one or more processors to perform the relevant operations to perform the sub-step S153 in the first embodiment.
Referring to fig. 8E, the local weighted edge interpolation unit 757 may be further divided to include: a determination subunit 7571, a weighted value calculator subunit 7573, and a summation subunit 7575. Wherein the determining subunit 7571 is configured to determine, for example, a pixel point pair for interpolation based on the magnitude and the positive or negative of the ratio of the lateral edge detection value and the longitudinal edge detection value of each of the plurality of sets of edge detection values; the weighted value calculating operator unit 7573 is configured to obtain a local weighted calculation result corresponding to a group of edge detection values, for example, based on the pixel value of the interpolation pixel point pair and a corresponding local weighted edge interpolation weight in the local weighted edge interpolation weight set; and a summation subunit 7575, for example, configured to sum a plurality of local weighting calculation results corresponding to the plurality of sets of edge detection values, respectively, to obtain the local weighted edge interpolation result. For the specific functional details of the determining subunit 7571, the weighted value calculating subunit 7573, and the summing subunit 7575, reference may be made to the detailed description related to the sub-step S157 in the foregoing first embodiment, and no further description is provided here. Furthermore, it is worth mentioning that the determining subunit 7571, the weighted value operator unit 7573, and the summing subunit 7575 may be software modules stored in a non-volatile memory and executed by one or more processors to perform the related operations to perform the sub-step S157 in the aforementioned first embodiment.
[Third embodiment]
As shown in fig. 9, a de-interlacing interpolation system 90 according to a third embodiment of the present application includes: a processor 91 and a memory 93; wherein the memory 93 stores instructions executed by the processor 91, causing the processor 91 to perform the de-interlacing interpolation method described in the foregoing first embodiment.
[Fourth embodiment]
As shown in fig. 10, a storage medium 100, which is a non-volatile memory and stores program code, when executed by a computer, implements the de-interlacing interpolation method according to the first embodiment.
[Fifth embodiment]
As shown in fig. 11, a video processing method according to a fifth embodiment of the present application includes:
step S011: receiving an interlaced video signal;
step S013: based on a de-interlacing interpolation method, de-interlacing processing is carried out on the interlaced video signal to obtain a de-interlaced video signal; and
step S015: outputting the de-interlaced video signal.
Specifically, in this embodiment, the de-interlacing interpolation method adopted in step S013 is, for example, the de-interlacing interpolation method described in the foregoing first embodiment, and specific details thereof may refer to relevant descriptions of each step in the foregoing first embodiment, which are not described herein again; furthermore, the de-interlaced video signal is, for example, a progressive video signal. In addition, it should be noted that the video processing method of the present embodiment may be executed by a programmable logic device (e.g., FPGA), but the present application is not limited thereto.
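For illustration, a sketch of this fifth-embodiment flow is shown below; deinterlace_field stands in for the first-embodiment method applied over its four-field window, and the function name and windowing are assumptions.

```python
def process_stream(fields, deinterlace_field):
    """Receive interlaced fields, de-interlace each with a four-field
    window, and output the resulting progressive frames."""
    frames = []
    for t in range(2, len(fields) - 1):
        # fields (t-2), (t-1), t and (t+1) feed the first-embodiment method
        frames.append(deinterlace_field(fields[t - 2], fields[t - 1],
                                        fields[t], fields[t + 1]))
    return frames
```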
In this embodiment, by using the de-interlacing interpolation method described in the first embodiment, the motion weight of the point to be interpolated is obtained through motion detection, and a mode of combining static interpolation and dynamic interpolation is adopted, and the motion weight obtained through motion detection is used to superimpose the static interpolation result and the dynamic interpolation result of the point to be interpolated to finally obtain the pixel value of the point to be interpolated, so when the de-interlacing interpolation method is applied to an interlaced video signal, one or more of the following technical effects can be achieved:
(a) severe shaking of interlaced video signals can be effectively removed by the video processing method of this embodiment, the contrast of image edges is increased, and images are displayed clearly;
(b) the motion trail of the moving object can be correctly judged, so that the probability of interpolation direction interpolation error is reduced to the minimum;
(c) interlaced video with high noise can be processed; and
(d) the speed of the object motion can be detected, and the video playing is smooth.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and/or method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units/modules is only one logical division, and there may be other divisions in actual implementation, for example, multiple units or modules may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units/modules described as separate parts may or may not be physically separate, and parts displayed as units/modules may or may not be physical units, may be located in one place, or may be distributed on multiple network units. Some or all of the units/modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
In addition, functional units/modules in the embodiments of the present application may be integrated into one processing unit/module, or each unit/module may exist alone physically, or two or more units/modules may be integrated into one unit/module. The integrated units/modules may be implemented in the form of hardware, or may be implemented in the form of hardware plus software functional units/modules.
The integrated units/modules, which are implemented in the form of software functional units/modules, may be stored in a computer readable storage medium. The software functional unit is stored in a storage medium and includes several instructions for causing one or more processors of a computer device (which may be a personal computer, a server, or a network device) to execute some steps of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.
[Industrial applicability]
The method comprises the steps of obtaining a motion weight of a point to be interpolated through motion detection, and superposing a static interpolation result and a dynamic interpolation result of the point to be interpolated by using the motion weight obtained through the motion detection in a mode of combining static interpolation and dynamic interpolation to finally obtain a pixel value of the point to be interpolated; therefore, the de-interlacing processing effect can be improved, and effective de-jittering of the interlaced video signals under the severe jittering condition is realized.

Claims (15)

  1. A method of de-interlacing interpolation, comprising:
    carrying out motion detection based on interlaced video data to obtain a motion weight of a point to be interpolated;
    performing static interpolation to obtain a static interpolation result of the point to be interpolated;
    carrying out dynamic interpolation to obtain a dynamic interpolation result of the point to be interpolated; and
    and superposing the static interpolation result and the dynamic interpolation result based on the motion weight to obtain the pixel value of the point to be interpolated.
  2. The deinterlacing interpolation method of claim 1, wherein the motion detection based on the interlaced video data to obtain the motion weight of the point to be interpolated comprises:
    forming a first filtering template by using pixel point data of the first two fields, and forming a second filtering template by using pixel point data of the current field and the next field;
    making a difference between the first filtering template and the second filtering template and taking an absolute value to obtain a motion weight value intermediate variable of the point to be interpolated; and
    and obtaining the motion weight of the point to be interpolated based on the motion weight of the previous point to be interpolated which is positioned at the same pixel point position as the point to be interpolated and the motion weight intermediate variable.
  3. The deinterlacing interpolation method of claim 1, wherein the performing static interpolation to obtain the static interpolation result of the point to be interpolated comprises:
    and interpolating by using an interleaving method to obtain the static interpolation result.
  4. The deinterlacing interpolation method of claim 1, wherein the performing dynamic interpolation to obtain the dynamic interpolation result of the point to be interpolated comprises:
    carrying out edge detection to obtain a plurality of groups of edge detection values;
    calculating to obtain a weight for edge interpolation in the designated direction and a weight set for local weighted edge interpolation based on the plurality of groups of edge detection values;
    based on the weight value for the edge interpolation in the appointed direction, carrying out the edge interpolation in the appointed direction to obtain an edge interpolation result in the appointed direction;
    based on the weight set for local weighted edge interpolation and the multiple groups of edge detection values, local weighted edge interpolation is carried out to obtain a local weighted edge interpolation result; and
    and summing the edge interpolation result in the appointed direction and the local weighted edge interpolation result and averaging to obtain the dynamic interpolation result.
  5. The deinterlacing interpolation method of claim 4, wherein the performing edge detection to obtain a plurality of sets of edge detection values comprises:
    constructing a filtering template taking the point to be interpolated as a template center; and
    and filtering by a plurality of edge detection operators in the constructed filtering template to obtain the plurality of groups of edge detection values, wherein the filtering templates adopted by the filtering by the plurality of edge detection operators are different.
  6. The de-interlacing interpolation method according to claim 4, wherein said calculating a set of weights for edge interpolation in a specified direction and a set of weights for local weighted edge interpolation based on the plurality of sets of edge detection values comprises:
    calculating the sum of the transverse edge detection value and the longitudinal edge detection value of each group of edge detection values in the multiple groups of edge detection values to obtain a weight set for local weighted edge interpolation; and
    and calculating the sum of all transverse edge detection values and all longitudinal edge detection values in the multiple groups of edge detection values to obtain the weight value for the edge interpolation in the specified direction.
  7. The de-interlacing interpolation method according to claim 4, wherein the performing the edge interpolation in the designated direction based on the weight value for the edge interpolation in the designated direction to obtain the edge interpolation result in the designated direction comprises:
    and summing the pixel values of the six neighborhood pixel points of the point to be interpolated, and multiplying the sum by the weight value for the edge interpolation in the specified direction to obtain an edge interpolation result in the specified direction, wherein the edge interpolation in the specified direction is vertical and diagonal interpolation.
  8. The deinterlacing interpolation method of claim 4, wherein the performing the locally weighted edge interpolation based on the set of weights for locally weighted edge interpolation and the plurality of sets of edge detection values to obtain the locally weighted edge interpolation result comprises:
    determining pixel point pairs for interpolation based on the ratio and the positive and negative of the transverse edge detection value and the longitudinal edge detection value of each group of edge detection values in the multiple groups of edge detection values;
    obtaining a local weighting calculation result corresponding to a group of edge detection values based on the pixel value of the interpolation pixel point pair and a corresponding local weighting edge weight in the local weighting edge interpolation weight set; and
    and summing a plurality of local weighting calculation results respectively corresponding to the plurality of groups of edge detection values to obtain a local weighting edge interpolation result.
  9. The deinterlacing interpolation method of claim 1, wherein the superimposing the static interpolation result and the dynamic interpolation result based on the motion weight to obtain the pixel value of the point to be interpolated comprises:
    calculating the pixel value of the point to be interpolated by adopting the following formula:
    F(i,j)=Static_value*(1-Motion_weight_value_current_field)+Motion_value*Motion_weight_value_current_field
    wherein F(i, j) represents the point to be interpolated, Motion_weight_value_current_field represents the motion weight of the point to be interpolated, Static_value represents the static interpolation result, and Motion_value represents the dynamic interpolation result.
  10. A de-interlacing interpolation apparatus comprising:
    the motion detection module is used for carrying out motion detection based on interlaced video data to obtain a motion weight of a point to be interpolated;
    the static interpolation module is used for carrying out static interpolation to obtain a static interpolation result of the point to be interpolated;
    the dynamic interpolation module is used for carrying out dynamic interpolation to obtain a dynamic interpolation result of the point to be interpolated; and
    the interpolation result superposition module is used for superposing the static interpolation result and the dynamic interpolation result based on the motion weight to obtain the pixel value of the point to be interpolated.
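
A structural sketch of the apparatus of claim 10, with each module modeled as a callable supplied by the caller; the module interfaces (fields, i, j) are assumptions made for illustration only.

    class DeinterlaceInterpolator:
        """Motion detection, static interpolation, dynamic interpolation,
        and interpolation result superposition, wired as in claim 10."""

        def __init__(self, detect_motion, static_interp, dynamic_interp):
            self.detect_motion = detect_motion    # motion detection module
            self.static_interp = static_interp    # static interpolation module
            self.dynamic_interp = dynamic_interp  # dynamic interpolation module

        def pixel(self, fields, i, j):
            """Interpolation result superposition module."""
            w = self.detect_motion(fields, i, j)
            s = self.static_interp(fields, i, j)
            d = self.dynamic_interp(fields, i, j)
            return s * (1.0 - w) + d * w
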
  11. The de-interlacing interpolation apparatus according to claim 10, wherein the motion detection module includes:
    the filtering template forming unit is used for forming a first filtering template by using pixel point data of the previous two fields and forming a second filtering template by using pixel point data of the current field and the next field;
    the intermediate variable calculation unit is used for taking the difference between the first filtering template and the second filtering template and taking the absolute value to obtain a motion weight intermediate variable of the point to be interpolated; and
    the motion weight calculation unit is used for obtaining the motion weight of the point to be interpolated based on the motion weight intermediate variable and the motion weight of the point in the previous field that is located at the same pixel point position as the point to be interpolated in the current field.
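
A sketch of the motion detection module of claim 11. The template size, the averaging of the two fields inside each template, the reduction of the difference template to a scalar by its mean, and the max() combination with the previous field's co-located motion weight are all assumptions; the claim fixes only the difference-and-absolute-value step and the dependence on the previous field's weight.

    import numpy as np

    def motion_weight(tpl_prev2, tpl_prev1, tpl_cur, tpl_next, prev_field_weight):
        """tpl_*: co-located luma templates from four consecutive fields."""
        first_template = (tpl_prev2 + tpl_prev1) / 2.0   # previous two fields
        second_template = (tpl_cur + tpl_next) / 2.0     # current and next field
        intermediate = float(np.abs(first_template - second_template).mean())
        # Carry motion forward from the co-located point of the previous field
        # so a briefly static point inside a moving region still reads as moving.
        return max(intermediate, prev_field_weight)
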
  12. The de-interlacing interpolation apparatus according to claim 10, wherein the dynamic interpolation module includes:
    the edge detection unit is used for carrying out edge detection to obtain a plurality of groups of edge detection values;
    the weight calculation unit is used for calculating, based on the plurality of groups of edge detection values, a weight for the edge interpolation in the specified direction and a weight set for the local weighted edge interpolation;
    the specified direction edge interpolation unit is used for performing the edge interpolation in the specified direction based on the weight for the edge interpolation in the specified direction to obtain the edge interpolation result in the specified direction;
    the local weighted edge interpolation unit is used for performing the local weighted edge interpolation based on the weight set for the local weighted edge interpolation and the plurality of groups of edge detection values to obtain the local weighted edge interpolation result; and
    the result averaging unit is used for averaging the edge interpolation result in the specified direction and the local weighted edge interpolation result to obtain the dynamic interpolation result.
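
Combining the units of claim 12 into the dynamic interpolation result is then a plain average of the two edge interpolation results; a sketch, reusing the functions from the earlier sketches:

    def dynamic_interp(field, i, j, window, operator_pairs):
        """Dynamic (intra-field) interpolation of claim 12."""
        groups, local_set, dir_weight = edge_weights(window, operator_pairs)
        a = specified_direction_interp(field, i, j, dir_weight)
        b = local_weighted_interp(field, i, j, groups, local_set)
        return (a + b) / 2.0  # result averaging unit: sum and average
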
  13. A de-interlacing interpolation system, comprising: a processor and a memory, wherein the memory stores instructions executable by the processor, and the processor is configured to execute the instructions to implement the de-interlacing interpolation method according to any one of claims 1 to 9.
  14. A storage medium, wherein the storage medium is a non-volatile memory and stores program code which, when executed by a computer, implements the de-interlacing interpolation method according to any one of claims 1 to 9.
  15. A video processing method, comprising:
    receiving an interlaced video signal;
    de-interlacing the interlaced video signal by using the de-interlacing interpolation method according to any one of claims 1 to 9 to obtain a de-interlaced video signal; and
    outputting the de-interlaced video signal.
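
The video processing method of claim 15 is a thin pipeline around the interpolation; a sketch, with the de-interlacing step passed in as a callable standing in for the method of claims 1 to 9:

    def process_video(interlaced_fields, deinterlace):
        """Receive an interlaced signal, de-interlace it, output the result."""
        for field_group in interlaced_fields:
            yield deinterlace(field_group)
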
CN201980082524.0A 2019-01-09 2019-01-09 De-interlacing interpolation method, de-interlacing interpolation device, de-interlacing interpolation system, video processing method and storage medium Active CN113261276B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/070943 WO2020142916A1 (en) 2019-01-09 2019-01-09 De-interlacing interpolation method, device and system, video processing method and storage medium

Publications (2)

Publication Number Publication Date
CN113261276A true CN113261276A (en) 2021-08-13
CN113261276B CN113261276B (en) 2023-08-22

Family

ID=71520591

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201980082524.0A Active CN113261276B (en) 2019-01-09 2019-01-09 De-interlacing interpolation method, de-interlacing interpolation device, de-interlacing interpolation system, video processing method and storage medium

Country Status (2)

Country Link
CN (1) CN113261276B (en)
WO (1) WO2020142916A1 (en)

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20040009967A (en) * 2002-07-26 2004-01-31 Samsung Electronics Co., Ltd. Apparatus and method for deinterlacing
TW200743365A (en) * 2006-05-05 2007-11-16 Univ Nat Central Method of de-interlace processing by edge detection
CN101106685B (en) * 2007-08-31 2010-06-02 湖北科创高新网络视频股份有限公司 An deinterlacing method and device based on motion detection
CN106027943B (en) * 2016-07-11 2019-01-15 北京大学 A kind of video interlace-removing method

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070121001A1 (en) * 2005-11-30 2007-05-31 Lsi Logic Corporation Accurate motion detection for the combination of motion adaptive and motion compensation de-interlacing applications
CN101014086A (en) * 2007-01-31 2007-08-08 天津大学 De-interlacing apparatus using motion detection and adaptive weighted filter
US20090102966A1 (en) * 2007-10-17 2009-04-23 Trident Technologies, Inc. Systems and methods of motion and edge adaptive processing including motion compensation features
CN201222771Y (en) * 2008-05-30 2009-04-15 深圳艾科创新微电子有限公司 High speed edge self-adapting de-interlaced interpolation device
CN101309385A (en) * 2008-07-09 2008-11-19 北京航空航天大学 Alternate line eliminating process method based on motion detection
CN102215368A (en) * 2011-06-02 2011-10-12 中山大学 Motion self-adaptive de-interlacing method based on visual characteristics
CN104202555A (en) * 2014-09-29 2014-12-10 建荣集成电路科技(珠海)有限公司 Method and device for deinterlacing
CN105611214A (en) * 2016-02-21 2016-05-25 上海大学 Method for de-interlacing through intra-field linear interpolation based on multidirectional detection
CN107018350A (en) * 2017-04-21 2017-08-04 西安诺瓦电子科技有限公司 Video deinterlacing processing method and processing device
CN107135367A (en) * 2017-04-26 2017-09-05 西安诺瓦电子科技有限公司 Video interlace-removing method and device, method for processing video frequency and device

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Sun Chunfeng; Yuan Feng; Ding Zhenliang: "A new edge-preserving locally adaptive image interpolation algorithm", no. 10 *
Nie Miao; Li Ying; Shi Lizhuo; Jiang Jiachen; Yan Yachao: "Motion-adaptive de-interlacing algorithm based on a video surveillance system" *
Nie Miao; Li Ying; Shi Lizhuo; Jiang Jiachen; Yan Yachao: "Motion-adaptive de-interlacing algorithm based on a video surveillance system", Computer Applications, no. 10 *

Also Published As

Publication number Publication date
WO2020142916A1 (en) 2020-07-16
CN113261276B (en) 2023-08-22

Similar Documents

Publication Publication Date Title
EP0677958B1 (en) Motion adaptive scan conversion using directional edge interpolation
US8189105B2 (en) Systems and methods of motion and edge adaptive processing including motion compensation features
JP3644874B2 (en) Image interpolation device
JPS63313981A (en) Digital television image motion vector processor
KR100914619B1 (en) Spatio-temporal adaptive video de-interlacing
JP2011041306A (en) De-interlacing of video signal
JPS63313987A (en) Method for reducing number of movtion vector of digital television image reduction method
JPS63313982A (en) Television image motion vector evaluation method
Chen et al. Effective demosaicking algorithm based on edge property for color filter arrays
US5793443A (en) Motion vector detection circuit
US6686923B2 (en) Motion adaptive de-interlacing circuit and method
JP4892714B2 (en) Field Dominance Judgment Method in Video Frame Sequence
KR100563023B1 (en) Method and system for edge-adaptive interpolation for interlace-to-progressive conversion
US20080063307A1 (en) Pixel Interpolation
US8565309B2 (en) System and method for motion vector collection for motion compensated interpolation of digital video
WO2006020532A2 (en) Fast area-selected filtering for pixel-noise and analog artifacts reduction
TWI323610B (en) Apparatus and method for video de-interlace
US7573530B2 (en) Method and system for video noise reduction based on moving content detection
WO2021179954A1 (en) Video processing method and apparatus, device, and storage medium
CN113261276A (en) Deinterlacing interpolation method, device and system, video processing method and storage medium
Lien et al. Efficient VLSI architecture for edge-oriented demosaicking
Baranov et al. High-quality uhd demosaicing on low-cost fpga
US10264212B1 (en) Low-complexity deinterlacing with motion detection and overlay compensation
US20080002055A1 (en) Spatio-temporal adaptive video de-interlacing for parallel processing
JP4274430B2 (en) Motion vector detection device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant