CN112270738B - Self-adaptive sub-pixel rendering method and device - Google Patents


Info

Publication number: CN112270738B (application CN202011276767.9A)
Authority: CN (China)
Prior art keywords: rendering, vertical, horizontal, low sharpness, sharpness
Legal status: Active (granted)
Other languages: Chinese (zh)
Other versions: CN112270738A
Inventors: 陈涛, 林江, 王洪剑
Current assignee: Shanghai Tongtu Semiconductor Technology Co., Ltd.
Original assignee: Shanghai Tongtu Semiconductor Technology Co., Ltd.
Filing: application CN202011276767.9A filed by Shanghai Tongtu Semiconductor Technology Co., Ltd.
Priority: CN202011276767.9A
Publications: CN112270738A (application), CN112270738B (grant)

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/10Geometric effects
    • G06T15/20Perspective computation
    • G06T15/205Image-based rendering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics

Abstract

The invention discloses a self-adaptive sub-pixel rendering method and a device, wherein the method comprises the following steps: step S1, respectively carrying out vertical frequency analysis and horizontal frequency analysis on an input image to obtain a horizontal high-low sharpness rendering ratio and a vertical high-low sharpness rendering ratio; step S2, performing vertical high-low sharpness rendering on the input image, and performing vertical high-low sharpness rendering mixing based on the vertical high-low sharpness rendering proportion obtained by the vertical frequency analysis in the step S1 to obtain a vertical rendering result; and step S3, performing horizontal high-low sharpness rendering on the vertical rendering result in the step S2, and performing horizontal high-low sharpness rendering mixing based on the horizontal high-low sharpness rendering proportion obtained by horizontal frequency analysis in the step S1 to obtain a final image rendering result.

Description

Self-adaptive sub-pixel rendering method and device
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a method and apparatus for adaptive sub-pixel rendering.
Background
On a typical display screen, each square pixel is divided into three equal parts, which are assigned red, green, and blue respectively so that together they form one color pixel. However, on some small-sized displays, manufacturing process constraints reduce the overall number of sub-pixels. To avoid degrading the display effect, a sub-pixel rendering method matched to the display screen must be employed.
Sub-pixel reduction typically causes both a loss of sharpness and color cast, problems that sub-pixel rendering methods attempt to address. Current sub-pixel rendering methods, however, cannot solve both well at once: improving sharpness makes the color cast more severe, while correcting the color cast reduces sharpness further. To achieve a higher-quality display effect, both the sharpness problem and the color cast problem must be handled together, and a sub-pixel rendering method and apparatus are needed to do so.
Disclosure of Invention
To overcome the defects of the prior art, the invention aims to provide an adaptive sub-pixel rendering method and device that better resolve the sharpness loss and color cast caused by sub-pixel reduction by blending low-sharpness and high-sharpness rendering results.
In order to achieve the above objective, the present invention provides an adaptive sub-pixel rendering method, which includes the following steps:
step S1, respectively carrying out vertical frequency analysis and horizontal frequency analysis on an input image to obtain a horizontal high-low sharpness rendering ratio and a vertical high-low sharpness rendering ratio;
step S2, performing vertical high-low sharpness rendering on the input image, and performing vertical high-low sharpness rendering mixing based on the vertical high-low sharpness rendering proportion obtained by the vertical frequency analysis in the step S1 to obtain a vertical rendering result;
and step S3, performing horizontal high-low sharpness rendering on the vertical rendering result in the step S2, and performing horizontal high-low sharpness rendering mixing based on the horizontal high-low sharpness rendering proportion obtained by horizontal frequency analysis in the step S1 to obtain a final image rendering result.
Preferably, in step S1, a frequency detection filter is used to perform frequency analysis on the input image.
Preferably, for horizontal frequency analysis, step S1 further comprises:
step S100, the input image and two horizontal frequency detection filters are respectively convolved, and convolution results are respectively recorded as h_cI0 and h_cI1;
step S101, correspondingly subtracting the two convolution images h_cI0 and h_cI1 to obtain a difference image h_dI;
step S102, calculating the horizontal high sharpness rendering proportional image h_wI of the input image according to the difference image h_dI of the two convolution results.
Preferably, for vertical frequency analysis, step S1 further comprises:
step S100, the input image is respectively convolved with two vertical frequency detection filters, and convolution results are respectively recorded as v_cI0 and v_cI1;
step S101, correspondingly subtracting two convolution images v_cI0 and v_cI1 to obtain a difference image v_dI;
step S102, calculating the vertical high sharpness rendering proportional image v_wI of the input image according to the difference image v_dI of the two convolution results.
Preferably, step S2 further comprises:
step S200, convolving each channel image of the input image with a vertical high-low sharpness rendering filter respectively to obtain a result of vertical high-low sharpness rendering of each channel image;
step S201, after calculating the vertical high-low sharpness rendering result of each channel image, based on the vertical high-low sharpness rendering ratio obtained by the vertical frequency analysis in step S1, mixing the vertical high-low sharpness rendering results of each channel image to obtain the sub-pixel vertical rendering result.
Preferably, step S3 further comprises:
step S300, convolving each channel image after vertical rendering with a horizontal high-low sharpness rendering filter respectively to obtain a result of horizontal high-low sharpness rendering of each channel image;
step S301, after the horizontal high-low sharpness rendering results of each channel image are calculated, they are mixed based on the horizontal high-low sharpness rendering ratio obtained by the horizontal frequency analysis in step S1, yielding the final image rendering result.
In order to achieve the above object, the present invention further provides an adaptive sub-pixel rendering device, including:
the frequency analysis module is used for respectively carrying out vertical frequency analysis and horizontal frequency analysis on the input image to obtain a horizontal high-low sharpness rendered proportional image and a vertical high-low sharpness rendered proportional image;
the vertical rendering module is used for performing vertical high-low sharpness rendering on the input image, and performing vertical high-low sharpness rendering mixing based on the vertical high-low sharpness rendering proportion obtained by vertical frequency analysis in the frequency analysis module to obtain a vertical rendering result;
and the horizontal rendering module is used for performing horizontal high-low sharpness rendering on the vertical rendering result of the vertical rendering module, and performing horizontal high-low sharpness rendering mixing based on the horizontal high-low sharpness rendering proportion obtained by horizontal frequency analysis in the frequency analysis module to obtain a final image rendering result.
Preferably, the frequency analysis module is used for respectively convolving two filters with the input image during vertical frequency analysis and horizontal frequency analysis, and then calculating the high-low sharpness rendering ratio from the difference of the two convolution results.
Preferably, the vertical rendering module is specifically configured to:
convolving each channel image of the input image with a vertical high-low sharpness rendering filter respectively to obtain a result of vertical high-low sharpness rendering of each channel image;
and after the vertical high-low sharpness rendering result of each channel image is calculated, mixing the vertical high-low sharpness rendering results of each channel image based on the vertical high-low sharpness rendering proportion obtained by the vertical frequency analysis in the frequency analysis module, so as to obtain the sub-pixel vertical rendering result.
Preferably, the horizontal rendering module is specifically configured to:
convolving each channel image after vertical rendering with a horizontal high-low sharpness rendering filter respectively to obtain a result of horizontal high-low sharpness rendering of each channel image;
and after the horizontal high-low sharpness rendering results of each channel image are calculated, mixing them based on the horizontal high-low sharpness rendering ratio obtained by the horizontal frequency analysis in the frequency analysis module, so as to obtain the final image rendering result.
Compared with the prior art, the self-adaptive sub-pixel rendering method and device provided by the invention have the advantages that the vertical frequency analysis and the horizontal frequency analysis are respectively carried out on the input image to obtain the horizontal high-low sharpness rendering proportion and the vertical high-low sharpness rendering proportion, then the input image is firstly subjected to vertical rendering and then to horizontal rendering, and the high-low sharpness rendering is mixed according to the high-low sharpness rendering proportion obtained based on the vertical/horizontal frequency analysis during rendering, so that the sharpness reduction and color cast problems caused by the reduction of the existing sub-pixels are better solved through mixing the low sharpness rendering result and the high sharpness rendering result.
Drawings
FIG. 1 is a flow chart of steps of an adaptive sub-pixel rendering method according to the present invention;
FIG. 2 is a flow chart of horizontal frequency analysis in an embodiment of the invention;
FIG. 3 is a flow chart of vertical rendering in an embodiment of the invention;
FIG. 4 is a flow chart of horizontal rendering in an embodiment of the invention;
fig. 5 is a system architecture diagram of an adaptive sub-pixel rendering device according to the present invention.
Detailed Description
Other advantages and effects of the present invention will become readily apparent to those skilled in the art from the following disclosure of embodiments, taken together with the accompanying drawings. The invention may also be practiced or applied in other, different embodiments, and the details in this specification may be modified in various ways without departing from the spirit and scope of the invention.
Fig. 1 is a flow chart illustrating steps of an adaptive sub-pixel rendering method according to the present invention. As shown in fig. 1, the adaptive sub-pixel rendering method of the present invention includes the following steps:
and S1, respectively carrying out vertical frequency analysis and horizontal frequency analysis on the input image to obtain the rendering proportion of the horizontal high sharpness and the horizontal sharpness and the rendering proportion of the vertical high sharpness and the horizontal sharpness.
In a specific embodiment of the present invention, a frequency detection filter is used to perform frequency analysis on an input image, that is, two filters (two horizontal frequency detection filters are used for horizontal frequency analysis and two vertical frequency detection filters are used for vertical frequency analysis) are respectively convolved with the input image, and then the ratio of high-low sharpness rendering is calculated by the difference between the two convolution results.
The calculation of the high-low sharpness rendering ratio is similar in the horizontal and vertical directions; the horizontal case is described here, with its basic flow shown in fig. 2. Denote the input image as I and the two horizontal frequency detection filters as pf0 and pf1. For example, the horizontal frequency detection filters may be set as follows:
pf0=[-1,2,1]/2,
pf1=[-1,0,2,0,-1]/2。
specifically, step S1 further includes:
in step S100, the input image I is first convolved with two horizontal frequency detection filters pf0 and pf1, respectively, to obtain a convolution result denoted as h_ci0, h_ci1, and the specific calculation formula is as follows:
h_cI0=max(|conv(Ir,pf0)|,|conv(Ig,pf0)|,|conv(Ib,pf0)|),
h_cI1=max(|conv(Ir,pf1)|,|conv(Ig,pf1)|,|conv(Ib,pf1)|),
ir, ig and Ib are red, green and blue channel images of an input image I respectively, conv represents image convolution, |·| represents taking an absolute value, and max (·) represents taking a maximum value.
Step S101, subtract the two convolution images h_cI0 and h_cI1 pixel-wise to obtain the difference image h_dI:
h_dI=max(0,h_cI0-h_cI1)。
Step S102, finally, the horizontal high sharpness rendering proportional image h_wI of the input image I is calculated from the difference image h_dI of the two convolution results (it is called a proportional image because its content consists of proportion values), as follows:
h_wI=min(1,k*max(0,h_dI-th)),
where k ≥ 0 and 0 ≤ th ≤ 1, and min and max denote the minimum and maximum values respectively.
Likewise, the vertical high sharpness rendering proportional image v_wI of the input image I can be calculated by the same method.
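As a concrete illustration, steps S100–S102 can be sketched in Python/NumPy as below. The values of k and th are illustrative assumptions; the text only constrains k ≥ 0 and 0 ≤ th ≤ 1, and the filter taps are the examples given above.

```python
import numpy as np

def conv_rows(channel, kernel):
    """1-D 'same' convolution of each image row with the given kernel."""
    return np.apply_along_axis(
        lambda r: np.convolve(r, kernel, mode="same"), 1, channel)

def horizontal_high_sharpness_ratio(img, k=4.0, th=0.05):
    """Compute the horizontal high-sharpness blend-ratio image h_wI.

    img: float array of shape (H, W, 3), values in [0, 1].
    k and th are illustrative tuning constants, not values from the text.
    """
    pf0 = np.array([-1.0, 2.0, 1.0]) / 2.0             # filter from the text
    pf1 = np.array([-1.0, 0.0, 2.0, 0.0, -1.0]) / 2.0  # filter from the text
    # S100: convolve each channel, take the per-pixel max of |responses|
    h_cI0 = np.max([np.abs(conv_rows(img[:, :, c], pf0)) for c in range(3)], axis=0)
    h_cI1 = np.max([np.abs(conv_rows(img[:, :, c], pf1)) for c in range(3)], axis=0)
    # S101: clipped difference of the two responses
    h_dI = np.maximum(0.0, h_cI0 - h_cI1)
    # S102: map the difference to a blend ratio in [0, 1]
    return np.minimum(1.0, k * np.maximum(0.0, h_dI - th))
```

The vertical ratio image v_wI follows by convolving along columns instead of rows.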
And S2, performing vertical high-low sharpness rendering on the input image, and then performing vertical high-low sharpness rendering mixing based on the vertical high-low sharpness rendering proportion obtained by the vertical frequency analysis in the step S1 to obtain a vertical rendering result.
The invention performs the rendering process in two steps: vertical rendering first, then horizontal rendering. Vertical rendering processes the input image in the vertical direction to obtain a vertical rendering result: the high-sharpness and low-sharpness vertical rendering results are computed separately and then blended according to the high-low sharpness rendering ratio. The vertical rendering flow is shown in fig. 3.
Specifically, step S2 further includes:
step S200, convolving each channel image of the input image with the vertical high and low sharpness rendering filters, respectively, so as to obtain the result of vertical high and low sharpness rendering of each channel image.
Denote the input image as I, its red, green and blue channel images as Ir, Ig and Ib, and their corresponding vertical high- and low-sharpness filters as v_fr0, v_fr1, v_fg0, v_fg1, v_fb0, v_fb1. The vertical high- and low-sharpness rendering results are denoted spr_v_Ir0 and spr_v_Ir1 for the red channel, spr_v_Ig0 and spr_v_Ig1 for the green channel, and spr_v_Ib0 and spr_v_Ib1 for the blue channel. For example, the vertical high-sharpness filter may be chosen as [0,1,0] and the vertical low-sharpness filter as [0.5,0.5,0]. The rendering results for each channel are computed as follows:
spr_v_Ir0=conv(Ir,v_fr0),
spr_v_Ir1=conv(Ir,v_fr1),
spr_v_Ig0=conv(Ig,v_fg0),
spr_v_Ig1=conv(Ig,v_fg1),
spr_v_Ib0=conv(Ib,v_fb0),
spr_v_Ib1=conv(Ib,v_fb1),
where conv represents the image convolution.
Step S201, after calculating the vertical high-low sharpness rendering results, based on the vertical high-low sharpness rendering ratio obtained by the vertical frequency analysis in step S1, mixing the vertical high-low sharpness rendering results to obtain the sub-pixel vertical rendering results.
Specifically, the specific formula of the vertical rendering blend is as follows:
spr_v_Ir=v_wI*spr_v_Ir0+(1-v_wI)*spr_v_Ir1,
spr_v_Ig=v_wI*spr_v_Ig0+(1-v_wI)*spr_v_Ig1,
spr_v_Ib=v_wI*spr_v_Ib0+(1-v_wI)*spr_v_Ib1,
where spr_v_Ir, spr_v_Ig and spr_v_Ib are the vertical rendering blend results of the red, green and blue channel images respectively.
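A minimal NumPy sketch of this vertical blending step (S200 and S201), using the example filters [0,1,0] and [0.5,0.5,0] from the text; for brevity the same filter pair is applied to all three channels, although the method allows a distinct pair per channel:

```python
import numpy as np

def conv_cols(channel, kernel):
    """1-D 'same' convolution of each image column with the given kernel."""
    return np.apply_along_axis(
        lambda v: np.convolve(v, kernel, mode="same"), 0, channel)

def vertical_render(img, v_wI):
    """Blend vertical high/low sharpness renderings, weighted per pixel
    by the ratio image v_wI from the vertical frequency analysis."""
    v_f0 = np.array([0.0, 1.0, 0.0])   # high-sharpness (identity) filter
    v_f1 = np.array([0.5, 0.5, 0.0])   # low-sharpness averaging filter
    out = np.empty_like(img)
    for c in range(3):                          # red, green, blue channels
        hi = conv_cols(img[:, :, c], v_f0)      # spr_v_I*0
        lo = conv_cols(img[:, :, c], v_f1)      # spr_v_I*1
        out[:, :, c] = v_wI * hi + (1.0 - v_wI) * lo
    return out
```

Where v_wI is 1 the output reduces to the high-sharpness (identity-filtered) image, and where it is 0 to the low-sharpness average, matching the blend formulas above.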
And step S3, performing horizontal high-low sharpness rendering on the vertical rendering result in the step S2, and then performing horizontal high-low sharpness rendering mixing based on the horizontal high-low sharpness rendering proportion obtained by horizontal frequency analysis in the step S1 to obtain a horizontal rendering result, which is also a final image rendering result.
Horizontal rendering applies sub-pixel rendering in the horizontal direction to the vertically rendered image: the horizontal high-sharpness and low-sharpness rendering results are computed separately and then blended according to the horizontal high-low sharpness rendering ratio to obtain the horizontal rendering result. The horizontal rendering flow is shown in fig. 4.
Specifically, step S3 further includes:
step S300, convolving each channel image after vertical rendering with a horizontal high-sharpness rendering filter and a horizontal low-sharpness rendering filter respectively, so that a result of horizontal high-sharpness rendering of each channel image can be obtained.
The high- and low-sharpness filters of the red, green and blue channel images are denoted h_fr0, h_fr1, h_fg0, h_fg1, h_fb0 and h_fb1 respectively. The high- and low-sharpness rendering results are denoted spr_h_Ir0 and spr_h_Ir1 for the red channel, spr_h_Ig0 and spr_h_Ig1 for the green channel, and spr_h_Ib0 and spr_h_Ib1 for the blue channel. For example, the horizontal high-sharpness filter may be chosen as [0,1,0], and the horizontal low-sharpness filter as [1/3,1/3,1/3]. The horizontal high- and low-sharpness rendering results of each channel image are computed as follows:
spr_h_Ir0=conv(spr_v_Ir,h_fr0),
spr_h_Ir1=conv(spr_v_Ir,h_fr1),
spr_h_Ig0=conv(spr_v_Ig,h_fg0),
spr_h_Ig1=conv(spr_v_Ig,h_fg1),
spr_h_Ib0=conv(spr_v_Ib,h_fb0),
spr_h_Ib1=conv(spr_v_Ib,h_fb1),
where conv represents the image convolution.
Step S301, after the horizontal high-low sharpness rendering results are calculated, they are mixed based on the horizontal high-low sharpness rendering ratio obtained by the horizontal frequency analysis in step S1, giving the sub-pixel horizontal rendering result, namely the final image rendering result. In a specific embodiment of the present invention, the specific formula of the horizontal rendering blend is as follows:
spr_h_Ir=h_wI*spr_h_Ir0+(1-h_wI)*spr_h_Ir1,
spr_h_Ig=h_wI*spr_h_Ig0+(1-h_wI)*spr_h_Ig1,
spr_h_Ib=h_wI*spr_h_Ib0+(1-h_wI)*spr_h_Ib1,
where spr_h_Ir, spr_h_Ig and spr_h_Ib are the horizontal rendering blend results of the red, green and blue channels respectively.
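The horizontal pass (S300 and S301) mirrors the vertical one; a sketch, again assuming a single filter pair shared across channels, with the high-sharpness filter [0,1,0] and the low-sharpness 3-tap box filter [1/3,1/3,1/3] from the example above:

```python
import numpy as np

def conv_rows(channel, kernel):
    """1-D 'same' convolution of each image row with the given kernel."""
    return np.apply_along_axis(
        lambda r: np.convolve(r, kernel, mode="same"), 1, channel)

def horizontal_render(img_v, h_wI):
    """Blend horizontal high/low sharpness renderings of the vertically
    rendered image img_v, weighted per pixel by the ratio image h_wI."""
    h_f0 = np.array([0.0, 1.0, 0.0])   # high-sharpness (identity) filter
    h_f1 = np.full(3, 1.0 / 3.0)       # low-sharpness box filter (example value)
    out = np.empty_like(img_v)
    for c in range(3):                          # red, green, blue channels
        hi = conv_rows(img_v[:, :, c], h_f0)    # spr_h_I*0
        lo = conv_rows(img_v[:, :, c], h_f1)    # spr_h_I*1
        out[:, :, c] = h_wI * hi + (1.0 - h_wI) * lo
    return out
```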
Thus, the sub-pixel rendering method of the invention selects a rendering behavior according to the detected image frequency. Low-frequency content tends to show color cast after rendering, so a slightly lower-sharpness rendering is used, which removes the color cast at only a small cost in sharpness; high-frequency content is unlikely to show color cast after rendering, so a high-sharpness rendering can be used without losing sharpness.
That is, the invention blends the low-sharpness and high-sharpness rendering results, with the blend ratio determined by the detection and analysis of the image frequency: at high frequency the high-sharpness result is used fully in the blend, at low frequency the low-sharpness result is used fully, and at intermediate frequencies the two are mixed in proportion.
It should be noted that, currently, the subpixel arrangement formats commonly used on the small-sized display screen include RGBG format, GGRB format, delta-RGB format, and the like, and the present invention can be adapted to different subpixel arrangement formats, including but not limited to the above three formats.
Fig. 5 is a system architecture diagram of an adaptive sub-pixel rendering device according to the present invention. As shown in fig. 5, an adaptive sub-pixel rendering apparatus of the present invention includes:
the frequency analysis module 501 performs vertical frequency analysis and horizontal frequency analysis on the input image respectively to obtain a horizontal high-low sharpness rendered proportional image and a vertical high-low sharpness rendered proportional image.
In a specific embodiment of the present invention, the frequency analysis module 501 uses frequency detection filters to perform the frequency analysis: two filters (two horizontal frequency detection filters for horizontal frequency analysis, two vertical frequency detection filters for vertical frequency analysis) are respectively convolved with the input image, and the high-low sharpness rendering ratio is then calculated from the difference of the two convolution results.
The calculation of the high-low sharpness rendering ratio is similar in the horizontal and vertical directions. Taking the horizontal case as a concrete example, denote the input image as I and the two horizontal frequency detection filters as pf0 and pf1. For example, the horizontal frequency detection filters may be set as follows:
pf0=[-1,2,1]/2,
pf1=[-1,0,2,0,-1]/2。
specifically, the frequency analysis module 501 is configured to:
firstly, respectively convolving an input image I with two horizontal frequency detection filters pf0 and pf1 to obtain convolution results, namely h_cI0 and h_cI1, wherein the specific calculation formula is as follows:
h_cI0=max(|conv(Ir,pf0)|,|conv(Ig,pf0)|,|conv(Ib,pf0)|),
h_cI1=max(|conv(Ir,pf1)|,|conv(Ig,pf1)|,|conv(Ib,pf1)|),
ir, ig and Ib are red, green and blue channel images of an input image I respectively, conv represents image convolution, |·| represents taking an absolute value, and max (·) represents taking a maximum value.
Then, the two convolution images h_cI0 and h_cI1 are subtracted pixel-wise to obtain the difference image h_dI:
h_dI=max(0,h_cI0-h_cI1)。
Finally, the horizontal high sharpness rendering proportional image h_wI of the input image I is calculated from the difference image h_dI of the two convolution results, as follows:
h_wI=min(1,k*max(0,h_dI-th)),
where k ≥ 0 and 0 ≤ th ≤ 1, and min and max denote the minimum and maximum values respectively.
Likewise, the frequency analysis module 501 may also calculate the vertical high sharpness rendering proportional image v_wI of the input image I.
The vertical rendering module 502 is configured to perform vertical high-low sharpness rendering on an input image, and then perform vertical high-low sharpness rendering mixing based on the vertical high-low sharpness rendering ratio image obtained by the vertical frequency analysis in the frequency analysis module 501, so as to obtain a vertical rendering result.
That is, the invention performs the rendering process in two steps: vertical rendering first, then horizontal rendering. The vertical rendering module 502 computes the high-sharpness and low-sharpness vertical rendering results separately and then blends them according to the vertical high-low sharpness rendering ratio to obtain the vertical rendering result.
The vertical rendering module 502 is specifically configured to:
and convolving each channel image of the input image with a vertical high-sharpness rendering filter and a vertical low-sharpness rendering filter respectively to obtain a vertical high-sharpness rendering result and a vertical low-sharpness rendering result of each channel image.
Denote the input image as I, its red, green and blue channel images as Ir, Ig and Ib, and their corresponding vertical high- and low-sharpness filters as v_fr0, v_fr1, v_fg0, v_fg1, v_fb0, v_fb1. The vertical high- and low-sharpness rendering results are denoted spr_v_Ir0 and spr_v_Ir1 for the red channel, spr_v_Ig0 and spr_v_Ig1 for the green channel, and spr_v_Ib0 and spr_v_Ib1 for the blue channel. For example, the vertical high-sharpness filter may be chosen as [0,1,0] and the vertical low-sharpness filter as [0.5,0.5,0]. The rendering results for each channel are computed as follows:
spr_v_Ir0=conv(Ir,v_fr0),
spr_v_Ir1=conv(Ir,v_fr1),
spr_v_Ig0=conv(Ig,v_fg0),
spr_v_Ig1=conv(Ig,v_fg1),
spr_v_Ib0=conv(Ib,v_fb0),
spr_v_Ib1=conv(Ib,v_fb1),
where conv represents the image convolution.
After the vertical high-sharpness and low-sharpness rendering results are calculated, based on the vertical high-sharpness and low-sharpness rendering proportion image obtained by the vertical frequency analysis in the frequency analysis module 501, the vertical high-sharpness and low-sharpness rendering results are mixed, and the sub-pixel vertical rendering results are obtained.
Specifically, the specific formula of the vertical rendering blend is as follows:
spr_v_Ir=v_wI*spr_v_Ir0+(1-v_wI)*spr_v_Ir1,
spr_v_Ig=v_wI*spr_v_Ig0+(1-v_wI)*spr_v_Ig1,
spr_v_Ib=v_wI*spr_v_Ib0+(1-v_wI)*spr_v_Ib1,
where spr_v_Ir, spr_v_Ig and spr_v_Ib are the vertical rendering blend results of the red, green and blue channel images respectively.
The horizontal rendering module 503 performs horizontal high-low sharpness rendering on the vertical rendering result of the vertical rendering module 502, and then performs mixing of horizontal high-low sharpness rendering based on the horizontal high-low sharpness rendering proportion obtained by horizontal frequency analysis in the frequency analysis module 501, so as to obtain a horizontal rendering result, which is also a final image rendering result.
Horizontal rendering applies sub-pixel rendering in the horizontal direction to the vertically rendered image result: the horizontal high-sharpness and low-sharpness rendering results are computed separately and then blended according to the horizontal high-low sharpness rendering ratio from the frequency analysis module 501 to obtain the horizontal rendering result.
The horizontal rendering module 503 is specifically configured to:
and respectively convolving each channel image after vertical rendering with a horizontal high-sharpness rendering filter and a horizontal low-sharpness rendering filter to obtain the results of horizontal high-sharpness rendering and low-sharpness rendering of each channel image.
The high- and low-sharpness filters of the red, green and blue channel images are denoted h_fr0, h_fr1, h_fg0, h_fg1, h_fb0 and h_fb1 respectively. The high- and low-sharpness rendering results are denoted spr_h_Ir0 and spr_h_Ir1 for the red channel, spr_h_Ig0 and spr_h_Ig1 for the green channel, and spr_h_Ib0 and spr_h_Ib1 for the blue channel. For example, the horizontal high-sharpness filter may be chosen as [0,1,0], and the horizontal low-sharpness filter as [1/3,1/3,1/3]. The horizontal high- and low-sharpness rendering results of each channel image are computed as follows:
spr_h_Ir0=conv(spr_v_Ir,h_fr0),
spr_h_Ir1=conv(spr_v_Ir,h_fr1),
spr_h_Ig0=conv(spr_v_Ig,h_fg0),
spr_h_Ig1=conv(spr_v_Ig,h_fg1),
spr_h_Ib0=conv(spr_v_Ib,h_fb0),
spr_h_Ib1=conv(spr_v_Ib,h_fb1),
where conv represents the image convolution.
After the horizontal high-low sharpness rendering results are calculated, they are mixed based on the horizontal high-low sharpness rendering ratio obtained by the horizontal frequency analysis in the frequency analysis module 501, giving the sub-pixel horizontal rendering result, namely the final image rendering result. In a specific embodiment of the present invention, the specific formula of the horizontal rendering blend is as follows:
spr_h_Ir=h_wI*spr_h_Ir0+(1-h_wI)*spr_h_Ir1,
spr_h_Ig=h_wI*spr_h_Ig0+(1-h_wI)*spr_h_Ig1,
spr_h_Ib=h_wI*spr_h_Ib0+(1-h_wI)*spr_h_Ib1,
where spr_h_Ir, spr_h_Ig and spr_h_Ib are the horizontal rendering blend results of the red, green and blue channels respectively.
In summary, the self-adaptive sub-pixel rendering method and device perform vertical and horizontal frequency analysis on the input image to obtain the vertical and horizontal high-low sharpness rendering proportions, then render the input image vertically and subsequently horizontally, mixing the high- and low-sharpness rendering results at each stage according to the proportion obtained from the corresponding frequency analysis. By blending the low-sharpness and high-sharpness rendering results in this way, the sharpness loss and color cast introduced by conventional sub-pixel reduction are better suppressed.
The above embodiments are merely illustrative of the principles of the present invention and its effectiveness, and are not intended to limit the invention. Modifications and variations may be made to the above-described embodiments by those skilled in the art without departing from the spirit and scope of the invention. Accordingly, the scope of the invention is to be indicated by the appended claims.

Claims (10)

1. An adaptive sub-pixel rendering method, comprising the steps of:
step S1, respectively carrying out vertical frequency analysis and horizontal frequency analysis on an input image to obtain a horizontal high-low sharpness rendering ratio and a vertical high-low sharpness rendering ratio;
step S2, performing vertical high-low sharpness rendering on the input image, and performing vertical high-low sharpness rendering mixing based on the vertical high-low sharpness rendering proportion obtained by the vertical frequency analysis in the step S1 to obtain a vertical rendering result;
and step S3, performing horizontal high-low sharpness rendering on the vertical rendering result in the step S2, and performing horizontal high-low sharpness rendering mixing based on the horizontal high-low sharpness rendering proportion obtained by horizontal frequency analysis in the step S1 to obtain a final image rendering result.
2. An adaptive sub-pixel rendering method as claimed in claim 1, wherein: in step S1, a frequency detection filter is used to perform frequency analysis on the input image.
3. An adaptive sub-pixel rendering method as claimed in claim 2, wherein for horizontal frequency analysis, step S1 further comprises:
step S100, the input image and two horizontal frequency detection filters are respectively convolved, and convolution results are respectively recorded as h_cI0 and h_cI1;
step S101, correspondingly subtracting the two convolution images h_cI0 and h_cI1 obtained in the step S100 to obtain a difference image h_dI;
step S102, calculating the horizontal high sharpness rendered proportional image of the input image as h_ wI according to the difference image h_dI of the two convolution results.
4. An adaptive sub-pixel rendering method as claimed in claim 2, wherein for vertical frequency analysis, step S1 further comprises:
step S100, the input image is respectively convolved with two vertical frequency detection filters, and convolution results are respectively recorded as v_cI0 and v_cI1;
step S101, correspondingly subtracting the two convolution images v_cI0 and v_cI1 obtained in the step S100 to obtain a difference image v_dI;
step S102, calculating the vertical high sharpness rendered proportional image of the input image as v_ wI according to the difference image v_dI of the two convolution results.
5. The adaptive sub-pixel rendering method as claimed in claim 2, wherein the step S2 further comprises:
step S200, convolving each channel image of the input image with a vertical high-low sharpness rendering filter respectively to obtain a result of vertical high-low sharpness rendering of each channel image;
step S201, after calculating the vertical high-low sharpness rendering result of each channel image, based on the vertical high-low sharpness rendering ratio obtained by the vertical frequency analysis in step S1, mixing the vertical high-low sharpness rendering results of each channel image to obtain the sub-pixel vertical rendering result.
6. The adaptive sub-pixel rendering method as claimed in claim 5, wherein the step S3 further comprises:
step S300, convolving each channel image after vertical rendering with a horizontal high-low sharpness rendering filter respectively to obtain a result of horizontal high-low sharpness rendering of each channel image;
step S301, after the horizontal high-low sharpness rendering results of each channel image are calculated, the horizontal high-low sharpness rendering results are mixed based on the horizontal high-low sharpness rendering ratio obtained by the horizontal frequency analysis in step S1, so as to obtain the final image rendering result.
7. An adaptive sub-pixel rendering apparatus, comprising:
the frequency analysis module is used for respectively carrying out vertical frequency analysis and horizontal frequency analysis on the input image to obtain a horizontal high-low sharpness rendered proportional image and a vertical high-low sharpness rendered proportional image;
the vertical rendering module is used for performing vertical high-low sharpness rendering on the input image, and performing vertical high-low sharpness rendering mixing based on the vertical high-low sharpness rendering proportion obtained by vertical frequency analysis in the frequency analysis module to obtain a vertical rendering result;
and the horizontal rendering module is used for performing horizontal high-low sharpness rendering on the vertical rendering result of the vertical rendering module, and performing horizontal high-low sharpness rendering mixing based on the horizontal high-low sharpness rendering proportion obtained by horizontal frequency analysis in the frequency analysis module to obtain a final image rendering result.
8. An adaptive sub-pixel rendering apparatus as claimed in claim 7, wherein, for both the vertical frequency analysis and the horizontal frequency analysis, the frequency analysis module convolves two filters with the input image respectively, and the high-low sharpness rendering proportion is calculated from the difference of the two convolution results.
9. The adaptive sub-pixel rendering device of claim 8, wherein the vertical rendering module is specifically configured to:
convolving each channel image of the input image with a vertical high-low sharpness rendering filter respectively to obtain a result of vertical high-low sharpness rendering of each channel image;
and after the vertical high-low sharpness rendering result of each channel image is calculated, mixing the vertical high-low sharpness rendering results of each channel image based on the vertical high-low sharpness rendering proportion obtained by the vertical frequency analysis in the frequency analysis module, so as to obtain the sub-pixel vertical rendering result.
10. The adaptive sub-pixel rendering device of claim 8, wherein the horizontal rendering module is specifically configured to:
convolving each channel image after vertical rendering with a horizontal high-low sharpness rendering filter respectively to obtain a result of horizontal high-low sharpness rendering of each channel image;
and after the horizontal high-low sharpness rendering results of each channel image are calculated, the horizontal high-low sharpness rendering results are mixed based on the horizontal high-low sharpness rendering ratio obtained by the horizontal frequency analysis in the frequency analysis module, so as to obtain the final image rendering result.
CN202011276767.9A 2020-11-16 2020-11-16 Self-adaptive sub-pixel rendering method and device Active CN112270738B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011276767.9A CN112270738B (en) 2020-11-16 2020-11-16 Self-adaptive sub-pixel rendering method and device

Publications (2)

Publication Number Publication Date
CN112270738A CN112270738A (en) 2021-01-26
CN112270738B (en) 2024-01-26

Family

ID=74340579

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011276767.9A Active CN112270738B (en) 2020-11-16 2020-11-16 Self-adaptive sub-pixel rendering method and device

Country Status (1)

Country Link
CN (1) CN112270738B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1871634A (en) * 2003-10-28 2006-11-29 Clairvoyante, Inc. System and method for performing image reconstruction and subpixel rendering to effect scaling for multi-mode display
JP2014082768A (en) * 2013-11-27 2014-05-08 Sony Corp Imaging apparatus and control method thereof
CN103974011A (en) * 2013-10-21 2014-08-06 浙江大学 Projection image blurring eliminating method
CN110047417A (en) * 2019-04-24 2019-07-23 上海兆芯集成电路有限公司 Sub-pixel rendering method and device
CN111861949A (en) * 2020-04-21 2020-10-30 北京联合大学 Multi-exposure image fusion method and system based on generation countermeasure network

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
WO2005099281A2 (en) * 2004-03-30 2005-10-20 Cernium, Inc. Quality analysis in imaging
US20180137602A1 (en) * 2016-11-14 2018-05-17 Google Inc. Low resolution rgb rendering for efficient transmission

Non-Patent Citations (1)

Title
Image fusion algorithm based on moment of inertia; Chen Yanyan; Wang Yuanqing; Computer Measurement & Control (No. 06); full text *

Also Published As

Publication number Publication date
CN112270738A (en) 2021-01-26

Similar Documents

Publication Publication Date Title
CN104299592B (en) Liquid crystal panel and driving method thereof
CN108766372B (en) Method for improving mura phenomenon of display panel
KR100505681B1 (en) Interpolator providing for high resolution by interpolation with adaptive filtering for Bayer pattern color signal, digital image signal processor comprising it, and method thereof
CN101562002B (en) Controller, hold-type display device, electronic apparatus and signal adjusting method
US6256068B1 (en) Image data format conversion apparatus
US8594466B2 (en) Image data converting device, method for converting image data, program and storage medium
US9368088B2 (en) Display, image processing unit, image processing method, and electronic apparatus
US20050243109A1 (en) Method and apparatus for converting a color image
US7508448B1 (en) Method and apparatus for filtering video data using a programmable graphics processor
WO2016106865A1 (en) Sub-pixel compensation and colouring method for rgbw display device based on edge pixel detection
CN104485077A (en) Liquid crystal display panel and driving method thereof
JP2017538148A (en) Liquid crystal panel and pixel unit setting method
GB2543978A (en) Image display method and display system
CN106328089A (en) Pixel driving method
US20070103588A1 (en) System and method for adjacent field comparison in video processing
CN102542998B (en) Gamma correction method
CN104680518A (en) Blue screen image matting method based on chroma overflowing processing
CN112270738B (en) Self-adaptive sub-pixel rendering method and device
CN104461441A (en) Rendering method, rendering device and display device
TWI544785B (en) Image downsampling apparatus and method
US10290252B2 (en) Image display method, image display apparatus and delta pixel arrangement display device
US6697540B1 (en) Method for digital image interpolation and sharpness enhancement
Yoshiyama et al. 19.5 L: LateNews Paper: A New Advantage of MultiPrimaryColor Displays
CN101068368B (en) Colour changing device and method for multi-primary colours type displaying equipment
US7750974B2 (en) System and method for static region detection in video processing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant