CN111275793B - Text rendering method and device, electronic equipment and storage medium - Google Patents

Text rendering method and device, electronic equipment and storage medium Download PDF

Info

Publication number
CN111275793B
CN111275793B CN201811480908.1A CN201811480908A CN111275793B CN 111275793 B CN111275793 B CN 111275793B CN 201811480908 A CN201811480908 A CN 201811480908A CN 111275793 B CN111275793 B CN 111275793B
Authority
CN
China
Prior art keywords
rendered
sub
pixel
region
text
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811480908.1A
Other languages
Chinese (zh)
Other versions
CN111275793A (en
Inventor
邓斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Kingsoft Office Software Inc
Zhuhai Kingsoft Office Software Co Ltd
Guangzhou Kingsoft Mobile Technology Co Ltd
Original Assignee
Beijing Kingsoft Office Software Inc
Zhuhai Kingsoft Office Software Co Ltd
Guangzhou Kingsoft Mobile Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Kingsoft Office Software Inc, Zhuhai Kingsoft Office Software Co Ltd, Guangzhou Kingsoft Mobile Technology Co Ltd filed Critical Beijing Kingsoft Office Software Inc
Priority to CN201811480908.1A priority Critical patent/CN111275793B/en
Publication of CN111275793A publication Critical patent/CN111275793A/en
Application granted granted Critical
Publication of CN111275793B publication Critical patent/CN111275793B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/002D [Two Dimensional] image generation
    • G06T11/60Editing figures and text; Combining figures or text
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/002D [Two Dimensional] image generation
    • G06T11/001Texturing; Colouring; Generation of texture or colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Controls And Circuits For Display Device (AREA)
  • Facsimile Image Signal Circuits (AREA)

Abstract

The embodiment of the invention provides a text rendering method, a text rendering device, electronic equipment and a storage medium. The scheme is as follows: the method comprises the steps of obtaining a region to be rendered containing characters to be rendered, dividing each pixel point in the region to be rendered to obtain a plurality of sub-pixel points corresponding to the pixel point, determining the plurality of sub-pixel points at the boundary positions of the character region and the background region in the region to be rendered as the sub-pixel points to be rendered, and rendering the plurality of sub-pixel points to be rendered to obtain the rendered characters corresponding to the characters to be rendered. According to the method provided by the embodiment of the invention, the pixel points are divided into a plurality of sub-pixel points with smaller areas, the character areas and the background areas in the areas to be rendered where the characters to be rendered are located are rendered based on the sub-pixel points, so that the jaggy phenomenon of the edges of the characters is weakened, the transition between the character areas and the background areas is smoother, and the visual effect is enhanced.

Description

Text rendering method and device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a text rendering method, a text rendering device, an electronic device, and a storage medium.
Background
In computer graphics, the most ideal display effect of each word is represented by a vector graph, and when the word is displayed on a screen, the vector graph corresponding to each word can be represented by a pixel point through a rasterization processing technology. In the process, the transition between the characters and the background is smoother by the character rendering technology, and the jaggy phenomenon of the characters is weakened.
Currently, when text in a document is rendered, two rendering methods, namely black and white rendering and gray level rendering, are generally adopted. The black-and-white rendering is to set the pixel value of the pixel point in the uncovered area to the pixel value representing white by setting the pixel value of the pixel point in the covered area of the vector graphics to the pixel value representing black. The gray level rendering is to determine the gray level value of each pixel point according to the area covered by the vector graphics in the screen. As shown in FIG. 1-a, FIG. 1-a is a vector graphic of the letter "e". Black and white rendering as shown in fig. 1-b, the gray value representing black is 0, the gray value representing white is 255, the gray value of the pixel point in the covered area of the vector graphic "e" is set to 0, and the gray value of the pixel point in the uncovered area of the vector graphic "e" is set to 255. The gray rendering is shown in fig. 1-c, the pixel point completely covered by the vector graphic "e" is black, and the gray value of the pixel point is 0; the pixel points which are not completely covered by the vector graph 'e' are gray with different degrees, and the gray values of the pixel points are between 0 and 255; the pixel not covered by the vector graphic "e" is white, and the gray value of the pixel is 255.
When the gray level rendering or the black-and-white rendering is adopted to render the characters in the document, obvious jazziness phenomenon still exists at the edges of the characters, and the transition between the characters and the background is still not smooth enough.
Disclosure of Invention
The embodiment of the invention aims to provide a character rendering method, a character rendering device, electronic equipment and a storage medium, so as to weaken the jagging phenomenon of character edges, enable transition between a character area and a background area to be smoother and enhance visual effect. The specific technical scheme is as follows:
the embodiment of the invention provides a text rendering method, which comprises the following steps:
acquiring a region to be rendered containing characters to be rendered;
dividing each pixel point in the region to be rendered to obtain a plurality of sub-pixel points corresponding to the pixel point;
determining a plurality of sub-pixel points at the boundary positions of the text region and the background region in the region to be rendered, and taking the sub-pixel points as the sub-pixel points to be rendered; the text region is a region covered by the text to be rendered in the region to be rendered, and the background region is a region uncovered by the text to be rendered in the region to be rendered;
rendering is carried out on the plurality of sub-pixel points to be rendered, and rendering words corresponding to the words to be rendered are obtained.
Optionally, the step of dividing each pixel point in the region to be rendered to obtain a plurality of sub-pixel points corresponding to the pixel point includes:
and dividing each pixel point in the region to be rendered into a preset number of sub-pixel points in a preset direction to obtain a preset number of sub-pixel points corresponding to the pixel point.
Optionally, the preset direction includes a horizontal direction and/or a vertical direction.
Optionally, the method further comprises:
determining a pixel value of each pixel point in the region to be rendered;
determining an initial sub-pixel value of each sub-pixel corresponding to each pixel based on the pixel value of each pixel;
the step of determining a plurality of sub-pixel points at the boundary position of the text region and the background region in the region to be rendered as the sub-pixel points to be rendered includes:
and determining a plurality of sub-pixel points at the boundary position of the text region and the background region in the region to be rendered by utilizing a preset residual function according to the initial sub-pixel value of each sub-pixel point in the region to be rendered, so as to obtain the sub-pixel points to be rendered.
Optionally, the preset residual function E (θ) is:
Wherein M (x, y, θ) =A-B+.g (w, σ) ×S (x '-w, y') dw, w 2 =u 2 +v 2 ,/>
w is a preset window function, (x, y) is the coordinate of a pixel point, M (x, y, theta) is the predicted pixel value of the pixel point, theta is a model parameter vector, T is a transposition operation, I (x, y) is the pixel value of the pixel point, A is the initial sub-pixel value of the pixel lighting area, B is the peak value of the initial sub-pixel value of the pixel point dark area, (u, v) is the floating point coordinate of the sub-pixel point corresponding to the dark area, W is the square value of the sum of squares of the coordinate values u and v, sigma is a preset standard deviation, G (W, sigma) is a preset Gaussian function, exp is an exponential function based on e, S (x '-W, y') is a two-dimensional step function, and (x ', y') is the coordinate corresponding to the coordinate system where (x, y) is located,the angle between the X-axis direction of the coordinate system where (u, v) is located and the X-axis direction of the coordinate system where (X, y) is located is convolution operation, and [ pi ] dw is integral operation to w.
Optionally, the step of rendering the plurality of sub-pixel points to be rendered to obtain rendered text corresponding to the text to be rendered includes:
and re-determining the sub-pixel value of each sub-pixel point to be rendered by utilizing a low-pass window filter with a preset coefficient based on the initial sub-pixel values of the plurality of sub-pixel points to be rendered, so as to obtain the rendered text corresponding to the text to be rendered.
Optionally, the preset coefficient is:
wherein i is the number of sub-pixel points obtained by dividing one pixel point in a preset direction.
The embodiment of the invention also provides a text rendering device, which comprises:
the first acquisition module is used for acquiring a region to be rendered containing characters to be rendered;
the second acquisition module is used for dividing each pixel point in the region to be rendered to obtain a plurality of sub-pixel points corresponding to the pixel point;
the first determining module is used for determining a plurality of sub-pixel points at the boundary positions of the text area and the background area in the area to be rendered, and the sub-pixel points are used as sub-pixel points to be rendered; the text region is a region covered by the text to be rendered in the region to be rendered, and the background region is a region uncovered by the text to be rendered in the region to be rendered;
and the rendering module is used for rendering the plurality of sub-pixel points to be rendered to obtain rendering characters corresponding to the characters to be rendered.
Optionally, the second obtaining module is specifically configured to divide, for each pixel point in the area to be rendered, the pixel point into a preset number of sub-pixel points in a preset direction, so as to obtain a preset number of sub-pixel points corresponding to the pixel point.
Optionally, the preset direction includes a horizontal direction and/or a vertical direction.
Optionally, the apparatus further includes:
a second determining module, configured to determine a pixel value of each pixel point in the area to be rendered; determining an initial sub-pixel value of each sub-pixel corresponding to each pixel based on the pixel value of each pixel;
the first determining module is specifically configured to determine, according to an initial subpixel value of each subpixel in the region to be rendered, a plurality of subpixels at a boundary position between a text region and a background region in the region to be rendered by using a preset residual function, so as to obtain the subpixels to be rendered.
Optionally, the preset residual function E (θ) is:
wherein M (x, y, θ) =A-B+.g (w, σ) ×S (x '-w, y') dw, w 2 =u 2 +v 2 ,/>
w is a preset window function, (x, y) is the coordinate of a pixel point, M (x, y, theta) is the predicted pixel value of the pixel point, theta is a model parameter vector, T is a transposition operation, I (x, y) is the pixel value of the pixel point, A is the initial sub-pixel value of the pixel lighting area, B is the peak value of the initial sub-pixel value of the pixel point dark area, (u, v) is the floating point coordinate of the sub-pixel point corresponding to the dark area, W is the square value of the sum of squares of the coordinate values u and v, sigma is a preset standard deviation, G (W, sigma) is a preset Gaussian function, exp is an exponential function based on e, S (x '-W, y') is a two-dimensional step function, and (x ', y') is the coordinate corresponding to the coordinate system where (x, y) is located, The angle between the X-axis direction of the coordinate system where (u, v) is located and the X-axis direction of the coordinate system where (X, y) is located is convolution operation, and [ pi ] dw is integral operation to w.
Optionally, the rendering module is specifically configured to re-determine, based on initial sub-pixel values of a plurality of sub-pixel points to be rendered, the sub-pixel value of each sub-pixel point to be rendered by using a low-pass window filter with a preset coefficient, so as to obtain a rendering text corresponding to the text to be rendered.
Optionally, the preset coefficient is:
wherein i is the number of sub-pixel points obtained by dividing one pixel point in a preset direction.
The embodiment of the invention also provides electronic equipment, which comprises a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory are communicated with each other through the communication bus;
a memory for storing a computer program;
and the processor is used for realizing any one of the text rendering method steps when executing the program stored in the memory.
The embodiment of the invention also provides a computer readable storage medium, wherein the computer readable storage medium stores a computer program, and the computer program is executed by a processor to perform any one of the text rendering method steps.
The embodiment of the invention also provides a computer program product containing instructions, which when run on a computer, cause the computer to execute any of the text rendering methods.
According to the character rendering method, the device, the electronic equipment and the storage medium, a region to be rendered containing characters to be rendered can be obtained, each pixel point in the region to be rendered is divided to obtain a plurality of sub-pixel points corresponding to the pixel point, the plurality of sub-pixel points at the boundary positions of the character region and the background region in the region to be rendered are determined to serve as the sub-pixel points to be rendered, and rendering characters corresponding to the characters to be rendered are obtained. The text region is a region covered by the text to be rendered in the region to be rendered, and the background region is a region uncovered by the text to be rendered in the region to be rendered. According to the method provided by the embodiment of the invention, the pixel points are divided into a plurality of sub-pixel points with smaller areas, the character areas and the background areas in the areas to be rendered where the characters to be rendered are located are rendered based on the sub-pixel points, so that the jaggy phenomenon of the edges of the characters is weakened, the transition between the character areas and the background areas is smoother, and the visual effect is enhanced.
Of course, it is not necessary for any one product or method of practicing the invention to achieve all of the advantages set forth above at the same time.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1-a is a vector graphic provided in an embodiment of the present invention;
FIG. 1-b is a black and white rendering diagram provided by an embodiment of the present invention;
FIG. 1-c is a gray scale rendering diagram provided by an embodiment of the present invention;
FIG. 2 is a flowchart of a text rendering method according to an embodiment of the present invention;
FIG. 3-a is a schematic diagram of a pixel in a region to be rendered according to an embodiment of the present invention;
FIG. 3-b is a first schematic diagram illustrating a pixel segmentation method according to an embodiment of the present invention;
fig. 3-c is a second schematic diagram of a pixel segmentation method according to an embodiment of the present invention;
FIG. 4-a is a schematic diagram illustrating the edge aliasing of a partial region of a text to be rendered;
FIG. 4-b is a schematic diagram of an edge aliasing phenomenon for rendering a local region of text
FIG. 5 is a schematic diagram of a subpixel rendering process according to an embodiment of the present invention;
FIG. 6-a is a schematic diagram of a 4-neighborhood rendering method according to an embodiment of the present invention;
FIG. 6-b is a schematic diagram of a D-neighborhood rendering method according to an embodiment of the present invention;
FIG. 6-c is a schematic diagram of an 8-neighborhood rendering method according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of a text rendering device according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
In order to solve the problems that the black-and-white rendering and the gray rendering cause edge jaggy phenomenon in the rendering of the Chinese characters in the document and the transition between the text area and the background area is not smooth enough, the embodiment of the invention provides a text rendering method. The method can be applied to any electronic device. In the text rendering method provided by the embodiment of the invention, a text region to be rendered containing text to be rendered can be obtained, each pixel point in the region to be rendered is divided to obtain a plurality of sub-pixel points corresponding to the pixel point, the plurality of sub-pixel points at the boundary positions of the text region and the background region in the region to be rendered are determined and used as the sub-pixel points to be rendered, and the plurality of sub-pixel points to be rendered are rendered to obtain the text to be rendered corresponding to the text to be rendered. The text region is a region covered by the text to be rendered in the region to be rendered, and the background region is a region uncovered by the text to be rendered in the region to be rendered.
According to the method provided by the embodiment of the invention, the pixel points are divided into a plurality of sub-pixel points with smaller areas, the character areas and the background areas in the areas to be rendered where the characters to be rendered are located are rendered based on the sub-pixel points, so that the jaggy phenomenon of the edges of the characters is weakened, the transition between the character areas and the background areas is smoother, and the visual effect is enhanced.
The following describes embodiments of the present invention in detail by way of specific examples.
Referring to fig. 2, fig. 2 is a flowchart of a text rendering method according to an embodiment of the present invention. The method comprises the following steps.
Step S201, a region to be rendered including the text to be rendered is obtained.
In this step, the region where the text to be rendered is located may be obtained from the target document as the region to be rendered. The target document may be a commonly used document type, such as a document of the type ". Doc", ". Txt", "pdf", or the like. The text to be rendered may be each text contained in the target document. The text to be rendered can be vector graphics or rendered text.
In one embodiment, a rectangular area tangential to the text to be rendered in the target document may be selected as the area where the text to be rendered is located, i.e., the area to be rendered. For example, in the target document, the area pixels where the text to be rendered is located are rectangular areas of 20×20, and the area to be rendered is the location where the rectangular areas of 20×20 are located.
In one embodiment of the present invention, the area to be rendered can be freely set according to actual requirements, as long as the text to be rendered is completely located in the area to be rendered. For example, for the text to be rendered with the size of 20×20, a rectangular area with the left and right edges of the 20×20 rectangular area respectively expanded by 24×20 sizes of 2 pixels may be selected as the area to be rendered. For another example, a rectangular region with the upper and lower edges of the 20×20 rectangular region respectively expanded by the size of 20×24 pixels of 2 pixels may be selected as the region to be rendered.
In the embodiment of the invention, the text to be rendered may be vector graphics as shown in fig. 1-a, or may be text rendered by other rendering methods as shown in fig. 1-b and fig. 1-c. The specific type of the text to be rendered is not specifically limited in the embodiment of the present invention.
Step S202, for each pixel point in the area to be rendered, dividing the pixel point to obtain a plurality of sub-pixel points corresponding to the pixel point.
In this step, for each pixel in the region to be rendered, the pixel may be divided into a plurality of sub-pixels. The area of the sub-pixel is smaller than the area of the pixel. And rendering is performed by utilizing the sub-pixel points, so that the edge jagging phenomenon of the rendering text corresponding to the text to be rendered can be weakened.
In an optional embodiment, for each pixel point in the area to be rendered, the pixel point is divided into a preset number of sub-pixel points in a preset direction, so as to obtain a preset number of sub-pixel points corresponding to the pixel point. The preset number may be an integer of 2, 3, 4, etc. In one embodiment, the preset direction may include a horizontal direction and/or a vertical direction.
In one example, a pixel is shown in FIG. 3-a. If the preset direction is the horizontal direction, the preset number is 3, and the pixel point is divided into 3 sub-pixel points in the horizontal direction, so as to obtain 3 sub-pixel points corresponding to the pixel point, as shown in fig. 3-b. If the preset direction is two directions of the horizontal direction and the vertical direction, the preset number is 4, and the pixel point is divided into 2×2=4 sub-pixel points in the horizontal direction and the vertical direction, so as to obtain 4 sub-pixel points corresponding to the pixel point, as shown in fig. 3-c.
Step S203, determining a plurality of sub-pixel points at the boundary positions of the text region and the background region in the region to be rendered, as the sub-pixel points to be rendered. The text region is a region covered by the text to be rendered in the region to be rendered, and the background region is a region uncovered by the text to be rendered in the region to be rendered.
In this step, for an area to be rendered, a text area and a background area may be included in the area to be rendered. And determining a plurality of sub-pixel points of the text region and the background region in the region to be rendered at the boundary position as sub-pixel points to be rendered. The method for determining the sub-pixel to be rendered includes, but is not limited to, edge detection methods such as a fitting method, a moment method, an interpolation method, and the like.
In one embodiment of the present invention, if the text to be rendered may be a vector graphic, the text region of the vector graphic may not cover the complete pixel at the edge positions of the text region and the background region. Based on the sub-pixel points corresponding to the pixel points, the edge detection algorithm can be utilized to accurately position the sub-pixel points at the boundary positions of the text region and the background region in the region to be rendered, namely the sub-pixel points to be rendered, so that the rendering of the text to be rendered is more accurate.
In one embodiment of the present invention, if the text to be rendered is a text in which the vector graphics is rasterized, the pixel value, brightness, etc. of each pixel point in the text region to be rendered are uniquely determined, and the sub-pixel points corresponding to the pixel point are the same in terms of sub-pixel value, brightness, etc. When determining the corresponding sub-pixel point to be rendered for the text to be rendered, determining that the difference value of the sub-pixel values of two adjacent sub-pixel points is larger than a first threshold value and/or the difference value of the brightness is larger than a second threshold value, and determining the boundary position of the text area and the background area by the two adjacent sub-pixel points to further determine the sub-pixel point to be rendered. For example, the text area is black, the background area is white, each sub-pixel point in the area to be rendered can be traversed, and a plurality of sub-pixel points in the transition area of two colors of black and white are determined to be used as the sub-pixel points to be rendered. For example, each pixel corresponds to 3 sub-pixels, and two adjacent sub-pixels can be determined to be white and black respectively in the horizontal direction, so that 5 sub-pixels can be taken from left to right with the two sub-pixels as centers, and sub-pixels to be rendered are obtained. At this time, the number of sub-pixel points to be rendered is 12 sub-pixel points.
Step S204, rendering is carried out on the plurality of sub-pixel points to be rendered, and rendering words corresponding to the words to be rendered are obtained.
In the step, rendering is carried out on a plurality of sub-pixel points to be rendered, and rendering characters corresponding to the characters to be rendered are obtained. Because the area of the sub-pixel point is smaller than that of the pixel point, the sub-pixel point to be rendered is rendered based on the sub-pixel point, the jagging phenomenon of the character edge is weakened, meanwhile, the transition between the background area and the character area in the area to be rendered is smoother, and the visual effect is enhanced.
For example, in the region to be rendered, the color corresponding to the text region is black, and the color of the background region is white. And rendering the plurality of sub-pixel points to be rendered, namely rendering each sub-pixel point to be rendered into gray with different degrees, so that the plurality of sub-pixel points to be rendered show a visual effect that the colors corresponding to the background areas gradually smooth to the colors corresponding to the text areas, and rendering text corresponding to the text to be rendered is obtained.
In addition, since each pixel point is divided into a plurality of sub-pixel points with smaller areas, the edge jaggy phenomenon of the rendering text corresponding to the text to be rendered is changed from the pixel point level to the sub-pixel point level, and the jaggy phenomenon is obviously weakened. Specifically, as shown in fig. 4-a, fig. 4-a is a schematic diagram of edge aliasing at a local area of a text to be rendered. Wherein each grid represents a pixel point. FIG. 4-b is a schematic diagram of edge aliasing at the same region in the rendered text corresponding to the text to be rendered in FIG. 4-a. Each pixel point in fig. 4-a is divided into 3 sub-pixel points in the horizontal direction. And rendering is carried out by adopting the text rendering method provided by the embodiment of the invention, so as to obtain the figure 4-b. The jagging of the text edges in fig. 4-b is significantly impaired compared to fig. 4-a.
In an alternative embodiment, after determining the plurality of sub-pixels corresponding to each pixel in the area to be rendered according to step S202, the initial sub-pixel value of each sub-pixel corresponding to each pixel may be determined based on the pixel value of each pixel.
In one embodiment, the RGB (Red Green Blue) value of each pixel is used to represent the pixel value of the pixel. And the RGB value of each sub-pixel corresponding to each pixel can be determined according to the RGB value of each pixel.
In one example, according to the RGB value of a pixel point being (255 ), the pixel point corresponds to 3 sub-pixel points, and the initial sub-pixel value of each sub-pixel point may be (255 ), or may be the gray value 255 corresponding to one of the three RGB color channels.
In one embodiment, the above pixel values and sub-pixel values may also be represented by, for example, a luminance value, HSV (Hu Saturation Value, hue saturation brightness), or the like, which is not particularly limited in the embodiment of the present invention.
In an embodiment of the present invention, according to the initial subpixel value, the embodiment of the present invention further provides a method for determining a subpixel point to be rendered. The following embodiments can be specifically adopted.
And determining a plurality of sub-pixel point positions corresponding to boundary positions of a text region and a background region in the text to be rendered according to a gradient descent method by utilizing a preset residual function according to the initial sub-pixel value, and obtaining the sub-pixel point to be rendered.
The above-mentioned preset residual function can be expressed as:
wherein M (x, y, θ) =A-B+.g (w, σ) ×S (x '-w, y') dw,
w is a preset window function, (x, y) is the coordinate of a pixel point, M (x, y, theta) is the predicted pixel value of the pixel point, theta is a model parameter vector, T is a transposition operation, I (x, y) is the pixel value of the pixel point, A is the initial sub-pixel value of the pixel lighting area, B is the peak value of the initial sub-pixel value of the pixel point dark area, (u, v) is the floating point coordinate of the sub-pixel point corresponding to the dark area, W is the open square value of the sum of squares of the coordinate values u and v, sigma is a preset standard deviation, G (W, sigma) is a preset Gaussian function, exp is the base of eAn exponential function, S (x '-w, y') is a two-dimensional step function, and (x ', y') is (u, v) and converted into a corresponding coordinate in a coordinate system where (x, y) is located,the angle between the X-axis direction of the coordinate system where (u, v) is located and the X-axis direction of the coordinate system where (X, y) is located is convolution operation, and [ pi ] dw is integral operation to w.
The method for determining the sub-pixel point to be rendered by using the residual function belongs to one of the fitting methods in the edge detection algorithm. Fitting a step phenomenon at the boundary position of the text region and the background region by using an obtained model of convolution of the two-dimensional step function and a preset Gaussian function. After the boundary positions of the text area and the background area are determined, a plurality of sub-pixel points to be rendered can be accurately determined.
In an optional embodiment, in the step S204, the sub-pixel value of each sub-pixel point to be rendered may be redetermined by using a low-pass window filter with a preset coefficient based on the initial sub-pixel values of the plurality of sub-pixel points to be rendered, so as to obtain the rendered text corresponding to the text to be rendered.
In one example, the preset coefficient may be expressed as:
wherein i is the number of sub-pixel points obtained by dividing one pixel point in a preset direction.
For example, if each pixel is divided into 3 sub-pixels in the horizontal direction, the preset coefficient may be expressed asNamely +.>For another example, if each pixel is divided into 4 sub-pixels in the horizontal direction, the preset coefficient may be expressed as +.>Namely +.>
In the embodiment of the invention, after the number of sub-pixel points obtained by dividing one pixel point in the preset direction is i, the preset coefficient can be determined asAnd then, according to the initial sub-pixel value of each sub-pixel point to be rendered in the plurality of sub-pixel points to be rendered, the adjacent sub-pixel points are rendered by utilizing a low-pass window filter with a preset coefficient, and the sub-pixel value of each sub-pixel point to be rendered is redetermined, so that the rendering text corresponding to the text to be rendered is obtained.
Specifically, fig. 5 is an illustration of sub-pixel rendering according to an embodiment of the present invention. Wherein, the number i of sub-pixel points obtained by dividing one pixel point in the horizontal direction is 3, and the preset coefficient is determined asA1-A3 are 3 sub-pixel points obtained by dividing the same pixel point, A4-A6 are 3 sub-pixel points obtained by dividing the same pixel point, and A7-A9 are 3 sub-pixel points obtained by dividing the same sub-pixel point. a, a 1 -a 2 The initial subpixel values corresponding to the subpixels A1-A9. The initial subpixel value of A5 is a 5
If the sub-pixel to be rendered is A3-A7, the sub-pixel to be rendered A5 is taken as an example for illustration. Initial subpixel value a according to A3-A7 3 -a 7 Using a preset coefficient asRendering A5. For the sub-pixel point A5 to be rendered, respectively, a 3 And a 7 Is->Rendering to A5, a 4 And a 6 Is->Render to A5, a 5 Is->Rendering to A5. And so on, according to the rendering method of the A5, each sub-pixel point to be rendered is rendered by using the initial sub-pixel value of the sub-pixel point adjacent to the sub-pixel point to be rendered, and the sub-pixel value of each sub-pixel point to be rendered after being rendered can be redetermined.
Specifically, the subpixel values of A3 after rendering may be determined to be represented as:
the subpixel values of A4 after rendering may be determined to be represented as:
the subpixel value of A5 after rendering may be determined to be represented as:
the subpixel values of A6 after rendering may be determined to be represented as:
the subpixel value of A7 after rendering may be determined to be represented as:
the sub-pixel values of the sub-pixel points to be rendered are determined again in the mode, and then the rendering characters of the characters to be rendered are obtained.
The above-mentioned redetermining of the subpixel values of the subpixel points to be rendered is illustrated. Still referring to FIG. 5, wherein A1-A4 are sub-pixel points of the background area, and the initial sub-pixel value of each sub-pixel point is 255; A5-A9 are sub-pixel points of the text area, and the initial sub-pixel value of each sub-pixel point is 0.
If A3-A7 are sub-pixel points to be rendered, the sub-pixel value V after A3 rendering A3 The method comprises the following steps:
sub-pixel value V after A4 rendering A4 The method comprises the following steps:
sub-pixel value V after A5 rendering A5 The method comprises the following steps:
sub-pixel value V after A6 rendering A6 The method comprises the following steps:
sub-pixel value V after A7 rendering A7 The method comprises the following steps:
according to the sub-pixel values of the sub-pixel points to be rendered, the sub-pixel values of the sub-pixel points to be rendered after A3-A7 are sequentially indicated as 226, 170, 85, and 0. The sub-pixel values of the sub-pixel points to be rendered after rendering are sequentially decreased until the sub-pixel values are the same as the initial sub-pixel values of the text area. The visual effect presented by the rendered subpixel values is that the corresponding colors of A3-A7 gradually transition from gray to black. Therefore, by adopting the character rendering method, the phenomenon that the color of the background area gradually transits to the color of the character area is displayed in the area to be rendered of the character to be rendered, and the transition between the background area and the character area is smoother.
In the embodiment of the present invention, the preset coefficients may be freely set according to the number of sub-pixel points to be rendered, the user requirements, and the like, which are not limited herein. For example, each pixel is divided into 3 sub-pixels, and the preset coefficient may beCan also be +.>I.e. < ->Even +.>
Still referring to FIG. 5, if the predetermined coefficient isAnd A2 is a sub-pixel point to be rendered, the sub-pixel value after rendering A2 can be determined by the initial sub-pixel values of A1, A2, A3 and A4. For example, the A2 rendered subpixel value V A2 The method comprises the following steps: />
In one embodiment of the present invention, after determining the subpixel value of each subpixel to be rendered, if the difference between the subpixel values of two adjacent subpixels to be rendered is smaller than the preset difference, the subpixel values of the two subpixels to be rendered may be adjusted, so that the subpixel values of each two subpixels to be rendered are greater than the preset difference, and further, the transition between the text region and the background region of the rendered text is smoother.
Still referring to fig. 5, if the sub-pixel values of the sub-pixel points A3-A7 to be rendered are 230, 170, 100, 80, 30 in sequence, and the preset difference is 30, the difference between the sub-pixel value 100 of the sub-pixel points A5 and the sub-pixel value 80 of the sub-pixel points A6 is 20, 20<30, and the difference between the sub-pixel value 80 of the sub-pixel points A6 and the sub-pixel value 30 of the sub-pixel points A7 is 50, 50>30. Because the number of sub-pixel points to be rendered is determined, in order to make the transition among A5, A6 and A7 smoother, the sub-pixel values after A6 rendering can be adjusted so that the difference among the sub-pixel values after A5, A6 and A7 rendering is greater than 30. The A6 rendered sub-pixel value may be adjusted to 65, for example. The difference value between each sub-pixel point to be rendered is larger than a preset value, so that the transition between the text area of the rendered text and the background area is smoother.
In one embodiment of the present invention, other adjacent sub-pixel points may be used to render each sub-pixel point to be rendered. Fig. 6-a is a schematic diagram of a 4-neighborhood rendering method according to an embodiment of the present invention, fig. 6-b is a schematic diagram of a D-neighborhood rendering method according to an embodiment of the present invention, and fig. 6-c is a schematic diagram of an 8-neighborhood rendering method according to an embodiment of the present invention. The gray sub-pixel points in the figure represent sub-pixel points to be rendered. In the embodiment of the invention, the manner of rendering the sub-pixel points to be rendered by adopting the adjacent sub-pixel points is not particularly limited.
Specifically, in FIG. 6-a, each cell represents one subpixel, and the gray subpixel is the subpixel to be rendered. The initial subpixel value of each subpixel point is its corresponding label, e.g., the initial subpixel value of the subpixel point to be rendered in FIG. 6-a is B3.
As shown in fig. 6-a, the rendered subpixel values of the subpixels to be rendered are commonly determined by the initial subpixel values of 5 subpixels in the graph. If the preset coefficient isThe rendered subpixel values of the subpixel points to be rendered may be: sub-pixels to be renderedInitial subpixel values of two adjacent subpixels around a point +. >Initial sub-pixel value of two sub-pixel points adjacent to each other up and down +.>And +.o. of the initial subpixel point of the subpixel point to be rendered>The sum value of the sub-pixel values after the sub-pixel points to be rendered are: />
As shown in fig. 6-b, the rendered subpixel values of the subpixels to be rendered are determined jointly by the initial subpixel values of 5 subpixels in the graph. If the preset coefficient isThe rendered subpixel values of the subpixel points to be rendered may be: initial subpixel values of two subpixels to the left of the subpixel to be rendered +.>Initial subpixel values of two subpixels on the right side +.>And +.o. of the initial subpixel value of the subpixel point to be rendered>Sum of (d) and (d). The subpixel values after the subpixel points to be rendered are as follows: />
As shown in fig. 6-c, the rendered sub-pixel values of the sub-pixel points to be rendered are represented byThe initial subpixel values of 9 subpixels in the graph are determined together. If the preset coefficient isThe sub-pixel value after the sub-pixel point to be rendered is the initial sub-pixel value of each sub-pixel point in the graph>The sum value of the sub-pixel values after the sub-pixel points to be rendered are:
in summary, by the text rendering method provided by the embodiment of the invention, the pixel point can be divided into a plurality of sub-pixel points with smaller areas, and the text region and the background region in the region to be rendered where the text to be rendered is located are rendered based on the sub-pixel points, so that the jaggy phenomenon of the text edge is weakened, the transition between the text region and the background region is smoother, and the visual effect is enhanced.
Based on the same inventive concept, according to the text rendering method provided by the embodiment of the invention, the embodiment of the invention also provides a text rendering device. Referring to fig. 7, fig. 7 is a schematic structural diagram of a text rendering device according to an embodiment of the present invention. The device comprises the following modules.
The first obtaining module 701 is configured to obtain a to-be-rendered area including the to-be-rendered text.
The second obtaining module 702 is configured to divide each pixel point in the region to be rendered, so as to obtain a plurality of sub-pixel points corresponding to the pixel point.
A first determining module 703, configured to determine a plurality of sub-pixel points at boundary positions of a text region and a background region in a region to be rendered, as sub-pixel points to be rendered; the text area is an area covered by text to be rendered in the area to be rendered, and the background area is an area uncovered by text to be rendered in the area to be rendered.
And the rendering module 704 is configured to render the plurality of sub-pixel points to be rendered to obtain rendered text corresponding to the text to be rendered.
Optionally, the second obtaining module 702 may be specifically configured to divide, for each pixel point in the area to be rendered, the pixel point into a preset number of sub-pixel points in a preset direction, so as to obtain a preset number of sub-pixel points corresponding to the pixel point.
Optionally, the preset direction may include a horizontal direction and/or a vertical direction.
Optionally, the text rendering device may further include:
the second determining module is used for determining the pixel value of each pixel point in the area to be rendered; and determining an initial sub-pixel value of each sub-pixel corresponding to each pixel based on the pixel value of each pixel.
The first determining module 703 may be specifically configured to determine, according to an initial subpixel value of each subpixel in the region to be rendered, a plurality of subpixels at a boundary position between a text region and a background region in the region to be rendered by using a preset residual function, so as to obtain the subpixels to be rendered.
Optionally, the preset residual function E (θ) is:
wherein M (x, y, θ) =A-B+.g (w, σ) ×S (x '-w, y') dw,
w is a preset window function, (x, y) is the coordinate of a pixel point, M (x, y, theta) is the predicted pixel value of the pixel point, theta is a model parameter vector, T is a transposition operation, I (x, y) is the pixel value of the pixel point, A is the initial sub-pixel value of the pixel lighting area, B is the peak value of the initial sub-pixel value of the dark area of the pixel point, and (u, v) is the dark area pairThe floating point coordinates of the sub-pixel points are calculated by converting the floating point coordinates, w being the open square value of the sum of squares of the coordinate values u and v, sigma being the preset standard deviation, G (w, sigma) being the preset Gaussian function, exp being the exponential function based on e, S (x '-w, y') being the two-dimensional step function, (x ', y') being (u, v) to the corresponding coordinates in the coordinate system where (x, y) is located, The angle between the X-axis direction of the coordinate system where (u, v) is located and the X-axis direction of the coordinate system where (X, y) is located is convolution operation, and [ pi ] dw is integral operation to w.
Optionally, the rendering module 704 may be specifically configured to redetermine the sub-pixel value of each sub-pixel point to be rendered by using a low-pass window filter with a preset coefficient based on the initial sub-pixel values of the plurality of sub-pixel points to be rendered, so as to obtain a rendering text corresponding to the text to be rendered.
Alternatively, the preset coefficient may be expressed as:
wherein i is the number of sub-pixel points obtained by dividing one pixel point in a preset direction.
According to the character rendering device provided by the embodiment of the invention, the pixel points can be divided into a plurality of sub-pixel points with smaller areas, the character areas and the background areas in the areas to be rendered where the characters to be rendered are located are rendered based on the sub-pixel points, the jaggy phenomenon of the edges of the characters is weakened, the transition between the character areas and the background areas is smoother, and the visual effect is enhanced.
Based on the same inventive concept, the embodiment of the invention also provides an electronic device according to the text rendering method provided by the embodiment of the invention. Referring to fig. 8, fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present invention. The electronic device comprises a processor 801, a communication interface 802, a memory 803 and a communication bus 804, wherein the processor 801, the communication interface 802 and the memory 803 are in communication with each other through the communication bus 804;
A memory 803 for storing a computer program;
the processor 801, when executing the program stored in the memory 803, implements the following steps:
acquiring a region to be rendered containing characters to be rendered;
dividing each pixel point in the region to be rendered to obtain a plurality of sub-pixel points corresponding to the pixel point;
determining a plurality of sub-pixel points at the boundary positions of a text region and a background region in a region to be rendered, and taking the sub-pixel points as the sub-pixel points to be rendered; the text area is an area covered by text to be rendered in the area to be rendered, and the background area is an area uncovered by text to be rendered in the area to be rendered;
rendering is carried out on the plurality of sub-pixel points to be rendered, and rendering words corresponding to the words to be rendered are obtained.
According to the electronic equipment provided by the embodiment of the invention, the pixel points can be divided into a plurality of sub-pixel points with smaller areas, the character areas and the background areas in the areas to be rendered where the characters to be rendered are located are rendered based on the sub-pixel points, the jaggy phenomenon of the edges of the characters is weakened, the transition between the character areas and the background areas is smoother, and the visual effect is enhanced.
The communication bus mentioned above for the electronic devices may be a peripheral component interconnect standard (Peripheral Component Interconnect, PCI) bus or an extended industry standard architecture (Extended Industry Standard Architecture, EISA) bus, etc. The communication bus may be classified as an address bus, a data bus, a control bus, or the like. For ease of illustration, the figures are shown with only one bold line, but not with only one bus or one type of bus.
The communication interface is used for communication between the electronic device and other devices.
The Memory may include random access Memory (Random Access Memory, RAM) or may include Non-Volatile Memory (NVM), such as at least one disk Memory. Optionally, the memory may also be at least one memory device located remotely from the aforementioned processor.
The processor may be a general-purpose processor, including a central processing unit (Central Processing Unit, CPU), a network processor (Network Processor, NP), etc.; but also digital signal processors (Digital Signal Processing, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), field programmable gate arrays (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components.
Based on the same inventive concept, according to the text rendering method provided by the embodiment of the invention, the embodiment of the invention also provides a computer readable storage medium, and a computer program is stored in the computer readable storage medium, and when the computer program is executed by a processor, the steps of any one of the text rendering methods are realized.
Based on the same inventive concept, according to the text rendering method provided by the above embodiment of the present invention, the embodiment of the present invention further provides a computer program product containing instructions, which when run on a computer, cause the computer to execute any one of the text rendering methods of the above embodiment.
In the above embodiments, it may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When loaded and executed on a computer, produces a flow or function in accordance with embodiments of the present invention, in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable apparatus. The computer instructions may be stored in or transmitted from one computer-readable storage medium to another, for example, by wired (e.g., coaxial cable, optical fiber, digital Subscriber Line (DSL)), or wireless (e.g., infrared, wireless, microwave, etc.). The computer readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server, data center, etc. that contains an integration of one or more available media. The usable medium may be a magnetic medium (e.g., floppy Disk, hard Disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., solid State Disk (SSD)), etc.
It is noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
In this specification, each embodiment is described in a related manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment mainly describes differences from other embodiments. In particular, for embodiments of the apparatus, electronic device, computer readable storage medium, and computer program product, which are substantially similar to method embodiments, the description is relatively simple, and reference is made to the section of the method embodiments for relevance.
The foregoing description is only of the preferred embodiments of the present invention and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention are included in the protection scope of the present invention.

Claims (12)

1. A text rendering method, comprising:
acquiring a region to be rendered containing characters to be rendered;
dividing each pixel point in the region to be rendered to obtain a plurality of sub-pixel points corresponding to the pixel point;
determining a plurality of sub-pixel points at the boundary positions of the text region and the background region in the region to be rendered, and taking the sub-pixel points as the sub-pixel points to be rendered; the text region is a region covered by the text to be rendered in the region to be rendered, and the background region is a region uncovered by the text to be rendered in the region to be rendered;
rendering is carried out on the plurality of sub-pixel points to be rendered, and rendering words corresponding to the words to be rendered are obtained;
the method further comprises the steps of:
determining a pixel value of each pixel point in the region to be rendered;
determining an initial sub-pixel value of each sub-pixel corresponding to each pixel based on the pixel value of each pixel;
The step of determining a plurality of sub-pixel points at the boundary position of the text region and the background region in the region to be rendered as the sub-pixel points to be rendered includes:
determining a plurality of sub-pixel points at the boundary position of a text region and a background region in the region to be rendered by utilizing a preset residual function according to the initial sub-pixel value of each sub-pixel point in the region to be rendered, so as to obtain sub-pixel points to be rendered;
wherein, the preset residual function E (θ) is:
wherein M (x, y, θ) =A-B+.g (w, σ) ×S (x '-w, y') dw, w 2 =u 2 +v 2 ,/>
w is a preset window function, (x, y) is the coordinate of a pixel point, M (x, y, theta) is the predicted pixel value of the pixel point, theta is a model parameter vector, T is a transposition operation, I (x, y) is the pixel value of the pixel point, A is the initial sub-pixel value of the pixel lighting area, B is the peak value of the initial sub-pixel value of the pixel point dark area, (u, v) is the floating point coordinate of the sub-pixel point corresponding to the dark area, W is the square value of the sum of squares of the coordinate values u and v, sigma is a preset standard deviation, G (W, sigma) is a preset Gaussian function, exp is an exponential function based on e, S (x '-W, y') is a two-dimensional step function, and (x ', y') is the coordinate corresponding to the coordinate system where (x, y) is located, The angle between the X-axis direction of the coordinate system where (u, v) is located and the X-axis direction of the coordinate system where (X, y) is located is convolution operation, and [ pi ] dw is integral operation to w.
2. The method according to claim 1, wherein the step of dividing each pixel point in the region to be rendered to obtain a plurality of sub-pixel points corresponding to the pixel point includes:
and dividing each pixel point in the region to be rendered into a preset number of sub-pixel points in a preset direction to obtain a preset number of sub-pixel points corresponding to the pixel point.
3. The method according to claim 2, wherein the preset direction comprises a horizontal direction and/or a vertical direction.
4. The method of claim 1, wherein the step of rendering the plurality of sub-pixel points to be rendered to obtain a rendered text corresponding to the text to be rendered comprises:
and re-determining the sub-pixel value of each sub-pixel point to be rendered by utilizing a low-pass window filter with a preset coefficient based on the initial sub-pixel values of the plurality of sub-pixel points to be rendered, so as to obtain the rendered text corresponding to the text to be rendered.
5. The method of claim 4, wherein the predetermined coefficients are:
Wherein i is the number of sub-pixel points obtained by dividing one pixel point in a preset direction.
6. A character rendering device, comprising:
the first acquisition module is used for acquiring a region to be rendered containing characters to be rendered;
the second acquisition module is used for dividing each pixel point in the region to be rendered to obtain a plurality of sub-pixel points corresponding to the pixel point;
the first determining module is used for determining a plurality of sub-pixel points at the boundary positions of the text area and the background area in the area to be rendered, and the sub-pixel points are used as sub-pixel points to be rendered; the text region is a region covered by the text to be rendered in the region to be rendered, and the background region is a region uncovered by the text to be rendered in the region to be rendered;
the rendering module is used for rendering the plurality of sub-pixel points to be rendered to obtain rendering characters corresponding to the characters to be rendered;
the apparatus further comprises:
a second determining module, configured to determine a pixel value of each pixel point in the area to be rendered; determining an initial sub-pixel value of each sub-pixel corresponding to each pixel based on the pixel value of each pixel;
The first determining module is specifically configured to determine, according to an initial subpixel value of each subpixel in the region to be rendered, a plurality of subpixels at a boundary position between a text region and a background region in the region to be rendered by using a preset residual function, so as to obtain the subpixels to be rendered;
wherein, the preset residual function E (θ) is:
wherein M (x, y, θ) =A-B+.g (w, σ) ×S (x '-w, y') dw, w 2 =u 2 +v 2 ,/>
w is a preset window function, (x, y) is the coordinate of a pixel point, M (x, y, theta) is the predicted pixel value of the pixel point, theta is a model parameter vector, T is a transposition operation, I (x, y) is the pixel value of the pixel point, A is the initial sub-pixel value of the pixel lighting area, B is the peak value of the initial sub-pixel value of the pixel point dark area, (u, v) is the floating point coordinate of the sub-pixel point corresponding to the dark area, W is the square value of the sum of squares of the coordinate values u and v, sigma is a preset standard deviation, G (W, sigma) is a preset Gaussian function, exp is an exponential function based on e, S (x '-W, y') is a two-dimensional step function, and (x ', y') is the coordinate corresponding to the coordinate system where (x, y) is located,the angle between the X-axis direction of the coordinate system where (u, v) is located and the X-axis direction of the coordinate system where (X, y) is located is convolution operation, and [ pi ] dw is integral operation to w.
7. The apparatus of claim 6, wherein the second obtaining module is specifically configured to divide, for each pixel point in the area to be rendered, the pixel point into a preset number of sub-pixel points in a preset direction, so as to obtain a preset number of sub-pixel points corresponding to the pixel point.
8. The device of claim 7, wherein the predetermined direction comprises a horizontal direction and/or a vertical direction.
9. The apparatus of claim 6, wherein the rendering module is specifically configured to re-determine, based on initial sub-pixel values of a plurality of sub-pixel points to be rendered, the sub-pixel value of each sub-pixel point to be rendered by using a low-pass window filter with a preset coefficient, so as to obtain a rendered text corresponding to the text to be rendered.
10. The apparatus of claim 9, wherein the preset coefficients are:
wherein i is the number of sub-pixel points obtained by dividing one pixel point in a preset direction.
11. The electronic equipment is characterized by comprising a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory are communicated with each other through the communication bus;
A memory for storing a computer program;
a processor for carrying out the method steps of any one of claims 1-5 when executing a program stored on a memory.
12. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored therein a computer program which, when executed by a processor, implements the method steps of any of claims 1-5.
CN201811480908.1A 2018-12-05 2018-12-05 Text rendering method and device, electronic equipment and storage medium Active CN111275793B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811480908.1A CN111275793B (en) 2018-12-05 2018-12-05 Text rendering method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811480908.1A CN111275793B (en) 2018-12-05 2018-12-05 Text rendering method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111275793A CN111275793A (en) 2020-06-12
CN111275793B true CN111275793B (en) 2023-09-29

Family

ID=71003199

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811480908.1A Active CN111275793B (en) 2018-12-05 2018-12-05 Text rendering method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111275793B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102708585A (en) * 2012-05-09 2012-10-03 北京像素软件科技股份有限公司 Method for rendering contour edges of models
CN104821147A (en) * 2015-05-27 2015-08-05 京东方科技集团股份有限公司 Sub-pixel rendering method
CN106097429A (en) * 2016-06-23 2016-11-09 腾讯科技(深圳)有限公司 A kind of image processing method and device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4199159B2 (en) * 2004-06-09 2008-12-17 株式会社東芝 Drawing processing apparatus, drawing processing method, and drawing processing program

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102708585A (en) * 2012-05-09 2012-10-03 北京像素软件科技股份有限公司 Method for rendering contour edges of models
CN104821147A (en) * 2015-05-27 2015-08-05 京东方科技集团股份有限公司 Sub-pixel rendering method
CN106097429A (en) * 2016-06-23 2016-11-09 腾讯科技(深圳)有限公司 A kind of image processing method and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
李华 ; 杨华民 ; 韩成 ; 赵建平 ; .基于亚像素级几何图元覆盖检测算法.系统仿真学报.2017,(第11期),全文. *

Also Published As

Publication number Publication date
CN111275793A (en) 2020-06-12

Similar Documents

Publication Publication Date Title
CN112348815B (en) Image processing method, image processing apparatus, and non-transitory storage medium
US11275961B2 (en) Character image processing method and apparatus, device, and storage medium
US10803554B2 (en) Image processing method and device
CN108288253B (en) HDR image generation method and device
US11054969B2 (en) Method and device for displaying page of electronic book, and terminal device
US20160343155A1 (en) Dynamic filling of shapes for graphical display of data
WO2020177584A1 (en) Graphic typesetting method and related device
CN111062365B (en) Method, apparatus, chip circuit and computer readable storage medium for recognizing mixed typeset text
CN105005461A (en) Icon display method and terminal
WO2019041842A1 (en) Image processing method and device, storage medium and computer device
CN113744142B (en) Image restoration method, electronic device and storage medium
EP4322109A1 (en) Green screen matting method and apparatus, and electronic device
US10403040B2 (en) Vector graphics rendering techniques
CN113989167A (en) Contour extraction method, device, equipment and medium based on seed point self-growth
CN113808004B (en) Image conversion device, image conversion method, and computer program for image conversion
CN111275793B (en) Text rendering method and device, electronic equipment and storage medium
CN110942488B (en) Image processing device, image processing system, image processing method, and recording medium
US20170274285A1 (en) Method and apparatus for automating the creation of a puzzle pix playable on a computational device from a photograph or drawing
US9594955B2 (en) Modified wallis filter for improving the local contrast of GIS related images
US20110221775A1 (en) Method for transforming displaying images
CN113139921B (en) Image processing method, display device, electronic device and storage medium
US11657511B2 (en) Heuristics-based detection of image space suitable for overlaying media content
WO2022125127A1 (en) Detection of image space suitable for overlaying media content
CN111833256A (en) Image enhancement method, image enhancement device, computer device and readable storage medium
US20220237902A1 (en) Conversion device, conversion learning device, conversion method, conversion learning method, conversion program, and conversion learning program

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant