CN110263301B - Method and device for determining color of text - Google Patents

Method and device for determining color of text

Info

Publication number
CN110263301B
CN110263301B (Application No. CN201910565932.3A)
Authority
CN
China
Prior art keywords
sub
image
text
color
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910565932.3A
Other languages
Chinese (zh)
Other versions
CN110263301A (en)
Inventor
钟姿艳
郝郁
程荣
赵沐为
袁闻骞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN201910565932.3A priority Critical patent/CN110263301B/en
Publication of CN110263301A publication Critical patent/CN110263301A/en
Priority to KR1020190167075A priority patent/KR102360271B1/en
Priority to JP2019230485A priority patent/JP7261732B2/en
Priority to US16/722,302 priority patent/US11481927B2/en
Application granted granted Critical
Publication of CN110263301B publication Critical patent/CN110263301B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 Handling natural language data
    • G06F 40/10 Text processing
    • G06F 40/103 Formatting, i.e. changing of presentation of documents
    • G06F 40/109 Font handling; Temporal or kinetic typography
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/90 Determination of colour characteristics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 Handling natural language data
    • G06F 40/10 Text processing
    • G06F 40/12 Use of codes for handling textual entities
    • G06F 40/126 Character encoding
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 Handling natural language data
    • G06F 40/10 Text processing
    • G06F 40/12 Use of codes for handling textual entities
    • G06F 40/14 Tree-structured documents
    • G06F 40/143 Markup, e.g. Standard Generalized Markup Language [SGML] or Document Type Definition [DTD]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 2D [Two Dimensional] image generation
    • G06T 11/001 Texturing; Colouring; Generation of texture or colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/28 Quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/50 Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • G06V 10/507 Summing image-intensity values; Histogram projection analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/56 Extraction of image or video features relating to colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V 30/40 Document-oriented image-based pattern recognition
    • G06V 30/41 Analysis of document content
    • G06V 30/413 Classification of content, e.g. text, photographs or tables

Abstract

Embodiments of the present disclosure disclose a method and a device for determining the color of text. One embodiment of the method comprises the following steps: in response to detecting a text box in a canvas, determining a sub-image corresponding to the text box from the canvas; acquiring color values of pixels in the sub-image and determining an average color value of the sub-image; determining an average luminance value of the sub-image based on the average color value; and determining the color of the text to be input into the text box based on the average luminance value. This embodiment automatically matches a text color suited to the canvas, enhancing the display effect of the text.

Description

Method and device for determining color of text
Technical Field
Embodiments of the present disclosure relate to the field of computer technology, and in particular, to a method and apparatus for determining a color of text.
Background
With the popularity of electronic devices such as mobile phones and computers, design tools (e.g., Photoshop, PowerPoint) are increasingly used in work and life. When such a tool presents text, the contrast between the canvas and the font color is often insufficient, which can make it difficult for a user to discern the text from the canvas.
In the prior art, users of design tools (e.g., Photoshop, PowerPoint) typically adjust the font color manually to enhance the contrast between the canvas and the text.
Disclosure of Invention
The embodiment of the disclosure provides a method and a device for determining the color of a text.
In a first aspect, embodiments of the present disclosure provide a method for determining the color of text, the method comprising: in response to detecting a text box in a canvas, determining a sub-image corresponding to the text box from the canvas; acquiring color values of pixels in the sub-image and determining an average color value of the sub-image; determining an average luminance value of the sub-image based on the average color value of the sub-image; and determining the color of the text to be input into the text box based on the average luminance value of the sub-image.
In some embodiments, prior to acquiring the color values of the pixels in the sub-image, the method further comprises: acquiring element information of the DOM nodes of the sub-image to generate an XML document; creating an SVG template of a specified size, wherein the specified size is the size of the sub-image; and parsing the XML document as the content of the SVG template to generate a vector graphic of the sub-image.
In some embodiments, the method further comprises: creating a canvas of the specified size; and drawing the vector graphic into the created canvas to obtain the color values of the pixels in the sub-image.
In some embodiments, determining the color of the text to be input into the text box based on the average luminance value of the sub-image comprises: acquiring a preset contrast, wherein the contrast is the contrast between the text and the sub-image; determining a luminance value of the text based on the average luminance value of the sub-image and the contrast; and determining a gray value of the text according to the determined luminance value of the text.
In some embodiments, determining the color of the text to be input into the text box based on the average luminance value of the sub-image comprises: setting the color of the text to black in response to determining that the average luminance value of the sub-image is greater than a preset threshold; and setting the color of the text to white in response to determining that the average luminance value of the sub-image is less than or equal to the preset threshold.
In some embodiments, determining a sub-image corresponding to the text box from the canvas comprises: determining the location of at least one vertex of the text box in the canvas; acquiring the length and width of the text box; and determining the sub-image from the canvas based on the determined location of the at least one vertex and the length and width.
In a second aspect, embodiments of the present disclosure provide an apparatus for determining the color of text, the apparatus comprising: a sub-image determining unit configured to determine, in response to detecting a text box in a canvas, a sub-image corresponding to the text box from the canvas; an acquisition unit configured to acquire color values of pixels in the sub-image and determine an average color value of the sub-image; an average luminance value determining unit configured to determine an average luminance value of the sub-image based on the average color value of the sub-image; and a color determining unit configured to determine the color of the text to be input into the text box based on the average luminance value of the sub-image.
In some embodiments, the apparatus further comprises: a document generation unit configured to acquire element information of the DOM nodes of the sub-image to generate an XML document; a template creation unit configured to create an SVG template of a specified size, wherein the specified size is the size of the sub-image; and a vector graphic generation unit configured to parse the XML document as the content of the SVG template to generate a vector graphic of the sub-image.
In some embodiments, the apparatus further comprises: a canvas creation unit configured to create a canvas of the specified size; and a drawing unit configured to draw the vector graphic into the created canvas to obtain the color values of the pixels in the sub-image.
In some embodiments, the color determining unit is further configured to: acquire a preset contrast, wherein the contrast is the contrast between the text and the sub-image; determine a luminance value of the text based on the average luminance value of the sub-image and the contrast; and determine a gray value of the text according to the determined luminance value of the text.
In some embodiments, the color determining unit is further configured to: set the color of the text to black in response to determining that the average luminance value of the sub-image is greater than a preset threshold; and set the color of the text to white in response to determining that the average luminance value of the sub-image is less than or equal to the preset threshold.
In some embodiments, the sub-image determining unit is further configured to: determine the location of at least one vertex of the text box in the canvas; acquire the length and width of the text box; and determine the sub-image from the canvas based on the determined location of the at least one vertex and the length and width.
According to the method and device for determining the color of text provided by the embodiments of the present disclosure, in response to detecting a text box in a canvas, a sub-image corresponding to the text box is determined from the canvas; the color values of the pixels in the sub-image are then acquired to determine the average color value of the sub-image; the average luminance value of the sub-image is determined based on the average color value; and finally the color of the text to be input into the text box is determined based on the average luminance value. A text color suited to the canvas is thereby matched automatically, enhancing the display effect of the text.
Drawings
Other features, objects and advantages of the present disclosure will become more apparent upon reading of the detailed description of non-limiting embodiments, made with reference to the following drawings:
FIG. 1 is an exemplary system architecture diagram to which the present disclosure may be applied;
FIG. 2 is a flow chart of one embodiment of a method for determining the color of text according to the present disclosure;
FIG. 3 is a schematic illustration of one application scenario of a method for determining the color of text according to an embodiment of the present disclosure;
FIG. 4 is a flow chart of another embodiment of a method for determining the color of text according to an embodiment of the present disclosure;
FIG. 5 is a schematic structural view of one embodiment of an apparatus for determining the color of text according to the present disclosure;
FIG. 6 is a schematic diagram of a computer system suitable for use in implementing embodiments of the present disclosure.
Detailed Description
The present disclosure is described in further detail below with reference to the drawings and embodiments. It is to be understood that the specific embodiments described herein merely illustrate the disclosure and do not limit it. It should be noted that, for convenience of description, only the portions related to the disclosure are shown in the drawings.
It should be noted that, without conflict, the embodiments of the present disclosure and features of the embodiments may be combined with each other. The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
FIG. 1 illustrates an exemplary system architecture 100 of a method for determining the color of text or an apparatus for determining the color of text to which embodiments of the present disclosure may be applied.
As shown in fig. 1, a system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 is used as a medium to provide communication links between the terminal devices 101, 102, 103 and the server 105. The network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
The user may interact with the server 105 via the network 104 using the terminal devices 101, 102, 103 to receive or transmit information or the like. Various client applications, such as design class tools, image processing class applications, web browser applications, shopping class applications, search class applications, instant messaging tools, mailbox clients, social platform software, etc., may be installed on the terminal devices 101, 102, 103.
The terminal devices 101, 102, 103 may be hardware or software. When the terminal devices 101, 102, 103 are hardware, they may be various electronic devices having a display screen and supporting text display, including but not limited to smartphones, tablet computers, electronic book readers, laptop and desktop computers, and the like. When the terminal devices 101, 102, 103 are software, they can be installed in the above-listed electronic devices. It may be implemented as a plurality of software or software modules, for example, for providing distributed services, or as a single software or software module. The present application is not particularly limited herein.
The server 105 may be a server providing various services, such as a background server providing support for design class applications running on the terminal devices 101, 102, 103. The background server can perform data analysis and other processes on canvas, text boxes and the like in the design class tool, determine the colors of the characters of the text boxes to be input, and feed back the processing results (such as characters with determined colors) to the terminal equipment.
It should be noted that the method for determining the color of text provided by the embodiments of the present application is generally performed by the server 105, but may also be performed by the terminal devices 101, 102, 103. Accordingly, the apparatus for determining the color of text is typically provided in the server 105, but may also be provided in the terminal devices 101, 102, 103.
The server may be hardware or software. When the server is hardware, the server may be implemented as a distributed server cluster formed by a plurality of servers, or may be implemented as a single server. When the server is software, it may be implemented as a plurality of software or software modules, for example, for providing distributed services, or as a single software or software module. The present application is not particularly limited herein.
It should be understood that the number of terminal devices, networks and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
With continued reference to fig. 2, a flow 200 of one embodiment of a method for determining the color of text according to the present disclosure is shown. The method for determining the color of the text comprises the following steps:
in response to detecting a text box in the canvas, a sub-image corresponding to the text box is determined from the canvas, step 201.
In this embodiment, before inputting text with a design tool (e.g., Photoshop), a user often needs to set a text box for inputting text on the canvas of the tool. The execution body of the method for determining the color of text (e.g., the server shown in FIG. 1) may be connected, via a wired or wireless connection, to the terminal on which the user inputs text, and may detect whether the user has set a text box in the canvas of the design tool being used. If a text box is detected in the canvas, the execution body may determine the sub-image corresponding to the text box from the canvas.
In some optional implementations of this embodiment, the execution body may determine the positions of the vertices of the text box on the canvas, and then use those positions to determine the image corresponding to the text box on the canvas; this image is the sub-image corresponding to the text box. Alternatively, the execution body may determine the position of one vertex of the text box in the canvas, obtain the length and width of the text box, and determine the image corresponding to the text box from the determined vertex position and the length and width; again, this image is the sub-image corresponding to the text box in the canvas.
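The vertex-plus-dimensions variant above can be sketched as follows; the function and field names are illustrative assumptions, not from the patent, and the rectangle is clamped to the canvas bounds:

```javascript
// Sketch (names assumed): given the top-left vertex of the text box and its
// width and height, compute the sub-image rectangle, clamped to the canvas.
function subImageRect(textBox, canvasWidth, canvasHeight) {
  const x = Math.max(0, textBox.x);
  const y = Math.max(0, textBox.y);
  const w = Math.min(textBox.width, canvasWidth - x);
  const h = Math.min(textBox.height, canvasHeight - y);
  return { x, y, width: w, height: h };
}
```

A text box hanging past the canvas edge simply yields a smaller sub-image, which matches the intent of sampling only the canvas pixels behind the box.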
Step 202, obtaining color values of pixels in the sub-image, and determining average color values of the sub-image.
In this embodiment, the canvas may be a solid color, a gradient, or composed of pictures, so the color values of the pixels in the sub-image may be identical or may differ. Based on the sub-image acquired in step 201, the execution body may acquire the color values of the pixels of the sub-image by various means. As an example, the execution body may import the sub-image into an existing picture color-value reading tool to read the color values of the pixels in the sub-image. After obtaining the color values of the pixels, the execution body may calculate their average, thereby determining the average color value of the sub-image.
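A minimal sketch of the averaging step, assuming the pixels arrive as a flat RGBA array (the layout returned by canvas getImageData); the function name is illustrative:

```javascript
// Average the RGB channels of a flat RGBA pixel array
// (the [r, g, b, a, r, g, b, a, ...] layout returned by getImageData).
function averageColor(rgba) {
  let r = 0, g = 0, b = 0;
  const n = rgba.length / 4; // number of pixels
  for (let i = 0; i < rgba.length; i += 4) {
    r += rgba[i];
    g += rgba[i + 1];
    b += rgba[i + 2];
  }
  return { r: r / n, g: g / n, b: b / n };
}
```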
In step 203, an average luminance value of the sub-image is determined based on the average color value of the sub-image.
In this embodiment, based on the average color value of the sub-image determined in step 202, the execution body may determine the average luminance value of the sub-image. As an example, the execution body may acquire the RGB color values of each pixel of the sub-image and convert them into the YCbCr color space, obtaining the luminance component Y, the blue-difference chroma component Cb, and the red-difference chroma component Cr of the sub-image, where the luminance component Y may be regarded as the average luminance value of the sub-image.
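The patent does not fix the RGB-to-Y coefficients; one common choice is the ITU-R BT.601 luma weights used in many YCbCr conversions, sketched here as an assumption:

```javascript
// RGB -> luma using the ITU-R BT.601 weights (one common YCbCr convention;
// the patent does not specify which coefficients it intends).
function lumaBT601(r, g, b) {
  return 0.299 * r + 0.587 * g + 0.114 * b; // same 0-255 scale as the inputs
}
```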
Step 204, determining the color of the text box to be input based on the average brightness value of the sub-images.
In this embodiment, the execution body may determine the color of the text to be input into the text box by various means based on the average luminance value determined in step 203. For example, the text may first be set to an arbitrary luminance value; then, in response to determining that the contrast between the sub-image and the text does not reach a preset value, the range of luminance values in the image composed of the sub-image and the text is stretched or compressed, with the average luminance value of the sub-image as a reference, to improve the contrast. The luminance value of the text, and hence the color corresponding to that luminance value, can thereby be determined.
As an example, if the average luminance value of the sub-image is 0 (i.e., the sub-image is black) and the gray value of the text input into the text box is 255 (i.e., the text is white), the contrast between the sub-image and the text is high and the text displays well. Similarly, if the average luminance value of the sub-image is 255 (i.e., the sub-image is white) and the gray value of the text is 0 (i.e., the text is black), the text also displays well. It should be understood that the sub-image is not limited to white or black, and likewise the color of the text input into the text box is not limited to black or white but may be a color corresponding to other gray values.
In some optional implementations of this embodiment, the execution body may also detect in real time whether the canvas has been replaced. If it has, the execution body may re-determine the color of the text to be input into the text box, specifically by re-executing steps 201 to 204.
Compared with the prior art, the method for determining the color of text provided by this embodiment can automatically determine a text color suited to the canvas when a user inputs text with a design tool, enhancing the display effect of the text on the canvas. At the same time, the method spares the user from manually adjusting the text color to suit the canvas, improving the efficiency of text color matching.
With continued reference to FIG. 3, FIG. 3 is a schematic diagram of an application scenario of the method for determining the color of text according to this embodiment. In the application scenario of FIG. 3, a user may add a text box 302 for entering text in the canvas 301 of a design tool before entering text with the tool. In response to detecting the presence of the text box 302 in the canvas 301, the background server may determine the sub-image 303 corresponding to the text box 302 from the canvas 301 (the portion of the canvas within the dashed outline of the text box 302). The server then acquires the color values of the pixels in the sub-image 303 and determines its average color value; from the average color value, the server determines the average luminance value of the sub-image 303; and finally, based on that average luminance value, the server determines the color of the text to be entered into the text box 302. At this point, the user may enter text of the determined color within the text box 302; as shown in FIG. 3, the entered text has high contrast with the canvas and displays well. The method provided by this embodiment thus automatically enhances the display effect of text in the canvas, avoids manual adjustment of the text color by the user, and improves the efficiency of text color matching.
According to the method for determining the color of text provided by the embodiments of the present disclosure, in response to detecting a text box in a canvas, the sub-image corresponding to the text box is determined from the canvas; the color values of the pixels in the sub-image are acquired to determine its average color value; the average luminance value of the sub-image is determined from the average color value; and the color of the text to be input into the text box is determined from the average luminance value. A text color suited to the canvas is thereby matched automatically, enhancing the display effect of the text.
With further reference to FIG. 4, a flow 400 of another embodiment of a method for determining the color of text is shown. The process 400 of the method for determining the color of text includes the steps of:
in response to detecting a text box in the canvas, a sub-image corresponding to the text box is determined from the canvas, step 401.
In this embodiment, before a user inputs text using a design tool (e.g., Photoshop), it is often necessary to set a text box for inputting the text on the canvas of the tool. The execution body of the method for determining the color of text (e.g., the server shown in FIG. 1) may be connected, via a wired or wireless connection, to the terminal on which the user inputs text, and may detect whether the user has set a text box in the canvas of the design tool being used. If a text box is detected in the canvas, the execution body may determine the sub-image corresponding to the text box from the canvas.
In some optional implementations of this embodiment, before acquiring the color values of the pixels in the sub-image, the execution body may further perform the following steps: acquiring element information of the DOM nodes of the sub-image to generate an XML document; creating an SVG template of a specified size, wherein the specified size is the size of the sub-image; and parsing the XML document as the content of the SVG template to generate the vector graphic of the sub-image. Specifically, the execution body may traverse the DOM nodes in the sub-image and serialize the obtained nodes into an XML document; define an SVG template of the specified size (i.e., set the length and width of the SVG graphic to the length and width of the sub-image, respectively) whose content is contained within a foreignObject tag; and finally set the XML document as the content of the SVG template and parse it to generate an SVG image, which is the vector graphic of the sub-image. The execution body can then read the color attributes of the generated vector graphic and thus obtain the color values of the pixels in the sub-image.
In some other optional implementations of this embodiment, the execution body may also create a canvas of the specified size, whose height and width are the same as those of the sub-image. The execution body can then draw the vector graphic of the sub-image into the created canvas, after which the color values of the pixels in the sub-image can be returned directly via the getImageData method, which is simpler.
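The SVG step above can be sketched as string construction; in a browser, the resulting markup would be rasterized onto a canvas so that getImageData can return the pixel values. The function name and the foreignObject sizing are illustrative assumptions:

```javascript
// Sketch (names assumed): wrap the serialized DOM of the sub-image (XML) in
// an SVG template whose size matches the sub-image, using <foreignObject>.
function buildSvgTemplate(xml, width, height) {
  return (
    `<svg xmlns="http://www.w3.org/2000/svg" width="${width}" height="${height}">` +
    `<foreignObject width="100%" height="100%">${xml}</foreignObject>` +
    `</svg>`
  );
}

// In a browser (not executed here), the markup could then be loaded into an
// Image via a Blob URL, drawn onto a same-sized canvas, and sampled with
// ctx.getImageData(0, 0, width, height).
```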
Step 402, obtaining color values of pixels in the sub-image, and determining an average color value of the sub-image.
In this embodiment, the canvas may be a solid color, a gradient, or composed of pictures, so the color values of the pixels in the sub-image may be identical or may differ. Based on the sub-image acquired in step 401, the execution body may acquire the color values of the pixels of the sub-image by various means. As an example, the execution body may import the sub-image into an existing picture color-value reading tool to read the color values of the pixels in the sub-image. After obtaining the color values of the pixels, the execution body may calculate their average, thereby determining the average color value of the sub-image.
Step 403, determining an average luminance value of the sub-image based on the average color value of the sub-image.
In this embodiment, based on the average color value of the sub-image determined in step 402, the execution body may determine the average luminance value of the sub-image. As an example, the execution body may acquire the RGB color values of each pixel of the sub-image and convert them into the YCbCr color space, obtaining the luminance component Y, the blue-difference chroma component Cb, and the red-difference chroma component Cr of the sub-image, where the luminance component Y may be regarded as the average luminance value of the sub-image.
In some optional implementations of this embodiment, the average color value of the sub-image obtained by the execution body may be a color value in RGB space, comprising a red component R, a green component G, and a blue component B. The execution body may calculate the average luminance value of the sub-image by the following formula:
L = 0.2126 × R + 0.7152 × G + 0.0722 × B, where L is the average luminance value of the sub-image, and R, G, and B are the red, green, and blue components of its average color value. Here, R, G, and B may be converted (normalized) component values, so that the formula yields normalized luminance values: black corresponds to a luminance value of 0, white to 1, and other colors to values in the interval [0, 1].
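These coefficients match the WCAG relative-luminance definition, which first linearizes each sRGB channel; whether the patent intends this exact linearization for its "converted" components is an assumption:

```javascript
// Relative luminance per the WCAG definition (assumed interpretation of the
// "converted" components): linearize each sRGB channel, then apply the
// 0.2126 / 0.7152 / 0.0722 weights. Black -> 0, white -> 1.
function relativeLuminance(r, g, b) {
  const lin = (c) => {
    const s = c / 255;
    return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
  };
  return 0.2126 * lin(r) + 0.7152 * lin(g) + 0.0722 * lin(b);
}
```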
In step 404, in response to determining that the average brightness value of the sub-image is greater than the preset threshold, the color of the text to be input into the text box is set to black.
In this embodiment, the text in the text box is limited to white or black. When the average luminance value of the sub-image is large, the text can be set to black to increase the contrast between the sub-image and the text; when it is small, the text can be set to white. The execution body may therefore compare the average luminance value of the sub-image against a preset threshold, and set the color of the text to be input into the text box to black when the average luminance value is greater than the threshold.
In step 405, in response to determining that the average brightness value of the sub-image is less than or equal to the preset threshold, the color of the text to be input into the text box is set to be white.
In this embodiment, based on the preset threshold set in step 404, the executing body sets the color of the text to be input into the text box to be white when determining that the average brightness value of the sub-image is less than or equal to the preset threshold.
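Steps 404 and 405 reduce to a single comparison against the preset threshold. A minimal sketch; the default threshold value here is an assumption, since the patent leaves the threshold configurable:

```javascript
// Choose black or white text for a background with the given average
// luminance (0..1). The default threshold of 0.179 is an assumption
// (see the contrast-based derivation below); the patent only requires
// that some preset threshold be used.
function pickTextColor(avgLuminance, threshold = 0.179) {
  return avgLuminance > threshold ? 'black' : 'white';
}
```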
In some optional implementations of this embodiment, the contrast between the text and the sub-image may be calculated by the following formula:
Contrast = (L1 + 0.05)/(L2 + 0.05), where Contrast is the contrast between the text and the sub-image, L1 is the larger of the luminance values of the sub-image and the text, and L2 is the smaller. A luminance value at which white text and black text yield the same contrast can therefore be derived from this formula, and the executing body may determine that luminance value as the preset threshold. It will be appreciated that a person skilled in the art may also set the preset threshold empirically; the disclosure is not limited in this respect.
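To make the derivation concrete: requiring white text and black text to have equal contrast against a background of luminance Lbg gives (1 + 0.05)/(Lbg + 0.05) = (Lbg + 0.05)/(0 + 0.05), so (Lbg + 0.05)² = 1.05 × 0.05 and Lbg = √0.0525 − 0.05 ≈ 0.179. A sketch:

```javascript
// Background luminance at which white text and black text reach the
// same contrast under Contrast = (L1 + 0.05) / (L2 + 0.05).
function balancedThreshold() {
  return Math.sqrt(1.05 * 0.05) - 0.05; // ≈ 0.179
}

// Contrast between two luminance values, order-independent.
function contrast(l1, l2) {
  const [hi, lo] = l1 >= l2 ? [l1, l2] : [l2, l1];
  return (hi + 0.05) / (lo + 0.05);
}
```

At this background luminance, both text colors give a contrast of about 4.58, which is why it is a natural choice for the preset threshold.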
In some optional implementations of this embodiment, the user may also preset the contrast between the sub-image and the color of the text to be input into the text box, so that the executing body may acquire the contrast preset by the user. The executing body may then substitute the average luminance of the sub-image and the acquired contrast into the formula Contrast = (L1 + 0.05)/(L2 + 0.05) to calculate the luminance value of the text. Finally, the calculated luminance value of the text is converted to determine the gray value of the text, and thus the color of the text.
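Assuming the text is rendered darker than the background, solving Contrast = (L1 + 0.05)/(L2 + 0.05) for the text luminance gives Ltext = (Lbg + 0.05)/Contrast − 0.05. The conversion from that luminance back to an 8-bit gray level below inverts the sRGB linearization, which is an assumption; the patent does not spell out the conversion step:

```javascript
// Gray level (0..255) for text that should reach the given contrast
// against a background of average luminance avgLum, assuming the text
// is darker than the background.
function textGrayForContrast(avgLum, targetContrast) {
  // Solve Contrast = (Lbg + 0.05) / (Ltext + 0.05) for Ltext.
  let lText = (avgLum + 0.05) / targetContrast - 0.05;
  lText = Math.min(1, Math.max(0, lText));
  // For a gray color the linear channel value equals the luminance
  // (the weights 0.2126 + 0.7152 + 0.0722 sum to 1); invert the
  // sRGB linearization to recover the 8-bit channel value.
  const c = lText <= 0.03928 / 12.92
    ? lText * 12.92
    : 1.055 * Math.pow(lText, 1 / 2.4) - 0.055;
  return Math.round(c * 255);
}
```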
As can be seen from fig. 4, the method for determining the color of text in this embodiment can directly determine whether the color of the text to be input into the text box is white or black by comparing the average brightness value of the sub-image with the preset threshold, which is simpler and further improves the efficiency of setting the text color.
With further reference to fig. 5, as an implementation of the method shown in the above figures, the present disclosure provides an embodiment of an apparatus for determining the color of text, which corresponds to the method embodiment shown in fig. 2 and is applicable to various executing bodies.
As shown in fig. 5, the apparatus 500 for determining the color of text according to this embodiment includes: a sub-image determination unit 501, an acquisition unit 502, an average luminance value determination unit 503 and a color determination unit 504. The sub-image determination unit 501 is configured to determine, in response to detecting a text box in a canvas, a sub-image corresponding to the text box from the canvas; the acquisition unit 502 is configured to acquire color values of pixels in the sub-image and determine an average color value of the sub-image; the average luminance value determination unit 503 is configured to determine an average luminance value of the sub-image based on the average color value of the sub-image; and the color determination unit 504 is configured to determine the color of the text to be input into the text box based on the average luminance value of the sub-image.
In this embodiment, the sub-image determination unit 501 may determine, in response to detecting a text box in a canvas, a sub-image corresponding to the text box from the canvas. The acquisition unit 502 may then acquire the color values of the pixels in the sub-image to determine the average color value of the sub-image; the average luminance value determination unit 503 may determine the average luminance value of the sub-image based on that average color value; and finally the color determination unit 504 may determine the color of the text to be input into the text box based on the average luminance value of the sub-image, thereby automatically matching a text color adapted to the canvas and enhancing the display effect of the text.
In some optional implementations of this embodiment, the apparatus 500 for determining the color of text further includes: a document generation unit configured to acquire element information of the DOM nodes of the sub-image to generate an XML document; a template creation unit configured to create an SVG template of a specified size, where the specified size is the size of the sub-image; and a vector graphics generation unit configured to parse the XML document as the content of the SVG template to generate a vector graphic of the sub-image.
In some optional implementations of this embodiment, the apparatus 500 for determining a color of a text further includes: a canvas creation unit configured to create a canvas of a specified size; and a drawing unit configured to draw the vector graphics into the created canvas to obtain color values of pixels in the sub-image.
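The averaging performed by the drawing unit can be sketched as a pure function over the RGBA byte array that getImageData returns; the surrounding browser steps (serializing the DOM nodes into an SVG and drawing it into the created canvas) are noted in the comments and omitted so the helper stays runnable anywhere:

```javascript
// Average (R, G, B) of an RGBA pixel buffer, such as the `data` field of
// the ImageData object returned by CanvasRenderingContext2D.getImageData.
// In the browser this buffer would come from drawing the sub-image's
// vector graphic into a canvas first; the function itself is kept pure.
function averageColor(rgba) {
  let r = 0, g = 0, b = 0;
  const n = rgba.length / 4;
  for (let i = 0; i < rgba.length; i += 4) {
    r += rgba[i];
    g += rgba[i + 1];
    b += rgba[i + 2]; // rgba[i + 3] is the alpha channel, ignored here
  }
  return { r: r / n, g: g / n, b: b / n };
}
```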
In some optional implementations of this embodiment, the color determination unit 504 is specifically configured to: acquire a preset contrast, where the contrast is the contrast between the text and the sub-image; determine a luminance value of the text based on the average luminance value of the sub-image and the contrast; and determine a gray value of the text according to the determined luminance value of the text.
In some optional implementations of this embodiment, the color determination unit 504 is further configured to: set the color of the text to be input into the text box to black in response to determining that the average brightness value of the sub-image is greater than a preset threshold; and set the color of the text to be input into the text box to white in response to determining that the average brightness value of the sub-image is less than or equal to the preset threshold.
In some optional implementations of this embodiment, the sub-image determination unit 501 is further configured to: determine a location of at least one vertex of the text box in the canvas; acquire the length and width of the text box; and determine the sub-image from the canvas based on the determined location of the at least one vertex and the length and width.
The units recited in the apparatus 500 correspond to the respective steps in the method described with reference to fig. 2. Thus, the operations and features described above for the method are equally applicable to the apparatus 500 and the units contained therein, and are not repeated here.
Referring now to fig. 6, a schematic diagram of an electronic device (e.g., server in fig. 1) 600 suitable for use in implementing embodiments of the present disclosure is shown. The server illustrated in fig. 6 is merely an example, and should not be construed as limiting the functionality and scope of use of the embodiments of the present disclosure in any way.
As shown in fig. 6, the electronic device 600 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 601, which may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage means 608 into a random access memory (RAM) 603. The RAM 603 also stores various programs and data required for the operation of the electronic device 600. The processing means 601, the ROM 602 and the RAM 603 are connected to each other through a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
In general, the following devices may be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, and the like; an output device 607 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 608 including, for example, magnetic tape, hard disk, etc.; and a communication device 609. The communication means 609 may allow the electronic device 600 to communicate with other devices wirelessly or by wire to exchange data. While fig. 6 shows an electronic device 600 having various means, it is to be understood that not all of the illustrated means are required to be implemented or provided. More or fewer devices may be implemented or provided instead. Each block shown in fig. 6 may represent one device or a plurality of devices as needed.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network via communication means 609, or from storage means 608, or from ROM 602. The above-described functions defined in the methods of the embodiments of the present disclosure are performed when the computer program is executed by the processing means 601. It should be noted that, the computer readable medium according to the embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In an embodiment of the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. 
Whereas in embodiments of the present disclosure, the computer-readable signal medium may comprise a data signal propagated in baseband or as part of a carrier wave, with computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
The computer readable medium may be contained in the electronic device, or may exist alone without being incorporated into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: in response to detecting a text box in a canvas, determine a sub-image corresponding to the text box from the canvas; acquire color values of pixels in the sub-image and determine an average color value of the sub-image; determine an average luminance value of the sub-image based on the average color value of the sub-image; and determine the color of the text to be input into the text box based on the average luminance value of the sub-image.
Computer program code for carrying out operations of embodiments of the present disclosure may be written in one or more programming languages, including object-oriented programming languages such as Java, Smalltalk and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments described in the present disclosure may be implemented by software or by hardware. The described units may also be provided in a processor, for example described as: a processor including a sub-image determination unit, an acquisition unit, an average luminance value determination unit and a color determination unit. The names of these units do not, in some cases, limit the units themselves; for example, the sub-image determination unit may also be described as "a unit that determines, in response to detecting a text box in a canvas, a sub-image corresponding to the text box from the canvas".
The foregoing description covers only the preferred embodiments of the present disclosure and the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the application in the embodiments of the present disclosure is not limited to the specific combination of the above technical features, but also encompasses other technical solutions formed by any combination of the above technical features or their equivalents without departing from the spirit of the application, for example, solutions in which the above features are replaced with (but not limited to) technical features having similar functions disclosed in the embodiments of the present disclosure.

Claims (10)

1. A method for determining the color of text, comprising:
in response to detecting a text box in a canvas, determining a sub-image corresponding to the text box from the canvas;
acquiring color values of pixels in the sub-image, and determining an average color value of the sub-image;
determining an average luminance value of the sub-image based on the average color value of the sub-image;
determining the color of the text to be input into the text box based on the average brightness value of the sub-image;
before the obtaining the color value of the pixel in the sub-image, the method further comprises:
acquiring element information of DOM nodes of the sub-image to generate an XML document; creating an SVG template consistent with the size of the sub-image; parsing the XML document as the content of the SVG template to generate a vector graphic of the sub-image; creating a canvas consistent with the size of the sub-image; and drawing the vector graphic into the created canvas, and acquiring the color values of pixels in the sub-image through a getImageData method;
further comprises: detecting whether the canvas is replaced in real time, and if the canvas is replaced, re-executing the step of determining the sub-image corresponding to the text box from the canvas, the step of determining the average color value of the sub-image, the step of determining the average brightness value of the sub-image and the step of determining the color of the text to be input into the text box to re-determine the color of the text to be input into the text box.
2. The method of claim 1, wherein the determining the color of the text to be input into the text box based on the average luminance value of the sub-image comprises:
acquiring a preset contrast, wherein the contrast is the contrast between the text and the sub-image;
determining the brightness value of the text based on the average brightness value of the sub-image and the contrast;
and determining the gray value of the text according to the determined brightness value of the text.
3. The method of claim 1, wherein the determining the color of the text to be input into the text box based on the average luminance value of the sub-image comprises:
setting the color of the text to be input into the text box to be black in response to determining that the average brightness value of the sub-image is larger than a preset threshold value;
and setting the color of the text to be input into the text box to be white in response to determining that the average brightness value of the sub-image is smaller than or equal to a preset threshold value.
4. The method of one of claims 1-3, wherein the determining a sub-image from the canvas that corresponds to the text box comprises:
determining a location of at least one vertex of the text box in the canvas;
acquiring the length and the width of the text box;
the sub-image is determined from the canvas based on the determined location of the at least one vertex and the length and width.
5. An apparatus for determining a color of text, comprising:
a sub-image determining unit configured to determine a sub-image corresponding to a text box in a canvas from the canvas in response to detecting the text box;
an acquisition unit configured to acquire color values of pixels in the sub-image, and determine an average color value of the sub-image;
an average luminance value determination unit configured to determine an average luminance value of the sub-image based on the average color value of the sub-image;
a color determination unit configured to determine the color of the text to be input into the text box based on the average luminance value of the sub-image;
a document generation unit configured to acquire element information of DOM nodes of the sub-image to generate an XML document;
a template creation unit configured to create an SVG template in conformity with the sub-image size;
a vector graphics generating unit configured to parse the XML document with the XML document as the content of the SVG template, and generate a vector graphics of the sub-image;
a canvas creation unit configured to create a canvas having a size consistent with the sub-image;
the drawing unit is configured to draw the vector graphics into the created canvas and acquire color values of pixels in the sub-images through a getImageData method;
and a canvas replacement detection and processing unit configured to detect in real time whether the canvas is replaced, and if the canvas is replaced, re-executing the step of determining a sub-image corresponding to the text box from the canvas, the step of determining an average color value of the sub-image, the step of determining an average brightness value of the sub-image, and the step of determining a color of a text to be input to the text box to re-determine the color of the text to be input to the text box.
6. The apparatus according to claim 5, wherein the color determination unit is specifically configured to:
acquiring a preset contrast, wherein the contrast is the contrast between the text and the sub-image;
determining the brightness value of the text based on the average brightness value of the sub-image and the contrast;
and determining the gray value of the text according to the determined brightness value of the text.
7. The apparatus of claim 5, wherein the color determination unit is further configured to:
setting the color of the text to be input into the text box to be black in response to determining that the average brightness value of the sub-image is larger than a preset threshold value;
and setting the color of the text to be input into the text box to be white in response to determining that the average brightness value of the sub-image is smaller than or equal to a preset threshold value.
8. The apparatus according to one of claims 5-7, wherein the sub-image determination unit is further configured to:
determining a location of at least one vertex of the text box in the canvas;
acquiring the length and the width of the text box;
the sub-image is determined from the canvas based on the determined location of the at least one vertex and the length and width.
9. An electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon,
when executed by the one or more processors, causes the one or more processors to implement the method of any of claims 1-4.
10. A computer readable medium having stored thereon a computer program, wherein the program when executed by a processor implements the method of any of claims 1-4.
CN201910565932.3A 2019-06-27 2019-06-27 Method and device for determining color of text Active CN110263301B (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
CN201910565932.3A CN110263301B (en) 2019-06-27 2019-06-27 Method and device for determining color of text
KR1020190167075A KR102360271B1 (en) 2019-06-27 2019-12-13 Methods and devices for determining the color of a text
JP2019230485A JP7261732B2 (en) 2019-06-27 2019-12-20 Method and apparatus for determining character color
US16/722,302 US11481927B2 (en) 2019-06-27 2019-12-20 Method and apparatus for determining text color

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910565932.3A CN110263301B (en) 2019-06-27 2019-06-27 Method and device for determining color of text

Publications (2)

Publication Number Publication Date
CN110263301A CN110263301A (en) 2019-09-20
CN110263301B true CN110263301B (en) 2023-12-05

Family

ID=67922170

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910565932.3A Active CN110263301B (en) 2019-06-27 2019-06-27 Method and device for determining color of text

Country Status (4)

Country Link
US (1) US11481927B2 (en)
JP (1) JP7261732B2 (en)
KR (1) KR102360271B1 (en)
CN (1) CN110263301B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112203022B (en) * 2020-10-28 2022-08-19 努比亚技术有限公司 Electrochromic control method and device and computer readable storage medium
CN112862927B (en) * 2021-01-07 2023-07-25 北京字跳网络技术有限公司 Method, apparatus, device and medium for publishing video
US11496738B1 (en) * 2021-03-24 2022-11-08 Amazon Technologies, Inc. Optimized reduced bitrate encoding for titles and credits in video content

Citations (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7064759B1 (en) * 2003-05-29 2006-06-20 Apple Computer, Inc. Methods and apparatus for displaying a frame with contrasting text
JP2008134472A (en) * 2006-11-28 2008-06-12 Canon Inc Information processor and information processing method
CN102063809A (en) * 2010-12-29 2011-05-18 鸿富锦精密工业(深圳)有限公司 Electronic reading device and control method thereof
CN102289668A (en) * 2011-09-07 2011-12-21 谭洪舟 Binaryzation processing method of self-adaption word image based on pixel neighborhood feature
CN102340698A (en) * 2011-10-12 2012-02-01 福建新大陆通信科技股份有限公司 Scalable vector graphics (SVG)-based set-top box interface representation method
CN102339461A (en) * 2010-07-27 2012-02-01 夏普株式会社 Method and equipment for enhancing image
JP2013004094A (en) * 2011-06-16 2013-01-07 Fujitsu Ltd Text emphasis method and device and text extraction method and device
CN103019682A (en) * 2012-11-20 2013-04-03 清华大学 Method for displaying date by combining user-defined graphics into SVG (Scalable Vector Graphics)
CN103150742A (en) * 2011-12-06 2013-06-12 上海可鲁系统软件有限公司 Method and device for vector graphic dynamic rendering
CN103166945A (en) * 2011-12-14 2013-06-19 北京千橡网景科技发展有限公司 Picture processing method and system
CN103226619A (en) * 2013-05-23 2013-07-31 北京邮电大学 Native vector diagram format conversion method and system
CN103377468A (en) * 2012-04-26 2013-10-30 上海竞天科技股份有限公司 Image processing device and image processing method
CN103955549A (en) * 2014-05-26 2014-07-30 重庆大学 Web GIS system based on SVG and data input and search method thereof
CN104391691A (en) * 2014-11-07 2015-03-04 久邦计算机技术(广州)有限公司 Icon and text processing method
CN105302445A (en) * 2015-11-12 2016-02-03 小米科技有限责任公司 Graphical user interface drawing method and device
CN105975955A (en) * 2016-05-27 2016-09-28 北京好运到信息科技有限公司 Detection method of text area in image
CN107066430A (en) * 2017-04-21 2017-08-18 广州爱九游信息技术有限公司 Image processing method, device, service end and client
CN107250968A (en) * 2015-02-24 2017-10-13 恩波里亚电信股份两合公司 The operating method of mobile terminal device, the application for mobile terminal device and mobile terminal device
CN107977946A (en) * 2017-12-20 2018-05-01 百度在线网络技术(北京)有限公司 Method and apparatus for handling image
CN108550328A (en) * 2017-04-10 2018-09-18 韩厚华 A kind of more picture painting canvas timing displaying devices
CN109522975A (en) * 2018-09-18 2019-03-26 平安科技(深圳)有限公司 Handwriting samples generation method, device, computer equipment and storage medium

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070283246A1 (en) * 2004-04-08 2007-12-06 Just System Corporation Processing Documents In Multiple Markup Representations
JPWO2006137565A1 (en) * 2005-06-24 2009-01-22 株式会社ジャストシステム Document processing apparatus and document processing method
WO2007028137A2 (en) 2005-09-01 2007-03-08 Nokia Corporation Method for embedding svg content into an iso base media file format for progressive downloading and streaming of rich media content
US7576755B2 (en) * 2007-02-13 2009-08-18 Microsoft Corporation Picture collage systems and methods
TWI401669B (en) * 2008-04-11 2013-07-11 Novatek Microelectronics Corp Image processing circuit and method thereof for enhancing text displaying
KR101023389B1 (en) * 2009-02-23 2011-03-18 삼성전자주식회사 Apparatus and method for improving performance of character recognition
JP5350862B2 (en) * 2009-04-03 2013-11-27 株式会社ソニー・コンピュータエンタテインメント Portable information terminal and information input method
US8806331B2 (en) * 2009-07-20 2014-08-12 Interactive Memories, Inc. System and methods for creating and editing photo-based projects on a digital network
US20160154239A9 (en) * 2010-02-03 2016-06-02 Hoyt Mac Layson, JR. Head Mounted Portable Wireless Display Device For Location Derived Messaging
JP2012133195A (en) * 2010-12-22 2012-07-12 Nippon Telegr & Teleph Corp <Ntt> High visibility color presenting device, high visibility color presenting method and high visibility color presenting program
RU2523925C2 (en) * 2011-11-17 2014-07-27 Корпорация "САМСУНГ ЭЛЕКТРОНИКС Ко., Лтд." Method and apparatus for dynamically visualising collection of images in form of collage
JP2013123119A (en) * 2011-12-09 2013-06-20 Sharp Corp Image processing apparatus, image forming apparatus, image reading apparatus, image processing method, computer program, and recording medium
US9542907B2 (en) 2013-06-09 2017-01-10 Apple Inc. Content adjustment in graphical user interface based on background content
US20150287227A1 (en) * 2014-04-06 2015-10-08 InsightSoftware.com International Unlimited Dynamic filling of shapes for graphical display of data
JP6378645B2 (en) * 2014-06-13 2018-08-22 キヤノン株式会社 Information processing apparatus, control method, and program
TWI629675B (en) * 2017-08-18 2018-07-11 財團法人工業技術研究院 Image recognition system and information displaying method thereof


Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
A Survey of Scholarly Data Visualization; Jiaying Liu et al.; IEEE; vol. 6; pp. 19205-19221 *
Analysis of SVG technology; Chen Yanan; Xie Shengshi; Xing Yijun; South China Financial Computer (No. 08); pp. 46-47 *
Research on implementing flowchart display based on SVG drawing technology; Zhang Ying; Computer & Telecommunication (No. 5); pp. 77-79 *
Research on parsing methods for hand-drawn material in hand-drawn animation videos; Wei Bo et al.; Software; vol. 39 (No. 11); pp. 178-181 *
Design and implementation of an IoT monitoring, display and control system; Zhong Ziyan; China Masters' Theses Full-text Database, Information Science and Technology (No. 3); pp. I138-1722 *

Also Published As

Publication number Publication date
US11481927B2 (en) 2022-10-25
KR102360271B1 (en) 2022-02-07
CN110263301A (en) 2019-09-20
JP7261732B2 (en) 2023-04-20
JP2021006982A (en) 2021-01-21
US20200410718A1 (en) 2020-12-31
KR20210001858A (en) 2021-01-06

Similar Documents

Publication Publication Date Title
CN110263301B (en) Method and device for determining color of text
US11514263B2 (en) Method and apparatus for processing image
CN110222694B (en) Image processing method, image processing device, electronic equipment and computer readable medium
CN113808231B (en) Information processing method and device, image rendering method and device, and electronic device
CN110675465B (en) Method and apparatus for generating image
CN110211030B (en) Image generation method and device
CN112306793A (en) Method and device for monitoring webpage
CN110865862A (en) Page background setting method and device and electronic equipment
CN112181568A (en) Locally adapting screen method and apparatus
US11190653B2 (en) Techniques for capturing an image within the context of a document
CN109272526B (en) Image processing method and system and electronic equipment
CN113923474B (en) Video frame processing method, device, electronic equipment and storage medium
CN110633773B (en) Two-dimensional code generation method and device for terminal equipment
CN110751251B (en) Method and device for generating and transforming two-dimensional code image matrix
CN110555799A (en) Method and apparatus for processing video
US9696957B2 (en) Graphic processing method, system and server
CN115756461A (en) Annotation template generation method, image identification method and device and electronic equipment
US10319126B2 (en) Ribbon to quick access toolbar icon conversion
CN116823700A (en) Image quality determining method and device
CN110599437A (en) Method and apparatus for processing video
CN113569092B (en) Video classification method and device, electronic equipment and storage medium
CN115937338B (en) Image processing method, device, equipment and medium
CN117315172B (en) Map page configuration method, map page configuration device, electronic equipment and computer readable medium
CN113435454B (en) Data processing method, device and equipment
CN114089994A (en) Method and device for adjusting text color

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant