CN110570493B - Font mapping processing method and device, storage medium and electronic equipment - Google Patents


Info

Publication number
CN110570493B
CN110570493B
Authority
CN
China
Prior art keywords
value
pixel
font
block
channel value
Prior art date
Legal status
Active
Application number
CN201910872757.2A
Other languages
Chinese (zh)
Other versions
CN110570493A (en)
Inventor
潘乐乐
任帅
Current Assignee
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN201910872757.2A
Publication of CN110570493A
Application granted
Publication of CN110570493B
Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T9/00 Image coding
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Image Processing (AREA)
  • Compression Of Band Width Or Redundancy In Fax (AREA)

Abstract

The embodiment of the invention relates to a font mapping processing method and device, a storage medium and electronic equipment, and relates to the technical field of image rendering, wherein the method comprises the following steps: dividing a font map to be processed into a plurality of pixel blocks, and obtaining two interpolation endpoints of each pixel block according to a maximum channel value, a minimum channel value and a brightness value of each pixel block; according to the current channel value of each pixel in the pixel block, calculating the weight value of each pixel between the two interpolation endpoints, and storing the weight value of each pixel into the current compression block corresponding to the pixel block to obtain a target compression block; and compressing the font map to be processed by utilizing the target compression block to obtain compressed texture data, and uploading the compressed texture data to a mapping area. The embodiment of the invention improves the compression efficiency and the rendering efficiency.

Description

Font mapping processing method and device, storage medium and electronic equipment
Technical Field
The embodiment of the invention relates to the technical field of image rendering, in particular to a font mapping processing method, a font mapping processing device, a computer readable storage medium and electronic equipment.
Background
In existing computer platform applications, there are often situations where a large number of dynamic Chinese characters need to be displayed. For example, Chinese-language games running on a computer or game console need to dynamically display a large number of Chinese characters. In the prior art, displaying these Chinese characters requires generating textures from them, and because of the huge number of Chinese characters, a large amount of texture memory is occupied. For graphics accelerators with limited available resources, texture memory usage directly affects display speed: Chinese characters occupying a large amount of texture memory increases the demand on video memory and greatly increases the bandwidth required for reading the texture maps, so the display speed is slowed down.
In order to solve the above problem, the prior art adopts the following technical scheme: firstly, a dot matrix of the Chinese characters to be displayed is generated, and a picture of the Chinese characters to be displayed is generated according to the dot matrix; then a texture in a compressed format with the same size as the picture of the Chinese characters to be displayed is created; finally, the picture of the Chinese characters to be displayed is correspondingly filled into the texture of the compressed format, and the compressed-format texture of the Chinese characters to be displayed is stored.
However, this solution has the following drawbacks: on one hand, the textures occupy video memory, which easily places a heavy burden on the system when video memory resources are limited; on the other hand, the texture format of the text texture is different from that of common picture display nodes, so the text texture and common picture display nodes cannot be rendered in batches.
Therefore, it is necessary to provide a new font mapping method and apparatus.
It should be noted that the information of the present invention in the above background section is only for enhancing the understanding of the background of the present invention and thus may include information that does not form the prior art that is already known to those of ordinary skill in the art.
Disclosure of Invention
The invention aims to provide a font mapping processing method, a font mapping processing device, a computer readable storage medium and electronic equipment, so as to overcome, at least to a certain extent, the problem that text textures cannot be batch-rendered with textures of common pictures due to the limitations and defects of the related art.
According to one aspect of the present disclosure, there is provided a font mapping processing method, including:
dividing a font map to be processed into a plurality of pixel blocks, and obtaining two interpolation endpoints of each pixel block according to a maximum channel value, a minimum channel value and a brightness value of each pixel block;
According to the current channel value of each pixel in the pixel block, calculating the weight value of each pixel between the two interpolation endpoints, and storing the weight value of each pixel into the current compression block corresponding to the pixel block to obtain a target compression block;
and compressing the font mapping to be processed by utilizing the target compression block to obtain compressed texture data, and uploading the compressed texture data to a mapping area.
In one exemplary embodiment of the present disclosure, deriving two interpolation endpoints for each of the pixel blocks from a maximum channel value, a minimum channel value, and a luminance value of each of the pixel blocks includes:
calculating the maximum channel value and the minimum channel value of each pixel block;
obtaining two interpolation endpoints of each pixel block according to the maximum channel value, the minimum channel value and the first brightness value and the second brightness value of each pixel block;
wherein one of the two interpolation endpoints comprises a maximum channel value and a first brightness value; the other interpolation endpoint comprises a minimum channel value and a second brightness value;
wherein the first luminance value and the second luminance value are the same.
In one exemplary embodiment of the present disclosure, deriving two interpolation endpoints for each of the pixel blocks from the maximum channel value, the minimum channel value, and the first luminance value and the second luminance value of each of the pixel blocks includes:
Judging whether the maximum channel value is equal to the minimum channel value;
and when the maximum channel value is not equal to the minimum channel value, obtaining two interpolation endpoints of each pixel block according to the maximum channel value, the minimum channel value and the first brightness value and the second brightness value of each pixel block.
In an exemplary embodiment of the present disclosure, calculating a weight value for each pixel between the two interpolation endpoints according to a current channel value for each pixel in each pixel block includes:
obtaining a maximum coding value and a minimum coding value corresponding to the maximum channel value and the minimum channel value from a first preset list;
obtaining a maximum decoding value and a minimum decoding value corresponding to the maximum coding value and the minimum coding value from a second preset list;
and acquiring a weight value of each pixel between the two interpolation endpoints from a third preset list according to the current channel value, the maximum decoding value and the minimum decoding value of each pixel in each pixel block.
In an exemplary embodiment of the present disclosure, the first preset list, the second preset list, and the third preset list are all calculated by an offline manner based on an ASTC algorithm.
In an exemplary embodiment of the present disclosure, compressing the font map to be processed with the target compression block to obtain compressed texture data includes:
and compressing the font mapping to be processed through a single plane compression mode by utilizing the target compression block to obtain the compressed texture data.
In one exemplary embodiment of the present disclosure, uploading the compressed texture data to a map area includes:
creating a physical map according to the data format of the compressed texture data, and dividing the physical map into a plurality of map areas with preset sizes;
uploading the compressed texture data to a map area in an idle state.
In an exemplary embodiment of the present disclosure, after uploading the compressed texture data to a map area, the font-map processing method further includes:
and pooling the to-be-processed font maps with the same rendering state in the map area, and rendering the pooled to-be-processed font maps.
In an exemplary embodiment of the present disclosure, the font map comprises a plain font map and/or an artistic font map;
when the font mapping is an artistic font mapping, after dividing the font mapping to be processed into a plurality of pixel blocks, the font mapping processing method further comprises:
And performing expansion processing on the pixels included in each pixel block by using target pixels having the same color value as the pixels included in the pixel block.
According to an aspect of the present disclosure, there is provided a font mapping apparatus including:
the first processing module is used for dividing the font mapping to be processed into a plurality of pixel blocks, and obtaining two interpolation endpoints of each pixel block according to the maximum channel value, the minimum channel value and the brightness value of each pixel block;
the weight value calculation module is used for calculating the weight value of each pixel between the two interpolation endpoints according to the current channel value of each pixel in the pixel block, and storing the weight value of each pixel into the current compression block corresponding to the pixel block to obtain a target compression block;
and the compression module is used for compressing the font mapping to be processed by utilizing the target compression block to obtain compressed texture data, and uploading the compressed texture data to a mapping area.
According to one aspect of the present disclosure, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the font mapping processing method of any one of the above.
According to one aspect of the present disclosure, there is provided an electronic device including:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform any of the font mapping methods described above via execution of the executable instructions.
On one hand, the font mapping processing method and device of the embodiment of the invention divide the font map to be processed into a plurality of pixel blocks, and obtain two interpolation endpoints of each pixel block according to the maximum channel value, the minimum channel value and the brightness value of each pixel block; then, according to the current channel value of each pixel in the pixel block, the weight value of each pixel between the two interpolation endpoints is calculated, and the weight value of each pixel is stored into the current compression block corresponding to the pixel block to obtain a target compression block; finally, the font map to be processed is compressed by utilizing the target compression block to obtain compressed texture data, and the compressed texture data is uploaded to a map area, so that the compressed texture data of the font and the compressed texture data of common pictures can be batch-rendered through the map area. This solves the problem that, in the prior art, batch rendering cannot be carried out because the texture formats of text textures and common picture display nodes are different, and improves batch rendering efficiency. On the other hand, compressing the font map to be processed by utilizing the target compression block to obtain compressed texture data alleviates the problem in the prior art that textures occupy video memory and easily place a heavy burden on the system under limited video memory resources, thereby reducing the burden on the system. Further, calculating the weight value of each pixel between the two interpolation endpoints according to the current channel value of each pixel in the pixel block and storing the weight value of each pixel into the current compression block corresponding to the pixel block to obtain the target compression block improves the accuracy of the target compression block and further improves the accuracy of the compressed texture data; at the same time, the compression efficiency is improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention as claimed.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention. It is evident that the drawings in the following description are only some embodiments of the present invention and that other drawings may be obtained from these drawings without inventive effort for a person of ordinary skill in the art.
Fig. 1 schematically shows a flow chart of a font mapping processing method according to an exemplary embodiment of the invention.
Fig. 2 schematically shows a flow chart of a method of calculating a weight value for each pixel between two interpolation endpoints from the current channel value of each pixel in a block of pixels according to an exemplary embodiment of the invention.
Fig. 3 schematically shows a flow chart of another font mapping processing method according to an exemplary embodiment of the invention.
Fig. 4 schematically shows a flow chart of a method of compressing a color artwork texture having a tracing or shading, according to an exemplary embodiment of the present invention.
Fig. 5 schematically shows an example diagram of an application scenario for compressing a color artwork texture with a tracing or shading according to an example embodiment of the present invention.
FIG. 6 schematically shows a flow chart of a method of rendering a font map according to an example embodiment of the invention.
Fig. 7 schematically shows a block diagram of a font mapping processing apparatus according to an exemplary embodiment of the invention.
Fig. 8 schematically shows an electronic device for implementing the above-described font mapping processing method according to an exemplary embodiment of the present invention.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, the exemplary embodiments may be embodied in many forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the invention. One skilled in the relevant art will recognize, however, that the invention may be practiced without one or more of the specific details, or with other methods, components, devices, steps, etc. In other instances, well-known aspects have not been shown or described in detail to avoid obscuring aspects of the invention.
Furthermore, the drawings are merely schematic illustrations of the present invention and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus a repetitive description thereof will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in software or in one or more hardware modules or integrated circuits or in different networks and/or processor devices and/or microcontroller devices.
In this exemplary embodiment, a font mapping processing method is provided first, where the method may operate on a server, a server cluster, or a cloud server; of course, those skilled in the art may also operate the method of the present invention on other platforms as required, and this is not a particular limitation in the present exemplary embodiment. Referring to fig. 1, the font mapping processing method may include the steps of:
s110, dividing the font map to be processed into a plurality of pixel blocks, and obtaining two interpolation endpoints of each pixel block according to the maximum channel value, the minimum channel value and the brightness value of each pixel block.
And S120, calculating the weight value of each pixel between the two interpolation endpoints according to the current channel value of each pixel in the pixel block, and storing the weight value of each pixel into the current compression block corresponding to the pixel block to obtain a target compression block.
And S130, compressing the font mapping to be processed by utilizing the target compression block to obtain compressed texture data, and uploading the compressed texture data to a mapping area.
In the font mapping processing method, on one hand, the font map to be processed is divided into a plurality of pixel blocks, and two interpolation endpoints of each pixel block are obtained according to the maximum channel value, the minimum channel value and the brightness value of each pixel block; then, according to the current channel value of each pixel in the pixel block, the weight value of each pixel between the two interpolation endpoints is calculated, and the weight value of each pixel is stored into the current compression block corresponding to the pixel block to obtain a target compression block; finally, the font map to be processed is compressed by utilizing the target compression block to obtain compressed texture data, and the compressed texture data is uploaded to a map area, so that the compressed texture data of the font and the compressed texture data of common pictures can be batch-rendered through the map area. This solves the problem that, in the prior art, batch rendering cannot be carried out because the texture formats of text textures and common picture display nodes are different, and improves batch rendering efficiency. On the other hand, compressing the font map to be processed by utilizing the target compression block to obtain compressed texture data alleviates the problem in the prior art that textures occupy video memory and easily place a heavy burden on the system under limited video memory resources, thereby reducing the burden on the system. Further, calculating the weight value of each pixel between the two interpolation endpoints according to the current channel value of each pixel in the pixel block and storing the weight value of each pixel into the current compression block corresponding to the pixel block to obtain the target compression block improves the accuracy of the target compression block and further improves the accuracy of the compressed texture data; at the same time, the compression efficiency is improved.
Hereinafter, each step involved in the font mapping processing method in the exemplary embodiment of the present disclosure will be explained and illustrated in detail with reference to the accompanying drawings.
First, proper nouns involved in the exemplary embodiments of the present disclosure are explained as follows.
Color endpoint: a color vector representing an RGBA color; it has four components.
Color endpoint value: an RGBA color vector can be broken down into n (n <= 4) independent components, each of which is a color endpoint value.
Color endpoint encoded value: the value obtained by converting a color endpoint value according to a certain compression coding algorithm. If a color endpoint value V0 is compression-encoded and then decoded into the color endpoint decoded value V0', then V0 ≈ V0', and the error between the two is the precision loss of compression encoding/decoding.
CEM (color endpoint compression mode): describes how to convert the n color endpoint decoded values into two color endpoints.
LDR Luminance+Alpha (direct): one of the color endpoint compression modes, representing the conversion of 4 color endpoint decoded values V0', V1', V2' and V3' into two color endpoints e0 and e1, where the RGBA components of e0 are (V0', V0', V0', V2') and the RGBA components of e1 are (V1', V1', V1', V3').
LDR RGBA (direct): one of the color endpoint compression modes, representing the conversion of 8 color endpoint decoded values into two color endpoints e0 and e1; for the conversion rule, refer to the ASTC format specification.
Compressed data block: a pixel block of size w x h compressed into a 128-bit data block according to the ASTC compression format. A 4x4 pixel block size is selected for compression herein.
Quint bits: the high three bits of a number taken as a value; when this value is less than 5 (a base-5 digit), those high bits may be referred to as the Quint bits of the number.
Trit bits: the high bits of a number taken as a value; when this value is less than 3 (a base-3 digit), those high bits may be called the Trit bits of the number.
LSB bits: the least significant bits (Least Significant Bits) of a number.
ISE: Integer Sequence Encoding, an integer lossless compression coding method. If an integer can be split into a Quint-bits + LSB-bits composition, the Quint bits of 3 such integers can be expressed in only 7 bits after ISE compression coding; if an integer can be split into a Trit-bits + LSB-bits composition, the Trit bits of 5 such integers need only 8 bits after ISE compression coding.
Color endpoint decoding value: the color endpoint encoded value is converted to another value according to the corresponding decoding algorithm (corresponding to the compression encoding algorithm of the value), which is the color endpoint decoded value.
In step S110, the font map to be processed is divided into a plurality of pixel blocks, and two interpolation endpoints of each pixel block are obtained according to the maximum channel value, the minimum channel value and the luminance value of each pixel block.
In the present exemplary embodiment, first, a font map to be processed is divided into a plurality of pixel blocks; wherein the pixel block may be, for example, a 4*4 pixel block; or may be a 2 x 2 pixel block or other pixel block, which is not particularly limited in this example; and secondly, obtaining two interpolation endpoints of each pixel block according to the maximum channel value, the minimum channel value and the brightness value of each pixel block. Specifically, obtaining the two interpolation endpoints of each pixel block according to the maximum channel value, the minimum channel value, and the luminance value of each pixel block may include: firstly, calculating a maximum channel value and a minimum channel value of the font map to be processed; then, obtaining two interpolation endpoints of the font mapping to be processed according to the maximum channel value, the minimum channel value and the first brightness value and the second brightness value of the font mapping to be processed; wherein one of the two interpolation endpoints comprises a maximum channel value and a first brightness value; the other interpolation endpoint comprises a minimum channel value and a second brightness value; the first luminance value and the second luminance value are the same. In detail:
Firstly, the maximum channel value maxAlpha and the minimum channel value minAlpha are calculated; then two interpolation endpoints are obtained according to the maximum channel value, the minimum channel value, the first brightness value and the second brightness value. For example, e0 and e1 are the two color endpoints; according to the description above, Luminance = 0xFF, where Luminance is the luminance value, v0 and v1 are the first luminance value and the second luminance value respectively, and v0 = v1 = 0xFF, v2 = minAlpha, v3 = maxAlpha. Further, v0, v1, v2 and v3 may be referred to in the exemplary embodiments of the present invention as color endpoint values; v0, v1, v2 and v3 need to be encoded according to the coding mode of the color endpoint values, and the encoded values are referred to herein as color endpoint encoded values; the 4 color endpoint encoded values are then ISE compression-encoded according to the ASTC format document and stored in the corresponding positions of the current ASTC compressed block (the block size is 128 bits).
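For illustration only, the following Python sketch (the function name and structure are assumptions for this example, not the patent's implementation; the encoding and ISE packing steps described above are omitted) shows how the four color endpoint values of a 4x4 pixel block can be formed in this way:

def block_endpoint_values(alpha_block):
    # alpha_block: the 16 Alpha values (0-255) of a 4x4 pixel block.
    max_alpha = max(alpha_block)   # maximum channel value maxAlpha
    min_alpha = min(alpha_block)   # minimum channel value minAlpha
    v0 = v1 = 0xFF                 # first and second luminance values
    v2 = min_alpha                 # Alpha of interpolation endpoint e0
    v3 = max_alpha                 # Alpha of interpolation endpoint e1
    return v0, v1, v2, v3

Under the LDR Luminance+Alpha (direct) mode these four values correspond to the endpoints e0 = (0xFF, 0xFF, 0xFF, minAlpha) and e1 = (0xFF, 0xFF, 0xFF, maxAlpha), up to the quantization error discussed later.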
It should be further added that, before calculating the interpolation endpoint, the method further includes: judging whether the maximum channel value is equal to the minimum channel value; and when the maximum channel value is not equal to the minimum channel value, obtaining two interpolation endpoints of each pixel block according to the maximum channel value, the minimum channel value and the first brightness value and the second brightness value of each pixel block. And when the maximum channel value is the same as the minimum channel value, the color value in the pixel block can be directly written into the compression block.
And, since font maps can be classified into ordinary font maps and artistic font maps (such as those with lace or halo effects); when the font map to be processed is an artistic font map, after obtaining the pixel blocks, it is also necessary to perform expansion processing on the pixels included in each of the pixel blocks by using target pixels having the same color value as the pixels included in each of the pixel blocks. For example, each pixel in a 2x2 pixel block may be expanded to four pixels of the same color value, resulting in a 4x4 pixel block. It should be noted that, because of the particularity of the artistic font map, in order to restore the artistic font to the greatest extent during rendering, the artistic font map may be divided into smaller pixel blocks; however, in order to allow batch rendering with other ordinary font maps, the expansion processing can be performed to obtain pixel blocks with the same number of pixels as those of the ordinary font maps.
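As a sketch of this expansion step (assuming the artistic font map is given as a two-dimensional list of color values; the helper name expand_pixels_2x is hypothetical):

def expand_pixels_2x(pixels):
    # Each source pixel is duplicated into a 2x2 group of target pixels with the
    # same color value, so a 2x2 pixel block becomes a 4x4 pixel block.
    expanded = []
    for row in pixels:
        doubled_row = []
        for color in row:
            doubled_row.extend([color, color])   # duplicate horizontally
        expanded.append(doubled_row)
        expanded.append(list(doubled_row))       # duplicate vertically
    return expanded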
In step S120, according to the current channel value of each pixel in the pixel block, a weight value of each pixel between the two interpolation endpoints is calculated, and the weight value of each pixel is stored into the current compression block corresponding to the pixel block to obtain a target compression block.
In the present exemplary embodiment, first, a weight value of each pixel between two interpolation endpoints is calculated from the current channel value of each pixel in the pixel block. Specifically, referring to fig. 2, calculating the weight value of each pixel between two interpolation endpoints according to the current channel value of each pixel in the font map to be processed may include step S210 to step S230, which will be described in detail below.
In step S210, a maximum code value and a minimum code value corresponding to the maximum channel value and the minimum channel value are obtained from a first preset list.
In step S220, a maximum decoding value and a minimum decoding value corresponding to the maximum encoding value and the minimum encoding value are obtained from a second preset list.
In step S230, a weight value of each pixel between the two interpolation endpoints is obtained from a third preset list according to the current channel value, the maximum decoding value and the minimum decoding value of each pixel in the font map to be processed.
Hereinafter, step S210 to step S230 will be explained and illustrated.
First, the first preset list, the second preset list and the third preset list are all obtained by offline calculation based on the ASTC algorithm. Next, the first preset list (encoding table), the second preset list (decoding table) and the third preset list (weight encoding table) are explained and illustrated.
First, the encoding table A is a mapping from color endpoint values to color endpoint encoded values; since the coding mode of each component of the color endpoint has been determined in advance, the encoding table A can be calculated offline. The ASTC format document describes how to convert from color endpoint encoded values to color endpoint decoded values, so the best encoding table A can be calculated from the decoding algorithm. Assuming that each component of the color endpoint is represented using a compressed encoded value of LSB bits (6 bits) + a Trit bit (encoded value range [0,191]), the offline calculation of the encoding table A is as follows, where unqToValue is the encoding table A and valueToUnq is the decoding table C:
# Offline construction of decoding table C (valueToUnq) and encoding table A (unqToValue)
# for the "6 LSB bits + Trit" color endpoint encoding (encoded value range [0, 191]).
valueToUnq = [0 for i in range(192)]   # encoded value -> decoded (unquantized) value
unqToValue = [0 for i in range(256)]   # original 8-bit value -> best encoded value

# Decode every possible encoded value according to the ASTC unquantization rules.
for v in range(192):
    fedcba = v & 0x3F
    f = (fedcba >> 5) & 0x1
    e = (fedcba >> 4) & 0x1
    d = (fedcba >> 3) & 0x1
    c = (fedcba >> 2) & 0x1
    b = (fedcba >> 1) & 0x1
    a = (fedcba >> 0) & 0x1
    A = 0x1FF if a else 0
    B = (f << 8) | (e << 7) | (d << 6) | (c << 5) | (b << 4) | f
    C = 5
    D = (v >> 6) & 0x3          # the Trit value of the encoded number
    unq = D * C + B
    unq = unq ^ A
    unq = (A & 0x80) | (unq >> 2)
    valueToUnq[v] = unq

# For every original 8-bit value, search for the encoded value whose decoded result is closest.
for unq in range(256):
    bestValue = 0
    bestUnqDiff = (unq - valueToUnq[bestValue]) ** 2
    for v in range(192):
        diff = (unq - valueToUnq[v]) ** 2
        if diff < bestUnqDiff:
            bestUnqDiff = diff
            bestValue = v
    unqToValue[unq] = bestValue
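Once the two tables have been built, the encoded and decoded values of a channel value are obtained by direct lookup, for example (a usage sketch using the variable names above):

maxAlpha = 0xD0
encMax = unqToValue[maxAlpha]      # color endpoint encoded value of maxAlpha
maxDecAlpha = valueToUnq[encMax]   # color endpoint decoded value, close to maxAlpha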
secondly, an ISE compression coding table calculation algorithm of the Trit bit/the Quint bit: according to the description of the ASTC format document, each Trit bit needs 2bits to be expressed, and 5 Trit bits need 10bits to be expressed, but considering the case that each Trit bit cannot be equal to (11) B, 5 Trit bits can be compressed to 8bits of space. The same is true for the compression of the Quint bits. Here we describe only the implementation of ISE compression encoding tables for the Trit bits. The ASTC format document defines a decoding algorithm, and a compression coding table of 5 Trit bits can be calculated offline according to the decoding algorithm, as follows:
TableB[256]={0u,1u,2u,4u,5u,6u,8u,9u,10u,16u,17u,18u,20u,21u,22u,24u,25u,26u,3u,7u,15u,19u,23u,27u,12u,13u,14u,32u,33u,34u,36u,37u,38u,40u,41u,42u,48u,49u,50u,52u,53u,54u,56u,57u,58u,35u,39u,47u,51u,55u,59u,44u,45u,46u,64u,65u,66u,68u,69u,70u,72u,73u,74u,80u,81u,82u,84u,85u,86u,88u,89u,90u,67u,71u,79u,83u,87u,91u,76u,77u,78u,128u,129u,130u,132u,133u,134u,136u,137u,138u,144u,145u,146u,148u,149u,150u,152u,153u,154u,131u,135u,143u,147u,151u,155u,140u,141u,142u,160u,161u,162u,164u,165u,166u,168u,169u,170u,176u,177u,178u,180u,181u,182u,184u,185u,186u,163u,167u,175u,179u,183u,187u,172u,173u,174u,192u,193u,194u,196u,197u,198u,200u,201u,202u,208u,209u,210u,212u,213u,214u,216u,217u,218u,195u,199u,207u,211u,215u,219u,204u,205u,206u,96u,97u,98u,100u,101u,102u,104u,105u,106u,112u,113u,114u,116u,117u,118u,120u,121u,122u,99u,103u,111u,115u,119u,123u,108u,109u,110u,224u,225u,226u,228u,229u,230u,232u,233u,234u,240u,241u,242u,244u,245u,246u,248u,249u,250u,227u,231u,239u,243u,247u,251u,236u,237u,238u,28u,29u,30u,60u,61u,62u,92u,93u,94u,156u,157u,158u,188u,189u,190u,220u,221u,222u,31u,63u,127u,159u,191u,255u,252u,253u,254u,0u,0u,0u,0u,0u,0u,0u,0u,0u,0u,0u,0u,0u};
The subscript index of TableB = t4×81 + t3×27 + t2×9 + t1×3 + t0, where t0, t1, t2, t3 and t4 are the values of the 5 Trit bits respectively. With this table, the ISE encoded value of multiple Trit bits can be obtained directly, and the encoded value and the LSB bits are then written directly into the compressed block according to the ASTC document format requirements.
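As a small usage sketch (assuming TableB has been loaded as an indexable array, e.g. a Python list):

def pack_five_trits(t0, t1, t2, t3, t4):
    # Returns the 8-bit ISE packing of five Trit values (each in 0..2) via TableB.
    index = t4 * 81 + t3 * 27 + t2 * 9 + t1 * 3 + t0
    return TableB[index]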
Further, regarding the two-dimensional table calculation algorithm of the weight coding value: assuming that the compression coding mode of the weight value is that the weight value of each pixel occupies 5bits of space, namely a weight value range [0,31], we take the weight coding mode as an example to describe the calculation flow of the weight coding two-dimensional table:
if the Alpha value of a certain pixel is Alpha, then the corresponding weight value of that pixel = int(round(clamp(1.0 * max(Alpha - minDecAlpha, 0) / (maxDecAlpha - minDecAlpha) * 31.0, 0.0, 31.0))), where minDecAlpha and maxDecAlpha are the color endpoint decoded values of minAlpha and maxAlpha. Since the ASTC format document requires weight values to be stored from the high-order bit to the low-order bit of the address, i.e., bit-wise reversed, the weight encoded value corresponding to each diff and maxDiff can be calculated offline and stored in a two-dimensional table. The algorithm is as follows, where TableC holds the weight encoded values in this coding mode:
def clamp(x, lo, hi):
    return max(lo, min(hi, x))
def ReverseBit5(x):                        # ASTC stores weight values bit-reversed in the block
    return int('{:05b}'.format(x)[::-1], 2)
TableC = [[0] * 256 for _ in range(256)]   # independent rows, not aliased copies
for maxDiff in range(1, 256):
    for diff in range(0, maxDiff + 1):
        weight = 1.0 * diff / maxDiff
        weight = int(round(clamp(weight * 31.0, 0.0, 31.0)))
        TableC[maxDiff][diff] = ReverseBit5(weight)
And at runtime, according to the maxDiff calculated for the pixel block and the diff calculated for the pixel, the weight encoded value of the pixel is obtained directly by table lookup and then stored directly into the compression block.
Further, for a compressed block of 4x4 pixels, two color endpoints need to be selected; since only the Alpha channel carries information, the largest and smallest Alpha values within the 4x4 pixel block are selected as the two color endpoints of the block.
For the weight values, a separate weight value per pixel is chosen, i.e. 16 weight values are needed for a block of 4x4 pixels. By utilizing the characteristic that the weight value and the color endpoint code value of the ASTC are variable in length, two compression coding modes can be adopted:
Coding mode one: the weight value of each pixel occupies 5 bits of space, i.e., a weight value range of [0,31]; each component of the color endpoint is represented using a compressed encoded value of LSB bits (6 bits) + a Trit bit (range [0,191]). Thus 16 weight values require 80 bits of space; the two color endpoints need 4 color endpoint encoded values in total according to the selected CEM, which, with ISE compression coding, occupy 31 bits; together with the 17 bits of configuration information required by this mode, the total is exactly 128 bits, meeting the ASTC document format requirement.
Coding mode two: the weight value of each pixel is represented by a compressed encoded value of LSB bits (3 bits) + a Trit bit, i.e., a weight value range of [0,23]; with ISE compression coding, the 16 weight values need 74 bits of space. Each component of the color endpoint uses an 8-bit coding mode, range [0,255]; the two color endpoints need 4 color endpoint encoded values in total according to the selected CEM, occupying 32 bits of space. Together with the 17 bits of configuration information required by this mode, 123 bits of space are used in total; since each ASTC compressed block is 128 bits, 5 bits of space are left free.
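The two bit budgets can be checked with a short calculation (a sketch; the ISE cost formulas follow the trit/quint packing described in the term definitions above):

import math

def ise_bits(count, bits, trits=0, quints=0):
    # Bit cost of ISE-encoding `count` values that each have `bits` LSB bits
    # plus, optionally, one Trit or one Quint.
    cost = count * bits
    if trits:
        cost += math.ceil(8 * count / 5)   # 5 Trits packed into 8 bits
    if quints:
        cost += math.ceil(7 * count / 3)   # 3 Quints packed into 7 bits
    return cost

# Coding mode one: 16 plain 5-bit weights + 4 endpoint values (6 bits + Trit) + 17 config bits
assert ise_bits(16, 5) + ise_bits(4, 6, trits=1) + 17 == 128
# Coding mode two: 16 weights (3 bits + Trit) + 4 plain 8-bit endpoint values + 17 config bits
assert ise_bits(16, 3, trits=1) + ise_bits(4, 8) + 17 == 123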
Finally, after each list has been obtained, each required value can be obtained by table lookup and compressed into the current compression block to obtain the target compression block.
In step S130, the target compression block is used to compress the font mapping to be processed to obtain compressed texture data, and the compressed texture data is uploaded to a mapping area.
In this example embodiment, compressing the font to be processed map with the target compression block to obtain compressed texture data may include: and compressing the font mapping to be processed through a single plane compression mode by utilizing the target compression block to obtain the compressed texture data. In detail:
Although the texture generated by ordinary text only has variation in the Alpha channel and could in theory be compressed with the single-channel compression mode of ASTC, according to the ASTC format document, if a single Luminance channel is used, the color value obtained after decompression will be (Luminance, Luminance, Luminance, 0xFF). To render such a format correctly, special processing is needed in the shader to convert Luminance into the Alpha channel, which makes it impossible to batch-render with common picture nodes. Therefore, the compression is performed in the LDR Luminance+Alpha (direct) mode with Luminance fixed to 0xFF, so that the RGBA data sampled from the compressed map in the shader is (1.0, 1.0, 1.0, Alpha).
Further, after obtaining the compressed texture data, the compressed texture data is uploaded to the mapping area. Specifically, firstly, a physical map is created according to the data format of the compressed texture data, and the physical map is divided into a plurality of map areas with preset sizes; secondly, the compressed texture data is uploaded to a map area in an idle state. In detail:
first, a physical map is created according to the data format of the compressed texture data, wherein the physical map may be, for example, 4096x4096 pixels, or may have other sizes, which is not particularly limited in this example. Then, the physical map is divided into a plurality of map areas with preset sizes, and the compressed texture data is uploaded to map areas in an idle state. The compressed texture data may be uploaded into one or more map areas based on its size.
Further, after uploading the compressed texture data to the map area in the idle state, the method may further include: batching the to-be-processed font maps with the same rendering state in the map area, and rendering the batched to-be-processed font maps. By batching the compressed texture data with the same rendering state, the number of rendering batches can be greatly reduced, which further reduces the burden on the system and improves the rendering speed. In addition, the color of the text can be handled in the same way as that of a common UI picture node and uploaded to the GPU as a vertex color attribute, so that the rendering of the text and the common UI picture node is consistent in a pixel shader of the form gl_FragColor = diffuseColor * texture2D(tex, uv); therefore the text and the picture nodes can be batch-rendered using the same simple shader, and the rendering efficiency is improved.
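Returning to the physical map and its map areas described above, a minimal sketch of this region management follows (the class, sizes and method names are illustrative assumptions, not the patent's exact scheme):

class PhysicalMap:
    # A physical map (e.g. 4096x4096 pixels) divided into map areas of a preset size.
    def __init__(self, size=4096, region_size=256):
        n = size // region_size
        self.region_size = region_size
        self.idle = [[True] * n for _ in range(n)]   # True means the map area is idle

    def acquire_region(self):
        # Return the (row, col) of an idle map area and mark it occupied, or None.
        for r, row in enumerate(self.idle):
            for c, free in enumerate(row):
                if free:
                    self.idle[r][c] = False
                    return (r, c)
        return None

page = PhysicalMap()
region = page.acquire_region()   # compressed texture data would then be uploaded into this area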
Hereinafter, the real-time compression method of the font map will be described by taking ASTC with block size 4x4 as an example, with the weights and color endpoint values compressed in coding mode one. Referring to fig. 3, the compression flow is as follows:
step S301, calculating the maximum value maxAlpha and the minimum value minAlpha of the Alpha;
step S302, judging whether the maximum value maxAlpha and the minimum value minAlpha are the same; if so, jumping to step S308; if not, jumping to step S303;
Step S303, querying the encoding table to obtain the color endpoint encoded values corresponding to the four color endpoint values 0xFF, 0xFF, minAlpha and maxAlpha;
step S304, writing the color endpoint encoded values into the compression block according to the ISE compression coding mode; wherein the ISE compression coding of the high-order bits (Trit/Quint bits) is also obtained by querying the encoding table;
step S305, according to the color endpoint encoded values, querying the decoding table to obtain the corresponding color endpoint decoded values minDecAlpha and maxDecAlpha; wherein maxDiff = maxDecAlpha - minDecAlpha;
step S306, calculating the weight value of each pixel in the pixel block: diff = Alpha - minDecAlpha; then directly querying the two-dimensional table according to maxDiff and diff to obtain the corresponding weight code, and writing the weight code into the compression block;
step S307, writing the selected compression mode and the weight coding mode into the compression block;
step S308, the single color mode: the single color and the single-color coding mode are written directly into the block according to the ASTC document;
step S309, the compression of one pixel block is ended.
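Putting steps S301 to S309 together, a simplified sketch of the per-block flow might look as follows (the table names follow the earlier listings; the actual 128-bit ASTC bit packing is abstracted into a plain dictionary here, so this illustrates the control flow rather than the patent's exact implementation):

def compress_block(alpha_block):
    # alpha_block: the 16 Alpha values (0-255) of one 4x4 pixel block.
    maxAlpha, minAlpha = max(alpha_block), min(alpha_block)
    if maxAlpha == minAlpha:                       # S302 -> S308: single color block
        return {'mode': 'single_color', 'alpha': maxAlpha}

    # S303/S304: encoded values of the four color endpoint values 0xFF, 0xFF, minAlpha, maxAlpha
    endpoints = [unqToValue[v] for v in (0xFF, 0xFF, minAlpha, maxAlpha)]

    # S305: color endpoint decoded values and their difference
    minDecAlpha = valueToUnq[unqToValue[minAlpha]]
    maxDecAlpha = valueToUnq[unqToValue[maxAlpha]]
    maxDiff = maxDecAlpha - minDecAlpha

    # S306: per-pixel weight codes by lookup in the two-dimensional table TableC
    weights = [TableC[maxDiff][min(max(a - minDecAlpha, 0), maxDiff)] for a in alpha_block]

    # S307: the chosen compression mode and weight coding mode are recorded with the data
    return {'mode': 'coding_mode_one', 'endpoints': endpoints, 'weights': weights}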
To sum up, in tests on an Intel Core i7-8700 3.2 GHz CPU, 1024 x 1024 font maps were generated for the 1300 commonly used Chinese characters and compressed with block size 4x4 using both the offline astcenc tool (an ASTC standard compression library developed by ARM) and the real-time map compression algorithm herein; the comparison results are shown in Table 1 below:
TABLE 1
Compression method                        Single-core runtime (ms)    Compression signal-to-noise ratio (dB)
astcenc (very fast)                       966.4                       47.1
astcenc (exhaustive)                      127015.3                    55.4
Real-time compression algorithm herein    3.9                         51.3
The compression method of the color art word texture with tracing or shading will be explained with reference to fig. 4 and 5. Referring to fig. 4, a method of compressing a color artwork texture with a tracing or shading may include the steps of:
in step S410, each pixel is enlarged to four pixels of the same color value. As shown in fig. 5, C00, C01, C10, and C11 are color values of four pixels, respectively, and a 2x 2-sized map is enlarged and then changed to a 4x4 map.
Step S420, selecting an ASTC compression algorithm with each block of 4x4 pixel size for compression, and judging whether four color values of C00, C01, C10 and C11 are equal; if not, jumping to step S430; if so, jumping to step S460;
in step S430, the 4x4 block is divided into upper and lower parts, or left and right parts. As shown in fig. 5, it can be divided into upper and lower parts; two color endpoints are respectively selected for the upper and lower parts of the same 4x4 block, wherein the two color endpoints of the upper part are C00 and C01, and the two color endpoints of the lower part are C10 and C11.
In step S440, the weight value of each pixel is calculated according to the minimum endpoint value and the maximum endpoint value of the upper and lower portions. As shown in fig. 5, if the minimum endpoint value of the upper half is C00, the maximum endpoint value is C01, the minimum endpoint value of the lower half is C11, and the maximum endpoint value is C10, the weight value of each pixel is as shown in fig. 5.
Step S450, according to the calculated compression encoded values of the color endpoints and the weight value of each pixel, and in combination with the selected compression mode, the color endpoint compression encoded values, the weight values and the compression mode can be written into the compression block according to the ASTC format document. The coding mode of the color endpoints is LDR RGBA (direct); each component of the color endpoint uses a compressed encoded value representation of LSB bits (2 bits) + a Quint bit (range [0,19]); the compression encoding of the color endpoints uses an offline-calculated encoding table, from which the values are obtained by direct table lookup at runtime.
Step S460, directly writing the color value of C00 into the compression block according to the ASTC format document; or taking two different color values as endpoint colors, and calculating the weight value of each pixel; then writing RGBA values of the two end point colors and the weight value of each pixel into the compression block according to the ASTC format document.
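As an illustrative sketch of steps S430 and S440 (assuming the enlarged 4x4 block is given row by row as RGBA tuples; after the 2x enlargement each half contains at most two distinct source colors, so each pixel's weight simply selects one endpoint or the other, as in the fig. 5 example):

def art_half_endpoints(half_pixels):
    # half_pixels: the 8 RGBA tuples of the upper or lower half of the enlarged 4x4 block.
    colors = list(dict.fromkeys(half_pixels))                # distinct colors, in encounter order
    e0 = colors[0]
    e1 = colors[1] if len(colors) > 1 else colors[0]         # the two color endpoints of this half
    weights = [0 if p == e0 else 31 for p in half_pixels]    # 5-bit weight range [0, 31]
    return e0, e1, weights

# Usage for a block given as rows b[0]..b[3]:
# upper = art_half_endpoints(b[0] + b[1]); lower = art_half_endpoints(b[2] + b[3])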
In the following, a method for rendering a font mapping after the font mapping processing method is processed according to the exemplary embodiment of the present disclosure is explained and described with reference to fig. 6. Referring to FIG. 6, rendering a font map may include the steps of:
In step S610, a picture of the text to be displayed is generated. The picture may be a plain text picture (only the Alpha channel carries information, RGB = (1.0, 1.0, 1.0)), or a color art-word picture (with information in all four RGBA channels) with effects such as tracing/shading.
In step S620, an ASTC compressed format texture is created that can accommodate the size of the literal picture.
In step S630, the picture is partitioned according to the size of 4x4 pixels, then fast ASTC compression is performed on a block-by-block basis, and the data of the compressed block is filled into the ASTC compression format texture.
In step S640, the compressed text texture is uploaded to the Page physical map.
Step S650, performing batch rendering on the text using the same shader as the common picture nodes.
The font mapping processing method provided by the example embodiment of the invention has at least the following advantages:
on the one hand, according to the format requirements of ASTC, specific ASTC compression modes are selected for map compression based on the characteristics of textures generated from dynamic text, which avoids the high performance overhead caused by the fact that an offline ASTC compression algorithm needs to traverse most of the modes. Specifically:
firstly, for textures generated by common characters, a single plane is selected as the compression block mode, the size of a compression block is 4x4 pixels, and the compression coding mode of the weight value is that the weight value of each pixel occupies 5 bits of space, namely a weight value range of [0,31]; the coding mode of the color endpoints is LDR Luminance+Alpha (direct), wherein Luminance is fixed to 0xFF and Alpha varies between 0 and 255 according to the characters; each component of the color endpoint is represented using a compressed encoded value of LSB bits (6 bits) + a Trit bit (range [0,191]);
Further, for the color artistic word texture with tracing/shading, the texture is enlarged by a factor of two for compression and scaled back by 1/2 through UV coordinate conversion during rendering; a single plane is selected as the compression block mode, and the size of the compression block is 4x4 pixels; for the color endpoints and weight values, the block is divided into upper and lower parts, or left and right parts, which are compression-encoded respectively; the coding mode of the color endpoints is LDR RGBA (direct), and each component of the color endpoint is represented by a compressed encoded value of LSB bits (2 bits) + a Quint bit (range [0,19]).
On the other hand, under the selected compression mode, the compression codes of the color endpoint values are calculated offline to generate a compression code table, and the compression codes of the color endpoint values are obtained by directly looking up a table in the operation process, so that the calculation cost in the operation process is saved.
In yet another aspect, for textures generated by normal text, since only Alpha changes, a weight two-dimensional mapping Table[maxDiffAlpha][diffAlpha] is computed offline for the selected compression mode, where maxDiffAlpha ranges over [1,255] and diffAlpha ranges over [0, maxDiffAlpha]. At runtime, the Alpha difference of the two color endpoints of the compressed block is used as the first index of the Table, and the difference between each pixel's Alpha value and the Alpha of the minimum color endpoint of the compressed block is used as the second index; the table lookup then yields the optimal weight compression encoded value of the pixel, so that the pixel's Alpha after ASTC decompression is closest to its original Alpha value. Furthermore, for the color artistic word texture, the weight value of each pixel is simply selected according to the mode selection described above, so it is calculated directly.
Furthermore, each pixel weight calculation adopts a color endpoint decoding value instead of a color endpoint value, so that the compression signal-to-noise ratio can be improved, and meanwhile, the calculation of the color endpoint decoding value also adopts a table look-up mode to save the cost in operation.
Further, whether the texture is generated by ordinary characters or by color artistic characters, if the color values of all pixels of a compression block are equal, or only two different color values are adopted, the special compression mode is adopted to directly perform compression coding.
The disclosure also provides a font mapping processing device. Referring to fig. 7, the font map processing apparatus may include: a first processing module 710, a weight calculation module 720, and a compression module 730. Wherein:
the first processing module 710 may be configured to divide the font map to be processed into a plurality of pixel blocks, and obtain two interpolation endpoints of each of the pixel blocks according to a maximum channel value, a minimum channel value, and a luminance value of each of the pixel blocks.
The weight value calculating module 720 may be configured to calculate a weight value of each pixel between the two interpolation endpoints according to a current channel value of each pixel in the pixel block, and store the weight value of each pixel in a current compression block corresponding to the pixel block to obtain a target compression block.
The compression module 730 may be configured to compress the font map to be processed by using the target compression block to obtain compressed texture data, and upload the compressed texture data to a mapping area.
In one exemplary embodiment of the present disclosure, deriving two interpolation endpoints for each of the pixel blocks from a maximum channel value, a minimum channel value, and a luminance value of each of the pixel blocks includes:
calculating the maximum channel value and the minimum channel value of each pixel block;
obtaining two interpolation endpoints of each pixel block according to the maximum channel value, the minimum channel value and the first brightness value and the second brightness value of each pixel block;
wherein one of the two interpolation endpoints comprises a maximum channel value and a first brightness value; the other interpolation endpoint comprises a minimum channel value and a second brightness value;
wherein the first luminance value and the second luminance value are the same.
In one exemplary embodiment of the present disclosure, deriving two interpolation endpoints for each of the pixel blocks from the maximum channel value, the minimum channel value, and the first luminance value and the second luminance value of each of the pixel blocks includes:
Judging whether the maximum channel value is equal to the minimum channel value;
and when the maximum channel value is not equal to the minimum channel value, obtaining two interpolation endpoints of each pixel block according to the maximum channel value, the minimum channel value and the first brightness value and the second brightness value of each pixel block.
In an exemplary embodiment of the present disclosure, calculating a weight value for each pixel between the two interpolation endpoints according to a current channel value for each pixel in each pixel block includes:
obtaining a maximum coding value and a minimum coding value corresponding to the maximum channel value and the minimum channel value from a first preset list;
obtaining a maximum decoding value and a minimum decoding value corresponding to the maximum coding value and the minimum coding value from a second preset list;
and acquiring a weight value of each pixel between the two interpolation endpoints from a third preset list according to the current channel value, the maximum decoding value and the minimum decoding value of each pixel in each pixel block.
In an exemplary embodiment of the present disclosure, the first preset list, the second preset list, and the third preset list are all calculated by an offline manner based on an ASTC algorithm.
In an exemplary embodiment of the present disclosure, compressing the font map to be processed with the target compression block to obtain compressed texture data includes:
and compressing the font mapping to be processed through a single plane compression mode by utilizing the target compression block to obtain the compressed texture data.
In one exemplary embodiment of the present disclosure, uploading the compressed texture data to a map area includes:
creating a physical map according to the data format of the compressed texture data, and dividing the physical map into a plurality of map areas with preset sizes;
uploading the compressed texture data to a map area in an idle state.
In an exemplary embodiment of the present disclosure, the font mapping processing apparatus further includes:
and the batch combination module is used for carrying out batch combination on each to-be-processed font chartlet with the same rendering state in the chartlet area, and rendering the to-be-processed font chartlet after batch combination.
In an exemplary embodiment of the present disclosure, the font map comprises a plain font map and/or an artistic font map;
wherein, the font mapping processing device further comprises:
the pixel expansion module may be configured to perform expansion processing on the pixels included in each of the pixel blocks using target pixels having the same color value as the pixels included in each of the pixel blocks.
The specific details of each module in the above font mapping processing apparatus are described in detail in the corresponding font mapping processing method, so that the details are not repeated here.
It should be noted that although in the above detailed description several modules or units of a device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functions of two or more modules or units described above may be embodied in one module or unit in accordance with embodiments of the invention. Conversely, the features and functions of one module or unit described above may be further divided into a plurality of modules or units to be embodied.
Furthermore, although the steps of the methods of the present invention are depicted in the accompanying drawings in a particular order, this neither requires nor implies that the steps must be performed in that particular order, or that all of the illustrated steps must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps may be combined into one step, and/or one step may be decomposed into multiple steps.
In an exemplary embodiment of the present invention, an electronic device capable of implementing the above method is also provided.
Those skilled in the art will appreciate that the various aspects of the invention may be implemented as a system, method, or program product. Accordingly, aspects of the invention may be embodied in the following forms, namely: an entirely hardware embodiment, an entirely software embodiment (including firmware, micro-code, etc.), or an embodiment combining hardware and software aspects, which may be referred to herein as a "circuit," "module," or "system."
An electronic device 800 according to such an embodiment of the invention is described below with reference to fig. 8. The electronic device 800 shown in fig. 8 is merely an example and should not be construed as limiting the functionality and scope of use of embodiments of the present invention.
As shown in fig. 8, the electronic device 800 is embodied in the form of a general purpose computing device. Components of electronic device 800 may include, but are not limited to: at least one processing unit 810, at least one storage unit 820, and a bus 830 connecting the various system components (including the storage unit 820 and the processing unit 810).
Wherein the storage unit stores program code that is executable by the processing unit 810, such that the processing unit 810 performs steps according to various exemplary embodiments of the present invention described in the "exemplary method" section of this specification. For example, the processing unit 810 may perform step S110 as shown in fig. 1: dividing a font mapping to be processed into a plurality of pixel blocks, and obtaining two interpolation endpoints of each pixel block according to a maximum channel value, a minimum channel value and a brightness value of each pixel block; step S120: according to the current channel value of each pixel in the pixel block, calculating the weight value of each pixel between the two interpolation endpoints, and storing the weight value of each pixel into the current compression block corresponding to the pixel block to obtain a target compression block; and step S130: compressing the font mapping to be processed by utilizing the target compression block to obtain compressed texture data, and uploading the compressed texture data to a mapping area.
The storage unit 820 may include readable media in the form of volatile storage units, such as Random Access Memory (RAM) 8201 and/or cache memory 8202, and may further include Read Only Memory (ROM) 8203.
Storage unit 820 may also include a program/utility 8204 having a set (at least one) of program modules 8205, such program modules 8205 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment.
Bus 830 may be one or more of several types of bus structures including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or a local bus using any of a variety of bus architectures.
The electronic device 800 may also communicate with one or more external devices 900 (e.g., keyboard, pointing device, bluetooth device, etc.), one or more devices that enable a user to interact with the electronic device 800, and/or any device (e.g., router, modem, etc.) that enables the electronic device 800 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 850. Also, electronic device 800 may communicate with one or more networks such as a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the Internet, through network adapter 860. As shown, network adapter 860 communicates with other modules of electronic device 800 over bus 830. It should be appreciated that although not shown, other hardware and/or software modules may be used in connection with electronic device 800, including, but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
From the above description of embodiments, those skilled in the art will readily appreciate that the example embodiments described herein may be implemented in software, or may be implemented in software in combination with the necessary hardware. Thus, the technical solution according to the embodiments of the present invention may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (may be a CD-ROM, a U-disk, a mobile hard disk, etc.) or on a network, and includes several instructions to cause a computing device (may be a personal computer, a server, a terminal device, or a network device, etc.) to perform the method according to the embodiments of the present invention.
In an exemplary embodiment of the present invention, a computer-readable storage medium having stored thereon a program product capable of implementing the method described above in the present specification is also provided. In some possible embodiments, the various aspects of the invention may also be implemented in the form of a program product comprising program code for causing a terminal device to carry out the steps according to the various exemplary embodiments of the invention as described in the "exemplary methods" section of this specification, when said program product is run on the terminal device.
A program product for implementing the above-described method according to an embodiment of the present invention may employ a portable compact disc read-only memory (CD-ROM) and include program code, and may be run on a terminal device, such as a personal computer. However, the program product of the present invention is not limited thereto, and in this document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium can be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium would include the following: an electrical connection having one or more wires, a portable disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The computer readable signal medium may include a data signal propagated in baseband or as part of a carrier wave with readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device, partly on a remote computing device, or entirely on the remote computing device or server. In the case of remote computing devices, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., connected via the Internet using an Internet service provider).
Furthermore, the above-described drawings are only schematic illustrations of processes included in the method according to the exemplary embodiment of the present invention, and are not intended to be limiting. It will be readily appreciated that the processes shown in the above figures do not indicate or limit the temporal order of these processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, for example, among a plurality of modules.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.

Claims (9)

1. A font mapping processing method, comprising:
dividing a font mapping to be processed into a plurality of pixel blocks, and obtaining two interpolation endpoints of each pixel block according to a maximum channel value, a minimum channel value and a brightness value of each pixel block;
According to the current channel value of each pixel in the pixel block, calculating the weight value of each pixel between the two interpolation endpoints, and storing the weight value of each pixel into the current compression block corresponding to the pixel block to obtain a target compression block; wherein each pixel has an independent weight value;
compressing the font mapping to be processed through a single plane compression mode by utilizing the target compression block to obtain compressed texture data, creating a physical mapping according to a data format of the compressed texture data, and dividing the physical mapping into a plurality of mapping areas with preset sizes; uploading the compressed texture data to a map area in an idle state; wherein the single plane compression mode comprises a color endpoint compression mode;
and batching the to-be-processed font maps with the same rendering state in the map area, and rendering the batched to-be-processed font maps.
2. The font mapping processing method according to claim 1, wherein obtaining two interpolation endpoints of each of the pixel blocks according to a maximum channel value, a minimum channel value, and a brightness value of each of the pixel blocks comprises:
Calculating the maximum channel value and the minimum channel value of each pixel block;
obtaining two interpolation endpoints of each pixel block according to the maximum channel value, the minimum channel value and the first brightness value and the second brightness value of each pixel block;
wherein one of the two interpolation endpoints comprises a maximum channel value and a first brightness value; the other interpolation endpoint comprises a minimum channel value and a second brightness value;
wherein the first luminance value and the second luminance value are the same.
3. The font mapping processing method according to claim 2, wherein obtaining two interpolation endpoints of each of the pixel blocks according to the maximum channel value, the minimum channel value, and the first luminance value and the second luminance value of each of the pixel blocks comprises:
judging whether the maximum channel value is equal to the minimum channel value;
and when the maximum channel value is not equal to the minimum channel value, obtaining two interpolation endpoints of each pixel block according to the maximum channel value, the minimum channel value and the first brightness value and the second brightness value of each pixel block.
4. The font mapping processing method according to claim 1, wherein calculating a weight value of each pixel between the two interpolation endpoints according to a current channel value of each pixel in each pixel block comprises:
Obtaining a maximum coding value and a minimum coding value corresponding to the maximum channel value and the minimum channel value from a first preset list;
obtaining a maximum decoding value and a minimum decoding value corresponding to the maximum coding value and the minimum coding value from a second preset list;
and acquiring a weight value of each pixel between the two interpolation endpoints from a third preset list according to the current channel value, the maximum decoding value and the minimum decoding value of each pixel in each pixel block.
5. The font mapping processing method according to claim 4, wherein the first preset list, the second preset list and the third preset list are all obtained by offline calculation based on the ASTC algorithm.
6. The font mapping processing method according to claim 1, characterized in that the font mapping comprises a normal font mapping and/or an artistic font mapping;
when the font mapping is an artistic font mapping, after dividing the font mapping to be processed into a plurality of pixel blocks, the font mapping processing method further comprises:
and performing expansion processing on the pixels included in each pixel block by using target pixels having the same color value as the pixels included in the pixel block.
7. A font mapping processing apparatus, comprising:
the first processing module is used for dividing the font mapping to be processed into a plurality of pixel blocks, and obtaining two interpolation endpoints of each pixel block according to the maximum channel value, the minimum channel value and the brightness value of each pixel block;
the weight value calculation module is used for calculating the weight value of each pixel between the two interpolation endpoints according to the current channel value of each pixel in the pixel block, and storing the weight value of each pixel into the current compression block corresponding to the pixel block to obtain a target compression block; wherein each pixel has an independent weight value;
the compression module is used for compressing the font mapping to be processed through a single plane compression mode by utilizing the target compression block to obtain compressed texture data, creating a physical mapping according to the data format of the compressed texture data, and dividing the physical mapping into a plurality of mapping areas with preset sizes; uploading the compressed texture data to a map area in an idle state; wherein the single plane compression mode comprises a color endpoint compression mode;
and the batching module is used for batching the to-be-processed font maps with the same rendering state in the map area, and rendering the batched to-be-processed font maps.
8. A computer readable storage medium having stored thereon a computer program, which when executed by a processor implements the font mapping processing method of any of claims 1-6.
9. An electronic device, comprising:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the font mapping processing method of any of claims 1-6 via execution of the executable instructions.
CN201910872757.2A 2019-09-16 2019-09-16 Font mapping processing method and device, storage medium and electronic equipment Active CN110570493B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910872757.2A CN110570493B (en) 2019-09-16 2019-09-16 Font mapping processing method and device, storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN110570493A (en) 2019-12-13
CN110570493B (en) 2023-07-18

Family

ID=68780229

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910872757.2A Active CN110570493B (en) 2019-09-16 2019-09-16 Font mapping processing method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN110570493B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111968190B (en) * 2020-08-21 2024-02-09 网易(杭州)网络有限公司 Compression method and device for game map and electronic equipment
CN112489180B (en) * 2020-10-30 2021-08-24 完美世界(北京)软件科技发展有限公司 Data processing method, system, electronic device and computer readable medium
CN114445264B (en) * 2022-01-25 2022-11-01 上海秉匠信息科技有限公司 Texture compression method and device, electronic equipment and computer readable storage medium
CN117710620B (en) * 2024-02-05 2024-05-07 江西求是高等研究院 Method, system, storage medium and terminal for detecting target visibility of simulation intelligent agent

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1271137A (en) * 1999-04-16 2000-10-25 旺宏电子股份有限公司 Method and device for interpolating data
CN101499174A (en) * 2009-01-16 2009-08-05 深圳市中青宝网网络科技股份有限公司 Method for reducing texture memory required in dynamic Chinese character generation
US9640149B2 (en) * 2015-07-21 2017-05-02 Think Silicon Sa Methods for fixed rate block based compression of image data
CN108810544A (en) * 2017-04-28 2018-11-13 想象技术有限公司 Multi output decoder for texture decompression

Non-Patent Citations (1)

Title
Detailed explanation of the implementation principles of several mainstream texture map compression algorithms; feng; https://gameinstitute.qq.com/community/detail/123075; 2018-03-13; pp. 1-8 *

Also Published As

Publication number Publication date
CN110570493A (en) 2019-12-13

Similar Documents

Publication Publication Date Title
CN110570493B (en) Font mapping processing method and device, storage medium and electronic equipment
CN109600618B (en) Video compression method, decompression method, device, terminal and medium
US6798833B2 (en) Video frame compression/decompression hardware system
CN113079379A (en) Video compression method, device, equipment and computer readable storage medium
CN112527752B (en) Data compression method, data compression device, computer readable storage medium and electronic equipment
JP2014027658A (en) Compression encoding and decoding method and apparatus
CN113630125A (en) Data compression method, data encoding method, data decompression method, data encoding device, data decompression device, electronic equipment and storage medium
CN114337678A (en) Data compression method, device, equipment and storage medium
CN113613289B (en) Bluetooth data transmission method, system and communication equipment
WO2021237510A1 (en) Data decompression method and system, and processor and computer storage medium
WO2023030557A2 (en) Data compression method and apparatus, and data decompression method and apparatus
CN107172425B (en) Thumbnail generation method and device and terminal equipment
CN116566397A (en) Encoding method, decoding method, encoder, decoder, electronic device, and storage medium
CN114070470A (en) Encoding and decoding method and device
CN111080728A (en) Map processing method, device, equipment and storage medium
CN112395468A (en) Number management method and device, electronic equipment and storage medium
US7733249B2 (en) Method and system of compressing and decompressing data
JPH09247466A (en) Encoding device
JP2891818B2 (en) Encoding device
JPH10105672A (en) Computer and memory integrated circuit with operation function to be used in this computer
CN115841140B (en) Anti-max pooling operation method and device, electronic equipment and storage medium
JP2006005478A (en) Image encoder and image decoder
CN115802057A (en) Data processing method, readable medium and electronic device
JP3342380B2 (en) Encoding and decoding apparatus and image processing apparatus to which the same is applied
CN116320449A (en) Multimedia file compression method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant