
Method and device for processing image layer

Info

Publication number
CN113412470A
Authority
CN
China
Prior art keywords
fillet
area
layer
data
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201980091778.9A
Other languages
Chinese (zh)
Other versions
CN113412470B (en)
Inventor
王海军
张秀峰
宋丹娜
赖焰根
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd
Publication of CN113412470A
Application granted
Publication of CN113412470B
Legal status: Active (current)
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/451Execution arrangements for user interfaces

Abstract

A method and an apparatus for processing an image layer. The method includes: reading first data from a storage device, where the first data is partial data of a fillet (rounded-corner) layer and includes image data of a fillet area in the fillet layer; generating the fillet layer from the first data and second data, where the second data is the remaining data of the fillet layer other than the partial data; and superimposing the fillet layer with at least one target layer to obtain an image to be displayed. Because only partial data of the fillet layer is read from the storage device, only that partial data needs to be stored, which reduces the amount of storage occupied by the fillet layer.

Description

Method and device for processing image layer

Technical Field
The present application relates to the field of image processing, and more particularly, to a method and apparatus for processing image layers.
Background
With the development of mobile terminal technology, more and more terminal devices adopt displays with rounded corners: the four corners of the screen are cut into curves or arcs, and hollowed-out portions with curved edges may be designed into the screen to accommodate user-visible elements such as a front-facing camera or an earpiece. These curved locations on the terminal device may be referred to as rounded corners (fillets).
However, because of the way the pixels on the screen are arranged (for example, an RGB sub-pixel arrangement or an RGBG sub-pixel arrangement), jagged edges (aliasing) tend to appear when an image is displayed near these curves, which degrades the display quality of the screen.
At present, the aliasing along the curves is softened by superimposing a fillet layer on the original layer; the region of the fillet layer that is displayed near a rounded corner of the screen may be called a fillet area. The fillet layer is generally a black layer. By controlling the transparency values of the pixels in the fillet layer, the final displayed image can be made to show, in the region where the fillet layer overlaps, either the content of the original layer or the black content of the fillet layer; in practice only the fillet area of the fillet layer actually shows its content. However, the display subsystem that performs the overlay can only composite rectangular layers, so when the image data of the fillet layer is stored, the data of all pixels of the fillet layer has to be written to memory. Storing so much fillet-layer data in memory increases memory usage and, in turn, power consumption.
Disclosure of Invention
The present application provides a method and an apparatus for processing layers, which can reduce the amount of storage required for a fillet layer.
In a first aspect, a method for processing an image layer is provided. The method includes: reading first data from a storage device, where the first data is partial data of the fillet layer and includes image data of a fillet area in the fillet layer; generating the fillet layer from the first data and second data, where the second data is the remaining data of the fillet layer other than the partial data; and superimposing the fillet layer with at least one target layer to obtain an image to be displayed.
In this technical solution, when the display subsystem reads the image data of the fillet layer from the storage device, it reads only partial data of the fillet layer rather than all of it. Correspondingly, only partial data of the fillet layer needs to be stored in the storage device, so the amount of storage occupied by the fillet layer can be reduced.
Furthermore, when the display subsystem reads the image data of the fillet layer, the read channel does not have to fetch all of the fillet-layer data, which saves read bandwidth and reduces power consumption.
Specifically, at present the display subsystem reads all of the image data of the fillet layer from memory through a read channel; the bandwidth required for the display subsystem to read data through the read channel is the read bandwidth.
It should be understood that the fillet layer in the embodiments of the present application can be regarded as an additional superimposed layer used to mitigate the aliasing that appears when an image is displayed at a rounded corner of the display screen. Because the fillet layer is mainly used at the rounded-corner positions of the screen, it need not be a complete layer matching the size of the display screen.
In the embodiments of the present application, a "rounded corner" is described relative to the display screen: a portion of the screen that has a curved edge can be regarded as a rounded corner of the screen (or screen fillet). A "fillet area" is described relative to the fillet layer: the region of the fillet layer occupied by the pixels used for the image near a rounded corner of the screen may be called a fillet area; equivalently, it is the region of the fillet layer whose content is displayed near a rounded corner of the screen. The other regions of the fillet layer are referred to as non-fillet areas.
The color data of a pixel in a layer represents the color displayed by that pixel and may include several chrominance components, such as a red component (R value), a blue component (B value), a green component (G value), or a white component (W value). Each chrominance component is typically represented by a value from 0 to 255; different values correspond to different luminances, and different combinations of chrominance components mix into different colors.
The transparency data of a pixel in a layer characterizes how transparent the pixel's color is, that is, how much of the pixel value should be displayed. For example, when a foreground layer is stacked on a background layer, the transparency of a pixel in the foreground layer controls whether the screen shows the foreground color, the background color, or a mixture of the two. The degree of transparency can be expressed as a value, referred to as the transparency value of the pixel, typically from 0 to 255. When the transparency value of a pixel in the foreground layer is smaller than or equal to a first transparency threshold, the human eye cannot distinguish the displayed color from the background color after the two layers are composited (the screen can be considered not to display the foreground color); such a pixel is considered transparent and is called a transparent pixel. When the transparency value is greater than or equal to a second transparency threshold, the human eye cannot distinguish the displayed color from the foreground color (the screen can be considered not to display the background color); such a pixel is considered opaque and is called an opaque pixel in the embodiments of the present application. A transparency value between the first and second transparency thresholds indicates a semi-transparent pixel, that is, one between transparent and opaque.
In the embodiments of the present application, when the transparency value of a pixel of the fillet layer is smaller than or equal to the first transparency threshold, the pixel is a transparent pixel and the region where such pixels are located is a transparent area; when the transparency value is greater than or equal to the second transparency threshold, the pixel is an opaque pixel and the region where such pixels are located is an opaque (non-transparent) area; when the transparency value lies between the first and second transparency thresholds (an intermediate value), the pixel is a transition-region pixel and the region where such pixels are located is a transition region. The transition region of the fillet layer lies between a transparent area and an opaque area, or between two transparent areas, and the transparency values of the pixels in the transition region change gradually.
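Purely for illustration, the following sketch shows one way this pixel classification could be coded. The concrete threshold values and names are assumptions, not part of the patent, which leaves the thresholds to the implementation.

```python
# Illustrative sketch only: threshold values are assumed for this example.
FIRST_TRANSPARENCY_THRESHOLD = 5     # at or below: treated as transparent
SECOND_TRANSPARENCY_THRESHOLD = 250  # at or above: treated as opaque

def classify_pixel(alpha: int) -> str:
    """Classify a fillet-layer pixel by its transparency value (0-255)."""
    if alpha <= FIRST_TRANSPARENCY_THRESHOLD:
        return "transparent"   # belongs to a transparent area
    if alpha >= SECOND_TRANSPARENCY_THRESHOLD:
        return "opaque"        # belongs to an opaque (non-transparent) area
    return "transition"        # transition region: graded alpha values
```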
With reference to the first aspect, in a possible implementation manner, the image data of the fillet area includes pixel information of the fillet area and location information of the fillet area.
The position information of the fillet area indicates where the fillet area is located within the fillet layer. In the embodiments of the present application, the positions of the pixels corresponding to the pixel information of the fillet area can be determined from the position information of the fillet area.
By contrast, the conventional way of storing the content of a fillet layer is to store all of its pixels in order, row by row or column by column, starting from the first pixel; when the display subsystem reads the data, it reads them in the same order to restore the fillet layer.
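As a rough, purely illustrative comparison of the two approaches: the 85-row top layer and 30-row bottom layer follow the example given later in this description, while the screen width, the 4-byte ARGB pixel format, the fillet-area pixel count, and the run-record size are assumptions only.

```python
WIDTH = 1080                      # assumed screen width in pixels
BYTES_PER_PIXEL = 4               # assumed ARGB format: one byte each for A, R, G, B

# Conventional approach: every pixel of the 85-row top layer and the 30-row
# bottom layer is stored row by row.
full_storage = (85 + 30) * WIDTH * BYTES_PER_PIXEL
print(full_storage)               # 496800 bytes, roughly 485 KiB

# Partial approach: keep only, say, ~6000 fillet-area pixels as 1-byte
# transparency values, plus per-row position records (assumed 2-byte start
# and 2-byte length, up to 4 runs per row).
partial_storage = 6000 * 1 + (85 + 30) * 4 * (2 + 2)
print(partial_storage)            # 7840 bytes
```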
With reference to the first aspect, in a possible implementation manner, reading the first data from the storage device includes: reading the pixel information of the fillet area and the position information of the fillet area from a memory.
In this embodiment, only the image data of the fillet area is stored in the memory. Compared with the existing approach of storing all the data of the fillet layer in the memory, this greatly reduces the amount of storage occupied by the fillet layer, and it also reduces the read bandwidth used by the display subsystem when reading the fillet-layer data from the memory, thereby reducing power consumption.
With reference to the first aspect, in a possible implementation manner, reading the first data from the storage device includes: reading the pixel information of the fillet area from a memory, and reading the position information of the fillet area from an on-chip memory.
It should be understood that the storage device in the embodiments of the present application includes a memory and/or an on-chip memory. The memory mainly refers to random access memory (RAM), also called internal memory or main memory, and the on-chip memory includes registers and caches.
It should also be understood that the on-chip memory in the embodiments of the present application refers to a memory device integrated with a processor or a processing module such as a CPU, a GPU, or a display subsystem on one chip, while the memory in the embodiments of the present application is located on a separate chip and is not in the same chip as the processor or the processing module such as the CPU, the GPU, or the display subsystem.
In the embodiment of the application, the display subsystem reads the pixel information of the fillet area from the memory, reads the position information of the fillet area from the on-chip memory, and accordingly, the pixel information of the fillet area and the position information of the fillet area are stored in different storage devices.
With reference to the first aspect, in a possible implementation manner, reading the first data from the storage device includes: reading the pixel information of the fillet area and the position information of the fillet area from an on-chip memory.
In the embodiment of the application, the pixel information of the fillet area and the position information of the fillet area are both stored in the on-chip memory, and the image data of the fillet layer does not need to be stored in the memory.
It should be understood that the read bandwidth in the embodiment of the present application is a bandwidth for the display subsystem to read data from the memory through the read channel.
With reference to the first aspect, in a possible implementation manner, the fillet area is the transition region of the fillet layer, where the transition region is the region of the fillet layer occupied by pixels whose transparency values lie between the first transparency threshold and the second transparency threshold.
The transition region is the part that mainly contributes to fading the jagged edges. In the embodiments of the present application, only the pixel information of the transition region of the fillet layer may be stored, which further reduces the amount of storage occupied by the fillet layer, in particular in the memory, and therefore further saves read bandwidth and reduces power consumption.
With reference to the first aspect, in a possible implementation manner, the position information of the fillet area includes: a starting position or an ending position of a plurality of consecutive pixels in the fillet area, and the length of the plurality of consecutive pixels.
It should be understood that the plurality of consecutive pixels may lie in the same row or the same column of the transition region.
In the embodiments of the present application, the position of the fillet area can be determined from the starting or ending position of each run of consecutive pixels and its length, as illustrated in the sketch below.
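The sketch below shows one way such start-plus-length position information could be produced for a single row of the fillet layer. The function name, the row-wise orientation, and the threshold values are assumptions for illustration only.

```python
from typing import List, Tuple

def encode_row_runs(alpha_row: List[int], lo: int = 5, hi: int = 250) -> List[Tuple[int, int]]:
    """Return (start_column, length) for each run of transition (fillet-area) pixels."""
    runs, start = [], None
    for col, a in enumerate(alpha_row):
        in_area = lo < a < hi                 # transition pixel: strictly between thresholds
        if in_area and start is None:
            start = col                       # run begins
        elif not in_area and start is not None:
            runs.append((start, col - start)) # run ends
            start = None
    if start is not None:                     # run reaches the end of the row
        runs.append((start, len(alpha_row) - start))
    return runs

# Example: one row whose alpha fades from opaque (255) to transparent (0).
print(encode_row_runs([255, 255, 240, 180, 96, 30, 0, 0]))   # [(2, 4)]
```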
With reference to the first aspect, in a possible implementation manner, the fillet area is a rectangular region that contains the transition region of the fillet layer, where the transition region is the region of the fillet layer occupied by pixels whose transparency values lie between the first transparency threshold and the second transparency threshold.
With reference to the first aspect, in a possible implementation manner, the position information of the fillet area includes: position information of corner points of the rectangular region.
Optionally, the position information of the fillet area includes: the positions of two, three, or four corner points of the rectangular region.
In the embodiments of the present application, the position of the fillet area can be determined from the positions of the corner points of the rectangular region.
Optionally, the position information of the fillet area includes: the position of any one corner point of the rectangular region together with the length and width of the rectangular region, each with a direction.
In the embodiments of the present application, the position of the fillet area can be determined from the position of one corner point of the rectangular region and the directed length and width of the rectangular region, as sketched below.
Storing the position of the fillet area in such an encoded form reduces the amount of data required to store the image data of the fillet area.
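The sketch below illustrates the corner-plus-directed-size encoding, reading "length and width with a direction" as signed extents from the chosen corner; the class and field names, and the example values, are assumptions, not part of the patent.

```python
from dataclasses import dataclass

@dataclass
class FilletRect:
    corner_x: int   # one corner point of the rectangular fillet area
    corner_y: int
    width: int      # signed ("with a direction"): extends left or right of the corner
    height: int     # signed: extends up or down from the corner

    def contains(self, x: int, y: int) -> bool:
        x0, x1 = sorted((self.corner_x, self.corner_x + self.width))
        y0, y1 = sorted((self.corner_y, self.corner_y + self.height))
        return x0 <= x < x1 and y0 <= y < y1

# e.g. a 40x40 fillet area anchored at the top-right corner of an assumed
# 1080-pixel-wide top layer, extending 40 pixels left and 40 pixels down:
top_right = FilletRect(corner_x=1080, corner_y=0, width=-40, height=40)
print(top_right.contains(1050, 10))   # True
print(top_right.contains(1000, 10))   # False
```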
With reference to the first aspect, in a possible implementation manner, the pixel information of the fillet area is a transparency value of a pixel in the fillet area.
The storage amount of the fillet layer can be further reduced by storing the transparency value of the pixel in the fillet area, so that the reading bandwidth can be further saved, and the power consumption can be reduced.
With reference to the first aspect, in a possible implementation manner, generating the fillet layer from the first data and the second data includes: generating the fillet area of the fillet layer from the first data; and filling the regions of the fillet layer other than the fillet area from the second data, where the second data includes the transparency values of transparent pixels.
With reference to the first aspect, in a possible implementation manner, the image data of the fillet area is pixel information of the fillet area, and the second data includes position information of the fillet area.
That is, the position information of the fillet area need not be stored in the storage device; it may instead be part of the second data.
With reference to the first aspect, in a possible implementation manner, the first data is image data of a fillet area in the fillet layer, or the first data only includes image data of the fillet area in the fillet layer.
Compared with the existing approach of storing all the data of the fillet layer, storing only the image data of the fillet area greatly reduces the amount of storage occupied by the fillet layer, in particular in the memory, which further saves read bandwidth and reduces power consumption.
With reference to the first aspect, in a possible implementation manner, the second data is a preset value or a default value. For example, the second data is a preset or default value pre-configured in the display subsystem. As another example, the second data is stored in a non-volatile memory.
Optionally, the second data includes the transparency value of one pixel of a non-fillet area in the fillet layer.
The display subsystem may fill the transparency of the pixels of the non-fillet area according to the transparency value of that one pixel.
Optionally, the second data includes a chrominance component of one pixel of a non-fillet area in the fillet layer.
The display subsystem may fill the colors of the pixels of the non-fillet area according to the chrominance component of that one pixel.
Optionally, the second data includes a chrominance component of one pixel in the fillet layer.
The display subsystem may fill the colors of all pixels in the fillet layer according to the chrominance component of that one pixel, as sketched below.
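The following is a minimal sketch of generating the fillet layer from the first data plus preset/default second data, assuming the run-based layout used in the earlier sketches; the function name, the NumPy representation, and the default values are illustrative assumptions.

```python
import numpy as np

def generate_fillet_layer(runs, layer_shape,
                          default_alpha=0,          # second data: transparency of non-fillet pixels
                          default_rgb=(0, 0, 0)):   # second data: chrominance of all pixels (black)
    """runs: iterable of (row, start_col, alpha_values) read back as first data."""
    h, w = layer_shape
    # Fill the whole layer from the preset/default second data first.
    alpha = np.full((h, w), default_alpha, dtype=np.uint8)
    rgb = np.zeros((h, w, 3), dtype=np.uint8)
    rgb[:] = default_rgb

    # Then overwrite only the fillet-area pixels with the first data.
    for row, start_col, values in runs:
        alpha[row, start_col:start_col + len(values)] = values
    return rgb, alpha

# Example: a 4x8 layer with one run of transition pixels in row 1.
rgb, alpha = generate_fillet_layer([(1, 2, [240, 180, 96, 30])], (4, 8))
print(alpha[1])   # [  0   0 240 180  96  30   0   0]
```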
In a second aspect, there is provided an apparatus comprising means or modules for performing the steps of the method according to the first aspect or any one of the possible implementation manners of the first aspect. Each module or unit may be implemented in software, hardware, or a combination of software and hardware.
In a third aspect, an apparatus for processing an image layer is provided, including a processor and a memory, where the memory is configured to store a computer program and the processor is configured to call and run the computer program from the memory, so that the apparatus performs the method in the first aspect or any one of its possible implementation manners.
In a fourth aspect, there is provided a computer program product comprising instructions which, when run on a computer, cause the computer to perform the method of the first aspect or any of the possible implementations of the first aspect.
In a fifth aspect, a computer-readable storage medium is provided, in which instructions are stored, which, when executed on a computer, cause the computer to perform the method of the first aspect or any one of the possible implementations of the first aspect.
In a sixth aspect, an apparatus is provided, including a display subsystem and a processor. The display subsystem is configured to perform the method in the first aspect or any one of its possible implementation manners, and the processor is configured to draw the fillet layer and store partial data of the fillet layer in a storage device before the display subsystem performs the method.
Optionally, the processor comprises a CPU or GPU.
In a seventh aspect, an apparatus is provided, which includes a display subsystem and a display device, where the display subsystem is configured to perform the method in any one of the possible implementations of the first aspect and the first aspect, and the display device is configured to display an image to be displayed, which is obtained by the display subsystem after performing the method in any one of the possible implementations of the first aspect and the first aspect.
Drawings
Fig. 1 is a block diagram of the logical structure of a terminal device according to an embodiment of the present application;
Fig. 2 is an external view of a terminal device according to an embodiment of the present application;
Fig. 3 is a schematic diagram of a hardware architecture used by a terminal device to perform layer overlay according to an embodiment of the present application;
Fig. 4 is a schematic diagram illustrating overlay of a fillet layer according to an embodiment of the present application;
Fig. 5 is a schematic diagram of some of the pixels in a fillet layer according to an embodiment of the present application;
Fig. 6 is a schematic flowchart of a method for processing layers according to an embodiment of the present application;
Fig. 7 is a schematic diagram of a method for processing layers according to an embodiment of the present application;
Fig. 8 is a schematic flowchart of a method for processing layers according to an embodiment of the present application;
Fig. 9 is a schematic block diagram of an apparatus according to an embodiment of the present application;
Fig. 10 is a schematic structural diagram of an apparatus according to an embodiment of the present application.
Detailed Description
The technical solution in the present application will be described below with reference to the accompanying drawings.
Devices in the embodiments of the present application may include handheld devices, in-vehicle devices, wearable devices, computing devices, and other processing devices connected to a wireless modem. They may also include subscriber units, cellular phones, smart phones, personal digital assistants (PDAs), tablet computers, handheld devices (handsets), laptop computers, machine type communication (MTC) terminals, point-of-sale (POS) terminals, in-vehicle computers, and stations (ST) in wireless local area networks (WLANs), which may be cordless telephones, session initiation protocol (SIP) telephones, or wireless local loop (WLL) stations, as well as devices in next-generation communication systems, for example devices in a fifth-generation (5G) network or in a future evolved public land mobile network (PLMN), and other devices having a display screen.
In the embodiments of the present application, a terminal device is taken as an example of such a device, and Fig. 1 shows a block diagram of the logical structure of a terminal device according to an embodiment of the present application. As shown in Fig. 1, the hardware layer of the terminal device includes a central processing unit (CPU), a graphics processing unit (GPU), and the like. Optionally, the hardware layer of the terminal device may further include an input device, a display device, a memory controller, a display controller, a network interface (not shown in the figure), and the like.
The input device may be used to detect a user operation and generate user operation information indicating that operation. By way of example and not limitation, the input device may include one or more of a physical keyboard, function keys (such as volume control keys or on/off keys), a trackball, a mouse, a joystick, a touch screen, and a light mouse (a touch-sensitive surface that does not display visual output, or an extension of the touch-sensitive surface formed by a touch screen). The touch screen may include a touch sensor and a touch screen controller: the touch sensor detects the position touched by the user and passes the touch information to the touch screen controller, which converts it into contact-point coordinates and sends them to the CPU; the touch screen controller can also receive and execute commands sent by the CPU.
The display device may be used to present visual information such as a user interface, an image, or a video; for example, it may display information input by the user or provided to the user, various menus of the terminal device, and so on. By way of example and not limitation, the display device may include a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, a cathode ray tube (CRT) display, a holographic display, a projector, and the like. The display device in the embodiments of the present application is one type of output device of the terminal device.
The storage device is the memory subsystem of the computer system in the terminal device and is used to store programs and data; all information in the computer, including input data, computer programs, intermediate results, and final results, is kept in the storage device. The volatile memory in the storage device, namely random access memory (RAM), is the internal memory that exchanges data directly with the CPU; it is also called main memory or internal memory and is one of the important components of the terminal device. The memory in the embodiments of the present application mainly refers to RAM, which may include static RAM (SRAM) and dynamic RAM (DRAM); DRAM in turn includes double data rate synchronous dynamic random access memory (DDR SDRAM) and the like. In the embodiments of the present application, the RAM and the CPU are not integrated on one chip; they belong to different chips.
The memory controller is the component inside the computer system that controls the memory and enables it to exchange data with the CPU; it largely determines the memory performance of the computer system.
The storage device further includes on-chip storage such as registers and caches, whose read/write speed is much faster than that of the memory but whose contents may be lost when power is turned off. The on-chip memory in the embodiments of the present application is a memory integrated on one chip with a processor or processing module such as the CPU, the GPU, or the display subsystem, that is, the on-chip memory and the processor or processing module belong to the same chip. For example, a register is an on-chip memory: it is an internal element of the CPU, or is integrated on the same chip as the CPU, can read data from a cache or from the memory, and has the fastest read/write speed. A cache is also an on-chip memory: it sits inside the CPU, or is integrated on one chip with the CPU, between the CPU and the memory, to alleviate the speed mismatch between them. A memory may also be integrated with the GPU as an on-chip memory, integrated with the display subsystem (the on-chip module for image processing) as an on-chip memory, or integrated with another processor or processing module as an on-chip memory, which is not limited in the embodiments of the present application.
In the embodiments of the present application, a storage device integrated with a processor or a processing module such as a CPU, a GPU, or a display subsystem on one chip may be referred to as an on-chip memory, and the memory in the embodiments of the present application is located on an independent chip and does not belong to the same chip as the processor or the processing module such as the CPU, the GPU, or the display subsystem.
The storage device also includes some memories that store stable data and are not lost when power is off, i.e., nonvolatile memories such as Read Only Memory (ROM), flash memory (flash memory), or the like. The non-volatile memory, similar to the memory, is also a device located off-chip from the CPU, and is a separate chip.
A display controller may establish certain driving conditions by instructing the system to adjust a series of parameters or characteristics applied to the pixels, such as voltage, phase, frequency, peak value, effective value, timing, or duty cycle, so as to enrich the display behavior.
Above the hardware layer, an operating system (e.g., Android, etc.) and some applications may run. The core library layer is a core part of the operating system, and includes input/output services, core services, a graphics device interface, and a graphics engine (graphics engine) that implements CPU and GPU graphics processing, and the like. The graphics engine may include a 2D engine, a 3D engine, a compositor (composition), and a frame buffer (frame buffer), among others. Besides, the terminal device further comprises a driving layer, a framework layer and an application layer. The driver layer may include a CPU driver, a GPU driver, a display controller driver, and the like. The frame layer may include a graphic service (graphic service), a system service (system service), a web service (web service), a user service (customer service), and the like; the graphic service may include, for example, widgets (widgets), canvases (canvases), views (views), rendering scripts (render scripts), and the like. The application layer may include a desktop (launcher), a media player (media player), a browser (browser), and the like.
Taking fig. 1 as an example, the method for processing a layer provided in the embodiment of the present application is applied to a terminal device, and a hardware layer of the terminal device may include hardware such as a processor (e.g., a CPU and/or a GPU), a display controller (display controller), a memory controller, an input device, and a display device. The kernel library layer (kernel library) may include input/output services (I/O services), kernel services (kernel services), and graphics engines (graphics engines).
Fig. 2 shows an external view of a terminal device according to an embodiment of the present application. As shown, 201 is the display screen of the terminal device, which may also simply be called the screen; its four corners may be cut into arc shapes, as shown in the dashed box 202. A part of the screen may be removed from the display screen 201 to form a hollowed-out portion that accommodates elements such as a front-facing camera and an earpiece; for example, the portion filled with a black background in the figure is the hollowed-out portion formed after part of the screen is removed, and this portion cannot display content. For aesthetic reasons, the edges of the hollowed-out portion of the display screen 201 are generally streamlined, that is, the boundary between the hollowed-out portion and the display screen is a curved line, as shown in the dashed box 203. The hollowed-out portion may be connected to the edge of the screen, or may form a closed hollowed-out area on the screen, for example in the shape of a hole.
The display screen 201 contains a large number of pixels, and each pixel can be decomposed into sub-pixels one level below the pixel so that each individual pixel can display various colors. Typically a pixel includes the three primary colors red, green, and blue, which can be shown at different intensities and are visually blended into the desired color. The luminance of each primary color can be represented by a number, usually from 0 to 255, with different numbers representing different luminances; the primary colors used to represent a pixel are generally referred to as chrominance components. In some pixel arrangements a single pixel may include more or fewer sub-pixels; for example, some pixels include only two of the three primary colors, and some include a white sub-pixel in addition to the three colors. The material of the screen and the arrangement of the sub-pixels are important factors affecting the display quality of the screen.
Fig. 2 exemplarily shows two sub-pixel arrangements. For example, the display panel subpixels may be arranged in an RGB subpixel arrangement as shown in region 204, which may also be referred to as a standard RGB arrangement. The RGB sub-pixel arrangement can be applied to a Liquid Crystal Display (LCD), and is an arrangement in which a square pixel is divided into three equal parts, each equal part is a sub-pixel of the pixel, each equal part is assigned with a different color, and the three sub-pixels are an integral body, i.e., a color pixel. In a standard RGB arrangement, each pixel can independently display the desired color, and the order of the three sub-pixels can be arbitrary, such as "red, green, blue (R, G, B)", "blue, green, red (B, G, R)", etc.
As another example, the sub-pixels of the display panel may use the RGBG arrangement shown in region 204', which may also be called a P arrangement or diamond pixel arrangement. The RGBG sub-pixel arrangement can be applied to an organic light-emitting diode (OLED) panel; a single pixel generally has only two sub-pixels, either red and green (R, G) or blue and green (B, G), and each pixel displays content by sharing sub-pixels with its neighbors.
The sub-pixels of the display panel may also use other arrangements, such as an RGBW arrangement, which are not listed here one by one. Taking only the two arrangements above as examples: because of how the pixels are arranged, wherever the display screen has a curved edge (for example the arcs at the four corners of the screen, or the curve where the hollowed-out portion meets the screen), jagged edges (also called aliasing) inevitably appear when an image is displayed there, which degrades the display quality of the screen and is unattractive.
To reduce the aliasing that appears when an image is displayed along the curved edges of the display screen, the jagged edges can be softened by superimposing a fillet layer on the original layer and controlling the transparency of the pixels in the fillet layer. The process of superimposing the fillet layer is described below with reference to Fig. 3.
Fig. 3 shows a hardware architecture used by the terminal device to perform layer overlay according to an embodiment of the present application. It should be understood that the fillet layer in the embodiments of the present application can be regarded as an additional superimposed layer used to mitigate the aliasing that appears when an image is displayed at a rounded corner of the display screen; because it is mainly used at the rounded-corner positions of the screen, the fillet layer need not be a complete layer matching the size of the screen. The "rounded corner" positions of the display screen include not only its four corners but also the hollowed-out areas on the screen; any position on the screen structure where a curve exists can be regarded as a rounded corner of the screen. In addition, a "rounded corner" is not limited to an arc shape and also covers other streamlined designs, such as circles, Euler spirals, and other complex curves. From the structure of the display screen, a rounded corner is a position where a curve exists; from the point of view of the pixels in the layer, the region of the fillet layer occupied by the pixels used for the image near a rounded corner of the screen may be called a fillet area. A fillet area can also be understood as the region of the fillet layer whose content is displayed near a rounded corner of the screen.
The hardware architecture shown in Fig. 3 mainly includes a central processing unit (CPU), a graphics processing unit (GPU), a memory (a double data rate synchronous dynamic random access memory, DDR SDRAM, hereinafter DDR, is taken as an example in the embodiments of the present application), a display subsystem (DSS), a display controller, a panel, and the like, described below with reference to Fig. 3. In the embodiments of the present application, the CPU may draw the content of the fillet layer and store it in the DDR. The "content of the fillet layer" here may include the color data and the transparency (alpha) data of the pixels of the fillet layer, and so on.
The color data of a pixel represents the color displayed by that pixel and may include several chrominance components, such as a red component (R value), a blue component (B value), a green component (G value), or a white component (W value). Each chrominance component is typically represented by a value from 0 to 255; different values correspond to different luminances, and different combinations of chrominance components mix into different colors.
The transparency data of a pixel characterizes how transparent the pixel's color is, that is, how much of the pixel value should be displayed. For example, when a foreground layer is stacked on a background layer, the transparency of a pixel in the foreground layer controls whether the screen shows the foreground color, the background color, or a mixture of the two. The degree of transparency can be expressed as a value, referred to as the transparency value of the pixel, typically from 0 to 255. For example, when the transparency value of a pixel is smaller than or equal to a first transparency threshold, the human eye cannot distinguish the displayed color from the background color after the foreground and background layers are composited (the screen can be considered not to display the foreground color); such a pixel is considered transparent and is called a transparent pixel. When the transparency value is greater than or equal to a second transparency threshold, the human eye cannot distinguish the displayed color from the foreground color (the screen can be considered not to display the background color); such a pixel is considered opaque and is called an opaque pixel in the embodiments of the present application. A transparency value between the two thresholds indicates a semi-transparent pixel, between transparent and opaque.
In the embodiments of the present application, the content of the fillet layer drawn by the CPU may include the transparency of the pixels in the fillet layer, the chrominance components of the pixel colors, and so on; as an example, the chrominance components of a single pixel are taken to be R, G, and B. By way of example and not limitation, the top content and the bottom content of the fillet layer drawn by the CPU are shown in the left diagram of Fig. 4. The top content can be used to fade the jagged edges of the rounded corners at the two upper corners of the display screen and of the hollowed-out portion in the middle of the upper edge when an image is displayed, and the bottom content can be used to fade the jagged edges of the rounded corners at the two lower corners of the screen. The top content and the bottom content may be located in two separate fillet layers, each of whose width matches the width of the display screen and whose height can be set freely or preset by the system; for example, the fillet layer containing the top content may be called the top layer and its height may be set to 85 rows, while the fillet layer containing the bottom content may be called the bottom layer and its height may be set to 30 rows. As shown in the left diagram of Fig. 4, both of these fillet layers are rectangular layers.
For example, the image data stored in the layer 0(layer 0) shown in the DDR in fig. 3 is the content of the fillet layer, and the image data stored in the layer0 may include a transparency value of a pixel in the fillet layer and a chrominance component value of the pixel in the fillet layer. Multiple layers can be stored in the DDR, and each layer can store different types of data, such as text, tables, plug-ins, or graphics. By way of example and not limitation, the DDR in fig. 3 stores 6 layers (layer0 to layer5), and of course, a greater or fewer number of layers may be stored in the DDR, which is not limited in this embodiment of the present application.
The GPU may perform complex mathematical and geometric calculations, and is mainly responsible for image processing, for example, the GPU may also perform operations performed by the CPU, draw the content of the rounded corner layer by the GPU and store the content in the DDR, and the GPU may also perform layer overlay processing, for example, when the number of channels of the hardware display subsystem DSS is insufficient, the layers may be overlaid by the GPU and then sent to the DSS for overlay. For example, as shown in fig. 3, the display subsystem DSS includes 5 read channels (RCH 0-RCH 4), and the DDR stores 6 layers, where the number of layers is greater than the number of read channels, so that two of the layers may be superimposed by the GPU, and then one layer formed after the superimposition is written into the DDR, where the number of layers stored in the DDR becomes 5, and then the DSS reads 5 layers through the 5 read channels.
The display subsystem DSS is a module responsible for image processing, and may read image data of each layer from the DDR through a read channel (e.g., RCH0 to RCH4 in the figure), calculate each layer data according to the layer distribution, generate superimposed data after superimposing (OV), and finally transmit the superimposed data to a panel (i.e., a display screen) for display. The display subsystem DSS may not superimpose all the read layers, and may select a part of the layers for superimposing processing by Switch (SW) control after reading the image data of each layer. The display controller in the display subsystem may control the display effect, for example, by instructing the system to adjust a series of parameters or characteristics, such as voltage, phase, frequency, peak value, effective value, timing, or duty ratio, applied to the pixels to establish a certain driving condition, thereby making the display change more.
As mentioned above, layer 0 may store the content of the fillet layer when a fillet layer is present; in the ARGB domain, by way of example and not limitation, the content of the fillet layer may be as shown by the top and bottom content in Fig. 4. The fillet layer containing the top and bottom content may be a black layer, that is, every chrominance component of every pixel in the fillet layer is 0 (RGB (0, 0, 0)), while the transparency values of the pixels may differ. Some pixels may have transparency values smaller than or equal to the first transparency threshold; these are transparent pixels, for example the pixels in the areas of the top and bottom content that appear white. Such pixels do not show their color, and a white area consisting of transparent pixels may be called a transparent area. Other pixels may have transparency values greater than or equal to the second transparency threshold; these pixels are opaque, for example the pixels in the areas of the top and bottom content that appear black. They display the color of the pixel (black), and a black area consisting of opaque pixels may be called a non-transparent area (also called an opaque area). To optimize the display effect, there is no direct jump between the transparent and non-transparent regions in the top and bottom content of the fillet layer (neither purely black nor purely white); instead, between them there are several pixels whose transparency values lie between the first and second transparency thresholds, that is, the transparency values of the pixels change gradually from the transparent region to the non-transparent region (or vice versa), as in the gray area between the two lines shown in Fig. 5.
Fig. 5 exemplarily shows a part of pixels in a rounded corner layer, and the embodiment of the present application assumes that a first transparency threshold is 0 and a second transparency threshold is 255, where a square with a black color on the graph may represent an opaque pixel, the transparency value of the opaque pixel may be 255, and an area where the opaque pixel is located is an opaque area, for example, an area on the left side of a line L1 in the graph; the squares with white colors in the figure may represent transparent pixels, the transparency value of the transparent pixels may be 0, and the region where the transparent pixels are located is a transparent region, for example, the region on the right side of the line L2 in the figure. Between the opaque and transparent regions there are grey squares, the different degrees of grey representing different transparency values, the grey squares may represent transition region pixels, the transparency values of which are intermediate values (between 0 and 255), and the region in which the transition region pixels are located may be referred to as the transition region, e.g. the region between lines L1 and L2. The region where the transparency values of the pixels are graded from 0 to 255 or from 255 to 0 may both be referred to as a transition region, which for the rounded corner layer shown in fig. 4 is located between the transparent and opaque regions of the rounded corner layer.
For another example, in the embodiment of the present application, it is assumed that the first transparency threshold is 5, and all pixels with a transparency value of less than or equal to 5 may be considered as transparent pixels, that is, the transparency value of the pixel represented by the square with white color in the drawing is not greater than 5; assuming that the second transparency threshold is 250, all pixels with a transparency value of 250 or more may be regarded as opaque pixels, that is, the transparency value of the pixel represented by the black square in the drawing is not less than 250; the transparency value of the pixels represented by the gray squares is between 5 and 250 (intermediate value), the part of the pixels is the transition region pixels, and the area where the gray squares are located is the transition region.
For another example, the black squares in fig. 5 may also be white squares, and it is still assumed that the first transparency threshold is 5 and the second transparency threshold is 250, that is, the transparency value of the pixels in the left region of L1 is not greater than 5, the transparency value of the pixels in the right region of L2 is not greater than 5, the left region of L1 and the right region of L2 are both transparent regions, the transparency value of the pixels in the region between the two transparent regions is between 5 and 250, the pixels in the region are transition region pixels, and the region where the transition region pixels are located is a transition region.
In other words, in the fillet layer, any pixel whose transparency value lies between the first and second transparency thresholds is a transition-region pixel, and the region where such pixels are located is a transition region. The transition region may lie between a transparent area and an opaque area, or between two transparent areas. In general, the transparency values of the pixels in the transition region are graded from the first transparency threshold to the second transparency threshold or vice versa. It should be understood that the first and second transparency thresholds in the embodiments of the present application may be chosen according to the ability of the human eye to distinguish different color differences.
When the fillet layer is superimposed with other display layers, the transparency values of the pixels of the fillet layer control whether, within the overlap region, the final display shows the black of the fillet layer or the content of the layer beneath it; by setting the transparency values of the pixels in the transition region to intermediate values, a smooth transition of any shape can be obtained. Referring again to Figs. 4 and 5: in the top and bottom content of the fillet layer, the white areas indicate that the transparency of the pixels is smaller than or equal to the first transparency threshold, so when superimposed with other display layers the screen shows the content of the layer beneath the fillet layer rather than the black content of the fillet layer; the black areas indicate that the transparency of the pixels is greater than or equal to the second transparency threshold, so the screen shows the black content of the fillet layer rather than the content of the layer beneath it. Between the white and black areas there is a transition region (not shown in Fig. 4) whose pixels have intermediate transparency values (between the first and second transparency thresholds); when superimposed with other display layers, the screen shows the blend of the fillet layer and the layer beneath it. Because the transparency values of these pixels change gradually, the degree to which their color is displayed also changes gradually: a pixel with a larger transparency value shows a darker color and a pixel with a smaller transparency value shows a lighter color. Taking the content shown in Fig. 5 as an example, the colors of the transition-region pixels grade from black through dark gray and light gray to transparent, which fades the jagged edges. By way of example and not limitation, the right diagram in Fig. 4 shows the display effect on the screen after the fillet layers are superimposed. If the screen has no hollowed-out portion, the effect shown in the right diagram is the result of superimposing the fillet layers; if the screen has a hollowed-out portion at the position corresponding to the top content, then, since the screen has no pixels in the hollowed-out portion, the black area of the fillet layer is not actually displayed in full.
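The patent does not spell out the compositing equation; the standard per-pixel "source-over" alpha blend below is one common way to realize the behavior just described, with the black fillet layer as the foreground. All names and the example color values are illustrative assumptions.

```python
def blend(alpha, fg, bg):
    """out = (alpha/255) * foreground + (1 - alpha/255) * background, per channel."""
    a = alpha / 255.0
    return tuple(round(a * f + (1.0 - a) * b) for f, b in zip(fg, bg))

# A graded alpha across the transition region fades the black fillet layer into
# the underlying content:
print(blend(255, (0, 0, 0), (200, 180, 40)))   # (0, 0, 0)      opaque: fillet black
print(blend(128, (0, 0, 0), (200, 180, 40)))   # (100, 90, 20)  transition: mixed
print(blend(0,   (0, 0, 0), (200, 180, 40)))   # (200, 180, 40) transparent: content
```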
When the display subsystem DSS performs overlay, it can only process rectangular layers, so the fillet layers are generally rectangular layers, such as the top layer and the bottom layer shown in Fig. 4, and the DSS needs the pixel values of all pixels in the rectangular fillet layer when performing the overlay. Therefore, after the CPU has drawn the fillet layer, it stores the pixel values of all pixels of the fillet layer in the memory (for example, DDR), and when the display subsystem DSS reads the layer data it likewise reads all the pixel values of the fillet layer through a read channel. However, as described above, the colors of all pixels in the fillet layer may be black, that is, the chrominance component values of all pixels may be identical, and the transparency values of the pixels in the transparent area and/or the opaque area may also be identical; in other words, most pixels in the fillet layer have the same pixel value, and it is the fillet area (that is, the transition region) that mainly fades the jagged edges. Storing all the data of the fillet layer in the memory therefore stores a large amount of repeated data, and the large amount of data also increases the read bandwidth, which in turn increases power consumption. Hence, a method for reducing the amount of fillet-layer data stored in the memory is desirable.
Therefore, fig. 6 shows a method for processing layers according to an embodiment of the present application, which can reduce the storage amount of the fillet layer. The embodiment of the present application is described in detail below with reference to fig. 6.
In step S610, the display subsystem reads first data from a storage device, where the first data is partial data of the fillet layer, and the first data includes image data of a fillet area in the fillet layer.
It should be understood that the "round corner" in the embodiments of the present application is described with respect to the display screen; that is, in terms of the structure of the display screen, a portion of the display screen that has a curved edge can be regarded as a round corner of the display screen (or a screen round corner). The "fillet area" in the embodiments of the present application is described with respect to the fillet layer; that is, in terms of the pixels in the layer, the area of the fillet layer in which the pixels that display the image near a screen round corner are located may be referred to as the fillet area. The transition area of the fillet layer was mentioned above, and the "fillet area" in the embodiments of the present application may also be understood as an area of the fillet layer that includes the transition area. The areas of the fillet layer other than the fillet area may be referred to as non-fillet areas.
In this embodiment of the application, the first data is partial data of a fillet layer, and it can be understood that the first data includes data of partial pixels in the fillet layer, and the first data includes image data of a fillet area of the fillet layer. Since the first data includes image data of a fillet area in the fillet layer, reading the first data from the storage device includes reading the image data of the fillet area in the fillet layer from the storage device. Alternatively, the first data may include only image data of the fillet area in the fillet layer, in other words, the first data is image data of the fillet area in the fillet layer.
The storage device in the embodiments of the present application may include at least one of a memory and an on-chip memory, where the on-chip memory may include at least one of a cache or a register and is integrated on the same chip as the processor or the processing module, while the memory is independent of the chip on which the processor or the processing module is located. Reading the first data from the storage device may mean reading the first data from a single storage device, or reading the first data from different storage devices.
The display subsystem reads partial data of the fillet layer from the storage device, and accordingly, when the storage device stores the partial data of the fillet layer, the storage capacity of the fillet layer can be reduced compared with the existing method of storing all data of the fillet layer in the memory. When the first data is the image data of the fillet area of the fillet layer, only the image data of the fillet area can be stored during storage, and the storage amount of the fillet layer is further reduced.
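By way of example and not limitation, the following rough calculation illustrates the scale of the saving under assumed numbers (a 1080 x 2340 layer at 4 bytes per pixel and four 100 x 100 corner regions); the actual figures depend on the panel resolution, the layer format, and the size of the fillet areas, none of which are specified here.

```python
# Rough, assumed-number illustration of the storage saving:
# a 1080 x 2340 fillet layer at 4 bytes per pixel (e.g., RGBA8888),
# versus storing only four 100 x 100 corner regions of that layer.
full_layer_bytes = 1080 * 2340 * 4      # ~10.1 MB for the whole rectangular layer
corner_bytes = 4 * (100 * 100) * 4      # ~160 KB for four corner regions
print(full_layer_bytes, corner_bytes, corner_bytes / full_layer_bytes)
```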
As a possible implementation manner, the image data of the fillet area in the fillet layer may include pixel information of the fillet area and position information of the fillet area. And the position information of the fillet area is used for representing the position of the fillet area in the fillet layer. According to the embodiment of the application, the position of the pixel corresponding to the pixel information of the fillet area can be determined through the position information of the fillet area.
Taking the example that the first data only includes the image data of the fillet area in the fillet layer, there are various ways to read the first data (i.e., the image data of the fillet area in the fillet layer) from the storage device.
As an example, the pixel information of the fillet area and the position information of the fillet area are both stored in the memory, that is, the storage device is the memory; in the reading process of the display subsystem DSS, the display subsystem may read the pixel information of the fillet area and the position information of the fillet area of the fillet layer from the memory. In this embodiment of the present application, the image data of the fillet area is stored in the memory; compared with the existing method of storing all data of the fillet layer in the memory, the storage amount of the fillet layer is greatly reduced, and at the same time the read bandwidth of the display subsystem when reading the data of the fillet layer in the memory can be further reduced, so that power consumption can be reduced.
As another example, the pixel information of the rounded corner region and the position information of the rounded corner region may be stored in different storage devices, for example, the pixel information of the rounded corner region is stored in a memory, and the position information of the rounded corner region is stored in an on-chip memory (e.g., a register), and then during the display subsystem DSS reading process, the display subsystem reads the pixel information of the rounded corner region from the memory and the position information of the rounded corner region from the on-chip memory (e.g., a register). According to the embodiment of the application, the image data of the fillet area is stored in different storage devices, compared with the existing method that all data of the fillet layer are stored in the memory, the storage amount of the fillet layer is greatly reduced, meanwhile, the storage amount of the fillet layer in the memory is reduced, and accordingly, the reading bandwidth of the display subsystem in reading the data of the fillet layer in the memory can be reduced, so that the power consumption can be reduced.
As yet another example, the pixel information of the fillet area and the position information of the fillet area are both stored in an on-chip memory (e.g., a register), that is, the storage device is an on-chip memory, and the display subsystem may read the pixel information of the fillet area and the position information of the fillet area of the fillet layer from the on-chip memory. In the embodiment of the application, the image data of the fillet area is stored in the on-chip memory instead of the memory, compared with the prior art that all data of the fillet layer is stored in the memory, the storage amount of the fillet layer is greatly reduced, meanwhile, the storage amount of the fillet layer in the memory is reduced, and accordingly, the reading bandwidth of the display subsystem when the data of the fillet layer in the memory is read can be reduced, so that the power consumption can be reduced.
Alternatively, the pixel information of the fillet area may include a transparency value of the pixel of the fillet area and color information, such as a chrominance component, of the pixel of the fillet area. In other words, in the above-listed example, when the display subsystem reads the pixel information of the fillet area, the read data includes the transparency value of the pixel and the chrominance component of the pixel in the fillet area, and the transparency value of the pixel and the chrominance component of the pixel in the fillet area are stored in the same storage device.
As still another example, since the pixel information of the rounded corner region may include a transparency value and a chrominance component of the pixel of the rounded corner region, the above information included in the pixel information of the rounded corner region may also be separately stored in different storage devices in the embodiment of the present application, for example, the transparency value of the pixel of the rounded corner region may be stored in a memory, and the chrominance component of the pixel of the rounded corner region may be stored in an on-chip memory. In this way, when the display subsystem reads, the display subsystem reads a part of the information in the pixel information of the fillet area from the memory and the on-chip memory, respectively, that is, reads the transparency value of the pixel in the fillet area from the memory, and reads the color information of the pixel in the fillet area from the on-chip memory. And the position information of the fillet area can be stored together with the transparency value of the pixel of the fillet area, or can be stored together with the color information of the pixel of the fillet area, or can be separately stored in another different storage device.
According to the embodiment of the application, the pixel information in the image data of the fillet area is separately stored in different storage devices, compared with the existing method that all data of the fillet layer are stored in the memory, the storage amount of the fillet layer is greatly reduced, meanwhile, the storage amount of the fillet layer in the memory is reduced, and accordingly, the reading bandwidth of the display subsystem in reading the data of the fillet layer in the memory can be reduced, so that the power consumption can be reduced.
Alternatively, the pixel information of the fillet area may be a transparency value of the pixel of the fillet area. In other words, after the fillet map layer is drawn, only the transparency values of the pixels in the fillet area may be stored, and the color of the pixels in the fillet area may be a default value or a preset value, which may be a value already stored in a non-volatile memory, such as a flash memory. The preset value or default value may also be a value pre-configured in the display subsystem such that the display subsystem acquires the preset value or default value.
In the embodiment of the application, only the transparency values of the pixels in the fillet area may be stored, and the color of the pixels in the fillet area may be set to the default value, so that the storage amount of the fillet layer can be further reduced. Optionally, storing the pixel information of the fillet area and the position information of the fillet area of the fillet layer may be performed by a CPU or a GPU.

Optionally, the fillet area of the fillet layer may be the transition area of the fillet layer, where the transition area is the area in which the pixels of the fillet layer whose transparency values lie between the first transparency threshold and the second transparency threshold are located. For example, the fillet area is located between a transparent area and an opaque area of the fillet layer, or between two transparent areas of the fillet layer.

Optionally, the fillet area of the fillet layer may also be an area that includes the transition area of the fillet layer, where the transition area is the area in which the pixels of the fillet layer whose transparency values lie between the first transparency threshold and the second transparency threshold are located. For example, the fillet area may be a rectangular area including the transition area, a parallelogram area including the transition area, a trapezoid area including the transition area, an irregular area, or the like.
Optionally, the position information of the fillet area may describe a position of the fillet area, that is, a position of the whole area representing the fillet area, for example, if the fillet area is a rectangle, the position information of the fillet area may include positions of corner pixels of the rectangle area, and a rectangle area may be determined by positions of pixels of at least two corners; if the fillet area is a transition area, the position information of the fillet area may include the position of the start pixel of the plurality of continuous pixels in the transition area and the length of the plurality of continuous pixels, or include the position of the end pixel of the plurality of continuous pixels in the transition area and the length of the plurality of continuous pixels, and the position of the plurality of continuous pixels may be determined by the position of the start pixel (or the position of the end pixel) and the length of the continuous pixels. Optionally, the position information of the fillet area may also describe the position of each pixel of the fillet area, for example, the position information of the fillet area may include a coordinate position of the pixel of the fillet area, and the position of the pixel of the fillet area is determined, that is, the position of the fillet area is determined equivalently.
In the embodiment of the application, the position of the fillet area can be determined through the position information of the fillet area, and in addition, the position information of the fillet area can be stored in a coding mode, that is, the position of the fillet area can be determined by using less storage space, and the storage space of the fillet layer can be reduced.
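The alternatives above can be pictured as small records, as in the following sketch; the type and field names are illustrative assumptions rather than terms used in the application.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class RectPosition:
    """Whole-area description: corner pixels of a rectangular fillet area."""
    top_left: Tuple[int, int]        # (row, column)
    bottom_right: Tuple[int, int]

@dataclass
class RunPosition:
    """Per-row description: start pixel of a run of consecutive pixels plus its length."""
    row: int
    start_col: int
    length: int

@dataclass
class PixelPositions:
    """Per-pixel description: explicit coordinates of every fillet-area pixel."""
    coords: List[Tuple[int, int]]
```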
Since the fillet area of the fillet layer is generally located in fixed areas, the position of the fillet area may be fixed. Therefore, as another possible implementation, the image data of the fillet area in the fillet layer may include the pixel information of the fillet area, and the position information of the fillet area may be a default value or a preset value, which may be a value already stored in a non-volatile memory, such as a flash memory. The preset value or default value may also be a value pre-configured in the display subsystem so that the display subsystem can acquire it.
Accordingly, the storage manner of the pixel information of the fillet area, the manner of reading the first data (i.e., the pixel information of the fillet area in the fillet layer) from the storage device, and the storage form when the position information of the fillet area is used as the preset value may refer to the above description. For example, reading pixel information of the fillet area (including transparency value and chrominance component of the pixel of the fillet area) from the memory; or reading pixel information of the fillet area (including transparency values and chrominance components of pixels of the fillet area) from an on-chip memory; or reading the transparency value of the pixel in the fillet area from the memory, and reading the chrominance component of the pixel in the fillet area from the on-chip memory; or reading the transparency value of the pixel in the fillet area from the memory or the on-chip memory, and the chrominance component of the pixel in the fillet area is a preset value or a default value, etc., which are not described in detail in this application.
In the foregoing, several ways of reading the image data of the fillet area in the fillet layer from the storage device are mentioned, and when the first data further includes image data of other partial areas except the fillet area, the above method is also applicable, and for brevity, details are not described here again.
In step S620, the display subsystem generates the fillet layer according to the first data and second data, where the second data is remaining data in the fillet layer except the partial data.
As mentioned above, when the display subsystem superimposes the fillet layer, it can process rectangular layers. In step S610, the display subsystem reads partial data of the fillet layer from the storage device, where the partial data includes the image data of the fillet area in the fillet layer; when performing the superimposition, the display subsystem further needs to obtain the remaining data of the fillet layer, that is, the second data in this embodiment of the application.
Alternatively, if the first data is the image data of the fillet area in the fillet layer (the image data of the fillet area includes the pixel information of the fillet area and the position information of the fillet area, where the pixel information of the fillet area includes the transparency values and the chrominance components of the pixels), the second data may be the image data of the area other than the fillet area (hereinafter described as the "non-fillet area"). Optionally, the second data may include the transparency values and chrominance components of the pixels of the non-fillet area. Optionally, the transparency values of the pixels in the non-fillet area are the same; for example, the transparency values of the pixels in the non-fillet area are all less than or equal to the first transparency threshold, e.g., all 0. Alternatively, the second data may include the transparency value of one pixel in the non-fillet area. Optionally, the colors of the pixels of the non-fillet area are the same; for example, the pixels of the non-fillet area are all black, green, blue, or another color. Alternatively, the second data may include the chrominance components of one pixel in the non-fillet area, e.g., RGB(127, 200, 235) or RGB(255, 255, 255). Optionally, the chrominance components of each pixel of the non-fillet area are the same, e.g., the color of each pixel is (0, 0, 0), (127, 127, 127), or (255, 255, 255). Alternatively, the second data may include one chrominance component of one pixel in the non-fillet area, for example, any one of the red component R, the green component G, and the blue component B in RGB. Alternatively, the color of the pixels in the non-fillet area and the color of the pixels in the fillet area may be the same or different; in other words, the chrominance components of the pixels of the fillet area included in the first data and the chrominance components of the pixels of the non-fillet area included in the second data may be the same or different. For example, the color of the non-fillet area is black, and the color of the fillet area may be black or may be the same color as the panel edge, such as red, blue, or yellow.
Optionally, if the first data is image data of a fillet area in the fillet image layer (the image data of the fillet area includes pixel information of the fillet area and position information of the fillet area, where the pixel information of the fillet area is a transparency value of a pixel), the second data includes image data of an area (described below as a "non-fillet area") other than the fillet area and a chrominance component of the pixel of the fillet area. Optionally, the color of the pixels of the non-rounded corner region and the color of the pixels of the rounded corner region included in the second data are the same, i.e. the chrominance components of the pixels of the non-rounded corner region are the same as the chrominance components of the pixels of the rounded corner region. In other words, the rounded corner layers are single color layers, for example, the rounded corner layers are black layers, red layers, green layers, and the like. Optionally, the second data may include a chrominance component of one pixel in the rounded image layer to fill the color of all pixels of the rounded image layer.
Optionally, if the first data is image data of a fillet area in the fillet layer (the image data of the fillet area is pixel information of the fillet area, where the pixel information of the fillet area includes a transparency value and a chrominance component of a pixel), the second data includes image data of an area (described below as a "non-fillet area") other than the fillet area and position information of the fillet area. Optionally, the second data may include a transparency value and a chrominance component of a pixel in the non-fillet area, and location information of the fillet area, and specifically, the transparency value and the chrominance component of the pixel in the non-fillet area included in the second data may refer to "the first data is image data of the fillet area in the fillet layer (the image data of the fillet area includes pixel information of the fillet area and location information of the fillet area, where the pixel information of the fillet area includes the transparency value and the chrominance component of the pixel)" described above, which is not described herein again. The manner of the position information of the fillet area included in the second data may refer to the manner of storing the position information of the fillet area in the storage device in step S610, and is not described herein again.
Alternatively, if the first data is image data of a fillet area in the fillet layer (the image data of the fillet area is pixel information of the fillet area, where the pixel information of the fillet area is a transparency value of a pixel), the second data includes transparency values and chrominance components of pixels of other areas (described below as a "non-fillet area") except for the fillet area, and chrominance components of the pixels of the fillet area and position information of the fillet area. Optionally, the form of the information included in the second data may refer to the related content, and is not described herein again.
Optionally, the second data in this embodiment of the present application is a preset value or a default value, and the preset value or default value is stored in a non-volatile memory, such as a flash memory or a ROM. The second data stored in the non-volatile memory cannot be read directly by the display subsystem, and thus can be obtained as follows: after each system power-on, the CPU may run a computer program to configure the preset value or default value from the non-volatile memory into a register, and the display subsystem may then retrieve the second data from the register. The second data may be set by the system when the system leaves the factory, or may be customized by the user. When the system leaves the factory, the second data is written into the non-volatile memory; when the user customizes the second data, the user can enter the custom value by operating the input device, the input device generates user operation information or an instruction indicating the custom value, and the CPU stores the user-defined value in the non-volatile memory after executing the instruction. The display subsystem may automatically obtain the default or preset value after the device is powered on for subsequent processing. Of course, if the display subsystem supports reading the second data directly from the non-volatile memory, the display subsystem may also read the second data directly from the non-volatile memory storing the second data.
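A highly simplified sketch of this power-on flow is given below; the dictionary-based "register" and "non-volatile memory", and all names, are illustrative assumptions used only to show the direction of the data flow.

```python
# Purely illustrative model of the power-on configuration described above.
NON_VOLATILE = {"second_data": {"alpha": 0x00, "rgb": (0, 0, 0)}}  # factory or user-defined value
REGISTERS = {}                                                      # stands in for DSS registers

def cpu_power_on_config():
    # The CPU copies the preset/default second data from flash/ROM into a register.
    REGISTERS["fillet_second_data"] = NON_VOLATILE["second_data"]

def dss_read_second_data():
    # The display subsystem reads the second data from the register, not from flash.
    return REGISTERS["fillet_second_data"]

cpu_power_on_config()
print(dss_read_second_data())
```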
In the embodiment of the application, the second data is the remaining data of the fillet layer other than the first data; the second data takes a default value and need not be stored in the memory, so the storage amount of the fillet layer can be reduced. Optionally, the generating, by the display subsystem, of the fillet layer according to the first data and the second data may include: generating the fillet area of the fillet layer according to the first data; and filling the areas of the fillet layer other than the fillet area according to the second data, where the second data includes the preset or default values of the transparency and chrominance components of the pixels of the non-fillet area.
Optionally, the second data is the transparency value of a transparent pixel. In other words, the display subsystem fills the pixels of the non-fillet area as transparent pixels according to the second data. The second data being the transparency value of a transparent pixel, and the areas of the fillet layer other than the fillet area being filled according to the second data, is equivalent to filling those areas with transparent pixels; this both restores the fillet layer and reduces the storage space occupied by the image data of the areas other than the fillet area, which in turn reduces the storage amount of the fillet layer.
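By way of illustration only, the following sketch shows this restoration step, assuming the first data is given as (row, column, transparency) entries for the fillet area and the second data is the transparency value of a transparent pixel (0); the 10 x 10 layer size is an assumption made only for the example.

```python
def restore_fillet_alpha(first_data, second_alpha, rows=10, cols=10):
    """Rebuild the transparency plane of the fillet layer.

    first_data:   iterable of (row, col, alpha) entries for the fillet area.
    second_alpha: default transparency for the non-fillet area, e.g. 0
                  (a transparent pixel), so everything outside the fillet
                  area is filled as transparent.
    """
    plane = [[second_alpha] * cols for _ in range(rows)]
    for r, c, a in first_data:
        plane[r][c] = a
    return plane

# Example: two transition-area pixels; the rest of the layer stays transparent.
alpha_plane = restore_fillet_alpha([(0, 7, 0xD8), (0, 8, 0xD8)], second_alpha=0x00)
```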
In step S630, the rounded-corner layer and at least one target layer are superimposed to obtain an image to be displayed. The target layer is a layer other than the rounded layer and is used for displaying contents on the target layer, such as images, texts, or tables, on the display screen.
Some specific, non-limiting examples of embodiments of the present application will be described in more detail below in conjunction with fig. 7 and 8.
Fig. 7 shows part of the pixels of the fillet layer. For ease of understanding, in the embodiment of the present application, the first transparency threshold is set to 0, and the second transparency threshold is set to 255. In fig. 7, the white area, in which the transparency value of each pixel is 0, represents a transparent area of the fillet layer; the black area, in which the transparency value of each pixel is 255, represents an opaque area of the fillet layer; and the gray area between the black area and the white area represents the transition area of the fillet layer, in which the transparency values of the pixels lie between 0 and 255. The line L in fig. 7 can be understood as the curve of a round corner on the display screen.
In the embodiment of the present application, the first data only includes image data of a fillet area in a fillet layer. There are various ways of storing image data in the fillet area in the fillet layer. Optionally, in this embodiment of the present application, the image data of the fillet layer includes pixel information of a fillet area and position information of the fillet area.
As a possible implementation manner, after the drawing of the fillet map layer is completed, when the CPU stores the image data of the fillet map layer, the pixel information of the fillet area and the position information of the fillet area may be stored in the memory.
For example, the pixel information of the fillet area includes a transparency value and a chrominance component of the pixel of the fillet area, and the CPU may store the chrominance component and the transparency value of the pixel of the fillet area and the position information (i.e., the first data in the embodiment of the present application) of the fillet area in the memory. The image data (i.e., the second data in the embodiment of the present application) of the other area except the fillet area in the fillet layer may be stored in the nonvolatile memory, the second data is configured to the register by the nonvolatile memory after the device is powered on, and when the fillet layer is generated in the display subsystem, the image data of the other area except the fillet area in the fillet layer is configured by the register.
In the superposition process, when the display subsystem reads the image data of the fillet layer, the chroma component value and the transparency value of the pixel of the fillet area and the position information (namely the first data in the embodiment of the application) of the fillet area are read from the memory, and the fillet area of the fillet layer can be generated according to the information; the image data (such as color and transparency) of the other areas except the fillet area can be filled with default values, for example, the filling values of the pixels of the other areas except the fillet area are obtained by register configuration.
For another example, the pixel information of the fillet area is a transparency value of a pixel of the fillet area, and the CPU may store the transparency value of the pixel of the fillet area and the position information (i.e., the first data in this embodiment) of the fillet area in the memory. The image data of the other area except the fillet area and the chrominance component of the fillet area (i.e., the second data in the embodiment of the present application) in the fillet image layer are stored in the nonvolatile memory, the second data is configured to the register by the nonvolatile memory after the device is powered on, and when the fillet image layer is generated in the display subsystem, the register is configured to acquire the image data of the other area except the fillet area and the chrominance component of the fillet area in the fillet image layer.
In the overlay process, when the display subsystem reads the image data of the fillet layer, the transparency value of the pixel in the fillet area and the position information (i.e., the first data in the embodiment of the present application) of the fillet area are read from the memory, and the color of the pixel in the fillet area and the pixel value (such as color or transparency) in the other area except the fillet area may be filled with default values, for example, the color of the pixel in the fillet area and the pixel filling value in the other area except the fillet area are configured by a register. In the embodiment of the application, the color of the fillet area is the color displayed by the fillet of the display screen, and the color can be the same as the edge of the panel and also can be the user-defined color.
As another possible implementation manner, after the drawing of the fillet map layer is completed, when the CPU stores the image data of the fillet map layer, the pixel information of the fillet area may be stored in the memory, and the position information of the fillet area may be stored in the on-chip memory. For example, the pixel information of the fillet area includes a transparency value and a chrominance component of the pixel of the fillet area, and the CPU may store the transparency value and the chrominance component of the pixel of the fillet area in the memory and store the position information of the fillet area in the on-chip memory. The image data (i.e., the second data in the embodiment of the present application) of the other area except the fillet area in the fillet layer may be stored in the nonvolatile memory, the second data is configured to the register by the nonvolatile memory after the device is powered on, and when the fillet layer is generated in the display subsystem, the register is configured to obtain the image data of the other area except the fillet area in the fillet layer.
In the superposition process, when the display subsystem reads the image data of the fillet layer, the transparency value and the chrominance component of the pixel of the fillet area are read from the memory, the position information of the fillet area is read from the on-chip memory, and the fillet area of the fillet layer can be generated according to the information; the image data for other areas than the rounded areas (e.g. color, or transparency) may be filled in with default values, e.g. by a register configuring the filling values of the pixels of the other areas outside the rounded areas.
For another example, the pixel information of the fillet area is a transparency value of a pixel of the fillet area, and the CPU may store the transparency value of the pixel of the fillet area in the memory and store the position information of the fillet area in the on-chip memory. The image data of the other area except the fillet area and the chrominance component of the fillet area (i.e., the second data in the embodiment of the present application) in the fillet image layer are stored in the nonvolatile memory, the second data is configured to the register by the nonvolatile memory after the device is powered on, and when the fillet image layer is generated in the display subsystem, the register is configured to acquire the image data of the other area except the fillet area and the chrominance component of the fillet area in the fillet image layer.
In the superposition process, when the display subsystem reads the image data of the fillet layer, reading the transparency value of the pixel of the fillet area from the memory, and reading the position information of the fillet area from the on-chip memory; default values may be used for the color of the rounded corner region pixels and the pixel values (e.g., color, or transparency) of other regions outside the rounded corner region, for example, the color of the rounded corner region pixels and the pixel fill values of other regions outside the rounded corner region may be configured by registers. In the embodiment of the application, the color of the fillet area is the color displayed by the fillet of the display screen, and the color can be the same as the edge of the panel and also can be the user-defined color.
As another possible implementation manner, after the fillet map layer is drawn, when the CPU stores the image data of the fillet map layer, the pixel information of the fillet area and the position information of the fillet area may be stored in the on-chip memory. For example, the pixel information of the fillet area includes a transparency value and a chrominance component of the pixel of the fillet area, and the CPU may store the transparency value, the chrominance component, and the position information (i.e., the first data in the embodiment of the present application) of the pixel of the fillet area in the on-chip memory. The image data (i.e., the second data in the embodiment of the present application) of the other area except the fillet area in the fillet layer may be stored in the nonvolatile memory, the second data is configured to the register by the nonvolatile memory after the device is powered on, and when the fillet layer is generated in the display subsystem, the register is configured to obtain the image data of the other area except the fillet area in the fillet layer.
In the superposition process, when the display subsystem reads the image data of the fillet layer, the transparency value and the chrominance component of the pixel of the fillet area and the position information (namely the first data in the embodiment of the application) of the fillet area are read from the on-chip memory, and the fillet area of the fillet layer can be generated according to the information; the image data for other areas than the rounded areas (e.g. color, or transparency) may be filled in with default values, e.g. by a register configuring the filling values of the pixels of the other areas outside the rounded areas.
For another example, the pixel information of the fillet area is a transparency value of a pixel of the fillet area, and the CPU may store the transparency value of the pixel of the fillet area and the position information (i.e., the first data in the embodiment of the present application) of the fillet area in the on-chip memory. The image data of the other area except the fillet area and the chrominance component of the fillet area (i.e., the second data in the embodiment of the present application) in the fillet image layer are stored in the nonvolatile memory, the second data is configured to the register by the nonvolatile memory after the device is powered on, and when the fillet image layer is generated in the display subsystem, the register is configured to acquire the image data of the other area except the fillet area and the chrominance component of the fillet area in the fillet image layer.
Optionally, in this embodiment of the application, the image data of the fillet layer may be pixel information of a fillet area, and the position information of the fillet area is included in the second data, and may be set as a preset value or a default value and stored in the nonvolatile memory.
For example, after the fillet layer is drawn, when the CPU stores the image data of the fillet layer, the pixel information of the fillet area may be stored in the memory or the on-chip memory. For example, the pixel information of the fillet area includes a transparency value and a chrominance component of the pixel of the fillet area, and the CPU may store both the transparency value and the chrominance component of the pixel of the fillet area in the memory or the on-chip memory, or store both in the memory and the on-chip memory, respectively. And the image data of other areas except the fillet area in the fillet layer and the position information of the fillet area can be stored in a nonvolatile memory, second data after the device is powered on is configured to a register by the nonvolatile memory, and when the fillet layer is generated in the display subsystem, the register is configured to acquire the image data of other areas except the fillet area in the fillet layer and the position of the fillet area.
For another example, the pixel information of the fillet area is a transparency value of the pixel of the fillet area, and the CPU may store the transparency value of the pixel of the fillet area in the memory or the on-chip memory. And when the fillet layer is generated in the display subsystem, the image data of the other area except the fillet area in the fillet layer, the position of the fillet area and the chrominance component of the fillet area are acquired by the register configuration.
The pixel information of the fillet area of the fillet layer may be stored in hexadecimal. For example, the transparency value and the chrominance components of a pixel may be represented by decimal values 0 to 255, and the decimal value may be converted into a hexadecimal value for storage: a transparency value of 216 corresponds to the hexadecimal value D8, which may also be written as 0xD8, and a chrominance component of 255 corresponds to the hexadecimal value FF, which may also be written as 0xFF. The position information of the fillet area of the fillet layer may be stored in various ways.
As an example, the fillet area of the fillet layer is a transition area of the fillet layer, that is, the fillet area is a gray area in the figure, and for convenience of understanding, the "fillet area" is directly described as the "transition area". For example, the CPU may store the location of each pixel of the transition region, e.g., the coordinates of each pixel.
As shown in fig. 7, the pixels in the fillet layer have row and column serial numbers, from which the position of each pixel can be determined. The figure schematically shows a 10 x 10 area of pixels, where R denotes a row, R0 to R9 denote row numbers 0 to 9, C denotes a column, and C0 to C9 denote column numbers 0 to 9. Taking the pixel at the upper right corner as an example, the pixel is located at row 0, column 9, so its row number is 0 and its column number is 9. When the position information of the pixel is stored, it can be expressed in hexadecimal: the row number of the pixel is 0x00 and the column number is 0x09. Table 1 below exemplarily shows the storage contents of the fillet layer, taking as an example the case where only the pixel transparency values of the transition area and the position information of the transition area are stored. Each pixel value corresponds to the pixel determined by the row number and the column number.
TABLE 1
Row number Column number Pixel value Row number Column number Pixel value
0x00 0x07 0xD8 0x03 0x04 0x80
0x00 0x08 0xD8 0x04 0x02 0xD8
0x00 0x09 0xD8 0x04 0x03 0x80
0x01 0x05 0xD8 0x05 0x01 0xD8
0x01 0x06 0xD8 0x05 0x02 0xD8
0x01 0x07 0xD8 0x06 0x01 0xD8
0x01 0x08 0x80 0x06 0x02 0x80
0x02 0x03 0xD8 0x07 0x00 0xD8
0x02 0x04 0xD8 0x07 0x01 0xD8
0x02 0x05 0xD8 0x08 0x00 0xD8
0x02 0x06 0x80 0x08 0x01 0x80
0x03 0x02 0xD8 0x09 0x00 0xD8
0x03 0x03 0xD8 - - -
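The following sketch shows how Table 1 style entries could be produced from a transparency plane, treating every pixel whose transparency lies strictly between the two thresholds as a transition-area pixel and formatting the values in hexadecimal as above; the function name and the plane representation are illustrative assumptions.

```python
def table1_entries(alpha_plane, first_threshold=0, second_threshold=255):
    """List (row, column, transparency) for every transition-area pixel,
    i.e. every pixel whose transparency lies strictly between the two
    thresholds, formatted in hexadecimal as in Table 1."""
    entries = []
    for r, row in enumerate(alpha_plane):
        for c, a in enumerate(row):
            if first_threshold < a < second_threshold:
                entries.append((f"0x{r:02X}", f"0x{c:02X}", f"0x{a:02X}"))
    return entries
```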
To further reduce the amount of memory, the location information of the transition zone may be stored in an encoded manner. For example, since the row number or the column number of the plurality of consecutive pixels is the same, the position of the start pixel of the plurality of consecutive pixels in the transition area and the length of the plurality of consecutive pixels can be saved. It should be understood that the start pixel in the embodiments of the present application refers to the first pixel in the plurality of consecutive pixels, and the end pixel refers to the last pixel in the plurality of consecutive pixels. It should also be understood that a plurality of consecutive pixels in the embodiment of the present application may be in the same row or the same column, and the embodiment of the present application is not limited thereto.
Taking fig. 7 as an example, the coordinates of the starting pixel of each row of pixels in the transition area, such as the row number and the column number, and the length of the transition area in this segment, that is, the length of each row of pixels, may be saved, or may be understood as the number of pixels in each row. From the above information the location of the transition zone can be determined. As shown in table 2 below, the storage content of the rounded corner image layer, i.e. the encoding of the transition region shown in fig. 7, is exemplarily shown by taking the storage of only the pixel transparency value of the transition region and the location information of the transition region as an example. The pixel values of each row in the table correspond to the pixels of the transition region of the row in a one-to-one manner, wherein the first pixel value of each row corresponds to the first pixel of each row.
TABLE 2
Row number Column number Length Pixel values
0x00 0x07 0x03 0xD8 0xD8 0xD8
0x01 0x05 0x04 0xD8 0xD8 0xD8 0x80
0x02 0x03 0x04 0xD8 0xD8 0xD8 0x80
0x03 0x02 0x03 0xD8 0xD8 0x80
0x04 0x02 0x02 0xD8 0x80
0x05 0x01 0x02 0xD8 0xD8
0x06 0x01 0x02 0xD8 0x80
0x07 0x00 0x02 0xD8 0xD8
0x08 0x00 0x02 0xD8 0x80
0x09 0x00 0x01 0xD8
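The following sketch illustrates the Table 2 style encoding described above: for each row, the start column of the run of transition pixels, the run length, and the pixel values. It assumes, as in fig. 7, that the transition pixels of a row form one contiguous run; the function name and thresholds are illustrative assumptions.

```python
def encode_rows(alpha_plane, lo=0, hi=255):
    """Encode each row of the transition area as
    (row, start column, length, [pixel values]), as in Table 2.
    Assumes the transition pixels of a row form one contiguous run."""
    records = []
    for r, row in enumerate(alpha_plane):
        cols = [c for c, a in enumerate(row) if lo < a < hi]
        if cols:
            start, length = cols[0], len(cols)
            records.append((r, start, length, [row[c] for c in cols]))
    return records
```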
Of course, the coordinates of the start pixel of each column of pixels in the transition region and the length of the transition region may also be saved. For another example, coordinates of a start pixel and coordinates of an end pixel in a plurality of consecutive pixels in the transition region may also be saved, for example, coordinates of a start pixel and coordinates of an end pixel of each row of pixels in the transition region are saved, and consecutive pixels between the end pixel and the start pixel are stored by default. The location of the transition region may also be determined by the coordinates of the start pixel and the end pixel of each row of pixels.
As shown in table 3 below, the storage contents of the rounded corner layer are exemplarily shown by taking the case of storing only the pixel transparency value of the transition region and the position information of the transition region as an example. The pixel value of each row in the table corresponds to the pixels in the transition area of the row one by one, wherein the first pixel value of each row corresponds to the first pixel determined by the row sequence number (start) and the column sequence number (start) of each row, and the last pixel value of each row corresponds to the last pixel determined by the row sequence number (end) and the column sequence number (end) of each row.
TABLE 3
Row number (start) Column number (start) Row number (end) Column number (end) Pixel values
0x00 0x07 0x00 0x09 0xD8 0xD8 0xD8
0x01 0x05 0x01 0x08 0xD8 0xD8 0xD8 0x80
0x02 0x03 0x02 0x06 0xD8 0xD8 0xD8 0x80
0x03 0x02 0x03 0x04 0xD8 0xD8 0x80
0x04 0x02 0x04 0x03 0xD8 0x80
0x05 0x01 0x05 0x02 0xD8 0xD8
0x06 0x01 0x06 0x02 0xD8 0x80
0x07 0x00 0x07 0x01 0xD8 0xD8
0x08 0x00 0x08 0x01 0xD8 0x80
0x09 0x00 0x09 0x00 0xD8
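The following sketch shows one way the Table 3 representation could be decoded back into a transparency plane, with the pixels between the start pixel and the end pixel taken as consecutive columns of the same row; the record layout, default fill value, and plane size are illustrative assumptions.

```python
def decode_start_end(records, second_alpha=0x00, rows=10, cols=10):
    """Restore a transparency plane from Table 3 style records:
    (start_row, start_col, end_row, end_col, [pixel values]).
    Pixels between the start and end pixel are consecutive columns of the
    same row; every other pixel is filled with the default second_alpha."""
    plane = [[second_alpha] * cols for _ in range(rows)]
    for sr, sc, er, ec, values in records:
        assert sr == er and len(values) == ec - sc + 1
        for i, a in enumerate(values):
            plane[sr][sc + i] = a
    return plane

# One record from Table 3: row 0, columns 7 to 9, three 0xD8 pixels.
plane = decode_start_end([(0x00, 0x07, 0x00, 0x09, [0xD8, 0xD8, 0xD8])])
```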
For another example, the row number and the column number of the starting pixel of each row of pixels in the transition area, and the column numbers of other pixels may also be stored, and the position of the transition area may also be determined by the above information. As shown in table 4 below, the storage contents of the rounded corner image layer are exemplarily shown by taking the case of storing only the pixel transparency value of the transition region and the position information of the transition region as an example. The pixel value of each row in the table corresponds to the pixels in the transition area of the row one by one, wherein the first pixel value of each row corresponds to the first pixel determined by the row number and the column number 1 of each row, and the last pixel value of each row corresponds to the last pixel determined by the row number and the column number 4 of each row.
TABLE 4
Row number Column number 1 Column number 2 Column number 3 Column number 4 Pixel values
0x00 0x07 0x08 0x09 - 0xD8 0xD8 0xD8
0x01 0x05 0x06 0x07 0x08 0xD8 0xD8 0xD8 0x80
0x02 0x03 0x04 0x05 0x06 0xD8 0xD8 0xD8 0x80
0x03 0x02 0x03 0x04 - 0xD8 0xD8 0x80
0x04 0x02 0x03 - - 0xD8 0x80
0x05 0x01 0x02 - - 0xD8 0xD8
0x06 0x01 0x02 - - 0xD8 0x80
0x07 0x00 0x01 - - 0xD8 0xD8
0x08 0x00 0x01 - - 0xD8 0x80
0x09 0x00 - - - 0xD8
As another example, the fillet area of the fillet layer may be a rectangular area that includes the transition area of the fillet layer. Still taking fig. 7 as an example, the fillet area is the rectangular area shown in fig. 7, which includes the transition area and also parts of the fully transparent area and of the opaque area. When the fillet area is a rectangular area including the transition area of the fillet layer, the position information of the fillet area may be stored in a manner similar to that used when the fillet area is the transition area; for brevity, the description is omitted here.
To further reduce the storage amount, when storing the position information of the fillet area, only the position information of the corner points of the rectangular fillet area may be stored: for example, the positions of two diagonal corner points, the positions of any three corner points, or the positions of the four corner points of the rectangular area may be stored; the position of a starting corner point of the rectangular area together with the length and width of the rectangular area may also be stored; or the position of any corner point of the rectangular area together with a directed length and width of the rectangular area may be stored. When the display subsystem reads the position information of the fillet area, the position of the fillet area can be determined from this information. For example, the storage contents of the position information of the fillet area may be as shown in table 5 below.
TABLE 5
(Table 5 is reproduced as an image in the original publication and shows exemplary storage contents for the position information of the fillet area.)
Table 5 shows only some exemplary storage manners of the location information of the fillet area, but it should be understood that the manner of storing the location of the fillet area according to the embodiment of the present application is not limited thereto. Taking the example of storing only the transparency value of the pixel in the rounded corner area, the stored pixel information in the rounded corner area can be as shown in table 6 below. The pixel values of each row in the table correspond to the pixels of each row of the rounded corner area of the rectangle.
TABLE 6
0xFF 0xFF 0xFF 0xFF 0xFF 0xFF 0xFF 0xD8 0xD8 0xD8
0xFF 0xFF 0xFF 0xFF 0xFF 0xD8 0xD8 0xD8 0x80 0x00
0xFF 0xFF 0xFF 0xD8 0xD8 0xD8 0x80 0x00 0x00 0x00
0xFF 0xFF 0xD8 0xD8 0x80 0x00 0x00 0x00 0x00 0x00
0xFF 0xFF 0xD8 0x80 0x00 0x00 0x00 0x00 0x00 0x00
0xFF 0xD8 0xD8 0x00 0x00 0x00 0x00 0x00 0x00 0x00
0xFF 0xD8 0x80 0x00 0x00 0x00 0x00 0x00 0x00 0x00
0xD8 0xD8 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00
0xD8 0x80 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00
0x80 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00
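The following sketch combines a rectangular fillet area given by two diagonal corner points (one of the position-storage options above) with per-row transparency values laid out as in Table 6, filling everything outside the rectangle from the second data; the function name, default fill value, and layer size are illustrative assumptions.

```python
def restore_rect_fillet(top_left, bottom_right, rect_alphas,
                        second_alpha=0x00, rows=10, cols=10):
    """Rebuild the transparency plane when the fillet area is a rectangle.

    top_left, bottom_right: (row, col) diagonal corner points of the rectangle.
    rect_alphas: row-major transparency values of the rectangle, laid out as in Table 6.
    second_alpha: fill value (second data) for everything outside the rectangle.
    """
    plane = [[second_alpha] * cols for _ in range(rows)]
    (r0, c0), (r1, c1) = top_left, bottom_right
    width = c1 - c0 + 1
    for i, a in enumerate(rect_alphas):
        r, c = r0 + i // width, c0 + i % width
        plane[r][c] = a
    return plane
```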
Fig. 8 shows a schematic flowchart of a method for processing an image layer according to an embodiment of the present application. As shown in fig. 8, the process flow for the rounded corner layer is as follows.
S810, the fillet layer is drawn. This step may be performed by the CPU or the GPU. The position of the fillet area in the fillet layer corresponds to the round-corner position of the display screen. Illustratively, the drawn fillet layer contents may be the top and bottom contents shown in fig. 4. Step S810 may be performed only once after the system is powered on; that is, the CPU or the GPU may draw the fillet layer only once, and after S830 is executed, in subsequent superimposition processes the display subsystem may directly read the content of the fillet layer stored after the first drawing, without redrawing. The second data in the embodiment of the present application is stored in the non-volatile memory, and after the system is powered on, the CPU also configures the second data into the register so that it can be used for decoding and restoring the fillet layer in step S850.
S820, the CPU or the GPU encodes the content of the fillet layer to be stored, and the encoded image data of the fillet layer is the first data in the embodiment of the present application. In this step, the CPU or the GPU may encode only the image data of the fillet area of the fillet layer, and the first data is the encoded image data of the fillet area. The image data of the fillet area may include pixel information of the fillet area and position information of the fillet area. Optionally, when the fillet area is a transition area, the CPU or the GPU encodes the position information of the transition area and the pixel information of the transition area. The encoding method can be referred to the above description of fig. 7, and is not described herein again. Alternatively, the fillet area is an area including a transition area, and the CPU or the GPU encodes position information and pixel information of the area including the transition area. The encoding method can be referred to the above description of fig. 7, and is not described herein again.
S830, the CPU or the GPU stores the encoded image data (first data in the embodiment of the present application) of the fillet area in the storage device. Alternatively, the CPU or the GPU may store the pixel information and the position information of the encoded rounded corner region in the memory. Alternatively, the CPU or the GPU may store the pixel information of the encoded fillet area in the memory, and store the position information of the encoded fillet area in the on-chip memory such as a register. Alternatively, the CPU or the GPU may store the pixel information and the position information of the encoded rounded corner region in the on-chip memory.
In step S830, the CPU or the GPU stores only part of the data (i.e., the first data in the embodiment of the present application) in the fillet layer, and the remaining part of the data in the fillet layer except for the first data is stored in the nonvolatile memory, i.e., the second data described in step S810. Optionally, in this step, pixel information and position information of a fillet area of the fillet layer may be stored, where the pixel information of the fillet area may include a transparency value of a pixel of the fillet area and a chrominance component of the pixel of the fillet area. The second data may include pixel information of a non-fillet area of the fillet layer, for example, the second data includes a transparency value and a chrominance component of one pixel of the non-fillet area, and in step S850, the transparency value and the chrominance component of the one pixel may be filled into other areas of the fillet layer except for the fillet area. Alternatively, the transparency value of the one pixel may be a transparency value of a transparent pixel, that is, in step S850, other areas except for the fillet area in the fillet layer may be filled with the transparent pixel. Optionally, in this step, pixel information and position information of a fillet area of the fillet layer may be stored, where the pixel information of the fillet area is a fillet area transparency value. The second data may include pixel information of a non-fillet area of the fillet layer and color information of the fillet area, for example, the second data includes a transparency value and a chrominance component of one pixel of the non-fillet area, and in step S850, the transparency value and the chrominance component of the one pixel may be filled into other areas of the fillet layer except for the fillet area. The second data may further include a chrominance component of one pixel of the fillet area, and in step S850, the chrominance component of the one pixel may be filled into the fillet area in the fillet layer. Of course, the chrominance components of the rounded corner region and the chrominance components of the non-rounded corner region may be the same, so that the second data may also include only the chrominance component of one pixel in the rounded corner layer, and in step S850, the chrominance component of the one pixel may be filled into the entire rounded corner layer.
S840, the display subsystem reads the image data of the fillet layer in the memory into the read channel RCH. In this step, the display subsystem mainly reads the image data of the fillet layer stored in the memory from the memory through the read channel. When the CPU or the GPU does not store the image data of the rounded image layer in the memory in step S830, step S840 may be omitted.
S850, the display subsystem decodes the pixel information and the position information of the coded fillet area. The display subsystem may acquire the corresponding stored image data from the storage device in which the CPU or the GPU stores the image data of the rounded image layer in S830. The display subsystem may restore the area corresponding to the part of data according to the acquired data, for example, if the image data of the fillet area is stored in step S830, the display subsystem may restore the fillet area of the fillet layer according to the acquired data. In this step, while decoding the pixel information and the position information of the encoded fillet area, the display subsystem obtains second data from a register configured with the second data, and may automatically fill other areas in the fillet layer with the second data, thereby restoring the fillet layer.
S860, the display subsystem performs superimposition and post-processing. The display subsystem superimposes the fillet layer and at least one target layer, and performs image algorithm processing, composition, sending for display, and other processing.
Method embodiments of the present application are described above in detail with reference to fig. 1 to 8, and apparatus embodiments of the present application are described below in detail with reference to fig. 9 to 10. It is to be understood that the description of the method embodiments corresponds to the description of the apparatus embodiments, and therefore reference may be made to the preceding method embodiments for parts not described in detail.
Fig. 9 is a schematic structural diagram of an apparatus provided in an embodiment of the present application. The device 900 in fig. 9 may be the above mentioned display subsystem, for example, a specific example of the display subsystem DSS of fig. 3. The apparatus shown in fig. 9 may be used to implement the method performed by the display subsystem described above, in particular, the apparatus 900 may be used to perform the method of fig. 6, and may implement the embodiment shown in fig. 7, and the description is not repeated to avoid redundancy.
The apparatus 900 shown in fig. 9 includes a reading module 910, a generating module 920, and a superimposing module 930. A reading module 910, configured to read first data from a storage device, where the first data is partial data of the fillet layer, and the first data includes image data of a fillet area in the fillet layer. A generating module 920, configured to generate the fillet layer according to the first data and second data, where the second data is remaining data in the fillet layer except for the partial data. And an overlapping module 930, configured to overlap the rounded corner layer and the at least one target layer to obtain an image to be displayed. Optionally, the image data of the fillet area includes pixel information of the fillet area and position information of the fillet area. Optionally, the storage device includes a memory and an on-chip memory, and the reading module 910 is specifically configured to read pixel information of the fillet area from the memory, and read position information of the fillet area from the on-chip memory. Optionally, the fillet area is a transition area of the fillet map layer, where the transition area is an area where pixels with transparency values between a first transparency threshold and a second transparency threshold on the fillet map layer are located. Optionally, the position information of the fillet area includes: a starting position or an ending position of a plurality of consecutive pixels in the rounded region, and a length of the plurality of consecutive pixels.
Optionally, the fillet area is a rectangular area including a transition area of the fillet layer, where the transition area is an area where pixels with transparency values between a first transparency threshold and a second transparency threshold on the fillet layer are located. Optionally, the position information of the fillet area includes: position information of corner points of the rectangular area. Optionally, the pixel information of the fillet area is a transparency value of a pixel in the fillet area. Optionally, the generating module 920 is specifically configured to generate the fillet area of the fillet layer according to the first data, and fill the areas other than the fillet area in the fillet layer according to the second data, where the second data includes a transparency value of a transparent pixel. Optionally, the first data is image data of the fillet area in the fillet layer. Optionally, the second data is a preset value or a default value.
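A corresponding sketch, under stated assumptions, for the rectangular-area variant described above: the position information is taken to be two corner points of the rectangle, the pixel information a row-major block of transparency values, and the second data the transparency value of a transparent pixel; the type fillet_rect_t and the function restore_rect_fillet are hypothetical.

    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>

    /* Hypothetical corner-point descriptor of the rectangular fillet area. */
    typedef struct {
        uint32_t x0, y0;   /* top-left corner (inclusive)     */
        uint32_t x1, y1;   /* bottom-right corner (exclusive) */
    } fillet_rect_t;

    /* Restore a width x height transparency layer: fill everything with the
     * second data (fill_value, e.g. the transparency value of a transparent
     * pixel), then copy the stored rectangle of pixel information back in.   */
    static void restore_rect_fillet(uint8_t *layer, uint32_t width, uint32_t height,
                                    const fillet_rect_t *rect,
                                    const uint8_t *rect_values, uint8_t fill_value)
    {
        memset(layer, fill_value, (size_t)width * height);

        uint32_t rect_w = rect->x1 - rect->x0;
        for (uint32_t y = rect->y0; y < rect->y1; y++) {
            memcpy(layer + (size_t)y * width + rect->x0,
                   rect_values + (size_t)(y - rect->y0) * rect_w,
                   rect_w);
        }
    }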
At least one module of the apparatus shown in fig. 9 may be implemented by software, hardware, or a combination of software and hardware. The software may be computer program instructions executed by various types of processors, such as a CPU. The hardware may be one or more of various types of circuitry, such as logic circuitry, algorithmic circuitry, digital circuitry, analog circuitry, or programmable circuitry. Thus, the display subsystem may include hardware circuitry and the necessary software drivers, including the aforementioned computer program instructions.
In another embodiment, the functionality of the display subsystem may be implemented by a processor, such as the CPU mentioned in the previous embodiments. Fig. 10 shows a schematic block diagram of an apparatus 1000 provided in an embodiment of the present application. The apparatus 1000 in fig. 10 may be the above-mentioned apparatus, for example, a specific example of a terminal device having the hardware layers of fig. 1. The apparatus 1000 in fig. 10 includes: at least one processor 1010, at least one network interface 1040 or other user interface 1030, a memory 1020, and at least one communication bus 1050. The communication bus 1050 is used to enable communication among these components. Optionally, the user interface 1030 includes a display (e.g., an OLED, LCD, CRT, holographic imaging device, or projection device), a keyboard, or a pointing device (e.g., a mouse, trackball, touch pad, or touch screen). Optionally, the network interface 1040 may include various types of wired or wireless transceivers.
The memory 1020 may be used to store software programs and modules, and to provide instructions and data to the processor 1010. The memory 1020 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function or an image playing function), and the like, and the data storage area may store data (such as audio data or a phonebook) created according to the use of the terminal device 1000. In addition, the memory 1020 may include a read-only memory and a random access memory (RAM), and a portion of the memory 1020 may further include a non-volatile memory (NVM), such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
In some embodiments, the memory 1020 stores the following elements, executable modules or data structures, or a subset or an expanded set thereof: an operating system, including various system programs such as the framework layer, the core library layer, and the driver layer shown in fig. 1, for implementing various basic services and processing hardware-based tasks; and an application module, including various applications such as the desktop (launcher), the media player (media player), and the browser (browser) shown in fig. 1, for implementing various application services.
In an embodiment of the present application, by invoking the programs or instructions stored in the memory 1020, the processor 1010 is configured to: read first data from a storage device, where the first data is partial data of the fillet layer, and the first data includes image data of a fillet area in the fillet layer; generate the fillet layer according to the first data and second data, where the second data is the remaining data in the fillet layer except for the partial data; and superpose the fillet layer and at least one target layer to obtain an image to be displayed. In this way, the processor 1010 invokes the programs or instructions to implement the functionality of the display subsystem.
The apparatus 1000 may correspond to (for example, may be configured in, or may itself be) the terminal device described in fig. 1 or fig. 2, and each module or unit in the apparatus 1000 is respectively configured to execute each action or processing procedure executed by the terminal device in the methods in fig. 5 to fig. 8; a detailed description thereof is omitted here to avoid redundancy.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again. In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, or the portion thereof that substantially contributes to the prior art, may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and all the changes or substitutions should be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (22)

  1. A method for processing an image layer, comprising:
    reading first data from a storage device, wherein the first data is partial data of a fillet layer, and the first data comprises image data of a fillet area in the fillet layer;
    generating the fillet layer according to the first data and second data, wherein the second data are remaining data except the partial data in the fillet layer;
    and superposing the fillet layer and at least one target layer to obtain an image to be displayed.
  2. The method of claim 1, wherein the image data of the fillet area comprises pixel information of the fillet area and position information of the fillet area.
  3. The method of claim 2, wherein the storage device comprises a memory and an on-chip memory, and wherein reading the first data from the storage device comprises:
    and reading the pixel information of the fillet area from the memory, and reading the position information of the fillet area from the on-chip memory.
  4. The method according to claim 2 or 3, wherein the fillet area is a transition area of the fillet layer, wherein the transition area is an area where pixels with transparency values between a first transparency threshold and a second transparency threshold are located on the fillet layer.
  5. The method of claim 4, wherein the position information of the fillet area comprises: a starting position or an ending position of a plurality of consecutive pixels in the fillet area, and a length of the plurality of consecutive pixels.
  6. The method according to claim 2 or 3, wherein the fillet area is a rectangular region comprising a transition area of the fillet layer, wherein the transition area is an area where pixels with transparency values between a first transparency threshold and a second transparency threshold are located on the fillet layer.
  7. The method of claim 6, wherein the position information of the fillet area comprises: position information of corner points of the rectangular region.
  8. The method according to any one of claims 2 to 7, wherein the pixel information of the fillet area is a transparency value of a pixel in the fillet area.
  9. The method according to any one of claims 1 to 8, wherein the generating the fillet layer according to the first data and the second data comprises:
    generating a fillet area of the fillet layer according to the first data;
    and filling other areas except the fillet area in the fillet layer according to the second data, wherein the second data comprises transparency values of transparent pixels.
  10. The method according to any one of claims 1 to 9, wherein the first data is image data of a fillet area in the fillet layer.
  11. The method of any one of claims 1 to 10, wherein the second data is a preset value or a default value.
  12. An apparatus, comprising:
    a reading module, configured to read first data from a storage device, wherein the first data is partial data of a fillet layer, and the first data comprises image data of a fillet area in the fillet layer;
    a generating module, configured to generate the fillet layer according to the first data and second data, wherein the second data is the remaining data in the fillet layer except for the partial data;
    and a superposition module, configured to superpose the fillet layer and at least one target layer to obtain an image to be displayed.
  13. The apparatus of claim 12, wherein the image data of the fillet area comprises pixel information of the fillet area and position information of the fillet area.
  14. The apparatus of claim 13, wherein the storage device comprises a memory and an on-chip memory,
    the reading module is specifically configured to read pixel information of the fillet area from the memory, and read position information of the fillet area from the on-chip memory.
  15. The apparatus according to claim 13 or 14, wherein the fillet area is a transition area of the fillet layer, wherein the transition area is an area where pixels with transparency values between a first transparency threshold and a second transparency threshold are located on the fillet layer.
  16. The apparatus of claim 15, wherein the position information of the fillet area comprises: a starting position or an ending position of a plurality of consecutive pixels in the fillet area, and a length of the plurality of consecutive pixels.
  17. The apparatus according to claim 13 or 14, wherein the fillet area is a rectangular region comprising a transition area of the fillet layer, wherein the transition area is an area where pixels with transparency values between a first transparency threshold and a second transparency threshold are located on the fillet layer.
  18. The apparatus of claim 17, wherein the position information of the fillet area comprises: position information of corner points of the rectangular region.
  19. The apparatus according to any one of claims 13 to 18, wherein the pixel information of the fillet area is a transparency value of a pixel in the fillet area.
  20. The apparatus according to any one of claims 12 to 19, wherein the generating module is specifically configured to generate a fillet area of the fillet layer according to the first data, where the first data includes image data of the fillet area in the fillet layer;
    and filling other areas except the fillet area in the fillet layer according to the second data, wherein the second data comprises transparency values of transparent pixels.
  21. The apparatus according to any of claims 12 to 20, wherein the first data is image data of a fillet area in the fillet layer.
  22. The apparatus of any one of claims 12 to 21, wherein the second data is a preset value or a default value.
CN201980091778.9A 2019-04-23 2019-04-23 Method and device for processing image layer Active CN113412470B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/083900 WO2020215207A1 (en) 2019-04-23 2019-04-23 Method and device for processing layers

Publications (2)

Publication Number Publication Date
CN113412470A (en) 2021-09-17
CN113412470B (en) 2023-09-08

Family

ID=72941271

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201980091778.9A Active CN113412470B (en) 2019-04-23 2019-04-23 Method and device for processing image layer

Country Status (2)

Country Link
CN (1) CN113412470B (en)
WO (1) WO2020215207A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113412470B (en) * 2019-04-23 2023-09-08 华为技术有限公司 Method and device for processing image layer
CN112637667A (en) * 2020-12-30 2021-04-09 上海铼锶信息技术有限公司 Progress bar display method and intelligent electronic equipment


Patent Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170034446A1 (en) * 2015-07-29 2017-02-02 Samsung Electronics Co., Ltd. User terminal apparatus and control method thereof
US20170186218A1 (en) * 2015-12-28 2017-06-29 Le Holding (Beijing) Co., Ltd. Method for loading 360 degree images, a loading module and mobile terminal
CN106681706A (en) * 2016-08-09 2017-05-17 腾讯科技(深圳)有限公司 Application progress processing method and terminal
WO2018036526A1 (en) * 2016-08-24 2018-03-01 北京小米移动软件有限公司 Display method and device
US20180088393A1 (en) * 2016-09-28 2018-03-29 Beijing Xiaomi Mobile Software Co., Ltd. Electronic device, display method and medium
JP2019500631A (en) * 2016-09-28 2019-01-10 北京小米移動軟件有限公司Beijing Xiaomi Mobile Software Co.,Ltd. Electronic device and display method
CN107391073A (en) * 2017-07-26 2017-11-24 北京小米移动软件有限公司 Display module and electronic equipment
CN107682730A (en) * 2017-09-18 2018-02-09 北京嗨动视觉科技有限公司 Map overlay processing method, map overlay processing unit and video processor
WO2019063495A2 (en) * 2017-09-29 2019-04-04 Inventrans Bvba Method, device and computer program for overlaying a graphical image
CN107643885A (en) * 2017-10-13 2018-01-30 北京小米移动软件有限公司 Display screen, method for displaying image and terminal and storage medium
CN107656791A (en) * 2017-10-18 2018-02-02 珠海市魅族科技有限公司 Method, terminal device, computer installation and the storage medium of display
CN107766038A (en) * 2017-10-24 2018-03-06 四川长虹电器股份有限公司 It is a kind of to carry out the method that profile is cut out and beautified to UI controls based on android system
CN107977947A (en) * 2017-12-22 2018-05-01 维沃移动通信有限公司 A kind of image processing method, mobile terminal
CN108900693A (en) * 2018-05-25 2018-11-27 北京小米移动软件有限公司 Window display method and device
CN109375978A (en) * 2018-10-17 2019-02-22 奇酷互联网络科技(深圳)有限公司 Display methods, computer equipment and the storage medium of fillet screen
WO2020215207A1 (en) * 2019-04-23 2020-10-29 华为技术有限公司 Method and device for processing layers
CN112204619A (en) * 2019-04-23 2021-01-08 华为技术有限公司 Method and device for processing image layer

Also Published As

Publication number Publication date
CN113412470B (en) 2023-09-08
WO2020215207A1 (en) 2020-10-29


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant