CN114089933B - Display parameter adjusting method, electronic device, chip and readable storage medium - Google Patents


Info

Publication number
CN114089933B
CN114089933B (application CN202110644476.9A)
Authority
CN
China
Prior art keywords
offset
rendering
application
synthesis
frame images
Prior art date
Legal status
Active
Application number
CN202110644476.9A
Other languages
Chinese (zh)
Other versions
CN114089933A (en)
Inventor
冯林
单双双
Current Assignee
Shanghai Glory Smart Technology Development Co ltd
Original Assignee
Honor Device Co Ltd
Priority date
Filing date
Publication date
Application filed by Honor Device Co Ltd
Priority to CN202110644476.9A
Publication of CN114089933A
Application granted
Publication of CN114089933B
Status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/14: Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00: 3D [Three Dimensional] image rendering
    • G06T 15/10: Geometric effects
    • G06T 15/20: Perspective computation
    • G06T 15/205: Image-based rendering

Abstract

The embodiments of this application relate to the technical field of display and provide a display parameter adjusting method, an electronic device, a chip and a readable storage medium. The display parameter adjusting method includes: acquiring the application rendering durations and layer synthesis durations of a plurality of historical frame images; determining a target application rendering offset and a target layer synthesis offset according to the application rendering durations and layer synthesis durations of the plurality of historical frame images, such that the proportion of ideal frame images among the plurality of historical frame images is greater than a preset proportion, where an ideal frame image is a historical frame image that completes both the application rendering process and the layer synthesis process within one vertical synchronization period when the target application rendering offset and the target layer synthesis offset are used; and adjusting the current application rendering offset according to the target application rendering offset, and adjusting the current layer synthesis offset according to the target layer synthesis offset. This display parameter adjusting method can improve graphics display efficiency.

Description

Display parameter adjusting method, electronic device, chip and readable storage medium
Technical Field
The embodiment of the application relates to the technical field of display, in particular to a display parameter adjusting method, electronic equipment, a chip and a readable storage medium.
Background
With the rapid development of science and technology, a great number of electronic devices, including computers, tablet computers, vehicle-mounted terminals, wearable devices and mobile phones, have emerged. Most electronic devices are equipped with a display screen and a corresponding display system.
The display pipeline of the display system mainly comprises three processes: application (APP) rendering, layer composition and hardware display. The layer composition process may be implemented by a display composition process (e.g., SurfaceFlinger). To avoid visual artifacts such as jank (stutter) in the displayed graphics, the display system synchronizes the stages of the display pipeline via vertical synchronization (Vsync) signals.
Generally, when the display system displays a frame image based on the Vsync signal, it executes the application rendering process in the Nth Vsync period, the layer composition process in the (N+1)th Vsync period, and the hardware display process in the (N+2)th Vsync period. That is, at least two Vsync periods elapse from the start of application rendering of an image frame to sending it to the hardware for display, so the graphics display efficiency has room for improvement.
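As a rough illustration of this timing (hypothetical numbers; a 60 Hz display with a ~16.6 ms Vsync period is assumed):

```java
// Hypothetical illustration of the pipeline latency described above,
// assuming a 60 Hz display (Vsync period ~16.6 ms).
public class PipelineLatency {
    public static void main(String[] args) {
        double tVsyncMs = 16.6; // one vertical synchronization period

        // Without offset adjustment: render in period N, compose in N+1,
        // display from N+2, so at least two full periods elapse.
        double baselineLatencyMs = 2 * tVsyncMs; // ~33.2 ms
        System.out.printf("Baseline latency: %.1f ms%n", baselineLatencyMs);

        // If rendering and composition both finish within one period (the
        // "ideal frame" this method aims for), display can start at the
        // very next Vsync edge.
        double adjustedLatencyMs = tVsyncMs; // ~16.6 ms
        System.out.printf("Latency with adjusted offsets: %.1f ms%n", adjustedLatencyMs);
    }
}
```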
Disclosure of Invention
The embodiments of this application provide a display parameter adjusting method, an electronic device, a chip and a readable storage medium, which improve graphics display efficiency.
In a first aspect, a method for adjusting display parameters is provided, including: acquiring the application rendering durations and layer synthesis durations of a plurality of historical frame images; determining a target application rendering offset and a target layer synthesis offset according to the application rendering durations and layer synthesis durations of the plurality of historical frame images, where the proportion of ideal frame images among the plurality of historical frame images is greater than a preset proportion, and an ideal frame image is a historical frame image that completes both the application rendering process and the layer synthesis process within one vertical synchronization period when the target application rendering offset and the target layer synthesis offset are used; and adjusting the current application rendering offset according to the target application rendering offset, and adjusting the current layer synthesis offset according to the target layer synthesis offset.
With this display parameter adjusting method, the target application rendering offset and target layer synthesis offset determined from the application rendering durations and layer synthesis durations of the plurality of historical frame images make the proportion of ideal frame images among the historical frame images greater than the preset proportion. Because the application rendering durations and layer synthesis durations of historical frame images are representative of those of the frame images to be displayed, after the current application rendering offset and current layer synthesis offset are adjusted according to the target values, some of the frame images to be displayed can complete application rendering and layer synthesis within one vertical synchronization period, so hardware display can start at the beginning of the next vertical synchronization period. The method thus needs only one vertical synchronization period from application rendering to hardware display of a frame image, which compresses the display processing time of a frame, reduces display latency, improves graphics display efficiency, and improves touch responsiveness.
In one possible implementation, determining a target application rendering offset and a target layer composition offset according to the application rendering durations and layer composition durations of a plurality of historical frame images includes: solving an offset optimization model based on the application rendering durations and layer composition durations of the plurality of historical frame images to obtain an optimal solution for an application rendering offset adjustment value; and determining the target application rendering offset and the target layer composition offset according to that optimal solution. The offset optimization model characterizes the maximum of the product of the touch-following gain and the no-frame-loss rate. The touch-following gain (the responsiveness gain) is the sum of the application rendering offset adjustment value and one vertical synchronization period; the no-frame-loss rate is the proportion of the plurality of historical frame images that do not drop a frame when the application rendering offset and layer composition offset derived from the adjustment value are used.
In this implementation, the offset optimization model characterizes the maximum of the product of the touch-following gain and the no-frame-loss rate, so the optimal solution for the application rendering offset adjustment value obtained by solving the model secures both the touch-following gain and the no-frame-loss rate. Under the target application rendering offset and target layer composition offset, the proportion of ideal frame images among the historical frame images is maximized, and hence, for the frame images to be displayed, the touch-following gain and the no-frame-loss rate are jointly maximized and the display effect is optimal.
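As one possible reading of this model, the sketch below brute-forces the adjustment value over a candidate grid. The data layout (per-frame {render, compose} durations in milliseconds), the 0.5 ms search step, and the simplified single-period fit test are illustrative assumptions, not the patent's actual solver:

```java
import java.util.List;

// Illustrative brute-force solver for the offset optimization model:
// maximize (touch-following gain) x (no-frame-loss rate).
// Data layout and the search grid are assumptions made for this sketch.
public class OffsetOptimizer {
    /**
     * frames: per-frame {render duration, compose duration} in ms.
     * Returns the optimal application-rendering-offset adjustment value (ms).
     */
    static double solve(List<double[]> frames, double tVsyncMs, double initialAppOffsetMs) {
        double bestAdj = 0, bestScore = -1;
        for (double adj = 0; adj <= tVsyncMs; adj += 0.5) { // candidate grid
            double gain = adj + tVsyncMs; // touch-following gain
            int kept = 0;
            for (double[] f : frames) {
                // Simplified fit test: rendering starts (initial offset + adj)
                // after the Vsync edge, and render + compose must still finish
                // before the next Vsync edge for the frame not to be lost.
                if (initialAppOffsetMs + adj + f[0] + f[1] <= tVsyncMs) kept++;
            }
            double rate = frames.isEmpty() ? 0 : (double) kept / frames.size();
            double score = gain * rate; // the model's objective
            if (score > bestScore) { bestScore = score; bestAdj = adj; }
        }
        return bestAdj;
    }
}
```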
In a possible implementation manner, determining a target application rendering offset and a target layer synthesis offset according to an optimal solution of an application rendering offset adjustment value includes: determining the sum of the optimal solution of the application rendering offset adjustment value and the initial application rendering offset as a target application rendering offset; and determining a target layer synthesis offset according to the target application drawing rendering offset. The initial application rendering offset refers to an application rendering offset when offset adjustment is not performed.
In a possible implementation manner, determining a target layer composition offset according to a target application rendering offset includes: calculating an average value of application drawing rendering durations of a plurality of historical frame images; and determining the sum of the average value and the target application drawing rendering offset as a target layer synthesis offset.
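Continuing the sketch above, the two target offsets would then follow directly from the optimal adjustment value and the mean render duration (names and types remain illustrative):

```java
import java.util.List;

// Deriving both target offsets from the optimal adjustment value, per the
// two implementations above. Continues the illustrative OffsetOptimizer sketch.
public class TargetOffsets {
    /** Returns {target application rendering offset, target layer synthesis offset} in ms. */
    static double[] compute(List<double[]> frames, double tVsyncMs, double initialAppOffsetMs) {
        double bestAdj = OffsetOptimizer.solve(frames, tVsyncMs, initialAppOffsetMs);
        double targetAppOffset = initialAppOffsetMs + bestAdj; // optimal solution + initial offset
        // Composition fires after rendering has typically completed:
        // target layer offset = target app offset + mean render duration.
        double avgRender = frames.stream().mapToDouble(f -> f[0]).average().orElse(0);
        double targetSfOffset = targetAppOffset + avgRender;
        return new double[]{ targetAppOffset, targetSfOffset };
    }
}
```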
In a possible implementation manner, the plurality of historical frame images are frame images within a first historical time period, and before adjusting the current application rendering offset according to the target application rendering offset and adjusting the current layer synthesis offset according to the target layer synthesis offset, the method further includes: obtaining the target application rendering offsets and target layer synthesis offsets corresponding to a plurality of second historical time periods, where the plurality of second historical time periods precede and are contiguous with the first historical time period. Adjusting the current application rendering offset according to the target application rendering offset and adjusting the current layer synthesis offset according to the target layer synthesis offset then includes: if a plurality of rendering selectable values are all greater than the current application rendering offset, determining an application rendering offset setting value according to the plurality of rendering selectable values and a layer synthesis offset setting value according to a plurality of synthesis selectable values, where the rendering selectable values comprise the target application rendering offset corresponding to the first historical time period and the target application rendering offsets corresponding to the plurality of second historical time periods, and the synthesis selectable values comprise the target layer synthesis offset corresponding to the first historical time period and the target layer synthesis offsets corresponding to the plurality of second historical time periods; and adjusting the current application rendering offset to the application rendering offset setting value and the current layer synthesis offset to the layer synthesis offset setting value.
In this implementation, a plurality of rendering selectable values and a plurality of synthesis selectable values are calculated from the frame images of the first historical time period and of the second historical time periods respectively, and the current offsets are adjusted only when all rendering selectable values are greater than the current application rendering offset. This avoids frequent adjustment of the application rendering offset and layer synthesis offset by the software system, reducing system load and improving system operating efficiency.
In one possible implementation, determining an application rendering offset setting value according to a plurality of rendering selectable values, and determining a layer composition offset setting value according to a plurality of composition selectable values includes: determining a rendering selectable value which is the smallest among the plurality of rendering selectable values as an application rendering offset setting value; and determining the smallest synthesis selectable value in the plurality of synthesis selectable values as the layer synthesis offset setting value.
In this implementation manner, the currently applied rendering offset is adjusted to the smallest rendering selectable value among the plurality of rendering selectable values, and the current layer composition offset is adjusted to the smallest composition selectable value among the plurality of composition selectable values, so that the frame loss rate is minimized, and the display effect is improved to the greatest extent.
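A minimal sketch of this selection rule under the same illustrative naming, where a null return means the current offsets are left unchanged:

```java
import java.util.Collections;
import java.util.List;

// Selection across historical periods: adjust only when every candidate
// rendering offset exceeds the current one, then take the minimum candidate
// for each offset. Names are illustrative assumptions.
public class OffsetSelector {
    /** Returns {app offset setting value, layer synthesis offset setting value}, or null. */
    static double[] select(List<Double> renderSelectable, List<Double> synthesisSelectable,
                           double currentAppOffset) {
        boolean allLarger = renderSelectable.stream().allMatch(v -> v > currentAppOffset);
        if (!allLarger) return null; // avoid frequent re-adjustment
        return new double[]{
            Collections.min(renderSelectable),    // smallest rendering selectable value
            Collections.min(synthesisSelectable)  // smallest synthesis selectable value
        };
    }
}
```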
In a possible implementation manner, before determining the target application rendering offset and target layer synthesis offset according to the application rendering durations and layer synthesis durations of the plurality of historical frame images, the method further includes: determining, according to the application rendering durations and layer synthesis durations of the plurality of historical frame images, that the stability of their rendering-and-synthesis durations meets a preset condition and that the proportion of adjustable frame images among the plurality of historical frame images is greater than a preset proportion. The rendering-and-synthesis duration is the sum of a historical frame image's application rendering duration and layer synthesis duration; an adjustable frame image is a historical frame image whose rendering-and-synthesis duration plus the initial application rendering offset is smaller than one vertical synchronization period; and the initial application rendering offset is the application rendering offset before any offset adjustment is performed.
In this implementation, the offset adjustment condition is that the stability of the rendering-and-synthesis durations of the plurality of historical frame images meets the preset condition and that the proportion of adjustable frame images among them is greater than the preset proportion. Adjusting the offsets only when this condition is met ensures that adjustment happens only when the application rendering durations of the frame images are stable and frame loss is unlikely, which safeguards the graphics display effect after the offset adjustment.
In a possible implementation manner, determining that the stability of the rendering-and-synthesis durations of the plurality of historical frame images meets the preset condition and that the proportion of adjustable frame images is greater than the preset proportion includes: calculating the variance of the rendering-and-synthesis durations of the plurality of historical frame images from their application rendering durations and layer synthesis durations; calculating a variance threshold from the application rendering durations and layer synthesis durations of the plurality of historical frame images, the preset proportion, the vertical synchronization period, and the initial application rendering offset; and, if the variance is smaller than the variance threshold, determining that the stability of the rendering-and-synthesis durations meets the preset condition and that the proportion of adjustable frame images is greater than the preset proportion.
In this implementation, the offset adjustment condition is expressed as a variance threshold, and whether the condition is met is determined simply, quickly and accurately by comparing the variance of the rendering-and-synthesis durations of the plurality of historical frame images against that threshold.
In one possible implementation, the preset proportion is 0.9973.
In this implementation, setting the preset proportion to 0.9973 matches the 3σ criterion of the normal distribution, which makes the variance threshold convenient to calculate and improves the accuracy of the offset adjustment condition check.
In one possible implementation, calculating the variance threshold according to the application rendering durations and layer synthesis durations of the plurality of historical frame images, the preset proportion, the vertical synchronization period, and the initial application rendering offset includes calculating

threshold = ((T_Vsync − APPoffset1 − RCTAvg) / 3)²

where threshold denotes the variance threshold, T_Vsync denotes the vertical synchronization period, APPoffset1 denotes the initial application rendering offset, and RCTAvg denotes the average of the rendering-and-synthesis durations of the plurality of historical frame images; the rendering-and-synthesis duration is the sum of a historical frame image's application rendering duration and layer synthesis duration.
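Combining the adjustment condition with this threshold, the check might be sketched as follows (names are illustrative; the threshold form follows the 3σ reasoning above):

```java
import java.util.List;

// Offset-adjustment condition via the variance threshold. Under the 3-sigma
// rule (preset proportion 0.9973), at least ~99.7% of frames fit in one
// Vsync period if the standard deviation of the rendering-and-synthesis
// duration is at most (T_Vsync - APPoffset1 - RCTAvg) / 3.
public class AdjustmentCondition {
    /** renderComposeMs: per-frame rendering-and-synthesis durations in ms. */
    static boolean met(List<Double> renderComposeMs, double tVsyncMs, double appOffset1Ms) {
        double rctAvg = renderComposeMs.stream()
                .mapToDouble(Double::doubleValue).average().orElse(0);
        double variance = renderComposeMs.stream()
                .mapToDouble(d -> (d - rctAvg) * (d - rctAvg)).average().orElse(0);
        double margin = (tVsyncMs - appOffset1Ms - rctAvg) / 3.0;
        double threshold = margin * margin; // the variance threshold above
        return margin > 0 && variance < threshold;
    }
}
```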
In one possible implementation, the method is applied to an electronic device, and the electronic device comprises a vertical synchronization control module, a vertical synchronization algorithm module and a setting module;
the vertical synchronization control module acquires the application rendering durations and layer synthesis durations of a plurality of historical frame images;
the vertical synchronization control module determines whether the stability of the rendering-and-synthesis durations of the plurality of historical frame images meets a preset condition and whether the proportion of adjustable frame images among them is greater than a preset proportion;
if it is determined that the stability of the rendering-and-synthesis durations of the plurality of historical frame images meets the preset condition and that the proportion of adjustable frame images among them is greater than the preset proportion, the vertical synchronization control module sends first indication information, together with the application rendering durations and layer synthesis durations of the plurality of historical frame images, to the vertical synchronization algorithm module, where the first indication information instructs the vertical synchronization algorithm module to calculate a target application rendering offset and a target layer synthesis offset;
in response to receiving the first indication information, the vertical synchronization algorithm module calculates a target application rendering offset and a target layer synthesis offset based on application rendering durations and layer synthesis durations of the plurality of historical frame images;
the vertical synchronization algorithm module sends the target application drawing rendering offset and the target layer synthesis offset to the setting module;
the setting module sets the application drawing rendering offset in the setting module based on the target application drawing rendering offset, and sets the layer synthesis offset in the setting module based on the target layer synthesis offset;
if it is determined that the stability of the rendering-and-synthesis durations of the plurality of historical frame images does not meet the preset condition, or that the proportion of adjustable frame images among them is smaller than or equal to the preset proportion, the vertical synchronization control module sends second indication information, an initial application rendering offset and an initial layer synthesis offset to the setting module, where the second indication information instructs the setting module to set the application rendering offset in the setting module based on the initial application rendering offset and to set the layer synthesis offset in the setting module based on the initial layer synthesis offset;
and in response to receiving the second indication information, the setting module sets an application rendering offset in the setting module based on the initial application rendering offset, and sets a layer synthesis offset in the setting module based on the initial layer synthesis offset.
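This interaction reduces to a single branch in the control module; the sketch below reuses the earlier illustrative classes and assumes a simple setting-module interface:

```java
import java.util.List;

// Decision flow of the vertical synchronization control module described
// above, reusing the earlier sketches. Module and method names are
// illustrative assumptions, not the patent's actual interfaces.
public class VsyncControlModule {
    interface SettingModule { void setOffsets(double appOffsetMs, double sfOffsetMs); }

    void onHistoryWindow(List<Double> renderComposeMs, List<double[]> frames,
                         double tVsyncMs, double initAppOffsetMs, double initSfOffsetMs,
                         SettingModule settings) {
        if (AdjustmentCondition.met(renderComposeMs, tVsyncMs, initAppOffsetMs)) {
            // "First indication": have the algorithm module compute the
            // target offsets and apply them via the setting module.
            double[] target = TargetOffsets.compute(frames, tVsyncMs, initAppOffsetMs);
            settings.setOffsets(target[0], target[1]);
        } else {
            // "Second indication": fall back to the initial offsets.
            settings.setOffsets(initAppOffsetMs, initSfOffsetMs);
        }
    }
}
```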
In a possible implementation manner, the electronic device further includes an application rendering module, a layer synthesis module, and an offset vertical synchronization generation module;
the offset vertical synchronization generation module receives a vertical synchronization signal;
the offset vertical synchronization generation module acquires the set application rendering offset and the set layer synthesis offset from the setting module;
the offset vertical synchronization generation module generates a rendering synchronization signal based on the vertical synchronization signal and the set application rendering offset, and generates a synthesis synchronization signal based on the vertical synchronization signal and the set layer synthesis offset;
the offset vertical synchronization generation module sends the rendering synchronization signal to the application rendering module, so that the application rendering module triggers and executes the application rendering process based on the rendering synchronization signal;
and the offset vertical synchronization generation module sends the synthesis synchronization signal to the layer synthesis module, so that the layer synthesis module triggers and executes the layer synthesis process based on the synthesis synchronization signal.
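The signal generation itself is simple phase arithmetic: each synchronization signal is the hardware Vsync edge shifted by the corresponding offset. A minimal sketch under assumed names (this is not the actual Android DispSync/SurfaceFlinger API):

```java
// Phase arithmetic behind the offset vertical synchronization generation
// module: each synchronization signal is the hardware Vsync timestamp
// shifted by the corresponding offset. Names are illustrative assumptions.
public class OffsetVsyncGenerator {
    private final double appOffsetMs; // set application rendering offset
    private final double sfOffsetMs;  // set layer synthesis offset

    OffsetVsyncGenerator(double appOffsetMs, double sfOffsetMs) {
        this.appOffsetMs = appOffsetMs;
        this.sfOffsetMs = sfOffsetMs;
    }

    /** On each hardware Vsync edge, emit the two shifted synchronization signals. */
    void onVsync(double vsyncTimestampMs) {
        double renderSync = vsyncTimestampMs + appOffsetMs;   // triggers app rendering
        double synthesisSync = vsyncTimestampMs + sfOffsetMs; // triggers layer synthesis
        dispatchRenderSync(renderSync);       // to the application rendering module
        dispatchSynthesisSync(synthesisSync); // to the layer synthesis module
    }

    void dispatchRenderSync(double timestampMs) { /* hypothetical dispatch */ }
    void dispatchSynthesisSync(double timestampMs) { /* hypothetical dispatch */ }
}
```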
In a second aspect, an apparatus for adjusting display parameters is provided, including: an acquisition module, configured to acquire the application rendering durations and layer synthesis durations of a plurality of historical frame images; a determining module, configured to determine a target application rendering offset and a target layer synthesis offset according to the application rendering durations and layer synthesis durations of the plurality of historical frame images, where the target application rendering offset is the offset, relative to the vertical synchronization signal, of the rendering synchronization signal that triggers execution of the application rendering process on a frame image to be displayed; the target layer synthesis offset is the offset, relative to the vertical synchronization signal, of the synthesis synchronization signal that triggers execution of the layer synthesis process on a frame image to be displayed; the proportion of ideal frame images among the plurality of historical frame images is greater than a preset proportion; and an ideal frame image is a historical frame image that completes both the application rendering process and the layer synthesis process within one vertical synchronization period when the target application rendering offset and target layer synthesis offset are used; and an adjusting module, configured to adjust the current application rendering offset according to the target application rendering offset and to adjust the current layer synthesis offset according to the target layer synthesis offset.
In a third aspect, an electronic device is provided, where the electronic device includes a processor, and the processor is configured to couple with a memory, read instructions in the memory, and cause the electronic device to perform the method provided in the first aspect according to the instructions.
In a fourth aspect, a computer-readable storage medium is provided, which stores computer instructions that, when executed on an electronic device, cause the electronic device to perform the method provided in the first aspect.
In a fifth aspect, a chip is provided, the chip comprising a processor configured to couple with a memory and execute a computer program in the memory to perform the method provided in the first aspect.
In a sixth aspect, a computer program product is provided, comprising instructions that, when the computer program product runs on an electronic device, cause the electronic device to perform the method provided in the first aspect.
Drawings
Fig. 1 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure;
fig. 2 is a block diagram of a software structure of an electronic device according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of a display in which the Vsync signal triggers three display processes simultaneously;
fig. 4 is a schematic display diagram illustrating three display processes triggered after the Vsync signal is shifted;
fig. 5 is another display diagram illustrating the shift setting of the Vsync signal and the triggering of three display processes;
FIG. 6 is a diagram of a display system architecture to which the method of the present application is applied;
fig. 7 is a schematic process diagram of information interaction performed by each module of the display system according to the embodiment of the present application;
fig. 8 is a schematic flowchart of a method for adjusting display parameters according to an embodiment of the present disclosure;
FIG. 9 is a flow chart of a graphical display incorporating a frame loss condition according to an embodiment of the present application;
FIG. 10 is a flow chart of another graphical display incorporating a frame loss condition provided by an embodiment of the present application;
FIG. 11 is a schematic view of an offset adjusted display according to an embodiment of the present disclosure;
FIG. 12 is a schematic view of another display flow after the offset adjustment according to the embodiment of the present application;
fig. 13 is a schematic structural diagram of a display parameter adjusting apparatus according to an embodiment of the present disclosure.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application. In the description of the embodiments herein, "/" means "or" unless otherwise specified, for example, a/B may mean a or B; "and/or" herein is merely an association describing an associated object, and means that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, in the description of the embodiments of the present application, "a plurality" means two or more than two.
In the following, the terms "first", "second" and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, features defined as "first", "second", "third" may explicitly or implicitly include one or more of the features.
The method for adjusting display parameters provided in the embodiments of the present application may be applied to an electronic device that includes a built-in display system and supports Vsync offset amount (offset) setting. Optionally, the electronic device may be a desktop computer, a notebook computer, a tablet computer, a palm computer, a vehicle-mounted terminal, a sales terminal, a wearable device, a mobile phone, or the like.
Fig. 1 shows a schematic structural diagram of an electronic device 100. The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a Universal Serial Bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a key 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a Subscriber Identification Module (SIM) card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It is to be understood that the illustrated structure of the embodiment of the present application does not specifically limit the electronic device 100. In other embodiments of the present application, electronic device 100 may include more or fewer components than shown, or some components may be combined, some components may be split, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 110 may include one or more processing units, such as: the processor 110 may include an Application Processor (AP), a modem processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a memory, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a neural-Network Processing Unit (NPU), etc. Wherein, the different processing units may be independent devices or may be integrated in one or more processors.
The controller may be, among other things, a neural center and a command center of the electronic device 100. The controller can generate an operation control signal according to the instruction operation code and the timing signal to complete the control of instruction fetching and instruction execution.
A memory may also be provided in processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that have just been used or recycled by the processor 110. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Avoiding repeated accesses reduces the latency of the processor 110, thereby increasing the efficiency of the system.
In some embodiments, processor 110 may include one or more interfaces. The interface may include an integrated circuit (I2C) interface, an integrated circuit built-in audio (I2S) interface, a Pulse Code Modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a Mobile Industry Processor Interface (MIPI), a general-purpose input/output (GPIO) interface, a Subscriber Identity Module (SIM) interface, and/or a Universal Serial Bus (USB) interface, etc.
The I2C interface is a bi-directional synchronous serial bus that includes a serial data line (SDA) and a Serial Clock Line (SCL). In some embodiments, processor 110 may include multiple sets of I2C buses. The processor 110 may be coupled to the touch sensor 180K, the charger, the flash, the camera 193, etc. through different I2C bus interfaces, respectively. For example: the processor 110 may be coupled to the touch sensor 180K via an I2C interface, such that the processor 110 and the touch sensor 180K communicate via an I2C bus interface to implement the touch functionality of the electronic device 100.
The I2S interface may be used for audio communication. In some embodiments, processor 110 may include multiple sets of I2S buses. The processor 110 may be coupled to the audio module 170 via an I2S bus to enable communication between the processor 110 and the audio module 170. In some embodiments, the audio module 170 can transmit audio signals to the wireless communication module 160 through the I2S interface, so as to receive phone calls through the bluetooth headset.
The PCM interface may also be used for audio communication, sampling, quantizing and encoding analog signals. In some embodiments, audio module 170 and wireless communication module 160 may be coupled by a PCM bus interface. In some embodiments, the audio module 170 may also transmit audio signals to the wireless communication module 160 through the PCM interface, so as to implement a function of answering a call through a bluetooth headset. Both the I2S interface and the PCM interface may be used for audio communication.
The UART interface is a universal serial data bus used for asynchronous communications. The bus may be a bidirectional communication bus. It converts the data to be transmitted between serial communication and parallel communication. In some embodiments, a UART interface is generally used to connect the processor 110 with the wireless communication module 160. For example: the processor 110 communicates with a bluetooth module in the wireless communication module 160 through a UART interface to implement a bluetooth function. In some embodiments, the audio module 170 may transmit the audio signal to the wireless communication module 160 through a UART interface, so as to implement the function of playing music through a bluetooth headset.
MIPI interfaces may be used to connect processor 110 with peripheral devices such as display screen 194, camera 193, and the like. The MIPI interface includes a Camera Serial Interface (CSI), a Display Serial Interface (DSI), and the like. In some embodiments, processor 110 and camera 193 communicate through a CSI interface to implement the capture functionality of electronic device 100. The processor 110 and the display screen 194 communicate through the DSI interface to implement the display function of the electronic device 100.
The GPIO interface may be configured by software. The GPIO interface may be configured as a control signal and may also be configured as a data signal. In some embodiments, a GPIO interface may be used to connect the processor 110 with the camera 193, the display 194, the wireless communication module 160, the audio module 170, the sensor module 180, and the like. The GPIO interface may also be configured as an I2C interface, an I2S interface, a UART interface, a MIPI interface, and the like.
The USB interface 130 is an interface conforming to the USB standard specification, and may be a Mini USB interface, a Micro USB interface, a USB Type C interface, or the like. The USB interface 130 may be used to connect a charger to charge the electronic device 100, and may also be used to transmit data between the electronic device 100 and a peripheral device. And the method can also be used for connecting a headset and playing audio through the headset. The interface may also be used to connect other electronic devices, such as AR devices and the like.
It should be understood that the interface connection relationship between the modules illustrated in the embodiments of the present application is only an illustration, and does not limit the structure of the electronic device 100. In other embodiments of the present application, the electronic device 100 may also adopt different interface connection manners or a combination of multiple interface connection manners in the above embodiments.
The charging management module 140 is configured to receive charging input from a charger. The charger may be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 140 may receive charging input from a wired charger via the USB interface 130. In some wireless charging embodiments, the charging management module 140 may receive a wireless charging input through a wireless charging coil of the electronic device 100. The charging management module 140 may also supply power to the electronic device 100 through the power management module 141 while charging the battery 142.
The power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140 and provides power to the processor 110, the internal memory 121, the external memory, the display 194, the camera 193, the wireless communication module 160, and the like. The power management module 141 may also be used to monitor parameters such as battery capacity, battery cycle count, battery state of health (leakage, impedance), etc. In some other embodiments, the power management module 141 may also be disposed in the processor 110. In other embodiments, the power management module 141 and the charging management module 140 may be disposed in the same device.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 100 may be used to cover a single or multiple communication bands. Different antennas can also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide a solution including wireless communication of 2G/3G/4G/5G, etc. applied to the electronic device 100. The mobile communication module 150 may include at least one filter, a switch, a power amplifier, a Low Noise Amplifier (LNA), and the like. The mobile communication module 150 may receive the electromagnetic wave from the antenna 1, filter, amplify, etc. the received electromagnetic wave, and transmit the electromagnetic wave to the modem processor for demodulation. The mobile communication module 150 may also amplify the signal modulated by the modem processor, and convert the signal into electromagnetic wave through the antenna 1 to radiate the electromagnetic wave. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the same device as at least some of the modules of the processor 110.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating a low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then passes the demodulated low frequency baseband signal to a baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs a sound signal through an audio device (not limited to the speaker 170A, the receiver 170B, etc.) or displays an image or video through the display screen 194. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 150 or other functional modules, independent of the processor 110.
The wireless communication module 160 may provide a solution for wireless communication applied to the electronic device 100, including Wireless Local Area Networks (WLANs) (e.g., wireless fidelity (Wi-Fi) networks), bluetooth (bluetooth, BT), Global Navigation Satellite System (GNSS), Frequency Modulation (FM), Near Field Communication (NFC), Infrared (IR), and the like. The wireless communication module 160 may be one or more devices integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering processing on electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, perform frequency modulation and amplification on the signal, and convert the signal into electromagnetic waves through the antenna 2 to radiate the electromagnetic waves.
In some embodiments, antenna 1 of the electronic device 100 is coupled to the mobile communication module 150 and antenna 2 is coupled to the wireless communication module 160, so that the electronic device 100 can communicate with networks and other devices through wireless communication technologies. The wireless communication technologies may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), fifth-generation (5G) wireless communication systems, BT, GNSS, WLAN, NFC, FM, and/or IR technology, etc. The GNSS may include a global positioning system (GPS), a global navigation satellite system (GLONASS), a BeiDou navigation satellite system (BDS), a quasi-zenith satellite system (QZSS), and/or a satellite based augmentation system (SBAS).
The electronic device 100 implements display functions via the GPU, the display screen 194, and the application processor. The GPU is a microprocessor for image processing, and is connected to the display screen 194 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
The display screen 194 is used to display images, video, and the like. The display screen 194 includes a display panel. The display panel may adopt a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the electronic device 100 may include 1 or N display screens 194, where N is a positive integer greater than 1.
The electronic device 100 may implement a shooting function through the ISP, the camera 193, the video codec, the GPU, the display 194, the application processor, and the like.
The ISP is used to process the data fed back by the camera 193. For example, when a photo is taken, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP for processing and converting into an image visible to naked eyes. The ISP can also carry out algorithm optimization on the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in camera 193.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image to the photosensitive element. The photosensitive element may be a Charge Coupled Device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The light sensing element converts the optical signal into an electrical signal, which is then passed to the ISP where it is converted into a digital image signal. And the ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into image signal in standard RGB, YUV and other formats. In some embodiments, the electronic device 100 may include 1 or N cameras 193, N being a positive integer greater than 1.
The digital signal processor is used for processing digital signals, and can process digital image signals and other digital signals. For example, when the electronic device 100 selects a frequency bin, the digital signal processor is used to perform fourier transform or the like on the frequency bin energy.
Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs. In this way, the electronic device 100 may play or record video in a variety of encoding formats, such as: moving Picture Experts Group (MPEG) 1, MPEG2, MPEG3, MPEG4, and the like.
The NPU is a neural-network (NN) computing processor that processes input information quickly by drawing on the structure of biological neural networks, for example the transfer mode between neurons of the human brain, and can also continuously learn by itself. Applications such as intelligent recognition of the electronic device 100 can be realized through the NPU, for example: image recognition, face recognition, speech recognition, text understanding, and the like.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to extend the memory capability of the electronic device 100. The external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function. For example, files such as music, video, etc. are saved in an external memory card.
The internal memory 121 may be used to store computer-executable program code, which includes instructions. The processor 110 executes various functional applications of the electronic device 100 and data processing by executing instructions stored in the internal memory 121. The internal memory 121 may include a program storage area and a data storage area. The storage program area may store an operating system, an application program (such as a sound playing function, an image playing function, and the like) required by at least one function, and the like. The storage data area may store data (such as audio data, phone book, etc.) created during use of the electronic device 100, and the like. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (UFS), and the like.
The electronic device 100 may implement audio functions via the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headphone interface 170D, and the application processor. Such as music playing, recording, etc.
The audio module 170 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or some functional modules of the audio module 170 may be disposed in the processor 110.
The speaker 170A, also called a "horn", is used to convert the audio electrical signal into an acoustic signal. The electronic apparatus 100 can listen to music through the speaker 170A or listen to a handsfree call.
The receiver 170B, also called "earpiece", is used to convert the electrical audio signal into a sound signal. When the electronic apparatus 100 receives a call or voice information, it can receive voice by placing the receiver 170B close to the ear of the person.
The microphone 170C, also referred to as a "mic", is used to convert sound signals into electrical signals. When making a call or sending voice information, the user can input a voice signal into the microphone 170C by speaking with the mouth close to it. The electronic device 100 may be provided with at least one microphone 170C. In other embodiments, the electronic device 100 may be provided with two microphones 170C, which can achieve a noise reduction function in addition to collecting sound signals. In other embodiments, the electronic device 100 may further include three, four or more microphones 170C to collect sound signals, reduce noise, identify sound sources, perform directional recording, and so on.
The headphone interface 170D is used to connect a wired headphone. The headset interface 170D may be the USB interface 130, or may be a 3.5mm open mobile electronic device platform (OMTP) standard interface, a cellular telecommunications industry association (cellular telecommunications industry association of the USA, CTIA) standard interface.
The pressure sensor 180A is used for sensing a pressure signal and converting the pressure signal into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the display screen 194. There are many kinds of pressure sensors 180A, such as resistive pressure sensors, inductive pressure sensors, and capacitive pressure sensors. A capacitive pressure sensor may comprise at least two parallel plates of electrically conductive material. When a force acts on the pressure sensor 180A, the capacitance between the electrodes changes, and the electronic device 100 determines the strength of the pressure from the change in capacitance. When a touch operation is applied to the display screen 194, the electronic device 100 detects the intensity of the touch operation through the pressure sensor 180A. The electronic device 100 may also calculate the touched position from the detection signal of the pressure sensor 180A. In some embodiments, touch operations that act on the same touch position but with different intensities may correspond to different operation instructions. For example, when a touch operation whose intensity is smaller than a first pressure threshold acts on the short message application icon, an instruction to view the short message is executed; when a touch operation whose intensity is greater than or equal to the first pressure threshold acts on the short message application icon, an instruction to create a new short message is executed.
The gyro sensor 180B may be used to determine the motion attitude of the electronic device 100. In some embodiments, the angular velocity of electronic device 100 about three axes (i.e., the x, y, and z axes) may be determined by gyroscope sensor 180B. The gyro sensor 180B may be used for photographing anti-shake. For example, when the shutter is pressed, the gyro sensor 180B detects a shake angle of the electronic device 100, calculates a distance to be compensated for by the lens module according to the shake angle, and allows the lens to counteract the shake of the electronic device 100 through a reverse movement, thereby achieving anti-shake. The gyroscope sensor 180B may also be used for navigation, somatosensory gaming scenes.
The air pressure sensor 180C is used to measure air pressure. In some embodiments, electronic device 100 calculates altitude from barometric pressure values measured by barometric pressure sensor 180C to assist in positioning and navigation.
The magnetic sensor 180D includes a Hall sensor. The electronic device 100 may detect the opening and closing of a flip holster using the magnetic sensor 180D. In some embodiments, when the electronic device 100 is a flip phone, the electronic device 100 may detect the opening and closing of the flip cover through the magnetic sensor 180D. Features such as automatic unlocking upon opening the flip cover can then be set according to the detected open or closed state of the holster or flip cover.
The acceleration sensor 180E may detect the magnitude of acceleration of the electronic device 100 in various directions (typically three axes). The magnitude and direction of gravity can be detected when the electronic device 100 is stationary. The method can also be used for recognizing the posture of the electronic equipment, and is applied to horizontal and vertical screen switching, pedometers and other applications.
A distance sensor 180F for measuring a distance. The electronic device 100 may measure the distance by infrared or laser. In some embodiments, taking a picture of a scene, electronic device 100 may utilize range sensor 180F to range for fast focus.
The proximity light sensor 180G may include, for example, a light emitting diode (LED) and a light detector such as a photodiode. The light emitting diode may be an infrared light emitting diode. The electronic device 100 emits infrared light outward through the light emitting diode and uses the photodiode to detect infrared light reflected from nearby objects. When sufficient reflected light is detected, it can be determined that there is an object near the electronic device 100; when insufficient reflected light is detected, the electronic device 100 may determine that there is no object nearby. The electronic device 100 can use the proximity light sensor 180G to detect that the user is holding the electronic device 100 close to the ear during a call, so as to automatically turn off the screen and save power. The proximity light sensor 180G may also be used in holster mode and pocket mode to automatically unlock and lock the screen.
The ambient light sensor 180L is used to sense the ambient light level. Electronic device 100 may adaptively adjust the brightness of display screen 194 based on the perceived ambient light level. The ambient light sensor 180L may also be used to automatically adjust the white balance when taking a picture. The ambient light sensor 180L may also cooperate with the proximity light sensor 180G to detect whether the electronic device 100 is in a pocket to prevent accidental touches.
The fingerprint sensor 180H is used to collect a fingerprint. The electronic device 100 can utilize the collected fingerprint characteristics to unlock the fingerprint, access the application lock, photograph the fingerprint, answer an incoming call with the fingerprint, and so on.
The temperature sensor 180J is used to detect temperature. In some embodiments, the electronic device 100 implements a temperature processing strategy using the temperature detected by the temperature sensor 180J. For example, when the temperature reported by the temperature sensor 180J exceeds a threshold, the electronic device 100 reduces the performance of a processor located near the temperature sensor 180J, so as to reduce power consumption and implement thermal protection. In other embodiments, when the temperature is below another threshold, the electronic device 100 heats the battery 142 to avoid an abnormal shutdown caused by low temperature. In other embodiments, when the temperature is lower than a further threshold, the electronic device 100 boosts the output voltage of the battery 142 to avoid an abnormal shutdown caused by low temperature.
The touch sensor 180K is also referred to as a "touch panel". The touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touch screen, also called a "touchscreen". The touch sensor 180K is used to detect a touch operation applied to it or near it. The touch sensor can pass the detected touch operation to the application processor to determine the touch event type. Visual output associated with the touch operation may be provided through the display screen 194. In other embodiments, the touch sensor 180K may be disposed on a surface of the electronic device 100 at a position different from that of the display screen 194.
The bone conduction sensor 180M may acquire vibration signals. In some embodiments, the bone conduction sensor 180M may acquire the vibration signal of the vibrating bone mass of the human vocal part. The bone conduction sensor 180M may also contact the human pulse to receive a blood-pressure pulsation signal. In some embodiments, the bone conduction sensor 180M may also be disposed in a headset to form a bone conduction headset. The audio module 170 may parse out a voice signal based on the vibration signal of the vibrating bone mass of the vocal part acquired by the bone conduction sensor 180M, so as to implement a voice function. The application processor may parse out heart rate information based on the blood-pressure pulsation signal acquired by the bone conduction sensor 180M, so as to implement a heart rate detection function.
The keys 190 include a power key, a volume key, and the like. The keys 190 may be mechanical keys or touch keys. The electronic device 100 may receive key inputs and generate key signal inputs related to user settings and function control of the electronic device 100.
The motor 191 may generate a vibration cue. The motor 191 may be used for incoming-call vibration cues as well as touch vibration feedback. For example, touch operations applied to different applications (e.g., photographing, audio playing, etc.) may correspond to different vibration feedback effects. Touch operations applied to different areas of the display screen 194 may also correspond to different vibration feedback effects of the motor 191. Different application scenarios (such as time reminders, receiving messages, alarm clocks, games, etc.) may also correspond to different vibration feedback effects. The touch vibration feedback effect may also support customization.
Indicator 192 may be an indicator light that may be used to indicate a state of charge, a change in charge, or a message, missed call, notification, etc.
The SIM card interface 195 is used to connect a SIM card. The SIM card can be brought into and out of contact with the electronic device 100 by being inserted into or pulled out of the SIM card interface 195. The electronic device 100 may support 1 or N SIM card interfaces, N being a positive integer greater than 1. The SIM card interface 195 may support a Nano SIM card, a Micro SIM card, a SIM card, and the like. Multiple cards can be inserted into the same SIM card interface 195 at the same time. The types of the multiple cards may be the same or different. The SIM card interface 195 may also be compatible with different types of SIM cards. The SIM card interface 195 may also be compatible with external memory cards. The electronic device 100 interacts with the network through the SIM card to implement functions such as calling and data communication. In some embodiments, the electronic device 100 employs an eSIM, i.e., an embedded SIM card. The eSIM card can be embedded in the electronic device 100 and cannot be separated from the electronic device 100.
It should be noted that any of the electronic devices mentioned in the embodiments of the present application may include more or fewer modules than the electronic device 100.
The software system of the electronic device 100 may employ a layered architecture, an event-driven architecture, a micro-core architecture, a micro-service architecture, or a cloud architecture. The embodiment of the present application takes an Android system with a hierarchical architecture as an example, and exemplarily illustrates a software structure of the electronic device 100.
Fig. 2 is a block diagram of a software structure of the electronic device 100 according to the embodiment of the present application.
The layered architecture of the electronic device 100 divides the software into several layers, each layer having a clear role and division of labor. The layers communicate with each other through a software interface. In some embodiments, the Android system is divided into four layers, an application layer, an application framework layer, an Android runtime (Android runtime) and system library, and a kernel layer from top to bottom.
The application layer may include a series of application packages.
As shown in fig. 2, the application package may include applications such as camera, gallery, calendar, phone call, map, navigation, WLAN, bluetooth, music, video, short message, etc.
The application framework layer provides an Application Programming Interface (API) and a programming framework for the application program of the application layer. The application framework layer includes a number of predefined functions.
As shown in FIG. 2, the application framework layers may include a window manager, content provider, view system, phone manager, resource manager, notification manager, and the like.
The window manager is used for managing window programs. The window manager can obtain the size of the display screen, judge whether there is a status bar, lock the screen, take screenshots, and the like.
The content provider is used to store and retrieve data and make it accessible to applications. The data may include video, images, audio, calls made and answered, browsing history and bookmarks, phone books, etc.
The view system includes visual controls such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, the display interface including the short message notification icon may include a view for displaying text and a view for displaying pictures.
The phone manager is used to provide communication functions of the electronic device 100. Such as management of call status (including connection, hangup, etc.).
The resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, and the like.
The notification manager enables the application to display notification information in the status bar, and can be used to convey notification-type messages that disappear automatically after a brief dwell without requiring user interaction. For example, the notification manager is used to notify of download completion, message alerts, and the like. The notification manager may also present notifications in the form of a chart or scroll-bar text in the status bar at the top of the system, such as notifications of applications running in the background, or notifications that appear on the screen in the form of a dialog window. Examples include prompting text information in the status bar, sounding a prompt tone, vibrating the electronic device, and flashing an indicator light.
The Android Runtime comprises a core library and a virtual machine. The Android runtime is responsible for scheduling and managing the Android system.
The core library comprises two parts: one part is the functions that the java language needs to call, and the other part is the core library of Android.
The application layer and the application framework layer run in the virtual machine. The virtual machine executes the java files of the application layer and the application framework layer as binary files. The virtual machine is used to perform functions such as object life cycle management, stack management, thread management, security and exception management, and garbage collection.
The system library may include a plurality of functional modules. For example: surface managers (surface managers), Media Libraries (Media Libraries), three-dimensional graphics processing Libraries (e.g., OpenGL ES), 2D graphics engines (e.g., SGL), and the like.
The surface manager is used to manage the display subsystem and provide fusion of 2D and 3D layers for multiple applications.
The media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files and the like. The media library may support a variety of audio and video encoding formats, such as MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, and the like.
The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is a layer between hardware and software. The kernel layer at least comprises a display driver, a camera driver, an audio driver, a sensor driver, a Wi-Fi driver and the like.
The display system of the electronic device 100 and the display method in the related art will be described below.
The display system is used for responding to a view update request sent by a user through an application, obtaining the relevant data of the frame image to be displayed according to the view update request, processing the relevant data of the frame image to be displayed according to a display pipeline, and presenting a display interface to the user. The display pipeline mainly comprises a drawing and rendering process, a layer synthesis process, and a display sending process. That is, for a certain frame image, the display system needs to sequentially perform the drawing and rendering process, the layer synthesis process, and the display sending process before the frame image is displayed on the display screen. The drawing and rendering process can be implemented by an application, and is therefore also referred to as the application rendering process. The layer synthesis process may be implemented by a display composition process, for example by SurfaceFlinger, in which case it is also referred to as the SurfaceFlinger layer synthesis process. The display sending process is implemented by hardware, and is therefore also referred to as the hardware display sending process. The following embodiments take layer synthesis implemented by SurfaceFlinger as an example. For convenience of explanation, in the following description, SurfaceFlinger layer synthesis is simply referred to as SurfaceFlinger synthesis, and the SurfaceFlinger layer synthesis flow is simply referred to as the SurfaceFlinger synthesis flow. Meanwhile, it should be noted that, in the display system, SurfaceFlinger is also used for receiving view update requests sent by users through applications and for implementing other functions.
Specifically, the application rendering process means obtaining the data of the multiple layers of the image to be displayed, and drawing and rendering the data of those layers to generate multiple layers. The SurfaceFlinger synthesis process is to synthesize the multiple layers generated by the application rendering process, perform hardware composition (HWC), and generate the frame image to be displayed. The hardware display sending process is to perform hardware display processing on the frame image to be displayed generated by the SurfaceFlinger synthesis process, and push the frame image to be displayed to the display screen.
In order to ensure display smoothness and avoid stuttering and the like, display systems generally perform display based on a Vsync signal to synchronize the three processes in the display pipeline. It is understood that the Vsync signal is a periodic signal, and the Vsync period may be set according to the refresh rate of the display screen. For example, when the refresh rate of the display screen is 60Hz, the Vsync period may be 16.6ms, i.e., the display system generates a control signal every 16.6ms to trigger a new Vsync period.
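As a minimal illustration (not part of the patent; the helper name is hypothetical), the relationship between the refresh rate and the Vsync period can be sketched as follows:

```cpp
#include <cstdio>

// Hypothetical helper: derive the Vsync period in milliseconds from the
// panel refresh rate in Hz (period = 1000 / rate).
static double vsyncPeriodMs(double refreshRateHz) {
    return 1000.0 / refreshRateHz;
}

int main() {
    // At 60 Hz the period is ~16.6 ms, matching the example above.
    const double rates[] = {60.0, 90.0, 120.0};
    for (double hz : rates) {
        std::printf("%.0f Hz -> %.1f ms per Vsync period\n", hz, vsyncPeriodMs(hz));
    }
    return 0;
}
```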
In the related art, one implementation manner of the display system performing display based on the Vsync signal is that the Vsync signal simultaneously triggers the application rendering process, the SurfaceFlinger synthesis process, and the hardware display sending process. For example, fig. 3 is a schematic diagram of a display in which the Vsync signal triggers the three display processes simultaneously. As shown in fig. 3, suppose the user requests to display the 4th frame image through a view update request sent by the application to SurfaceFlinger, and SurfaceFlinger receives the view update request after the start time of the Nth Vsync period and at or before the start time of the (N+1)th Vsync period, that is, at any time in period A (excluding the endpoints) or at time a in fig. 3. Then, at the start time of the (N+1)th Vsync period, the application starts the application rendering process and begins application rendering of the 4th frame image. Meanwhile, SurfaceFlinger starts the SurfaceFlinger synthesis process and begins SurfaceFlinger synthesis of the 3rd frame image, and the hardware starts the hardware display sending process and begins hardware display sending of the 2nd frame image. After the hardware display sending process is completed, the display screen displays the 2nd frame image.
Similarly, if SurfaceFlinger receives the view update request for the 5th frame image after the start time of the (N+1)th Vsync period and at or before the start time of the (N+2)th Vsync period, that is, at any time in period B (excluding the endpoints) or at time b in fig. 3, then at the start time of the (N+2)th Vsync period, the application starts the application rendering process and begins application rendering of the 5th frame image; meanwhile, SurfaceFlinger starts the SurfaceFlinger synthesis process and begins SurfaceFlinger synthesis of the 4th frame image, and the hardware starts the hardware display sending process and begins hardware display sending of the 3rd frame image. After the hardware display sending process is completed, the display screen displays the 3rd frame image. And so on.
As can be seen from fig. 3, by performing display based on the Vsync signal, the display system can keep the delay of each process in the display pipeline consistent, reducing errors in the application rendering process and the SurfaceFlinger synthesis process and reducing stuttering and other display problems. However, as fig. 3 also shows, each frame image requires at least 2 Vsync periods from the start of application rendering to hardware display sending. For example, the 4th frame image starts application rendering in the (N+1)th Vsync period but can be displayed only in the (N+3)th Vsync period at the earliest, so the graphic display efficiency needs to be improved.
The analysis below further takes the view update request time and the hardware display sending duration into account:
the hardware display sending duration is short, generally about 2ms. If the moment at which SurfaceFlinger receives the view update request for the 4th frame image is exactly the start time of the (N+1)th Vsync period, i.e., time a in fig. 3, then the time from SurfaceFlinger receiving the view update request to hardware display sending is the same as the time from the start of application rendering to hardware display sending, requiring 2 Vsync periods, as in period C in fig. 3. Further, from SurfaceFlinger receiving the view update request to the 4th frame image being displayed on the display screen takes 2 Vsync periods + 2ms, as in period D in fig. 3. It can thus be seen that the total delay time of the display system for the 4th frame image (i.e., the time from receiving the view update request to displaying the frame image on the display screen) is greater than 2 Vsync periods.
If the moment at which SurfaceFlinger receives the view update request for the 4th frame image is before the start time of the (N+1)th Vsync period and after the start time of the Nth Vsync period, i.e., at any time in period A (excluding the endpoints) in fig. 3, the application rendering process of the 4th frame image has to wait until the start time of the (N+1)th Vsync period. Taking the view update request time of the 4th frame image as time c in fig. 3 as an example, the time from SurfaceFlinger receiving the view update request to hardware display sending is longer than the time from the start of application rendering to hardware display sending: it requires 2 Vsync periods plus the difference between the start time a of the (N+1)th Vsync period and the view update request time c, as in period E in fig. 3. Further, from SurfaceFlinger receiving the view update request to the 4th frame image being displayed on the display screen requires 2 Vsync periods + 2ms plus the difference between the start time a of the (N+1)th Vsync period and the view update request time c, as in period F in fig. 3. It can thus be seen that the display delay of the display system for the 4th frame image is greater than 2 Vsync periods.
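The two cases above can be condensed into a single expression (the notation is introduced here for illustration and is not from the patent). Writing $T_{vsync}$ for the Vsync period, $t_{hw} \approx 2\text{ ms}$ for the hardware display sending duration, $t_{req}$ for the view update request time, and $t_{start} \ge t_{req}$ for the start time of the Vsync period in which application rendering begins, the total delay of the fig. 3 scheme is

$$T_{total} = (t_{start} - t_{req}) + 2\,T_{vsync} + t_{hw} > 2\,T_{vsync},$$

with $t_{start} = t_{req}$ in the first case (request exactly at time a) and $t_{start} > t_{req}$ in the second (request at time c).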
In summary, in the related art, the display system triggers the three processes in the display pipeline simultaneously through the Vsync signal; each frame image requires at least 2 Vsync periods from SurfaceFlinger receiving the view update request to hardware display sending, and the total delay time of each frame image is longer than 2 Vsync periods, so the graphic display efficiency is poor and the touch response is slow.
In the related art, in order to reduce the delay of frame image display, the display system offsets the Vsync signal through SurfaceFlinger to generate a rendering synchronization signal (hereinafter abbreviated as the APP_Vsync signal) and a synthesis synchronization signal (hereinafter abbreviated as the SF_Vsync signal), and triggers the application rendering process, the SurfaceFlinger synthesis process, and the hardware display sending process through the APP_Vsync signal, the SF_Vsync signal, and the Vsync signal, respectively.
For example, fig. 4 is a schematic display diagram of the three display processes triggered after the Vsync signal is offset. As shown in fig. 4, SurfaceFlinger generates the APP_Vsync signal by shifting the Vsync signal by the application rendering offset APPoffset. Meanwhile, SurfaceFlinger generates the SF_Vsync signal by shifting the Vsync signal by the SurfaceFlinger synthesis offset SFoffset. In the related art, the values of APPoffset and SFoffset are not limited, and the APPoffset duration and the SFoffset duration may be the same or different. However, in the related-art display system, the values of APPoffset and SFoffset are fixed during the display process. It is understood that the APP_Vsync signal resulting from the offset setting is also a periodic signal; the APP_Vsync period is the same as the Vsync period, but the start time of each APP_Vsync period is delayed by the APPoffset duration relative to the start time of the corresponding Vsync period. Similarly, the SF_Vsync signal obtained by the offset setting is also a periodic signal; the SF_Vsync period is the same as the Vsync period, but the start time of each SF_Vsync period is delayed by the SFoffset duration relative to the start time of the corresponding Vsync period.
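As a sketch of this offset scheme (type names and offset values are assumptions, not taken from the patent), the APP_Vsync and SF_Vsync timestamps can be derived from each hardware Vsync timestamp and two fixed offsets:

```cpp
#include <cstdint>
#include <cstdio>

// Hypothetical sketch: fixed phase offsets applied to each hardware Vsync
// timestamp, as in the related-art scheme of fig. 4.
struct OffsetVsyncConfig {
    int64_t vsyncPeriodNs;  // e.g. 16,600,000 ns at 60 Hz
    int64_t appOffsetNs;    // APPoffset: delay of APP_Vsync vs. Vsync
    int64_t sfOffsetNs;     // SFoffset: delay of SF_Vsync vs. Vsync
};

int main() {
    // Assumed values for illustration only (1 ms / 3 ms offsets).
    OffsetVsyncConfig cfg{16600000, 1000000, 3000000};
    int64_t hwVsyncNs = 0;
    for (int n = 0; n < 3; ++n) {
        // APP_Vsync and SF_Vsync keep the Vsync period but start later by
        // their respective offsets within each period.
        std::printf("period %d: Vsync=%lld APP_Vsync=%lld SF_Vsync=%lld\n",
                    n, (long long)hwVsyncNs,
                    (long long)(hwVsyncNs + cfg.appOffsetNs),
                    (long long)(hwVsyncNs + cfg.sfOffsetNs));
        hwVsyncNs += cfg.vsyncPeriodNs;
    }
    return 0;
}
```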
As shown in fig. 4, SurfaceFlinger receives the view update request for the 4th frame image after the start time of the Nth APP_Vsync period and at or before the start time of the (N+1)th APP_Vsync period, that is, at any time in period G (excluding the endpoints) or at time d in fig. 4. Then, the hardware starts the hardware display sending process at the start time of the (N+1)th Vsync period and begins hardware display sending of the 2nd frame image; after the hardware display sending process is completed, the display screen displays the 2nd frame image. At the time delayed by the APPoffset duration after the start time of the (N+1)th Vsync period, that is, at the start time of the (N+1)th APP_Vsync period (time d in fig. 4), the application starts the application rendering process and begins application rendering of the 4th frame image. At the time delayed by the SFoffset duration after the start time of the (N+1)th Vsync period, that is, at the start time of the (N+1)th SF_Vsync period (time e in fig. 4), SurfaceFlinger starts the SurfaceFlinger synthesis process and begins SurfaceFlinger synthesis of the 3rd frame image.
Similarly, SurfaceFlinger receives the view update request for the 5th frame image after the start time of the (N+1)th APP_Vsync period and at or before the start time of the (N+2)th APP_Vsync period, that is, at any time in period H (excluding the endpoints) or at time f in fig. 4. The hardware starts the hardware display sending process at the start time of the (N+2)th Vsync period and begins hardware display sending of the 3rd frame image; the display screen displays the 3rd frame image after the hardware display sending process is finished. At the time delayed by the APPoffset duration after the start time of the (N+2)th Vsync period, that is, at the start time of the (N+2)th APP_Vsync period (time f in fig. 4), the application starts the application rendering process and begins application rendering of the 5th frame image. At the time delayed by the SFoffset duration after the start time of the (N+2)th Vsync period, that is, at the start time of the (N+2)th SF_Vsync period (time g in fig. 4), SurfaceFlinger starts the SurfaceFlinger synthesis process and begins SurfaceFlinger synthesis of the 4th frame image. And so on.
As can be seen from fig. 4, when SurfaceFlinger receives the view update request for the 4th frame image at the start time of the (N+1)th APP_Vsync period or at any time before it (but after the start time of the Nth APP_Vsync period), that is, at any time in period G or at time d in fig. 4, the request misses the start time h of the Nth APP_Vsync period, so the application cannot execute the application rendering process within the Nth APP_Vsync period and must wait until the start time d of the (N+1)th APP_Vsync period to start the application rendering process for the 4th frame image; the display delay is then consistent with that of the scheme in fig. 3. Taking the case where SurfaceFlinger receives the view update request at time o in fig. 4 as an example, the time from receiving the view update request to hardware display sending still requires 2 APP_Vsync periods plus the difference between the start time d of the (N+1)th APP_Vsync period and the view update request time o, as in period I in fig. 4. Further, from receiving the view update request to the 4th frame image being displayed on the display screen requires 2 APP_Vsync periods + 2ms plus the difference between the start time d of the (N+1)th APP_Vsync period and the view update request time o, as in period J in fig. 4. It can thus be seen that the total delay time of the display system in fig. 4 for the 4th frame image is still greater than 2 Vsync periods.
For example, fig. 5 is another display diagram in which the three display processes are triggered after the Vsync signal is offset. As shown in fig. 5, when SurfaceFlinger receives the view update request for the 4th frame image after the start time of the Nth Vsync period and at or before the start time of the Nth APP_Vsync period, i.e., at any time in period K (excluding the endpoints) or at time j in fig. 5, the 4th frame image can start the application rendering process at the start time h of the Nth APP_Vsync period, without waiting until the start time d of the (N+1)th APP_Vsync period. Taking the case where SurfaceFlinger receives the view update request for the 4th frame image at time i in fig. 5 as an example, from receiving the view update request to hardware display sending requires 2 APP_Vsync periods plus the difference between the start time h of the Nth APP_Vsync period and the view update request time i, as in period L in fig. 5. Further, from receiving the view update request to the 4th frame image being displayed on the display screen requires 2 APP_Vsync periods + 2ms plus the difference between the start time h of the Nth APP_Vsync period and the view update request time i, as in period M in fig. 5. It can be seen that period L in fig. 5 is smaller than period I in fig. 4, and period M in fig. 5 is smaller than period J in fig. 4. Therefore, compared with fig. 4, when SurfaceFlinger receives the view update request after the start time of the Nth Vsync period and at or before the start time of the Nth APP_Vsync period, the frame image to be displayed can start drawing and rendering within the Nth Vsync period without waiting for the (N+1)th Vsync period, so the delay in displaying the frame image can be reduced to some extent.
However, in the display method shown in fig. 5, each frame image still requires at least 2 Vsync periods from the start of application rendering to hardware display sending, and the total delay time of each frame image is still greater than 2 Vsync periods; the graphic display efficiency still needs to be improved, and the touch response is still slow.
With the rapid improvement of the performance of electronic devices in the industry, the time required for the display system to perform the application rendering process and the SurfaceFlinger synthesis process on one frame image has gradually shortened. In most cases, the display system needs less than one Vsync period to complete the application rendering process and the SurfaceFlinger synthesis process of one frame image. In view of this, an embodiment of the present application provides a method for adjusting display parameters, in which the application rendering offset and the SurfaceFlinger synthesis offset are dynamically adjusted so that at least some frame images can reach an ideal state, where the ideal state means that the application rendering process and the SurfaceFlinger synthesis process of the same frame image are completed within the same Vsync period; the total delay time for displaying such frame images is thereby reduced by at least one Vsync period, improving the graphic display efficiency.
For example, fig. 6 is a diagram illustrating an architecture of a display system to which the method according to the embodiment of the present application is applied. As shown in fig. 6, the display system may include an application rendering module 501, a layer composition module 502, a hardware display unit 503, a vertical synchronization control (Vsync) module (hereinafter, referred to as Vsync control module) 504, a vertical synchronization algorithm (Vsync algorithm) module (hereinafter, referred to as Vsync algorithm module) 505, a setting module 506, an offset vertical synchronization generation module (hereinafter, referred to as offset Vsync generation module) 508, a display screen 507, and display hardware 509.
The application rendering module 501 is a functional module in the application and belongs to the application layer. The layer composition module 502, the Vsync control module 504, the Vsync algorithm module 505, the setting module 506, and the offset Vsync generation module 508 are functional modules in SurfaceFlinger. The layer composition module 502, the setting module 506, and the offset Vsync generation module 508 belong to the application framework layer. The Vsync control module 504 and the Vsync algorithm module 505 belong to the system library; specifically, the Vsync control module 504 and the Vsync algorithm module 505 belong to the native layer. The hardware display unit 503, the display screen 507, and the display hardware 509 are functional modules in hardware and belong to the hardware layer. In particular, the setting module 506 of the application framework layer may exchange data with the Vsync algorithm module 505 in the system library through a touch service interface (ITouchServiceInterface) (not shown in fig. 6). Optionally, the setting module 506 may include units (not shown in fig. 6) such as a SurfaceFlinger binder call (SurfaceFlingerBinder), an event thread (EventThread), and a display synchronization thread (DispSyncThread).
It should be noted that the display system mentioned in the embodiments of the present application may include more or fewer modules than the display system described above. For example, the display system may also include a memory, a timer, and the like.
Fig. 7 is a schematic diagram illustrating the process of information interaction among the modules of the display system in one embodiment. Specifically, as shown in fig. 7, after the mobile phone is powered on, the Vsync control module 504 may obtain parameters related to the display of historical frame images, and determine whether to perform offset adjustment according to those parameters. Illustratively, the Vsync control module 504 may obtain the parameters related to the display of the historical frame images after the mobile phone has been running for a certain period of time (e.g., 3s). The parameters related to the display of the historical frame images include, but are not limited to, the application rendering durations and the SurfaceFlinger synthesis durations of the historical frame images. Offset adjustment refers to adjusting the current offsets to new offsets, including adjusting the current application rendering offset to a new application rendering offset and adjusting the current SurfaceFlinger synthesis offset to a new SurfaceFlinger synthesis offset. If offset adjustment is required, the Vsync control module 504 sends to the Vsync algorithm module 505 the parameters related to the display of the historical frame images and first indication information instructing the Vsync algorithm module 505 to calculate the offsets. The Vsync algorithm module 505 receives the first indication information sent by the Vsync control module 504, and calculates a new application rendering offset and a new SurfaceFlinger synthesis offset according to the application rendering durations and the SurfaceFlinger synthesis durations of the historical frame images. The Vsync algorithm module 505 sends the calculated new application rendering offset and new SurfaceFlinger synthesis offset to the setting module 506. The setting module 506 sets the offsets in its system parameters according to the new application rendering offset and the new SurfaceFlinger synthesis offset, where the offsets include the application rendering offset and the SurfaceFlinger synthesis offset. It is to be understood that the offsets in the system parameters can have default values, to which the setting module 506 sets the offsets in the system parameters for the first time. If the Vsync control module 504 determines that offset adjustment is not necessary, the Vsync control module 504 sends to the setting module 506 the initial offsets and second indication information instructing the setting module 506 to set the offsets to the initial offsets. The initial offsets may include an initial application rendering offset and an initial SurfaceFlinger synthesis offset. The setting module 506 receives the second indication information sent by the Vsync control module 504, and sets the offsets in its system parameters to the initial offsets. In one possible implementation, the initial offsets may be the default values of the offsets in the system parameters in the setting module 506. Meanwhile, in one possible scenario, if the offsets in the system parameters in the setting module 506 are already the initial offsets, then after the setting module 506 receives the second indication information sent by the Vsync control module 504, the offsets in the system parameters may be kept set to the initial offsets without resetting.
The display hardware 509 generates the periodic Vsync signal and distributes the Vsync signal to the hardware display unit 503 and the offset Vsync generation module 508. The offset Vsync generation module 508 obtains the set offsets from the system parameters of the setting module 506, and generates the APP_Vsync signal and the SF_Vsync signal based on the Vsync signal and the set offsets. The offset Vsync generation module 508 sends the generated APP_Vsync signal to the application rendering module 501, and sends the generated SF_Vsync signal to the layer composition module 502. The application rendering module 501 triggers execution of the application rendering process according to the APP_Vsync signal, and then sends the rendered frame image to the layer composition module 502. The layer composition module 502 triggers execution of the SurfaceFlinger synthesis process according to the SF_Vsync signal, and then sends the synthesized frame image to the hardware display unit 503. The hardware display unit 503 triggers execution of the hardware display sending process according to the Vsync signal and pushes the frame image to the display screen 507. The display screen 507 displays the frame image.
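The signal fan-out described above can be sketched with callbacks (a single-process sketch with all names hypothetical; the real modules communicate across components as in fig. 7, and scheduling at the offset times would use timers rather than the immediate calls shown here):

```cpp
#include <cstdint>
#include <cstdio>
#include <functional>

// Hypothetical sketch of fig. 7's fan-out: each hardware Vsync drives the
// display-sending path at the period start, while the rendering and synthesis
// paths are due later by their respective offsets.
struct OffsetVsyncGenerator {
    int64_t appOffsetNs = 1000000;  // assumed 1 ms
    int64_t sfOffsetNs  = 3000000;  // assumed 3 ms
    std::function<void(int64_t)> onHwVsync;   // -> hardware display unit
    std::function<void(int64_t)> onAppVsync;  // -> application rendering module
    std::function<void(int64_t)> onSfVsync;   // -> layer composition module

    void handleHardwareVsync(int64_t tsNs) {
        if (onHwVsync)  onHwVsync(tsNs);                 // display sending now
        if (onAppVsync) onAppVsync(tsNs + appOffsetNs);  // rendering due later
        if (onSfVsync)  onSfVsync(tsNs + sfOffsetNs);    // synthesis due later
    }
};

int main() {
    OffsetVsyncGenerator gen;
    gen.onHwVsync  = [](int64_t t) { std::printf("hw display sending @%lld\n", (long long)t); };
    gen.onAppVsync = [](int64_t t) { std::printf("app rendering due @%lld\n", (long long)t); };
    gen.onSfVsync  = [](int64_t t) { std::printf("layer synthesis due @%lld\n", (long long)t); };
    gen.handleHardwareVsync(0);
    return 0;
}
```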
The method for adjusting display parameters according to the present application will be described in detail below with reference to specific embodiments. The following embodiments may be combined with each other, and the same or similar concepts or processes may not be described again in some embodiments.
Fig. 8 is a flowchart illustrating a method for adjusting display parameters according to an embodiment of the present application, where the method can be applied to the display system shown in fig. 6. As shown in fig. 8, the method for adjusting display parameters according to this embodiment may include:
S601, the Vsync control module acquires the application rendering durations and SurfaceFlinger synthesis durations of a plurality of historical frame images.
The historical frame images are frame images for which the application rendering process and the SurfaceFlinger synthesis process have been completed before the current time. The application rendering duration refers to the duration of executing the application rendering process on one historical frame image. The SurfaceFlinger synthesis duration refers to the duration of executing the SurfaceFlinger synthesis process on one historical frame image.
Optionally, the Vsync control module may preset a calculation period, and select the frame images within one or more calculation periods before the current time as the historical frame images. For example, with a calculation period of 3s, the frame images within the 3s before the current time may be determined as the historical frame images. Assuming that the application rendering process and the SurfaceFlinger synthesis process of 100 frame images are completed within the 3s before the current time, the application rendering durations and SurfaceFlinger synthesis durations of those 100 frame images are obtained respectively.
It can be understood that, when the modules of the display system execute the processes in the display pipeline, the duration spent executing each process can be calculated and stored. Specifically, while executing the application rendering process on a frame image, the application rendering module may calculate the application rendering duration of that frame image and store it in the memory. Similarly, while performing synthesis on a frame image, the layer composition module may calculate the SurfaceFlinger synthesis duration of that frame image and store it in the memory. Optionally, the application rendering module and the layer composition module may respectively store the application rendering duration and the SurfaceFlinger synthesis duration of each frame image in a preset list in the memory; the preset list may be, for example, a rendering-synthesis duration variable (RCFrameTimeVar) list. The Vsync control module may retrieve the RCFrameTimeVar list from the memory, and query it for the application rendering durations and SurfaceFlinger synthesis durations of the required plurality of historical frame images.
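A sketch of this per-frame bookkeeping (the list name follows the RCFrameTimeVar list mentioned above; the field and function names are assumptions):

```cpp
#include <cstdio>
#include <vector>

// Hypothetical record of one historical frame image: how long its application
// rendering and SurfaceFlinger synthesis took, in milliseconds.
struct FrameTimes {
    double renderMs;     // stored by the application rendering module
    double synthesisMs;  // stored by the layer composition module
    double renderSynthesisMs() const { return renderMs + synthesisMs; }
};

int main() {
    // Stand-in for the RCFrameTimeVar list kept in memory; the Vsync control
    // module would query the entries for frames inside one calculation period.
    std::vector<FrameTimes> rcFrameTimeVar = {
        {4.2, 3.1}, {4.0, 3.3}, {4.5, 2.9},
    };
    for (const FrameTimes& f : rcFrameTimeVar) {
        std::printf("render %.1f ms + synthesis %.1f ms = %.1f ms\n",
                    f.renderMs, f.synthesisMs, f.renderSynthesisMs());
    }
    return 0;
}
```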
S602, the Vsync control module judges whether the offset adjustment condition is met according to the application rendering durations and SurfaceFlinger synthesis durations of the plurality of historical frame images. The offset adjustment condition is used to represent that the stability of the sum of the application rendering duration and the SurfaceFlinger synthesis duration of the plurality of historical frame images meets a preset condition and that the proportion of adjustable frame images among the plurality of historical frame images is greater than a preset proportion, where an adjustable frame image refers to a historical frame image for which the sum of the rendering-synthesis duration and the initial application rendering offset is less than one Vsync period.
If the offset adjustment condition is met, the Vsync control module sends the application rendering durations and SurfaceFlinger synthesis durations of the plurality of historical frame images to the Vsync algorithm module, together with first indication information instructing the Vsync algorithm module to calculate the offsets. The Vsync algorithm module performs steps S603 and S604 in response to the first indication information.
If the offset adjustment condition is not met, the Vsync control module sends the initial offsets to the setting module, together with second indication information instructing the setting module to set the offsets to the initial offsets, and the setting module performs step S605 in response to the second indication information.
For convenience of description, the sum of the application rendering duration and the SurfaceFlinger synthesis duration of the same frame image is hereinafter simply referred to as the rendering-synthesis duration (render composite time).
It is understood that the application rendering duration is not always the same for different frame images, nor is the SurfaceFlinger synthesis duration, so the rendering-synthesis duration is not always the same either. Moreover, the stability of the rendering-synthesis duration differs across application scenarios. For example, when a game interface is displayed, the rendering-synthesis durations of the frame images have different stability at different stages of the game. During game loading and during game battles, the rendering-synthesis duration of the frame images is stable, with little fluctuation. However, before the game is loaded and when no battle is in progress, the rendering-synthesis duration of the frame images is unstable, with large fluctuation. In view of this, the embodiment of the present application predicts the stability of the rendering-synthesis duration of the frame images to be displayed by judging the stability of the rendering-synthesis duration of the historical frame images.
Specifically, the stability is used to characterize how stable the rendering-synthesis durations of the plurality of historical frame images are. The stability meeting the preset condition represents that the rendering-synthesis durations of the plurality of historical frame images are stable, with little fluctuation. Optionally, the stability of the rendering-synthesis durations of the plurality of historical frame images may be represented by a parameter such as the variance, which is not limited in this application.
Meanwhile, in order for the embodiment of the present application to complete the application rendering process and the SurfaceFlinger synthesis process of the same frame image to be displayed within one Vsync period, and to avoid frame loss of the frame image to be displayed, the Vsync control module can predict the frame loss situation of the frame images to be displayed by judging the proportion of adjustable frame images among the historical frame images. Specifically, a frame image for which the sum of the rendering-synthesis duration and the initial application rendering offset is less than one Vsync period is defined as an adjustable frame image. The initial application rendering offset refers to the application rendering offset used when the offset adjustment of S603 in the method provided by the present application is not performed. Correspondingly, there may also be an initial SurfaceFlinger synthesis offset, which is the SurfaceFlinger synthesis offset used when the offset adjustment of S603 is not performed. Optionally, the initial application rendering offset and the initial SurfaceFlinger synthesis offset may be the default values of the offsets in the system parameters in the setting module. Of course, the initial application rendering offset and the initial SurfaceFlinger synthesis offset may also be preset as required. In one specific embodiment, the initial application rendering offset and the initial SurfaceFlinger synthesis offset may each be 1ms. In another specific embodiment, the initial application rendering offset may be 1ms, and the initial SurfaceFlinger synthesis offset may be 3ms.
It can be understood that when the sum of the rendering-synthesis duration of a frame image and the initial application rendering offset is less than one Vsync period, then by setting the application rendering offset and the SurfaceFlinger synthesis offset, the SurfaceFlinger synthesis process of that frame image can be started at or after the end time of the application rendering process and finished before the start time of the next Vsync period. In this way, the frame image can complete application rendering and SurfaceFlinger synthesis within one Vsync period, and the hardware display sending process of the frame image is executed at the start time of the next Vsync period; the frame loss phenomenon is avoided, the number of Vsync periods occupied by the application rendering process and the SurfaceFlinger synthesis process is reduced, and the graphic display efficiency is improved.
Conversely, when the sum of the rendering-synthesis duration of a frame image and the initial application rendering offset is greater than or equal to one Vsync period, the application rendering and SurfaceFlinger synthesis of that frame image cannot be completed within one Vsync period no matter how the offsets are adjusted; the hardware display sending process then cannot be executed at the start time of the next Vsync period, and a frame loss phenomenon occurs.
Illustratively, fig. 9 is a flow chart of a graphic display including a frame loss situation in one embodiment. As shown in fig. 9, in the (N+1)th Vsync period, the application rendering duration of the 4th frame image is long, so that the sum of the rendering-synthesis duration of the 4th frame image and the initial application rendering offset APPoffset1 exceeds one Vsync period. When the SF_Vsync signal arrives, the SurfaceFlinger synthesis process is started, but at this time the application rendering process of the 4th frame image has not finished, so the SurfaceFlinger synthesis has no data to process and the 4th frame image cannot be synthesized (see the dashed box in fig. 9). Consequently, when the (N+2)th Vsync period arrives, the hardware display sending has no data to process and no image can be pushed to the display screen; the display screen keeps displaying the 3rd frame image, and the 4th frame image loses its frame.
Illustratively, fig. 10 is a flow chart of a graphic display including a frame loss situation in another embodiment. As shown in fig. 10, in the (N+1)th Vsync period, the SurfaceFlinger synthesis duration of the 4th frame image is long, so that the sum of the rendering-synthesis duration of the 4th frame image and the initial application rendering offset APPoffset1 exceeds one Vsync period. When the (N+2)th Vsync period arrives, the hardware display sending process is started, but at this time the SurfaceFlinger synthesis process of the 4th frame image has not finished, so the hardware display sending has no data to process and no image can be pushed to the display screen (see the dashed box in fig. 10); the display screen keeps displaying the 3rd frame image, and the 4th frame image loses its frame.
In this embodiment, if the stability of the rendering-synthesis durations of the historical frame images meets the preset condition and the proportion of adjustable frame images exceeds the preset proportion, it is determined that the offset adjustment condition is met; it is then predicted that the rendering-synthesis durations of the frame images to be displayed will be stable and that little frame loss will occur after the offset adjustment is performed.
As an implementation manner, if the Vsync control module selects the frame images within one calculation period before the current time as the historical frame images in step S601, the Vsync control module may predict the stability and frame loss situation of the frame images to be displayed within one period after the current time by judging whether the frame images within that calculation period satisfy the offset adjustment condition. For example, assuming one calculation period is 3s, the Vsync control module selects a total of 100 frame images within the 3s before the current time as the historical frame images, and determines from these 100 historical frame images whether the offset adjustment condition is met. If the offset adjustment condition is met, it is predicted that the rendering-synthesis durations of the frame images to be displayed within the 3s after the current time will be stable, and offset adjustment can be performed with little frame loss; otherwise, it is predicted that the rendering-synthesis durations of the frame images to be displayed within the 3s after the current time will be unstable, and more frame loss would occur after offset adjustment. A sketch of this check follows.
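Under the definitions above, the S602 check could be sketched as follows (the variance-based stability measure and both thresholds are assumptions; the patent only requires that the stability meet a preset condition and that the adjustable-frame proportion exceed a preset proportion):

```cpp
#include <cstddef>
#include <vector>

// Hypothetical S602 sketch: the offset adjustment condition holds when
// (a) the variance of the render-synthesis durations is small enough, and
// (b) the share of adjustable frames (renderSynthesis + initial APPoffset
//     < one Vsync period) is large enough.
static bool offsetAdjustmentConditionMet(const std::vector<double>& renderSynthesisMs,
                                         double initialAppOffsetMs,
                                         double vsyncPeriodMs,
                                         double maxVariance,        // assumed threshold
                                         double minAdjustableRatio) // preset proportion
{
    if (renderSynthesisMs.empty()) return false;
    double mean = 0.0;
    for (double t : renderSynthesisMs) mean += t;
    mean /= renderSynthesisMs.size();

    double variance = 0.0;
    std::size_t adjustable = 0;
    for (double t : renderSynthesisMs) {
        variance += (t - mean) * (t - mean);
        if (t + initialAppOffsetMs < vsyncPeriodMs) ++adjustable;
    }
    variance /= renderSynthesisMs.size();
    double ratio = static_cast<double>(adjustable) / renderSynthesisMs.size();
    return variance <= maxVariance && ratio > minAdjustableRatio;
}

int main() {
    // Example: stable ~7.4 ms render-synthesis durations at a 16.6 ms period.
    std::vector<double> history = {7.3, 7.5, 7.2, 7.6, 7.4};
    return offsetAdjustmentConditionMet(history, 1.0, 16.6, 0.5, 0.9) ? 0 : 1;
}
```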
S603, the Vsync algorithm module determines the target application rendering offset and the target SurfaceFlinger synthesis offset according to the application rendering durations and SurfaceFlinger synthesis durations of the plurality of historical frame images. The proportion of ideal frame images among the historical frame images is greater than the preset proportion. An ideal frame image refers to a historical frame image that can complete the application rendering process and the SurfaceFlinger synthesis process within one Vsync period when the target application rendering offset and the target SurfaceFlinger synthesis offset are used.
The target application rendering offset refers to the offset, relative to the Vsync signal, of the APP_Vsync signal that triggers execution of the application rendering process for the frame images to be displayed. The target SurfaceFlinger synthesis offset refers to the offset, relative to the Vsync signal, of the SF_Vsync signal that triggers execution of the SurfaceFlinger synthesis process for the frame images to be displayed.
If the Vsync control module determines in S602 that the offset adjustment condition is currently met, indicating that the rendering-synthesis durations of the historical frame images are stable, it is predicted that the rendering-synthesis durations of the frame images to be displayed will also be stable and close to those of the historical frame images. Therefore, in this embodiment, the Vsync algorithm module determines the target application rendering offset and the target SurfaceFlinger synthesis offset for the frame images to be displayed based on the application rendering durations and SurfaceFlinger synthesis durations of the plurality of historical frame images.
Specifically, an ideal frame image is also a frame image for which the start time of the application rendering process and the start time of the SurfaceFlinger synthesis process are in the same Vsync period and no frame is lost. The proportion of ideal frame images among the historical frame images being greater than the preset proportion represents that, under the target application rendering offset and the target SurfaceFlinger synthesis offset, most of the historical frame images can complete the application rendering process and the SurfaceFlinger synthesis process within one Vsync period.
Optionally, the Vsync algorithm module may find the target application rendering offset and the target SurfaceFlinger synthesis offset by traversing application rendering offsets and SurfaceFlinger synthesis offsets within a certain range, as sketched below. Specifically, on the premise that the start time of the application rendering process and the start time of the SurfaceFlinger synthesis process are in the same Vsync period, for a given candidate application rendering offset and SurfaceFlinger synthesis offset, the Vsync algorithm module can determine whether each historical frame image would lose a frame by combining the application rendering durations and SurfaceFlinger synthesis durations of the historical frame images obtained by the Vsync control module in step S601; the Vsync algorithm module determines the historical frame images without frame loss as ideal frame images, and then determines the proportion of ideal frame images according to the frame loss situation.
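One way to realize this traversal (the step granularity, search ranges, and names are assumptions; the patent only requires finding offsets under which the ideal-frame proportion exceeds the preset proportion):

```cpp
#include <cstddef>
#include <vector>

// Hypothetical S603 sketch: a frame is "ideal" under (appOffset, sfOffset) if
// rendering finishes before synthesis starts and synthesis finishes within the
// same Vsync period, i.e.
//   appOffset + render <= sfOffset  and  sfOffset + synthesis <= vsyncPeriod.
struct FrameTimes { double renderMs; double synthesisMs; };
struct Offsets    { double appMs; double sfMs; bool valid; };

static Offsets findTargetOffsets(const std::vector<FrameTimes>& history,
                                 double vsyncPeriodMs,
                                 double presetRatio,
                                 double stepMs = 0.5) {  // assumed granularity
    for (double app = 0.0; app < vsyncPeriodMs; app += stepMs) {
        for (double sf = app; sf < vsyncPeriodMs; sf += stepMs) {
            std::size_t ideal = 0;
            for (const FrameTimes& f : history) {
                if (app + f.renderMs <= sf && sf + f.synthesisMs <= vsyncPeriodMs)
                    ++ideal;
            }
            if (!history.empty() &&
                static_cast<double>(ideal) / history.size() > presetRatio)
                return {app, sf, true};  // target rendering/synthesis offsets
        }
    }
    return {0.0, 0.0, false};  // no candidate found: fall back to S605
}

int main() {
    std::vector<FrameTimes> history = {{4.2, 3.1}, {4.0, 3.3}, {4.5, 2.9}};
    Offsets t = findTargetOffsets(history, 16.6, 0.9);
    return t.valid ? 0 : 1;
}
```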
S604, the setting module adjusts the current application rendering offset according to the target application rendering offset, and adjusts the current SurfaceFlinger synthesis offset according to the target SurfaceFlinger synthesis offset. The current application rendering offset refers to the application rendering offset at the current time. The current SurfaceFlinger synthesis offset refers to the SurfaceFlinger synthesis offset at the current time.
As an optional implementation manner, the Vsync algorithm module may directly send the target application rendering offset to the setting module as the final application rendering offset setting value, and the setting module sets the current application rendering offset to the target application rendering offset. Similarly, the Vsync algorithm module directly sends the target SurfaceFlinger synthesis offset to the setting module as the final SurfaceFlinger synthesis offset setting value, and the setting module sets the current SurfaceFlinger synthesis offset to the target SurfaceFlinger synthesis offset.
As an optional implementation manner, the Vsync algorithm module may also perform a certain calculation on the target application rendering offset obtained in step S603 to obtain the final application rendering offset setting value and send it to the setting module, and the setting module adjusts the current application rendering offset to the final application rendering offset setting value. Similarly, the Vsync algorithm module may perform a certain calculation on the target SurfaceFlinger synthesis offset obtained in step S603 to obtain the final SurfaceFlinger synthesis offset setting value and send it to the setting module, and the setting module adjusts the current SurfaceFlinger synthesis offset to the final SurfaceFlinger synthesis offset setting value.
For example, fig. 11 is a display diagram after the current application rendering offset is adjusted to the target application rendering offset and the current SurfaceFlinger synthesis offset is adjusted to the target SurfaceFlinger synthesis offset in one embodiment. As shown in fig. 11, in the Nth Vsync period, the dashed box in the application rendering flow indicates the processing time of the 3rd frame image under the initial application rendering offset APPoffset1, and the dashed box in the SurfaceFlinger synthesis flow indicates the processing time of the 3rd frame image under the initial SurfaceFlinger synthesis offset SFoffset1.
As can be seen from fig. 11, if the offset Vsync generation module generated the APP_Vsync signal with the initial application rendering offset APPoffset1 and the SF_Vsync signal with the initial SurfaceFlinger synthesis offset SFoffset1, then, taking the Nth Vsync period as an example, the SurfaceFlinger synthesis process would need to start while the application rendering process of the 3rd frame image was not yet finished. To ensure no frame loss, the application rendering module would need to execute the application rendering process of the 3rd frame image at the time delayed by the APPoffset1 duration after the start time of the Nth Vsync period, the layer composition module would execute the SurfaceFlinger synthesis process of the 2nd frame image at the time delayed by the SFoffset1 duration after the start time of the Nth Vsync period, and the 3rd frame image would need 2 Vsync periods from the start of the application rendering process to the hardware display sending process.
The Vsync algorithm module generates the target application rendering offset appoffset and the target SurfaceFlinger synthesis offset sfoffset through step S603, so that the offset Vsync generation module generates the APP_Vsync signal with the target application rendering offset appoffset, and the SF_Vsync signal generated with the target SurfaceFlinger synthesis offset sfoffset lets the layer composition module start the SurfaceFlinger synthesis process of the 3rd frame image after its application rendering process ends; thus, the application rendering and SurfaceFlinger synthesis of the 3rd frame image can be completed within one Vsync period.
S605, the setting module adjusts the current application rendering offset according to the initial application rendering offset, and adjusts the current SurfaceFlinger synthesis offset according to the initial SurfaceFlinger synthesis offset.
If the Vsync control module determines in S602 that the offset adjustment condition is not currently met, indicating that the rendering-synthesis durations of the historical frame images are unstable and/or that the proportion of adjustable frame images among the historical frame images is less than or equal to the preset proportion, it is predicted that the rendering-synthesis durations of the frame images to be displayed will be unstable and/or that the sum of the rendering-synthesis duration of a frame image to be displayed and the initial application rendering offset will be greater than one Vsync period, which may cause frame loss. Accordingly, the Vsync control module sends the initial application rendering offset, the initial SurfaceFlinger synthesis offset, and the second indication information to the setting module. In response to receiving the second indication information, the setting module adjusts the current application rendering offset according to the initial application rendering offset and adjusts the current SurfaceFlinger synthesis offset according to the initial SurfaceFlinger synthesis offset, thereby avoiding the influence of frame loss on the graphic display effect.
As an optional implementation manner, the setting module may directly adjust the current application rendering offset to the initial application rendering offset, and directly adjust the current SurfaceFlinger synthesis offset to the initial SurfaceFlinger synthesis offset.
As an optional implementation manner, the setting module may also apply a certain calculation to the initial application rendering offset and adjust the current application rendering offset to the calculated application rendering offset. Meanwhile, the setting module may apply a certain calculation to the initial SurfaceFlinger synthesis offset and adjust the current SurfaceFlinger synthesis offset to the calculated SurfaceFlinger synthesis offset.
In one embodiment, before step S605, the method may further include: the setting module judges whether the current application rendering offset is equal to the initial application rendering offset and whether the current SurfaceFlinger synthesis offset is equal to the initial SurfaceFlinger synthesis offset. If so, the setting module keeps the current application rendering offset and the current SurfaceFlinger synthesis offset unchanged; if not, step S605 is executed.
S606. The rendering, synthesis and display of the frame image to be displayed are controlled according to the adjusted application rendering offset and the adjusted SurfaceFlinger synthesis offset.
The offset Vsync generation module generates the APP_Vsync signal according to the Vsync signal distributed by the display hardware and the adjusted application rendering offset, and sends the APP_Vsync signal to the application rendering module. The application rendering module triggers the application rendering process of the frame image to be displayed through the APP_Vsync signal. Meanwhile, the offset Vsync generation module generates the SF_Vsync signal according to the adjusted SurfaceFlinger synthesis offset, and sends the SF_Vsync signal to the layer synthesis module. The layer synthesis module triggers the SurfaceFlinger synthesis process of the frame image to be displayed through the SF_Vsync signal. In addition, the hardware display unit triggers the hardware display process of the frame image to be displayed through the Vsync signal distributed by the display hardware. Thus, the display of the frame image to be displayed is realized.
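To make the timing relationship concrete, the following minimal Python sketch (our illustration, not the patent's implementation; all names are assumptions) shows how the two offset signals are derived from a hardware Vsync timestamp:

```python
# Illustrative sketch of the offset-Vsync timing relationship described above.
# Function and argument names are hypothetical, not the patent's implementation.

def offset_vsync_timestamps(vsync_ts_ms, app_offset_ms, sf_offset_ms):
    """Given a hardware Vsync timestamp, return the instants at which the
    APP_Vsync and SF_Vsync signals fire under the configured offsets."""
    app_vsync_ts = vsync_ts_ms + app_offset_ms  # triggers application rendering
    sf_vsync_ts = vsync_ts_ms + sf_offset_ms    # triggers SurfaceFlinger synthesis
    return app_vsync_ts, sf_vsync_ts

# Example: hardware Vsync at t = 0 ms, adjusted offsets of 4 ms and 6 ms.
print(offset_vsync_timestamps(0.0, 4.0, 6.0))   # (4.0, 6.0)
```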
It can be seen that, in the method for adjusting display parameters provided in this embodiment, the application rendering durations and SurfaceFlinger synthesis durations of the plurality of historical frame images are obtained, and the target application rendering offset and the target SurfaceFlinger synthesis offset are determined from them, such that under these target offsets the proportion of ideal frame images in the historical frame images is greater than the preset proportion. The application rendering durations and SurfaceFlinger synthesis durations of the historical frame images are representative of those of the frame images to be displayed. Therefore, after the current application rendering offset and the current SurfaceFlinger synthesis offset are adjusted according to the target application rendering offset and the target SurfaceFlinger synthesis offset, the application rendering and SurfaceFlinger synthesis of some frame images to be displayed can be completed within one Vsync period, so that hardware display can start at the beginning of the next Vsync period. A frame image then needs only one Vsync period from application rendering to hardware display, which shortens the display processing time of the frame image, reduces display delay, improves image display efficiency, and improves touch response.
Moreover, the method for adjusting display parameters according to this embodiment determines whether the offset adjustment condition is satisfied according to the application rendering durations and SurfaceFlinger synthesis durations of the plurality of historical frame images, and adjusts the offsets only after determining that the condition is satisfied, thereby ensuring that the offsets are adjusted when the rendering durations of the frame images are stable and frame loss is unlikely, and guaranteeing the graphics display effect after the adjustment.
Optionally, in an embodiment, in step S602, the determining, by the Vsync control module, whether the offset adjustment condition is satisfied according to the application rendering durations and SurfaceFlinger synthesis durations of the plurality of historical frame images may include:
The Vsync control module calculates, according to the application rendering durations and SurfaceFlinger synthesis durations of the plurality of historical frame images, the variance of the rendering synthesis durations of the plurality of historical frame images and the variance threshold. The Vsync control module then determines whether the variance is less than the variance threshold. If the variance is smaller than the variance threshold, the offset adjustment condition is satisfied; otherwise, the offset adjustment condition is not satisfied. The variance threshold represents the critical state in which the stability of the rendering synthesis durations of the plurality of historical frame images sits at the boundary between satisfying and not satisfying the preset condition (i.e., between stable and unstable), and the proportion of adjustable frame images in the plurality of historical frame images is equal to the preset proportion.
In this embodiment, the offset adjustment condition is expressed by a variance threshold, and whether the condition is satisfied is determined simply, quickly and accurately by comparing the variance of the rendering synthesis durations of the plurality of historical frame images with the variance threshold.
First, a method of calculating the variance will be described.
Optionally, the variance may be a common variance or a filtered variance.
In one possible implementation manner, the variance of the rendering synthesis durations of the plurality of historical frame images is the common variance, which can be calculated by formula (1):

σ = sqrt( (1/m) * Σ_{i=1}^{m} (RCT[i] - RCTAvg)^2 )  (1)

wherein σ represents the common variance, m represents the total number of historical frame images, i represents the serial number of a historical frame image, RCT[i] represents the rendering synthesis duration of the i-th historical frame image, and RCTAvg represents the average of the rendering synthesis durations of the m historical frame images, calculated by formula (2):

RCTAvg = (1/m) * Σ_{i=1}^{m} RCT[i]  (2)
In another possible implementation manner, the variance of the rendering synthesis durations of the plurality of historical frame images is the filtered variance, which can be calculated by formula (3):

σ_filter = a*σ_now - (1 - a)*σ_old  (3)

wherein σ_filter represents the filtered variance and a represents the variance filter coefficient. σ_now represents the variance of the rendering synthesis durations of the historical frame images of the current calculation cycle, and σ_old represents the variance of the rendering synthesis durations of the historical frame images of the last calculation cycle.
The current calculation cycle is the calculation cycle immediately before the current time. The last calculation cycle refers to the calculation cycle before the current calculation cycle, i.e., the second calculation cycle before the current time. Taking a calculation cycle of 3 s as an example, if the current time is 01:08:07, the current calculation cycle is the 3 s period before the current time, namely 01:08:04 to 01:08:07, and the last calculation cycle is the period from 6 s before the current time to 3 s before the current time, namely 01:08:01 to 01:08:04.
The variance filter coefficient a can be set according to actual use requirements; for example, a can be 0.3 or 0.7. σ_now and σ_old can each be calculated by formula (1), and details are not repeated here.
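For concreteness, formulas (1) to (3) can be written out as a minimal Python sketch. The function names are ours and durations are assumed to be in milliseconds; since the 3σ criterion below is applied to σ directly, formula (1) is written here as a root-mean-square deviation:

```python
import math

def rct_avg(rct):
    """Formula (2): average rendering synthesis duration RCTAvg of m frames."""
    return sum(rct) / len(rct)

def common_variance(rct):
    """Formula (1): the 'common variance' sigma of the m historical frame
    images (a root-mean-square deviation, so the 3-sigma criterion applies)."""
    avg = rct_avg(rct)
    return math.sqrt(sum((t - avg) ** 2 for t in rct) / len(rct))

def filtered_variance(sigma_now, sigma_old, a=0.7):
    """Formula (3): filtered variance combining the variances of the current
    and last calculation cycles with filter coefficient a."""
    return a * sigma_now - (1 - a) * sigma_old
```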
In this implementation manner, whether the offset adjustment condition is satisfied is determined through the filtered variance, so that noise data in the historical frame images can be effectively filtered out, which improves the accuracy of the offset adjustment condition and of the subsequent offset setting, and further improves the graphics display effect.
The following describes a method of calculating the variance threshold.
According to the 3σ criterion of the normal distribution, the probability that data are distributed within μ + 3σ is 0.9973, where μ represents the mean of the data and σ represents the variance of the data. With reference to the foregoing embodiment, μ is the average RCTAvg of the rendering synthesis durations of the m historical frame images, and σ is the common variance σ or the filtered variance σ_filter of the foregoing embodiment. The following takes the filtered variance σ_filter as an example. Then, the probability that the rendering synthesis duration of a historical frame image is distributed within RCTAvg + 3σ_filter is 0.9973.
Therefore, assuming the preset proportion is 0.9973, the corresponding rendering synthesis duration is RCTAvg + 3σ_filter, and a frame image satisfying formula (4) is an adjustable frame image:

RCTAvg + 3σ_filter = T_Vsync - APPoffset1  (4)
wherein T_Vsync represents one Vsync period, and APPoffset1 represents the initial application rendering offset.
Therefore, the variance threshold σ_threshold at the preset proportion of 0.9973 can be derived from formula (4) as formula (5):

σ_threshold = (T_Vsync - APPoffset1 - RCTAvg) / 3  (5)
In other words, the variance threshold is the variance corresponding to the critical rendering synthesis duration, where the critical rendering synthesis duration is the rendering synthesis duration at which the proportion of adjustable frame images in the historical frame images equals the preset proportion; the sum of the critical rendering synthesis duration and the initial application rendering offset equals one Vsync period.
In one embodiment, the variance threshold may further take a preset threshold and an expected profit duration into account. Optionally, the critical rendering synthesis duration is then the difference between one Vsync period and the sum of the initial application rendering offset, the preset threshold and the expected profit duration; that is, the preset threshold and the expected profit duration are added on the basis of formula (4). Assuming the preset proportion is 0.9973, the corresponding rendering synthesis duration is RCTAvg + 3σ_filter, and a frame image satisfying formula (6) is an adjustable frame image:

RCTAvg + 3σ_filter + Delta + Gain = T_Vsync - APPoffset1  (6)
where Delta represents the preset threshold and Gain represents the expected profit duration. Delta and Gain can be set according to requirements; for example, Delta can be set to 1 ms and Gain to 3 ms.
Thus, the variance threshold σ_threshold at the preset proportion of 0.9973 can be derived from formula (6) as formula (7):

σ_threshold = (T_Vsync - APPoffset1 - Delta - Gain - RCTAvg) / 3  (7)
In this embodiment, adding the preset threshold to the calculation of the variance threshold is equivalent to adding an error margin to the offset adjustment condition; possible errors in practical application are thus fully considered and the flexibility in judging the offset adjustment condition is improved, thereby improving the accuracy of the offset adjustment and further improving the graphics display effect. Meanwhile, adding the expected profit duration to the calculation of the variance threshold guarantees a minimum value of the offset increase, improves the profit after the offset adjustment, and helps improve the no-frame-loss rate after the adjustment.
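A sketch of the threshold calculation and the resulting condition check might look as follows (names are ours; setting delta and gain to zero reduces formula (7) to formula (5)):

```python
def variance_threshold(t_vsync, app_offset1, rct_avg_val, delta=0.0, gain=0.0):
    """Formula (7); with delta == gain == 0 this reduces to formula (5)."""
    return (t_vsync - app_offset1 - delta - gain - rct_avg_val) / 3

def offset_adjustment_condition(sigma_filter, threshold):
    """Offset adjustment condition: the filtered variance must stay below
    the variance threshold."""
    return sigma_filter < threshold
```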
The above procedure of determining whether the offset adjustment condition is satisfied based on the variance and the variance threshold is exemplified below.
Taking a display refresh rate of 60 Hz as an example, the Vsync period is 16.66 ms. Assume that 52 historical frame images finished displaying in the current calculation cycle and 48 historical frame images finished displaying in the last calculation cycle. The Vsync control module may acquire the application rendering duration and the SurfaceFlinger synthesis duration of each of these 100 historical frame images from the RC FrameTimeVar list mentioned in the above embodiment, and may calculate the rendering synthesis duration of each historical frame image.
RCTAvg is calculated according to formula (2): assuming that the sum of the rendering synthesis durations of the 52 historical frame images of the current calculation cycle is 265 ms, formula (2) gives

RCTAvg = 265/52 ≈ 5 ms
The filtered variance σ_filter is calculated according to formula (3): suppose the variance of the rendering synthesis durations of the historical frame images of the last calculation cycle, calculated according to formula (1), is σ_old = 3; the variance of the rendering synthesis durations of the historical frame images of the current calculation cycle, calculated according to formula (1), is σ_now = 5; and the filter coefficient a is 0.7. Then formula (3) gives the filtered variance σ_filter = 0.7*5 - (1 - 0.7)*3 = 2.6.
The variance threshold σ_threshold is calculated according to formula (7): assuming a preset threshold of 1 ms, an expected profit duration of 3 ms, and APPoffset1 of 1 ms, formula (7) gives

σ_threshold = (16.66 - 1 - 1 - 3 - 5)/3 = 2.22
Comparing the filtered variance σ_filter with the variance threshold σ_threshold, 2.6 is greater than 2.22, so the offset adjustment condition is not satisfied; the current application rendering offset is therefore adjusted to the initial application rendering offset, and the current SurfaceFlinger synthesis offset is adjusted to the initial SurfaceFlinger synthesis offset.
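Plugging the numbers of this walk-through into the sketches above reproduces the result (assuming the helper functions defined earlier are in scope):

```python
rct_avg_val = 5.0   # 265/52 ≈ 5 ms, formula (2), rounded as in the text
sigma_f = filtered_variance(sigma_now=5, sigma_old=3, a=0.7)  # 2.6, formula (3)
thresh = variance_threshold(16.66, app_offset1=1, rct_avg_val=rct_avg_val,
                            delta=1, gain=3)                  # ≈ 2.22, formula (7)
print(offset_adjustment_condition(sigma_f, thresh))           # False: keep initial offsets
```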
The process by which the Vsync algorithm module determines, in S603, the target application rendering offset and the target SurfaceFlinger synthesis offset according to the application rendering durations and SurfaceFlinger synthesis durations of the plurality of historical frame images is described in detail below with reference to embodiments.
Under the target application rendering offset and the target SurfaceFlinger synthesis offset, the proportion of ideal frame images in the historical frame images is greater than the preset proportion. That is, after the offset adjustment, a certain proportion of the historical frame images should be able to complete the application rendering process and the SurfaceFlinger synthesis process within one Vsync period. Whether an image can complete both processes within one Vsync period can be considered from two aspects: the touch-following gain (the reduction in delay between a touch input and its on-screen response) and the no-frame-loss rate. These are analyzed separately below.
1) Touch-following gain
Since the timing of input events is uncertain relative to the timing of the Vsync signal, input events can be regarded as uniformly distributed random events. An input event means that the user sends a view update request to the display system through an input device. Taking input through the display screen as an example, an input event refers to an operation of the user touching the display screen, for example, clicking or sliding on it. Fig. 12 is a schematic diagram of the display flow after the offsets are adjusted in one embodiment. As shown in fig. 12, the initial application rendering offset APPoffset1 is adjusted by the application rendering offset adjustment value (referred to in the figure simply as the adjustment value) APPoffset2, so as to obtain the target application rendering offset APPoffsetT (i.e., APPoffset1 + APPoffset2, not shown in fig. 12). For convenience of explanation, the time remaining in the Nth Vsync period apart from APPoffset1 and APPoffset2 is defined as the remaining time APPoffset3. In the figure, the dashed box in the application rendering time axis indicates the processing time of the 3rd frame image under the initial application rendering offset APPoffset1. An input event occurring within the APPoffset1 period is defined as input 1, one occurring within the APPoffset2 period as input 2, and one occurring within the APPoffset3 period as input 3.
First, as described in the above embodiment, compared with fig. 3 to 5, for input 1, input 2 and input 3 alike, adjusting the application rendering offset and the SurfaceFlinger synthesis offset allows the application rendering and SurfaceFlinger synthesis of one frame image to be completed within one Vsync period, yielding a touch-following gain of one Vsync period. For details, reference is made to the above embodiments, which are not repeated here.

Next, the touch-following gain is analyzed on the basis of fig. 12. For input 1, the initial application rendering offset APPoffset1 is adjusted by APPoffset2; both before and after the adjustment, the 3rd frame image corresponding to input 1 executes the application rendering process in the Nth Vsync period and the hardware display process in the (N+1)th Vsync period. Thus, for input 1, the offset adjustment brings no additional touch-following gain.

For input 2, before the adjustment, the 3rd frame image corresponding to input 2 misses the APP_Vsync signal generated under APPoffset1 and must execute the application rendering process in the (N+1)th Vsync period and the hardware display process in the (N+2)th Vsync period. After the adjustment, the 3rd frame image corresponding to input 2 arrives before the start time of the Nth APP_Vsync cycle generated with the offset APPoffset1 + APPoffset2, so it can execute the application rendering process in the Nth Vsync period and the hardware display process in the (N+1)th Vsync period. Thus, for input 2, the offset adjustment brings an additional touch-following gain of one Vsync period.

For input 3, both before and after the adjustment, the 3rd frame image corresponding to input 3 executes the application rendering process in the (N+1)th Vsync period and the hardware display process in the (N+2)th Vsync period. Thus, for input 3, the offset adjustment brings no additional touch-following gain.

Therefore, for input 2, if the offsets are adjusted such that the application rendering and SurfaceFlinger synthesis of one frame image can be completed within one Vsync period, a touch-following gain of 2 Vsync periods is obtained in total.
Based on the above analysis, according to the probability theory formula, the expected touch-following gain T brought by the offset adjustment is:

T = (APPoffset2/T_Vsync)*2*T_Vsync + ((T_Vsync - APPoffset2)/T_Vsync)*T_Vsync = T_Vsync + APPoffset2  (8)
2) No-frame-loss rate
Adjusting the application rendering offset and the SurfaceFlinger synthesis offset can bring the above touch-following gain T, but frame loss may occur after the adjustment. As shown in fig. 9 and 10, when the sum of the rendering synthesis duration of a frame image and the application rendering offset is greater than one Vsync period, a frame loss occurs. In addition, even if that sum is less than or equal to one Vsync period, frame loss may still occur if the adjusted target application rendering offset and target SurfaceFlinger synthesis offset are unsuitable, so that the SurfaceFlinger synthesis process starts before the application rendering process ends, or the SurfaceFlinger synthesis process ends beyond the current Vsync period.
In summary, a frame loss occurs when any one of the following conditions holds (a check covering all three is sketched after this list):

a. the sum of the rendering synthesis duration of the frame image and the target application rendering offset is greater than one Vsync period;

b. the target SurfaceFlinger synthesis offset is smaller than the sum of the target application rendering offset and the application rendering duration;

c. the sum of the target SurfaceFlinger synthesis offset and the SurfaceFlinger synthesis duration is greater than one Vsync period.
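A frame-loss check covering these three conditions can be sketched as follows (a minimal illustration; the argument names are ours):

```python
def drops_frame(t_app, t_sf, app_offset_t, sf_offset_t, t_vsync):
    """True if a frame with application rendering duration t_app and
    SurfaceFlinger synthesis duration t_sf loses a frame under the candidate
    offsets, per conditions a, b and c above (all values in ms)."""
    return ((t_app + t_sf) + app_offset_t > t_vsync    # a: overruns the period
            or sf_offset_t < app_offset_t + t_app      # b: synthesis starts too early
            or sf_offset_t + t_sf > t_vsync)           # c: synthesis ends too late
```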
Therefore, considering both the touch-following gain and the no-frame-loss rate, an offset optimization model is established based on the idea of an optimization problem as follows:

max f(APPoffset2) = max( T * ls )  (9)

where max f() represents finding the maximum value of the function f(), APPoffset2 is the adjustment value of the application rendering offset, and ls is the no-frame-loss rate, i.e., the ratio of the number of frame images without frame loss to the total number of frame images: ls = (fa - fl)/fa, where fa denotes the total number of frame images and fl denotes the number of frame images with frame loss. When the target application rendering offset and the target SurfaceFlinger synthesis offset are determined based on the historical frame images, fa represents the total number of historical frame images and fl represents the number of historical frame images with frame loss.
Substituting formula (8) and the no-frame-loss rate formula into formula (9) yields the offset optimization model:

f(APPoffset2) = (T_Vsync + APPoffset2) * (fa - fl)/fa  (10)
for the solution of equation (10), it is necessary to traverse 0 to T Vsync -each APPoffset2 within APPoffset 1. After one APPoffset2 is given, the SurfaceFlinger synthetic offset corresponding to the APPoffset2 is calculated. Based on each set of APPoffset1+ APPoffset2 and SurfaceFlinger synthesis offset, no lost computation is performed in combination with the application rendering duration and the SurfaceFlinger synthesis duration of each of the historical frame imagesFrame rate. And then solving APPoffset2 when the product of the chiral gain and the non-frame-losing rate is maximum to obtain the optimal solution of APPoffset 2.
Specifically, given APPoffset2, the maximum SurfaceFlinger synthesis duration SFmax can be calculated according to formula (11):

SFmax = T_Vsync - APPoffset1 - APPoffset2 - T_APP  (11)

wherein T_APP represents the application rendering duration. Optionally, the application rendering duration may be the average of the application rendering durations of the plurality of historical frame images; of course, it may also be selected in other manners according to requirements.

Further, according to the maximum SurfaceFlinger synthesis duration SFmax, the SurfaceFlinger synthesis offset SFOffset corresponding to the selected APPoffset2 can be obtained:

SFOffset = T_Vsync - SFmax  (12)

Substituting formula (11) into formula (12) gives the calculation formula of the SurfaceFlinger synthesis offset SFOffset:

SFOffset = APPoffset1 + APPoffset2 + T_APP  (13)
since the preset threshold and the expected profit duration have been considered when determining the offset adjustment condition, Delta and Gain are not subtracted in equation (13).
Whether each historical frame image loses a frame is calculated according to APPoffset1, APPoffset2 and SFOffset, and the no-frame-loss rate of the historical frame images is then calculated according to the formula ls = (fa - fl)/fa.
It will be appreciated that, in solving for the optimal APPoffset2 based on formulas (10) and (13) by traversing each APPoffset2 in the range 0 to T_Vsync - APPoffset1, fl gradually increases as the value of APPoffset2 increases, and f(APPoffset2) first increases and then decreases; therefore, the APPoffset2 value that maximizes f(APPoffset2) can be found, namely the optimal APPoffset2.
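A traversal solver along these lines might look as follows; it reuses the drops_frame check sketched above, and the default 1 ms step and all names are our assumptions:

```python
def solve_offsets(frames, t_vsync, app_offset1, step=1.0):
    """Traverse APPoffset2 over [0, T_Vsync - APPoffset1] and return the
    (APPoffsetT, SFoffsetT) pair maximizing formula (10).
    frames: list of (t_app, t_sf) durations per historical frame image."""
    t_app_avg = sum(t_app for t_app, _ in frames) / len(frames)  # T_APP
    best_f, best = float("-inf"), (app_offset1, app_offset1 + t_app_avg)
    app_offset2 = 0.0
    while app_offset2 <= t_vsync - app_offset1:
        app_offset_t = app_offset1 + app_offset2          # formula (14)
        sf_offset_t = app_offset_t + t_app_avg            # formula (13)
        lost = sum(drops_frame(ta, ts, app_offset_t, sf_offset_t, t_vsync)
                   for ta, ts in frames)
        ls = (len(frames) - lost) / len(frames)           # no-frame-loss rate
        f = (app_offset2 + t_vsync) * ls                  # formula (10)
        if f > best_f:
            best_f, best = f, (app_offset_t, sf_offset_t)
        app_offset2 += step
    return best
```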
After the optimal APPoffset2 is obtained, the target application rendering offset APPoffsetT can be calculated according to formula (14):

APPoffsetT = APPoffset1 + APPoffset2  (14)

Then, according to the target application rendering offset APPoffsetT, the target SurfaceFlinger synthesis offset SFoffsetT is calculated according to formula (13).
In this embodiment, because the touch-following gain and the no-frame-loss rate are both factors that affect the proportion of ideal frame images, the offset optimization model is established based on the touch-following gain and the no-frame-loss rate brought by the offset adjustment. The application rendering offset adjustment value APPoffset2 obtained by solving the offset optimization model balances both, so that under the target application rendering offset APPoffsetT and the target SurfaceFlinger synthesis offset SFoffsetT, the proportion of ideal frame images in the historical frame images is the largest, thereby maximizing the touch-following gain and the no-frame-loss rate of the frame images to be displayed and optimizing the display effect.
The following illustrates the process of determining the target application rendering offset APPoffsetT and the target SurfaceFlinger synthesis offset SFoffsetT:
Taking a display refresh rate of 60 Hz as an example, the Vsync period is 16.66 ms. Assume APPoffset1 is 1 ms and the average application rendering duration of the plurality of historical frame images is 2 ms. APPoffset2 is traversed from 0 ms to 15 ms with a step of 1 ms, and the product of the gain and the no-frame-loss rate corresponding to each APPoffset2 is calculated.

Assuming the current APPoffset2 is 3 ms, the adjusted application rendering offset is 1 + 3 = 4 ms.

According to formula (13), the SurfaceFlinger synthesis offset SFOffset = 1 + 3 + 2 = 6 ms.
With 4 ms as the application rendering offset and 6 ms as the SurfaceFlinger synthesis offset, whether each historical frame image in the RC FrameTimeVar list loses a frame is calculated: when the application rendering duration + the SurfaceFlinger synthesis duration + 4 ms is greater than T_Vsync (16.6 ms), a frame loss is considered to occur. Here, it is assumed that there are 100 historical frame images in total, of which 10 lose frames, so the no-frame-loss rate ls = (100 - 10)/100 = 0.9.

Further, according to formula (10), the offset optimization value f(APPoffset2) = (3 + 16.6) × 0.9 = 17.64 is calculated.
Referring to the above process, the offset optimization value under each APPoffset2 value is calculated, and the largest among all the offset optimization values is found, giving the optimal solution of APPoffset2. It is assumed here that the resulting optimal solution of APPoffset2 is 8 ms.

According to the optimal solution of APPoffset2 and formula (14), the target application rendering offset APPoffsetT = 1 + 8 = 9 ms.

According to formula (13), SFoffsetT = 1 + 8 + 2 = 11 ms.
It should be noted that all values in the above examples are for illustrative purposes, and do not limit the values of the parameters.
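As one more illustration, running the solver sketched earlier on synthetic history data chosen in the same spirit as this example (100 frames, application rendering fixed at 2 ms, synthesis durations spread over 2 to 5 ms, APPoffset1 of 1 ms) lands on the same offsets:

```python
# Synthetic history: (t_app, t_sf) pairs; values are illustrative only.
history = [(2.0, 2.0 + 3.0 * i / 99) for i in range(100)]
print(solve_offsets(history, t_vsync=16.66, app_offset1=1.0))  # (9.0, 11.0)
```

With this data, APPoffset2 = 8 ms keeps the no-frame-loss rate at 1 while any larger value starts dropping frames, so the traversal settles on APPoffsetT = 9 ms and SFoffsetT = 11 ms.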
Optionally, in an embodiment, before step S604, step S603 may be repeatedly executed N times to calculate N target application rendering offset selectable values (hereinafter referred to as rendering selectable values) and N target SurfaceFlinger synthesis offset selectable values (hereinafter referred to as synthesis selectable values), where N is an integer greater than 1, and the method further includes:
the Vsync algorithm module judges whether the N rendering selectable values are all larger than the current application rendering offset; if yes, the Vsync algorithm module determines a target application rendering offset according to the N rendering selectable values, determines a target SurfaceFlinger synthesis offset according to the N synthesis selectable values, and executes step S604; if not, the setting module does not adjust the current application rendering offset and the current surface flicker synthesis offset, that is, S604 is not executed.
The value of N can be set according to actual requirements. When the target application rendering offset is determined according to the N rendering selectable values, optionally, the minimum value of the N rendering selectable values is determined as the target application rendering offset, so that the frame loss rate is minimum, and the display effect is improved to the maximum extent.
For example, assume that N is 3, the current calculation cycle is denoted as the Mth calculation cycle, and the current application rendering offset is 7. Taking the frame images of the current calculation cycle as the historical frame images, the rendering selectable value calculated by step S603 is 8. Taking the frame images of the (M-1)th calculation cycle (i.e., the last calculation cycle) as the historical frame images, the rendering selectable value calculated by step S603 is 9. Taking the frame images of the (M-2)th calculation cycle as the historical frame images, the rendering selectable value calculated by step S603 is 8. Because the rendering selectable values 8, 9 and 8 obtained in the Mth, (M-1)th and (M-2)th calculation cycles are all greater than the current application rendering offset 7, the target application rendering offset is determined according to these 3 rendering selectable values, and the minimum of the three, 8, is determined as the target application rendering offset.
In this embodiment, N rendering selectable values are obtained by repeating step S603, and the current offsets are adjusted only when all N rendering selectable values are greater than the current application rendering offset, which avoids frequent adjustment of the application rendering offset by the software system, reduces system load, and improves system operating efficiency.
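A sketch of this guard (names are ours): the offsets are only touched when every candidate exceeds the current value, and the minimum candidate then becomes the target:

```python
def should_adjust(render_candidates, current_app_offset):
    """Adjust only when all N rendering selectable values exceed the
    currently configured application rendering offset."""
    return all(c > current_app_offset for c in render_candidates)

candidates = [8, 9, 8]                    # rendering selectable values of 3 cycles
if should_adjust(candidates, current_app_offset=7):
    target_app_offset = min(candidates)   # 8: the target application rendering offset
```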
Fig. 13 is a schematic structural diagram of a display parameter adjusting apparatus according to an embodiment of the present disclosure. As shown in fig. 13, the apparatus for adjusting display parameters according to this embodiment may include:
an obtaining module 1101, configured to obtain application drawing rendering durations and layer composition durations of a plurality of historical frame images;
a determining module 1102, configured to determine a target application rendering offset and a target layer synthesis offset according to application rendering durations and layer synthesis durations of multiple historical frame images; the target application drawing rendering offset is the offset of a drawing rendering synchronous signal relative to a vertical synchronous signal for triggering the execution of an application drawing rendering process on the frame image to be displayed; the target layer synthesis offset refers to the offset of a synthesis synchronous signal which triggers the execution of a layer synthesis process on a frame image to be displayed relative to a vertical synchronous signal; the occupation ratio of ideal frame images in the plurality of historical frame images is larger than a preset occupation ratio; the ideal frame image is a historical frame image which completes an application drawing and rendering process and a layer synthesis process in a vertical synchronization period when a target application drawing and rendering offset and a target layer synthesis offset are used;
an adjusting module 1103, configured to adjust the current application rendering offset according to the target application rendering offset, and adjust the current layer synthesis offset according to the target layer synthesis offset.
In an embodiment, the determining module 1102 is specifically configured to solve the offset optimization model according to the application rendering durations and layer synthesis durations of the plurality of historical frame images to obtain the optimal solution of the application rendering offset adjustment value; the offset optimization model is used for representing the maximum value of the product of the touch-following gain and the no-frame-loss rate; the touch-following gain is the sum of the application rendering offset adjustment value and one vertical synchronization period; the no-frame-loss rate refers to the proportion of historical frame images without frame loss among the plurality of historical frame images when the application rendering offset and the layer synthesis offset obtained according to the application rendering offset adjustment value are used; and to determine the target application rendering offset and the target layer synthesis offset according to the optimal solution of the application rendering offset adjustment value.
In an embodiment, the determining module 1102 is specifically configured to determine a sum of an optimal solution of the application rendering offset adjustment value and an initial application rendering offset as a target application rendering offset, where the initial application rendering offset refers to an application rendering offset when offset adjustment is not performed; and determining a target layer synthesis offset according to the target application drawing rendering offset.
In one embodiment, the determining module 1102 is specifically configured to calculate an average value of application rendering durations of the plurality of historical frame images; and determining the sum of the average value and the target application drawing rendering offset as a target layer synthesis offset.
In an embodiment, the plurality of historical frame images are frame images in a first historical time period, the obtaining module 1101 is further configured to obtain target application rendering offsets and target layer synthesis offsets corresponding to a plurality of second historical time periods, where the plurality of second historical time periods are located before the first historical time period, and the plurality of second historical time periods are consecutive to the first historical time period;
the adjusting module 1103 is specifically configured to determine, if the rendering selectable values are all greater than the current application rendering offset, an application rendering offset setting value according to the rendering selectable values, and determine a layer composition offset setting value according to the composition selectable values; the rendering selectable values comprise a target application drawing rendering offset corresponding to a first historical time period and a target application drawing rendering offset corresponding to a plurality of second historical time periods; the plurality of synthetic selectable values comprise target layer synthetic offset corresponding to a first historical time period and target layer synthetic offset corresponding to a plurality of second historical time periods; and adjusting the current application rendering offset as an application rendering offset setting value, and adjusting the current layer synthesis offset as a layer synthesis offset setting value.
In one embodiment, the adjusting module 1103 is specifically configured to determine a rendering optional value that is the smallest among the plurality of rendering optional values as the application rendering offset setting value; and determining the smallest synthesis selectable value in the plurality of synthesis selectable values as the layer synthesis offset setting value.
In an embodiment, the apparatus for adjusting a display parameter further includes a determining module 1104, configured to determine, according to an application drawing rendering duration and a layer composition duration of the multiple historical frame images, that a stability of the rendering composition duration of the multiple historical frame images meets a preset condition, and a ratio of adjustable frame images in the multiple historical frame images is greater than a preset ratio; the rendering synthesis duration refers to the sum of application rendering duration and layer synthesis duration of the historical frame image, the adjustable frame image refers to the historical frame image of which the sum of the rendering synthesis duration and the initial application rendering offset is smaller than a vertical synchronization period, and the initial application rendering offset refers to the application rendering offset when offset adjustment is not performed.
In an embodiment, the determining module 1104 is specifically configured to calculate a variance of rendering and synthesizing durations of the plurality of historical frame images according to application rendering durations and layer synthesizing durations of the plurality of historical frame images; calculating a variance threshold according to the application drawing rendering time length and the layer synthesis time length of the plurality of historical frame images, a preset proportion, a vertical synchronization period and an initial application drawing rendering offset; if the variance is smaller than the variance threshold, it is determined that the stability of the rendering and synthesizing time lengths of the plurality of historical frame images meets a preset condition, and the proportion of the adjustable frame images in the plurality of historical frame images is larger than a preset proportion.
In one embodiment, the preset ratio is 0.9973.
In one embodiment, the determining module 1104 is specifically configured to calculate the variance threshold according to

σ_threshold = (T_Vsync - APPoffset1 - RCTAvg) / 3

wherein σ_threshold represents the variance threshold, T_Vsync represents the vertical synchronization period, APPoffset1 represents the initial application rendering offset, and RCTAvg represents the average of the rendering synthesis durations of the plurality of historical frame images; the rendering synthesis duration refers to the sum of the application rendering duration and the layer synthesis duration of a historical frame image.
The adjusting apparatus for display parameters provided in this embodiment is used to implement the adjusting method for display parameters in the foregoing embodiment, and the technical principle and the technical effect are similar, and are not described herein again.
An embodiment of the application provides an electronic device, and the structure of the electronic device is shown in fig. 1. The memory of the electronic device may be configured to store at least one program instruction, and the processor is configured to execute the at least one program instruction to implement the solution of the above-mentioned method embodiment. The implementation principle and technical effect are similar to those of the embodiments related to the method, and are not described herein again.
The embodiment of the application provides a chip. The chip comprises a processor, which is coupled with the memory and executes the computer program in the memory to execute the technical solution in the above embodiments. The principle and technical effects are similar to those of the related embodiments, and are not described herein again.
The embodiment of the present application provides a computer program product, which, when running on an electronic device, enables the electronic device to execute the technical solutions in the above embodiments. The principle and technical effects are similar to those of the related embodiments, and are not described herein again.
The embodiment of the present application provides a computer-readable storage medium, on which program instructions are stored; when the program instructions are executed by an electronic device, the electronic device is enabled to execute the technical solutions of the above embodiments. The principle and technical effects are similar to those of the related embodiments, and are not described herein again.

In summary, the above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced, and such modifications or substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application.

Claims (15)

1. A method for adjusting display parameters is characterized by comprising the following steps:
acquiring application drawing rendering time and layer synthesis time of a plurality of historical frame images;
determining a target application drawing rendering offset and a target layer synthesis offset according to the application drawing rendering time and the layer synthesis time of the plurality of historical frame images; the target application drawing rendering offset refers to the offset of a drawing rendering synchronous signal relative to a vertical synchronous signal for triggering the execution of an application drawing rendering process on the frame image to be displayed; the target layer synthesis offset is an offset of a synthesis synchronous signal which triggers the execution of a layer synthesis process on the frame image to be displayed relative to the vertical synchronous signal; the proportion of ideal frame images in the plurality of historical frame images is larger than a preset proportion; the ideal frame image is the historical frame image which completes the application drawing and rendering process and the layer composition process in a vertical synchronization period when the target application drawing and rendering offset and the target layer composition offset are used;
and adjusting the current application rendering offset according to the target application rendering offset, and adjusting the current layer synthesis offset according to the target layer synthesis offset.
2. The method of claim 1, wherein determining a target application rendering offset and a target layer composition offset based on an application rendering duration and a layer composition duration of the plurality of historical frame images comprises:
solving an offset optimization model according to the application rendering durations and layer synthesis durations of the plurality of historical frame images to obtain an optimal solution of an application rendering offset adjustment value; the offset optimization model is used for representing the maximum value of the product of the touch-following gain and the no-frame-loss rate; the touch-following gain is the sum of the application rendering offset adjustment value and one vertical synchronization period; the no-frame-loss rate refers to the proportion of the historical frame images without frame loss among the plurality of historical frame images when the application rendering offset and the layer synthesis offset obtained according to the application rendering offset adjustment value are used;
and determining the target application rendering offset and the target layer synthesis offset according to the optimal solution of the application rendering offset adjustment value.
3. The method according to claim 1 or 2, wherein the determining the target application rendering offset and the target layer composition offset according to the optimal solution of the application rendering offset adjustment value comprises:
determining the sum of the optimal solution of the application rendering offset adjustment value and an initial application rendering offset as the target application rendering offset, wherein the initial application rendering offset refers to the application rendering offset when offset adjustment is not performed;
and determining the target layer synthesis offset according to the target application rendering offset.
4. The method according to claim 3, wherein the determining the target layer composition offset according to the target application render offset comprises:
calculating an average value of application drawing rendering durations of the plurality of historical frame images;
and determining the sum of the average value and the target application drawing rendering offset as the target layer synthesis offset.
5. The method according to any one of claims 1 to 4, wherein the plurality of historical frame images are frame images in a first historical time period, and before the adjusting the current application rendering offset according to the target application rendering offset and adjusting the current layer composition offset according to the target layer composition offset, the method further comprises:
acquiring target application drawing and rendering offsets and target layer synthesis offsets corresponding to a plurality of second historical time periods, wherein the plurality of second historical time periods are positioned before the first historical time period, and the plurality of second historical time periods are continuous with the first historical time period;
the adjusting the current application rendering offset according to the target application rendering offset and the current layer synthesis offset according to the target layer synthesis offset comprises:
if the rendering selectable values are all larger than the current application rendering offset, determining an application rendering offset setting value according to the rendering selectable values, and determining a layer composition offset setting value according to the composition selectable values; the plurality of rendering selectable values comprise a target application rendering offset corresponding to the first historical time period and a target application rendering offset corresponding to the plurality of second historical time periods; the plurality of synthesis selectable values comprise target layer synthesis offsets corresponding to the first historical time period and target layer synthesis offsets corresponding to the plurality of second historical time periods;
and adjusting the current application rendering offset to the application rendering offset setting value, and adjusting the current layer synthesis offset to the layer synthesis offset setting value.
6. The method of claim 5, wherein determining an application render offset setting based on the plurality of render selectable values and determining a layer composition offset setting based on a plurality of composition selectable values comprises:
determining a rendering selectable value that is the smallest among the plurality of rendering selectable values as the application rendering offset setting value;
and determining the smallest synthesis selectable value in the synthesis selectable values as the layer synthesis offset setting value.
7. The method according to any one of claims 1 to 6, wherein before determining the target application rendering offset and the target layer composition offset according to the application rendering duration and the layer composition duration of the plurality of historical frame images, further comprising:
determining that the stability of the rendering and synthesizing time lengths of the plurality of historical frame images meets a preset condition according to the application rendering time lengths and the layer synthesizing time lengths of the plurality of historical frame images, wherein the proportion of adjustable frame images in the plurality of historical frame images is greater than a preset proportion; the rendering synthesis duration refers to the sum of application rendering duration and layer synthesis duration of the historical frame image, the adjustable frame image refers to the historical frame image of which the sum of the rendering synthesis duration and an initial application rendering offset is smaller than one vertical synchronization period, and the initial application rendering offset refers to the application rendering offset when offset adjustment is not performed.
8. The method according to claim 7, wherein determining that the stability of the rendering and compositing durations of the plurality of historical frame images meets a preset condition according to the application rendering durations and the layer compositing durations of the plurality of historical frame images, and the proportion of the adjustable frame images in the plurality of historical frame images is greater than a preset proportion comprises:
calculating the variance of the rendering and synthesizing time lengths of the plurality of historical frame images according to the application drawing rendering time lengths and the layer synthesizing time lengths of the plurality of historical frame images;
calculating a variance threshold according to the application drawing rendering time length and the layer composition time length of the plurality of historical frame images, the preset proportion, the vertical synchronization period and the initial application drawing rendering offset;
if the variance is smaller than the variance threshold, determining that the stability of the rendering and synthesizing time length of the plurality of historical frame images meets the preset condition, and determining that the proportion of adjustable frame images in the plurality of historical frame images is larger than the preset proportion.
9. The method of claim 8, wherein the preset ratio is 0.9973.
10. The method of claim 9, wherein calculating a variance threshold according to the application rendering durations and layer composition durations of the plurality of historical frame images, the preset proportion, the vertical synchronization period, and the initial application rendering offset comprises:

calculating the variance threshold according to

σ_threshold = (T_Vsync - APPoffset1 - RCTAvg) / 3

wherein σ_threshold represents the variance threshold, T_Vsync represents the vertical synchronization period, APPoffset1 represents the initial application rendering offset, and RCTAvg represents the average of the rendering synthesis durations of the plurality of historical frame images; the rendering synthesis duration refers to the sum of the application rendering duration and the layer synthesis duration of a historical frame image.
11. The method according to any one of claims 1 to 10, wherein the method is applied to an electronic device comprising a vertical synchronization control module, a vertical synchronization algorithm module and a setting module;
the vertical synchronization control module acquires application drawing rendering time and layer synthesis time of the plurality of historical frame images;
the vertical synchronization control module determines whether the stability of the rendering and synthesizing time lengths of the plurality of historical frame images meets a preset condition and determines whether the proportion of adjustable frame images in the plurality of historical frame images is greater than a preset proportion;
determining that the stability of the rendering and synthesizing time lengths of the plurality of historical frame images meets a preset condition, and the proportion of adjustable frame images in the plurality of historical frame images is greater than a preset proportion, wherein the vertical synchronization control module sends first indication information and the application rendering time lengths and the layer synthesizing time lengths of the plurality of historical frame images to the vertical synchronization algorithm module, and the first indication information is used for indicating the vertical synchronization algorithm module to calculate the target application rendering offset and the target layer synthesizing offset;
in response to receiving the first indication information, the vertical synchronization algorithm module calculates the target application rendering offset and the target layer composition offset based on application rendering durations and layer composition durations of the plurality of historical frame images;
the vertical synchronization algorithm module sends the target application drawing rendering offset and the target layer synthesis offset to the setting module;
the setting module sets an application drawing rendering offset in the setting module based on the target application drawing rendering offset, and sets a layer synthesis offset in the setting module based on the target layer synthesis offset;
determining that the stability of the rendering synthesis duration of the plurality of historical frame images does not meet the preset condition, or the proportion of adjustable frame images in the plurality of historical frame images is smaller than or equal to the preset proportion, the vertical synchronization control module sending second indication information, an initial application rendering offset and an initial layer synthesis offset to the setting module, the second indication information being used for indicating the setting module to set the application rendering offset in the setting module based on the initial application rendering offset, and setting the layer synthesis offset in the setting module based on the initial layer synthesis offset;
in response to receiving the second indication information, the setting module sets an application rendering offset in the setting module based on the initial application rendering offset, and sets a layer synthesis offset in the setting module based on the initial layer synthesis offset.
12. The method of claim 11, wherein the electronic device further comprises an application rendering module, a layer composition module, and an offset vertical synchronization generation module;
the offset vertical synchronization generating module receives a vertical synchronization signal;
the offset vertical synchronization generation module acquires a set application drawing rendering offset and a set layer synthesis offset from the setting module;
the offset vertical synchronization generation module generates the rendering synchronization signal based on the vertical synchronization signal and the set application rendering offset, and generates the synthesis synchronization signal based on the vertical synchronization signal and the set layer synthesis offset;
the offset vertical synchronization generation module sends the drawing and rendering synchronization signal to the application drawing and rendering module, so that the application drawing and rendering module triggers and executes an application drawing and rendering process based on the drawing and rendering synchronization signal;
and the offset vertical synchronization generation module sends the synthesized synchronization signal to the layer synthesis module, so that the layer synthesis module triggers and executes a layer synthesis process based on the synthesized synchronization signal.
13. An electronic device, comprising a processor configured to couple to a memory and to read instructions in the memory and to cause the electronic device to perform the method of any of claims 1-12 in accordance with the instructions.
14. A computer-readable storage medium having stored thereon computer instructions which, when run on an electronic device, cause the electronic device to perform the method of any one of claims 1-12.
15. A chip, characterized in that the chip comprises a processor for coupling with a memory and executing a computer program in the memory to perform the method according to any of the claims 1 to 12.
CN202110644476.9A 2021-06-09 2021-06-09 Display parameter adjusting method, electronic device, chip and readable storage medium Active CN114089933B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110644476.9A CN114089933B (en) 2021-06-09 2021-06-09 Display parameter adjusting method, electronic device, chip and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110644476.9A CN114089933B (en) 2021-06-09 2021-06-09 Display parameter adjusting method, electronic device, chip and readable storage medium

Publications (2)

Publication Number Publication Date
CN114089933A (en) 2022-02-25
CN114089933B (en) 2022-09-02

Family

ID=80296052

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110644476.9A Active CN114089933B (en) 2021-06-09 2021-06-09 Display parameter adjusting method, electronic device, chip and readable storage medium

Country Status (1)

Country Link
CN (1) CN114089933B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023216146A1 (en) * 2022-05-11 2023-11-16 北京小米移动软件有限公司 Display image updating method and apparatus and storage medium
CN117711355A (en) * 2022-08-24 2024-03-15 荣耀终端有限公司 Screen refresh rate switching method and electronic equipment
CN116089057B (en) * 2022-08-26 2023-10-20 荣耀终端有限公司 Resource scheduling method, device, storage medium and program product
CN116704087B (en) * 2022-10-17 2024-02-27 荣耀终端有限公司 Parameter adjustment method and electronic equipment
CN115665482B (en) * 2022-11-09 2023-06-30 腾讯科技(深圳)有限公司 Video rendering method, device, computer equipment and storage medium
CN116069187B (en) * 2023-01-28 2023-09-01 荣耀终端有限公司 Display method and electronic equipment
CN115942131B (en) * 2023-02-09 2023-09-01 蔚来汽车科技(安徽)有限公司 Method for guaranteeing vehicle looking-around function, cabin system, vehicle and storage medium
CN116600175B (en) * 2023-07-18 2023-10-20 荣耀终端有限公司 Frame loss prediction method and electronic equipment


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103942049B (en) * 2014-04-14 2018-09-07 百度在线网络技术(北京)有限公司 Implementation method, client terminal device and the server of augmented reality

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW201508695A (en) * 2013-08-28 2015-03-01 Tencent Tech Shenzhen Co Ltd Image processing method and device
CN109471587A (en) * 2018-11-13 2019-03-15 掌阅科技股份有限公司 The display methods and electronic equipment of handwritten content based on Java Virtual Machine
CN110018759A (en) * 2019-04-10 2019-07-16 Oppo广东移动通信有限公司 Interface display method, device, terminal and storage medium
CN110609645A (en) * 2019-06-25 2019-12-24 华为技术有限公司 Control method based on vertical synchronization signal and electronic equipment
WO2020259457A1 (en) * 2019-06-25 2020-12-30 华为技术有限公司 Control method based on vertical synchronizing signal, and electronic device
WO2021000921A1 (en) * 2019-07-03 2021-01-07 华为技术有限公司 Picture processing method based on vertical synchronous signals and electronic equipment
WO2021008420A1 (en) * 2019-07-17 2021-01-21 Oppo广东移动通信有限公司 Layer synthesis method and apparatus, electronic device, and storage medium
CN112351326A (en) * 2019-08-09 2021-02-09 华为技术有限公司 Image processing method based on vertical synchronization signal and electronic equipment

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Relating performance of thin-film composite forward osmosis membranes to support layer formation and structure; Alberto Tiraferri et al.; Journal of Membrane Science; 2011-02-28; pp. 340-352 *
Application and practice of color grading technology in film and television post-production; Zhang Shenghua et al.; Film and Television Production (影视制作); 2013-10-31; pp. 57-61 *

Also Published As

Publication number Publication date
CN114089933A (en) 2022-02-25

Similar Documents

Publication Publication Date Title
CN114089933B (en) Display parameter adjusting method, electronic device, chip and readable storage medium
CN109584879B (en) Voice control method and electronic equipment
CN109814766B (en) Application display method and electronic equipment
CN112130742B (en) Full screen display method and device of mobile terminal
CN109559270B (en) Image processing method and electronic equipment
CN113475057B (en) Video frame rate control method and related device
CN113838490B (en) Video synthesis method and device, electronic equipment and storage medium
CN113630572A (en) Frame rate switching method and related device
CN113722030B (en) Display method, electronic equipment and computer storage medium
CN112516590A (en) Frame rate identification method and electronic equipment
CN113986369B (en) Internet of things equipment control method and system, electronic equipment and storage medium
CN113542574A (en) Shooting preview method under zooming, terminal, storage medium and electronic equipment
CN114995715B (en) Control method of floating ball and related device
CN113590346B (en) Method and electronic equipment for processing service request
CN114740986A (en) Handwriting input display method and related equipment
CN114006976B (en) Interface display method and terminal equipment
WO2022143891A1 (en) Focal point synchronization method and electronic device
CN117850727A (en) Vsync signal control method, electronic device, storage medium and chip
CN115033094A (en) Method for evaluating motion state of user and related equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20230913

Address after: 201306 building C, No. 888, Huanhu West 2nd Road, Lingang New Area, Pudong New Area, Shanghai

Patentee after: Shanghai Glory Smart Technology Development Co.,Ltd.

Address before: Unit 3401, unit a, building 6, Shenye Zhongcheng, No. 8089, Hongli West Road, Donghai community, Xiangmihu street, Futian District, Shenzhen, Guangdong 518040

Patentee before: Honor Device Co.,Ltd.
