WO2022089153A1 - Control method based on vertical synchronization signal, and electronic device - Google Patents

Control method based on vertical synchronization signal, and electronic device

Info

Publication number
WO2022089153A1
Authority
WO
WIPO (PCT)
Prior art keywords
electronic device
signal
layers
vsync
vertical synchronization
Prior art date
Application number
PCT/CN2021/122218
Other languages
English (en)
French (fr)
Inventor
陈健
李煜
余谭其
谭威
周越海
王亮
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from CN202011197544.3A external-priority patent/CN114531519B/zh
Application filed by Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority to EP21884882.8A priority Critical patent/EP4224834A4/en
Priority to US18/251,094 priority patent/US20230410767A1/en
Publication of WO2022089153A1 publication Critical patent/WO2022089153A1/zh

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/04Synchronising
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/14Display of multiple viewports
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/36Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
    • G09G5/39Control of the bit-mapped memory
    • G09G5/395Arrangements specially adapted for transferring the contents of the bit-mapped memory to the screen
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/54Interprogram communication
    • G06F9/546Message passing systems or structures, e.g. queues
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G3/00Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
    • G09G3/20Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix no fixed position being assigned to or needed to be assigned to the individual characters or partial characters
    • G09G3/34Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix no fixed position being assigned to or needed to be assigned to the individual characters or partial characters by control of light from an independent source
    • G09G3/36Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix no fixed position being assigned to or needed to be assigned to the individual characters or partial characters by control of light from an independent source using liquid crystals
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/36Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
    • G09G5/37Details of the operation on graphic patterns
    • G09G5/377Details of the operation on graphic patterns for mixing or overlaying two or more graphic patterns
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/50Indexing scheme relating to G06F9/50
    • G06F2209/5018Thread allocation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/54Indexing scheme relating to G06F9/54
    • G06F2209/545Gui
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/54Indexing scheme relating to G06F9/54
    • G06F2209/548Queue
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/041Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F3/0416Control or interface arrangements specially adapted for digitisers
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14Digital output to display device ; Cooperation and interconnection of the display device with other functional units
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2320/00Control of display operating conditions
    • G09G2320/02Improving the quality of display appearance
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2340/00Aspects of display data processing
    • G09G2340/04Changes in size, position or resolution of an image
    • G09G2340/0407Resolution change, inclusive of the use of different resolutions for different screen areas
    • G09G2340/0435Change or adaptation of the frame rate of the video stream
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2340/00Aspects of display data processing
    • G09G2340/12Overlay of images, i.e. displayed pixel being the result of switching between the corresponding input pixels
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2354/00Aspects of interface with display user
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2360/00Aspects of the architecture of display systems
    • G09G2360/12Frame memory handling
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2360/00Aspects of the architecture of display systems
    • G09G2360/18Use of a frame buffer in a display terminal, inclusive of the display panel
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/12Synchronisation between the display unit and other units, e.g. other display units, video-disc players
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/18Timing circuits for raster scan displays

Definitions

  • the embodiments of the present application relate to the technical field of image processing and display, and in particular, to a control method and electronic device based on a vertical synchronization signal.
  • the premise of ensuring the continuity of the image displayed by the electronic device is that the display screen of the electronic device does not lose frames.
  • the high frame rate display of electronic devices is also a development trend.
  • the frame rate of electronic devices has evolved from 60 hertz (Hz) to 90 Hz to 120 Hz.
  • the higher the frame rate of the electronic device, the more likely the problem of frame loss occurs, which leads to incoherence of the display content of the electronic device and affects the user experience. Therefore, in a high-frame-rate scenario, how to reduce or even avoid frame loss when an electronic device displays an image, and how to preserve the smoothness of the displayed image, is an urgent problem to be solved.
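The shrinking per-frame time budget behind this trend can be seen with a quick calculation (an illustration only; `frame_budget_ms` is an invented helper name, not part of the application):

```python
def frame_budget_ms(refresh_hz):
    """Time available to produce and display each frame at the given refresh rate."""
    return 1000 / refresh_hz

# The per-frame budget shrinks as the frame rate rises, which is why
# frame loss becomes more likely in high-frame-rate scenarios.
for hz in (60, 90, 120):
    print(f"{hz} Hz -> {frame_budget_ms(hz):.2f} ms per frame")
```

At 120 Hz the drawing, rendering, and composition pipeline has roughly half the time per frame that it has at 60 Hz.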
  • Embodiments of the present application provide a control method and an electronic device based on a vertical synchronization signal, which can reduce the possibility of frame loss when the electronic device displays an image, and can ensure the smoothness of the displayed image on the display screen, thereby improving the user's visual experience.
  • the present application provides a control method based on a vertical synchronization signal, and the method is applied to an electronic device.
  • the method includes: the electronic device draws a first layer in response to the first vertical synchronization signal and buffers the first layer in a first buffer queue; in response to the second vertical synchronization signal, the electronic device performs layer composition on the layers in the first buffer queue to obtain an image frame; and if the number of layers buffered in the first buffer queue is less than a first preset threshold, the electronic device adjusts the signal period of the first vertical synchronization signal to a first duration, where the first duration is less than the signal period of the second vertical synchronization signal.
  • when the frame data cached in the first buffer queue is insufficient, the image displayed by the electronic device is prone to frame loss.
  • the reason for frame loss in the displayed image of the electronic device is that the consumer (e.g., the composition thread) cannot read frame data from the first buffer queue after a second vertical synchronization signal (e.g., VSYNC_SF) arrives. Conversely, if the consumer (e.g., the composition thread) can read frame data from the first buffer queue after each VSYNC_SF signal arrives, no frame loss occurs when the electronic device displays an image.
  • SF refers to SurfaceFlinger.
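The frame-loss condition described above can be sketched as a tiny producer/consumer model (a hypothetical illustration, not the actual SurfaceFlinger implementation; all names are invented):

```python
from collections import deque

def consume_on_vsync_sf(buffer_queue):
    """On each VSYNC_SF tick the composition thread tries to dequeue
    one rendered layer. An empty queue means a dropped frame."""
    if buffer_queue:
        return buffer_queue.popleft()  # layer is composed into an image frame
    return None  # nothing to compose: this display refresh loses a frame

queue = deque()                # first buffer queue, currently empty
frame = consume_on_vsync_sf(queue)
print(frame is None)           # True: a VSYNC_SF arrived with no frame data
```

The control method below aims to keep this queue from running empty in the first place.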
  • the electronic device may adjust the signal period of the first vertical synchronization signal to the first duration.
  • the signal period of the first vertical synchronization signal is smaller than that of the second vertical synchronization signal.
  • in this way, the production rate of the producer (i.e., the UI thread and the Render thread) becomes higher than the consumption rate of the consumer (i.e., the composition thread), so the first buffer queue is gradually replenished.
  • the above solution can reduce the possibility of frame loss when the electronic device displays an image without increasing the power consumption of the electronic device, and can ensure the smoothness of the displayed image on the display screen.
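A minimal simulation (an idealized model under stated assumptions; `simulate` and its parameter values are invented for illustration) shows why a shorter VSYNC_APP period replenishes the queue:

```python
def simulate(t_app, t_sf, duration):
    """Idealized model: the producer (UI/Render threads) enqueues one
    layer every t_app ms and the consumer (composition thread) dequeues
    one every t_sf ms; returns the surplus layers left in the queue.
    Assumes drawing always finishes within t_app."""
    produced = duration // t_app
    consumed = duration // t_sf
    return max(produced - consumed, 0)

# Equal periods: the queue never builds up a safety margin.
print(simulate(t_app=16, t_sf=16, duration=160))   # 0
# VSYNC_APP period shortened below the VSYNC_SF period: the queue
# is gradually replenished, so later VSYNC_SF ticks find frame data.
print(simulate(t_app=14, t_sf=16, duration=160))   # 1
```

Even a small period difference accumulates spare layers over time, which absorbs occasional slow draws without a dropped frame.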
  • in response to the first application being switched to the foreground application, if the number of layers buffered in the first buffer queue is less than the first preset threshold, the electronic device adjusts the signal period of the first vertical synchronization signal to the first duration.
  • the first cache queue is allocated for the first application.
  • the first buffer queue is applied for by a rendering (Render) thread of the electronic device and allocated for the first application.
  • the electronic device may allocate a buffer queue for each application respectively.
  • the foregoing foreground application is an application corresponding to the interface currently displayed on the display screen of the electronic device.
  • the application interface of the first application is displayed on the display screen of the electronic device and is visible to the user. If frame loss occurs while the electronic device performs layer drawing, layer rendering, image frame composition, and display for the first application, the smoothness of the displayed image, and hence the user experience, is affected. Therefore, when the first application is switched to the foreground application, the electronic device can perform the above method to adjust the signal period of the VSYNC_APP signal, so as to reduce the possibility of frame loss when the electronic device displays an image.
  • after the electronic device adjusts the signal period of the VSYNC_APP signal to the first duration, the possibility of frame loss may still not be reduced. For this situation, if the number of layers buffered in the first buffer queue is less than a second preset threshold, the electronic device can continue to reduce the signal period of the VSYNC_APP signal to reduce the possibility of frame loss when the electronic device displays images.
  • the method of the present application may further include: if the number of layers buffered in the first buffer queue is less than the second preset threshold, the electronic device adjusts the signal period of the first vertical synchronization signal to the second duration.
  • the second preset threshold is smaller than the first preset threshold, and the second duration is shorter than the first duration.
  • the electronic device can further reduce the signal period of the VSYNC_APP signal to reduce the possibility of frame loss when the electronic device displays images to ensure the smoothness of the displayed image on the display.
  • the electronic device may also, when the number of layers buffered in the first buffer queue is greater than a third preset threshold, adjust the signal period of the first vertical synchronization signal (i.e., the VSYNC_APP signal) so that the signal period of the first vertical synchronization signal is equal to the signal period of the second vertical synchronization signal.
  • the third preset threshold is greater than or equal to the first preset threshold.
  • the electronic device can adjust the signal period of the first vertical synchronization signal so that the signal period of the first vertical synchronization signal is equal to the signal period of the second vertical synchronization signal. In this way, the problem that the first cache queue cannot cache the frame data produced by the UI thread and the Render thread (ie, the producer) can be avoided.
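Putting the three thresholds together, the overall adjustment policy might be sketched as follows (a hypothetical sketch: the threshold values, durations in ms, and the function name are invented for illustration, not prescribed by the application):

```python
def adjust_vsync_app_period(buffered, t_sf,
                            first_threshold=2, second_threshold=1,
                            third_threshold=2,
                            first_duration=14, second_duration=12):
    """Return the VSYNC_APP signal period to use next, given the number
    of layers currently buffered in the first buffer queue. Shrink the
    period when the queue runs low; restore it to the VSYNC_SF period
    once enough layers are buffered again, to avoid queue overflow."""
    if buffered < second_threshold:   # queue nearly empty: shrink further
        return second_duration
    if buffered < first_threshold:    # queue low: shrink
        return first_duration
    if buffered > third_threshold:    # queue healthy: restore
        return t_sf
    return None                       # keep the current period unchanged

print(adjust_vsync_app_period(0, t_sf=16))  # 12: below the second threshold
print(adjust_vsync_app_period(1, t_sf=16))  # 14: below the first threshold
print(adjust_vsync_app_period(3, t_sf=16))  # 16: restored to the VSYNC_SF period
```

The restore branch is what prevents the producer from permanently outpacing the consumer and overflowing the queue.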
  • when the foreground application of the electronic device changes, the electronic device can also be triggered to execute the above method and re-adjust the signal period of the VSYNC_APP signal, to reduce the possibility of frame loss when the electronic device displays images.
  • in response to the foreground application being switched from the first application to a second application, the electronic device may adjust the signal period of the VSYNC_APP signal to a third duration.
  • that the number of layers buffered in the first buffer queue is less than the first preset threshold specifically includes: in response to the second vertical synchronization signal, before performing layer composition on the first frame of layers in the first buffer queue, the electronic device reads the number of layers cached in the first buffer queue, and the read number is less than the first preset threshold.
  • the first preset threshold is equal to 2.
  • if the number of layers cached in the first buffer queue is less than 2 before the frame data is dequeued, then after the frame data is dequeued the number of cached layers is less than 1, that is, the number of layers cached in the first buffer queue is 0.
  • in that case, when the next second vertical synchronization signal (e.g., VSYNC_SF) arrives, no frame data can be read from the first buffer queue, and frame loss may occur.
  • that the number of layers buffered in the first buffer queue is less than the first preset threshold specifically includes: in response to the second vertical synchronization signal, after performing layer composition on the first frame of layers in the first buffer queue, the electronic device reads the number of layers cached in the first buffer queue, and the read number is less than the first preset threshold.
  • the first preset threshold is equal to 1.
  • if the number of layers cached in the first buffer queue is less than 1 after the frame data is dequeued, the number of layers cached in the first buffer queue is 0.
  • in that case, when the next second vertical synchronization signal (e.g., VSYNC_SF) arrives, no frame data can be read from the first buffer queue, and frame loss may occur.
  • the above method may further include: if the number of layers buffered in the first buffer queue is greater than a third preset threshold, the electronic device adjusts the signal period of the first vertical synchronization signal, so that the signal period of the first vertical synchronization signal is equal to the signal period of the second vertical synchronization signal.
  • the third preset threshold is greater than or equal to the first preset threshold.
  • that the number of layers buffered in the first buffer queue is greater than the third preset threshold specifically includes: in response to the second vertical synchronization signal, before performing layer composition on the first frame of layers in the first buffer queue, the electronic device reads the number of layers cached in the first buffer queue, and the read number is greater than the third preset threshold.
  • the third preset threshold is equal to 1.
  • if the number of layers cached in the first buffer queue is greater than 1 before the frame data is dequeued, the number of cached layers is at least 2; then, after the frame data is dequeued, the number of layers cached in the first buffer queue is at least 1.
  • thus, when the next second vertical synchronization signal (e.g., VSYNC_SF) arrives, frame data can be read from the first buffer queue, and the electronic device displays images without frame loss.
  • that the number of layers buffered in the first buffer queue is greater than the third preset threshold specifically includes: in response to the second vertical synchronization signal, after performing layer composition on the first frame of layers in the first buffer queue, the electronic device reads the number of layers cached in the first buffer queue, and the read number is greater than the third preset threshold.
  • the third preset threshold is equal to 0.
  • if the number of layers cached in the first buffer queue is greater than 0 after the frame data is dequeued, the number of layers cached in the first buffer queue is at least 1.
  • thus, when the next second vertical synchronization signal (e.g., VSYNC_SF) arrives, frame data can be read from the first buffer queue, and the electronic device displays images without frame loss.
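The two threshold variants above, reading the layer count before versus after dequeuing, describe the same condition, which a few lines make concrete (a hypothetical illustration using Python's `deque` in place of the real buffer queue):

```python
from collections import deque

# Two rendered layers are buffered before composition.
queue = deque(["layer0", "layer1"])

count_before = len(queue)   # read BEFORE dequeuing
queue.popleft()             # compose the first frame
count_after = len(queue)    # read AFTER dequeuing

# Checking `count_before < 2` is equivalent to checking
# `count_after < 1`, which is why the first preset threshold is 2 in
# the before-dequeue variant and 1 in the after-dequeue variant
# (likewise 1 vs. 0 for the third preset threshold).
print(count_before, count_after)  # 2 1
```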
  • the electronic device adjusting the signal period of the first vertical synchronization signal to the first duration includes: the electronic device decreases the signal period of the first vertical synchronization signal by ΔT, so that the signal period of the first vertical synchronization signal equals the first duration, and the first duration is smaller than the signal period of the second vertical synchronization signal.
  • the above-mentioned ⁇ T (eg, ⁇ T1 ) is a fixed time period pre-configured in the electronic device.
  • the above ⁇ T is determined according to the screen refresh rate of the electronic device. The higher the screen refresh rate, the smaller the ⁇ T, and the lower the screen refresh rate, the larger the ⁇ T.
  • ⁇ T is determined according to the difference between the signal period of the second vertical synchronization signal and the first drawing frame length of the first statistical period, where the first drawing frame length is drawn by the electronic device The time required for the layer, ⁇ T is less than or equal to the above difference.
  • the present application provides an electronic device, and the electronic device includes a display screen, a memory, and one or more processors; the display screen, the memory, and the processor are coupled; the memory is used for storing computer program code, and the computer program code includes computer instructions; when the processor executes the computer instructions, the electronic device performs the method described in the first aspect and any possible design manner thereof.
  • the present application provides a chip system, which is applied to an electronic device including a display screen.
  • the chip system includes one or more interface circuits and one or more processors.
  • the interface circuit and the processor are interconnected by wires.
  • the interface circuit is used to receive a signal from the memory of the electronic device and send a signal to the processor, where the signal includes computer instructions stored in the memory; when the processor executes the computer instructions, the electronic device performs the method described in the first aspect and any possible design manner thereof.
  • the present application provides a computer storage medium, comprising computer instructions, which, when executed on an electronic device, cause the electronic device to perform the method described in the first aspect and any possible design manner thereof.
  • the present application provides a computer program product, which, when the computer program product runs on a computer, causes the computer to execute the method described in the first aspect and any possible design manners thereof.
  • for the beneficial effects that can be achieved by the electronic device described in the second aspect, the chip system described in the third aspect, the computer storage medium described in the fourth aspect, and the computer program product described in the fifth aspect, reference may be made to the beneficial effects of the first aspect and any possible design manner thereof, which are not repeated here.
  • FIG. 1 is a schematic diagram of a hardware structure of an electronic device provided by an embodiment of the present application.
  • FIG. 2 is a schematic diagram of a vertical synchronization signal provided by an embodiment of the present application.
  • FIG. 3 is a schematic diagram of a software processing flow of an electronic device displaying an image in response to a touch operation according to an embodiment of the present application
  • FIG. 4A is a schematic diagram of the principle of layer drawing, rendering, synthesis, and image frame display performed by an electronic device in a solution;
  • FIG. 4B is a schematic diagram of the layer production and consumption principle of an electronic device in a solution;
  • FIG. 5A is a schematic diagram of the change of frame data (i.e., layers) in the first buffer queue during the process of layer drawing, rendering, synthesis, and image frame display performed by the electronic device shown in FIG. 4A;
  • FIG. 5B is a schematic diagram of the change of frame data (i.e., layers) in the first buffer queue during the process of layer drawing, rendering, synthesis, and image frame display performed by the electronic device shown in FIG. 4A;
  • FIG. 6 is a schematic diagram of the principle of layer drawing, rendering, synthesis and image frame display performed by electronic equipment in a solution
  • FIG. 7 is a schematic diagram of the principle of a control method based on a vertical synchronization signal provided by an embodiment of the present application.
  • FIG. 8 is a flowchart of a control method based on a vertical synchronization signal provided by an embodiment of the present application
  • FIG. 9 is a schematic diagram of the principle of adjusting the VSYNC_APP signal and the VSYNC_SF signal in a control method based on a vertical synchronization signal provided by an embodiment of the present application;
  • FIG. 10 is a schematic diagram of the principle of layer drawing, rendering, synthesis, and image frame display performed by an electronic device according to an embodiment of the present application;
  • FIG. 11 is a schematic diagram of the principle of layer drawing, rendering, synthesis, and image frame display performed by another electronic device according to an embodiment of the present application;
  • FIG. 12 is a flowchart of another control method based on a vertical synchronization signal provided by an embodiment of the present application.
  • FIG. 13 is a flowchart of another control method based on a vertical synchronization signal provided by an embodiment of the present application.
  • FIG. 15 is a schematic diagram of the principle of an electronic device adjusting the VSYNC_APP signal and the VSYNC_SF signal in a solution;
  • FIG. 16 is a schematic diagram of the principle of an electronic device adjusting the VSYNC_APP signal and the VSYNC_SF signal according to an embodiment of the present application;
  • FIG. 17 is a schematic structural composition diagram of a chip system provided by an embodiment of the present application.
  • the terms "first" and "second" are used for descriptive purposes only, and should not be construed as indicating or implying relative importance or implicitly indicating the number of indicated technical features.
  • a feature defined as "first" or "second" may expressly or implicitly include one or more of that feature.
  • "plural" means two or more.
  • An embodiment of the present application provides a control method based on a vertical synchronization signal, and the method can be applied to an electronic device including a display screen (e.g., a touch screen).
  • the aforementioned electronic device may be a cell phone, tablet computer, desktop computer, laptop computer, handheld computer, notebook computer, ultra-mobile personal computer (UMPC), netbook, cellular phone, personal digital assistant (PDA), augmented reality (AR) or virtual reality (VR) device, or another device including a display screen (such as a touch screen); the embodiments of the present application do not specially limit the specific form of the electronic device.
  • FIG. 1 is a schematic structural diagram of an electronic device 100 according to an embodiment of the present application.
  • the electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a headphone interface 170D, a sensor module 180, a key 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (SIM) card interface 195, and the like.
  • the sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
  • the structure illustrated in this embodiment does not constitute a specific limitation on the electronic device 100 .
  • the electronic device 100 may include more or fewer components than shown, or some components may be combined, or some components may be split, or a different arrangement of components.
  • the illustrated components may be implemented in hardware, software, or a combination of software and hardware.
  • the processor 110 may include one or more processing units, for example, the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), controller, memory, video codec, digital signal processor (digital signal processor, DSP), baseband processor, and/or neural-network processing unit (NPU) Wait. Wherein, different processing units may be independent devices, or may be integrated in one or more processors.
  • the controller may be the nerve center and command center of the electronic device 100 .
  • the controller can generate an operation control signal according to the instruction operation code and the timing signal, completing the control of instruction fetching and execution.
  • a memory may also be provided in the processor 110 for storing instructions and data.
  • the memory in the processor 110 is a cache memory. This memory may hold instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to use the instructions or data again, they can be called directly from this memory. Repeated accesses are avoided and the waiting time of the processor 110 is reduced, thereby improving the efficiency of the system.
  • the processor 110 may include one or more interfaces.
  • the interface may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
  • the interface connection relationship between the modules illustrated in this embodiment is only a schematic illustration, and does not constitute a structural limitation of the electronic device 100 .
  • the electronic device 100 may also adopt different interface connection manners in the foregoing embodiments, or a combination of multiple interface connection manners.
  • the charging management module 140 is used to receive charging input from the charger. While the charging management module 140 charges the battery 142 , it can also supply power to the electronic device through the power management module 141 .
  • the power management module 141 is used for connecting the battery 142 , the charging management module 140 and the processor 110 .
  • the power management module 141 receives input from the battery 142 and/or the charge management module 140, and supplies power to the processor 110, the internal memory 121, the external memory, the display screen 194, the camera 293, and the wireless communication module 160.
  • the power management module 141 may also be provided in the processor 110 .
  • the power management module 141 and the charging management module 140 may also be provided in the same device.
  • the wireless communication function of the electronic device 100 can be realized by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modulation and demodulation processor, the baseband processor, and the like.
  • Antenna 1 and Antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in electronic device 100 may be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve antenna utilization.
  • the antenna 1 can be multiplexed as a diversity antenna of the wireless local area network.
  • the mobile communication module 150 may provide wireless communication solutions including 2G/3G/4G/5G etc. applied on the electronic device 100 .
  • the mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (LNA) and the like.
  • the mobile communication module 150 can receive electromagnetic waves from the antenna 1, filter and amplify the received electromagnetic waves, and transmit them to the modulation and demodulation processor for demodulation.
  • the mobile communication module 150 can also amplify the signal modulated by the modulation and demodulation processor, and then turn it into an electromagnetic wave for radiation through the antenna 1 .
  • the modem processor may include a modulator and a demodulator.
  • the modulator is used to modulate the low frequency baseband signal to be sent into a medium and high frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low frequency baseband signal. Then the demodulator transmits the demodulated low-frequency baseband signal to the baseband processor for processing.
  • the low frequency baseband signal is processed by the baseband processor and passed to the application processor.
  • the application processor outputs sound signals through audio devices (not limited to the speaker 170A, the receiver 170B, etc.), or displays images or videos through the display screen 194 .
  • the wireless communication module 160 can provide wireless communication solutions applied on the electronic device 100, including wireless local area network (WLAN) (such as wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), and infrared (IR) technology.
  • the wireless communication module 160 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 160 receives electromagnetic waves via the antenna 2 , frequency modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110 .
  • the wireless communication module 160 can also receive the signal to be sent from the processor 110 , perform frequency modulation on it, amplify it, and convert it into electromagnetic waves for radiation through the antenna 2 .
  • the antenna 1 of the electronic device 100 is coupled with the mobile communication module 150, and the antenna 2 is coupled with the wireless communication module 160, so that the electronic device 100 can communicate with the network and other devices through wireless communication technology.
  • the wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division synchronous code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology, etc.
  • the GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or the satellite based augmentation systems (SBAS).
  • the electronic device 100 implements a display function through a GPU, a display screen 194, an application processor, and the like.
  • the GPU is a microprocessor for image processing, and is connected to the display screen 194 and the application processor.
  • the GPU is used to perform mathematical and geometric calculations for graphics rendering.
  • Processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
  • Display screen 194 is used to display images, videos, and the like.
  • the display screen 194 includes a display panel.
  • the display panel can be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Miniled, a MicroLed, a Micro-oLed, a quantum dot light-emitting diode (QLED), and so on.
  • the display screen 194 in this embodiment of the present application may be a touch screen. That is, the touch sensor 180K is integrated in the display screen 194 .
  • the touch sensor 180K may also be referred to as a "touch panel". That is, the display screen 194 may include a display panel and a touch panel, and the touch sensor 180K and the display screen 194 form a touch screen, also referred to as a "touch screen".
  • the touch sensor 180K is used to detect a touch operation on or near it. After the touch sensor 180K detects a touch operation, a kernel-layer driver (such as the TP driver) can transmit it to the upper layers to determine the type of the touch event. Visual output related to the touch operation may be provided through the display screen 194.
  • the touch sensor 180K may also be disposed on the surface of the electronic device 100 , which is different from the location where the display screen 194 is located.
  • the electronic device 100 can realize the shooting function through the ISP, the camera 293, the video codec, the GPU, the display screen 194 and the application processor.
  • the ISP is used to process the data fed back by the camera 293 .
  • Camera 293 is used to capture still images or video.
  • the digital signal processor is used to process digital signals; in addition to digital image signals, it can also process other digital signals.
  • Video codecs are used to compress or decompress digital video.
  • the electronic device 100 may support one or more video codecs. In this way, the electronic device 100 can play or record videos in various encoding formats, such as moving picture experts group (MPEG)-1, MPEG-2, MPEG-3, MPEG-4, and so on.
  • the NPU is a neural-network (NN) computing processor.
  • Applications such as intelligent cognition of the electronic device 100 can be implemented through the NPU, such as image recognition, face recognition, speech recognition, text understanding, and the like.
  • the external memory interface 120 can be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device 100 .
  • the external memory card communicates with the processor 110 through the external memory interface 120 to realize the data storage function, for example, to save files such as music and videos in the external memory card.
  • Internal memory 121 may be used to store computer executable program code, which includes instructions.
  • the processor 110 executes various functional applications and data processing of the electronic device 100 by executing the instructions stored in the internal memory 121 .
  • the processor 110 may execute instructions stored in the internal memory 121, and the internal memory 121 may include a program storage area and a storage data area.
  • the storage program area can store an operating system, an application program required for at least one function (such as a sound playback function, an image playback function, etc.), and the like.
  • the storage data area may store data (such as audio data, phone book, etc.) created during the use of the electronic device 100 and the like.
  • the internal memory 121 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, universal flash storage (UFS), and the like.
  • the electronic device 100 may implement audio functions through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, an application processor, and the like. Such as music playback, recording, etc.
  • the audio module 170 is used to convert digital audio information into an analog audio signal output, and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. The speaker 170A, also referred to as the "loudspeaker", is used to convert audio electrical signals into sound signals. The receiver 170B, also referred to as the "earpiece", is used to convert audio electrical signals into sound signals. The microphone 170C, also called the "mic", is used to convert sound signals into electrical signals. The earphone jack 170D is used to connect wired earphones.
  • the pressure sensor 180A is used to sense pressure signals, and can convert the pressure signals into electrical signals.
  • the pressure sensor 180A may be provided on the display screen 194 .
  • the capacitive pressure sensor may be comprised of at least two parallel plates of conductive material. When a force is applied to the pressure sensor 180A, the capacitance between the electrodes changes.
  • the electronic device 100 determines the intensity of the pressure according to the change in capacitance. When a touch operation acts on the display screen 194, the electronic device 100 detects the intensity of the touch operation according to the pressure sensor 180A.
  • the electronic device 100 may also calculate the touched position according to the detection signal of the pressure sensor 180A.
  • touch operations acting on the same touch position but with different touch operation intensities may correspond to different operation instructions.
  • the electronic device 100 may acquire the pressing force of the user's touch operation through the pressure sensor 180A.
  • the keys 190 include a power key, a volume key, and the like. The keys 190 may be mechanical keys or touch keys.
  • the electronic device 100 may receive key inputs and generate key signal inputs related to user settings and function control of the electronic device 100 .
  • Motor 191 can generate vibrating cues.
  • the motor 191 can be used for vibrating alerts for incoming calls, and can also be used for touch vibration feedback.
  • the indicator 192 can be an indicator light, which can be used to indicate the charging state, the change of the power, and can also be used to indicate a message, a missed call, a notification, and the like.
  • the SIM card interface 195 is used to connect a SIM card.
  • the SIM card can be contacted and separated from the electronic device 100 by inserting into the SIM card interface 195 or pulling out from the SIM card interface 195 .
  • the electronic device 100 may support 1 or N SIM card interfaces, where N is a positive integer greater than 1.
  • the SIM card interface 195 can support Nano SIM card, Micro SIM card, SIM card and so on.
  • this embodiment of the present application introduces the vertical synchronization signals involved in the described technique, such as vertical synchronization signal 1, vertical synchronization signal 2, and vertical synchronization signal 3.
  • Vertical synchronization signal 1 such as VSYNC_APP signal.
  • the vertical synchronization signal 1 can be used to trigger the drawing of one or more layers and render the drawn layers. That is to say, the above vertical synchronization signal 1 may be used to trigger the UI thread to draw one or more layers, and the Render thread will render the one or more layers drawn by the UI thread.
  • the vertical synchronization signal 1 (eg VSYNC_APP signal) is the first vertical synchronization signal.
  • Vertical synchronization signal 2 such as VSYNC_SF signal.
  • the vertical synchronization signal 2 can be used to trigger layer composition of one or more layers to be rendered to obtain image frames. That is to say, the above-mentioned vertical synchronization signal 2 can be used to trigger the composition thread to perform layer composition on one or more layers rendered by the Render thread to obtain an image frame.
  • the vertical synchronization signal 2 (eg VSYNC_SF signal) is the second vertical synchronization signal.
  • Vertical synchronization signal 3 such as HW_VSYNC signal.
  • the vertical synchronization signal 3 can be used to trigger hardware to refresh and display image frames.
  • the vertical synchronization signal 3 is a hardware signal triggered by the display screen driving of the electronic device.
  • the signal period T3 of the vertical synchronization signal 3 (eg HW_VSYNC) is determined according to the screen refresh rate of the display screen of the electronic device.
  • the signal period T3 of the vertical synchronization signal 3 is the reciprocal of the screen refresh rate of the display screen (eg, LCD or OLED) of the electronic device.
  • the screen refresh rate of the electronic device may be the same as the frame rate of the electronic device.
  • a higher frame rate of the electronic device corresponds to a higher screen refresh rate.
  • the screen refresh rate and frame rate of the display screen of the electronic device may be any value such as 60 Hz, 70 Hz, 75 Hz, 80 Hz, 90 Hz, or 120 Hz.
  • the electronic device may support multiple different frame rates.
  • the frame rate of the electronic device can be switched between the different frame rates mentioned above.
  • the frame rate described in the embodiments of this application is the frame rate currently used by the electronic device. That is, the signal period of the vertical synchronization signal 3 is the inverse of the frame rate currently used by the electronic device.
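As a numeric illustration only (not part of the embodiment, and the helper name is hypothetical), the signal period of vertical synchronization signal 3 can be computed as the reciprocal of the frame rate currently in use:

```python
def vsync_period_ms(frame_rate_hz: float) -> float:
    """Signal period T3 in milliseconds: the reciprocal of the frame rate."""
    return 1000.0 / frame_rate_hz

# At 60 Hz one VSYNC cycle lasts ~16.67 ms (often quoted as 16.66 ms);
# at 120 Hz it lasts ~8.33 ms.
print(round(vsync_period_ms(60), 2))   # 16.67
print(round(vsync_period_ms(120), 2))  # 8.33
```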
  • the vertical synchronization signal 3 in this embodiment of the present application is a periodic discrete signal.
  • the vertical sync signal 1 and the vertical sync signal 2 are generated based on the vertical sync signal 3 , that is, the vertical sync signal 3 may be the signal source of the vertical sync signal 1 and the vertical sync signal 2 .
  • the vertical synchronization signal 1 and the vertical synchronization signal 2 are synchronized with the vertical synchronization signal 3 . Therefore, in general, the signal periods of the vertical synchronization signal 1 and the vertical synchronization signal 2 are the same as the signal period of the vertical synchronization signal 3, and the phases are the same.
  • the signal period T1 of the vertical synchronization signal 1 and the signal period T2 of the vertical synchronization signal 2 are the same as the signal period T3 of the vertical synchronization signal 3 .
  • the phases of the vertical synchronization signal 1 , the vertical synchronization signal 2 , and the vertical synchronization signal 3 match. It can be understood that, in an actual implementation process, a certain phase error may exist between the vertical synchronization signal 1, the vertical synchronization signal 2, and the vertical synchronization signal 3 due to various factors (eg, processing performance). It should be noted that, when understanding the method of the embodiment of the present application, the above-mentioned phase error is ignored.
  • the above-mentioned vertical synchronization signal 1 , vertical synchronization signal 2 and vertical synchronization signal 3 are all periodic discrete signals.
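As an aside for illustration only: since vertical synchronization signals 1 and 2 are generated from vertical synchronization signal 3 with the same signal period and matching phase (phase error ignored, as stated above), their tick timestamps coincide. A minimal Python sketch, with all names hypothetical:

```python
def hw_vsync_ticks(period_ms: float, n: int, phase_ms: float = 0.0):
    """Tick timestamps of vertical synchronization signal 3 (e.g. HW_VSYNC)."""
    return [phase_ms + i * period_ms for i in range(n)]

# Signals 1 (VSYNC_APP) and 2 (VSYNC_SF) share signal 3 as their source, so
# with the phase error ignored their tick lists are identical to HW_VSYNC's.
hw_vsync = hw_vsync_ticks(16.6, 4)
vsync_app = list(hw_vsync)   # T1 == T3, phases match
vsync_sf = list(hw_vsync)    # T2 == T3, phases match
assert vsync_app == vsync_sf == hw_vsync
```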
  • In different systems or architectures, the names of the vertical synchronization signals may be different.
  • For example, the name of the vertical synchronization signal used to trigger the drawing of one or more layers (that is, vertical synchronization signal 1) may not be VSYNC_APP.
  • However, no matter what the vertical synchronization signal is named, as long as it is a synchronization signal with a similar function and conforms to the technical idea of the method provided by the embodiments of the present application, it shall fall within the protection scope of the present application.
  • Taking the case where the above-mentioned display screen is a touch screen and the user's operation on the display screen is a touch operation as an example, the following introduces the software processing flow of the electronic device from "the user's finger inputs a touch operation on the touch screen" to "the touch screen displays the image corresponding to the touch operation".
  • the electronic device may include: a touch panel (TP)/TP driver (Driver) 10, an Input framework (ie Input Framework) 20, a UI framework (ie UI Framework) 30, and a Display framework (ie Display Framework) 40 and hardware display module 50.
  • the software processing flow of the electronic device may include the following steps (1) to (5).
  • Step (1) After the TP in the TP IC/TP driver 10 collects the touch operation of the user's finger on the TP of the electronic device, the TP driver reports the corresponding touch event to the Event Hub.
  • In response to the user's touch operation on the TP or the above UI event, the UI framework can call the UI thread to draw one or more layers corresponding to the touch event after the vertical synchronization signal 1 arrives, and then call the Render thread to render the one or more layers; then, the hardware composer (Hardware Composer, HWC) can call the composition thread to perform layer composition on the one or more layers (that is, the one or more rendered layers) after the vertical synchronization signal 2 arrives, to obtain an image frame; finally, the hardware display module can refresh and display the above image frame on the LCD after the vertical synchronization signal 3 arrives.
  • the above UI event may be triggered by a user's touch operation on the TP.
  • the UI event may be triggered automatically by the electronic device.
  • For example, when the foreground application of the electronic device automatically switches the displayed screen, the above UI event may be triggered.
  • the foreground application is the application corresponding to the interface currently displayed on the display screen of the electronic device.
  • the TP may periodically detect the user's touch operation. After the TP detects the touch operation, it can wake up the vertical synchronization signal 1 and vertical synchronization signal 2 to trigger the UI framework to perform layer drawing and rendering based on the vertical synchronization signal 1, and the hardware synthesis HWC to perform layer synthesis based on the vertical synchronization signal 2.
  • the detection period of the TP for detecting the touch operation is the same as the signal period T3 of the vertical synchronization signal 3 (eg HW_VSYNC).
  • the UI framework periodically performs layer drawing and rendering based on the vertical synchronization signal 1; the hardware composer HWC periodically performs layer composition based on the vertical synchronization signal 2; and the LCD periodically refreshes and displays the image frames based on the vertical synchronization signal 3.
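The three-stage periodicity described above (draw/render on one VSYNC cycle, compose on the next, display on the one after that) can be sketched as a simple schedule. This is an illustrative sketch only, with hypothetical function names; the cycle indices t1, t2, ... correspond to successive VSYNC ticks:

```python
def pipeline_schedule(frames, t0: int = 1):
    """Map each frame to its (draw, compose, display) VSYNC cycle index."""
    return {f: (t0 + i, t0 + i + 1, t0 + i + 2) for i, f in enumerate(frames)}

sched = pipeline_schedule(["a", "b"])
assert sched["a"] == (1, 2, 3)   # drawn at t1, composed at t2, shown at t3
assert sched["b"] == (2, 3, 4)   # drawn at t2, composed at t3, shown at t4
```

At a steady frame rate, one new frame enters and one finished frame leaves the pipeline per synchronization period.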
  • the vertical synchronization signal 1 is the VSYNC_APP signal
  • the vertical synchronization signal 2 is the VSYNC_SF signal
  • the vertical synchronization signal 3 is the HW_VSYNC signal.
  • In response to the VSYNC_APP signal at time t1, the UI thread of the electronic device executes "draw a" to draw layer a, and then the Render thread executes "render a" and "render a'" to render layer a;
  • In response to the VSYNC_SF signal at time t2, the composition thread of the electronic device executes "image frame composition a" to perform layer composition on the above layer a to obtain image frame a;
  • In response to the HW_VSYNC signal at time t3, the LCD of the electronic device executes "image frame display a" to refresh and display the above image frame a.
  • Similarly, in response to the VSYNC_APP signal at time t2, the UI thread of the electronic device executes "draw b" to draw layer b, and then the Render thread executes "render b" and "render b'" to render layer b.
  • In response to the VSYNC_SF signal at time t3, the composition thread of the electronic device executes "image frame composition b" to perform layer composition on the above layer b to obtain image frame b; in response to the HW_VSYNC signal at time t4, the LCD of the electronic device executes "image frame display b" to refresh and display the above image frame b.
  • the execution of "render b" by the CPU is the preparation before the GPU performs layer rendering on the drawn layer b
  • the execution of "render b'" by the GPU is that the electronic device formally performs layer rendering on the drawn layer b. That is to say, the drawing described in the embodiments of the present application may include: layer drawing performed by the UI thread and preparations before layer rendering performed by the Render thread on the layer drawn by the UI thread.
  • the process of drawing, rendering and synthesizing layers by the electronic device can constitute a graphics generation consumption model, such as the graphics generation consumption model 400 shown in FIG. 4B .
  • the UI thread and the Render thread of the electronic device act as producers to draw and render layers; the Render thread (that is, the renderer) can save the layers whose rendering has been completed.
  • the composition thread (that is, the compositor, Surface Flinger) acts as the consumer: it performs layer composition on the saved layers to obtain an image frame, and sends the image frame to the LCD (that is, the Display Controller) of the electronic device for display.
  • the first buffer queue is allocated according to the application, and one application can allocate one buffer queue.
  • the first cache queue is applied for by the Render thread and allocated for the application.
  • the producer (such as the Render thread) generates one frame of layer (i.e., frame data) every VSYNC cycle (such as the above synchronization period T_Z) and puts it into the first buffer queue, and the consumer (such as the composition thread) takes one frame of layer (i.e., frame data) out of the first buffer queue every VSYNC cycle (such as the above synchronization period T_Z) for layer composition (also called image frame composition). That is, the production cycle of the UI thread and the Render thread as producers is the same as the consumption cycle of the composition thread (i.e., Surface Flinger) as the consumer, and both are equal to the above synchronization period T_Z.
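The production/consumption model above can be sketched as follows. This is an illustrative Python model, not the actual Android BufferQueue implementation; all names are hypothetical:

```python
from collections import deque

buffer_queue = deque()          # the "first buffer queue"
log = []

def producer_tick(layer):
    # Producer (Render thread): enqueue one rendered layer per VSYNC cycle.
    buffer_queue.append(layer)
    log.append(f"enqueue {layer} ({len(buffer_queue)})")

def consumer_tick():
    # Consumer (composition thread): dequeue one layer per VSYNC cycle.
    if buffer_queue:
        layer = buffer_queue.popleft()
        log.append(f"compose {layer} ({len(buffer_queue)})")
    else:
        log.append("no frame data: frame dropped")

for layer in ["a", "b"]:        # equal production and consumption rates
    producer_tick(layer)
    consumer_tick()

# The queue count alternates 0 -> 1 -> 0 for each frame, as described.
assert log == ["enqueue a (1)", "compose a (0)",
               "enqueue b (1)", "compose b (0)"]
```

When production and consumption rates match, the queue never stays empty at a consumption tick, so no frame is dropped.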
  • the Render thread of the electronic device completes “rendering a”.
  • the Render thread can cache the rendered frame data a (i.e., layer a) into the first buffer queue; that is, the producer produces one frame of layer (i.e., frame data) and caches the layer into the first buffer queue.
  • the number of frame data (ie, layers) in the first cache queue increases from 0 to 1 (ie, 0->1).
  • That is, frame data a (i.e., layer a) is enqueued in the first buffer queue.
  • the composition thread of the electronic device may perform “image frame composition a” (also referred to as layer composition a).
  • the composition thread can read frame data a (i.e., layer a) from the first buffer queue; that is, the consumer consumes one frame of layer from the first buffer queue.
  • the number of frame data (ie, layers) in the first cache queue is reduced from 1 to 0 (ie, 1->0).
  • That is, frame data a (i.e., layer a) is dequeued from the first buffer queue.
  • the Render thread of the electronic device completes “rendering b”.
  • the Render thread can cache the rendered frame data b (i.e., layer b) into the first buffer queue; that is, the producer produces one frame of layer (i.e., frame data) and caches the layer into the first buffer queue.
  • the number of frame data (ie, layers) in the first cache queue increases from 0 to 1 (ie, 0->1).
  • That is, frame data b (i.e., layer b) is enqueued in the first buffer queue.
  • the composition thread of the electronic device may perform “image frame composition b” (also referred to as layer composition b).
  • the composition thread can read frame data b (i.e., layer b) from the first buffer queue; that is, the consumer consumes one frame of layer from the first buffer queue.
  • the number of frame data (ie, layers) in the first cache queue is reduced from 1 to 0 (ie, 1->0).
  • That is, frame data b (i.e., layer b) is dequeued from the first buffer queue.
  • the electronic device may drop frames during the process of drawing, rendering, synthesizing, and refreshing display image frames of layers. In this way, the coherence and smoothness of the displayed image on the display screen will be affected, thereby affecting the user's visual experience.
  • the reason for the frame loss phenomenon of the displayed image of the electronic device may be that the UI thread and the Render thread take too long to draw and render, so that the drawing and rendering cannot be completed within one VSYNC cycle (such as the above synchronization period T_Z).
  • the producer (such as the Render thread) cannot cache the frame data (that is, the rendered layer) in the first cache queue on time. That is to say, the producer (such as the Render thread) will not buffer frame data in the first buffer queue for at least one VSYNC cycle.
  • In this case, the producer's production rate will be lower than the consumer's consumption rate. If a sufficient amount of frame data is not buffered in the first buffer queue, the consumer (e.g., the composition thread) may be unable to read frame data from the first buffer queue when a VSYNC_SF signal arrives. Then, in that VSYNC cycle, layer composition cannot be performed to obtain an image frame, the image frame cannot be refreshed and displayed, the LCD display cannot be updated, and frame loss occurs. This affects the coherence and smoothness of the image displayed on the display screen, thereby degrading the user's visual experience.
  • For example, because the Render thread takes too long to render, the frame data b (i.e., layer b) cannot be cached into the first buffer queue by time t3; at time t3, the number of frame data in the first buffer queue is zero. Therefore, in response to the VSYNC_SF signal at time t3, the composition thread cannot read frame data from the first buffer queue, so no image frame can be obtained by layer composition, and consequently the LCD of the electronic device cannot refresh the displayed image frame at time t4; the frame loss phenomenon occurs.
  • the Render thread completes "rendering b"; the number of frame data (ie layers) in the first buffer queue increases from 0 to 1 (ie 0->1).
  • the composition thread can read frame data b (i.e., layer b) from the first buffer queue, and the number of frame data in the first buffer queue is reduced from 1 to 0 (i.e., 1->0).
  • As a result, the LCD of the electronic device can only execute "image frame display b" to refresh and display image frame b at time t5.
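This frame-loss scenario can be reproduced in a small simulation. The timings below are assumed for illustration (rendering frame b takes 25 ms against a 16.6 ms synchronization period); all names and numbers are hypothetical, not values from the embodiment:

```python
from collections import deque

T_Z = 16.6                             # synchronization period (ms), assumed
# Rendering of a starts at t1 = 0 and of b at t2 = T_Z; b overruns one cycle.
finish = {"a": 10.0, "b": T_Z + 25.0}

queue, events = deque(), []
pending = deque(["a", "b"])            # frames in rendering order

for cycle in (1, 2, 3):                # VSYNC_SF ticks at t2, t3, t4
    now = cycle * T_Z
    while pending and finish[pending[0]] <= now:
        queue.append(pending.popleft())     # rendered frame data enqueued
    events.append(("compose " + queue.popleft()) if queue else "drop")

# Frame a is composed at t2; at t3 the queue is empty (frame drop);
# frame b is only composed at t4.
assert events == ["compose a", "drop", "compose b"]
```

The "drop" at the second tick corresponds to the missing "image frame composition b" in the description: the LCD has nothing new to display for one cycle.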
  • The cause of the frame loss phenomenon of the image displayed by the electronic device is not only that the UI thread takes a long time to draw a layer or that the Render thread takes a long time to render a layer; it may also be that the electronic device has a higher frame rate and screen refresh rate.
  • the signal period T_Z of the VSYNC_APP signal, the VSYNC_SF signal and the HW_VSYNC signal is 16.66 ms. If the UI thread and the Render thread can complete the drawing and rendering of each frame of layers within the 16.66 ms, the display screen of the electronic device displays the image without dropping frames.
  • the signal period T_Z of the VSYNC_APP signal, the VSYNC_SF signal and the HW_VSYNC signal is 8.33 ms. If the UI thread and the Render thread can complete the drawing and rendering of each frame of layers within the 8.33 ms, the display screen of the electronic device displays the image without dropping frames.
  • the reason for the frame loss phenomenon of the image displayed by the electronic device may be that the electronic device cannot complete the drawing and rendering of one frame of layers within one frame (that is, one synchronization period T_Z), or that it cannot complete the layer composition of one frame of layers within one frame (that is, one synchronization period T_Z). Assume that the UI thread and the Render thread of the electronic device take t_cpu to process one frame of layers, and that the compositing thread takes t_SF. The condition for the electronic device to display the image without dropping frames is Max{t_cpu, t_SF} ≤ T_Z, where Max{} means taking the maximum value in the braces. In the following embodiments, the method of the embodiment of the present application is described by taking as an example the case where the UI thread and the Render thread cannot complete the drawing and rendering of one frame of layers in one frame, which results in frame loss in the displayed image of the electronic device.
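  • The no-drop condition Max{t_cpu, t_SF} ≤ T_Z can be expressed as a short check. A minimal sketch in Python; the function name and the sample timings are hypothetical, not taken from the embodiment:

```python
def drops_frames(t_cpu_ms: float, t_sf_ms: float, refresh_hz: float) -> bool:
    """Frame loss is possible when either stage exceeds one synchronization
    period T_Z = 1000 / refresh_hz; drop-free display requires
    Max{t_cpu, t_SF} <= T_Z."""
    t_z_ms = 1000.0 / refresh_hz
    return max(t_cpu_ms, t_sf_ms) > t_z_ms

print(drops_frames(14.0, 6.0, 60))   # False: 14 ms fits the 16.67 ms budget
print(drops_frames(14.0, 6.0, 90))   # True: 14 ms exceeds the 11.11 ms budget
```

The same per-frame budget explains why raising the refresh rate alone (without shortening t_cpu) can introduce frame loss.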
  • the operating frequencies of the CPU and GPU of the electronic device are increased.
  • increasing the working frequency of the CPU and GPU of the electronic device can increase the processing speed of the UI thread and the Render thread, thereby reducing the time it takes for the UI thread and the Render thread to draw and render layers, and thereby reducing the possibility of dropped frames when the electronic device displays images.
  • increasing the operating frequencies of the CPU and GPU of the electronic device will increase the power consumption of the electronic device and reduce the battery life of the electronic device. It can be seen that the scheme of reducing the frame loss rate by increasing the working frequency has low energy efficiency.
  • the embodiment of the present application provides a control method based on a vertical synchronization signal, which can adjust the signal period of the VSYNC_APP signal, so that the signal period of the VSYNC_APP signal is smaller than that of the VSYNC_SF signal.
  • the amount of frame data produced by the UI thread and the Render thread in the same duration can be greater than the amount of frame data consumed by the composition thread in the same duration, that is, the production rate shown in Figure 7 is greater than the consumption rate.
  • enough frame data can be buffered in the first buffer queue for consumption by the synthesis thread.
  • the problem that the synthesis thread cannot read frame data from the first buffer queue in response to the VSYNC_SF signal does not occur, and the possibility of frame loss when the electronic device displays images can be reduced.
  • the above solution can reduce the possibility of frame loss when the electronic device displays an image without increasing the power consumption of the electronic device, and ensure the smoothness of the displayed image on the display screen.
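  • The effect of shortening the production period can be illustrated with a toy event-driven simulation. This is a sketch under simplifying assumptions (production ticks stay strictly periodic, a rendered layer normally lands in the queue half a period after its VSYNC_APP tick, and one frame in every twenty-five overruns its period); all names and numbers are illustrative, not from the embodiment:

```python
def simulate(t_s, t_x, total_ms=2000.0, slow_every=25, slow_factor=1.5):
    """Count VSYNC_SF ticks that find the first buffer queue empty.
    Frame i starts at i*t_s and is normally ready 0.5*t_s later; every
    slow_every-th frame instead takes slow_factor*t_s (a draw/render overrun)."""
    events = []
    i, t = 0, 0.0
    while t < total_ms:                                # producer ticks (VSYNC_APP)
        factor = slow_factor if (i + 1) % slow_every == 0 else 0.5
        events.append((t + t_s * factor, 0))           # 0 = frame becomes ready
        i, t = i + 1, t + t_s
    t = t_x
    while t < total_ms:                                # consumer ticks (VSYNC_SF)
        events.append((t, 1))                          # 1 = compositing read
        t += t_x
    queue = empty_reads = 0
    for _, kind in sorted(events):                     # replay in time order
        if kind == 0:
            queue += 1                                 # layer cached in the queue
        elif queue:
            queue -= 1                                 # layer consumed for composition
        else:
            empty_reads += 1                           # nothing to compose: dropped frame
    return empty_reads

print(simulate(16.67, 16.67))  # equal periods: the overrun empties the queue
print(simulate(16.17, 16.67))  # shortened VSYNC_APP period: the surplus absorbs it
```

With equal periods the overrun produces at least one empty read at a VSYNC_SF tick (a dropped frame); with the shortened VSYNC_APP period the accumulated surplus in the queue absorbs it.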
  • the execution body of the method provided in the embodiment of the present application may be an apparatus for processing an image.
  • the apparatus may be any of the above-mentioned electronic devices (for example, the apparatus may be the electronic apparatus 100 shown in FIG. 1 ).
  • the apparatus may also be a central processing unit (Central Processing Unit, CPU) of the electronic device, or a control module in the electronic device for executing the method provided by the embodiment of the present application.
  • the method provided by the embodiments of the present application is introduced by taking the above-mentioned method for image processing performed by an electronic device (such as a mobile phone) as an example.
  • the embodiment of the present application provides a control method based on a vertical synchronization signal.
  • the method can be applied to an electronic device including a display screen, where the vertical synchronization signal 1 is, for example, the VSYNC_APP signal, and the vertical synchronization signal 2 is, for example, the VSYNC_SF signal.
  • the control method based on the vertical synchronization signal may include S801-S807.
  • the control method based on the vertical synchronization signal may include “a control flow before adjusting the vertical synchronization signal” and “adjusting the vertical synchronization signal, and a control flow after the adjustment”.
  • the above-mentioned “control flow before adjusting the vertical synchronization signal” may include S801-S803.
  • the electronic device draws the first layer of the first application in response to the VSYNC_APP signal, and buffers the first layer in the first buffer queue.
  • the drawing described in the embodiments of the present application may include: the layer drawing performed by the UI thread, and the render preparation performed by the Render thread on the layer drawn by the UI thread.
  • the drawing described in S801 may include “drawing a” and “rendering a” or “drawing b” and “rendering b” as shown in FIG. 6 .
  • the drawing described in S801 may include “drawing 1” and “rendering 1” or “drawing 2” and “rendering 2” shown in FIG. 10 .
  • S801 may include: the UI thread draws the first layer in response to the VSYNC_APP signal; the Render thread prepares the first layer drawn by the UI thread for rendering, and caches the first layer in the first cache queue. It should be noted that, after the Render thread caches the first layer in the first cache queue, the Render thread can formally render the first layer cached in the first cache queue. After that, the synthesis thread may perform layer synthesis on the layers cached in the first cache queue to obtain image frames.
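  • The first buffer queue in S801/S802 behaves as a plain FIFO between the Render thread (producer) and the synthesis thread (consumer). A minimal sketch; the class and method names are invented for illustration:

```python
from collections import deque

class FirstBufferQueue:
    """Toy model of the first buffer queue: the Render thread enqueues one
    prepared layer per VSYNC_APP tick, and the compositing thread dequeues
    one per VSYNC_SF tick (None on an empty queue, ie a dropped frame)."""
    def __init__(self):
        self._layers = deque()

    def produce(self, layer):       # Render thread: cache a prepared layer
        self._layers.append(layer)

    def consume(self):              # synthesis thread: take a layer to compose
        return self._layers.popleft() if self._layers else None

    def __len__(self):
        return len(self._layers)

q = FirstBufferQueue()
q.produce("layer 1")                # S801: draw + render preparation done
frame = q.consume()                 # S802: input for layer synthesis
print(frame, len(q))                # prints: layer 1 0
```

A `consume()` returning None corresponds to the frame-loss case described above, where a VSYNC_SF tick finds the queue empty.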
  • the above-mentioned first application is a foreground application. For the detailed introduction of the foreground application, please refer to the relevant content in Embodiment (6).
  • the electronic device in response to the VSYNC_SF signal, performs layer synthesis on the layers buffered in the first buffer queue to obtain an image frame.
  • the electronic device refreshes and displays the image frame in response to the HW_VSYNC signal.
  • the signal period of the above VSYNC_APP signal is the production period T S in which the UI thread and the Render thread produce frame data to the first buffer queue.
  • the signal period of the above VSYNC_SF signal is the consumption period T X in which the synthesis thread consumes the frame data in the first buffer queue.
  • the electronic device controls the VSYNC_APP signal and the VSYNC_SF signal according to the same production cycle T S and consumption cycle T X .
  • if the UI thread and the Render thread can complete the drawing and rendering preparation of one frame of layers within one frame (ie, one synchronization cycle T_Z), the production rate of the producer (ie, the UI thread and the Render thread) is equal to the consumption rate of the consumer (ie, the composition thread), and the electronic device displays images without dropping frames.
  • otherwise, the consumption rate of the consumer (ie, the composition thread) is greater than the production rate of the producer (ie, the UI thread and the Render thread).
  • in that case, the number of frame data (ie, layers) cached in the first cache queue will fall to 0, and the consumer (such as the synthesis thread) cannot read frame data from the first cache queue after a VSYNC_SF signal arrives. Then layer synthesis cannot be performed to obtain an image frame, the image frame cannot be refreshed and displayed, the display screen of the LCD cannot be updated, and the frame loss phenomenon shown in Figure 6 occurs. In this way, the coherence and smoothness of the displayed image on the display screen will be affected, thereby affecting the user's visual experience.
  • the reason for the frame loss phenomenon of the displayed image of the electronic device is that the consumer (such as the synthesis thread) cannot read frame data from the first buffer queue after a VSYNC_SF signal arrives. Conversely, if the consumer (such as the synthesis thread) can read frame data from the first buffer queue after every VSYNC_SF signal arrives, there will be no frame loss when the electronic device displays an image.
  • a consumer can read frame data (ie, a layer) from the first buffer queue on the premise that, when the VSYNC_SF signal arrives, there is frame data (ie, a layer) in the first buffer queue.
  • the number of frame data produced by the producer is required to be greater than the number of frame data consumed by the consumer within the same time period .
  • the production cycle T S of the producer is required to be smaller than the consumption cycle T X of the consumer.
  • the production period T S of the producer can be adjusted so that the production period T S of the producer is smaller than the consumption period T X of the consumer. In this way, the possibility of frame loss when the electronic device displays images can be reduced, and the smoothness of the images displayed on the display screen can be ensured.
  • the above-mentioned “adjusting the vertical synchronization signal, and the control flow after adjustment” may include S804-S807.
  • the electronic device adjusts the signal period of the VSYNC_APP signal to a first duration, where the first duration is smaller than the signal period of the VSYNC_SF signal.
  • the signal period of the VSYNC_APP signal is shorter than that of the VSYNC_SF signal.
  • the signal cycle of the VSYNC_APP signal is the production cycle of the electronic device
  • the signal cycle of the VSYNC_SF signal is the consumption cycle of the electronic device.
  • T Z is the synchronization period of the electronic device
  • T Z is equal to the reciprocal of the screen refresh rate of the electronic device.
  • the production cycle (ie, the signal cycle of the VSYNC_APP signal) is adjusted from T S to T S ′.
  • the production cycle of the electronic device (such as of the UI thread and the Render thread of the electronic device) is T_S; for example, the VSYNC_APP signal at time t2 arrives T_S after the VSYNC_APP signal at time t1.
  • the production cycle of the electronic device is T S ′.
  • the next VSYNC_APP signal arrives at time t A.
  • the next VSYNC_APP signal arrives at time t B.
  • the next VSYNC_APP signal arrives at time t C.
  • the next VSYNC_APP signal arrives at time t D.
  • the signal cycle of the VSYNC_SF signal (ie, the consumption cycle of the electronic device) remains unchanged.
  • the next VSYNC_SF signal arrives at time t3 .
  • the next VSYNC_SF signal arrives at time t 4 .
  • the next VSYNC_SF signal arrives at time t5.
  • the electronic device draws the first layer in response to the adjusted VSYNC_APP signal, and buffers the first layer in the first buffer queue.
  • the electronic device in response to the VSYNC_SF signal, performs layer synthesis on the layers buffered in the first buffer queue to obtain an image frame.
  • the electronic device refreshes and displays the image frame in response to the HW_VSYNC signal.
  • this embodiment of the present application introduces the above-mentioned “control flow before adjusting the vertical synchronization signal” with reference to FIG. 10.
  • the UI thread responds to the VSYNC_APP signal at time t1 and executes “Draw 1” shown in FIG. 10 to draw layer 1; the Render thread executes “Render 1” to prepare layer 1 for rendering, and buffers frame data 1 (ie, layer 1) into the first buffer queue.
  • the Render thread may also execute “render 1 ′” to render frame data 1 (ie, layer 1 ) in the first buffer queue.
  • the Render thread completes “rendering 1” at time t1′ after time t1, and caches frame data 1 (ie, layer 1) to the first cache queue at time t1′; as shown in Figure 10, at time t1′ the number of frame data in the first buffer queue increases from 0 to 1 (ie, 0->1).
  • the electronic device executes S802, and the synthesis thread, in response to the VSYNC_SF signal at time t2, executes the “image frame synthesis 1” shown in FIG. 10 to perform layer synthesis on the above frame data 1 (ie, layer 1) to obtain image frame 1.
  • the synthesis thread reads the above frame data 1 (ie layer 1) from the first cache queue, and the frame data 1 (ie layer 1) is dequeued from the first cache queue .
  • the number of frame data in the first buffer queue is reduced from 1 to 0 (ie, 1->0).
  • the electronic device executes S803, and the LCD of the electronic device, in response to the HW_VSYNC signal at time t3, executes the “image frame display 1” shown in FIG. 10 to refresh and display the above-mentioned image frame 1.
  • the UI thread responds to the VSYNC_APP signal at time t2 and executes “Draw 2” shown in FIG. 10 to draw layer 2; the Render thread executes “Render 2” to prepare layer 2 for rendering, and caches frame data 2 (ie, layer 2) in the first cache queue.
  • the Render thread may also execute “render 2 ′” to render frame data 2 (ie, layer 2 ) in the first buffer queue.
  • the Render thread completes “rendering 2” at time t2′ after time t2, and caches frame data 2 (ie, layer 2) to the first cache queue at time t2′; as shown in Figure 10, at time t2′ the number of frame data in the first buffer queue increases from 0 to 1 (ie, 0->1).
  • the electronic device executes S802, and the synthesis thread, in response to the VSYNC_SF signal at time t3, executes the “image frame synthesis 2” shown in FIG. 10 to perform layer synthesis on the above frame data 2 (ie, layer 2) to obtain image frame 2.
  • the synthesis thread reads the above frame data 2 (ie layer 2) from the first cache queue, and the frame data 2 (ie layer 2) is dequeued from the first cache queue .
  • the number of frame data in the first buffer queue is reduced from 1 to 0 (ie, 1->0).
  • the electronic device executes S803, and the LCD of the electronic device, in response to the HW_VSYNC signal at time t4, executes the “image frame display 2” shown in FIG. 10 to refresh and display the above-mentioned image frame 2.
  • this embodiment of the present application describes the above-mentioned “adjusting the vertical synchronization signal, and the control flow after adjustment” with reference to FIG. 10.
  • the electronic device executes S804 at time t Q after time t 2 .
  • the signal period (ie, the production period) of the VSYNC_APP signal changes from T_S to T_S′, and T_S′ < T_S;
  • the next VSYNC_APP signal arrives at time tA after the arrival time of the VSYNC_APP signal at time t1 (ie, time t1 ).
  • the time t A and the time t 1 are separated by T S ′.
  • the electronic device can execute S805: the UI thread responds to the VSYNC_APP signal at time tA and executes “Draw 3” shown in FIG. 10 to draw layer 3; the Render thread executes “Render 3” to prepare layer 3 for rendering, and buffers frame data 3 (ie, layer 3) into the first buffer queue.
  • the Render thread may also execute “render 3′” to render frame data 3 (ie, layer 3 ) in the first buffer queue.
  • the Render thread completes “rendering 3” at time t3′ after time tA, and caches frame data 3 (ie, layer 3) to the first cache queue at time t3′; as shown in Figure 10, at time t3′ the number of frame data in the first buffer queue increases from 0 to 1 (ie, 0->1).
  • the electronic device executes S806, and the synthesis thread, in response to the VSYNC_SF signal at time t4, executes the “image frame synthesis 3” shown in FIG. 10 to perform layer synthesis on the above frame data 3 (ie, layer 3) to obtain image frame 3.
  • the synthesis thread reads the above frame data 3 (ie layer 3) from the first cache queue, and the frame data 3 (ie layer 3) is dequeued from the first cache queue .
  • the number of frame data in the first buffer queue is reduced from 1 to 0 (ie, 1->0).
  • the electronic device executes S807, and the LCD of the electronic device, in response to the HW_VSYNC signal at time t5, executes the “image frame display 3” shown in FIG. 10 to refresh and display the above-mentioned image frame 3.
  • the next VSYNC_APP signal arrives at time t B after the arrival time of the VSYNC_APP signal at time t A (ie, time t A ).
  • the time t B is separated from the time t A by T S ′.
  • the electronic device can execute S805: the UI thread responds to the VSYNC_APP signal at time tB and executes “Draw 4” shown in FIG. 10 to draw layer 4; the Render thread executes “Render 4” to prepare layer 4 for rendering, and buffers frame data 4 (ie, layer 4) into the first buffer queue.
  • the Render thread may also execute “render 4′” to render frame data 4 (ie, layer 4 ) in the first buffer queue.
  • the Render thread completes “rendering 4” at time t4′ after time tB, and caches frame data 4 (ie, layer 4) to the first cache queue at time t4′; as shown in Figure 10, at time t4′ the number of frame data in the first buffer queue increases from 0 to 1 (ie, 0->1).
  • the electronic device executes S806, and the synthesis thread, in response to the VSYNC_SF signal at time t5, executes the “image frame synthesis 4” shown in FIG. 10 to perform layer synthesis on the above frame data 4 (ie, layer 4) to obtain image frame 4.
  • the synthesis thread reads the above frame data 4 (ie layer 4) from the first cache queue, and the frame data 4 (ie layer 4) is dequeued from the first cache queue .
  • the number of frame data in the first buffer queue is reduced from 1 to 0 (ie, 1->0).
  • the electronic device executes S807, and the LCD of the electronic device, in response to the HW_VSYNC signal at time t6, executes the “image frame display 4” shown in FIG. 10 to refresh and display the above-mentioned image frame 4.
  • after the electronic device executes S804, the next VSYNC_APP signal arrives at time tC after the arrival time of the VSYNC_APP signal at time tB (ie, time tB).
  • the time t C and the time t B are separated by T S ′;
  • the next VSYNC_APP signal arrives at time tD after the arrival time of the VSYNC_APP signal at time tC (ie, time tC). The time tD is separated from the time tC by T_S′.
  • the signal period (ie, the production period) T_S′ of the VSYNC_APP signal is smaller than the signal period (ie, the consumption period) T_X of the VSYNC_SF signal; therefore, after the electronic device performs the above S805-S807 for a period of time, the number of frame data (ie, layers) produced by the UI thread and the Render thread (ie, the producer) will be greater than the number of frame data (ie, layers) consumed by the compositing thread (ie, the consumer).
  • the number of frame data (ie, layers) produced by the UI thread and the Render thread (ie, the producer) is larger by 1 than the number of frame data (ie, layers) consumed by the compositing thread (ie, the consumer).
  • in this case, in response to a VSYNC_SF signal, after the synthesis thread reads one frame of layers from the first buffer queue, the first buffer queue still buffers another frame of layers. In this way, even if the UI thread and the Render thread cannot complete the drawing and rendering of one frame of layers within one frame (ie, one synchronization period T_Z), the compositing thread can still read frame data from the first buffer queue after a VSYNC_SF signal arrives, and the frame drop phenomenon can be avoided.
  • For example, if T_X = 16.67 ms and T_S′ = 16.17 ms, then within 533.61 ms (ie, 33 T_S′) the UI thread and the Render thread (ie, the producer) can produce 33 frames of data (ie, layers), while the composition thread (ie, the consumer) can consume 533.61/16.67 ≈ 32 frames of data (ie, layers). The producer (such as the Render thread) generates one frame of layers (ie, frame data) every T_S′ and puts it into the first cache queue, and the consumer (such as the synthesis thread) reads one frame of layers (ie, frame data) from the first cache queue every T_X. Therefore, by the 34th T_S′, at least one frame of layers (ie, frame data) can be cached in the first cache queue, the compositing thread can read frame data from the first buffer queue after a VSYNC_SF signal arrives, and the frame drop phenomenon can be avoided.
  • Similarly, if T_X = 11.11 ms and T_S′ = 10.91 ms, then within 600.05 ms (ie, 55 T_S′) the UI thread and the Render thread (ie, the producer) can produce 55 frames of data (ie, layers), while the composition thread (ie, the consumer) can consume 600.05/11.11 ≈ 54 frames of data (ie, layers). The producer (such as the Render thread) generates one frame of layers (ie, frame data) every T_S′ and puts it into the first cache queue, and the consumer (such as the synthesis thread) reads one frame of layers (ie, frame data) from the first cache queue every T_X. Therefore, by the 56th T_S′, at least one frame of layers (ie, frame data) can be cached in the first cache queue for layer synthesis, the compositing thread can read frame data from the first buffer queue after a VSYNC_SF signal arrives, and the frame drop phenomenon can be avoided.
  • Likewise, if T_X = 8.33 ms and T_S′ = 8.23 ms, then within 683.09 ms (ie, 83 T_S′) the UI thread and the Render thread (ie, the producer) can produce 83 frames of data (ie, layers), while the composition thread (ie, the consumer) can consume 683.09/8.33 ≈ 82 frames of data (ie, layers). The producer (such as the Render thread) generates one frame of layers (ie, frame data) every T_S′ and puts it into the first cache queue, and the consumer (such as the synthesis thread) reads one frame of layers (ie, frame data) from the first cache queue every T_X; hence at least one frame of layers (ie, frame data) can be cached in the first cache queue, the compositing thread can read frame data from the first buffer queue after a VSYNC_SF signal arrives, and the frame drop phenomenon can be avoided.
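  • The examples above follow one pattern: each production period the producer gains ΔT1 = T_X − T_S′ of slack, so it is a full frame ahead after roughly T_X/ΔT1 periods. The following sketch checks the quoted figures; the T_S′ values (16.17 ms, 10.91 ms, 8.23 ms) are inferred from the totals in the text (eg, 33 × 16.17 ms = 533.61 ms) and are therefore assumptions:

```python
import math

# (T_X, T_S') pairs in ms; the T_S' values are inferred from the quoted
# totals (e.g. 33 * 16.17 = 533.61) and are assumptions, not patent text.
cases = [(16.67, 16.17), (11.11, 10.91), (8.33, 8.23)]
for t_x, t_s in cases:
    delta = round(t_x - t_s, 2)              # per-period slack Delta_T1
    k = math.ceil(round(t_x / delta, 6))     # period with a full one-frame lead
    produced = k - 1                         # frames produced in the first k-1 periods
    elapsed = round(produced * t_s, 2)       # elapsed wall time, ms
    consumed = math.floor(elapsed / t_x)     # VSYNC_SF ticks in that time
    print(f"T_X={t_x}: lead of one frame by the {k}th T_S'; "
          f"{produced} produced vs {consumed} consumed in {elapsed} ms")
```

Under these assumed T_S′ values the sketch reproduces the 33-vs-32, 55-vs-54 and 83-vs-82 counts and the “34th T_S′” and “56th T_S′” ordinals quoted above.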
  • the above solution can reduce the possibility of frame loss when the electronic device displays an image without increasing the power consumption of the electronic device, and ensure the smoothness of the displayed image on the display screen.
  • the signal period of the VSYNC_APP signal becomes smaller. In this case, one may have the following doubt: before the signal period of the VSYNC_APP signal was reduced, the image displayed by the electronic device could drop frames because drawing and rendering could not be completed within one frame (that is, one signal period of the VSYNC_APP signal); after the signal period of the VSYNC_APP signal is reduced, will the frame loss problem be aggravated?
  • it will not. After the signal period of the VSYNC_APP signal is reduced, the producer (ie, the UI thread and the Render thread) can produce more frame data within the same duration. In this way, enough frame data can be buffered in the first buffer queue, the compositing thread can read frame data from the first buffer queue after a VSYNC_SF signal arrives, and the frame drop phenomenon can be avoided.
  • the period from time t1 (that is, time ta) to time t4 (that is, time te) includes three T_X, and this same period from time ta (that is, time t1) to time te (that is, time t4) includes four T_S′.
  • the electronic device executes S804 before time t1 .
  • the flow of the electronic device for performing drawing, rendering, image frame synthesis, and image frame display shown in FIG. 11 is described below in chronological order.
  • the next VSYNC_APP signal arrives at time tb after the arrival time of the VSYNC_APP signal at time t1 (ie, time ta).
  • the time t b is separated from the time ta by T S ′.
  • the UI thread of the electronic device executes “Draw A” as shown in Figure 11 to draw layer A, and the Render thread executes “Render A” to prepare layer A for rendering and caches frame data A (that is, layer A) to the first cache queue.
  • the Render thread may also execute “render A′” to render frame data A (ie, layer A) in the first buffer queue.
  • the Render thread completes “rendering A” at time tA′ after time ta, and caches frame data A (ie, layer A) to the first cache queue at time tA′; as shown in Figure 11, at time tA′ the number of frame data in the first buffer queue increases from 0 to 1 (ie, 0->1).
  • the UI thread of the electronic device responds to the VSYNC_APP signal at time t b , and executes “Draw B” as shown in FIG. 11 to draw layer B, and the Render thread executes “Render B” to prepare layer B for rendering , and cache the frame data B (ie layer B) to the first cache queue.
  • the Render thread may also execute “render B′” to render frame data B (ie, layer B) in the first buffer queue.
  • the synthesis thread of the electronic device executes “image frame synthesis A” shown in FIG. 11 to perform layer synthesis on the above frame data A (ie, layer A) to obtain image frame A.
  • the synthesis thread reads the frame data A (ie, layer A) from the first cache queue, and the frame data A (ie, layer A) is dequeued from the first cache queue.
  • the number of frame data in the first buffer queue is reduced from 1 to 0 (ie, 1->0).
  • the Render thread completes “rendering B” at time t B ′ after time t 2 , and caches frame data B (ie, layer B) in the first buffer queue at time t B ′.
  • the number of frame data in the first buffer queue increases from 0 to 1 (ie, 0->1).
  • the UI thread of the electronic device responds to the VSYNC_APP signal at time t c , and executes “Draw C” as shown in FIG. 11 to draw layer C, and the Render thread executes “Render C” to render this layer C Prepare, and cache the frame data C (ie, layer C) to the first cache queue.
  • the Render thread may also execute “render C′” to render the frame data C (ie, layer C) in the first buffer queue.
  • the synthesis thread of the electronic device executes “image frame synthesis B” shown in FIG. 11 to perform layer synthesis on the above frame data B (ie, layer B) to obtain image frame B.
  • the synthesis thread reads the frame data B (ie, layer B) from the first cache queue, and the frame data B (ie, layer B) is dequeued from the first cache queue.
  • the number of frame data in the first buffer queue is reduced from 1 to 0 (ie, 1->0).
  • the LCD of the electronic device performs “image frame display A” shown in FIG. 11 to refresh and display the above-mentioned image frame A.
  • the Render thread completes “rendering C” at time tC′ after time t3, and caches frame data C (ie, layer C) to the first cache queue at time tC′; as shown in Figure 11, at time tC′ the number of frame data in the first buffer queue increases from 0 to 1 (ie, 0->1).
  • the UI thread of the electronic device executes “Draw D” as shown in FIG. 11 to draw layer D, and the Render thread executes “Render D” to prepare layer D for rendering , and buffer the frame data D (ie, layer D) to the first buffer queue.
  • the Render thread may also execute “render D′” to render frame data D (ie, layer D) in the first buffer queue.
  • the Render thread completes "rendering D" at time t D ' after time t d , and caches frame data D (ie, layer D) to the first cache queue at time t D '; as shown in Figure 11 As shown, at time t D ', the number of frame data in the first buffer queue increases from 1 to 2 (ie, 1->2).
  • the synthesis thread of the electronic device executes the “image frame synthesis C” shown in FIG. 11 to perform layer synthesis on the above frame data C (ie, layer C) to obtain image frame C.
  • the synthesis thread reads the frame data C (ie, layer C) from the first cache queue, and the frame data C (ie, layer C) is dequeued from the first cache queue.
  • the number of frame data in the first buffer queue is reduced from 2 to 1 (ie, 2->1).
  • the LCD of the electronic device performs “image frame display B” shown in FIG. 11 to refresh and display the above-mentioned image frame B.
  • the UI thread of the electronic device executes the “Draw E” shown in FIG. 11 to draw the layer E, and the Render thread executes “Render E” to prepare the layer E for rendering, and Cache the frame data E (ie, layer E) to the first cache queue.
  • the Render thread may also execute “render E′” to render frame data E (ie, layer E) in the first buffer queue.
  • the Render thread completes "rendering E” at time t E ', and caches frame data E (ie, layer E) to the first cache queue at time t E '; as shown in Figure 11, at time t E At moment ', the number of frame data in the first buffer queue increases from 1 to 2 (ie 1->2).
  • the composition thread of the electronic device executes “image frame composition D” shown in FIG. 11 to perform layer composition on the above frame data D (ie layer D) to obtain an image frame D.
  • the synthesis thread reads the frame data D (ie, layer D) from the first cache queue, and the frame data D (ie, layer D) is dequeued from the first cache queue.
  • the number of frame data in the first buffer queue is reduced from 2 to 1 (ie, 2->1).
  • the LCD of the electronic device performs the "image frame display C" shown in FIG. 11 to refresh and display the above-mentioned image frame C.
  • even though it takes more than one frame for the UI thread to execute “Draw E” and the Render thread to execute “Render E”, one frame of layers (ie, frame data) is still cached in the first cache queue, that is, the number of frame data in the first cache queue is 1. Therefore, in response to the VSYNC_SF signal at time t6, the synthesis thread of the electronic device can read frame data from the first buffer queue.
  • the electronic device displays images without frame loss. It can be seen that, by using the method of the embodiment of the present application, the possibility of frame loss when an electronic device displays an image can be reduced, and the smoothness of the displayed image on the display screen can be ensured.
  • ⁇ T1 may be preconfigured in the electronic device.
  • ⁇ T1 may be any value such as 0.1 milliseconds (ms), 0.2 ms, or 0.5 ms.
  • the first drawing frame length is the time required for the electronic device to draw the layer.
  • the above-mentioned first statistical period may include multiple synchronization periods.
  • the electronic device may count, within the first statistical period, the duration required by the UI thread for each of multiple layer drawings. Then, the electronic device may take the average of the durations required to draw the layers as the first drawing frame length. Alternatively, the electronic device may use the longest of those durations as the first drawing frame length.
  • the duration required for the electronic device to draw the layer a and prepare the layer a for rendering is the duration a.
  • the duration required for the electronic device to draw the layer b and prepare the layer b for rendering is the duration b.
  • the duration c is required for the electronic device to draw the layer c and prepare the layer c for rendering.
  • the electronic device may calculate the average value of the duration a, the duration b, and the duration c to obtain the length of the first drawing frame. Alternatively, the electronic device may use the longest duration among the duration a, the duration b, and the duration c as the first drawing frame length.
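  • Both options for the first drawing frame length can be sketched in a few lines; the function name and the sample durations (standing in for durations a, b and c) are hypothetical:

```python
def first_drawing_frame_length(durations_ms, use_max=False):
    """Aggregate the per-layer draw + render-preparation durations measured
    in the first statistical period: either their average or, optionally,
    the longest of them."""
    if use_max:
        return max(durations_ms)
    return sum(durations_ms) / len(durations_ms)

samples = [7.5, 9.0, 8.1]                                 # durations a, b, c (ms)
print(first_drawing_frame_length(samples))                # average of the three
print(first_drawing_frame_length(samples, use_max=True))  # longest of the three
```

Using the longest duration is the more conservative choice: it sizes the period for the worst frame seen in the statistical period rather than the typical one.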
  • the electronic device may determine ⁇ T1 according to the screen refresh rate of the electronic device. The higher the screen refresh rate of the electronic device, the smaller the ⁇ T1; the lower the screen refresh rate of the electronic device, the larger the ⁇ T1.
  • the synchronization period T Z of the electronic device is equal to the reciprocal of the screen refresh rate of the electronic device.
  • the signal period (ie the production period) T S of the VSYNC_APP signal and the signal period (ie the consumption period) T X of the VSYNC_SF signal are equal to the synchronization period T Z .
  • the smaller T_S is, the harder it is for the UI thread and the Render thread to complete the drawing and rendering of one frame of layers within one frame (ie, one T_S), and the higher the possibility that the electronic device drops frames. Therefore, in the case where T_S is small, if ΔT1 is too large, the possibility of frame loss in the electronic device will increase. It can be seen that if T_S is small, ΔT1 can be set to a small value; if T_S is large, ΔT1 can be set to a large value.
  • For example, when T_X is 16.67 ms, ΔT1 can be 0.5 ms; when T_S is 11.11 ms, ΔT1 can be 0.2 ms; and when T_S is 8.33 ms, ΔT1 can be 0.1 ms.
  • Note that if the value of ΔT1 is too large, it will affect the effect of the image displayed by the electronic device and the user's visual experience. For example, suppose the electronic device produces 5 frames of layers within a certain period of time (such as 1 s) and plays the 5 image frames corresponding to those 5 frames of layers; after the signal period of the VSYNC_APP signal is adjusted according to an overly large ΔT1, the electronic device may produce 10 frames of layers in 1 s but still play only 5 image frames. Therefore, the value of ΔT1 should not be too large. For example, ΔT1 is less than or equal to a preset duration threshold. The preset duration threshold may be 0.5 ms or 1 ms, and may be preconfigured in the electronic device.
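  • Picking ΔT1 from the screen refresh rate, using the example values above and capping at the preset duration threshold, can be sketched as follows; the lookup table, the default for unlisted rates, and the function name are illustrative assumptions:

```python
def choose_delta_t1(refresh_hz: float, cap_ms: float = 0.5) -> float:
    """Higher refresh rate -> smaller Delta_T1, per the examples in the text
    (60 Hz -> 0.5 ms, 90 Hz -> 0.2 ms, 120 Hz -> 0.1 ms), capped by the
    preset duration threshold."""
    table = {60: 0.5, 90: 0.2, 120: 0.1}
    delta_ms = table.get(round(refresh_hz), 0.1)   # default is an assumption
    return min(delta_ms, cap_ms)

t_x_ms = 1000.0 / 60                       # consumption period at 60 Hz
t_s_prime_ms = t_x_ms - choose_delta_t1(60)
print(round(t_s_prime_ms, 2))              # prints: 16.17
```

The cap corresponds to the preset duration threshold mentioned above, so an over-large ΔT1 cannot be chosen even for low refresh rates.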
  • This embodiment of the present application introduces conditions for the electronic device to perform the above S804 to adjust the signal period of the VSYNC_APP signal.
  • the electronic device may perform S804 when the number of layers cached in the first cache queue is less than the first preset threshold.
  • the method in this embodiment of the present application may further include S1201 .
  • the electronic device determines whether the number of layers cached in the first cache queue is less than a first preset threshold.
  • the first preset threshold may be preconfigured in the electronic device.
  • the first preset threshold may be any value such as 1, 2, or 3.
  • S1201 may include: the Render thread of the electronic device, in response to the VSYNC_SF signal, reads the number of layers buffered in the first buffer queue, and determines whether the number of layers buffered in the first buffer queue is less than a first preset threshold.
  • Specifically, in response to the VSYNC_SF signal, the Render thread of the electronic device can, before reading the frame data from the first buffer queue (ie, before the frame data is dequeued from the first buffer queue), read the number of layers cached in the first cache queue and determine whether that number is less than the first preset threshold.
  • the first preset threshold may be any value such as 2 or 3.
  • Take the first preset threshold equal to 2 as an example. It can be understood that if the number of layers cached in the first cache queue is less than 2 before the frame data is dequeued from the first cache queue, then, after the frame data is dequeued from the first cache queue, the number of layers cached in the first cache queue is less than 1; that is, the number of layers cached in the first cache queue is 0.
  • Alternatively, in response to the VSYNC_SF signal, the Render thread of the electronic device can first read the frame data from the first buffer queue (ie, the frame data is dequeued from the first buffer queue), and then read the number of layers cached in the first cache queue and determine whether that number is less than the first preset threshold.
  • the first preset threshold may be any value such as 1 or 2.
  • Take the first preset threshold equal to 1 as an example. It can be understood that, if the number of layers cached in the first cache queue is less than 1 after the frame data is dequeued from the first cache queue, it means that the number of layers cached in the first cache queue is 0.
  • the electronic device may perform S804-S807.
  • the electronic device may perform S801-S803.
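The decision in S1201 (read the queue depth before dequeuing, compare against the first preset threshold, then dequeue one frame for composition) can be sketched as follows; the function name, the list-based queue, and the return convention are illustrative assumptions, not framework APIs:

```python
FIRST_PRESET_THRESHOLD = 2  # checked before dequeue, per the example above

def on_vsync_sf(buffer_queue):
    """Consumer-side handler for one VSYNC_SF signal.

    Returns True if the producer period should be shortened (S804-S807),
    False if normal production continues (S801-S803).
    """
    shorten_period = len(buffer_queue) < FIRST_PRESET_THRESHOLD  # S1201
    if buffer_queue:
        buffer_queue.pop(0)  # dequeue one frame of layers for composition
    return shorten_period

print(on_vsync_sf(["frame1"]))            # only 1 cached layer -> True
print(on_vsync_sf(["frame1", "frame2"]))  # 2 cached layers -> False
```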
  • In this design, the electronic device can reduce the signal period of the VSYNC_APP signal when the possibility of frame loss in the displayed image is high, so as to reduce the possibility of frame loss when the electronic device displays the image and ensure the smoothness of the image displayed on the display screen.
  • In some cases, after the signal period of the VSYNC_APP signal is adjusted to the first duration, the possibility of frame loss when the electronic device displays an image may still not be reduced.
  • the electronic device can continue to reduce the signal period of the VSYNC_APP signal to reduce the possibility of frame loss when the electronic device displays images.
  • the method in this embodiment of the present application may further include S1301-S1302.
  • the electronic device determines whether the number of layers cached in the first cache queue is less than a second preset threshold.
  • the second preset threshold is smaller than the first preset threshold.
  • For example, when the first preset threshold is 2, the second preset threshold may be equal to 1; when the first preset threshold is 3, the second preset threshold may be equal to 1 or 2.
  • the values of the first preset threshold and the second preset threshold include but are not limited to the above specific values.
  • the electronic device may execute S1302. If the number of layers cached in the first cache queue is greater than or equal to the second preset threshold, the electronic device may perform S805-S807.
  • the electronic device adjusts the signal period of the VSYNC_APP signal to a second duration.
  • the second duration is shorter than the first duration.
  • the electronic device can further reduce the signal period of the VSYNC_APP signal to reduce the possibility of frame loss when the electronic device displays images to ensure the smoothness of the displayed image on the display.
  • the buffer space of the first buffer queue is limited.
  • For example, the first cache queue can cache at most 2 frames of layers (ie, frame data) or 3 frames of layers (ie, frame data). Therefore, if the signal period of the VSYNC_APP signal remains shorter than the signal period of the VSYNC_SF signal for a long time, the first cache queue will be unable to cache the frame data produced by the UI thread and the Render thread (ie, the producer).
  • the electronic device may further adjust the signal period of the VSYNC_APP signal from T S ′ to T S when the number of layers buffered in the first buffer queue is greater than the third preset threshold.
  • the method of this embodiment of the present application may further include S1401-S1402.
  • the electronic device determines whether the number of layers cached in the first cache queue is greater than a third preset threshold.
  • the third preset threshold is greater than or equal to the first preset threshold.
  • the third preset threshold may be preconfigured in the electronic device.
  • the third preset threshold may be any value such as 0, 1, or 2.
  • S1401 may include: the Render thread of the electronic device, in response to the VSYNC_SF signal, reads the number of layers buffered in the first buffer queue, and determines whether the number of layers buffered in the first buffer queue is greater than a third preset threshold.
  • Specifically, in response to the VSYNC_SF signal, the Render thread of the electronic device can, before reading the frame data from the first buffer queue (ie, before the frame data is dequeued from the first buffer queue), read the number of layers cached in the first cache queue and determine whether that number is greater than the third preset threshold.
  • the third preset threshold may be any value such as 1 or 2.
  • Take the third preset threshold equal to 1 as an example. It can be understood that if the number of layers cached in the first cache queue is greater than 1 before the frame data is dequeued from the first cache queue, it means that the number of layers cached in the first cache queue is at least 2; then, after the frame data is dequeued from the first cache queue, the number of layers cached in the first cache queue is at least 1.
  • Alternatively, in response to the VSYNC_SF signal, the Render thread of the electronic device can first read the frame data from the first buffer queue (ie, the frame data is dequeued from the first buffer queue), and then read the number of layers cached in the first cache queue and determine whether that number is greater than the third preset threshold.
  • In this case, the third preset threshold may be any value such as 0 or 1.
  • Take the third preset threshold equal to 0 as an example. It can be understood that, if the number of layers cached in the first cache queue is greater than 0 after the frame data is dequeued from the first cache queue, it means that the number of layers cached in the first cache queue is at least 1.
  • In this case, no frame drop will occur when the electronic device displays the image.
  • the electronic device may perform S1402 and S801-S803.
  • the electronic device may continue to perform S805-S807.
  • the electronic device adjusts the signal period of the VSYNC_APP signal, so that the signal period of the VSYNC_APP signal is equal to the signal period of the VSYNC_SF signal.
  • the electronic device may adjust the signal period of the VSYNC_APP signal from T S ′ to T S .
  • the electronic device may adjust the signal period of the VSYNC_APP signal from T S ′ to T S after sufficient frame data is buffered in the first buffer queue. In this way, the problem that the first cache queue cannot cache the frame data produced by the UI thread and the Render thread (ie, the producer) can be avoided.
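Taken together, S1201/S1302 (shorten the period when the queue runs low) and S1401/S1402 (restore it once the queue refills) can be sketched as one decision function over the cached-layer count; the threshold defaults, the example durations, and the behavior between the first and third thresholds are assumptions:

```python
def adjust_app_period(queue_len, t_s, t_s1, t_s2,
                      first_th=2, second_th=1, third_th=2):
    """Pick the VSYNC_APP signal period from the cached-layer count.

    queue_len < second_th             -> second duration (shortest, S1302)
    second_th <= queue_len < first_th -> first duration (S804)
    queue_len > third_th              -> restore T_S == VSYNC_SF period (S1402)
    otherwise                         -> keep the first duration
    """
    if queue_len < second_th:
        return t_s2        # queue nearly empty: shorten the period further
    if queue_len < first_th:
        return t_s1        # queue low: shortened period
    if queue_len > third_th:
        return t_s         # queue refilled: restore the nominal period
    return t_s1            # in between: keep the shortened period

T_S, T_S1, T_S2 = 16.67, 16.17, 15.67  # e.g. DeltaT1 = 0.5 ms, then 0.5 more
print(adjust_app_period(0, T_S, T_S1, T_S2))  # empty queue -> 15.67
print(adjust_app_period(1, T_S, T_S1, T_S2))  # low queue   -> 16.17
print(adjust_app_period(3, T_S, T_S1, T_S2))  # refilled    -> 16.67
```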
  • This embodiment of the present application introduces the timing when the electronic device performs the above method to adjust the signal period of the VSYNC_APP signal.
  • the electronic device may start to execute the above method in response to the first application being switched to the foreground application, so as to reduce the possibility of frame loss when the electronic device displays an image and ensure the smoothness of the displayed image on the display screen.
  • When the first application is switched to the foreground application, the application interface of the first application is displayed on the display screen of the electronic device and is visible to the user. If frame loss occurs while the electronic device performs layer drawing, layer rendering, image frame synthesis, and display for the first application, it will affect the smoothness of the image displayed on the display screen and the user experience. Therefore, when the first application is switched to the foreground application, the electronic device can perform the above method to adjust the signal period of the VSYNC_APP signal, so as to reduce the possibility of frame loss when the electronic device displays an image.
  • the foreground application may refer to: the application corresponding to the interface currently displayed on the display screen of the electronic device. That is to say, whichever application's interface is currently displayed on the display screen of the electronic device, that application is the foreground application.
  • the foreground application may also refer to: an application that requests a new Activity (startActivity) through the activity manager (activity manager service, AMS), or whose Activity in the pause state re-enters the active state.
  • the software system of the electronic device may adopt a layered architecture.
  • the layered architecture can divide the software into several layers, and each layer has a clear role and division of labor. Layers communicate with each other through software interfaces.
  • For example, the software system of the electronic device is divided into three layers, which are, from top to bottom, an application layer (referred to as the app layer), an application framework layer (referred to as the framework layer), and a kernel layer (also referred to as the driver layer).
  • the application layer may include a series of application packages. Such as camera, gallery, calendar, call, map, navigation, WLAN, Bluetooth, music, video, SMS and desktop launcher (Launcher) and other applications.
  • the framework layer provides an application programming interface (API) and a programming framework for applications in the application layer.
  • the application framework layer includes some predefined functions.
  • the framework layer may include a window manager (window manager service, WMS) and an activity manager AMS, and the like.
  • Window Manager WMS is used to manage window programs.
  • the window manager can get the size of the display screen, determine whether there is a status bar, lock the screen, take screenshots, etc.
  • the activity manager AMS is used to manage activities, and is responsible for starting, switching, scheduling, and application management and scheduling of various components in the system.
  • the kernel layer is the layer between hardware and software.
  • the kernel layer contains at least display drivers, camera drivers, audio drivers, and sensor drivers.
  • the user performs an input operation on the electronic device (such as an operation that triggers the electronic device to display an application), and the kernel layer can generate a corresponding input event (such as a folding screen expansion event) according to the input operation, and report the event to the application framework layer.
  • the window properties of the application are set by the activity management server AMS of the application framework layer.
  • the window management server WMS of the application framework layer draws the window according to the settings of the AMS, and then sends the window data to the display driver of the kernel layer, and the display driver displays the corresponding application interface on the folding screen.
  • the properties of the window may include the position and size of the Activity window, and the visible properties of the Activity window (ie, the state of the Activity window).
  • the position of the Activity window is the position of the Activity window on the display screen, and the size of the Activity window may be information such as width and height in the application startup config.
  • The visible property of the Activity window can be true or false. When the visible property of the Activity window is true, the Activity window is in the active state and is visible to the user; that is, the display driver will display the content of the Activity window. When the visible property of the Activity window is false, the Activity window is in the pause state and is invisible to the user; that is, the display driver will not display the content of the Activity window.
  • When an application (eg, application 1 or application 2) enters the active state, the activity manager AMS can request the window manager WMS to draw the window corresponding to the Activity, and call the display driver to display the interface.
  • Specifically, the application that enters the active state will perform the following processing: (1) create the Application object and the Context object; (2) call Activity.attach() to create the window corresponding to the Activity; (3) call the user's onCreate method, in which the setContentView method creates the Activity's view DecorView; (4) calculate and draw the Activity view. After the above steps are completed, the screen of the application is displayed, and the application is the foreground application.
  • the screen content of the foreground application may include not only the screen that the user can see, but also the content without a user interface, the content of a transparent layer, or the content that is blocked by other application interfaces and invisible to the user.
  • the electronic device can also be triggered to execute the above method, and the signal period of the VSYNC_APP signal is re-adjusted, so as to reduce the possibility of frame loss when the electronic device displays an image.
  • the electronic device may adjust the signal period for adjusting the VSYNC_APP signal to the third duration.
  • the third duration is equal to T S -ΔT2.
  • the third duration is determined based on the second application. The electronic device can set different ΔT2 for different applications.
  • Alternatively, the electronic device may calculate the above ΔT2 according to the user's requirement.
  • Alternatively, the electronic device may determine ΔT2 according to the screen refresh rate of the electronic device. The higher the screen refresh rate of the electronic device, the smaller ΔT2; the lower the screen refresh rate of the electronic device, the larger ΔT2.
  • This embodiment introduces an application scenario in which an electronic device executes the method in any of the foregoing Embodiments (1) to (4).
  • the electronic device executes the method in any of the above-mentioned embodiments (1) to (4) in a list-type sliding scene, the effect of reducing the possibility of frame loss in the display layer of the electronic device is obvious.
  • the above-mentioned list-type sliding scene may also be referred to as a continuous sliding scene.
  • the mobile phone refreshes the scene of the mobile phone page in response to the user's continuous input of upward or downward swiping operations on the interface of the "shopping" application.
  • the mobile phone refreshes the mobile phone page in response to the user's continuous input of upward or downward sliding operations on the interface of the "contact book” application.
  • the mobile phone refreshes the scene of the mobile phone page in response to the user's continuous input of upward or downward sliding operations on the contact interface or the circle of friends interface of the "Instant Messaging" application.
  • the scenarios in which the mobile phone page is refreshed in the above examples can be referred to as list-type sliding scenes or continuous sliding scenes.
  • In these application scenarios, the electronic device continuously refreshes the page; that is, the producer in the electronic device continuously produces multi-frame layers, and the consumer continuously consumes multi-frame layers. Therefore, in these application scenarios, using the method of the embodiments of the present application, in the process of continuously producing and consuming layers, the production rate of the producer is greater than the consumption rate of the consumer, which can reduce the possibility of frame loss in the image displayed by the electronic device.
  • the implementation of the method of the embodiments of the present application has an obvious effect of reducing the possibility of frame loss in the display layer of the electronic device.
  • the above-mentioned one or more layers may be one layer.
  • the one or more first layers may include multiple layers.
  • Depending on the number of layers, the electronic device determines that the rendering of the one or more layers is completed in different ways.
  • If the one or more layers are one layer, the completion of rendering of the one or more layers means that the rendering of that one layer is completed.
  • If the one or more layers include multiple layers, the completion of rendering of the one or more layers means: the rendering of the preset layers among the above-mentioned multiple layers is completed; or, the rendering of all the layers among the above-mentioned multiple layers is completed.
  • the preset layer may include a layer in which the ratio of the area of the layer to the area of the display screen is greater than the preset ratio threshold among the plurality of layers.
  • This embodiment introduces a specific manner in which the electronic device adjusts the signal period of the VSYNC_APP signal.
  • the HW_VSYNC signal is a signal triggered by a hardware drive, and the VSYNC_SF signal and the VSYNC_APP signal are generated based on the HW_VSYNC signal.
  • the signal period of the HW_VSYNC signal is equal to the inverse of the screen refresh rate of the electronic device.
  • the VSYNC_SF signal and the VSYNC_APP signal are in phase with the HW_VSYNC signal, and the signal period of the VSYNC_SF signal and the VSYNC_APP signal is equal to the signal period of the HW_VSYNC signal.
  • Specifically, the HW_VSYNC signal is used as the input, and the SW_VSYNC signal can be obtained through DyspSync 1501 according to the Present Fence Timestamp.
  • the Present Fence Timestamp is a timestamp generated according to the current HW_VSYNC signal (ie, a hardware signal), and is used to record the input time of the current HW_VSYNC signal.
  • the SW_VSYNC signal is a software signal obtained through the training and calculation of the DyspSync 1501 module by using the HW_VSYNC signal (ie, the hardware signal) as an input.
  • the SW_VSYNC signal can be used as an intermediate signal to generate the VSYNC_SF signal and the VSYNC_APP signal.
  • phase adjustment (Phase Adjust) 1502 may be performed on the SW_VSYNC signal to obtain the VSYNC_SF signal and the VSYNC_APP signal.
  • the phase difference (SF_Phase) between the VSYNC_SF signal and the SW_VSYNC signal and the phase difference (APP_Phase) between the VSYNC_APP signal and the SW_VSYNC signal may be the same or different.
  • the electronic device can use SF_Phase to perform phase adjustment (Phase Adjust) 1502 on the SW_VSYNC signal to obtain a VSYNC_SF signal; and use APP_Phase to perform a phase adjustment (Phase Adjust) 1502 on the SW_VSYNC signal to obtain a VSYNC_APP signal.
  • the signal period of the VSYNC_SF and VSYNC_APP signals is the same as the signal period of the HW_VSYNC signal.
  • If the electronic device adopts the processing flow shown in FIG. 15 , it cannot adjust the signal period of the VSYNC_SF signal or the VSYNC_APP signal.
  • the period adjustment (Period Adjust) 1601 shown in FIG. 16 may be added to the signal processing flow shown in FIG. 15 .
  • the electronic device can not only perform a phase adjustment (Phase Adjust) 1502 on the SW_VSYNC signal, but also perform a period adjustment (Period Adjust) 1601 on the SW_VSYNC signal.
  • the period difference (SF_Period) between the VSYNC_SF signal and the SW_VSYNC signal and the period difference (APP_Period) between the VSYNC_APP signal and the SW_VSYNC signal may be the same or different.
  • the electronic device may use SF_Phase to perform Phase Adjust (Phase Adjust) 1502 on the SW_VSYNC signal, and use SF_Period to perform Period Adjust (Period Adjust) 1601 on the SW_VSYNC signal to obtain the VSYNC_SF signal.
  • Similarly, the electronic device can use APP_Phase to perform a phase adjustment (Phase Adjust) 1502 on the SW_VSYNC signal, and use APP_Period to perform a period adjustment (Period Adjust) 1601 on the SW_VSYNC signal to obtain the VSYNC_APP signal.
  • a period adjustment (Period Adjust) module is added, such as period adjustment (Period Adjust) 1601 shown in FIG. 16 .
  • the electronic device can adjust the signal period of the VSYNC_SF signal or the VSYNC_APP signal.
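The flow of FIG. 16 — phase adjustment (Phase Adjust 1502) plus the added period adjustment (Period Adjust 1601) applied to the SW_VSYNC model — can be sketched as a timestamp computation; this is a simplified model with assumed names, not the actual DispSync implementation:

```python
def next_vsync_timestamps(sw_vsync_t0, sw_period, phase, period_delta, n=3):
    """Derive the next n VSYNC event timestamps from the SW_VSYNC model.

    sw_vsync_t0 : timestamp of a reference SW_VSYNC event (ms)
    sw_period   : SW_VSYNC period, equal to the HW_VSYNC period (ms)
    phase       : Phase Adjust offset (e.g., APP_Phase or SF_Phase, ms)
    period_delta: Period Adjust offset (e.g., APP_Period, ms); a negative
                  value shortens the period, as when VSYNC_APP is sped up.
    """
    period = sw_period + period_delta
    return [sw_vsync_t0 + phase + i * period for i in range(1, n + 1)]

# VSYNC_SF keeps the nominal 16.67 ms period; VSYNC_APP is shortened by 0.5 ms.
sf = next_vsync_timestamps(0.0, 16.67, 0.0, 0.0)
app = next_vsync_timestamps(0.0, 16.67, 0.0, -0.5)
print(sf)   # nominal 16.67 ms spacing
print(app)  # 16.17 ms spacing: the producer runs slightly faster
```

Because the producer events arrive 0.5 ms earlier each frame, the first cache queue gradually refills, which is exactly the effect the period adjustment is introduced to achieve.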
  • Some embodiments of the present application provide an electronic device, which may include a layer drawing module, a layer rendering module, a layer composition module, a display module, a storage module, and a period adjustment module.
  • the layer drawing module is used to support the electronic device to perform the operations of drawing layers described in S801 and S805 in the above embodiments, and/or for other processes of the technology described herein.
  • the layer drawing module may be the above UI thread.
  • the layer rendering module is used to support the electronic device to perform the operations of preparing the layer for rendering as described in the above embodiments, and the operation of rendering the layer, and/or other processes for the techniques described herein.
  • the layer rendering module may be the above-mentioned Render thread.
  • the layer composition module is used to support the electronic device to perform S802, S806, S1201, S1301, S1401 in the above embodiments, and/or other processes for the techniques described herein.
  • the layer compositing module may be the above-mentioned compositing thread.
  • the display module is used to support the electronic device to perform S803, S807 in the above-described embodiments, and/or other processes for the techniques described herein.
  • the storage module is configured to save the first cache queue, support the electronic device to perform the operations of caching the first layer to the first cache queue described in S801 and S805, and/or other processes for the techniques described herein.
  • the period adjustment module is used to support the electronic device to perform S804, S1302, S1402 in the above-described embodiments, and/or other processes for the techniques described herein.
  • the functions of the above-mentioned layer drawing module, layer rendering module, layer synthesis module and period adjustment module may be integrated into one processing module.
  • the processing module may be a processor of the electronic device.
  • the above-mentioned display module may be a display screen (such as a touch screen) of an electronic device.
  • the above-mentioned storage module may be a memory of an electronic device.
  • Some embodiments of the present application provide an electronic device, which may include: a display screen (eg, a touch screen), a memory, and one or more processors.
  • the display screen, memory and processor are coupled.
  • the memory is used to store computer program code comprising computer instructions.
  • When the processor executes the computer instructions, the electronic device can perform the various functions or steps performed by the electronic device in the foregoing method embodiments.
  • For the structure of the electronic device, reference may be made to the structure of the electronic device 100 shown in FIG. 1 .
  • the chip system 1700 includes at least one processor 1701 and at least one interface circuit 1702 .
  • the processor 1701 and the interface circuit 1702 may be interconnected by wires.
  • the interface circuit 1702 may be used to receive signals from other devices, such as the memory of an electronic device.
  • the interface circuit 1702 may be used to send signals to other devices (eg, the processor 1701 or a touch screen of an electronic device).
  • the interface circuit 1702 can read the instructions stored in the memory and send the instructions to the processor 1701 .
  • When the processor 1701 executes the instructions, the electronic device can be caused to perform the various steps in the above-mentioned embodiments.
  • the chip system may also include other discrete devices, which are not specifically limited in this embodiment of the present application.
  • Embodiments of the present application further provide a computer storage medium, where the computer storage medium includes computer instructions which, when executed on the above electronic device, cause the electronic device to perform each function or step performed by the electronic device in the foregoing method embodiments.
  • Embodiments of the present application further provide a computer program product, which, when the computer program product runs on a computer, enables the computer to perform each function or step performed by the electronic device in the foregoing method embodiments.
  • the disclosed apparatus and method may be implemented in other manners.
  • the device embodiments described above are only illustrative.
  • The division of the modules or units is only a logical function division; in actual implementation, there may be other division methods.
  • For example, multiple units or components may be combined or integrated into another device, or some features may be omitted or not implemented.
  • the shown or discussed mutual coupling or direct coupling or communication connection may be through some interfaces, indirect coupling or communication connection of devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components shown as units may be one physical unit or multiple physical units; that is, they may be located in one place, or may be distributed to multiple different places. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution in this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
  • the above-mentioned integrated units may be implemented in the form of hardware, or may be implemented in the form of software functional units.
  • If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a readable storage medium.
  • Based on such an understanding, the technical solutions of the embodiments of the present application, in essence, or the parts that contribute to the prior art, or all or part of the technical solutions, can be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions to cause a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or part of the steps of the methods described in the various embodiments of the present application.
  • The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or other media that can store program code.

Abstract

The present application discloses a control method based on a vertical synchronization signal and an electronic device, relating to the technical field of image processing and display. The method can reduce the possibility of frame loss when an electronic device displays an image and ensure the smoothness of the image displayed on the display screen. A specific solution includes: in response to a first vertical synchronization signal, the electronic device draws a first layer of a first application and caches the first layer in a first cache queue; in response to a second vertical synchronization signal, the electronic device performs layer composition on the layers cached in the first cache queue to obtain an image frame; if the number of layers cached in the first cache queue is less than a first preset threshold, the electronic device adjusts the signal period of the first vertical synchronization signal to a first duration, the first duration being shorter than the signal period of the second vertical synchronization signal.

Description

A control method based on a vertical synchronization signal, and an electronic device
This application claims priority to Chinese Patent Application No. 202011197544.3, entitled "Control Method Based on Vertical Synchronization Signal and Electronic Device", filed with the China National Intellectual Property Administration on October 31, 2020, which is incorporated herein by reference in its entirety.
Technical Field
Embodiments of the present application relate to the technical field of image processing and display, and in particular, to a control method based on a vertical synchronization signal and an electronic device.
Background
With the development of electronic technology, the performance of various electronic devices (such as mobile phones) is getting better and better, and consumers have increasingly high requirements for the human-computer interaction performance of electronic products. Among these, the visual continuity of the images displayed by an electronic device is an important aspect of human-computer interaction performance.
A prerequisite for ensuring the continuity of the images displayed by an electronic device is that the displayed picture does not drop frames. High-frame-rate display is also a development trend for electronic devices; for example, the frame rate of electronic devices has developed from 60 hertz (Hz) to 90 Hz, and then to 120 Hz. However, the higher the frame rate of an electronic device, the more likely frame drops become, which makes the displayed content discontinuous and affects the user experience. Therefore, in high-frame-rate scenarios, how to reduce or even avoid frame drops when an electronic device displays images and ensure the smoothness of the displayed images is an urgent problem to be solved.
Summary of the Invention
Embodiments of the present application provide a control method based on a vertical synchronization signal and an electronic device, which can reduce the possibility of frame loss when the electronic device displays an image and ensure the smoothness of the image displayed on the display screen, thereby improving the user's visual experience.
In a first aspect, the present application provides a control method based on a vertical synchronization signal, applied to an electronic device. The method includes: in response to a first vertical synchronization signal, the electronic device draws a first layer and caches the first layer in a first cache queue; in response to a second vertical synchronization signal, the electronic device performs layer composition on the layers cached in the first cache queue to obtain an image frame; if the number of layers cached in the first cache queue is less than a first preset threshold, the electronic device adjusts the signal period of the first vertical synchronization signal to a first duration, where the first duration is shorter than the signal period of the second vertical synchronization signal.
When the number of layers cached in the first cache queue is less than the first preset threshold, the frame data cached in the first cache queue is insufficient, and frame loss is likely to occur when the electronic device displays images. The reason frame loss occurs is that the consumer (such as the composition thread) cannot read frame data from the first cache queue after a second vertical synchronization signal (such as the VSYNC_SF signal) arrives. Conversely, if the consumer (such as the composition thread) can read frame data from the first cache queue after a VSYNC_SF signal arrives, no frame loss will occur when the electronic device displays images. Here, SF stands for Surface Flinger.
In the present application, if the number of layers cached in the first cache queue is less than the first preset threshold, the electronic device can adjust the signal period of the first vertical synchronization signal to the first duration. Since the first duration is shorter than the signal period of the second vertical synchronization signal, after the adjustment the signal period of the first vertical synchronization signal is shorter than the signal period of the second vertical synchronization signal; that is, the producer's production period is shorter than the consumer's consumption period. In this way, within the same duration, the amount of frame data produced by the producer is greater than the amount of frame data consumed by the consumer, so the production rate of the producer (ie, the UI thread and the Render thread) is greater than the consumption rate of the consumer (ie, the composition thread). Thus, when the VSYNC_SF signal arrives, frame data (ie, layers) is cached in the first cache queue, and no frame loss occurs when the electronic device displays images.
With the above solution, the possibility of frame loss when the electronic device displays images can be reduced without increasing the power consumption of the electronic device, ensuring the smoothness of the image displayed on the display screen.
In a possible design manner of the first aspect, in response to the first application being switched to the foreground application, if the number of layers cached in the first cache queue is less than the first preset threshold, the electronic device adjusts the signal period of the first vertical synchronization signal to the first duration.
The first cache queue is allocated for the first application. The first cache queue is requested by the rendering (Render) thread of the electronic device and allocated for the first application. The electronic device can allocate one cache queue for each application. The foreground application is the application corresponding to the interface currently displayed on the display screen of the electronic device.
It should be understood that when the first application is switched to the foreground application, its application interface is displayed on the display screen of the electronic device and is visible to the user. If frame loss occurs while the electronic device performs layer drawing, layer rendering, image frame composition, and display for the first application, the smoothness of the displayed image and the user experience will be affected. Therefore, when the first application is switched to the foreground application, the electronic device can execute the above method to adjust the signal period of the VSYNC_APP signal, so as to reduce the possibility of frame loss when the electronic device displays images.
In another possible design manner of the first aspect, after the electronic device adjusts the signal period of the VSYNC_APP signal to the first duration, the possibility of frame loss when the electronic device displays images may still not be reduced. In this case, if the number of layers cached in the first cache queue is less than a second preset threshold, the electronic device can further reduce the signal period of the VSYNC_APP signal to reduce the possibility of frame loss when the electronic device displays images.
Specifically, the method of the present application may further include: if the number of layers cached in the first cache queue is less than the second preset threshold, the electronic device adjusts the signal period of the first vertical synchronization signal to a second duration, where the second preset threshold is smaller than the first preset threshold and the second duration is shorter than the first duration.
In the present application, when the number of layers cached in the first cache queue is less than the second preset threshold, the electronic device can further reduce the signal period of the VSYNC_APP signal, so as to reduce the possibility of frame loss when the electronic device displays images and ensure the smoothness of the image displayed on the display screen.
在第一方面的另一种可能的设计方式中,为了避免生产速率较大而导致第一缓存队列无法缓存生产者生产的帧数据,电子设备还可以在第一缓存队列中缓存的图层的数量大于第三预设阈值时,调整第一垂直同步信号(即VSYNC_APP信号)的信号周期,使第一垂直同步信号的信号周期等于第二垂直同步信号的信号周期。其中,第三预设阈值大于或等于第一预设阈值。
本申请中,电子设备可以在第一缓存队列缓存了足够的帧数据后,调整第一垂直同步信号的信号周期,使第一垂直同步信号的信号周期等于第二垂直同步信号的信号周期。这样,可以避免出现第一缓存队列无法缓存UI线程和Render线程(即生产者)生产得到的帧数据的问题。
在第一方面的另一种可能的设计方式中,电子设备的前台应用发生变化,也可以触发电子设备执行上述方法,重新调整VSYNC_APP信号的信号周期,以降低电子设备 显示图像时出现丢帧的可能性。具体的,电子设备响应于前台应用由第一应用切换为第二应用,可将调整VSYNC_APP信号的信号周期调整为第三时长。
在第一方面的另一种可能的设计方式中,上述第一缓存队列中缓存的图层的数量小于第一预设阈值,具体包括:电子设备响应于第二垂直同步信号,在对第一缓存队列中队首的一帧图层进行图层合成之前,读取第一缓存队列中缓存的图层的数量,读取到的数量小于第一预设阈值。例如,第一预设阈值等于2。
可以理解的是，如果帧数据从第一缓存队列中出队之前，第一缓存队列中缓存的图层的数量小于2；那么，帧数据从第一缓存队列中出队之后，第一缓存队列中缓存的图层的数量小于1，即第一缓存队列中缓存的图层的数量为0。在这种情况下，在一个第二垂直同步信号（如VSYNC_SF信号）到来后，无法从第一缓存队列中读取到帧数据，电子设备显示图像会出现丢帧的现象。
在第一方面的另一种可能的设计方式中,上述第一缓存队列中缓存的图层的数量小于第一预设阈值,具体包括:电子设备响应于第二垂直同步信号,在对第一缓存队列中队首的一帧图层进行图层合成之后,读取第一缓存队列中缓存的图层的数量,读取到的数量小于第一预设阈值。例如,第一预设阈值等于1。
可以理解的是，如果帧数据从第一缓存队列中出队之后，第一缓存队列中缓存的图层的数量小于1，则表示第一缓存队列中缓存的图层的数量为0。在这种情况下，在一个第二垂直同步信号（如VSYNC_SF信号）到来后，无法从第一缓存队列中读取到帧数据，电子设备显示图像会出现丢帧的现象。
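上述“出队前读取数量”与“出队后读取数量”两种判断方式，可以用如下简化的Python片段示意（deque仅用于模拟第一缓存队列，阈值取值沿用上文示例，并非实际实现）：

```python
from collections import deque

buffer_queue = deque(["图层a"])   # 模拟第一缓存队列，当前仅缓存 1 帧图层

count_before = len(buffer_queue)  # 出队前读取：数量为 1，小于第一预设阈值 2
buffer_queue.popleft()            # 合成线程取出队首图层进行图层合成
count_after = len(buffer_queue)   # 出队后读取：数量为 0，小于第一预设阈值 1

# 两种读取方式（出队前阈值取 2 / 出队后阈值取 1）判断结果一致：队列即将为空
assert count_before < 2 and count_after < 1
```

两种方式只是读取时机不同，对应的第一预设阈值相差1，判断含义相同。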
在第一方面的另一种可能的设计方式中，上述方法还可以包括：若第一缓存队列中缓存的图层的数量大于第三预设阈值，电子设备则调整第一垂直同步信号的信号周期，使第一垂直同步信号的信号周期等于第二垂直同步信号的信号周期。其中，第三预设阈值大于或等于第一预设阈值。
在第一方面的另一种可能的设计方式中,上述第一缓存队列中缓存的图层的数量大于第三预设阈值,具体包括:电子设备响应于第二垂直同步信号,在对第一缓存队列中队首的一帧图层进行图层合成之前,读取第一缓存队列中缓存的图层的数量,读取到的数量大于第三预设阈值。例如,第三预设阈值等于1。
可以理解的是，如果帧数据从第一缓存队列中出队之前，第一缓存队列中缓存的图层的数量大于1，则表示第一缓存队列中缓存的图层的数量至少为2；那么，帧数据从第一缓存队列中出队之后，第一缓存队列中缓存的图层的数量至少为1。在这种情况下，在一个第二垂直同步信号（如VSYNC_SF信号）到来后，可以从第一缓存队列中读取到帧数据，电子设备显示图像不会出现丢帧的现象。
在第一方面的另一种可能的设计方式中,上述第一缓存队列中缓存的图层的数量大于第三预设阈值,具体包括:电子设备响应于第二垂直同步信号,在对第一缓存队列中队首的一帧图层进行图层合成之后,读取第一缓存队列中缓存的图层的数量,读取到的数量大于第三预设阈值。例如,第三预设阈值等于0。
可以理解的是，如果帧数据从第一缓存队列中出队之后，第一缓存队列中缓存的图层的数量大于0，则表示第一缓存队列中缓存的图层的数量至少为1。在这种情况下，在一个第二垂直同步信号（如VSYNC_SF信号）到来后，可以从第一缓存队列中读取到帧数据，电子设备显示图像不会出现丢帧的现象。
在第一方面的另一种可能的设计方式中,上述电子设备将第一垂直同步信号的信号周期调整为第一时长,包括:电子设备将第一垂直同步信号的信号周期调小△T,使第一垂直同步信号的信号周期等于第一时长,第一时长小于第二垂直同步信号的信号周期。
在一种实现方式中，上述△T（如△T1）是预先配置在电子设备中的固定时长。在另一种实现方式中，上述△T是根据电子设备的屏幕刷新率确定的，屏幕刷新率越高则△T越小，屏幕刷新率越低则△T越大。在另一种实现方式中，上述△T是根据用户设置的预设数量K确定的，预设数量K用于指示用户期望电子设备在K帧后解决一帧的丢帧，△T=T S/(K+1)，T S为电子设备的屏幕刷新率的倒数。
在另一种实现方式中,△T是根据所述第二垂直同步信号的信号周期与所述第一统计周期的第一绘制帧长的差值确定的,第一绘制帧长是电子设备绘制图层所需的时长,△T小于或等于上述差值。
第二方面,本申请提供一种电子设备,该电子设备包括显示屏、存储器和一个或多个处理器;显示屏、存储器和处理器耦合;存储器用于存储计算机程序代码,计算机程序代码包括计算机指令,当处理器执行计算机指令时,电子设备执行如第一方面及其任一种可能的设计方式所述的方法。
第三方面,本申请提供一种芯片系统,该芯片系统应用于包括显示屏的电子设备。该芯片系统包括一个或多个接口电路和一个或多个处理器。该接口电路和处理器通过线路互联。接口电路用于从电子设备的存储器接收信号,并向处理器发送信号,信号包括存储器中存储的计算机指令;当处理器执行计算机指令时,电子设备执行如第一方面及其任一种可能的设计方式所述的方法。
第四方面,本申请提供一种计算机存储介质,包括计算机指令,当所述计算机指令在电子设备上运行时,使得所述电子设备执行如第一方面及其任一种可能的设计方式所述的方法。
第五方面,本申请提供一种计算机程序产品,当所述计算机程序产品在计算机上运行时,使得所述计算机执行如第一方面及其任一种可能的设计方式所述的方法。
可以理解地,上述提供的第二方面所述的电子设备,第三方面所述的芯片系统,第四方面所述的计算机存储介质,第五方面所述的计算机程序产品所能达到的有益效果,可参考如第一方面及其任一种可能的设计方式中的有益效果,此处不再赘述。
附图说明
图1为本申请实施例提供的一种电子设备的硬件结构示意图;
图2为本申请实施例提供的一种垂直同步信号的示意图;
图3为本申请实施例提供的一种电子设备响应于触摸操作显示图像的软件处理流程示意图;
图4A为一种方案中电子设备进行图层绘制、渲染、合成以及图像帧显示的原理示意图;
图4B为一种方案中电子设备的图层生产和消费原理示意图;
图5A为图4A所示的电子设备进行图层绘制、渲染、合成以及图像帧显示过程中 第一缓存队列中的帧数据(即图层)变化情况示意图;
图5B为图4A所示的电子设备进行图层绘制、渲染、合成以及图像帧显示过程中第一缓存队列中的帧数据(即图层)变化情况示意图;
图6为一种方案中电子设备进行图层绘制、渲染、合成以及图像帧显示的原理示意图;
图7为本申请实施例提供的一种基于垂直同步信号的控制方法原理示意图;
图8为本申请实施例提供的一种基于垂直同步信号的控制方法流程图;
图9为本申请实施例提供的一种基于垂直同步信号的控制方法中调整VSYNC_APP信号和VSYNC_SF信号的原理示意图;
图10为本申请实施例提供的一种电子设备进行图层绘制、渲染、合成以及图像帧显示的原理示意图;
图11为本申请实施例提供的另一种电子设备进行图层绘制、渲染、合成以及图像帧显示的原理示意图;
图12为本申请实施例提供的另一种基于垂直同步信号的控制方法流程图;
图13为本申请实施例提供的另一种基于垂直同步信号的控制方法流程图;
图14为本申请实施例提供的另一种基于垂直同步信号的控制方法流程图;
图15为一种方案中电子设备调整VSYNC_APP信号和VSYNC_SF信号的原理示意图;
图16为本申请实施例提供的一种电子设备调整VSYNC_APP信号和VSYNC_SF信号的原理示意图;
图17为本申请实施例提供的一种芯片系统的结构组成示意图。
具体实施方式
以下,术语“第一”、“第二”仅用于描述目的,而不能理解为指示或暗示相对重要性或者隐含指明所指示的技术特征的数量。由此,限定有“第一”、“第二”的特征可以明示或者隐含地包括一个或者更多个该特征。在本实施例的描述中,除非另有说明,“多个”的含义是两个或两个以上。
本申请实施例提供一种基于垂直同步信号的控制方法,该方法可以应用于包括显示屏(如触摸屏)的电子设备。通过该方法,可以降低电子设备显示图像时出现丢帧的可能性,可以保证显示屏显示图像的流畅性,从而提升用户的视觉体验。
示例性的,上述电子设备可以是手机、平板电脑、桌面型、膝上型、手持计算机、笔记本电脑、超级移动个人计算机(ultra-mobile personal computer,UMPC)、上网本,以及蜂窝电话、个人数字助理(personal digital assistant,PDA)、增强现实(augmented reality,AR)\虚拟现实(virtual reality,VR)设备等包括显示屏(如触摸屏)的设备,本申请实施例对该电子设备的具体形态不作特殊限制。
下面将结合附图对本申请实施例的实施方式进行详细描述。
请参考图1,为本申请实施例提供的一种电子设备100的结构示意图。如图1所示,电子设备100可以包括处理器110,外部存储器接口120,内部存储器121,通用串行总线(universal serial bus,USB)接口130,充电管理模块140,电源管理模块141,电池142,天线1,天线2,移动通信模块150,无线通信模块160,音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,传感器模块180,按 键190,马达191,指示器192,摄像头293,显示屏194,以及用户标识模块(subscriber identification module,SIM)卡接口195等。其中,传感器模块180可以包括压力传感器180A,陀螺仪传感器180B,气压传感器180C,磁传感器180D,加速度传感器180E,距离传感器180F,接近光传感器180G,指纹传感器180H,温度传感器180J,触摸传感器180K,环境光传感器180L,骨传导传感器180M等。
可以理解的是,本实施例示意的结构并不构成对电子设备100的具体限定。在另一些实施例中,电子设备100可以包括比图示更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置。图示的部件可以以硬件,软件或软件和硬件的组合实现。
处理器110可以包括一个或多个处理单元,例如:处理器110可以包括应用处理器(application processor,AP),调制解调处理器,图形处理器(graphics processing unit,GPU),图像信号处理器(image signal processor,ISP),控制器,存储器,视频编解码器,数字信号处理器(digital signal processor,DSP),基带处理器,和/或神经网络处理器(neural-network processing unit,NPU)等。其中,不同的处理单元可以是独立的器件,也可以集成在一个或多个处理器中。
控制器可以是电子设备100的神经中枢和指挥中心。控制器可以根据指令操作码和时序信号,产生操作控制信号,完成取指令和执行指令的控制。
处理器110中还可以设置存储器,用于存储指令和数据。在一些实施例中,处理器110中的存储器为高速缓冲存储器。该存储器可以保存处理器110刚用过或循环使用的指令或数据。如果处理器110需要再次使用该指令或数据,可从所述存储器中直接调用。避免了重复存取,减少了处理器110的等待时间,因而提高了系统的效率。
在一些实施例中,处理器110可以包括一个或多个接口。接口可以包括集成电路(inter-integrated circuit,I2C)接口,集成电路内置音频(inter-integrated circuit sound,I2S)接口,脉冲编码调制(pulse code modulation,PCM)接口,通用异步收发传输器(universal asynchronous receiver/transmitter,UART)接口,移动产业处理器接口(mobile industry processor interface,MIPI),通用输入输出(general-purpose input/output,GPIO)接口,用户标识模块(subscriber identity module,SIM)接口,和/或通用串行总线(universal serial bus,USB)接口等。
可以理解的是,本实施例示意的各模块间的接口连接关系,只是示意性说明,并不构成对电子设备100的结构限定。在另一些实施例中,电子设备100也可以采用上述实施例中不同的接口连接方式,或多种接口连接方式的组合。
充电管理模块140用于从充电器接收充电输入。充电管理模块140为电池142充电的同时,还可以通过电源管理模块141为电子设备供电。
电源管理模块141用于连接电池142,充电管理模块140与处理器110。电源管理模块141接收电池142和/或充电管理模块140的输入,为处理器110,内部存储器121,外部存储器,显示屏194,摄像头293,和无线通信模块160等供电。在其他一些实施例中,电源管理模块141也可以设置于处理器110中。在另一些实施例中,电源管理模块141和充电管理模块140也可以设置于同一个器件中。
电子设备100的无线通信功能可以通过天线1,天线2,移动通信模块150,无线 通信模块160,调制解调处理器以及基带处理器等实现。
天线1和天线2用于发射和接收电磁波信号。电子设备100中的每个天线可用于覆盖单个或多个通信频带。不同的天线还可以复用,以提高天线的利用率。例如:可以将天线1复用为无线局域网的分集天线。
移动通信模块150可以提供应用在电子设备100上的包括2G/3G/4G/5G等无线通信的解决方案。移动通信模块150可以包括至少一个滤波器,开关,功率放大器,低噪声放大器(low noise amplifier,LNA)等。移动通信模块150可以由天线1接收电磁波,并对接收的电磁波进行滤波,放大等处理,传送至调制解调处理器进行解调。移动通信模块150还可以对经调制解调处理器调制后的信号放大,经天线1转为电磁波辐射出去。
调制解调处理器可以包括调制器和解调器。其中,调制器用于将待发送的低频基带信号调制成中高频信号。解调器用于将接收的电磁波信号解调为低频基带信号。随后解调器将解调得到的低频基带信号传送至基带处理器处理。低频基带信号经基带处理器处理后,被传递给应用处理器。应用处理器通过音频设备(不限于扬声器170A,受话器170B等)输出声音信号,或通过显示屏194显示图像或视频。
无线通信模块160可以提供应用在电子设备100上的包括无线局域网(wireless local area networks,WLAN)(如无线保真(wireless fidelity,Wi-Fi)网络),蓝牙(bluetooth,BT),全球导航卫星系统(global navigation satellite system,GNSS),调频(frequency modulation,FM),近距离无线通信技术(near field communication,NFC),红外技术(infrared,IR)等无线通信的解决方案。无线通信模块160可以是集成至少一个通信处理模块的一个或多个器件。无线通信模块160经由天线2接收电磁波,将电磁波信号调频以及滤波处理,将处理后的信号发送到处理器110。无线通信模块160还可以从处理器110接收待发送的信号,对其进行调频,放大,经天线2转为电磁波辐射出去。
在一些实施例中,电子设备100的天线1和移动通信模块150耦合,天线2和无线通信模块160耦合,使得电子设备100可以通过无线通信技术与网络以及其他设备通信。所述无线通信技术可以包括全球移动通讯系统(global system for mobile communications,GSM),通用分组无线服务(general packet radio service,GPRS),码分多址接入(code division multiple access,CDMA),宽带码分多址(wideband code division multiple access,WCDMA),时分码分多址(time-division code division multiple access,TD-SCDMA),长期演进(long term evolution,LTE),BT,GNSS,WLAN,NFC,FM,和/或IR技术等。所述GNSS可以包括全球卫星定位系统(global positioning system,GPS),全球导航卫星系统(global navigation satellite system,GLONASS),北斗卫星导航系统(beidou navigation satellite system,BDS),准天顶卫星系统(quasi-zenith satellite system,QZSS)和/或星基增强系统(satellite based augmentation systems,SBAS)。
电子设备100通过GPU,显示屏194,以及应用处理器等实现显示功能。GPU为图像处理的微处理器,连接显示屏194和应用处理器。GPU用于执行数学和几何计算,用于图形渲染。处理器110可包括一个或多个GPU,其执行程序指令以生成或改变显 示信息。
显示屏194用于显示图像,视频等。该显示屏194包括显示面板。显示面板可以采用液晶显示屏(liquid crystal display,LCD),有机发光二极管(organic light-emitting diode,OLED),有源矩阵有机发光二极体或主动矩阵有机发光二极体(active-matrix organic light emitting diode,AMOLED),柔性发光二极管(flex light-emitting diode,FLED),Miniled,MicroLed,Micro-oLed,量子点发光二极管(quantum dot light emitting diodes,QLED)等。
其中，本申请实施例中的显示屏194可以是触摸屏。即该显示屏194中集成了触摸传感器180K。该触摸传感器180K也可以称为“触控面板”。也就是说，显示屏194可以包括显示面板和触摸面板，由触摸传感器180K与显示屏194组成触摸屏，也称“触控屏”。触摸传感器180K用于检测作用于其上或附近的触摸操作。触摸传感器180K检测到触摸操作后，可以由内核层的驱动（如TP驱动）传递给上层，以确定触摸事件类型。可以通过显示屏194提供与触摸操作相关的视觉输出。在另一些实施例中，触摸传感器180K也可以设置于电子设备100的表面，与显示屏194所处的位置不同。
电子设备100可以通过ISP,摄像头293,视频编解码器,GPU,显示屏194以及应用处理器等实现拍摄功能。ISP用于处理摄像头293反馈的数据。摄像头293用于捕获静态图像或视频。数字信号处理器用于处理数字信号,除了可以处理数字图像信号,还可以处理其他数字信号。视频编解码器用于对数字视频压缩或解压缩。电子设备100可以支持一种或多种视频编解码器。这样,电子设备100可以播放或录制多种编码格式的视频,例如:动态图像专家组(moving picture experts group,MPEG)1,MPEG2,MPEG3,MPEG4等。
NPU为神经网络(neural-network,NN)计算处理器,通过借鉴生物神经网络结构,例如借鉴人脑神经元之间传递模式,对输入信息快速处理,还可以不断的自学习。通过NPU可以实现电子设备100的智能认知等应用,例如:图像识别,人脸识别,语音识别,文本理解等。
外部存储器接口120可以用于连接外部存储卡,例如Micro SD卡,实现扩展电子设备100的存储能力。外部存储卡通过外部存储器接口120与处理器110通信,实现数据存储功能。例如将音乐,视频等文件保存在外部存储卡中。内部存储器121可以用于存储计算机可执行程序代码,所述可执行程序代码包括指令。处理器110通过运行存储在内部存储器121的指令,从而执行电子设备100的各种功能应用以及数据处理。例如,在本申请实施例中,处理器110可以通过执行存储在内部存储器121中的指令,内部存储器121可以包括存储程序区和存储数据区。其中,存储程序区可存储操作系统,至少一个功能所需的应用程序(比如声音播放功能,图像播放功能等)等。存储数据区可存储电子设备100使用过程中所创建的数据(比如音频数据,电话本等)等。此外,内部存储器121可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件,闪存器件,通用闪存存储器(universal flash storage,UFS)等。
电子设备100可以通过音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,以及应用处理器等实现音频功能。例如音乐播放,录音等。
音频模块170用于将数字音频信息转换成模拟音频信号输出,也用于将模拟音频输入转换为数字音频信号。音频模块170还可以用于对音频信号编码和解码。扬声器170A,也称“喇叭”,用于将音频电信号转换为声音信号。受话器170B,也称“听筒”,用于将音频电信号转换成声音信号。麦克风170C,也称“话筒”,“传声器”,用于将声音信号转换为电信号。耳机接口170D用于连接有线耳机。
压力传感器180A用于感受压力信号,可以将压力信号转换成电信号。在一些实施例中,压力传感器180A可以设置于显示屏194。压力传感器180A的种类很多,如电阻式压力传感器,电感式压力传感器,电容式压力传感器等。电容式压力传感器可以是包括至少两个具有导电材料的平行板。当有力作用于压力传感器180A,电极之间的电容改变。电子设备100根据电容的变化确定压力的强度。当有触摸操作作用于显示屏194,电子设备100根据压力传感器180A检测所述触摸操作强度。电子设备100也可以根据压力传感器180A的检测信号计算触摸的位置。在一些实施例中,作用于相同触摸位置,但不同触摸操作强度的触摸操作,可以对应不同的操作指令。本申请实施例中,电子设备100可以通过压力传感器180A获取用户的触摸操作的按压力度。
按键190包括开机键,音量键等。按键190可以是机械按键。也可以是触摸式按键。电子设备100可以接收按键输入,产生与电子设备100的用户设置以及功能控制有关的键信号输入。马达191可以产生振动提示。马达191可以用于来电振动提示,也可以用于触摸振动反馈。指示器192可以是指示灯,可以用于指示充电状态,电量变化,也可以用于指示消息,未接来电,通知等。SIM卡接口195用于连接SIM卡。SIM卡可以通过插入SIM卡接口195,或从SIM卡接口195拔出,实现和电子设备100的接触和分离。电子设备100可以支持1个或N个SIM卡接口,N为大于1的正整数。SIM卡接口195可以支持Nano SIM卡,Micro SIM卡,SIM卡等。
为了便于理解,本申请实施例这里介绍一种技术中所述的垂直同步信号,如垂直同步信号1、垂直同步信号2和垂直同步信号3。
垂直同步信号1:如VSYNC_APP信号。该垂直同步信号1可以用于触发绘制一个或多个图层,并渲染绘制的图层。也就是说,上述垂直同步信号1可用于触发UI线程绘制一个或多个图层,并由Render线程对UI线程绘制的一个或多个图层进行渲染。该垂直同步信号1(如VSYNC_APP信号)是第一垂直同步信号。
垂直同步信号2:如VSYNC_SF信号。该垂直同步信号2可以用于触发对渲染的一个或多个图层进行图层合成得到图像帧。也就是说,上述垂直同步信号2可用于触发合成线程对Render线程渲染的一个或多个图层进行图层合成得到图像帧。该垂直同步信号2(如VSYNC_SF信号)是第二垂直同步信号。
垂直同步信号3:如HW_VSYNC信号。该垂直同步信号3可以用于触发硬件刷新显示图像帧。
其中,垂直同步信号3是由电子设备的显示屏驱动触发的一个硬件信号。本申请实施例中,垂直同步信号3(如HW_VSYNC)的信号周期T3是根据电子设备的显示屏的屏幕刷新率确定的。具体的,垂直同步信号3的信号周期T3是电子设备的显示屏(如LCD或OLED)的屏幕刷新率的倒数。其中,电子设备的屏幕刷新率与该电子设备的帧率可以相同。电子设备的高帧率就是高屏幕刷新率。
例如,电子设备的显示屏的屏幕刷新率和帧率可以为60赫兹(Hz)、70Hz、75Hz、80Hz、90Hz或者120Hz等任一值。以帧率是60Hz为例,上述垂直同步信号3的信号周期为1/60=0.01667秒(s)=16.667毫秒(ms)。以帧率是90Hz为例,上述垂直同步信号3的信号周期为1/90=0.01111秒(s)=11.11毫秒(ms)。需要注意的是,其中,电子设备可能支持多个不同的帧率。电子设备的帧率可以在上述不同的帧率之间切换。本申请实施例中所述的帧率是电子设备当前所使用的帧率。即垂直同步信号3的信号周期是电子设备当前所使用的帧率的倒数。
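上述“信号周期为屏幕刷新率的倒数”的换算，可以用如下简化的Python片段示意（仅为说明性计算）：

```python
def sync_period_ms(refresh_rate_hz: float) -> float:
    """垂直同步信号3的信号周期（毫秒）= 屏幕刷新率的倒数。"""
    return 1000.0 / refresh_rate_hz

# 与上文示例一致：60Hz 约 16.667ms，90Hz 约 11.111ms
assert abs(sync_period_ms(60) - 16.667) < 0.001
assert abs(sync_period_ms(90) - 11.111) < 0.001
```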
需要注意的是,本申请实施例中的垂直同步信号3是一个周期性离散信号。例如,如图2所示,每间隔一个信号周期就会有一个由硬件驱动触发的垂直同步信号3。垂直同步信号1和垂直同步信号2是基于垂直同步信号3产生的,即垂直同步信号3可以是垂直同步信号1和垂直同步信号2的信号源。或者,垂直同步信号1和垂直同步信号2与垂直同步信号3同步。因此,一般而言,垂直同步信号1和垂直同步信号2的信号周期与垂直同步信号3的信号周期相同,且相位一致。例如,如图2所示,垂直同步信号1的信号周期T1,垂直同步信号2的信号周期T2,与垂直同步信号3的信号周期T3相同。
并且,如图2所示,垂直同步信号1、垂直同步信号2,以及垂直同步信号3的相位一致。可以理解的是,实际实施过程中,垂直同步信号1、垂直同步信号2,以及垂直同步信号3之间可能会因为各种因素(如处理性能)存在一定的相位误差。需要注意的是,在理解本申请实施例的方法时,上述相位误差被忽略。
综上所述,上述垂直同步信号1、垂直同步信号2和垂直同步信号3均为周期性离散信号。例如,如图2所示,每间隔一个信号周期T1就会有一个垂直同步信号1,每间隔一个信号周期T2就会有一个垂直同步信号2,每间隔一个信号周期T3就会有一个垂直同步信号3。上述垂直同步信号1、垂直同步信号2和垂直同步信号3的信号周期都可以称为同步周期T Z,T1=T2=T3=T Z。也就是说,本申请实施例中的同步周期是电子设备的帧率的倒数。
需要注意的是,在不同的系统或者架构中,垂直同步信号的名称可能不同。例如,在一些系统或者架构中,上述用于触发绘制一个或多个图层的垂直同步信号(即垂直同步信号1)的名称可能不是VSYNC_APP。但是,无论垂直同步信号的名称是什么,只要是具备类似功能的同步信号,符合本申请实施例提供的方法的技术思路,都应涵盖在本申请的保护范围之内。
为了便于理解,本申请实施例这里结合图3,以上述显示屏是触摸屏,用户在显示屏的操作是触摸操作为例,介绍从“用户手指在触摸屏输入触摸操作”到“触摸屏显示该触摸操作对应的图像”过程中,电子设备的软件处理流程。
如图3所示,电子设备可以包括:触控面板(touch panel,TP)/TP驱动(Driver)10、Input框架(即Input Framework)20、UI框架(即UI Framework)30、Display框架(即Display Framework)40和硬件显示模块50。
如图3所示,电子设备的软件处理流程可以包括以下步骤(1)-步骤(5)。步骤(1):TP IC/TP驱动10中的TP采集用户手指对电子设备的TP的触摸操作后,TP驱动向Event Hub上报相应的触摸事件。步骤(2):Input框架20的Input Reader 线程可以从Event Hub中读取触摸事件,然后向Input Dispatcher线程发送该触摸事件;由Input Dispatcher线程向UI框架30中的UI线程上传该触摸事件。步骤(3):UI框架30中的UI线程(如Do Frame)绘制该触摸事件对应的一个或多个图层;渲染(Render)线程(如Draw Frame)对一个或多个图层进行图层渲染。步骤(4):Display框架40中的合成线程(Surface Flinger)对绘制的一个或多个图层(即渲染后的一个或多个图层)进行图层合成得到图像帧。步骤(5):硬件显示模块50的液晶显示面板(Liquid Crystal Display,LCD)驱动可接收合成的图像帧,由LCD显示合成的图像帧。LCD显示图像帧后,LCD显示的图像可被人眼感知。
一般而言,响应于用户对TP的触摸操作或者UI事件,UI框架可以在垂直同步信号1到来后,调用UI线程绘制触控事件对应的一个或多个图层,再调用Render线程以对该一个或多个图层进行渲染;然后,硬件合成(Hardware Composer,HWC)可以在垂直同步信号2到来后,调用合成线程对绘制的一个或多个图层(即渲染后的一个或多个图层)进行图层合成得到图像帧;最后,硬件显示模块可以在垂直同步信号3到来后,在LCD刷新显示上述图像帧。其中,上述UI事件可以是由用户对TP的触摸操作触发的。或者,该UI事件可以是由电子设备自动触发的。例如,电子设备的前台应用自动切换画面时,可以触发上述UI事件。前台应用是电子设备的显示屏当前显示的界面对应的应用。
其中,TP可以周期性检测用户的触摸操作。TP检测到触摸操作后,可以唤醒上述垂直同步信号1和垂直同步信号2,以触发UI框架基于垂直同步信号1进行图层绘制和渲染,硬件合成HWC基于垂直同步信号2进行图层合成。其中,TP检测触摸操作的检测周期与垂直同步信号3(如HW_VSYNC)的信号周期T3相同。
需要注意的是,UI框架是基于垂直同步信号1周期性的进行图层绘制和渲染的;硬件合成HWC是基于垂直同步信号2周期性的进行图层合成的;LCD是基于垂直同步信号3周期性的进行图像帧刷新的。
示例性的,本申请实施例这里,以上述垂直同步信号1是VSYNC_APP信号,垂直同步信号2是VSYNC_SF信号,垂直同步信号3是HW_VSYNC信号为例,介绍一种技术中电子设备执行绘制、渲染、合成和图像帧显示的流程。
例如,如图4A所示,电子设备的UI线程响应于t 1时刻的VSYNC_APP信号,执行“绘制a”绘制图层a,然后由Render线程执行“渲染a”和“渲染a′”渲染该图层a;电子设备的合成线程响应于t 2时刻的VSYNC_SF信号,执行“图像帧合成a”对上述图层a进行图层合成得到图像帧a;电子设备的LCD响应于t 3时刻的HW_VSYNC信号,执行“图像帧显示a”刷新显示上述图像帧a。
又例如,如图4A所示,电子设备的UI线程响应于t 2时刻的VSYNC_APP信号,执行“绘制b”绘制图层b,然后由Render线程执行“渲染b”和“渲染b′”渲染该图层b;电子设备的合成线程响应于t 3时刻的VSYNC_SF信号,执行“图像帧合成b”对上述图层b进行图层合成得到图像帧b;电子设备的LCD响应于t 4时刻的HW_VSYNC信号,执行“图像帧显示b”刷新显示上述图像帧b。
需要说明的是,图4A所示的“绘制a”和“渲染a”在电子设备的CPU中实现,“渲染a′”在电子设备的GPU中实现。CPU执行“渲染a”是GPU对绘制的图层a进 行图层渲染前的准备,GPU执行“渲染a′”是电子设备正式对绘制的图层a进行图层渲染。图4A所示的“绘制b”和“渲染b”在电子设备的CPU中实现,“渲染b′”在电子设备的GPU中实现。CPU执行“渲染b”是GPU对绘制的图层b进行图层渲染前的准备,GPU执行“渲染b′”是电子设备正式对绘制的图层b进行图层渲染。也就是说,本申请实施例中所述的绘制可以包括:UI线程所执行的图层绘制以及Render线程对UI线程绘制的图层进行图层渲染前的准备。
其中，上述电子设备绘制、渲染和合成图层的过程可以构成一个图形生成消费模型，如图4B所示的图形生成消费模型400。在该图形生成消费模型400中，电子设备的UI线程和Render线程（即渲染器Renderer）作为生产者，绘制和渲染图层；Render线程（即渲染器Renderer）可将完成渲染准备的图层保存在第一缓存队列中，并对第一缓存队列中的图层进行图层渲染；合成线程（即合成器Surface Flinger）作为消费者，从第一缓存队列中读取图层，并对读取的图层进行图层合成得到图像帧，并将该图像帧送到电子设备的LCD（即显示控制器Display Controller）来显示。其中，第一缓存队列是根据应用来分配的，一个应用可以分配一个缓存队列。该第一缓存队列是由Render线程申请，为应用分配的。
其中,上述图形生成和消费的模型中,生产者(如UI线程和Render线程)和消费者(如合成线程)都是根据VSYNC信号来进行图层的生成和消费的。
在不发生卡顿(即不丢帧)的情况下,上述模型中生产和消费的速率是保持一致的。生产者(如Render线程)每隔一个VSYNC周期(如上述同步周期T Z)生成一帧图层(即帧数据)放到第一缓存队列中,消费者(如合成线程)每隔一个VSYNC周期(如上述同步周期T Z)从第一缓存队列中取出一帧图层(即帧数据)进行图层合成(也称为图像帧合成)。即UI线程和Render线程作为生产者的生产周期与合成线程(即Surface Flinger)作为消费者的消费周期相同,均等于上述同步周期T Z
例如,在图4A所示的t x时刻,电子设备的Render线程完成“渲染a”。此时,Render线程可将渲染得到的帧数据a(即图层a)缓存至第一缓存队列,即生产者生产了一帧图层(即帧数据),并将该图层缓存至第一缓存队列。如图4A所示,在t x时刻,第一缓存队列中的帧数据(即图层)的数量由0增加为1(即0->1)。如图5A所示,在t x时刻,帧数据a(即图层a)在第一缓存队列入队。
随后，响应于图4A所示的t 2时刻的VSYNC_SF信号，电子设备的合成线程可执行“图像帧合成a”（也称为图层合成a）。此时，合成线程可从第一缓存队列中读取帧数据a（即图层a），即消费者从第一缓存队列消费了一帧图层。如图4A所示，在t 2时刻，第一缓存队列中的帧数据（即图层）的数量由1减少为0（即1->0）。如图5A所示，在t 2时刻，帧数据a（即图层a）在第一缓存队列出队。
又例如,在图4A所示的t y时刻,电子设备的Render线程完成“渲染b”。此时,Render线程可将渲染得到的帧数据b(即图层b)缓存至第一缓存队列,即生产者生产了一帧图层(即帧数据),并将该图层缓存至第一缓存队列。如图4A所示,在t y时刻,第一缓存队列中的帧数据(即图层)的数量由0增加为1(即0->1)。如图5B所示,在t y时刻,帧数据b(即图层b)在第一缓存队列入队。
随后，响应于图4A所示的t 3时刻的VSYNC_SF信号，电子设备的合成线程可执行“图像帧合成b”（也称为图层合成b）。此时，合成线程可从第一缓存队列中读取帧数据b（即图层b），即消费者从第一缓存队列消费了一帧图层。如图4A所示，在t 3时刻，第一缓存队列中的帧数据（即图层）的数量由1减少为0（即1->0）。如图5B所示，在t 3时刻，帧数据b（即图层b）在第一缓存队列出队。
但是,电子设备响应于上述VSYNC_APP信号、VSYNC_SF信号和HW_VSYNC信号,进行图层的绘制、渲染、合成和刷新显示图像帧的过程中,可能会出现丢帧的现象。这样,会影响显示屏显示图像的连贯性和流畅性,从而影响用户的视觉体验。
其中,电子设备显示图像出现丢帧现象的原因可能是:UI线程和Render线程绘制和渲染耗时过长,无法在一个VSYNC周期(如上述同步周期T Z)内完成绘制和渲染。如此,生产者(如Render线程)就不能按时将帧数据(即渲染的图层)缓存至第一缓存队列中。也就是说,生产者(如Render线程)至少会有一个VSYNC周期没有在第一缓存队列中缓存帧数据。但是,消费者(如合成线程)还是会每隔一个VSYNC周期从第一缓存队列中取出一帧图层(即帧数据)进行图层合成(也称为图像帧合成)。
在这种情况下，生产者的生产速率将会小于消费者的消费速率。如果第一缓存队列中没有缓存足够数量的帧数据，则会出现消费者（如合成线程）在一个VSYNC_SF信号到来后无法从第一缓存队列中读取到帧数据的情况。那么这一个VSYNC周期则无法进行图层合成得到图像帧，也无法刷新显示该图像帧，LCD的显示画面则无法更新，则会出现丢帧的现象。这样，会影响显示屏显示图像的连贯性和流畅性，从而影响用户的视觉体验。
例如,如图6所示,Render线程无法在t 3时刻完成“渲染b”,则无法在t 3时刻将帧数据b(即图层b)缓存至第一缓存队列;在t 3时刻,第一缓存队列中的帧数据的数量为0。因此,响应于t 3时刻的VSYNC_SF信号,合成线程无法从第一缓存队列中读取到帧数据,从而无法进行图层合成得到图像帧,进而在t 4时刻电子设备的LCD无法刷新显示图像帧,则出现了丢帧的现象。
在t 3时刻之后的t z时刻，Render线程才完成“渲染b”；第一缓存队列中的帧数据（即图层）的数量由0增加为1（即0->1）。响应于t z时刻之后t 4时刻的VSYNC_SF信号，合成线程才可以从第一缓存队列中读取到帧数据b（即图层b），第一缓存队列中的帧数据的数量由1减少为0（即1->0）。电子设备的LCD在t 5时刻才可以执行“图像帧显示b”刷新显示图像帧。其中，图6所示的“渲染a”和“渲染b”在电子设备的CPU中实现，“渲染a′”和“渲染b′”在电子设备的GPU中实现。“渲染a”、“渲染a′”、“渲染b”和“渲染b′”的详细描述，可以参考上述实施例对图4A的详细介绍，这里不予赘述。
由图6可知,在t 4时刻-t 5时刻这一同步周期,显示屏显示图像出现丢帧现象。而通过本申请实施例的方法,可以避免显示图像出现丢帧现象,以避免显示屏显示一帧空白图像。也就是说,通过本申请实施例的方法可以降低电子设备显示图像时出现丢帧的可能性,可以保证显示屏显示图像的流畅性,从而提升用户的视觉体验。
需要说明的是,导致电子设备的显示屏显示图像出现丢帧现象的原因不仅是UI线程绘制图层所花费的时长较大或者Render线程渲染图层所花费的时长较大,还可能是因为电子设备的帧率和屏幕刷新率较高。
例如，当电子设备的帧率为60Hz时，VSYNC_APP信号、VSYNC_SF信号和HW_VSYNC信号的信号周期T Z=16.67ms。如果UI线程和Render线程可以在该16.67ms内完成每一帧图层的绘制和渲染，电子设备的显示屏显示图像则不会出现丢帧的现象。
又例如,当电子设备的帧率提高至120Hz后,VSYNC_APP信号、VSYNC_SF信号和HW_VSYNC信号的信号周期T Z=8.33ms。如果UI线程和Render线程可以在该8.33ms内完成每一帧图层的绘制和渲染,电子设备的显示屏显示图像则不会出现丢帧的现象。
可以理解的是,相比于在16.66ms内完成一帧图层的绘制和渲染,电子设备在8.33ms内完成一帧图层的绘制和渲染的难度较大。因此,在高帧率的场景下,电子设备的显示屏显示图像出现丢帧现象的可能性较高。
需要说明的是,电子设备显示图像出现丢帧现象的原因可能是电子设备无法在一帧(即一个同步周期T Z)内完成一帧图层的绘制和渲染,还可能是电子设备无法在一帧(即一个同步周期T Z)内完成一帧图层的图层合成。假设电子设备的UI线程和Render线程处理一帧图层所花费的时间为t cpu,合成线程处理一帧图层所花费的时间为t SF。电子设备显示图像不丢帧的条件为Max{t cpu,t SF}<T Z。其中,Max{}表示取{}中的最大值。以下实施例中,以UI线程和Render线程无法在一帧内完成一帧图层的绘制和渲染导致电子设备显示图像丢帧为例,介绍本申请实施例的方法。
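上述不丢帧条件Max{t cpu,t SF}<T Z可以用如下简化的Python片段示意（其中的耗时数值仅为假设示例）：

```python
def no_frame_drop(t_cpu_ms: float, t_sf_ms: float, t_z_ms: float) -> bool:
    """不丢帧条件：绘制渲染耗时与图层合成耗时的最大值小于同步周期 T_Z。"""
    return max(t_cpu_ms, t_sf_ms) < t_z_ms

# 60Hz（T_Z=16.67ms）下：绘制渲染 15ms、合成 8ms 不丢帧；绘制渲染 18ms 则丢帧
assert no_frame_drop(15.0, 8.0, 16.67) is True
assert no_frame_drop(18.0, 8.0, 16.67) is False
```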
目前的一些方案中,为了降低电子设备显示图像时出现丢帧的可能性,保证显示屏显示图像的流畅性,提升了电子设备的CPU和GPU的工作频率。其中,提升电子设备的CPU和GPU的工作频率,可以提升UI线程和Render线程的处理速度,从而可以减少UI线程和Render线程绘制图层和渲染图层所花费的时长,进而可以降低电子设备显示图像时出现丢帧的可能性。但是,提升电子设备的CPU和GPU的工作频率,会增加电子设备的功耗,减少电子设备的续航时间。由此可见,通过提升工作频率的方式降低丢帧率的方案,能效比较低。
本申请实施例提供一种基于垂直同步信号的控制方法，可以调整VSYNC_APP信号的信号周期，使VSYNC_APP信号的信号周期小于VSYNC_SF信号的信号周期。这样，可以使UI线程和Render线程在同一时长内生产的帧数据的数量大于合成线程在同一时长内消费的帧数据的数量，即图7所示的生产速率大于消费速率。这样，如图7所示，第一缓存队列中则可以缓存足够的帧数据，可供合成线程消费。如此，则不会出现合成线程响应于VSYNC_SF信号无法从第一缓存队列中读取帧数据的问题，则可以降低电子设备显示图像时出现丢帧的可能性。采用上述方案，可以在不增加电子设备功耗的前提下，降低电子设备显示图像时出现丢帧的可能性，保证显示屏显示图像的流畅性。
示例性的,本申请实施例提供的方法的执行主体可以是用于处理图像的装置。该装置可以是上述电子设备中的任一种(例如,该装置可以为图1所示的电子设备100)。或者,该装置还可以为电子设备的中央处理器(Central Processing Unit,CPU),或者电子设备中的用于执行本申请实施例提供的方法的控制模块。本申请实施例中以上述电子设备(如手机)执行图像处理方法为例,介绍本申请实施例提供的方法。
实施例(一)
本申请实施例提供一种基于垂直同步信号的控制方法。该方法可以应用于包括显 示屏的电子设备。在该实施例中,垂直同步信号1(如VSYNC_APP信号)是第一垂直同步信号,垂直同步信号2(如VSYNC_SF信号)是第二垂直同步信号。
如图8所示,该基于垂直同步信号的控制方法可以包括S801-S807。该基于垂直同步信号的控制方法可以包括“调整垂直同步信号前的控制流程”和“调整垂直同步信号,以及调整后的控制流程”。如图8所示,上述“调整垂直同步信号前的控制流程”可以包括S801-S803。
S801、电子设备响应于VSYNC_APP信号,绘制第一应用的第一图层,并将第一图层缓存至第一缓存队列。
其中,本申请实施例中所述的绘制可以包括:UI线程所执行的图层绘制以及Render线程对UI线程绘制的图层进行图层渲染前的准备。例如,S801所述的绘制可以包括图6所示的“绘制a”和“渲染a”或者“绘制b”和“渲染b”。又例如,S801所述的绘制可以包括图10所示的“绘制1”和“渲染1”或者“绘制2”和“渲染2”。
具体的,S801可以包括:UI线程响应于VSYNC_APP信号,绘制第一图层;Render线程对UI线程绘制的第一图层进行渲染准备,并将第一图层缓存至第一缓存队列。需要说明的是,在Render线程将第一图层缓存至第一缓存队列后,该Render线程可以正式对该第一缓存队列中缓存的第一图层进行渲染。之后,合成线程可以对第一缓存队列中缓存的图层进行图层合成得到图像帧。其中,上述第一应用是前台应用。前台应用的详细介绍,可以参考实施例(六)中的相关内容。
S802、电子设备响应于VSYNC_SF信号,对第一缓存队列中缓存的图层进行图层合成得到图像帧。
S803、电子设备响应于HW_VSYNC信号,刷新显示图像帧。
一般而言,电子设备的UI线程和Render线程向第一缓存队列生产帧数据(即图层)的生产周期T S,等于合成线程消费第一缓存队列中的帧数据(即图层)的消费周期T X,即T S=T X=T Z。其中,上述VSYNC_APP信号的信号周期是UI线程和Render线程向第一缓存队列生产帧数据的生产周期T S。上述VSYNC_SF信号的信号周期是合成线程消费第一缓存队列中的帧数据的消费周期T X
如上述S801-S803所述,电子设备按照相同的生产周期T S和消费周期T X,控制上述VSYNC_APP信号和VSYNC_SF信号。在生产周期T S=消费周期T X的情况下,如果UI线程和Render线程可以在一帧(即一个同步周期T Z)内完成一帧图层的绘制和渲染准备,则UI线程和Render线程(即生产者)的生产速率等于合成线程(即消费者)的消费速率。电子设备显示图像不会出现丢帧的现象。
但是,如果UI线程和Render线程无法在一帧(即一个同步周期T Z)内完成一帧图层的绘制和渲染准备,则合成线程(即消费者)的消费速率大于UI线程和Render线程(即生产者)的生产速率。这样,则会出现第一缓存队列中缓存的帧数据(即图层)的数量为0,消费者(如合成线程)在一个VSYNC_SF信号到来后,无法从第一缓存队列中读取到帧数据。那么,这一个VSYNC周期则无法进行图层合成得到图像帧,并无法刷新显示该图像帧,LCD的显示画面则无法更新,则会出现图6所示的丢帧的现象。这样,会影响显示屏显示图像的连贯性和流畅性,从而影响用户的视觉体验。
由上述描述可知,电子设备显示图像出现丢帧现象的原因在于:消费者(如合成 线程)在一个VSYNC_SF信号到来后,无法从第一缓存队列中读取到帧数据。反之,如果消费者(如合成线程)在一个VSYNC_SF信号到来后,可以从第一缓存队列中读取到帧数据;那么,电子设备显示图像则不会出现丢帧的现象。
其中,消费者(如合成线程)在一个VSYNC_SF信号到来后,可以从第一缓存队列中读取到帧数据(即图层)的前提是:VSYNC_SF信号到来时,第一缓存队列中缓存有帧数据(即图层)。要保证VSYNC_SF信号到来时第一缓存队列中缓存有帧数据,如图7所示,则要求生产者(即UI线程和Render线程)的生产速率大于消费者(即合成线程)的消费速率。
要保证生产者(即UI线程和Render线程)的生产速率大于消费者(即合成线程)的消费速率,则要求在同一时长内生产者生产的帧数据的数量大于消费者消费的帧数据的数量。如此,如图7所示,则要求生产者的生产周期T S小于消费者的消费周期T X
本申请实施例中,可以调整生产者的生产周期T S,使生产者的生产周期T S小于消费者的消费周期T X。这样,可以降低电子设备显示图像时出现丢帧的可能性,保证显示屏显示图像的流畅性。具体的,如图8所示,上述“调整垂直同步信号,以及调整后的控制流程”可以包括S804-S807。
S804、电子设备将VSYNC_APP信号的信号周期调整为第一时长,该第一时长小于VSYNC_SF信号的信号周期。
可以理解的是,由于第一时长小于VSYNC_SF信号的信号周期;因此,电子设备将VSYNC_APP信号的信号周期调整为第一时长之后,VSYNC_APP信号的信号周期小于VSYNC_SF信号的信号周期。
其中,VSYNC_APP信号的信号周期是电子设备的生产周期,VSYNC_SF信号的信号周期是电子设备的消费周期。示例性的,如图9或图10所示,假设调整前电子设备的生产周期为T S,调整后电子设备的生产周期为T S′,电子设备的消费周期为T X。即第一时长等于T S′。其中,T S=T X=T Z,T Z是电子设备的同步周期,T Z等于电子设备的屏幕刷新率的倒数。T S′<T X,T S′<T Z
假设电子设备在图9或图10所示的t Q时刻执行S804,将生产周期(即VSYNC_APP信号的信号周期)由T S调整为T S′。如图9或图10所示,在t Q时刻之前,电子设备(如电子设备的UI线程和Render线程)的生产周期为T S,如t 1时刻的VSYNC_APP信号与t 2时刻的VSYNC_APP信号相距T S到来。
如图9或图10所示,在t Q时刻之后,电子设备的生产周期为T S′。例如,t 2时刻的VSYNC_APP信号到来后,经过T S′,在t A时刻下一个VSYNC_APP信号到来。又例如,t A时刻的VSYNC_APP信号到来后,经过T S′,在t B时刻下一个VSYNC_APP信号到来。又例如,t B时刻的VSYNC_APP信号到来后,经过T S′,在t C时刻下一个VSYNC_APP信号到来。又例如,t C时刻的VSYNC_APP信号到来后,经过T S′,在t D时刻下一个VSYNC_APP信号到来。
需要说明的是,在电子设备执行S804调整VSYNC_APP信号的信号周期的前后,VSYNC_SF信号的信号周期(即电子设备的消费周期)不变。
例如,如图9所示,在t Q时刻之前,电子设备(如电子设备的合成线程)的消费周期为T X,T X=T Z。t 1时刻的VSYNC_SF信号到来后,经过T X,在t 2时刻下一个VSYNC_SF 信号到来。如图9或图10所示,在t Q时刻之后,电子设备(如电子设备的合成线程)的消费周期仍为T X,T X=T Z。例如,t 2时刻的VSYNC_SF到来后,经过T X,在t 3时刻下一个VSYNC_SF信号到来。又例如,t 3时刻的VSYNC_SF到来后,经过T X,在t 4时刻下一个VSYNC_SF信号到来。又例如,t 4时刻的VSYNC_SF到来后,经过T X,在t 5时刻下一个VSYNC_SF信号到来。
示例性的,电子设备可以将VSYNC_APP信号的信号周期调小△T1,使VSYNC_APP信号的信号周期等于第一时长T S′,使VSYNC_APP信号的信号周期小于VSYNC_SF信号的信号周期。也就是说,T S-△T1=T S′,0<△T1<T Z。其中,△T1的详细描述,可以参考实施例(三)中的相关内容,这里不予赘述。
S805、电子设备响应于调整后的VSYNC_APP信号,绘制第一图层,并将第一图层缓存至第一缓存队列。
S806、电子设备响应于VSYNC_SF信号,对第一缓存队列中缓存的图层进行图层合成得到图像帧。
S807、电子设备响应于HW_VSYNC信号,刷新显示图像帧。
示例性的,本申请实施例这里结合附图10介绍上述“调整垂直同步信号前的控制流程”,即电子设备执行S804之前的控制流程,如S801-S803。
例如,在电子设备执行S804之前,电子设备执行S801,UI线程响应于t 1时刻的VSYNC_APP信号,执行图10所示的“绘制1”绘制图层1,Render线程执行“渲染1”对该图层1进行渲染准备,并将帧数据1(即图层1)缓存至第一缓存队列。如图10所示,Render线程还可以执行“渲染1′”渲染第一缓存队列中的帧数据1(即图层1)。如图10所示,Render线程在t 1时刻之后的t 1′时刻完成“渲染1”,在t 1′时刻将帧数据1(即图层1)缓存至第一缓存队列;如图10所示,在t 1′时刻,第一缓存队列中帧数据的数量由0增加为1(即0->1)。
随后,电子设备执行S802,合成线程响应于t 2时刻的VSYNC_SF信号,执行图10所示的“图像帧合成1”对上述帧数据1(即图层1)进行图层合成得到图像帧1。在t 1′时刻之后的t 2时刻,合成线程从第一缓存队列中读取上述帧数据1(即图层1),该帧数据1(即图层1)从第一缓存队列出队。如图10所示,在t 2时刻,第一缓存队列中帧数据的数量由1减少为0(即1->0)。
最后，电子设备执行S803，电子设备的LCD响应于t 3时刻的HW_VSYNC信号，执行图10所示的“图像帧显示1”刷新显示上述图像帧1。
又例如,在电子设备执行S804之前,电子设备执行S801,UI线程响应于t 2时刻的VSYNC_APP信号,执行图10所示的“绘制2”绘制图层2,Render线程执行“渲染2”对该图层2进行渲染准备,并将帧数据2(即图层2)缓存至第一缓存队列。如图10所示,Render线程还可以执行“渲染2′”渲染第一缓存队列中的帧数据2(即图层2)。如图10所示,Render线程在t 2时刻之后的t 2′时刻完成“渲染2”,在t 2′时刻将帧数据2(即图层2)缓存至第一缓存队列;如图10所示,在t 2′时刻,第一缓存队列中帧数据的数量由0增加为1(即0->1)。
随后,电子设备执行S802,合成线程响应于t 3时刻的VSYNC_SF信号,执行图10所示的“图像帧合成2”对上述帧数据2(即图层2)进行图层合成得到图像帧2。在 t 2′时刻之后的t 2时刻,合成线程从第一缓存队列中读取上述帧数据2(即图层2),该帧数据2(即图层2)从第一缓存队列出队。如图10所示,在t 2时刻,第一缓存队列中帧数据的数量由1减少为0(即1->0)。
最后，电子设备执行S803，电子设备的LCD响应于t 4时刻的HW_VSYNC信号，执行图10所示的“图像帧显示2”刷新显示上述图像帧2。
示例性的,本申请实施例这里结合附图10介绍上述“调整垂直同步信号后的控制流程”,即电子设备执行S804之后的控制流程,如S805-S807。
例如,电子设备在t 2时刻之后的t Q时刻执行S804。在电子设备执行S804之后,VSYNC_APP信号的信号周期(即生产周期)由T S变为T S′,T S′<T S;VSYNC_SF信号的信号周期(即消费周期)T X不变,T S=T X=T Z
在电子设备执行S804之后，在t 2时刻的VSYNC_APP信号到来时刻（即t 2时刻）之后的t A时刻，下一个VSYNC_APP信号到来。该t A时刻与t 2时刻相距T S′。电子设备可以执行S805，UI线程响应于t A时刻的VSYNC_APP信号，执行图10所示的“绘制3”绘制图层3，Render线程执行“渲染3”对该图层3进行渲染准备，并将帧数据3（即图层3）缓存至第一缓存队列。如图10所示，Render线程还可以执行“渲染3′”渲染第一缓存队列中的帧数据3（即图层3）。如图10所示，Render线程在t A时刻之后的t 3′时刻完成“渲染3”，在t 3′时刻将帧数据3（即图层3）缓存至第一缓存队列；如图10所示，在t 3′时刻，第一缓存队列中帧数据的数量由0增加为1（即0->1）。
随后,电子设备执行S806,合成线程响应于t 4时刻的VSYNC_SF信号,执行图10所示的“图像帧合成3”对上述帧数据3(即图层3)进行图层合成得到图像帧3。在t 3′时刻之后的t 4时刻,合成线程从第一缓存队列中读取上述帧数据3(即图层3),该帧数据3(即图层3)从第一缓存队列出队。如图10所示,在t 4时刻,第一缓存队列中帧数据的数量由1减少为0(即1->0)。
最后，电子设备执行S807，电子设备的LCD响应于t 5时刻的HW_VSYNC信号，执行图10所示的“图像帧显示3”刷新显示上述图像帧3。
又例如,在电子设备执行S804之后,在t A时刻的VSYNC_APP信号到来时刻(即t A时刻)之后的t B时刻,下一个VSYNC_APP信号到来。该t B时刻与t A时刻相距T S′。电子设备可以执行S805,UI线程响应于t B时刻的VSYNC_APP信号,执行图10所示的“绘制4”绘制图层4,Render线程执行“渲染4”对该图层4进行渲染准备,并将帧数据4(即图层4)缓存至第一缓存队列。如图10所示,Render线程还可以执行“渲染4′”渲染第一缓存队列中的帧数据4(即图层4)。如图10所示,Render线程在t B时刻之后的t 4′时刻完成“渲染4”,在t 4′时刻将帧数据4(即图层4)缓存至第一缓存队列;如图10所示,在t 4′时刻,第一缓存队列中帧数据的数量由0增加为1(即0->1)。
随后,电子设备执行S806,合成线程响应于t 5时刻的VSYNC_SF信号,执行图10所示的“图像帧合成4”对上述帧数据4(即图层4)进行图层合成得到图像帧4。在t 4′时刻之后的t 5时刻,合成线程从第一缓存队列中读取上述帧数据4(即图层4),该帧数据4(即图层4)从第一缓存队列出队。如图10所示,在t 5时刻,第一缓存队列中帧数据的数量由1减少为0(即1->0)。
最后，电子设备执行S807，电子设备的LCD响应于t 6时刻的HW_VSYNC信号，执行图10所示的“图像帧显示4”刷新显示上述图像帧4。
同样的，在电子设备执行S804之后，在t B时刻的VSYNC_APP信号到来时刻（即t B时刻）之后的t C时刻，下一个VSYNC_APP信号到来。该t C时刻与t B时刻相距T S′；在t C时刻的VSYNC_APP信号到来时刻（即t C时刻）之后的t D时刻，下一个VSYNC_APP信号到来。该t D时刻与t C时刻相距T S′。
可以理解的是,由于VSYNC_APP信号的信号周期(即生产周期)T S′小于VSYNC_SF信号的信号周期(即消费周期)T X;因此,电子设备执行上述S805-S807一段时间后,UI线程和Render线程(即生产者)所生产的帧数据(即图层)的数量将大于合成线程(即消费者)所消费的帧数据(即图层)的数量。例如,在一段时间后,UI线程和Render线程(即生产者)所生产的帧数据(即图层)的数量比合成线程(即消费者)所消费的帧数据(即图层)的数量大1。如此,响应于一个VSYNC_SF信号,合成线程从第一缓存队列中读取一帧图层后,该第一缓存队列还缓存有另一帧图层。这样,即使UI线程和Render线程无法在一帧(即一个同步周期T Z)内完成一帧图层的绘制和渲染,合成线程在一个VSYNC_SF信号到来后,也可以从第一缓存队列中读取到帧数据,则可以避免出现丢帧的现象。
例如,在电子设备的屏幕刷新率为60Hz的情况下,电子设备的同步周期T Z=16.67ms。电子设备执行S804之前,VSYNC_APP信号的信号周期(即生产周期)T S,以及VSYNC_SF信号的信号周期(即消费周期)T X均为16.67ms,即T Z=T S=T X=16.67ms。
假设电子设备执行S804,上述生产周期的调整值△T1=0.5ms。那么,调整后VSYNC_APP信号的信号周期(即生产周期)T S′=T S-△T1=16.67-0.5=16.17ms。其中,16.17/0.5=32.34,取整为33。在不出现丢帧的情况下,经过33个T S′(即33*T S′=33*16.17=533.61ms),UI线程和Render线程(即生产者)可生产33个帧数据(即图层),而合成线程(即消费者)可消费32(533.61/16.67≈32)个帧数据(即图层)。这样,生产者(如Render线程)每隔一个T S′生成一帧图层(即帧数据)放到第一缓存队列中,消费者(如合成线程)每隔一个T X从第一缓存队列中取出一帧图层(即帧数据)进行图层合成,从第34个T S′开始,第一缓存队列中则至少可以缓存一帧图层(即帧数据)。这样,即使UI线程和Render线程无法在一帧(即一个同步周期T Z)内完成一帧图层的绘制和渲染,合成线程在一个VSYNC_SF信号到来后,也可以从第一缓存队列中读取到帧数据,则可以避免出现丢帧的现象。
又例如，在电子设备的屏幕刷新率为90Hz的情况下，电子设备的同步周期T Z=11.11ms。电子设备执行S804之前，VSYNC_APP信号的信号周期（即生产周期）T S，以及VSYNC_SF信号的信号周期（即消费周期）T X均为11.11ms，即T Z=T S=T X=11.11ms。
假设电子设备执行S804,上述生产周期的调整值△T1=0.2ms。那么,调整后VSYNC_APP信号的信号周期(即生产周期)T S′=T S-△T1=11.11-0.2=10.91ms。其中,10.91/0.2=54.55,取整为55。在不出现丢帧的情况下,经过55个T S′(即55*T S′=55*10.91=600.05ms),UI线程和Render线程(即生产者)可生产55个帧数据(即图层),而合成线程(即消费者)可消费54(600.05/11.11≈54)个帧数据(即图层)。这样,生产者(如Render线程)每隔一个T S′生成一帧图层(即帧数据)放到第一缓 存队列中,消费者(如合成线程)每隔一个T X从第一缓存队列中取出一帧图层(即帧数据)进行图层合成,从第56个T S′开始,第一缓存队列中则至少可以缓存一帧图层(即帧数据)。这样,即使UI线程和Render线程无法在一帧(即一个同步周期T Z)内完成一帧图层的绘制和渲染,合成线程在一个VSYNC_SF信号到来后,也可以从第一缓存队列中读取到帧数据,则可以避免出现丢帧的现象。
又例如，在电子设备的屏幕刷新率为120Hz的情况下，电子设备的同步周期T Z=8.33ms。电子设备执行S804之前，VSYNC_APP信号的信号周期（即生产周期）T S，以及VSYNC_SF信号的信号周期（即消费周期）T X均为8.33ms，即T Z=T S=T X=8.33ms。
假设电子设备执行S804,上述生产周期的调整值△T1=0.1ms。那么,调整后VSYNC_APP信号的信号周期(即生产周期)T S′=T S-△T1=8.33-0.1=8.23ms。其中,8.23/0.1=82.3,取整为83。在不出现丢帧的情况下,经过83个T S′(即83*T S′=83*8.23=683.09ms),UI线程和Render线程(即生产者)可生产83个帧数据(即图层),而合成线程(即消费者)可消费82(683.09/8.33≈82)个帧数据(即图层)。这样,生产者(如Render线程)每隔一个T S′生成一帧图层(即帧数据)放到第一缓存队列中,消费者(如合成线程)每隔一个T X从第一缓存队列中取出一帧图层(即帧数据)进行图层合成,从第84个T S′开始,第一缓存队列中则至少可以缓存一帧图层(即帧数据)。这样,即使UI线程和Render线程无法在一帧(即一个同步周期T Z)内完成一帧图层的绘制和渲染,合成线程在一个VSYNC_SF信号到来后,也可以从第一缓存队列中读取到帧数据,则可以避免出现丢帧的现象。
综上所述,采用上述方案,可以在不增加电子设备功耗的前提下,降低电子设备显示图像时出现丢帧的可能性,保证显示屏显示图像的流畅性。
电子设备执行S804之后,VSYNC_APP信号的信号周期变小。在这种情况下,可能会存在这样的疑问:VSYNC_APP信号的信号周期调小之前,电子设备显示图像都可能会因为无法在一帧(即VSYNC_APP信号的信号周期)内完成绘制和渲染而丢帧;VSYNC_APP信号的信号周期调小后,是不是会加剧丢帧问题。
从整个处理过程来说,将VSYNC_APP信号的信号周期调小,并不会加剧丢帧问题。原因在于:即使将VSYNC_APP信号的信号周期调小,大多数图层是能够在一帧内完成绘制和渲染的;只有个别图层无法在一帧内完成绘制和渲染。并且,将VSYNC_APP信号的信号周期调小后,在相同的时长内,生产者(即UI线程和Render线程)可以生产更多的帧数据。这样,可以使第一缓存队列中缓存足够的帧数据。即使UI线程和Render线程无法在一帧(即一个同步周期T Z)内完成一帧图层的绘制和渲染,合成线程在一个VSYNC_SF信号到来后,也可以从第一缓存队列中读取到帧数据,则可以避免出现丢帧的现象。
实施例(二)
本实施例这里以4*T S′=3*T X为例,结合附图11说明本申请实施例的效果。如图11所示,t 1时刻(即t a时刻)-t 4时刻(即t e时刻)这段时间包括3个T X,t a时刻(即t 1时刻)-t e时刻(即t 4时刻)这段时间包括4个T S′。
例如,如图11所示,电子设备在t 1时刻之前执行S804。在电子设备执行S804之后,VSYNC_APP信号的信号周期(即生产周期)为T S′,T S′<T S;VSYNC_SF信号的信 号周期(即消费周期)T X不变,T S=T X=T Z。以下按照时间的先后顺序,介绍电子设备执行图11所示的绘制、渲染、图像帧合成和图像帧显示的流程。
在电子设备执行S804之后,在t 1时刻(即t a时刻)的VSYNC_APP信号到来时刻(即t 1时刻/t a时刻)之后的t b时刻,下一个VSYNC_APP信号到来。该t b时刻与t a时刻相距T S′。电子设备的UI线程响应于t a时刻的VSYNC_APP信号,执行图11所示的“绘制A”绘制图层A,Render线程执行“渲染A”对图层A进行渲染准备,并将帧数据A(即图层A)缓存至第一缓存队列。如图11所示,Render线程还可以执行“渲染A′”渲染第一缓存队列中的帧数据A(即图层A)。如图11所示,Render线程在t a时刻之后的t A′时刻完成“渲染A”,在t A′时刻将帧数据A(即图层A)缓存至第一缓存队列;如图11所示,在t A′时刻,第一缓存队列中帧数据的数量由0增加为1(即0->1)。
在t A′时刻之后,电子设备的UI线程响应于t b时刻的VSYNC_APP信号,执行图11所示的“绘制B”绘制图层B,Render线程执行“渲染B”对图层B进行渲染准备,并将帧数据B(即图层B)缓存至第一缓存队列。如图11所示,Render线程还可以执行“渲染B′”渲染第一缓存队列中的帧数据B(即图层B)。
在t b时刻之后,电子设备的合成线程响应于t 2时刻的VSYNC_SF信号,执行图11所示的“图像帧合成A”对上述帧数据A(即图层A)进行图层合成得到图像帧A。在t 2时刻,合成线程从第一缓存队列中读取上述帧数据A(即图层A),该帧数据A(即图层A)从第一缓存队列出队。如图11所示,在t 2时刻,第一缓存队列中帧数据的数量由1减少为0(即1->0)。
如图11所示,Render线程在t 2时刻之后的t B′时刻完成“渲染B”,在t B′时刻将帧数据B(即图层B)缓存至第一缓存队列。如图11所示,在t B′时刻,第一缓存队列中帧数据的数量由0增加为1(即0->1)。
在t B′时刻之后,电子设备的UI线程响应于t c时刻的VSYNC_APP信号,执行图11所示的“绘制C”绘制图层C,Render线程执行“渲染C”对该图层C进行渲染准备,并将帧数据C(即图层C)缓存至第一缓存队列。如图11所示,Render线程还可以执行“渲染C′”渲染第一缓存队列中的帧数据C(即图层C)。
在t c时刻之后,电子设备的合成线程响应于t 3时刻的VSYNC_SF信号,执行图11所示的“图像帧合成B”对上述帧数据B(即图层B)进行图层合成得到图像帧B。在t 3时刻,合成线程从第一缓存队列中读取上述帧数据B(即图层B),该帧数据B(即图层B)从第一缓存队列出队。如图11所示,在t 3时刻,第一缓存队列中帧数据的数量由1减少为0(即1->0)。电子设备的LCD响应于t 3时刻的HW_VSYNC信号,执行图11所示的“图像帧显示A”刷新显示上述图像帧A。
如图11所示,Render线程在t 3时刻之后的t C′时刻完成“渲染C”,在t C′时刻将帧数据C(即图层C)缓存至第一缓存队列;如图11所示,在t C′时刻,第一缓存队列中帧数据的数量由0增加为1(即0->1)。在t C′时刻之后,电子设备的UI线程响应于t d时刻的VSYNC_APP信号,执行图11所示的“绘制D”绘制图层D,Render线程执行“渲染D”对图层D进行渲染准备,并将帧数据D(即图层D)缓存至第一缓存队列。如图11所示,Render线程还可以执行“渲染D′”渲染第一缓存队列中的帧 数据D(即图层D)。
如图11所示,Render线程在t d时刻之后的t D′时刻完成“渲染D”,在t D′时刻将帧数据D(即图层D)缓存至第一缓存队列;如图11所示,在t D′时刻,第一缓存队列中帧数据的数量由1增加为2(即1->2)。
在t D′时刻之后，电子设备的合成线程响应于t 4时刻（即t e时刻）的VSYNC_SF信号，执行图11所示的“图像帧合成C”对上述帧数据C（即图层C）进行图层合成得到图像帧C。在t 4时刻，合成线程从第一缓存队列中读取上述帧数据C（即图层C），该帧数据C（即图层C）从第一缓存队列出队。如图11所示，在t 4时刻，第一缓存队列中帧数据的数量由2减少为1（即2->1）。电子设备的LCD响应于t 4时刻的HW_VSYNC信号，执行图11所示的“图像帧显示B”刷新显示上述图像帧B。
电子设备的UI线程响应于t 4时刻/t e时刻的VSYNC_APP信号,执行图11所示的“绘制E”绘制图层E,Render线程执行“渲染E”对该图层E进行渲染准备,并将帧数据E(即图层E)缓存至第一缓存队列。如图11所示,Render线程还可以执行“渲染E′”渲染第一缓存队列中的帧数据E(即图层E)。
如图11所示,Render线程在t E′时刻完成“渲染E”,在t E′时刻将帧数据E(即图层E)缓存至第一缓存队列;如图11所示,在t E′时刻,第一缓存队列中帧数据的数量由1增加为2(即1->2)。
在t E′时刻之后，电子设备的合成线程响应于t 5时刻的VSYNC_SF信号，执行图11所示的“图像帧合成D”对上述帧数据D（即图层D）进行图层合成得到图像帧D。在t 5时刻，合成线程从第一缓存队列中读取上述帧数据D（即图层D），该帧数据D（即图层D）从第一缓存队列出队。如图11所示，在t 5时刻，第一缓存队列中帧数据的数量由2减少为1（即2->1）。电子设备的LCD响应于t 5时刻的HW_VSYNC信号，执行图11所示的“图像帧显示C”刷新显示上述图像帧C。
如图11所示,虽然UI线程执行“绘制E”和Render线程执行“渲染E”所花费的时间超过一帧。但是,在图11所示的t 6时刻,第一缓存队列中缓存有一帧图层(即帧数据),如第一缓存队列中帧数据的数量为1。因此,电子设备的合成线程响应于t 6时刻的VSYNC_SF信号,可以从第一缓存队列读取到帧数据,如图11所示,电子设备显示图像并未出现丢帧的现象。由此可见,采用本申请实施例的方法,可以降低电子设备显示图像时出现丢帧的可能性,保证显示屏显示图像的流畅性。
实施例(三)
示例性的,电子设备可以将VSYNC_APP信号的信号周期调小△T1,使VSYNC_APP信号的信号周期小于VSYNC_SF信号的信号周期。也就是说,T S-△T1=T S′,0<△T1<T Z。T S′是第一时长。
在一种实现方式中,△T1可以预先配置在电子设备中。例如,△T1可以为0.1毫秒(ms),0.2ms或者0.5ms等任一数值。
在另一种实现方式中,电子设备可以根据VSYNC_APP信号的信号周期T X(T X=T S)与第一统计周期的第一绘制帧长的差值确定上述△T1。△T1小于或等于该差值。其中,第一绘制帧长是电子设备绘制图层所需的时长。其中,上述第一统计周期可以包括多个同步周期。
示例性的,电子设备可以统计第一统计周期内UI线程多次绘制图层中每次绘制图层所需的时长。然后,电子设备可以将该多次绘制图层所需的时长的平均值作为第一绘制帧长。或者,电子设备可以将该多次绘制图层所需的时长中最大的时长作为第一绘制帧长。
例如,假设电子设备在第一统计周期内进行了3次图层绘制和渲染准备。电子设备绘制图层a并对图层a进行渲染准备所需的时长为时长a。电子设备绘制图层b并对图层b进行渲染准备所需的时长为时长b。电子设备绘制图层c并对图层c进行渲染准备所需的时长为时长c。电子设备可以计算时长a、时长b和时长c的平均值得到第一绘制帧长。或者,电子设备可以将时长a、时长b和时长c中最大的时长作为第一绘制帧长。
在另一种实现方式中,电子设备可以按照用户的需求,计算上述△T1。例如,用户期望电子设备可以在预设数量(如K帧)后可解决一帧的丢帧。其中,(K+1)*T S′=K*T S
由(K+1)*T S′=K*T S可以得出:T S′=T S*[K/(K+1)],△T1=T S-T S′=T S-T S*[K/(K+1)]。由△T1=T S-T S′=T S-T S*[K/(K+1)]可以得出:△T1=T S/(K+1)。
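上述由(K+1)*T S′=K*T S推导出的△T1=T S/(K+1)，可用如下简化的Python片段示意（K的取值仅为假设示例）：

```python
def delta_t1_ms(t_s_ms: float, k: int) -> float:
    """用户期望 K 帧后追回一帧丢帧时的周期调整量：△T1 = T_S/(K+1)。"""
    return t_s_ms / (k + 1)

def adjusted_period_ms(t_s_ms: float, k: int) -> float:
    """调整后的生产周期：T_S' = T_S - △T1 = T_S*K/(K+1)。"""
    return t_s_ms - delta_t1_ms(t_s_ms, k)

# 验证推导：(K+1)*T_S' == K*T_S（以 60Hz、K=32 为假设示例）
t_s, k = 16.67, 32
assert abs((k + 1) * adjusted_period_ms(t_s, k) - k * t_s) < 1e-9
```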
在另一种实现方式中,电子设备可以根据电子设备的屏幕刷新率确定△T1。电子设备的屏幕刷新率越高,则△T1越小;电子设备的屏幕刷新率越低,则△T1越大。
其中,电子设备的同步周期T Z等于电子设备的屏幕刷新率的倒数。电子设备执行S804之前,VSYNC_APP信号的信号周期(即生产周期)T S,以及VSYNC_SF信号的信号周期(即消费周期)T X等于同步周期T Z
电子设备的屏幕刷新率越高,T Z、T S和T X则越小。例如,在电子设备的屏幕刷新率为60Hz的情况下,T Z=T S=T X=16.67ms;在电子设备的屏幕刷新率为90Hz的情况下,T Z=T S=T X=11.11ms;在电子设备的屏幕刷新率为120Hz的情况下,T Z=T S=T X=8.33ms。
可以理解的是,T S越小,电子设备的UI线程和Render线程在一帧(即一个T S)内完成绘制和渲染的可能性则越小,电子设备出现丢帧现象的可能性则越高。因此,在T S较小的情况下,如果△T1过大,则会提升电子设备出现丢帧现象的可能性。由此可见,如果T S较小,则△T1可以设置在较小的值;如果T S较大,则△T1可以设置在较大的值。
由上述描述可知:电子设备的屏幕刷新率越高,T Z、T S和T X则越小。即电子设备的屏幕刷新率与T S成反比。因此:电子设备的屏幕刷新率越高,则△T1越小;电子设备的屏幕刷新率越低,则△T1越大。
例如，在电子设备的屏幕刷新率为60Hz的情况下，T S=16.67ms，△T1可以为0.5ms；在电子设备的屏幕刷新率为90Hz的情况下，T S=11.11ms，△T1可以为0.2ms；在电子设备的屏幕刷新率为120Hz的情况下，T S=8.33ms，△T1可以为0.1ms。
需要说明的是,如果△T1的取值过大,则会影响电子设备显示图像的效果,影响用户的视觉体验。例如,如果△T1的取值过大,那么原本在一定时长(如1s)内电子设备生产5帧图层,并播放该5帧图层对应的5帧图像帧;根据该△T1调整VSYNC_APP信号的信号周期后,电子设备则可能会在1s内生产10帧图层,但还是播放5帧图像帧。因此,上述△T1的取值不宜过大。例如,△T1小于或等于预设时长阈值。该预设 时长阈值可以为0.5ms或者1ms。该预设时长阈值可以预先配置在电子设备中。
实施例(四)
本申请实施例这里介绍电子设备执行上述S804调整VSYNC_APP信号的信号周期的条件。其中,电子设备可以在第一缓存队列中缓存的图层的数量小于第一预设阈值时,执行S804。具体的,如图12所示,在上述S804之前,本申请实施例的方法还可以包括S1201。
S1201、电子设备判断第一缓存队列中缓存的图层的数量是否小于第一预设阈值。
其中,该第一预设阈值可以预先配置在电子设备中。例如,该第一预设阈值可以为1、2或3等任一数值。
示例性的，S1201可以包括：电子设备的合成线程响应于VSYNC_SF信号，读取第一缓存队列中缓存的图层的数量，判断第一缓存队列中缓存的图层的数量是否小于第一预设阈值。
在第一种情况下，电子设备的合成线程响应于VSYNC_SF信号，可以在从第一缓存队列中读出帧数据（即帧数据从第一缓存队列中出队）之前，读取第一缓存队列中缓存的图层的数量，判断第一缓存队列中缓存的图层的数量是否小于第一预设阈值。例如，第一预设阈值可以为2或者3等任一数值。
以第一预设阈值等于2为例。可以理解的是,如果帧数据从第一缓存队列中出队之前,第一缓存队列中缓存的图层的数量小于2;那么,帧数据从第一缓存队列中出队之后,第一缓存队列中缓存的图层的数量小于1,即第一缓存队列中缓存的图层的数量为0。
在第二种情况下，电子设备的合成线程响应于VSYNC_SF信号，可以在从第一缓存队列中读出帧数据（即帧数据从第一缓存队列中出队）之后，读取第一缓存队列中缓存的图层的数量，判断第一缓存队列中缓存的图层的数量是否小于第一预设阈值。在这种情况下，第一预设阈值可以为1或者2等任一数值。
以第一预设阈值等于1为例。可以理解的是,如果帧数据从第一缓存队列中出队之后,第一缓存队列中缓存的图层的数量小于1,则表示第一缓存队列中缓存的图层的数量为0。
可以理解的是,在第一缓存队列中缓存的图层的数量为0的情况下,如果UI线程和Render线程(即生产者)无法在一帧内完成绘制和渲染,则电子设备显示图像会出现丢帧的现象。
具体的,如果第一缓存队列中缓存的图层的数量(即第一缓存队列中缓存的帧数据的数量)小于第一预设阈值,则表示第一缓存队列中缓存的帧数据较少,电子设备显示图像出现丢帧的可能性较大。在这种情况下,如图12所示,电子设备可以执行S804-S807。
如果第一缓存队列中缓存的图层的数量(即第一缓存队列中缓存的帧数据的数量)大于或等于第一预设阈值,则表示第一缓存队列中缓存的帧数据较多,电子设备显示图像出现丢帧的可能性较小。在这种情况下,如图12所示,电子设备可以执行S801-S803。
本申请实施例中,电子设备可以在电子设备显示图像出现丢帧的可能性较大的情 况下,调小VSYNC_APP信号的信号周期,以降低电子设备显示图像时出现丢帧的可能性,保证显示屏显示图像的流畅性。
在另一些实施例中,电子设备执行S804将VSYNC_APP信号的信号周期调整为第一时长之后,可能无法降低电子设备显示图像时出现丢帧的可能性。针对这种情况,如果第一缓存队列中缓存的图层的数量小于第二预设阈值,电子设备可以继续调小VSYNC_APP信号的信号周期,以降低电子设备显示图像时出现丢帧的可能性。具体的,在S804之后,本申请实施例的方法还可以包括S1301-S1302。例如,如图13所示,在S807之后,本申请实施例的方法还可以包括S1301-S1302。
S1301、电子设备判断第一缓存队列中缓存的图层的数量是否小于第二预设阈值。该第二预设阈值小于第一预设阈值。
例如,当第一预设阈值为2时,该第二预设阈值可以等于1;当第一预设阈值为3时,该第二预设阈值可以等于1或2。当然,第一预设阈值与第二预设阈值的取值包括但不限于上述具体数值。
需要说明的是,电子设备判断第一缓存队列中缓存的图层的数量是否小于第二预设阈值的方法,可以参考S1201中电子设备判断第一缓存队列中缓存的图层的数量是否小于第一预设阈值的方法,本申请实施例这里不予赘述。
具体的,S1301之后,如果第一缓存队列中缓存的图层的数量小于第二预设阈值,电子设备可以执行S1302。如果第一缓存队列中缓存的图层的数量大于或等于第二预设阈值,电子设备则可以执行S805-S807。
S1302、电子设备将VSYNC_APP信号的信号周期调整为第二时长。该第二时长小于第一时长。
其中,电子设备确定第二时长的方法,可以参考上述实施例中电子设备确定第一时长的方法,本申请实施例这里不予赘述。
也就是说,在第一缓存队列中缓存的图层的数量小于第二预设阈值的情况下,电子设备可以进一步调小VSYNC_APP信号的信号周期,以降低电子设备显示图像时出现丢帧的可能性,保证显示屏显示图像的流畅性。
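第一预设阈值与第二预设阈值构成的两级调整逻辑可示意如下(Python草图，非专利限定实现；阈值与△T的具体取值均为示例假设)：

```python
def choose_app_vsync_period(buffered, t_s_ms,
                            first_threshold=2, second_threshold=1,
                            delta1_ms=0.2, delta2_ms=0.4):
    """示意草图：缓存的图层数量越少，VSYNC_APP 信号周期调得越小。
    第二时长(t_s - delta2)小于第一时长(t_s - delta1)；取值为示例假设。"""
    if buffered < second_threshold:
        return t_s_ms - delta2_ms   # 第二时长：继续调小（对应 S1302）
    if buffered < first_threshold:
        return t_s_ms - delta1_ms   # 第一时长（对应 S804）
    return t_s_ms                   # 缓存充足，不调整
```

例如，缓存数量为0时返回第二时长，为1时返回第一时长，大于等于2时维持 T S。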
实施例(五)
一般而言,第一缓存队列的缓存空间有限。例如,第一缓存队列最多可缓存2帧图层(即帧数据)或者3帧图层(即帧数据)。因此,如果VSYNC_APP信号的信号周期长时间小于VSYNC_SF信号的信号周期,则会出现第一缓存队列无法缓存UI线程和Render线程(即生产者)生产得到的帧数据的问题。
为了避免出现上述问题,电子设备还可以在第一缓存队列中缓存的图层的数量大于第三预设阈值时,将VSYNC_APP信号的信号周期由T S′调整为T S。本申请实施例的方法还可以包括S1401-S1402。例如,如图14所示,在S804之后,本申请实施例的方法还可以包括S1401-S1402。
S1401、电子设备判断第一缓存队列中缓存的图层的数量是否大于第三预设阈值。该第三预设阈值大于或等于第一预设阈值。
其中,该第三预设阈值可以预先配置在电子设备中。例如,该第三预设阈值可以为0、1或者2等任一数值。
示例性的，S1401可以包括：电子设备的合成线程响应于VSYNC_SF信号，读取第一缓存队列中缓存的图层的数量，判断第一缓存队列中缓存的图层的数量是否大于第三预设阈值。
在第一种情况下，电子设备的合成线程响应于VSYNC_SF信号，可以在从第一缓存队列中读出帧数据(即帧数据从第一缓存队列中出队)之前，读取第一缓存队列中缓存的图层的数量，判断第一缓存队列中缓存的图层的数量是否大于第三预设阈值。在这种情况下，第三预设阈值可以为1或者2等任一数值。
以第三预设阈值等于1为例。可以理解的是,如果帧数据从第一缓存队列中出队之前,第一缓存队列中缓存的图层的数量大于1,则表示第一缓存队列中缓存的图层的数量至少为2;那么,帧数据从第一缓存队列中出队之后,第一缓存队列中缓存的图层的数量至少为1。
在第二种情况下，电子设备的合成线程响应于VSYNC_SF信号，可以在从第一缓存队列中读出帧数据(即帧数据从第一缓存队列中出队)之后，读取第一缓存队列中缓存的图层的数量，判断第一缓存队列中缓存的图层的数量是否大于第三预设阈值。在这种情况下，第三预设阈值可以为0或者1等任一数值。
以第三预设阈值等于0为例。可以理解的是，如果帧数据从第一缓存队列中出队之后，第一缓存队列中缓存的图层的数量大于0，则表示第一缓存队列中缓存的图层的数量至少为1。
可以理解的是,在第一缓存队列中缓存的图层的数量至少为1的情况下,如果UI线程和Render线程(即生产者)无法在一帧内完成绘制和渲染,则电子设备显示图像也不会出现丢帧的现象。
具体的,如果第一缓存队列中缓存的图层的数量(即第一缓存队列中缓存的帧数据的数量)大于第三预设阈值,则表示第一缓存队列中缓存的帧数据较多,电子设备显示图像出现丢帧的可能性较小。在这种情况下,如图14所示,电子设备可以执行S1402和S801-S803。
如果第一缓存队列中缓存的图层的数量(即第一缓存队列中缓存的帧数据的数量)小于或等于第三预设阈值,则表示第一缓存队列中缓存的帧数据较少,电子设备显示图像出现丢帧的可能性较大。在这种情况下,如图14所示,电子设备可以继续执行S805-S807。
S1402、电子设备调整VSYNC_APP信号的信号周期,使VSYNC_APP信号的信号周期等于VSYNC_SF信号的信号周期。
例如,电子设备可以将VSYNC_APP信号的信号周期由T S′调整为T S
本申请实施例中,电子设备可以在第一缓存队列缓存了足够的帧数据后,将VSYNC_APP信号的信号周期由T S′调整为T S。这样,可以避免出现第一缓存队列无法缓存UI线程和Render线程(即生产者)生产得到的帧数据的问题。
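S1401-S1402的恢复逻辑可示意如下(Python草图，非专利限定实现；第三预设阈值取值为示例假设)：

```python
def maybe_restore_period(buffered, t_s_ms, t_s_prime_ms, third_threshold=1):
    """示意草图：缓存的图层数量大于第三预设阈值时，
    将 VSYNC_APP 信号周期由调小后的 T_S' 恢复为 T_S；否则维持 T_S'。"""
    return t_s_ms if buffered > third_threshold else t_s_prime_ms
```

这样，当第一缓存队列缓存了足够的帧数据后，生产者恢复按 T S 生产，避免缓存溢出。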
实施例(六)
本申请实施例这里介绍电子设备执行上述方法调整VSYNC_APP信号的信号周期的时机。其中，电子设备可以在第一应用切换为前台应用时，开始执行上述方法，以降低电子设备显示图像时出现丢帧的可能性，保证显示屏显示图像的流畅性。
应理解,当第一应用切换为前台应用时,该第一应用的应用界面被电子设备的显示屏显示,对用户可见。如果电子设备针对该第一应用执行图层绘制、图层渲染、图像帧合成和显示出现丢帧的现象,则会影响显示屏显示图像的流畅性,影响用户体验。因此,第一应用切换为前台应用时,电子设备可以执行上述方法调整VSYNC_APP信号的信号周期,以降低电子设备显示图像时出现丢帧的可能性。
前台应用可以是指:电子设备的显示屏当前显示的界面对应的应用。也就是说,电子设备的显示屏当前显示哪个应用的界面,该应用则是前台应用。
前台应用还可以是指：一个应用通过活动管理器(activity manager service，AMS)申请一个新的Activity(startActivity)，或者其处于pause状态的Activity重新进入到活动状态。
其中,电子设备的软件系统可以采用分层架构。分层架构可将软件分成若干个层,每一层都有清晰的角色和分工。层与层之间通过软件接口通信。在一些实施例中,将电子设备的软件系统分为三层,从上至下分别为应用程序层(简称应用层),应用程序框架层(简称框架层),以及内核层(也称为驱动层)。
其中,应用层可以包括一系列应用程序包。如为相机,图库,日历,通话,地图,导航,WLAN,蓝牙,音乐,视频,短信息以及桌面启动(Launcher)等应用程序。框架层为应用程序层的应用程序提供应用编程接口(application programming interface,API)和编程框架。应用程序框架层包括一些预先定义的函数。该框架层可以包括窗口管理器(window manager service,WMS)和活动管理器AMS等。
窗口管理器WMS用于管理窗口程序。窗口管理器可以获取显示屏大小，判断是否有状态栏，锁定屏幕，截取屏幕等。活动管理器AMS负责管理Activity，负责系统中各组件的启动、切换、调度及应用程序的管理和调度等工作。内核层是硬件和软件之间的层。内核层至少包含显示驱动，摄像头驱动，音频驱动，传感器驱动。
其中，用户对电子设备进行输入操作(如触发电子设备显示某个应用的操作)，内核层可以根据输入操作产生相应的输入事件，并向应用程序框架层上报该事件。由应用程序框架层的活动管理服务AMS设置应用的窗口属性。应用程序框架层的窗口管理服务WMS根据AMS的设置绘制窗口，然后将窗口数据发送给内核层的显示驱动，由显示驱动在显示屏显示对应的应用界面。
其中，窗口的属性可以包括Activity窗口的位置和大小，以及Activity窗口的可见属性(即Activity窗口的状态)。Activity窗口的位置是该Activity窗口在显示屏的位置；Activity窗口的大小可以是应用启动config中的宽高等信息。Activity窗口的可见属性可以为true或者false。当Activity窗口的可见属性是true时，表示该Activity窗口处于活动状态，该Activity窗口对用户可见，即显示驱动会显示该Activity窗口的内容。当Activity窗口的可见属性是false时，表示该Activity窗口处于pause状态，该Activity窗口对用户不可见，即显示驱动不会显示该Activity窗口的内容。
其中，应用(如应用1或应用2)可调用启动Activity接口以启动对应的Activity。活动管理器AMS响应于应用的调用，可请求窗口管理器WMS绘制Activity对应的窗口，并调用显示驱动实现界面的显示。
其中，进入活动状态的应用会进行以下处理：(1)创建Application对象、Context对象；(2)调用Activity.attach()来创建Activity对应的窗口；(3)调用Activity的onCreate方法，其中的setContentView方法创建Activity的视图DecorView；(4)对Activity视图进行计算和绘制。完成上述步骤后，应用的画面会被显示出来，该应用即为前台应用。
需要说明的是，前台应用的画面内容不仅可以包括用户看得见的画面，还可以包括无用户界面的内容、透明图层的内容或者被其他应用界面遮挡对用户不可见的内容。
进一步的，电子设备的前台应用发生变化，也可以触发电子设备执行上述方法，重新调整VSYNC_APP信号的信号周期，以降低电子设备显示图像时出现丢帧的可能性。具体的，电子设备响应于前台应用由第一应用切换为第二应用，可将VSYNC_APP信号的信号周期调整为第三时长。
其中,第三时长等于T S-△T2。在一种实现方式中,该第三时长是基于第二应用确定的。电子设备可以根据不同的应用设置不同的△T2。
在另一种实现方式中,电子设备可以根据VSYNC_APP信号的信号周期T X(T X=T S)与第一统计周期的第一绘制帧长的差值确定△T2。△T2小于或等于该差值。
在另一种实现方式中,电子设备可以按照用户的需求,计算上述△T2。
在另一种实现方式中,电子设备可以根据电子设备的屏幕刷新率确定△T2。电子设备的屏幕刷新率越高,则△T2越小;电子设备的屏幕刷新率越低,则△T2越大。
需要说明的是,电子设备确定△T2的具体方法,可以参考上述实施例中确定△T1的方法,本申请实施例这里不予赘述。
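上述“根据T X与第一统计周期的第一绘制帧长的差值确定△T2”的实现方式可示意如下(Python草图，非专利限定实现；上限cap_ms与差值为负时取0的处理均为示例假设)：

```python
def delta_t2_from_draw_time(t_x_ms, first_draw_frame_ms, cap_ms=0.5):
    """示意草图：△T2 小于或等于 T_X 与第一绘制帧长的差值，
    并受预设时长阈值约束；差值为负时取 0（即不调整，假设性处理）。"""
    diff = t_x_ms - first_draw_frame_ms
    return max(0.0, min(diff, cap_ms))
```

例如，T X=16.67ms、绘制帧长为16.5ms时，△T2取两者差值；绘制帧长远小于 T X 时，△T2被上限截断。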
实施例(七)
本实施例介绍电子设备执行上述实施例(一)-实施例(四)中任一实施例的方法的应用场景。电子设备在列表类滑动场景中执行上述实施例(一)-实施例(四)中任一实施例的方法，降低电子设备显示图像时出现丢帧的可能性的效果较为明显。
应理解,上述列表类滑动场景也可以称为连续性滑动场景。例如,手机响应于用户连续在“购物”应用的界面输入向上或向下的滑动操作,刷新手机页面的场景。又例如,手机响应于用户连续在“通讯录”应用的界面输入向上或向下的滑动操作,刷新手机页面的场景。又例如,手机响应于用户连续在“即时通讯”应用的联系人界面或朋友圈界面输入向上或向下的滑动操作,刷新手机页面的场景。上述实例中刷新手机页面的场景都可以称为列表类滑动场景或者连续性滑动场景。
在上述列表类滑动场景或者连续性滑动场景中,电子设备可以连续刷新页面,即电子设备中生产者会连续生产多帧图层,消费者也会连续消费多帧图层。因此,在这些应用场景中,使用本申请实施例的方法,在连续生产和消费图层的过程中,生产者的生产速率大于消费者的消费速率,便可以降低电子设备显示图像出现丢帧的可能性。在上述应用场景中,执行本申请实施例的方法,降低电子设备显示图层出现丢帧的可能性的效果较为明显。
实施例(八)
本实施例中对本申请任一实施例中，电子设备确定一个或多个图层渲染完成的具体方法进行说明。
在第一种应用场景中，上述一个或多个图层可以为一个图层。在第二种应用场景中，上述一个或多个图层可以包括多个图层。在不同的应用场景中，电子设备确定一个或多个图层渲染完成的方式不同。
在第一种应用场景中,一个或多个图层渲染完成是指:上述一个图层渲染完成。
在第二种应用场景中,一个或多个图层渲染完成是指:上述多个图层中的预设图层渲染完成;或者,上述多个图层中的所有图层渲染完成。例如,该预设图层可以包括该多个图层中、图层面积与显示屏的面积的比值大于预设比例阈值的图层。
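第二种应用场景中“预设图层渲染完成”的判定可示意如下(Python草图，非专利限定实现；图层数据结构与比例阈值均为示例假设)：

```python
def preset_layers(layers, screen_area, ratio_threshold=0.5):
    """示意草图：预设图层为“图层面积与显示屏面积的比值大于预设比例阈值”的图层。
    layers 为 {"id": ..., "area": ...} 字典列表（数据结构为假设）。"""
    return [l for l in layers if l["area"] / screen_area > ratio_threshold]

def layers_render_done(layers, done_ids, screen_area, ratio_threshold=0.5):
    """多个图层时：预设图层全部渲染完成，即视为“一个或多个图层渲染完成”。"""
    return all(l["id"] in done_ids
               for l in preset_layers(layers, screen_area, ratio_threshold))
```

也可以将 preset_layers 替换为全部图层，即对应“所有图层渲染完成”的判定方式。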
实施例(九)
本实施例介绍电子设备调整VSYNC_APP信号的信号周期的具体方式。由上述HW_VSYNC信号、VSYNC_SF信号和VSYNC_APP信号的介绍可知:HW_VSYNC信号是一个由硬件驱动触发的信号,该VSYNC_SF信号和VSYNC_APP信号是基于HW_VSYNC信号产生的。HW_VSYNC信号的信号周期等于电子设备的屏幕刷新率的倒数。一般而言,VSYNC_SF信号和VSYNC_APP信号与HW_VSYNC信号同相位,且VSYNC_SF信号和VSYNC_APP信号的信号周期等于HW_VSYNC信号的信号周期。
例如，如图15所示，HW_VSYNC信号作为输入，根据当前界限时间戳(Present Fence Timestamp)经过DispSync 1501可以得到SW_VSYNC信号。该Present Fence Timestamp是根据当前的HW_VSYNC信号(即硬件信号)产生的时间戳，用于记录当前的HW_VSYNC信号的输入时间。该SW_VSYNC信号是由HW_VSYNC信号(即硬件信号)作为输入，经过DispSync 1501模块进行训练计算得到的软件信号，该SW_VSYNC信号可以作为中间信号，用于产生VSYNC_SF信号和VSYNC_APP信号。其中，SW_VSYNC信号与HW_VSYNC信号同相位，且信号周期相同。如，目前的一些方案中，可以对SW_VSYNC信号进行相位调整(Phase Adjust)1502，得到VSYNC_SF信号和VSYNC_APP信号。其中，VSYNC_SF信号与SW_VSYNC信号的相位差(SF_Phase)和VSYNC_APP信号与SW_VSYNC信号的相位差(APP_Phase)可以相同，也可以不同。电子设备可以采用SF_Phase对SW_VSYNC信号进行相位调整(Phase Adjust)1502，得到VSYNC_SF信号；采用APP_Phase对SW_VSYNC信号进行相位调整(Phase Adjust)1502，得到VSYNC_APP信号。
在一种技术中,VSYNC_SF信号和VSYNC_APP信号的信号周期与HW_VSYNC信号的信号周期是相同的。电子设备采用图15所示的处理流程,不能调整VSYNC_SF信号和VSYNC_APP信号的信号周期。
本申请实施例中,可以在图15所示的信号处理流程中增加图16所示的周期调整(Period Adjust)1601。如图16所示,电子设备不仅可以对SW_VSYNC信号进行相位调整(Phase Adjust)1502,还可以对SW_VSYNC信号进行周期调整(Period Adjust)1601。
其中，VSYNC_SF信号与SW_VSYNC信号的周期差(SF_Period)和VSYNC_APP信号与SW_VSYNC信号的周期差(APP_Period)可以相同，也可以不同。如图16所示，电子设备可以采用SF_Phase对SW_VSYNC信号进行相位调整(Phase Adjust)1502，采用SF_Period对SW_VSYNC信号进行周期调整(Period Adjust)1601，得到VSYNC_SF信号。如图16所示，电子设备可以采用APP_Phase对SW_VSYNC信号进行相位调整(Phase Adjust)1502，采用APP_Period对SW_VSYNC信号进行周期调整(Period Adjust)1601，得到VSYNC_APP信号。
本申请实施例中,增加了周期调整(Period Adjust)模块,如图16所示的周期调整(Period Adjust)1601。这样,电子设备便可以调整VSYNC_SF信号或VSYNC_APP信号的信号周期。
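相位调整与周期调整的组合效果可示意如下(Python草图，非专利限定实现；此处假设调整后的周期等于SW_VSYNC周期减去周期差，符号约定仅为示例)：

```python
def derive_vsync_timestamps(sw_base_ts_ms, sw_period_ms,
                            phase_ms, period_delta_ms, count):
    """示意草图：由 SW_VSYNC 推导调整后信号的前 count 个时间戳。
    相位调整：整体偏移 phase_ms；
    周期调整：周期变为 sw_period_ms - period_delta_ms（符号约定为假设）。"""
    period = sw_period_ms - period_delta_ms
    return [sw_base_ts_ms + phase_ms + i * period for i in range(count)]
```

例如，以APP_Phase=2ms、APP_Period=0.5ms为例，VSYNC_APP信号的相邻时间戳间隔即为调小后的信号周期。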
实施例(十)
本申请一些实施例提供了一种电子设备,该电子设备可以包括图层绘制模块、图层渲染模块、图层合成模块、显示模块、存储模块和周期调整模块。
其中,图层绘制模块用于支持电子设备执行上述实施例中S801和S805中所述的绘制图层的操作,和/或用于本文所描述的技术的其它过程。该图层绘制模块可以是上述UI线程。图层渲染模块用于支持电子设备执行上述实施例中所述的对图层进行渲染准备的操作,以及渲染图层的操作,和/或用于本文所描述的技术的其它过程。该图层渲染模块可以是上述Render线程。图层合成模块用于支持电子设备执行上述实施例中的S802,S806,S1201,S1301,S1401,和/或用于本文所描述的技术的其它过程。该图层合成模块可以是上述合成线程。显示模块用于支持电子设备执行上述实施例中的S803,S807,和/或用于本文所描述的技术的其它过程。存储模块用于保存第一缓存队列,支持电子设备执行S801和S805中所述的将第一图层缓存至第一缓存队列的操作,和/或用于本文所描述的技术的其它过程。周期调整模块用于支持电子设备执行上述实施例中的S804,S1302,S1402,和/或用于本文所描述的技术的其它过程。
其中,上述图层绘制模块、图层渲染模块、图层合成模块和周期调整模块的功能可以集成在一个处理模块中实现。该处理模块可以是电子设备的处理器。上述显示模块可以是电子设备的显示屏(如触摸屏)。上述存储模块可以是电子设备的存储器。
本申请一些实施例提供了一种电子设备,该电子设备可以包括:显示屏(如触摸屏)、存储器和一个或多个处理器。该显示屏、存储器和处理器耦合。该存储器用于存储计算机程序代码,该计算机程序代码包括计算机指令。当处理器执行计算机指令时,电子设备可执行上述方法实施例中电子设备执行的各个功能或者步骤。该电子设备的结构可以参考图1所示的电子设备100的结构。
本申请实施例还提供一种芯片系统,如图17所示,该芯片系统1700包括至少一个处理器1701和至少一个接口电路1702。处理器1701和接口电路1702可通过线路互联。例如,接口电路1702可用于从其它装置(例如电子设备的存储器)接收信号。又例如,接口电路1702可用于向其它装置(例如处理器1701或者电子设备的触摸屏)发送信号。示例性的,接口电路1702可读取存储器中存储的指令,并将该指令发送给处理器1701。当所述指令被处理器1701执行时,可使得电子设备执行上述实施例中的各个步骤。当然,该芯片系统还可以包含其他分立器件,本申请实施例对此不作具体限定。
本申请实施例还提供一种计算机存储介质,该计算机存储介质包括计算机指令,当所述计算机指令在上述电子设备上运行时,使得该电子设备执行上述方法实施例中电子设备执行的各个功能或者步骤。
本申请实施例还提供一种计算机程序产品,当所述计算机程序产品在计算机上运 行时,使得所述计算机执行上述方法实施例中电子设备执行的各个功能或者步骤。
通过以上实施方式的描述,所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,仅以上述各功能模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能模块完成,即将装置的内部结构划分成不同的功能模块,以完成以上描述的全部或者部分功能。
在本申请所提供的几个实施例中,应该理解到,所揭露的装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述模块或单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个装置,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是一个物理单元或多个物理单元,即可以位于一个地方,或者也可以分布到多个不同地方。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
所述集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个可读取存储介质中。基于这样的理解,本申请实施例的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该软件产品存储在一个存储介质中,包括若干指令用以使得一个设备(可以是单片机,芯片等)或处理器(processor)执行本申请各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(read only memory,ROM)、随机存取存储器(random access memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
以上内容,仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何在本申请揭露的技术范围内的变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以所述权利要求的保护范围为准。

Claims (12)

  1. 一种基于垂直同步信号的控制方法,其特征在于,所述方法包括:
    电子设备响应于第一垂直同步信号,绘制第一应用的第一图层,并将所述第一图层缓存至第一缓存队列;
    所述电子设备响应于第二垂直同步信号，对所述第一缓存队列中缓存的图层进行图层合成得到图像帧；
    若所述第一缓存队列中缓存的图层的数量小于第一预设阈值,所述电子设备将所述第一垂直同步信号的信号周期调整为第一时长,所述第一时长小于所述第二垂直同步信号的信号周期。
  2. 根据权利要求1所述的方法,其特征在于,所述若所述第一缓存队列中缓存的图层的数量小于第一预设阈值,所述电子设备将所述第一垂直同步信号的信号周期调整为第一时长,包括:
    所述电子设备响应于所述第一应用切换为前台应用,若所述第一缓存队列中缓存的图层的数量小于所述第一预设阈值,则将所述第一垂直同步信号的信号周期调整为所述第一时长;
    其中,所述第一缓存队列是为所述第一应用分配的,所述前台应用是所述电子设备的显示屏当前显示的界面对应的应用。
  3. 根据权利要求1或2所述的方法,其特征在于,所述方法还包括:
    若所述第一缓存队列中缓存的图层的数量小于第二预设阈值,所述电子设备将所述第一垂直同步信号的信号周期调整为第二时长;
    其中,所述第二预设阈值小于所述第一预设阈值,所述第二时长小于所述第一时长。
  4. 根据权利要求1-3中任一项所述的方法,其特征在于,所述方法还包括:
    若所述第一缓存队列中缓存的图层的数量大于第三预设阈值,所述电子设备调整所述第一垂直同步信号的信号周期,使所述第一垂直同步信号的信号周期等于所述第二垂直同步信号的信号周期;
    其中,所述第三预设阈值大于或等于第一预设阈值。
  5. 根据权利要求1-4中任一项所述的方法,其特征在于,所述电子设备将所述第一垂直同步信号的信号周期调整为第一时长,包括:
    所述电子设备将所述第一垂直同步信号的信号周期调小△T,使所述第一垂直同步信号的信号周期等于所述第一时长,所述第一时长小于所述第二垂直同步信号的信号周期;
    其中,所述△T是预先配置在所述电子设备中的固定时长;或者,
    所述△T是根据所述第二垂直同步信号的信号周期与所述第一统计周期的第一绘制帧长的差值确定的,所述第一绘制帧长是所述电子设备绘制图层所需的时长,所述△T小于或等于所述差值;或者,
    所述△T是根据用户设置的预设数量K确定的,所述预设数量K用于指示用户期望电子设备在K帧后解决一帧的丢帧,△T=T S/(K+1),T S为所述电子设备的屏幕刷新率的倒数;或者,
    所述△T是根据所述电子设备的屏幕刷新率确定的,所述屏幕刷新率越高则所述△T越小,所述屏幕刷新率越低则所述△T越大。
  6. 根据权利要求1-5中任一项所述的方法,其特征在于,所述方法还包括:
    所述电子设备响应于所述前台应用由所述第一应用切换为第二应用，所述电子设备将所述第一垂直同步信号的信号周期调整为第三时长。
  7. 根据权利要求1-6中任一项所述的方法,其特征在于,所述第一缓存队列中缓存的图层的数量小于第一预设阈值,包括:
    所述电子设备响应于所述第二垂直同步信号,在对所述第一缓存队列中缓存的图层进行图层合成之前,读取所述第一缓存队列中缓存的图层的数量,读取到的数量小于所述第一预设阈值;或者,
    所述电子设备响应于所述第二垂直同步信号,在对所述第一缓存队列中缓存的图层进行图层合成之后,读取所述第一缓存队列中缓存的图层的数量,读取到的数量小于所述第一预设阈值。
  8. 根据权利要求4所述的方法,其特征在于,所述第一缓存队列中缓存的图层的数量大于第三预设阈值,包括:
    所述电子设备响应于所述第二垂直同步信号,在对所述第一缓存队列中缓存的图层进行图层合成之前,读取所述第一缓存队列中缓存的图层的数量,读取到的数量大于所述第三预设阈值;或者,
    所述电子设备响应于所述第二垂直同步信号，在对所述第一缓存队列中队首的一帧图层进行图层合成之后，读取所述第一缓存队列中缓存的图层的数量，读取到的数量大于所述第三预设阈值。
  9. 一种电子设备,其特征在于,所述电子设备包括显示屏、存储器和一个或多个处理器;所述显示屏、所述存储器和所述处理器耦合;所述存储器用于存储计算机程序代码,所述计算机程序代码包括计算机指令,当所述处理器执行所述计算机指令时,所述电子设备执行如权利要求1-8中任一项所述的方法。
  10. 一种芯片系统,其特征在于,所述芯片系统应用于包括显示屏的电子设备;所述芯片系统包括一个或多个接口电路和一个或多个处理器;所述接口电路和所述处理器通过线路互联;所述接口电路用于从所述电子设备的存储器接收信号,并向所述处理器发送所述信号,所述信号包括所述存储器中存储的计算机指令;当所述处理器执行所述计算机指令时,所述电子设备执行如权利要求1-8中任一项所述的方法。
  11. 一种计算机存储介质,其特征在于,包括计算机指令,当所述计算机指令在电子设备上运行时,使得所述电子设备执行如权利要求1-8中任一项所述的方法。
  12. 一种计算机程序产品,其特征在于,当所述计算机程序产品在计算机上运行时,使得所述计算机执行如权利要求1-8中任一项所述的方法。
PCT/CN2021/122218 2020-10-31 2021-09-30 一种基于垂直同步信号的控制方法及电子设备 WO2022089153A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP21884882.8A EP4224834A4 (en) 2020-10-31 2021-09-30 VERTICAL SYNCHRONIZATION SIGNAL CONTROL METHOD AND ELECTRONIC DEVICE
US18/251,094 US20230410767A1 (en) 2020-10-31 2021-09-30 Vertical Synchronization Signal-Based Control Method and Electronic Device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011197544.3 2020-10-31
CN202011197544.3A CN114531519B (zh) 2020-10-31 一种基于垂直同步信号的控制方法及电子设备

Publications (1)

Publication Number Publication Date
WO2022089153A1 true WO2022089153A1 (zh) 2022-05-05

Family

ID=81381876

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/122218 WO2022089153A1 (zh) 2020-10-31 2021-09-30 一种基于垂直同步信号的控制方法及电子设备

Country Status (3)

Country Link
US (1) US20230410767A1 (zh)
EP (1) EP4224834A4 (zh)
WO (1) WO2022089153A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116052618A (zh) * 2022-08-24 2023-05-02 荣耀终端有限公司 一种屏幕刷新率切换方法及电子设备

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090213925A1 (en) * 2008-02-22 2009-08-27 Kabushiki Kaisha Toshiba Buffer control device and receiving apparatus
CN108810281A (zh) * 2018-06-22 2018-11-13 Oppo广东移动通信有限公司 丢帧补偿方法、装置、存储介质及终端
CN110018874A (zh) * 2019-04-09 2019-07-16 Oppo广东移动通信有限公司 垂直同步方法、装置、终端及存储介质
CN110503708A (zh) * 2019-07-03 2019-11-26 华为技术有限公司 一种基于垂直同步信号的图像处理方法及电子设备
CN110609645A (zh) * 2019-06-25 2019-12-24 华为技术有限公司 一种基于垂直同步信号的控制方法及电子设备

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020062069A1 (en) * 2018-09-28 2020-04-02 Qualcomm Incorporated Frame composition alignment to target frame rate for janks reduction


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4224834A4

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116052618A (zh) * 2022-08-24 2023-05-02 荣耀终端有限公司 一种屏幕刷新率切换方法及电子设备
CN116052618B (zh) * 2022-08-24 2023-11-07 荣耀终端有限公司 一种屏幕刷新率切换方法及电子设备

Also Published As

Publication number Publication date
US20230410767A1 (en) 2023-12-21
CN114531519A (zh) 2022-05-24
EP4224834A4 (en) 2024-04-10
EP4224834A1 (en) 2023-08-09

Similar Documents

Publication Publication Date Title
WO2021000921A1 (zh) 一种基于垂直同步信号的图像处理方法及电子设备
WO2020259457A1 (zh) 一种基于垂直同步信号的控制方法及电子设备
WO2022068501A1 (zh) 一种基于垂直同步信号的图像处理方法及电子设备
CN115631258B (zh) 一种图像处理方法及电子设备
CN114579075B (zh) 数据处理方法和相关装置
WO2021027678A1 (zh) 一种基于垂直同步信号的图像处理方法及电子设备
WO2021223539A1 (zh) 射频资源分配方法及装置
CN114579076B (zh) 数据处理方法和相关装置
WO2021254438A1 (zh) 驱动控制方法及相关设备
CN116052618B (zh) 一种屏幕刷新率切换方法及电子设备
CN115048012A (zh) 数据处理方法和相关装置
WO2022089153A1 (zh) 一种基于垂直同步信号的控制方法及电子设备
US20230367415A1 (en) Event processing method and device
WO2023045806A9 (zh) 触控屏中的位置信息计算方法和电子设备
CN114531519B (zh) 一种基于垂直同步信号的控制方法及电子设备
WO2023124227A1 (zh) 帧率切换方法及装置
WO2023124225A1 (zh) 帧率切换方法及装置
CN115904185A (zh) 数据处理方法和相关装置
CN116414336A (zh) 帧率切换方法及装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21884882

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021884882

Country of ref document: EP

Effective date: 20230502

NENP Non-entry into the national phase

Ref country code: DE