CN114153415A - Image frame rate control method and related product - Google Patents

Image frame rate control method and related product

Info

Publication number
CN114153415A
CN114153415A (application CN202111427930.1A)
Authority
CN
China
Prior art keywords
speed
frame
display
output
signal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111427930.1A
Other languages
Chinese (zh)
Inventor
白颂荣
张海越
陈锋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Xihua Technology Co Ltd
Original Assignee
Shenzhen Xihua Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Xihua Technology Co Ltd filed Critical Shenzhen Xihua Technology Co Ltd
Priority to CN202111427930.1A
Publication of CN114153415A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/14 Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G06F 3/147 Digital output to display device; Cooperation and interconnection of the display device with other functional units using display panels
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G 5/00 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G 5/36 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
    • G09G 5/39 Control of the bit-mapped memory
    • G09G 5/393 Arrangements for updating the contents of the bit-mapped memory
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G 5/00 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G 5/36 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
    • G09G 5/39 Control of the bit-mapped memory
    • G09G 5/395 Arrangements specially adapted for transferring the contents of the bit-mapped memory to the screen
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G 2320/00 Control of display operating conditions
    • G09G 2320/02 Improving the quality of display appearance
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G 2360/00 Aspects of the architecture of display systems
    • G09G 2360/12 Frame memory handling

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • Controls And Circuits For Display Device (AREA)
  • Telephone Function (AREA)

Abstract

An embodiment of the application provides an image frame rate control method and a related product. The method comprises the following steps: a terminal AP acquires the writing speed and the reading speed of a frame buffer; the terminal AP acquires the relationship between the writing speed and the reading speed; and the terminal AP adjusts the output position of the TE signal according to that relationship. The technical scheme provided by the application improves image quality and user experience.

Description

Image frame rate control method and related product
Technical Field
The present disclosure relates to the field of electronic and communication technologies, and in particular to an image frame rate control method and a related product.
Background
A tearing effect can occur in an image frame during display. The TE (Tearing Effect) signal is generated by the display chip and is used to prevent tearing while the picture is being refreshed. When the next frame of image is ready to be refreshed, the chip generates a TE signal; the AP (Application Processor) then sends the next frame of image data to the chip after detecting a rising edge of the TE signal, or after detecting that the TE signal is at a high level.
When tearing occurs, the screen may simultaneously show parts of two different frames, which degrades the display effect of the image.
Disclosure of Invention
The embodiment of the application discloses a method for controlling an image frame rate, which can reduce the tearing effect of an image and improve the display effect of the image.
In a first aspect, a method for controlling an image frame rate is provided, the method including the steps of:
a terminal AP acquires the writing speed and the reading speed of a frame buffer;
the terminal AP acquires the relation between the writing speed and the reading speed;
the terminal AP adjusts the output position of the TE signal according to the relationship.
By way of example, the relationship includes: the writing speed is faster or slower than the reading speed.
Illustratively, the adjusting, by the terminal AP, the output position of the TE signal according to the relationship specifically includes:
when the writing speed is faster than the reading speed, adjusting the TE signal to be output at the frame header of the display.
Illustratively, the adjusting, by the terminal AP, the output position of the TE signal according to the relationship specifically includes:
when the writing speed is slower than the reading speed, the TE signal is adjusted to be output in the middle of the display.
For example, the adjusting the output of the TE signal in the middle of the display specifically includes:
one TE is generated and output every 2 frames or 3 frames displayed.
In a second aspect, there is provided a system for controlling an image frame rate, the system comprising:
an acquisition unit, configured to acquire the writing speed and the reading speed of a frame buffer, and to acquire the relationship between the writing speed and the reading speed;
and the processing unit is used for adjusting the output position of the TE signal according to the relation.
Illustratively, the relationship includes: the writing speed is faster or slower than the reading speed.
Illustratively, the processing unit is specifically configured to adjust the TE signal to be output at the frame header of the display when the writing speed is faster than the reading speed.
In a third aspect, there is provided an electronic device comprising a processor, a memory, a communication interface, and one or more programs stored in the memory and configured to be executed by the processor, the programs comprising instructions for performing the steps of the method of the first aspect.
In a fourth aspect, a computer-readable storage medium is provided, storing a computer program for electronic data exchange, wherein the computer program causes a computer to perform the method of the first aspect.
In a fifth aspect, there is provided a computer program product, wherein the computer program product comprises a non-transitory computer readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps as described in the first aspect of an embodiment of the present application. The computer program product may be a software installation package.
According to the present application, when the read and write speeds differ, the TE signal is adjusted to be output at the frame header or in the middle of the display. This solves the image tearing caused by frame rate conversion in command mode: the image is not torn, little quality is lost, and image quality is improved.
Drawings
The drawings used in the embodiments of the present application are described below.
FIG. 1 is a block diagram of an exemplary terminal;
FIG. 2 is a flowchart illustrating a method for controlling an image frame rate according to the present disclosure;
FIG. 3 is a schematic diagram of a read/write speed of a Frame Buffer provided in the present application;
FIG. 4 is a schematic diagram of the read/write speed of another Frame Buffer provided in the present application;
FIG. 5 is a flowchart illustrating a method for controlling an image frame rate according to an embodiment of the present disclosure;
fig. 6 is a flowchart illustrating a method for controlling an image frame rate according to another embodiment of the present application;
fig. 7 is a schematic structural diagram of a system for controlling an image frame rate according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The embodiments of the present application will be described below with reference to the drawings.
The term "and/or" in this application describes an association between objects and indicates three possible relationships; for example, "A and/or B" may mean: A alone, both A and B, or B alone. In addition, the character "/" in this document indicates an "or" relationship between the objects before and after it.
In the embodiments of the present application, "plurality" means two or more. The terms "first", "second", etc. are used only to distinguish objects; they imply neither an order nor a limit on the number of devices, and do not limit the embodiments in any way. The term "connect" refers to any connection manner that implements communication between devices, such as a direct or indirect connection, which is not limited in the embodiments of the present application.
In the related design, when the frame buffer write speed is faster than the read speed and the write pointer catches up with the read pointer, or the read speed is faster than the write speed and the read pointer catches up with the write pointer, the display screen shows part of the new frame and part of the old frame: this is tearing. Because reading and writing operate on the same frame, at some moment a pixel is read before it has been written into the buffer, so tearing inevitably occurs. A double-buffer control technique is therefore needed.
In view of the foregoing problems, an embodiment of the present application provides an image data read-write control method applied to a chip, which can be applied to various input formats of a mobile phone, including command mode, video mode, and the like.
Specifically, in video mode, the main processor (AP) connected to the display screen must refresh it continuously; since no dedicated data signal carries synchronization information, data is transmitted as messages over a Mobile Industry Processor Interface (MIPI) bus. Because the main processor refreshes the display periodically, the display does not need a frame buffer. In command mode, the MIPI bus controller of the main processor sends data to the display using display command messages, and the display has a Frame Buffer of full frame length for storing frame data. Frame data is the one frame of image data issued by the AP within a given frame period. Once the frame data is placed in the display's frame buffer, the Display Driver IC (DDIC) of the display reads it from the frame buffer and shows it on the screen. The Tearing Effect (TE) signal exchanged between the main processor AP and the display's data driver controls when the AP writes frame data into the frame buffer.
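The command-mode handshake above can be sketched with a toy model (the class and function names are assumptions for illustration): the chip raises TE, and the AP writes the next frame into the frame buffer only after seeing TE high, so the write never collides with the DDIC's ongoing read.

```python
class FrameBuffer:
    """Toy model of the display chip's full-frame buffer in command mode."""
    def __init__(self):
        self.frame = None          # last full frame written by the AP

    def write(self, frame):        # AP side: place frame data in the buffer
        self.frame = frame

    def read(self):                # DDIC side: fetch frame data for display
        return self.frame

def ap_send_frame(te_high: bool, frame, fb: FrameBuffer) -> bool:
    """The AP writes the next frame only when the TE signal is high."""
    if not te_high:
        return False               # wait for the next TE pulse
    fb.write(frame)
    return True
```

In the real protocol the frame travels as MIPI display command messages; the boolean gate here only illustrates the TE-conditioned timing.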
This scheme mainly elaborates on command-mode input suited to the Low Temperature Polycrystalline Oxide (LTPO) display scenario. Specifically, when the AP drives an LTPO screen, it may ignore the TE signal at the screen end, and the AP's output time point is initially not fixed. When the mobile phone displays a dynamic picture the refresh rate is raised automatically, and when it displays a static picture the refresh rate is lowered automatically, effectively reducing power consumption.
Referring to fig. 1, fig. 1 is a schematic structural diagram of a terminal disclosed in an embodiment of the present application, where the terminal 100 may be a user equipment UE, and specifically includes:
the chip is connected with the AP end of the mobile phone and the display screen. Specifically, the chip is connected with the AP end through an internal receiving module to realize data interaction. In a specific application, the receiving module is a Mobile Industry Processor Interface Receiver (MIPI-RX), and the transmitting module is a Mobile Industry Processor Interface transmitter (MIPI-TX).
For example, a terminal device according to an embodiment of the present application may be referred to as a terminal (terminal), a User Equipment (UE), a Mobile Station (MS), a Mobile Terminal (MT), an access terminal device, a vehicle-mounted terminal device, an industrial control terminal device, a UE unit, a UE station, a mobile station, a remote terminal device, a mobile device, a UE terminal device, a wireless communication device, a UE agent, a UE apparatus, or the like. The terminal device may be fixed or mobile.
The following description is provided for possible terminology used in the present application.
VIDEO mode: the host transmits a real-time pixel stream to the liquid crystal module at high speed. It mainly targets liquid crystal modules whose driving chip has no frame buffer (cache area); pixel data is transmitted according to the refresh timing of the liquid crystal module.
LCDC: reads single-frame data from the buffer module at a fixed frame rate.
MIPI-TX: modulates the data transmitted from the LCDC into the MIPI protocol and outputs video data to the display screen.
COMMAND mode: a standard call of the MIPI (Mobile Industry Processor Interface) protocol.
MIPI is an open standard for mobile application processors initiated by the MIPI Alliance. The bridge chip converts the input signal into a MIPI timing interface signal through an internal conversion mechanism, in either VIDEO mode or COMMAND mode.
The COMMAND mode transmits commands and data to a controller that has a display buffer. It mainly targets screens whose driving chip has a frame buffer: the host only sends pixel data when the displayed image needs to change, and at other times the driving chip fetches data from its internal buffer for display.
In double-buffer mode, each BufferQueue corresponding to a Surface contains two Graphic Buffers: one for drawing and one for display. While the data of buffer A is being displayed, the CPU/GPU prepares the next frame in buffer B.
In triple-buffer mode, one more Graphic Buffer is added on top of the double-buffer mechanism, so idle time can be used to the maximum extent, at the cost of the memory occupied by one extra Graphic Buffer.
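The buffer rotation described above can be sketched with a toy BufferQueue (a simplified model with assumed names; real Android BufferQueue behavior is more involved). Double buffering is capacity 2, triple buffering capacity 3:

```python
from collections import deque

class BufferQueue:
    """Toy BufferQueue: 'free' buffers are available for drawing,
    'queued' buffers are finished frames waiting to be displayed."""
    def __init__(self, capacity: int):
        self.free = deque(f"buf{i}" for i in range(capacity))
        self.queued = deque()

    def dequeue_for_draw(self):
        """CPU/GPU grabs a free buffer; None means the producer must stall."""
        return self.free.popleft() if self.free else None

    def queue_for_display(self, buf):
        """A finished frame is handed over for display."""
        self.queued.append(buf)

    def release_after_display(self):
        """Display is done with the oldest queued buffer; it becomes free."""
        if self.queued:
            self.free.append(self.queued.popleft())
```

With capacity 2, once one buffer is queued and the other is being drawn, `dequeue_for_draw` returns None and the producer idles; capacity 3 keeps one spare buffer so drawing can continue.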
The VSYNC pulse signal starts all processing for the next frame. Project Butter first modified SurfaceFlinger of the Android display system to provide a VSYNC interrupt. The CPU prepares buffer data immediately after receiving the VSYNC interrupt; since most display devices refresh at 60 Hz (60 refreshes per second), one frame of data must be prepared within about 16 ms.
If the principle of the double-buffer mechanism is understood, the triple buffer is easy to understand. With only two Graphic Buffers A and B, the CPU/GPU rendering of a frame may take longer than one VSYNC signal period.
During the second 16 ms period, the display should show frame B, but because the GPU is still processing frame B, frame A is displayed again.
Likewise, during the second 16 ms period the CPU has nothing to do, since buffer A is being used by the display and buffer B by the GPU. Note that once the VSYNC time point has passed, the CPU cannot be triggered to start drawing work.
VSYNC is the vertical synchronization signal, the frame synchronization signal of the video mode; it is triggered between one frame picture and the next. The vertical synchronization signal solves image tearing. Without it, when the engine renders frames faster than the display can show them, tearing occurs: for example, if a frame is rendered in 10 ms while the display refresh period is 16 ms, the GPU has rendered 1.6 frames of image data within one refresh period, so the next frame's data overwrites part of the previous frame and the display shows parts of two frames at once.
current displays typically support a double buffer scheme, i.e. frame data is displayed on a picture and the next frame is placed in a buffer, with the two frames being alternately swapped for display.
Vertical synchronization solves tearing by controlling when the display shows a frame. The vertical synchronization signal generally paces frame rendering at a fixed interval. If rendering a frame takes too long and the frame is not ready in time, it is shown only at the next refresh, and a stutter occurs. If everything is normal the whole picture is smooth, because the vertical synchronization signal is generally issued after the display has shown one frame, which eliminates the tearing problem.
It can be seen that the vertical synchronization frequency is closely related to the engine's rendering frame rate, which is the second part. The engine rendering process is: the CPU submits data, the GPU renders, and the rendered frame data is shown on the display.
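Under the assumptions above (a fixed VSYNC period, and rendering of the next frame starting at a VSYNC boundary), a small helper can show which refresh each frame lands on; a skipped index means the previous frame was repeated (stutter). The function name and 16 ms period are illustrative assumptions:

```python
import math

VSYNC_PERIOD_MS = 16.0  # ~60 Hz refresh, illustrative

def frames_displayed(render_times_ms, vsync_ms=VSYNC_PERIOD_MS):
    """Return, for each rendered frame, the index of the VSYNC at which it
    can first be shown. A render time above one period misses a VSYNC."""
    shown_at = []
    t = 0.0
    for render in render_times_ms:
        t += render
        vsync_index = math.ceil(t / vsync_ms)  # next boundary after finish
        shown_at.append(vsync_index)
        t = vsync_index * vsync_ms             # next render starts at VSYNC
    return shown_at
```

For example, three 10 ms renders land on VSYNCs 1, 2, 3 (smooth), while a 20 ms render lands on VSYNC 2, so VSYNC 1 repeats the old frame.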
WMS: the frame start instruction of the command mode.
Input/display speed: in command mode, the time from sending the WMS instruction until all image data has been transmitted; in video mode, the time from VSYNC until the end of the entire image data transmission, excluding the blanking lines before the frame sync.
Frame rate: in command mode, the frequency corresponding to the time interval between two WMS instructions — the rate at which the engine's GPU rendering completes, called the engine frame rate (fps); in video mode, the frequency corresponding to the interval between two VSYNC pulses — the display refresh frequency.
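By the command-mode definition above, the engine frame rate is the frequency of the WMS instruction interval. A minimal sketch (the function name and timestamp format are assumptions):

```python
def engine_frame_rate(wms_times_s):
    """Average engine frame rate (fps) from the timestamps, in seconds,
    at which successive WMS frame-start instructions were issued."""
    if len(wms_times_s) < 2:
        return 0.0                 # no interval to measure yet
    total = wms_times_s[-1] - wms_times_s[0]
    return (len(wms_times_s) - 1) / total
```

For instance, WMS instructions at 0 ms, 25 ms and 50 ms correspond to a 25 ms interval, i.e. an engine frame rate of 40 fps.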
The DCS (Display Command Set) is a set of control commands used in the DSI protocol; a display device (e.g., LCD) manufacturer may selectively implement some or all of the commands specified in the DCS document. To facilitate the following description, the four display and power modes of DSI are first introduced:
normal Mode (Normal Mode): all regions (pixels) of the display device are used to display images;
partial Mode (Partial Mode): only a part of the area (pixel point) of the display device is used for displaying the image;
idle Mode (Idle Mode): the display device displays an image with only a limited number of colors, i.e., the range of representation of the colors is reduced (in the RGB format, the precision of representation of the individual components of RGB is reduced); as shown in the following figures:
sleep Mode (Sleep Mode): at this time, the display device does not display any data, i.e. is in an off state, but the display interface needs to be kept in a power supply state or a low power consumption state;
command control: sending commands to peripheral devices (such as a display module) of the display controller; the display controller may contain local registers and a frame buffer. The system may write and read registers using commands. The host processor may indirectly control the functions of the peripheral devices by sending commands and parameters to the display controller. The main processor can also read the display module state information, and the command mode needs to use a bidirectional interface.
Referring to fig. 2, fig. 2 provides an image frame rate control method, which may be executed by the terminal shown in fig. 1. The terminal may specifically be a smart phone, a tablet computer, etc., and may include an AP to execute the method. As shown in fig. 2, the method includes the following steps:
step S201, the terminal AP obtains the writing speed and the reading speed of a Frame Buffer (Frame Buffer);
step S202, the terminal AP acquires the relation between the writing speed and the reading speed;
illustratively, the above relationships include: the writing speed is faster or slower than the reading speed.
Step S203, the terminal AP adjusts the output position of the TE signal according to the relationship.
According to the technical scheme provided by the application, a terminal AP acquires the writing speed and the reading speed of a Frame Buffer; the terminal AP acquires the relationship between the writing speed and the reading speed; and the terminal AP adjusts the output position of the TE signal according to that relationship. Therefore, when the writing speed and the reading speed acquired by the AP are inconsistent, the output position of the TE signal can be adjusted so that the screen does not show parts of two frames at once, improving the display effect and the user experience.
Taking a mobile phone as an example: the delay from the phone's AP receiving the TE signal to finishing transmission of the whole frame of WMS data is basically fixed; it may differ between APs, but for a given AP it is basically constant. The frame rate of TE generation is based on the display refresh frequency; on receiving a TE, the AP may or may not transmit a new frame. Once the LCDC starts displaying, it refreshes continuously at a fixed frame rate, starting the next frame immediately after the previous one ends. For a given AP the input speed is fixed and cannot change dynamically, and the display outputs at a fixed frame rate. The AP's writing speed is thus relatively fixed while the reading speed may change, so the writing and reading speeds may deviate from each other, and the screen may show parts of two frames.
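The pointer-chase failure described above (and in the background section) can be simulated with a single-buffer toy model; the speeds, delay and frame length are illustrative values, not from this application:

```python
def tearing_occurs(write_speed, read_speed, read_delay, frame_len=100):
    """Discrete sweep of one frame in a single buffer: the writer starts at
    t=0, the reader starts `read_delay` ticks later. Tearing is flagged
    when the read pointer overtakes the write pointer, i.e. pixels are
    read before they have been written."""
    w = r = 0
    t = 0
    while w < frame_len and r < frame_len:
        t += 1
        w = min(frame_len, w + write_speed)
        if t > read_delay:
            r = min(frame_len, r + read_speed)
        if r > w:              # reader overtook writer: unwritten pixels shown
            return True
    return False
```

Delaying the read start — which is what moving the TE output position achieves — lets a fast reader run safely behind a slow writer.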
For example, the adjusting, by the terminal AP, the output position of the TE signal according to the relationship may specifically include:
when the writing speed is faster than the reading speed, the TE signal is adjusted to be output at the frame header of the display.
Taking fig. 3 as an example, the writing speed is faster than the reading speed. The long line segments in fig. 3 represent the write waveform of the Frame Buffer, and the short line segments the read waveform. The slope represents the read or write speed; a complete waveform represents a complete frame, and a short waveform represents a partially updated picture.
The software calculates a reasonable read-start condition from the delay with which the AP responds to the TE and the delay on the path, and outputs a pulse to generate the TE when the condition is satisfied. As shown in fig. 3, which outputs frame numbers 1, 2, 4 and 5, this ensures that the displayed data does not catch up with the data being written into the frame buffer.
If the current image is not a full image and only part of it is updated, the software calculates a reasonable read-start condition from the DCS command sent by the AP, and outputs a pulse to generate the TE when the condition is met, such as output frame number 3 shown in the single-buffer case.
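One way to picture a "reasonable read-start condition" is the smallest delay between write start and read start such that a linear read never overtakes the linear write. This is a simplified continuous model under assumed names, not the application's actual algorithm:

```python
def min_read_delay(write_speed, read_speed, frame_len=100.0):
    """Smallest read-start delay keeping the read pointer at or behind the
    write pointer for a whole frame, assuming both sweep linearly.
    If the write is at least as fast, reading may start immediately."""
    if read_speed <= write_speed:
        return 0.0
    # The read must finish no earlier than the write finishes:
    #   delay + frame_len / read_speed >= frame_len / write_speed
    return frame_len / write_speed - frame_len / read_speed
```

For a frame of 100 units written at 2 units/tick and read at 5 units/tick, the read must start at least 30 ticks after the write.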
For example, the adjusting, by the terminal AP, the output position of the TE signal according to the relationship may specifically include:
when the writing speed is slower than the reading speed, the TE signal is adjusted to be output in the middle of the display.
Taking fig. 4 as an example, the writing speed is slower than the reading speed. The line segments intersecting the T axis in fig. 4 represent the write waveform of the Frame Buffer, and those not intersecting the T axis the read waveform. The slope represents the read or write speed; a complete waveform represents a complete frame, and a short waveform represents a partially updated picture.
When the input speed is slower than the output speed, the TE signal is adjusted to be output in the middle of the display; as a result, one TE is generated for every 2 displayed frames. The adjustment algorithm is similar to the case where the input speed is faster than the output speed. As shown in fig. 4, output frames 4 and 5 both contain the full content of input frame 1; input frame 1 is a partial update, so output frames 2 and 3 show the content of the partially updated frame 1.
According to the method and the device, the TE signal is adjusted to output corresponding control processing at the displayed frame header or the middle when the reading and writing speeds are different, the problem of image tearing caused by different frame rate conversion in a command mode is solved, the images are prevented from being torn, the quality is not lost too much, and the quality of the images is improved.
For example, the adjusting the output of the TE signal in the middle of the display may specifically include:
one TE is generated for each 2 or 3 frames displayed.
An embodiment of the present application provides an image frame rate control method, executed by the terminal structure shown in fig. 1. The method runs in double-buffer mode: each BufferQueue corresponding to a Surface contains two Graphic Buffers, one for drawing and one for display. While the data of buffer A is being displayed, the CPU/GPU prepares the next frame in buffer B. As shown in fig. 5, the method includes the following steps:
step S501, the terminal AP obtains a writing speed V1 and a reading speed V2 of the frame buffer;
step S502: when the terminal AP determines that V1 > V2, it adjusts the TE signal to be output at the frame header of the display.
Referring to fig. 3: when the input speed is faster than the output speed, the TE signal is adjusted to be output at the frame header of the display. The software calculates a reasonable read-start condition from the delay with which the AP responds to the TE and the delay on the path, and outputs a pulse to generate the TE when the condition is satisfied; as shown in fig. 3, frame numbers 1, 2, 4 and 5 are output, ensuring that the displayed data does not catch up with the data being written into the frame buffer. The TE signal output is thus adjusted at the frame header of the display, solving the image tearing caused by frame rate conversion in command mode; the image is not torn, little quality is lost, and image quality is improved.
Another embodiment of the present application provides an image frame rate control method, executed by the terminal structure shown in fig. 1. The method runs in double-buffer mode: each BufferQueue corresponding to a Surface contains two Graphic Buffers, one for drawing and one for display. While the data of buffer A is being displayed, the CPU/GPU prepares the next frame in buffer B. As shown in fig. 6, the method includes the following steps:
step S601, the terminal AP obtains a writing speed V1 and a reading speed V2 of the frame buffer;
step S602: when the terminal AP determines that V1 < V2, it adjusts the TE signal to be output in the middle of the displayed frame.
Referring specifically to fig. 4: when the input speed is slower than the output speed, the TE signal is adjusted to be output in the middle of the display; as a result one TE is generated for every 2 displayed frames, and the adjustment algorithm is similar to the case where the input speed is faster than the output speed. As shown in fig. 4, output frames 4 and 5 both contain the full content of input frame 1; input frame 1 is a partial update, so output frames 2 and 3 show the content of the partially updated frame 1. The TE signal output is thus adjusted in the middle of the display frame, solving the image tearing caused by frame rate conversion in command mode; the image is not torn, little quality is lost, and image quality is improved.
Referring to fig. 7, fig. 7 provides an image frame rate control system, including:
an obtaining unit 701, configured to obtain a writing speed and a reading speed of a frame buffer, and to obtain the relationship between the writing speed and the reading speed; and
a processing unit 702, configured to adjust the output position of the TE signal according to the relationship.
When the reading and writing speeds differ, the image frame rate control system adjusts the TE signal so that the corresponding control processing is output at the frame header or in the middle of the displayed frame. This solves the problem of image tearing caused by differing frame rates in command mode, ensures that the image is not torn without excessive quality loss, and improves image quality.
Illustratively, the above relationship includes: the writing speed being faster than or slower than the reading speed.
By way of example, in one possible embodiment and in a practical application scenario:
the processing unit 702 is specifically configured to adjust the TE signal to be output at the frame header of the display when the writing speed is faster than the reading speed.
Referring specifically to fig. 3, when the input speed is faster than the output speed, the processing unit 702 adjusts the TE signal so that it is output at the frame header of the display; as a result, one TE is generated for every 2 displayed frames, and the adjustment algorithm is similar to that used when the input speed is slower than the output speed. As shown in fig. 3, the software calculates a reasonable read start condition according to the delay of the AP in receiving and applying the TE and the delay on the path, and outputs a pulse to generate the TE when the condition is satisfied; fig. 3 shows output frame numbers 1, 2, 4, and 5, which ensures that the displayed data does not catch up with the data being written into the frame buffer. In this way, the TE signal is adjusted so that the corresponding control processing is output at the frame header of the display frame, which solves the problem of image tearing caused by differing frame rates in command mode, ensures that the image is not torn without excessive quality loss, and improves image quality.
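The "reasonable read start condition" mentioned above can be illustrated with a simple timing model. This is a sketch under assumed constant write and read rates; the function name, parameters, and closed-form bound are our own and do not appear in the patent, which additionally accounts for the delay of the AP receiving and applying the TE and the delay on the path:

```python
def earliest_read_start(write_start, total_lines, write_rate, read_rate):
    """Earliest time display scan-out may begin so that the read pointer
    never overtakes the write pointer, while a frame of `total_lines`
    lines is written starting at `write_start` at `write_rate` lines/s
    and read out at `read_rate` lines/s.

    If reading is no faster than writing, reading may start as soon as
    writing does. Otherwise the gap shrinks over time, and the tightest
    constraint is at the last line:
        read_start + total_lines / read_rate
            >= write_start + total_lines / write_rate
    """
    if read_rate <= write_rate:
        return write_start
    return write_start + total_lines * (1.0 / write_rate - 1.0 / read_rate)
```

In practice the AP delay and the path delay would be added to write_start before evaluating the condition; a pulse generating the TE is then output once the condition is satisfied.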
By way of example, in one possible embodiment and in a practical application scenario:
the processing unit 702 is specifically configured to adjust the TE signal to be output in the middle of the display when the writing speed is slower than the reading speed.
By way of example, in one possible embodiment and in a practical application scenario:
the processing unit 702 is specifically configured to generate and output one TE per 2 frames or 3 frames displayed.
Referring specifically to fig. 4, when the input speed is slower than the output speed, the processing unit 702 adjusts the TE signal so that it is output in the middle of the display frame; as a result, one TE is generated for every 2 displayed frames, and the adjustment algorithm is similar to that used when the input speed is faster than the output speed. As shown in fig. 4, output frames 4 and 5 both contain the full content of input frame 1; input frame 1 is a partial update, so output frames 2 and 3 show the content of the partially updated frame 1. In this way, the TE signal is adjusted so that the corresponding control processing is output in the middle of the display frame, which solves the problem of image tearing caused by differing frame rates in command mode, ensures that the image is not torn without excessive quality loss, and improves image quality.
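Generating one TE for every 2 or 3 displayed frames, as described above, amounts to a simple frame counter. The sketch below is hypothetical; in a real display controller this would likely be a hardware counter rather than software:

```python
class TeDivider:
    """Emit one TE pulse for every `frames_per_te` displayed frames."""

    def __init__(self, frames_per_te=2):  # 2 or 3 in the embodiments above
        self.frames_per_te = frames_per_te
        self._count = 0

    def on_display_frame(self) -> bool:
        """Call once per displayed frame; returns True when a TE fires."""
        self._count += 1
        if self._count >= self.frames_per_te:
            self._count = 0
            return True
        return False
```

With frames_per_te=2, a TE fires on every second displayed frame, matching the "one TE per 2 displayed frames" behaviour described for fig. 4.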
It is understood that, in order to realize the above functions, the above-described apparatus includes corresponding hardware and/or software modules for performing each function. The present application can be implemented in hardware, or in a combination of hardware and computer software, in conjunction with the exemplary algorithm steps described in the embodiments disclosed herein. Whether a function is performed by hardware or by computer software driving hardware depends on the particular application and the design constraints of the solution. Skilled artisans may implement the described functionality in different ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In this embodiment, the electronic device may be divided into functional modules according to the above method example, for example, each functional module may be divided corresponding to each function, or two or more functions may be integrated into one processing module. The integrated module may be implemented in the form of hardware. It should be noted that the division of the modules in this embodiment is schematic, and is only a logic function division, and there may be another division manner in actual implementation.
It should be noted that all relevant contents of each step related to the above method embodiment may be referred to the functional description of the corresponding functional module, and are not described herein again.
In the case where an integrated unit is employed, the user equipment may include a processing module and a storage module. The processing module may be configured to control and manage the actions of the user equipment, for example, to support the electronic device in performing the steps performed by the obtaining unit, the communication unit, and the processing unit. The storage module may be configured to store program code and data for the electronic device.
The processing module may be a processor or a controller, which may implement or execute the various illustrative logical blocks, modules, and circuits described in connection with this disclosure. A processor may also be a combination of computing components, e.g., a combination of one or more microprocessors, or a combination of a digital signal processor (DSP) and a microprocessor. The storage module may be a memory. The communication module may specifically be a radio frequency circuit, a Bluetooth chip, a Wi-Fi chip, or another device that interacts with other electronic devices.
It should be understood that the interface connection relationship between the modules illustrated in the embodiments of the present application is only an exemplary illustration, and does not form a structural limitation on the user equipment. In other embodiments of the present application, the user equipment may also adopt different interface connection manners or a combination of multiple interface connection manners in the above embodiments.
Referring to fig. 8, fig. 8 shows an electronic device 80 provided in an embodiment of the present application. The electronic device 80 includes a processor 801, a memory 802, a communication interface 803, and a display 804; the processor 801, the memory 802, and the communication interface 803 are connected to each other through a bus, a power supply powers the electronic device, and the electronic device may further include the circuit shown in fig. 2.
The memory 802 includes, but is not limited to, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM), or a portable read-only memory (CD-ROM); the memory 802 is used for storing related computer programs and data. The communication interface 803 is used to receive and transmit data.
The processor 801 may be one or more Central Processing Units (CPUs), and in the case where the processor 801 is one CPU, the CPU may be a single-core CPU or a multi-core CPU.
Processor 801 may include one or more processing units, such as an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU). The different processing units may be separate components or may be integrated in one or more processors. In some embodiments, the user equipment may also include one or more processing units. The controller can generate an operation control signal according to the instruction operation code and the timing signal, to complete the control of instruction fetching and instruction execution. In other embodiments, a memory may also be provided in the processing unit for storing instructions and data. Illustratively, the memory in the processing unit may be a cache. This memory may hold instructions or data that the processing unit has just used or reused; if the processing unit needs the instructions or data again, it can call them directly from this memory. This avoids repeated accesses and reduces the waiting time of the processing unit, thereby improving the efficiency with which the user equipment processes data or executes instructions.
In some embodiments, the processor 801 may include one or more interfaces. The interface may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit audio (I2S) interface, a Pulse Code Modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a Mobile Industry Processor Interface (MIPI), a general-purpose input/output (GPIO) interface, a SIM card interface, a USB interface, and/or the like. The USB interface is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type C interface, or the like. The USB interface can be used for connecting a charger to charge the user equipment, and can also be used for transmitting data between the user equipment and peripheral equipment. The USB interface can also be used for connecting an earphone and playing audio through the earphone.
If the electronic device 80 is a user device, such as a smart phone, the processor 801 in the electronic device 80 is configured to read the computer program code stored in the memory 802, and perform the following operations:
acquiring the writing speed and the reading speed of the frame buffer;
acquiring the relation between the writing speed and the reading speed;
the output position of the TE signal is adjusted according to the relationship.
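The three operations above can be summarized as a mapping from the speed relationship to the TE output position. The sketch below is illustrative, following claims 3 to 5; the handling of exactly equal speeds is our own assumption, as the patent only distinguishes faster and slower:

```python
def te_output_position(write_speed, read_speed):
    """Map the frame-buffer write/read speed relationship to the
    position at which the TE signal is output (cf. claims 3-5)."""
    if write_speed > read_speed:
        return "frame_header"  # writing faster: TE at the frame header
    if write_speed < read_speed:
        # writing slower: TE in the middle of the display,
        # one TE per 2 or 3 displayed frames (claim 5)
        return "frame_middle"
    return "frame_header"  # equal speeds: hypothetical default
```

For example, if the AP measures a writing speed of 30 frames/s against a display read-out of 60 frames/s, the TE is moved to the middle of the display frame.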
For all relevant details of each scenario in the above method embodiment, reference may be made to the functional description of the corresponding functional module; details are not repeated here.
The embodiment of the present application further provides a chip system, which includes at least one processor, a memory, and an interface circuit, where the memory, the interface circuit, and the at least one processor are interconnected by lines, and the at least one memory stores a computer program; when the computer program is executed by the processor, the method flow shown in fig. 2 is implemented.
An embodiment of the present application further provides a computer-readable storage medium, in which a computer program is stored, and when the computer program runs on a network device, the method flow shown in fig. 2 is implemented.
An embodiment of the present application further provides a computer program product, and when the computer program product runs on a terminal, the method flow shown in fig. 2 is implemented.
An embodiment of the present application also provides an electronic device, including a processor, a memory, a communication interface, and one or more programs, stored in the memory and configured to be executed by the processor, the programs including instructions for performing the steps in the method of the embodiment shown in fig. 2.
The embodiment of the present application further provides a network device, which is configured to support a user equipment UE to execute the method and the refinement scheme shown in fig. 2.
The above description has introduced the solutions of the embodiments of the present application mainly from the perspective of the method-side implementation process. It will be appreciated that, in order to carry out the functions described above, the electronic device may include corresponding hardware structures and/or software modules for performing each function. Those of skill in the art will readily appreciate that the present application is capable of being implemented in hardware, or in a combination of hardware and computer software, in conjunction with the various illustrative elements and algorithm steps described in the embodiments provided herein. Whether a function is performed by hardware or by computer software driving hardware depends on the particular application and the design constraints of the solution. Skilled artisans may implement the described functionality in different ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiment of the present application, the electronic device may be divided into the functional units according to the method example, for example, each functional unit may be divided corresponding to each function, or two or more functions may be integrated into one processing unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit. It should be noted that the division of the unit in the embodiment of the present application is schematic, and is only a logic function division, and there may be another division manner in actual implementation.
It should be noted that, for simplicity of description, the above method embodiments are described as a series or combination of acts; however, those skilled in the art will recognize that the present application is not limited by the order of the acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art will also appreciate that the embodiments described in the specification are preferred embodiments, and that the acts and modules referred to are not necessarily required by the application.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative; for instance, the division of the units described above is only one kind of logical function division, and other divisions are possible in practice: a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices, or units, and may be in electrical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as an independent product, may be stored in a computer-readable memory. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned memory includes various media capable of storing program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disc.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by related hardware instructed by a program. The program may be stored in a computer-readable memory, which may include a flash memory disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or the like.

Claims (10)

1. A method for controlling an image frame rate, the method comprising:
a terminal AP acquires the writing speed and the reading speed of a frame buffer;
the terminal AP acquires the relation between the writing speed and the reading speed;
the terminal AP adjusts the output position of the TE signal according to the relationship.
2. The method of claim 1, wherein the relationship comprises: the writing speed is faster or slower than the reading speed.
3. The method according to claim 2, wherein the terminal AP adjusting the output position of the TE signal according to the relationship specifically comprises:
and when the writing speed is higher than the reading speed, regulating the TE signal to be output at the frame header of the display.
4. The method according to claim 2, wherein the terminal AP adjusting the output position of the TE signal according to the relationship specifically comprises:
when the writing speed is slower than the reading speed, the TE signal is adjusted to be output in the middle of the display.
5. The method of claim 4, wherein adjusting the TE signal to be output in the middle of the display specifically comprises:
one TE is generated and output every 2 frames or 3 frames displayed.
6. A system for controlling an image frame rate, the system comprising:
the device comprises an acquisition unit, a processing unit and a control unit, wherein the acquisition unit is used for acquiring the writing speed and the reading speed of a frame buffer; acquiring the relation between the writing speed and the reading speed;
and the processing unit is used for adjusting the output position of the TE signal according to the relation.
7. The system of claim 6,
the relationship includes: the writing speed is faster or slower than the reading speed.
8. The system of claim 7,
and the processing unit is specifically used for adjusting the TE signal to be output at the displayed frame header when the writing speed is higher than the reading speed.
9. An electronic device comprising a processor, a memory, a communication interface, and one or more programs stored in the memory and configured to be executed by the processor, the programs including instructions for performing the steps of the method of any of claims 1-5.
10. A computer-readable storage medium, in which a computer program is stored which, when run on a user equipment, performs the method of any one of claims 1-5.
CN202111427930.1A 2021-11-27 2021-11-27 Image frame rate control method and related product Pending CN114153415A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111427930.1A CN114153415A (en) 2021-11-27 2021-11-27 Image frame rate control method and related product


Publications (1)

Publication Number Publication Date
CN114153415A true CN114153415A (en) 2022-03-08

Family

ID=80457966

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111427930.1A Pending CN114153415A (en) 2021-11-27 2021-11-27 Image frame rate control method and related product

Country Status (1)

Country Link
CN (1) CN114153415A (en)


Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102984436A (en) * 2011-09-07 2013-03-20 晨星软件研发(深圳)有限公司 Image refreshing method and related image processing device
CN102982759A (en) * 2011-09-02 2013-03-20 三星电子株式会社 Display driver, operating method thereof, host for controlling the display driver, and system having the display driver and the host
CN105144281A (en) * 2013-04-26 2015-12-09 夏普株式会社 Memory control device and mobile terminal
CN107786748A (en) * 2017-10-31 2018-03-09 广东欧珀移动通信有限公司 Method for displaying image and equipment
CN108109584A (en) * 2016-11-25 2018-06-01 瑞鼎科技股份有限公司 Driving circuit and its operation method
CN109074784A (en) * 2016-04-01 2018-12-21 夏普株式会社 Display device, the control method of display device and control program
CN109725801A (en) * 2018-12-17 2019-05-07 深圳市爱协生科技有限公司 A kind of method that driving chip control display picture is spun upside down
CN111277836A (en) * 2020-01-19 2020-06-12 苏州浪潮智能科技有限公司 Video extraction frame loss control method, system, terminal and storage medium
CN111752514A (en) * 2020-06-09 2020-10-09 Oppo广东移动通信有限公司 Display control method, display control device, electronic equipment and computer-readable storage medium
CN113450733A (en) * 2021-06-11 2021-09-28 上海跳与跳信息技术合伙企业(有限合伙) Screen refreshing method, display system and user equipment
CN113691271A (en) * 2021-07-21 2021-11-23 荣耀终端有限公司 Data transmission method and wearable device


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116027930A (en) * 2023-02-21 2023-04-28 深圳曦华科技有限公司 Dynamic frame rate control method and device
CN116027930B (en) * 2023-02-21 2023-08-08 深圳曦华科技有限公司 Dynamic frame rate control method and device
CN117939225A (en) * 2024-01-25 2024-04-26 京东方科技集团股份有限公司 Frame rate adjusting method and related equipment
CN117939225B (en) * 2024-01-25 2024-07-05 京东方科技集团股份有限公司 Frame rate adjusting method and related equipment

Similar Documents

Publication Publication Date Title
CN109992232B (en) Image updating method, device, terminal and storage medium
CN110018874B (en) Vertical synchronization method, device, terminal and storage medium
KR101467127B1 (en) Techniques to control display activity
JP6894976B2 (en) Image smoothness improvement method and equipment
CN1981519B (en) Method and system for displaying a sequence of image frames
EP2619653B1 (en) Techniques to transmit commands to a target device
CN114153415A (en) Image frame rate control method and related product
TWI455061B (en) Techniques for controlling frame refresh
US20150163450A1 (en) Video display system, source device, sink device, and video display method
JP2002287728A (en) Method and device for interface device of display system
CN114189732B (en) Method and related device for controlling reading and writing of image data
JP6811607B2 (en) Receiver and receiving method
CN109584766A (en) A kind of display methods, display device and the device with store function
US11320853B2 (en) Image transmission apparatus, image transmission system, and method of controlling image transmission apparatus
EP1484737A1 (en) Display controller
CN115101025B (en) LCD control circuit supporting virtual frame buffering and control method thereof
TW201409350A (en) A processing method of external image apparatus, and a external image apparatus
CN114153416B (en) Display control method and related device
CN115831074B (en) Frame rate conversion method and device based on single buffer mode
CN115841804B (en) Resolution real-time switching control method and device
JP2005122119A (en) Video interface device in system constituted of mpu and video codec
CN115032797B (en) Display method for wireless intelligent glasses and wireless intelligent glasses
WO2023184242A1 (en) Display control method and apparatus, image processing apparatus, and display device
JP2006337859A (en) Display control device and method, and program
CN118646908A (en) Method and device for adjusting tearing effect TE signal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
Application publication date: 20220308