WO2024046317A1 - Content display method and electronic device - Google Patents

Content display method and electronic device

Info

Publication number
WO2024046317A1
Authority
WO
WIPO (PCT)
Prior art keywords
image frame
image
application
display
electronic device
Prior art date
Application number
PCT/CN2023/115528
Other languages
English (en)
French (fr)
Inventor
罗诚
刘开罩
华梦峥
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司
Publication of WO2024046317A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04883 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14 Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G06F3/147 Digital output to display device; Cooperation and interconnection of the display device with other functional units using display panels
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09F DISPLAYING; ADVERTISING; SIGNS; LABELS OR NAME-PLATES; SEALS
    • G09F9/00 Indicating arrangements for variable information in which the information is built-up on a support by selection or combination of individual elements
    • G09F9/30 Indicating arrangements for variable information in which the information is built-up on a support by selection or combination of individual elements in which the desired character or characters are formed by combining individual elements
    • G09F9/37 Indicating arrangements for variable information in which the information is built-up on a support by selection or combination of individual elements in which the desired character or characters are formed by combining individual elements being movable elements
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G3/00 Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
    • G09G3/20 Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix no fixed position being assigned to or needed to be assigned to the individual characters or partial characters
    • G09G3/34 Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix no fixed position being assigned to or needed to be assigned to the individual characters or partial characters by control of light from an independent source

Definitions

  • The present application relates to the technical field of electronic devices, and in particular to a content display method and an electronic device.
  • Because ink screens have many advantages, such as ultra-low power consumption, a paper-like texture, eye protection with no blue light, and thinness and lightness, they are currently being used more and more widely.
  • However, the current refresh rate of an ink screen is generally only 5 to 7 frames per second (FPS), which is much lower than that of a liquid crystal display (LCD) (60 FPS, 120 FPS, etc.).
  • The low refresh rate of the ink screen causes a very large handwriting delay, which easily reduces the smoothness with which applications in the electronic device display handwritten content on the ink screen in response to the user's handwriting operations, affecting the user's handwriting experience.
  • This application provides a content display method and an electronic device, which are used to simply and efficiently reduce the writing delay of the ink screen when displaying handwritten content on it, while improving the versatility and practicality of the solution.
  • In a first aspect, the present application provides a content display method applied to a display unit in an operating system of an electronic device.
  • The method includes: in response to a handwriting operation acting on an ink screen, acquiring multiple image frames, where the multiple image frames belong to a first application; predicting a first image frame from the multiple image frames, where the first image frame is a prediction of the image frame that follows the last of the multiple image frames; and updating a second image frame displayed on the ink screen to the first image frame.
  • When the display unit in the electronic device receives a handwriting operation on the ink screen and needs to display handwritten content, it can predict subsequent image frames from the image frames already displayed on the ink screen and display them, without waiting for those frames to actually be generated. Because the frames to be displayed are obtained in advance, the rate at which displayed frames are updated increases, which improves display smoothness and reduces the delay in displaying handwritten content.
  • Because the system service (that is, the display unit) can directly obtain the application's image frames and predict and display the next frame to be shown based on them, the application's processing logic and methods do not need to change. No per-application adaptation is required, so the implementation is simple and efficient.
  • This method can therefore improve display fluency while reducing implementation difficulty and improving display efficiency, making it highly versatile and practical.
  • Updating the second image frame displayed on the ink screen to the first image frame includes: determining first target image data based on the first image frame and the second image frame, where the first target image data indicates the image content of the first image frame that has changed relative to the second image frame; and, according to the first target image data, updating the second image frame displayed on the ink screen to the first image frame.
  • The display unit updates the second image frame to the first image frame according to the image content that has changed between them, so it can use a partial (local) update. This speeds up the frame update, improves update efficiency, and reduces display delay.
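  The partial-update step above can be sketched as follows. This is a hedged illustration, not the patent's actual implementation: frames are modelled as 2D lists of grayscale values, and the "first target image data" is taken to be the bounding rectangle of changed pixels together with the corresponding patch of the new frame. All function and variable names are illustrative assumptions.

```python
def changed_region(prev_frame, next_frame):
    """Return ((top, left, bottom, right), patch) covering every pixel that
    differs between the two frames, or None if nothing changed."""
    rows = len(prev_frame)
    cols = len(prev_frame[0])
    top, left, bottom, right = rows, cols, -1, -1
    for r in range(rows):
        for c in range(cols):
            if prev_frame[r][c] != next_frame[r][c]:
                top = min(top, r)
                left = min(left, c)
                bottom = max(bottom, r)
                right = max(right, c)
    if bottom < 0:  # frames identical: no partial update needed
        return None
    # The patch is the changed rectangle cut out of the new (predicted) frame.
    patch = [row[left:right + 1] for row in next_frame[top:bottom + 1]]
    return (top, left, bottom, right), patch
```

  Only the returned patch, rather than the whole frame, would then need to be transferred to the display for the local refresh.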
  • Updating the second image frame displayed on the ink screen to the first image frame according to the first target image data includes: sending the first target image data to the ink screen over a serial peripheral interface, and driving the ink screen to replace second target content in the second image frame with first target content, where the first target content is the image content indicated by the first target image data, and the second target content is the content of the second image frame that differs from the first image frame.
  • The serial peripheral interface carries a smaller amount of data per transfer, but transmits it very quickly.
  • Because the display unit uses a partial update, transmitting only the changed image content in the frame to the display screen, the amount of data that needs to be transferred is small, so the serial peripheral interface can be used whenever possible. This keeps data transmission fast, which helps raise the refresh rate when the ink screen updates its displayed content, supporting smooth handwriting for users and improving the user experience.
  • Before sending the first target image data to the ink screen over the serial peripheral interface, the method further includes: determining that the data amount of the first target image data is less than or equal to a set data-amount threshold.
  • Updating the second image frame displayed on the ink screen to the first image frame according to the first target image data includes: when the data amount of the first target image data is determined to be greater than the set data-amount threshold, sending the first image frame to the ink screen over a mobile industry processor interface, and driving the ink screen to replace the second image frame with the first image frame.
  • The advantage of the mobile industry processor interface is that it supports the transmission of large amounts of data. Data transmission over the MIPI interface can therefore satisfy the data-transfer requirement of a global update, in which the entire second image frame is replaced by the first image frame, ensuring that the frame update proceeds smoothly.
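  The threshold-based choice between SPI (partial update) and MIPI (global update) described above can be sketched as follows. The threshold value and the send callbacks are illustrative assumptions, not values from the patent:

```python
DATA_THRESHOLD_BYTES = 16 * 1024  # illustrative threshold, not from the patent

def update_display(full_frame_bytes, patch_bytes, send_spi, send_mipi):
    """Choose the transport for an ink-screen update.

    If the changed-region data (patch) exists and is small enough, send only
    the patch over SPI (partial refresh); otherwise send the whole frame over
    MIPI (global refresh). Returns the name of the interface used."""
    if patch_bytes is not None and len(patch_bytes) <= DATA_THRESHOLD_BYTES:
        send_spi(patch_bytes)      # fast, low-volume partial refresh
        return "spi"
    send_mipi(full_frame_bytes)    # high-volume full-frame refresh
    return "mipi"
```

  In a real system the send callbacks would be the SPI and MIPI driver entry points; here they are passed in so the selection rule itself can be tested in isolation.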
  • The first application is any application in a set white list, where the white list includes at least one application; and/or the first application is an application of a set type.
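  A hedged sketch of this gating rule: frame prediction is applied only when the first application is on a configured white list and/or is of a set type. The list contents and type names here are illustrative assumptions, not values from the patent.

```python
WHITELIST = {"com.example.notes", "com.example.sketch"}  # assumed entries
PREDICTED_TYPES = {"handwriting", "drawing"}             # assumed types

def prediction_enabled(app_id: str, app_type: str) -> bool:
    """Return True if predicted-frame display should be used for this app."""
    return app_id in WHITELIST or app_type in PREDICTED_TYPES
```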
  • the second image frame is a predicted image frame of the last image frame among the plurality of image frames.
  • Both the first image frame and the second image frame are predicted image frames, so the frame displayed by the display unit is a predicted frame rather than an actually generated one. The display unit therefore does not need to wait for the actual frame to be generated and can display the predicted frame as soon as it is determined, which appropriately reduces the delay in updating the ink screen display and improves the user experience.
  • the handwriting operation is an operation performed in a first display area on the ink screen; wherein the first display area is a display area where the second image frame is located.
  • The display unit can update the content displayed in a display area on the ink screen according to the handwriting operation performed in that area, so it does not affect the content displayed in other display areas on the ink screen, which improves the user experience.
  • Predicting the first image frame based on the multiple image frames includes: determining the first image frame based on the multiple image frames and an image prediction model, where the image prediction model represents the relationship between multiple consecutive image frames and the image frame that follows the last of them.
  • the accuracy of image frame prediction using the image prediction model is high, which can improve the content accuracy of image frame update, thereby improving the user experience.
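  The patent does not specify the image prediction model here. As a hedged stand-in, the sketch below extrapolates the next frame linearly from the last two frames (next ≈ last + (last − previous)), clamped to the 0-255 grayscale range; a real implementation would use a trained model relating several consecutive frames to the next one.

```python
def predict_next_frame(frames):
    """Predict the frame after the last one in `frames`.

    `frames` is a list of at least two 2D grayscale frames (lists of lists),
    oldest first. Per-pixel linear extrapolation stands in for the model."""
    prev, last = frames[-2], frames[-1]
    return [
        [max(0, min(255, 2 * last[r][c] - prev[r][c]))
         for c in range(len(last[0]))]
        for r in range(len(last))
    ]
```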
  • The application provides an electronic device that includes an ink screen, a memory, and one or more processors, where the memory is used to store computer program code, and the computer program code includes computer instructions; when the computer instructions are executed by the one or more processors, the electronic device performs the method described in the first aspect or any possible design of the first aspect.
  • The present application provides a computer-readable storage medium that stores a computer program; when the computer program runs on a computer, it causes the computer to perform the method described in the first aspect or any possible design of the first aspect.
  • The present application provides a computer program product that includes a computer program or instructions; when the computer program or instructions run on a computer, they cause the computer to perform the method described in the first aspect or any possible design of the first aspect.
  • Figure 1 is a schematic diagram of the hardware architecture of an electronic device provided by an embodiment of the present application.
  • Figure 2 is a schematic diagram of the software architecture of an electronic device provided by an embodiment of the present application.
  • Figure 3 is an architectural schematic diagram of a content display system provided by an embodiment of the present application.
  • Figure 4 is a schematic diagram of a content display method provided by an embodiment of the present application.
  • Figure 5 is a schematic flowchart of a content display method provided by an embodiment of the present application.
  • Figure 6 is a schematic diagram of a content display method provided by an embodiment of the present application.
  • Figure 7 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • The terms "first" and "second" are used for descriptive purposes only and should not be understood as indicating or implying relative importance or implicitly indicating the number of technical features. Features defined as "first" and "second" may therefore explicitly or implicitly include one or more such features.
  • In this application, the electronic device is a device with an ink screen.
  • The electronic device may be a portable device with an ink screen, such as a mobile phone with an ink screen, a tablet computer, a wearable device with a wireless communication function (for example, a watch, a bracelet, a helmet, or a headset), a vehicle-mounted terminal device, an augmented reality (AR)/virtual reality (VR) device, a laptop, an ultra-mobile personal computer (UMPC), a netbook, a personal digital assistant (PDA), smart home equipment (such as a smart TV or a smart speaker), a smart robot, workshop equipment, a wireless terminal in self-driving, remote medical surgery, a smart grid, transportation safety, a smart city, or a smart home, or flying equipment (such as a smart robot, a hot-air balloon, a drone, or an airplane).
  • a wearable device is a portable device that can be worn directly on the user's body or integrated into the user's clothes or accessories.
  • the electronic device may also be a portable terminal device that also includes other functions such as a personal digital assistant and/or a music player function.
  • Portable terminal devices include, but are not limited to, portable terminal devices running various operating systems.
  • the above-mentioned portable terminal device may also be other portable terminal devices, such as a laptop computer (laptop) with a touch-sensitive surface (eg, a touch panel).
  • the above-mentioned electronic device may not be a portable terminal device, but a desktop computer with a touch-sensitive surface (such as a touch panel).
  • "At least one" refers to one or more, and "multiple" refers to two or more.
  • "And/or" describes an association between objects and indicates three possible relationships. For example, "A and/or B" can mean: A alone, both A and B, or B alone, where A and B can each be singular or plural. The character "/" generally indicates an "or" relationship between the associated objects. "At least one (item) of the following" or similar expressions refers to any combination of these items, including any combination of single items or plural items.
  • "At least one of a, b, or c" can mean: a; b; c; a and b; a and c; b and c; or a, b, and c, where a, b, and c can each be singular or plural.
  • In one approach, the system of the electronic device provides customized local drawing interfaces for system applications, self-developed applications, and cooperative applications in the electronic device; these applications can adapt to the local drawing interface to complete the drawing of handwriting trajectories.
  • The system service of the electronic device can accurately calculate the locally changed area of the handwriting trajectory drawn by the application through the local drawing interface, and use the SPI interface to refresh the handwriting trajectory within that locally changed area.
  • Third-party applications in electronic devices can directly call the native interface of the system to draw handwriting traces.
  • Because the system service of the electronic device cannot determine which native system interface a third-party application actually calls when drawing handwriting traces, the system service can only obtain the entire frame drawn by the third-party application and perform a full-screen refresh of the display interface based on that frame. The rate at which the electronic device displays handwriting traces on the ink screen is therefore slow, and the delay is large.
  • Moreover, the partial refresh method applies only to system applications, self-developed applications, and cooperative applications in the electronic device, and must be adapted to each application one by one. Processing efficiency is therefore low, and the versatility and practicality of the scheme are relatively low.
  • The handwriting trajectory received by a third-party application cannot be displayed by partial refresh; only a full-screen refresh of the entire frame drawn by the third-party application is possible, resulting in a large writing delay on the ink screen and low display smoothness.
  • In another possible approach, the input events on the ink screen over a period of time (including the coordinates of the operation's contact points on the ink screen, pressure sensitivity, and so on) are collected, and a motion compensation algorithm is used to predict the next input event, thereby predicting the coordinates of the operation's next contact point on the ink screen; the handwriting trajectory is then updated based on the predicted coordinates.
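  The motion-compensation idea above can be sketched with a minimal, hedged example: predict the next touch point by linearly extrapolating the two most recent input events. The event fields (x, y, pressure) are illustrative assumptions; a real algorithm would likely use more history and filtering.

```python
def predict_next_event(events):
    """Predict the next touch event by linear extrapolation.

    `events` is a list of (x, y, pressure) tuples, oldest first, with at
    least two entries. Pressure is clamped to be non-negative."""
    (x1, y1, p1), (x2, y2, p2) = events[-2], events[-1]
    return (2 * x2 - x1, 2 * y2 - y1, max(0.0, 2 * p2 - p1))
```

  The predicted coordinates would then be appended to the handwriting trajectory so the stroke can be drawn before the real event arrives.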
  • However, the applications still need to be adapted one by one, so processing efficiency is low and the versatility and practicality of the solution are limited.
  • Furthermore, electronic devices cannot use partial refresh to display the handwriting traces received by third-party applications, so the writing delay of the ink screen remains large, resulting in low display fluency.
  • embodiments of the present application provide a content display method and electronic device.
  • This solution can display handwritten content on an ink screen while simply and efficiently reducing the ink screen's writing delay, improving the versatility and practicality of the solution and improving the user experience of using the ink screen.
  • The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a USB interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a headphone interface 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a SIM card interface 195, and so on.
  • the sensor module 180 may include a gyroscope sensor, an acceleration sensor, a proximity light sensor, a fingerprint sensor, a touch sensor, a temperature sensor, a pressure sensor, a distance sensor, a magnetic sensor, an ambient light sensor, an air pressure sensor, a bone conduction sensor, etc.
  • The electronic device 100 shown in Figure 1 is only an example and does not constitute a limitation; the electronic device may have more or fewer components than shown, may combine two or more components, or may have a different component configuration.
  • the various components shown in Figure 1 may be implemented in hardware, software, or a combination of hardware and software including one or more signal processing and/or application specific integrated circuits.
  • the processor 110 may include one or more processing units.
  • The processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), among others.
  • The controller may be the nerve center and command center of the electronic device 100.
  • the controller can generate operation control signals based on the instruction operation code and timing signals to complete the control of fetching and executing instructions.
  • the processor 110 may also be provided with a memory for storing instructions and data.
  • The memory in the processor 110 is a cache, which may hold instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs those instructions or data again, it can fetch them directly from this memory, avoiding repeated accesses and reducing waiting time, thus improving system efficiency.
  • The execution of the content display method provided by the embodiments of the present application can be completed by the processor 110 controlling or calling other components, for example by calling the processing program of the embodiments stored in the internal memory 121, or by calling, through the external memory interface 120, the processing program of the embodiments stored in a third-party device, and by controlling the wireless communication module 160 to communicate data with other devices, thereby improving the intelligence and convenience of the electronic device 100 and the user experience.
  • The processor 110 may include different devices. For example, when a CPU and a GPU are integrated, they may cooperate to execute the content display method provided by the embodiments of the present application, with part of the algorithm executed by the CPU and another part by the GPU, for faster processing.
  • the display screen 194 is used to display images, videos, etc.
  • the display screen 194 is an ink screen.
  • Display 194 includes a display panel.
  • the electronic device 100 may include 1 or N display screens 194, where N is a positive integer greater than 1.
  • the display screen 194 may be used to display information input by or provided to the user as well as various graphical user interfaces (GUI).
  • the display screen 194 can display photos, videos, web pages, or files, etc.
  • the display screen 194 can be an integrated flexible display screen, or a spliced display screen composed of two rigid screens and a flexible screen located between the two rigid screens.
  • a camera 193 (either a front-facing camera or a rear-facing camera, or one camera can serve as both a front-facing and rear-facing camera) is used to capture still images or video.
  • The camera 193 may include photosensitive elements such as a lens group and an image sensor, where the lens group includes multiple lenses (convex or concave) for collecting the light signals reflected by the object to be photographed and transmitting them to the image sensor.
  • the image sensor generates an original image of the object to be photographed based on the light signal.
  • Internal memory 121 may be used to store computer executable program code, which includes instructions.
  • the processor 110 executes instructions stored in the internal memory 121 to execute various functional applications and data processing of the electronic device 100 .
  • The internal memory 121 may include a program storage area and a data storage area; the program storage area can store code of the operating system and of application programs (such as functions corresponding to the content display method provided by this application).
  • the storage data area may store data created during use of the electronic device 100 and the like.
  • the internal memory 121 may also store one or more computer programs corresponding to the algorithm of the content display method provided by the embodiment of the present application.
  • the one or more computer programs are stored in the above-mentioned internal memory 121 and configured to be executed by the one or more processors 110.
  • the one or more computer programs include instructions, and the above instructions can be used to perform the following embodiments. various steps.
  • the internal memory 121 may include high-speed random access memory, and may also include non-volatile memory, such as at least one disk storage device, flash memory device, universal flash storage (UFS), etc.
  • the code of the algorithm of the content display method provided by the embodiment of the present application can also be stored in an external memory.
  • the processor 110 may execute the code of the algorithm of the content display method stored in the external memory through the external memory interface 120 .
  • the sensor module 180 may include a gyroscope sensor, an acceleration sensor, a proximity light sensor, a fingerprint sensor, a touch sensor, and the like.
  • The touch sensor is also called a "touch panel".
  • The touch sensor can be disposed on the display screen 194; together, the touch sensor and the display screen 194 form a touch display screen, also called a "touch screen". The touch sensor is used to detect touch operations on or near it.
  • the touch sensor can pass the detected touch operation to the application processor to determine the touch event type.
  • Visual output related to the touch operation may be provided through display screen 194 .
  • the touch sensor may also be disposed on the surface of the electronic device 100 at a location different from that of the display screen 194 .
  • the display screen 194 of the electronic device 100 displays a home interface, which includes icons of applications (such as camera applications, etc.).
  • the user can click the icon of the camera application in the main interface through the touch sensor to trigger the processor 110 to start the camera application and open the camera 193 .
  • the display screen 194 displays the interface of the camera application, such as the viewfinder interface.
  • the wireless communication function of the electronic device 100 can be implemented through the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor and the baseband processor.
  • Antenna 1 and Antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in electronic device 100 may be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve antenna utilization. For example: Antenna 1 can be reused as a diversity antenna for a wireless LAN. In other embodiments, antennas may be used in conjunction with tuning switches.
  • the mobile communication module 150 can provide solutions for wireless communication including 2G/3G/4G/5G applied on the electronic device 100 .
  • the mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (LNA), etc.
  • the mobile communication module 150 can receive electromagnetic waves through the antenna 1, perform filtering, amplification and other processing on the received electromagnetic waves, and transmit them to the modem processor for demodulation.
  • the mobile communication module 150 can also amplify the signal modulated by the modem processor and convert it into electromagnetic waves through the antenna 1 for radiation.
  • at least part of the functional modules of the mobile communication module 150 may be disposed in the processor 110 .
  • At least part of the functional modules of the mobile communication module 150 and at least part of the modules of the processor 110 may be provided in the same device. In this embodiment of the present application, the mobile communication module 150 can also be used to exchange information with other devices.
  • a modem processor may include a modulator and a demodulator.
  • the modulator is used to modulate the low-frequency baseband signal to be sent into a medium-high frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low-frequency baseband signal.
  • the demodulator then transmits the demodulated low-frequency baseband signal to the baseband processor for processing.
  • the application processor outputs sound signals through audio devices (not limited to speaker 170A, receiver 170B, etc.), or displays images or videos through display screen 194.
  • the modem processor may be a stand-alone device.
  • the modem processor may be independent of the processor 110 and may be provided in the same device as the mobile communication module 150 or other functional modules.
  • the wireless communication module 160 can provide wireless communication solutions applied on the electronic device 100, including wireless local area networks (WLAN) (such as wireless fidelity (Wi-Fi) networks), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR) technology, etc.
  • the wireless communication module 160 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 160 receives electromagnetic waves via the antenna 2 , frequency modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110 .
  • the wireless communication module 160 can also receive the signal to be sent from the processor 110, frequency modulate it, amplify it, and convert it into electromagnetic waves through the antenna 2 for radiation.
  • the wireless communication module 160 is used to establish connections with other electronic devices for data exchange.
  • the wireless communication module 160 may be used to access an access point device, send control instructions to other electronic devices, or receive data sent from other electronic devices.
  • the electronic device 100 can implement audio functions, such as music playback, recording, etc., through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headphone interface 170D, and the application processor.
  • the electronic device 100 may receive key 190 inputs and generate key signal inputs related to user settings and function control of the electronic device 100 .
  • the electronic device 100 can use the motor 191 to generate vibration prompts (such as vibration prompts for incoming calls).
  • the indicator 192 in the electronic device 100 may be an indicator light, which may be used to indicate charging status, power changes, or may be used to indicate messages, missed calls, notifications, etc.
  • the SIM card interface 195 in the electronic device 100 is used to connect a SIM card.
  • the SIM card can be connected to or separated from the electronic device 100 by inserting it into the SIM card interface 195 or pulling it out from the SIM card interface 195 .
  • the electronic device 100 may include more or fewer components than shown in FIG. 1 , which is not limited by the embodiments of this application.
  • the illustrated electronic device 100 is only an example, and the electronic device 100 may have more or fewer components than shown, may combine two or more components, or may have a different component configuration.
  • the various components shown in the figures may be implemented in hardware, software, or a combination of hardware and software including one or more signal processing and/or application specific integrated circuits.
  • the software system of the electronic device 100 may adopt a layered architecture, an event-driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture.
  • the embodiment of this application takes the Android system with a layered architecture as an example to illustrate the software structure of the electronic device.
  • the layered architecture divides the software into several layers, and each layer has clear roles and division of labor.
  • the layers communicate through software interfaces.
  • the software architecture can be divided into four layers, from top to bottom: application layer, application framework layer (framework, FWK), runtime and system libraries, and Linux kernel (kernel) layer.
  • the application layer is the top layer of the operating system, including the native applications of the operating system, such as camera, gallery, calendar, Bluetooth, music, video, information, etc.
  • the application program involved in the embodiments of this application is referred to as an application (APP), and is a software program that can implement one or more specific functions.
  • multiple applications can be installed in an electronic device.
  • the applications mentioned below may be system applications installed when the electronic device leaves the factory, or they may be third-party applications downloaded from the Internet or obtained from other electronic devices when the user uses the electronic device.
  • the application can be developed using the Java language by calling the application programming interfaces (APIs) provided by the application framework layer. Developers can interact with the underlying layers of the operating system (such as the kernel layer) through the application framework to develop their own applications.
  • the application framework layer is the API and programming framework of the application layer.
  • the application framework layer can include some predefined functions.
  • the application framework layer can include window managers, content providers, view systems, phone managers, resource managers, notification managers, etc.
  • a window manager is used to manage window programs.
  • the window manager can obtain the display size, determine whether there is a status bar, lock the screen, capture the screen, etc.
  • Content providers are used to store and retrieve data and make this data accessible to applications.
  • the data may include files (such as documents, videos, images, audio), text and other information.
  • the view system includes visual controls, such as controls that display text, pictures, documents, etc.
  • a view system can be used to build applications.
  • the interface in the display window can be composed of one or more views.
  • a display interface including a text message notification icon may include a view for displaying text and a view for displaying pictures.
  • Telephone managers are used to provide communication functions of electronic devices.
  • the notification manager allows applications to display notification information in the status bar, which can be used to convey notification-type messages and can automatically disappear after a short stay without user interaction.
  • the runtime includes core libraries and virtual machines.
  • the runtime is responsible for the scheduling and management of the operating system.
  • the core library of the system consists of two parts: one part comprises the functions that the Java language needs to call, and the other part is the core library of the operating system.
  • the application layer and application framework layer run in virtual machines. Taking Java as an example, the virtual machine executes Java files in the application layer and application framework layer as binary files. The virtual machine is used to perform object life cycle management, stack management, thread management, security and exception management, and garbage collection and other functions.
  • System libraries can include multiple functional modules. For example: surface manager, media library, three-dimensional graphics processing library (for example: OpenGL ES), two-dimensional graphics engine (for example: SGL), etc.
  • the surface manager is used to manage the display subsystem and provides the fusion of 2D and 3D layers for multiple applications.
  • the media library supports playback and recording of a variety of commonly used audio and video formats, as well as static image files, etc.
  • the media library can support a variety of audio and video encoding formats, such as: MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, etc.
  • the 3D graphics processing library is used to implement 3D graphics drawing, image rendering, composition, and layer processing.
  • the 2D graphics engine is a drawing engine for 2D drawing.
  • the kernel layer provides core system services of the operating system, such as security, memory management, process management, network protocol stack, and driver model, all of which are implemented based on the kernel layer.
  • the kernel layer also serves as an abstraction layer between the hardware and software stacks. This layer has many drivers related to electronic devices.
  • the main drivers include: the display driver; the keyboard driver as an input device; the Flash driver based on memory technology devices; the camera driver; the audio driver; the Bluetooth driver; the Wi-Fi driver, etc.
  • Figure 3 is a schematic architectural diagram of a content display system provided by an embodiment of the present application.
  • the content display system may include an application, a graphics rendering system, a scene recognition system and a display system.
  • the content display system can be deployed in an electronic device.
  • the electronic device can have a content display service.
  • the graphics rendering system, scene recognition system and display system can be included as three subsystems of the content display service.
  • the graphics drawing system can be used as an interface in the electronic device to provide system drawing services.
  • the application can draw the graphics images (including user handwriting traces) and other content required to be displayed by the application business by calling the graphics drawing system.
  • the application can send the drawn content to the display system, and the display system displays the content.
  • the application may be a system application or a third-party application.
  • the scene recognition system can be used to detect the applications (services) currently running in the electronic device, the application running in the foreground, the type of the foreground application, the stylus status, the ink screen touch status, etc., and send the detected information to the display system, so that the display system determines based on the information whether to use the method provided by the embodiment of the present application to control the content display process of the application running in the foreground.
  • the display system is used to synthesize the content drawn by the application by calling the graphics drawing system and output it to the ink screen for display.
  • the display system may include a display control module, a frame buffer learning module and a frame buffer update module.
  • the display control module is used to perform content display-related control such as content synthesis method and display path.
  • the frame buffer learning module is used to predict the image to be displayed in the next frame based on the displayed consecutive frame images (data), and to calculate the dirty area between the predicted image frame and the previous frame image.
  • the frame buffer update module is used to refresh and display a predicted frame of image based on the dirty area calculated by the frame buffer learning module.
  • the content display system may further include a display screen driver, which is used to drive the display screen to display content under the control of the frame buffer update module.
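As an illustration only, the cooperation between the frame buffer learning module and the frame buffer update module described above can be sketched in Python. The class names, the simple extrapolation rule, and the NumPy array representation of frames are assumptions for illustration and are not part of the embodiment:

```python
import numpy as np

class FrameBufferLearning:
    """Predicts the next frame from displayed consecutive frames."""

    def predict(self, frames):
        # Hypothetical prediction rule: linearly extrapolate pixel values
        # from the last two displayed frames (a real system could use a
        # trained image prediction model instead).
        prev = frames[-2].astype(np.int16)
        last = frames[-1].astype(np.int16)
        return np.clip(last + (last - prev), 0, 255).astype(np.uint8)

    def dirty_area(self, predicted, current):
        # Bounding box of the pixels that differ between the predicted
        # frame and the currently displayed frame ("dirty area").
        diff = np.any(predicted != current, axis=-1)
        ys, xs = np.nonzero(diff)
        if ys.size == 0:
            return None  # nothing changed
        return (int(xs.min()), int(ys.min()), int(xs.max()) + 1, int(ys.max()) + 1)

class FrameBufferUpdate:
    """Refreshes only the dirty area on the (simulated) ink screen."""

    def refresh(self, screen, predicted, rect):
        x0, y0, x1, y1 = rect
        screen[y0:y1, x0:x1] = predicted[y0:y1, x0:x1]
```

A partial refresh driven by the learning module's dirty area touches only the changed rectangle, which is what makes the SPI path described later viable.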
  • the graphics drawing system can be implemented as a rendering process (render thread) in the Android system.
  • the display system can be implemented as a display synthesis (surface flinger) service of the Android system.
  • system architecture shown in Figure 3 is only an exemplary illustration of the system architecture applicable to the solution of this application.
  • the system architecture shown in Figure 3 does not limit the system architecture applicable to the solution of this application.
  • the system architecture applicable to the solution of this application may include fewer or more modules than shown in Figure 3, and there is no specific limitation in the embodiment of this application.
  • the content display method provided by the embodiment of the present application can be applied to an electronic device with an ink screen.
  • the content display method provided by the embodiment of the present application may include:
  • S401 The electronic device acquires multiple consecutive image frames, which are image frames generated by the first application control in the electronic device.
  • the first application may be an application of a set type, such as a handwriting office application, etc.; or the first application may be any application in a set white list, where the white list may be used to record at least one application to which the solution provided by this application is applicable.
  • the first application may be a third-party application installed in the electronic device.
  • the number of the plurality of image frames may be a set number.
  • based on the received handwriting operation, the first application in the electronic device can call a system service in the electronic device to sequentially generate consecutive image frames to be displayed and display the generated image frames.
  • the electronic device can obtain corresponding image frames frame by frame, thereby obtaining the multiple image frames.
  • the system services may include services for drawing image frames in the electronic device and services for displaying image frames in the electronic device.
  • the system service may be the content display service shown in Figure 3 above, or the system service may at least include the graphics drawing system and display system described in Figure 3 above.
  • the generated image frames (or interfaces corresponding to the image frames) can be displayed sequentially on the ink screen of the electronic device.
  • the electronic device may include a display unit and the first application, and the above step S401 may be a step performed by the display unit in the electronic device.
  • the display unit can serve as the above-mentioned system service.
  • the display unit does not need to serve as the above-mentioned system service. If the display unit does not serve as the above-mentioned system service, then after the first application controls the system service to generate image frames in sequence, the display unit can obtain the image frames from the system service, thereby obtaining the multiple image frames.
  • every time the display unit obtains an image frame, if it can obtain a set number of image frames including that image frame (where that image frame is the last of the set number of image frames), then the set number of image frames can be used as the plurality of image frames, execution of the above step S401 can begin, and subsequent steps can continue; otherwise, the display unit does not execute step S401 and its subsequent steps, and only the first application executes the method of controlling the generation and display of image frames.
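The windowing condition above (prediction starts only once a set number of consecutive frames is available) might be sketched as follows; `WINDOW`, `DisplayUnit`, and `on_frame` are hypothetical names chosen for illustration:

```python
from collections import deque

WINDOW = 5  # the "set number" of consecutive image frames (assumed value)

class DisplayUnit:
    def __init__(self, predictor):
        # Keep only the most recent WINDOW frames.
        self.window = deque(maxlen=WINDOW)
        self.predictor = predictor

    def on_frame(self, frame):
        """Called each time the first application generates an image frame."""
        self.window.append(frame)
        if len(self.window) < WINDOW:
            # Not enough history yet: fall back to the application's own
            # generate-and-display path (step S401 is not executed).
            return frame
        # The new frame is the last of the plurality of image frames, so
        # predict the next frame and display it instead (steps S401-S403).
        return self.predictor(list(self.window))
```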
  • the method used by the first application when displaying the image frame is to call a system service of the electronic device to draw the image frame and display the image frame.
  • the first application may be the application shown in Figure 3 above
  • the display unit may provide the content display service shown in Figure 3 above.
  • the display unit may provide system-level services for electronic devices.
  • the first application can generate the plurality of image frames and display them accordingly by calling a system interface of the electronic device. For example, when the first application is the application shown in Figure 3 and the display unit is the content display service shown in Figure 3, the first application can draw the interface to be displayed by calling the graphics drawing system in the display unit, thereby obtaining the corresponding plurality of image frames, and display the corresponding interface according to the plurality of image frames.
  • the first application may, after receiving a user's handwriting operation on the first display screen, control the generation of multiple consecutive image frames in response to the received handwriting operation and display the corresponding image frames in sequence.
  • the handwriting operation may be a touch sliding operation or other operation in which the trajectory of the operation point on the ink screen continues (changes), for example, it may be a writing operation, a drawing operation, etc.
  • the handwriting operation may be an operation performed directly by the user on the ink screen (for example, the user uses a finger to write on the ink screen), or it may be an operation performed by the user using a stylus on the ink screen (for example, the user holds a stylus and uses it to write, draw, annotate, etc. on the ink screen).
  • the first application can sequentially display the generated image frames in a full-screen window, a split-screen window, or a floating window of the ink screen.
  • the handwriting operation may be an operation acting on any area of the ink screen;
  • the handwriting operation may be an operation that acts on the area corresponding to the split-screen window on the ink screen;
  • the handwriting operation is an operation performed in the area corresponding to the floating window on the ink screen.
  • the generation of image frames by the first application described in the embodiments of this application means that the first application calls the service for generating image frames in the system service of the electronic device to generate an image frame to be displayed; the first application displaying the image frame on the ink screen of the electronic device means that the first application calls the service for displaying image frames in the system service of the electronic device to display the image frame on the ink screen of the electronic device.
  • the first application is an application running in the foreground of the electronic device.
  • the electronic device (or a display unit in the electronic device) can identify the foreground running state of the first application, identify the category of the first application, and identify the state of the user operations received by the first application. If it determines that the first application is currently running in the foreground, that the type of the first application is a set type (for example, handwriting office type), and that the user has a control operation on the first application (or on the content displayed by the first application), the electronic device can start executing the above step S401; otherwise, the electronic device does not start executing the above step S401.
  • the electronic device may determine that the user has a control operation on the first application.
  • the above-mentioned step S401 may be a step performed by the display unit. Further, when the display unit serves as the content display service shown in FIG. 3, the above step S401 may be executed by the display control module or the frame buffer learning module in the display system.
  • S402 The electronic device predicts a first image frame based on the plurality of image frames, and the first image frame is used as the image frame displayed by the electronic device on the ink screen after displaying the second image frame.
  • the second image frame is the image frame currently displayed on the ink screen.
  • the first image frame is used as a predicted image frame of a third image frame and is displayed on the ink screen instead of the third image frame.
  • the third image frame is the image frame generated by the first application after the plurality of image frames.
  • the electronic device can first predict the predicted image frame of the third image frame, that is, the first image frame, and display the first image frame, so the display speed of the image frame can be appropriately accelerated, thereby reducing the delay.
  • the electronic device can obtain a plurality of consecutive image frames generated by the first application including the third image frame (where the third image frame is the last image frame among the plurality of image frames), predict a fourth image frame based on the plurality of image frames, and then use the fourth image frame to update the first image frame.
  • the fourth image frame is used as a predicted image frame next to the third image frame generated by the first application.
  • the second image frame may be the last image frame among the plurality of image frames; or the second image frame may be an image frame predicted by the electronic device, in which case the second image frame is used as a prediction frame of the last image frame among the plurality of image frames and is displayed on the ink screen instead of that last image frame.
  • the electronic device can obtain multiple consecutive image frames actually generated by the application, and predict the next image frame displayed on the ink screen based on the multiple image frames.
  • for example, suppose the first to fifth frame images displayed by the electronic device on the ink screen are the first to fifth frame images generated by the first application after receiving the user operation. After the first application generates the fifth frame image, the plurality of image frames acquired by the electronic device may be the first to fifth frame images, and the first image frame is used as the sixth frame image displayed on the ink screen by the electronic device, while the sixth frame image (i.e., the third image frame) generated by the first application will not be displayed on the ink screen. When the plurality of image frames acquired by the electronic device are the second to sixth frame images generated by the first application, the first image frame is used as the seventh frame image displayed on the ink screen by the electronic device, and the seventh frame image (i.e., the third image frame) generated by the first application will not be displayed on the ink screen.
  • each time the electronic device obtains an image frame generated by the first application, it can predict the next frame based on that image frame and the image frames generated by the first application before it, and can display the predicted next frame image.
  • the electronic device can use a trained image prediction model to predict the first image frame based on the multiple acquired image frames, wherein the image prediction model is used to represent the relationship between multiple consecutive image frames and the image frame following the last of those image frames.
  • the electronic device can input the multiple image frames into the trained image prediction model to obtain the image frame output by the image prediction model, and can use the image frame as the last of the multiple image frames. The next image frame of one image frame.
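Based on the description above, the contract of the trained image prediction model—a plurality of consecutive frames in, one predicted next frame out—could be sketched as follows. `toy_model` merely stands in for a trained model and is not the model of the embodiment; the frame-stacking layout is also an assumption:

```python
import numpy as np

def predict_next_frame(model, frames):
    """Feed the plurality of consecutive frames to the trained model and
    treat its output as the frame following the last input frame."""
    x = np.stack(frames)   # shape: (k, H, W, C)
    y = model(x)           # shape: (H, W, C)
    assert y.shape == frames[-1].shape
    return y

# Stand-in for a trained model: extrapolates the most recent change
# between the last two frames (a real model would be learned from
# sequences of displayed frames).
def toy_model(x):
    prev = x[-2].astype(np.int16)
    last = x[-1].astype(np.int16)
    return np.clip(2 * last - prev, 0, 255).astype(np.uint8)
```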
  • the electronic device may also first determine whether the ink screen is in a handwriting state. If so, the electronic device may predict the first image frame based on the multiple image frames; otherwise, the electronic device may not perform the step of predicting the first image frame and its subsequent steps, and only the first application performs the task of displaying the image frame.
  • when the electronic device includes the above-mentioned display unit and the first application, and the display unit determines that the ink screen is not in the handwriting state, the display unit can call the mobile industry processor interface (MIPI) to transmit the image frame to be displayed to the driver of the ink screen, so that the driver of the ink screen drives the ink screen to display the corresponding interface based on the image frame.
  • the electronic device can detect whether the ink screen is in a handwriting state through system services.
  • the input subsystem in the system service of electronic equipment can detect input operations on the ink screen and report corresponding information to the system.
  • the display unit in the electronic device can query the information reported by the input subsystem and determine whether the ink screen is in a handwriting state based on the information.
  • the information reported by the input subsystem may include the type of input device (such as finger, stylus, trackball, mouse, etc.), the type of input event (such as press (down), lift (up), slide (move) etc.) and other information.
  • the electronic device can also use other methods to determine whether the ink screen is in a handwriting state, which is not specifically limited in the embodiments of this application.
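A minimal sketch of deciding the handwriting state from the input subsystem's reported device type and event type might look like this; the event-record fields and values are illustrative only and do not correspond to the Android input API:

```python
# Illustrative event records as the input subsystem might report them:
# device: "finger" | "stylus" | "mouse" | ...; action: "down" | "move" | "up"
def is_handwriting(events):
    """The ink screen is considered to be in a handwriting state when the
    most recent finger/stylus contact went down (or is moving) and has
    not yet lifted."""
    state = False
    for e in events:
        if e["device"] in ("finger", "stylus"):
            if e["action"] in ("down", "move"):
                state = True
            elif e["action"] == "up":
                state = False
    return state
```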
  • the above-mentioned step S402 may be a step performed by the display unit. Further, when the display unit serves as the content display service shown in FIG. 3, the above step S402 may be executed by the frame buffer learning module in the display system.
  • S403 The electronic device updates the first interface corresponding to the second image frame to the second interface corresponding to the first image frame.
  • the electronic device can use a global update method to update the first interface corresponding to the second image frame to the second interface corresponding to the first image frame. Specifically, after predicting the first image frame, the electronic device can, based on the first image frame, directly replace the entire first interface corresponding to the second image frame displayed on the ink screen with the second interface corresponding to the first image frame.
  • when the electronic device includes the above-mentioned display unit and the first application, after the display unit obtains the first image frame by executing the above steps S401-S402, the display unit can transmit the first image frame to the driver of the ink screen by calling the MIPI interface, so that the driver of the ink screen drives the ink screen to display the second interface corresponding to the first image frame according to the first image frame.
  • the display unit needs to transmit the entire image frame to the display screen for display, so the amount of data that needs to be transmitted is large.
  • the advantage of the MIPI interface is that it can support the transmission of large amounts of data. Therefore, data transmission through the MIPI interface can meet the data transmission requirements of the global update method, thereby ensuring the normal execution of the update process of the display interface on the ink screen.
  • the electronic device may use a local update method to update the first interface corresponding to the second image frame to the second interface corresponding to the first image frame. Specifically, after predicting the first image frame, the electronic device can first determine, based on the second image frame and the first image frame, the first target image area (that is, the dirty area) in which the first image frame changes relative to the second image frame, and then partially update the first interface corresponding to the second image frame displayed on the ink screen according to the image content in the first target image area, so that the updated interface displayed on the ink screen is the second interface corresponding to the first image frame, thereby achieving the effect of updating the first interface to the second interface.
  • the electronic device can replace the image content in the same area as the first target image area in the first interface displayed on the ink screen with the image content in the first target image area in the first image frame, So that the updated interface displayed on the ink screen is the second interface corresponding to the first image frame.
  • the display unit may compare the first image frame and the second image frame to determine the image content in the first target image area where the first image frame changes relative to the second image frame.
  • the display unit can transmit the image content in the first target image area to the driver of the ink screen by calling the serial peripheral interface (SPI), so that the driver of the ink screen, according to the image content in the first target image area, drives the ink screen to replace the image content in the area of the first interface corresponding to the first target image area with the image content in the first target image area.
  • the electronic device may also first determine that the image content in the first target image area satisfies the SPI display condition.
  • the SPI display condition is that the data amount of the image content is less than or equal to the set data amount threshold, and the set data amount threshold can be the maximum data amount that the SPI interface can carry. Based on this method, the smooth execution of interface updates can be ensured.
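The SPI display condition above reduces to a payload-size check. A sketch, with an assumed threshold value (the actual maximum SPI payload depends on the screen controller and link configuration):

```python
SPI_MAX_BYTES = 64 * 1024  # assumed SPI payload limit; panel-specific in practice

def choose_display_path(dirty_bytes: int) -> str:
    """Pick the transmission path for an interface update.

    Image content small enough for the SPI link takes the fast SPI path
    (local update); larger payloads fall back to the high-bandwidth MIPI
    path (or the current update is abandoned, per the embodiment).
    """
    return "SPI" if dirty_bytes <= SPI_MAX_BYTES else "MIPI"
```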
  • if the image content in the first target image area does not satisfy the SPI display condition, the MIPI interface can be used to transmit the image content to ensure the smooth execution of interface updates.
  • the electronic device may not transmit the image content in the first target image area to the ink screen, that is, stop the current process, and the first application may generate the image frame to be displayed and display the corresponding interface.
  • the SPI interface can transmit a smaller amount of data, but the transmission speed is very fast.
  • the display unit only needs to transmit the changed content in the image frame to the display screen for updated display, so the amount of data that needs to be transmitted is small; using SPI for data transmission is therefore feasible in this scenario, and it also ensures a fast data transmission speed, which helps improve the refresh rate when the ink screen updates the display content, thereby supporting smooth handwriting services for users and improving the user experience.
  • the display unit in the electronic device can also use other interfaces to transmit the image frame, or the image content of the target area in the image frame, to the display screen for display according to actual business needs, which is not specifically limited in the embodiments of this application.
  • the above-mentioned step S403 may be a step performed by the display unit. Further, when the display unit serves as the content display service shown in FIG. 3, the above step S403 may be executed by the frame buffer update module in the display system.
  • the electronic device may begin to execute the method provided in the embodiments of the present application when detecting a handwriting operation performed by the user.
  • the electronic device can continue to obtain image frames generated by the first application until it is detected that the user stops performing handwriting operations.
  • each time the electronic device obtains an image frame generated by the first application, it can predict the next image frame based on that image frame and at least one image frame before it, and display the predicted next image frame on the ink screen.
  • the at least one image frame is an image frame generated by the first application (that is, an image frame to be displayed generated by the first application calling a system service of the electronic device).
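The rolling prediction described in the bullets above can be sketched as follows. The patent does not fix a particular predictor, so this sketch substitutes a simple two-frame linear extrapolation for the trained image prediction model; the `FramePredictor` name and the window size are assumptions for illustration only.

```python
from collections import deque

import numpy as np

WINDOW = 3  # number of past frames the predictor looks at (assumed value)

class FramePredictor:
    """Sliding-window predictor: each frame obtained from the application
    updates the window, and the next frame is extrapolated from it."""

    def __init__(self, window=WINDOW):
        self.frames = deque(maxlen=window)

    def push(self, frame):
        # widen the dtype so the extrapolation below cannot overflow uint8
        self.frames.append(np.asarray(frame, dtype=np.int16))

    def predict_next(self):
        # A trained image prediction model would go here; as a stand-in,
        # linearly extrapolate the last two frames: next ≈ 2*last - prev.
        if len(self.frames) < 2:
            return None
        last, prev = self.frames[-1], self.frames[-2]
        return np.clip(2 * last - prev, 0, 255).astype(np.uint8)
```

Each time a new application frame arrives, `push` advances the window and `predict_next` yields the frame shown in place of the not-yet-generated one.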
  • the method provided by the above embodiment predicts and refreshes the changing area of the display interface during the display phase of the system display process. Therefore, it only needs to obtain the content the application displays on the ink screen, and there is no need to adapt the application as in existing technical solutions. The solution provided by the above embodiment can therefore greatly reduce implementation difficulty and improve the versatility and practicality of the solution.
  • a partial update method can also be used to update the content displayed by third-party applications, enabling fast refresh of that content, thereby reducing the delay in refreshing third-party application content on the ink screen and improving display smoothness.
  • the process of a content display method may include:
  • S501 The scene recognition system in the electronic device detects and identifies the type of third-party application running in the foreground and the status of the stylus.
  • the setting type can be handwriting office type, etc.
  • when the electronic device adopts the Android system architecture and the display system determines, based on the information from the scene recognition system, that the type of the third-party application currently running in the foreground is handwriting office and the stylus state is down, it can forcibly enable the graphics processing unit (GPU) composition mode. In this mode, a complete image frame can be obtained while the third-party application generates image frames and displays the corresponding interface, and the complete image frame is sent to the display system.
  • S503 The display system in the electronic device instructs the third-party application to display the interface through the display system.
  • this step can be performed by a display control module in the display system.
  • when the display control module determines that the type of the third-party application currently running in the foreground is the set type and the stylus state is down, it can decide to use the method provided by the embodiments of this application: predicting the next image frame based on multiple consecutive image frames generated by the third-party application and updating the displayed interface accordingly.
  • when the display control module determines that the type of the third-party application currently running in the foreground is not the set type, or the stylus state is not down, it can decide to generate image frames and display the corresponding interface in the conventional way, that is, the third-party application generates image frames by itself and calls the system service to display the corresponding interface.
  • S504 After receiving the handwriting operation of the stylus on the ink screen, the third-party application in the electronic device calls the graphics drawing system to draw the handwriting trajectory and generates multiple image frames corresponding to the trajectory.
  • each image frame in the plurality of image frames includes part of the handwriting trajectory.
  • Third-party applications can draw handwriting trajectories and generate corresponding image frames by calling the native drawing interface of the electronic device.
  • S505 The graphics drawing system in the electronic device sends the generated multiple image frames to the display system in sequence.
  • the graphics drawing system can send multiple image frames to the display system in sequence through the buffer queue mechanism.
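The in-order handoff described in S505 can be illustrated with a minimal producer–consumer sketch. The function names are hypothetical, and Python's `queue.Queue` stands in for the Android-style buffer queue between the drawing and display sides.

```python
import queue
import threading

frame_queue = queue.Queue(maxsize=4)  # bounded, like a buffer queue

def drawing_system(frames):
    # producer: the graphics drawing system hands frames over in order
    for f in frames:
        frame_queue.put(f)       # blocks when the queue is full
    frame_queue.put(None)        # sentinel: no more frames

def display_system(received):
    # consumer: the display system dequeues frames in FIFO order
    while True:
        f = frame_queue.get()
        if f is None:
            break
        received.append(f)       # here: record; in practice: predict/compose

received = []
consumer = threading.Thread(target=display_system, args=(received,))
consumer.start()
drawing_system(["frame1", "frame2", "frame3"])
consumer.join()
```

The bounded queue applies backpressure: the drawing side stalls rather than outrunning the display side, which preserves frame order.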
  • S506 The display system in the electronic device determines whether it is currently in a handwriting state; if so, step S507 is executed, otherwise step S511 is executed.
  • this step can be performed by the frame buffer learning module in the display system.
  • S507 The display system in the electronic device predicts the next image frame to be displayed based on the multiple image frames.
  • this step can be performed by the frame buffer learning module in the display system.
  • S508 The display system in the electronic device calculates the dirty region between the next image frame to be displayed and the previous image frame.
  • this step can be performed by the frame buffer learning module in the display system.
  • S509 When the display system in the electronic device determines that the dirty region meets the SPI display condition, it sends the dirty region information to the ink screen driver through the SPI interface.
  • this step can be performed by a frame buffer update module in the display system.
  • S510 The ink screen driver in the electronic device partially updates the currently displayed interface based on the dirty region information, thereby updating the display interface on the ink screen.
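The dirty-region computation behind S508–S510 amounts to diffing the predicted frame against the previous one. A minimal sketch, assuming frames are NumPy arrays and the dirty region is reported as a single bounding box (the function name is hypothetical):

```python
import numpy as np

def dirty_region(prev_frame, next_frame):
    """Return the bounding box (top, left, bottom, right) of the pixels
    that differ between two frames, or None if the frames are identical."""
    if prev_frame.ndim == 3:                       # e.g. H x W x channels
        diff = np.any(prev_frame != next_frame, axis=-1)
    else:                                          # grayscale H x W
        diff = prev_frame != next_frame
    ys, xs = np.nonzero(diff)
    if ys.size == 0:
        return None                                # nothing changed
    # half-open box, so (bottom - top) * (right - left) is the dirty area
    return int(ys.min()), int(xs.min()), int(ys.max()) + 1, int(xs.max()) + 1
```

Only the pixels inside this box (plus the box coordinates) would then be handed to the driver for a partial refresh.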
  • S511 The electronic device sends the next image frame to be displayed to the ink screen driver through the MIPI interface.
  • the ink screen driver globally updates the currently displayed interface according to the next image frame to be displayed, so as to update the display interface on the ink screen.
  • third-party applications (not adapted to the manufacturer's customized APIs) can use the system's partial-refresh method to refresh the display interface.
  • there is no need to adapt each third-party application, which can greatly reduce implementation difficulty and improve the versatility and practicality of the solution.
  • when this solution is applied to an ink screen with a low refresh rate, it can greatly reduce writing latency and increase the refresh rate, thereby improving the writing experience.
  • embodiments of the present application also provide a content display method, as shown in FIG. 6.
  • the method may include:
  • S601 The display unit in the operating system of the electronic device acquires multiple consecutive image frames belonging to the first application in response to the handwriting operation on the ink screen.
  • the electronic device, display unit, and first application may refer to the relevant introductions in the previous embodiments, which will not be repeated here.
  • the plurality of image frames may be image frames whose generation is controlled by the first application, as described in the above embodiments.
  • the method for the display unit to obtain the plurality of image frames may refer to the method described in the above embodiments, which will not be described again here.
  • the handwriting operation is an operation performed in a first display area on the ink screen; wherein the first display area is a display area where the second image frame is located.
  • S602 The display unit predicts a first image frame according to the plurality of image frames, where the first image frame is the predicted image frame of the frame following the last of the plurality of image frames.
  • the method for the display unit to predict the first image frame based on the plurality of image frames may refer to the relevant introduction in the above embodiments, and will not be described again here.
  • S603 The display unit updates the second image frame displayed on the ink screen to the first image frame.
  • the second image frame is a predicted image frame of the last image frame among the plurality of image frames.
  • the display unit may update the second image frame to the first image frame in a partial update manner. Specifically, the display unit may first determine first target image data based on the first image frame and the second image frame, where the first target image data indicates the image content of the first image frame that has changed relative to the second image frame; it can then update the second image frame displayed on the ink screen to the first image frame according to the first target image data.
  • the display unit may use a serial peripheral interface to send the first target image data to the ink screen, and drive the ink screen to replace the second target content in the second image frame with the first target content; wherein the first target content is the image content indicated by the first target image data, and the second target content is the content in the second image frame that differs from the first image frame.
  • the display unit may first determine that the data amount of the first target image data is less than or equal to the set data amount threshold. If the display unit determines that the data amount of the first target image data is greater than the set data amount threshold, a global update method may be used. Specifically, the display unit may use a mobile industry processor interface to send the first image frame to the ink screen, and drive the ink screen to replace the second image frame with the first image frame.
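The threshold logic in this design can be sketched as follows. `SPI_MAX_BYTES` and the function names are assumptions for illustration; the real threshold is device-specific and the patent only requires comparing the changed-data amount against a set threshold.

```python
SPI_MAX_BYTES = 64 * 1024  # assumed SPI payload limit, not a value from the patent

def send_update(dirty_data, full_frame, spi_send, mipi_send):
    """Partial update over SPI when the changed data fits under the
    threshold; otherwise fall back to a global update over MIPI."""
    if dirty_data is not None and len(dirty_data) <= SPI_MAX_BYTES:
        spi_send(dirty_data)   # partial refresh: only the changed content
        return "spi"
    mipi_send(full_frame)      # global refresh: the whole predicted frame
    return "mipi"
```

This mirrors the text: SPI is preferred for its speed on small payloads, and MIPI carries the full frame whenever the dirty data exceeds what SPI can bear.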
  • electronic device 700 may include: a display screen 701, a memory 702, one or more processors 703, and one or more computer programs (not shown in the figure).
  • the various devices described above may be coupled through one or more communication buses 704.
  • the display screen 701 is an ink screen, used to display application interfaces and other related user interfaces.
  • One or more computer programs are stored in the memory 702, and the one or more computer programs include computer instructions; the one or more processors 703 invoke the computer instructions stored in the memory 702 so that the electronic device 700 executes the content display method provided by the embodiments of the present application.
  • the memory 702 may include high-speed random access memory, and may also include non-volatile memory, such as one or more disk storage devices, flash memory devices or other non-volatile solid-state storage devices.
  • the memory 702 can store an operating system (hereinafter referred to as the system), such as an embedded operating system such as ANDROID, IOS, WINDOWS, or LINUX.
  • the memory 702 can be used to store the implementation program of the embodiment of the present application.
  • the memory 702 may also store a network communication program that may be used to communicate with one or more additional devices, one or more user devices, and one or more network devices.
  • One or more processors 703 may be a general-purpose central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits for controlling the execution of the programs of the present solution.
  • FIG. 7 is only an implementation manner of the electronic device 700 provided by the embodiment of the present application. In actual applications, the electronic device 700 may also include more or fewer components, which is not limited here.
  • embodiments of the present application also provide a computer-readable storage medium.
  • the computer-readable storage medium stores a computer program; when the computer program is run on a computer, it causes the computer to execute the method provided in the above embodiments.
  • embodiments of the present application also provide a computer program product.
  • the computer program product includes a computer program or instructions; when the computer program or instructions are run on a computer, the computer is caused to execute the method provided in the above embodiments.
  • the methods provided by the embodiments of this application can be implemented in whole or in part by software, hardware, firmware, or any combination thereof.
  • when implemented using software, it may be implemented in whole or in part in the form of a computer program product.
  • a computer program product includes one or more computer instructions.
  • when the computer instructions are loaded and executed on a computer, the processes or functions according to the embodiments of the present application are generated in whole or in part.
  • the computer may be a general purpose computer, a special purpose computer, a computer network, a network device, a user equipment, or other programmable device.
  • Computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, computer instructions may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center by wired means (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless means.
  • A computer-readable storage medium can be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that contains one or more available media. Available media may be magnetic media (e.g., floppy disks, hard disks, tapes), optical media (e.g., digital video disc (DVD)), or semiconductor media (e.g., SSD), etc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Computer Hardware Design (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

This application provides a content display method and an electronic device, applied to a display unit in the operating system of the electronic device. In the method, the display unit can, in response to a handwriting operation acting on an ink screen, acquire multiple consecutive image frames belonging to a first application, and predict a first image frame according to the multiple image frames, where the first image frame is the predicted image frame of the frame following the last of the multiple image frames. After predicting the first image frame, the display unit can update the second image frame displayed on the ink screen to the first image frame. In this process, the display unit can predict and display subsequent image frames based on the image frames belonging to the first application: on the one hand, there is no need to wait for the actual image frame to be generated, which reduces display latency; on the other hand, there is no need to adapt the application, so implementation difficulty is low. The method can therefore improve display smoothness while reducing implementation difficulty and improving display efficiency, and has high versatility and practicality.

Description

A content display method and electronic device
Cross-reference to related applications
This application claims priority to the Chinese patent application with application number 202211049177.1, entitled "A content display method and electronic device", filed with the China National Intellectual Property Administration on August 30, 2022, the entire contents of which are incorporated herein by reference.
Technical field
This application relates to the technical field of electronic devices, and in particular to a content display method and an electronic device.
Background
Ink screens are increasingly widely used thanks to their many advantages, such as ultra-low power consumption, a paper-like feel, eye protection without blue light, and thinness. However, the refresh rate of current ink screens is generally only 5–7 frames per second (FPS), far lower than that of liquid crystal displays (LCDs) (60 FPS, 120 FPS, etc.). The low refresh rate leads to very large handwriting latency on an ink screen, which easily reduces the smoothness with which applications on the electronic device display handwritten content in response to the user's handwriting operations, degrading the handwriting experience.
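The refresh rates quoted in the background translate directly into frame periods via \(T = 1/f\), which makes the latency gap concrete:

```latex
T_{\text{ink}} = \frac{1}{5\,\text{FPS}} = 200\,\text{ms}, \qquad
T_{\text{ink}} = \frac{1}{7\,\text{FPS}} \approx 143\,\text{ms}, \qquad
T_{\text{LCD}} = \frac{1}{60\,\text{FPS}} \approx 16.7\,\text{ms}
```

A single ink-screen frame period is thus roughly an order of magnitude longer than an LCD's, which is why the lag between a stroke and its display is so noticeable when writing on an ink screen.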
At present, to reduce handwriting latency on an ink screen, the applications in an electronic device can be adapted individually and provided with a customized partial drawing interface. While an application draws handwriting trajectories through the partial drawing interface, the changes occurring in the local area of the display interface (i.e., the area where the handwriting trajectory is located) are determined by accurate calculation or prediction, and the content displayed in that local area is refreshed accordingly, thereby speeding up interface refresh and reducing the latency of displaying handwriting trajectories on the ink screen. However, this method requires adapting each application separately, which is difficult to implement and inefficient, so its versatility and practicality are low.
Summary
This application provides a content display method and an electronic device, so as to simply and efficiently reduce the writing latency of an ink screen when displaying handwritten content on it, while improving the versatility and practicality of the solution.
In a first aspect, this application provides a content display method applied to a display unit in the operating system of an electronic device. The method includes: in response to a handwriting operation acting on an ink screen, acquiring multiple image frames, where the multiple image frames belong to a first application; predicting a first image frame according to the multiple image frames, where the first image frame is the predicted image frame of the frame following the last of the multiple image frames; and updating a second image frame displayed on the ink screen to the first image frame.
In this method, when the display unit in the electronic device receives a handwriting operation acting on the ink screen and needs to display handwritten content, it can predict and display subsequent image frames based on the image frames already displayed on the ink screen, without waiting for the subsequent image frames to be generated. The frames to be displayed can therefore be obtained and displayed in advance, which appropriately speeds up frame updates, improves display smoothness, and reduces the latency of displaying handwritten content. When the displayed handwritten content belongs to an application, the system service, i.e., the display unit, can directly obtain the application's image frames and use them to predict and display subsequent frames, without changing the application's processing logic or methods; no application adaptation is needed, so implementation difficulty is low and efficiency is high. In summary, when displaying handwritten content in an application, this method improves display smoothness while reducing implementation difficulty and improving display efficiency, so it has high versatility and practicality.
In one possible design, updating the second image frame displayed on the ink screen to the first image frame includes: determining first target image data according to the first image frame and the second image frame, where the first target image data indicates the image content of the first image frame that has changed relative to the second image frame; and updating the second image frame displayed on the ink screen to the first image frame according to the first target image data.
In this method, the display unit updates the second image frame to the first image frame based on the changed image content, i.e., in a partial update manner, which speeds up frame updates, improves update efficiency, and reduces display latency.
In one possible design, updating the second image frame displayed on the ink screen to the first image frame according to the first target image data includes: sending the first target image data to the ink screen through a serial peripheral interface, and driving the ink screen to replace second target content in the second image frame with first target content, where the first target content is the image content indicated by the first target image data, and the second target content is the content of the second image frame that differs from the first image frame.
In this method, the serial peripheral interface can carry only a small amount of data, but transmits it very quickly. When the display unit uses partial updates and transmits only the changed image content to the display screen, the amount of data to be transmitted is small, so the serial peripheral interface can be used for the transfer wherever possible. This guarantees a fast transfer speed, helps raise the refresh rate when the ink screen updates its displayed content, supports a smooth handwriting service for the user, and improves the user experience.
In one possible design, before sending the first target image data to the ink screen through the serial peripheral interface, the method further includes: determining that the data amount of the first target image data is less than or equal to a set data amount threshold.
In this method, when the data amount of the changed image content is less than or equal to the set data amount threshold, the serial peripheral interface can carry that content, so the content can be transmitted through it to the ink screen for updated display. This method therefore ensures that the frame update procedure executes smoothly.
In one possible design, updating the second image frame displayed on the ink screen to the first image frame according to the first target image data includes: when it is determined that the data amount of the first target image data is greater than the set data amount threshold, sending the first image frame to the ink screen through a mobile industry processor interface, and driving the ink screen to replace the second image frame with the first image frame.
In this method, the advantage of the mobile industry processor interface is that it supports transferring larger amounts of data. Transferring data through the MIPI interface therefore meets the data transfer needs of the global update approach, in which the whole second image frame is replaced by the first image frame, ensuring that the frame update procedure executes smoothly.
In one possible design, the first application is any application in a set whitelist, where the whitelist contains at least one application; and/or the first application is an application of a set type.
In one possible design, the second image frame is the predicted image frame of the last of the multiple image frames.
In this method, both the first image frame and the second image frame are predicted frames, so the frames displayed by the display unit are predicted rather than actually generated. The display unit therefore does not need to wait for the actual frame to be generated and can display a predicted frame as soon as it is determined, which appropriately reduces the latency of updating displayed frames on the ink screen and improves the user experience.
In one possible design, the handwriting operation is an operation acting within a first display area on the ink screen, where the first display area is the display area in which the second image frame is located.
In this method, the display unit can update the content displayed in one display area of the ink screen according to a handwriting operation acting within that area, without affecting the content displayed in other display areas of the ink screen, improving the user experience.
In one possible design, predicting the first image frame according to the multiple image frames includes: determining the first image frame according to the multiple image frames and an image prediction model, where the image prediction model represents the relationship between multiple consecutive image frames and the frame following the last of those frames.
In this method, frame prediction using an image prediction model is highly accurate, which improves the content accuracy of frame updates and thus the user experience.
In a second aspect, this application provides an electronic device including an ink screen, a memory, and one or more processors, where the memory stores computer program code including computer instructions; when the computer instructions are executed by the one or more processors, the electronic device performs the method described in the first aspect or any possible design of the first aspect.
In a third aspect, this application provides a computer-readable storage medium storing a computer program; when the computer program runs on a computer, it causes the computer to perform the method described in the first aspect or any possible design of the first aspect.
In a fourth aspect, this application provides a computer program product including a computer program or instructions; when the computer program or instructions run on a computer, they cause the computer to perform the method described in the first aspect or any possible design of the first aspect.
For the beneficial effects of the second to fourth aspects, refer to the description of the beneficial effects of the first aspect, which is not repeated here.
Brief description of the drawings
FIG. 1 is a schematic diagram of the hardware architecture of an electronic device provided by an embodiment of this application;
FIG. 2 is a schematic diagram of the software architecture of an electronic device provided by an embodiment of this application;
FIG. 3 is a schematic architecture diagram of a content display system provided by an embodiment of this application;
FIG. 4 is a schematic diagram of a content display method provided by an embodiment of this application;
FIG. 5 is a schematic flowchart of a content display method provided by an embodiment of this application;
FIG. 6 is a schematic diagram of a content display method provided by an embodiment of this application;
FIG. 7 is a schematic structural diagram of an electronic device provided by an embodiment of this application.
Detailed description
To make the objectives, technical solutions, and advantages of the embodiments of this application clearer, the embodiments of this application are described in further detail below with reference to the accompanying drawings.
In the description of the embodiments of this application, the terms "first" and "second" are used only for descriptive purposes and shall not be understood as indicating or implying relative importance or implicitly indicating the number of technical features referred to. A feature qualified by "first" or "second" may therefore explicitly or implicitly include one or more such features.
For ease of understanding, explanations of concepts related to this application are given below for reference.
An electronic device is a device having an ink screen. In some embodiments of this application, the electronic device may be a portable device with an ink screen, such as a mobile phone with an ink screen, a tablet computer, a wearable device with wireless communication capability (e.g., a watch, wristband, helmet, or headset), a vehicle-mounted terminal device, an augmented reality (AR)/virtual reality (VR) device, a laptop computer, an ultra-mobile personal computer (UMPC), a netbook, a personal digital assistant (PDA), a smart home device (e.g., a smart TV or smart speaker), an intelligent robot, workshop equipment, a wireless terminal in self driving, a wireless terminal in remote medical surgery, a wireless terminal in a smart grid, a wireless terminal in transportation safety, a wireless terminal in a smart city, a wireless terminal in a smart home, or a flying device (e.g., an intelligent robot, hot-air balloon, drone, or airplane).
A wearable device is a portable device that the user can wear directly on the body or integrate into clothes or accessories.
In some embodiments of this application, the electronic device may also be a portable terminal device that additionally includes other functions, such as personal digital assistant and/or music player functions. Exemplary embodiments of portable terminal devices include, but are not limited to, portable terminal devices running  or other operating systems. The portable terminal device may also be another portable terminal device, such as a laptop with a touch-sensitive surface (e.g., a touch panel). It should also be understood that, in some other embodiments of this application, the electronic device may not be a portable terminal device but a desktop computer with a touch-sensitive surface (e.g., a touch panel).
It should be understood that, in the embodiments of this application, "at least one" means one or more, and "multiple" means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may represent: A alone, both A and B, or B alone, where A and B may be singular or plural. The character "/" generally indicates an "or" relationship between the associated objects before and after it. "At least one of the following" or similar expressions refer to any combination of these items, including any combination of single items or plural items. For example, at least one of a, b, or c may represent: a; b; c; a and b; a and c; b and c; or a, b, and c, where each of a, b, and c may be single or multiple.
At present, to reduce the latency of displaying handwritten content on an ink screen, customized optimization is generally performed for the applications in the electronic device that display handwritten content.
In one solution, the system of the electronic device can provide customized partial drawing interfaces for the system applications, in-house applications, and partner applications in the electronic device. These applications can adapt to the partial drawing interface to complete the drawing of handwriting trajectories, and the system service of the electronic device can accurately calculate, through the partial drawing interface, the locally changed area of the handwriting trajectory drawn by the application, and refresh the handwriting trajectory in that area through the SPI interface. Third-party applications in the electronic device, however, directly call native system interfaces to draw handwriting trajectories. Since the system service cannot determine which native system interfaces a third-party application actually calls when drawing, it can only obtain the whole frame drawn by the third-party application and perform a full-screen refresh of the display interface based on that frame, so the rate at which the electronic device displays handwriting trajectories on the ink screen is slow and the latency is large.
In the above solution, partial refresh applies only to the system applications, in-house applications, and partner applications in the electronic device, and each application must be adapted individually, so processing efficiency is low and the versatility and practicality of the solution are poor. For third-party applications, since the electronic device can neither accurately identify whether it is in a handwriting state nor accurately determine the changes of the handwriting trajectory, it cannot display the handwriting trajectory received by a third-party application using partial refresh; it can only perform a full-screen refresh based on the whole frame drawn by the third-party application, resulting in large writing latency and low display smoothness on the ink screen.
In another solution, when drawing handwriting trajectories, an application in the electronic device can use a motion compensation algorithm to predict the next input event from the ink screen's input events over a period of time (including the coordinates and pressure of the touch points of operations acting on the ink screen), thereby predicting the coordinates of the next touch point of the operation acting on the ink screen, and update the handwriting trajectory according to the predicted coordinates. In this solution, the system applications, in-house applications, and partner applications in the electronic device still need to be adapted one by one, so processing efficiency is low and the versatility and practicality of the solution are poor. For third-party applications, the electronic device still cannot use partial refresh to display the handwriting trajectories they receive, so the writing latency of the ink screen remains large and display smoothness remains low.
In view of the above problems, the embodiments of this application provide a content display method and an electronic device, which can display handwritten content on an ink screen while simply and efficiently reducing the ink screen's writing latency, improving the versatility and practicality of the solution and the user's experience of using the ink screen.
Referring to FIG. 1, the structure of an electronic device to which the method provided by the embodiments of this application applies is introduced below.
As shown in FIG. 1, the electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a USB interface 130, a charging management module 140, a power management module 141, a battery 142, antenna 1, antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a headset jack 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a SIM card interface 195, and so on.
The sensor module 180 may include a gyroscope sensor, an acceleration sensor, a proximity light sensor, a fingerprint sensor, a touch sensor, a temperature sensor, a pressure sensor, a distance sensor, a magnetic sensor, an ambient light sensor, a barometric pressure sensor, a bone conduction sensor, and the like.
It can be understood that the electronic device 100 shown in FIG. 1 is merely an example and does not limit the electronic device; the electronic device may have more or fewer components than shown, may combine two or more components, or may have a different component configuration. The components shown in FIG. 1 may be implemented in hardware, software, or a combination of hardware and software including one or more signal processing and/or application-specific integrated circuits.
The processor 110 may include one or more processing units; for example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU). The different processing units may be separate devices or may be integrated in one or more processors. The controller may be the nerve center and command center of the electronic device 100; it can generate operation control signals according to instruction opcodes and timing signals to control instruction fetching and execution.
The processor 110 may also be provided with a memory for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache, which can hold instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs the instruction or data again, it can be called directly from the memory, avoiding repeated access and reducing the waiting time of the processor 110, thereby improving system efficiency.
The execution of the content display method provided by the embodiments of this application can be controlled by the processor 110 or completed by calling other components, for example by calling the processing program of the embodiments of this application stored in the internal memory 121, or calling, through the external memory interface 120, a processing program of the embodiments of this application stored in a third-party device, to control the wireless communication module 160 to communicate data with other devices, improving the intelligence and convenience of the electronic device 100 and enhancing the user experience. The processor 110 may include different devices; for example, when a CPU and a GPU are integrated, the CPU and the GPU may cooperate to execute the content display method provided by the embodiments of this application, for example with some of the algorithms executed by the CPU and others by the GPU, to obtain faster processing efficiency.
The display screen 194 is used to display images, videos, and the like. In the embodiments of this application, the display screen 194 is an ink screen. The display screen 194 includes a display panel. In some embodiments, the electronic device 100 may include 1 or N display screens 194, where N is a positive integer greater than 1. The display screen 194 may be used to display information entered by the user or provided to the user, as well as various graphical user interfaces (GUIs). For example, the display screen 194 may display photos, videos, web pages, or files.
In the embodiments of this application, the display screen 194 may be an integrated flexible display, or a spliced display composed of two rigid screens and a flexible screen located between them.
The camera 193 (a front camera or a rear camera, or a camera that can serve as both) is used to capture still images or video. Generally, the camera 193 may include photosensitive elements such as a lens group and an image sensor, where the lens group includes multiple lenses (convex or concave) for collecting the light signals reflected by the object to be photographed and passing the collected light signals to the image sensor, and the image sensor generates an original image of the object from the light signals.
The internal memory 121 may be used to store computer-executable program code, which includes instructions. By running the instructions stored in the internal memory 121, the processor 110 executes the various functional applications and data processing of the electronic device 100. The internal memory 121 may include a program storage area and a data storage area, where the program storage area can store the operating system and the code of applications (such as the functions corresponding to the content display method provided by this application), and the data storage area can store the data created during use of the electronic device 100.
The internal memory 121 may also store one or more computer programs corresponding to the algorithms of the content display method provided by the embodiments of this application. The one or more computer programs are stored in the internal memory 121 and configured to be executed by the one or more processors 110; the one or more computer programs include instructions that can be used to perform the steps of the following embodiments.
In addition, the internal memory 121 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or universal flash storage (UFS).
Of course, the code of the algorithms of the content display method provided by the embodiments of this application may also be stored in an external memory. In that case, the processor 110 may run the code of the algorithms stored in the external memory through the external memory interface 120.
The sensor module 180 may include a gyroscope sensor, an acceleration sensor, a proximity light sensor, a fingerprint sensor, a touch sensor, and the like.
A touch sensor is also called a "touch panel". The touch sensor may be arranged on the display screen 194; the touch sensor and the display screen 194 form a touchscreen, also called a "touch screen". The touch sensor is used to detect touch operations acting on or near it, and can pass the detected touch operation to the application processor to determine the touch event type. Visual output related to the touch operation can be provided through the display screen 194. In other embodiments, the touch sensor may also be arranged on the surface of the electronic device 100 at a position different from that of the display screen 194.
For example, the display screen 194 of the electronic device 100 displays a home screen that includes the icons of applications (such as a camera application). The user may tap the icon of the camera application on the home screen through the touch sensor, triggering the processor 110 to start the camera application and turn on the camera 193; the display screen 194 then displays the interface of the camera application, such as a viewfinder interface.
The wireless communication function of the electronic device 100 may be implemented through antenna 1, antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor, the baseband processor, and so on.
Antenna 1 and antenna 2 are used to transmit and receive electromagnetic wave signals. Each antenna in the electronic device 100 may be used to cover one or more communication frequency bands. Different antennas may also be multiplexed to improve antenna utilization; for example, antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antennas may be used in combination with tuning switches.
The mobile communication module 150 can provide wireless communication solutions applied on the electronic device 100, including 2G/3G/4G/5G. The mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (LNA), and the like. The mobile communication module 150 may receive electromagnetic waves through antenna 1, filter and amplify the received electromagnetic waves, and transmit them to the modem processor for demodulation. The mobile communication module 150 may also amplify the signals modulated by the modem processor and convert them into electromagnetic waves radiated through antenna 1. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be arranged in the processor 110, or in the same device as at least some of the modules of the processor 110. In the embodiments of this application, the mobile communication module 150 may also be used for information exchange with other devices.
The modem processor may include a modulator and a demodulator. The modulator modulates a low-frequency baseband signal to be sent into a medium/high-frequency signal; the demodulator demodulates a received electromagnetic wave signal into a low-frequency baseband signal and then transmits it to the baseband processor for processing. After processing by the baseband processor, the low-frequency baseband signal is passed to the application processor, which outputs sound signals through audio devices (not limited to the speaker 170A and the receiver 170B) or displays images or video through the display screen 194. In some embodiments, the modem processor may be a separate device; in other embodiments, it may be independent of the processor 110 and arranged in the same device as the mobile communication module 150 or other functional modules.
The wireless communication module 160 can provide wireless communication solutions applied on the electronic device 100, including wireless local area networks (WLAN) (such as wireless fidelity (Wi-Fi) networks), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), and infrared (IR). The wireless communication module 160 may be one or more devices integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via antenna 2, frequency-modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110. The wireless communication module 160 may also receive signals to be sent from the processor 110, frequency-modulate and amplify them, and convert them into electromagnetic waves radiated through antenna 2. In the embodiments of this application, the wireless communication module 160 is used to establish connections with other electronic devices for data exchange, or to access an access point device, send control instructions to other electronic devices, or receive data sent by other electronic devices.
In addition, the electronic device 100 can implement audio functions, such as music playback and recording, through the audio module 170, speaker 170A, receiver 170B, microphone 170C, headset jack 170D, and the application processor. The electronic device 100 can receive input from the button 190 and generate key signal inputs related to the user settings and function control of the electronic device 100. The electronic device 100 can use the motor 191 to generate vibration alerts (such as incoming call vibration alerts). The indicator 192 in the electronic device 100 may be an indicator light, used to indicate charging status and battery changes, or to indicate messages, missed calls, notifications, and so on. The SIM card interface 195 in the electronic device 100 is used to connect a SIM card; a SIM card can be inserted into or removed from the SIM card interface 195 to contact or separate from the electronic device 100.
It should be understood that, in practical applications, the electronic device 100 may include more or fewer components than shown in FIG. 1, which is not limited by the embodiments of this application. The illustrated electronic device 100 is merely an example; the electronic device 100 may have more or fewer components than shown, may combine two or more components, or may have a different component configuration. The various components shown may be implemented in hardware, software, or a combination of hardware and software including one or more signal processing and/or application-specific integrated circuits.
The software system of the electronic device 100 may adopt a layered architecture, an event-driven architecture, a microkernel architecture, a microservices architecture, or a cloud architecture. The embodiments of this application take the Android system with a layered architecture as an example to illustrate the software structure of the electronic device.
The layered architecture divides the software into several layers, each with clear roles and responsibilities; the layers communicate through software interfaces. As shown in FIG. 2, the software architecture can be divided into four layers, from top to bottom: the application layer, the application framework layer (FWK), the runtime and system libraries, and the Linux kernel layer.
The application layer is the top layer of the operating system and includes the operating system's native applications, such as camera, gallery, calendar, Bluetooth, music, video, and messaging. An application (app) referred to in the embodiments of this application is a software program capable of implementing one or more specific functions. Usually, multiple applications can be installed in an electronic device, for example a camera application, an email application, a fitness and health application, and a digital wellbeing application. The applications mentioned below may be system applications installed on the electronic device at the factory, or third-party applications that the user downloads from the network or obtains from other electronic devices while using the electronic device.
Of course, developers can write applications and install them in this layer. In one possible implementation, applications can be developed in the Java language by calling the application programming interfaces (APIs) provided by the application framework layer; developers can interact with the lower layers of the operating system (e.g., the kernel layer) through the application framework to develop their own applications.
The application framework layer provides the APIs and programming framework for the application layer and may include some predefined functions. The application framework layer may include a window manager, content providers, a view system, a telephony manager, a resource manager, a notification manager, and so on.
The window manager is used to manage window programs; it can obtain the display size, determine whether there is a status bar, lock the screen, capture the screen, and so on.
Content providers are used to store and retrieve data and make it accessible to applications. The data may include files (e.g., documents, videos, images, audio), text, and other information.
The view system includes visual controls, such as controls for displaying text, pictures, and documents, and can be used to build applications. The interface in a display window can be composed of one or more views; for example, a display interface including an SMS notification icon may include a view displaying text and a view displaying a picture.
The telephony manager provides the communication functions of the electronic device. The notification manager enables applications to display notification information in the status bar; it can be used to convey informational messages, which can disappear automatically after a short stay without user interaction.
The runtime includes core libraries and a virtual machine and is responsible for the scheduling and management of the operating system.
The core libraries of the system consist of two parts: the function libraries that the Java language needs to call, and the core libraries of the system. The application layer and the application framework layer run in the virtual machine. Taking Java as an example, the virtual machine executes the Java files of the application layer and the application framework layer as binary files. The virtual machine performs object lifecycle management, stack management, thread management, security and exception management, and garbage collection.
The system libraries may include multiple functional modules, for example: a surface manager, media libraries, a 3D graphics processing library (e.g., OpenGL ES), and a 2D graphics engine (e.g., SGL). The surface manager manages the display subsystem and provides the fusion of 2D and 3D layers for multiple applications. The media libraries support playback and recording of many common audio and video formats, as well as still image files, and can support multiple audio/video encoding formats, such as MPEG4, H.264, MP3, AAC, AMR, JPG, and PNG. The 3D graphics processing library implements 3D graphics drawing, image rendering, composition, and layer processing. The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer provides the core system services of the operating system; security, memory management, process management, the network protocol stack, and the driver model are all implemented based on the kernel layer. The kernel layer also serves as the abstraction layer between the hardware and the software stack. This layer contains many drivers related to the electronic device, the main ones being: the display driver; the keyboard driver as an input device; the Flash driver based on memory technology devices; the camera driver; the audio driver; the Bluetooth driver; the WiFi driver; and so on.
It should be understood that the functional services described above are only an example. In practical applications, the electronic device may be divided into more or fewer functional services according to other factors, the functions of each service may be divided in other ways, or functional services may not be divided at all, with the device working as a whole.
The solution provided by this application is described in detail below with reference to specific embodiments.
FIG. 3 is a schematic architecture diagram of a content display system provided by an embodiment of this application. Optionally, as shown in FIG. 3, the content display system may include applications, a graphics drawing system, a scene recognition system, and a display system. Optionally, as shown in FIG. 3, the content display system may be deployed in an electronic device that has a content display service, with the graphics drawing system, the scene recognition system, and the display system as three subsystems of that content display service.
The graphics drawing system can serve as the interface in the electronic device that provides system drawing services. An application can draw the graphics and images (including the user's handwriting trajectory) that its business needs to display by calling the graphics drawing system. In the embodiments of this application, the application can send the drawn content to the display system, which displays the content. Optionally, the application may be a system application or a third-party application.
The scene recognition system can detect the applications (services) currently running in the electronic device, the application running in the foreground, the type of the foreground application, the stylus state, the ink screen touch state, and so on, and send the detected information to the display system, so that the display system can determine from this information whether to use the method provided by the embodiments of this application to control the content display process of the foreground application.
The display system is used to compose the content drawn by applications through the graphics drawing system and output it to the ink screen for display. Specifically, the display system may include a display control module, a frame buffer learning module, and a frame buffer update module. The display control module performs display-related control, such as the content composition mode and the display path. The frame buffer learning module predicts the next image frame to be displayed from the consecutive displayed frames (data) and calculates the dirty region between the predicted frame and the previous frame. The frame buffer update module refreshes and displays the predicted frame according to the dirty region calculated by the frame buffer learning module.
In some embodiments of this application, the content display system may also include a display screen driver, which drives the display screen to display content under the control of the frame buffer update module.
In one example, when the electronic device uses the Android system, the graphics drawing system may be implemented as the render thread in the Android system, and the display system may be implemented as the Android system's surface flinger service.
It should be noted that the system architecture shown in FIG. 3 is only an exemplary illustration of a system architecture applicable to the solution of this application and does not limit it. The system architecture applicable to the solution of this application may include fewer or more modules than shown in FIG. 3, which is not specifically limited in the embodiments of this application.
The solution provided by this application is described in detail below with reference to specific embodiments.
The content display method provided by the embodiments of this application can be applied to an electronic device with an ink screen. Referring to FIG. 4, the content display method provided by the embodiments of this application may include:
S401: The electronic device acquires multiple consecutive image frames, where the multiple image frames are image frames whose generation is controlled by a first application in the electronic device.
Optionally, the first application may be an application of a set type, such as a handwriting office application; or, the first application may be any application in a set whitelist, where the whitelist can record at least one application to which the solution provided by this application applies.
Optionally, the first application may be a third-party application installed in the electronic device.
Optionally, the number of the multiple image frames may be a set number.
In some embodiments of this application, before the electronic device acquires the multiple image frames, the first application in the electronic device may, according to a received handwriting operation, call a system service in the electronic device to sequentially generate consecutive image frames to be displayed and display the generated frames. During this process, the electronic device may acquire the corresponding frames one by one, thereby obtaining the multiple image frames. The system service may include the service in the electronic device used for drawing image frames and the service used for displaying image frames; for example, the system service may be the content display service shown in FIG. 3 above, or may at least include the graphics drawing system and display system described in FIG. 3 above.
As an optional implementation, after the system service generates image frames under the control of the first application, it can sequentially display the generated image frames (or the interfaces corresponding to the image frames) on the ink screen of the electronic device.
In some embodiments of this application, the electronic device may include a display unit and the first application, and the above step S401 may be a step performed by the display unit in the electronic device. Optionally, the display unit may serve as the above system service; of course, the display unit may also not serve as the system service. If the display unit does not serve as the system service, then after the first application controls the system service to sequentially generate image frames, the display unit can obtain the image frames from that system service, thereby obtaining the multiple image frames.
Each time the display unit obtains an image frame, if it can obtain a set number of image frames including that frame (with that frame as the last of the set number of frames), it can take the set number of frames as the multiple image frames, start executing the above step S401, and continue with the subsequent steps; otherwise, the display unit may not execute step S401 and the subsequent steps, and only the first application executes the method of controlling the generation and display of image frames. The way the first application displays image frames is to call the system service of the electronic device to draw the image frames and display them.
Exemplarily, the first application may be the application shown in FIG. 3 above, and the display unit may be the content display service shown in FIG. 3 above.
In some embodiments of this application, the display unit may be a system-level service of the electronic device. The first application may generate the multiple image frames and display them correspondingly by calling system interfaces of the electronic device. For example, when the first application is the application shown in FIG. 3 and the display unit is the content display service shown in FIG. 3, the first application can draw the interface to be displayed by calling the graphics drawing system in the display unit, obtain the corresponding multiple image frames, and display the corresponding interface according to the multiple image frames.
The first application may, after receiving a handwriting operation performed by the user on the first display screen, respond to the received handwriting operation by controlling the generation of multiple consecutive image frames and sequentially displaying the corresponding frames. Optionally, the handwriting operation may be an operation, such as a touch slide operation, whose trajectory of operation points on the ink screen is continuous (changing), for example a writing or drawing operation. The handwriting operation may be performed by the user directly on the ink screen (for example, writing on the ink screen with a finger), or performed with a stylus (for example, holding a stylus to write, draw, or annotate on the ink screen).
In some embodiments of this application, the first application may sequentially display the generated image frames in a full-screen window, a split-screen window, or a floating window of the ink screen. When the first application sequentially displays the image frames in a full-screen window of the ink screen, the handwriting operation may be an operation acting on any area of the ink screen; when the first application sequentially displays the image frames in a split-screen window, the handwriting operation may be an operation acting in the area of the ink screen corresponding to that split-screen window; when the first application sequentially displays the image frames in a floating window, the handwriting operation is an operation acting in the area of the ink screen corresponding to that floating window.
It should be noted that, in the embodiments of this application, the first application generating an image frame means that the first application calls the service for generating image frames in the system service of the electronic device to generate an image frame to be displayed; the first application displaying an image frame on the ink screen means that the first application calls the service for displaying image frames in the system service of the electronic device to display the image frame on the ink screen.
In some embodiments of this application, the first application is an application running in the foreground of the electronic device. Before acquiring the multiple image frames, while the first application sequentially displays the multiple image frames, the electronic device (or the display unit in the electronic device) can identify the foreground running state of the first application, identify the category of the first application, and identify the state of the user operations received by the first application. If it determines that the first application is currently running in the foreground, the type of the first application is a set type (such as handwriting office), and the user has a control operation on the first application (or the content displayed by the first application), the electronic device may start executing the above step S401; otherwise, the electronic device may not start executing the above step S401. When a touch operation on the ink screen can be detected within the area of the ink screen corresponding to the first application (for example, the user's finger presses on the ink screen, or the stylus state is down), the electronic device can determine that the user has a control operation on the first application.
In some embodiments of this application, when the electronic device includes the above display unit and the first application, step S401 may be a step performed by the display unit. Further, when the display unit is the content display service shown in FIG. 3 above, step S401 may be executed by the display control module or the frame buffer learning module in the display system.
S402: The electronic device predicts a first image frame according to the multiple image frames, where the first image frame serves as the image frame displayed on the ink screen after the electronic device displays a second image frame, and the second image frame is the image frame currently displayed on the ink screen.
The first image frame serves as the predicted image frame of a third image frame and is displayed on the ink screen in place of the third image frame, where the third image frame is the frame generated by the first application that follows the last of the multiple image frames. On this basis, the electronic device can, before the first application generates the third image frame, predict the predicted frame of the third image frame, i.e., the first image frame, and display the first image frame, which appropriately speeds up frame display and thus reduces latency. After the first application generates the third image frame, the electronic device can acquire multiple consecutive frames generated by the first application that include the third image frame (with the third image frame as the last of those frames), predict a fourth image frame from them, and then update the first image frame with the fourth image frame, where the fourth image frame serves as the predicted frame of the frame following the third image frame generated by the first application.
Optionally, the second image frame may be the last of the multiple image frames; or, the second image frame may be a frame predicted by the electronic device, in which case the second image frame serves as the predicted frame of the last of the multiple image frames and is displayed on the ink screen in place of that last frame.
Based on the above method, the electronic device can acquire multiple consecutive frames actually generated by the application and predict, from them, the next frame displayed on the ink screen.
Exemplarily, when the number of the multiple image frames is a set number of 5, frames 1–5 displayed by the electronic device on the ink screen are frames 1–5 generated by the first application after receiving the user operation. After the first application generates frame 5, the multiple frames acquired by the electronic device may be frames 1–5, so the first image frame serves as frame 6 displayed by the electronic device on the ink screen, while frame 6 generated by the first application (i.e., the third image frame) is not displayed on the ink screen. When the multiple frames acquired by the electronic device are frames 2–6 generated by the first application, the first image frame serves as frame 7 displayed on the ink screen, and correspondingly, frame 7 generated by the first application (i.e., the third image frame) is not displayed on the ink screen either. By analogy, each time the electronic device acquires a frame generated by the first application, it can predict the next frame from that frame and the frames generated by the first application before it, and display the predicted next frame.
In some embodiments of this application, the electronic device may use a trained image prediction model to predict the first image frame from the acquired multiple image frames, where the image prediction model represents the relationship between multiple consecutive frames and the frame following the last of those frames. After acquiring the multiple frames, the electronic device can input them into the trained image prediction model, obtain the frame output by the model, and take that frame as the frame following the last of the multiple frames.
In some embodiments of this application, before predicting the first image frame from the multiple image frames, the electronic device may first determine whether the ink screen is in a handwriting state. If so, the electronic device may predict the first image frame from the multiple frames; otherwise, the electronic device may not execute the step of predicting the first image frame from the multiple frames or the subsequent steps, and only the first application performs the task of displaying image frames.
Exemplarily, when the electronic device includes the above display unit and the first application, and the display unit determines that the ink screen is not in a handwriting state, it can transmit the frame to be displayed to the ink screen's driver by calling the mobile industry processor interface (MIPI), so that the driver drives the ink screen to display the corresponding interface according to that frame.
The electronic device can detect whether the ink screen is in a handwriting state through a system service. For example, the input subsystem in the system service of the electronic device can detect input operations acting on the ink screen and report corresponding information to the system. The display unit in the electronic device can query the information reported by the input subsystem and determine from it whether the ink screen is in a handwriting state. The information reported by the input subsystem may include the type of input device (e.g., finger, stylus, trackball, mouse) and the type of input event (e.g., down, up, move). Of course, the electronic device may also determine whether the ink screen is in a handwriting state in other ways, which is not specifically limited in the embodiments of this application.
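The handwriting-state check described above can be sketched from the reported event stream. The tool and event names below are assumptions modeled on the types the text lists (finger, stylus; down, up, move), and the function name is hypothetical.

```python
WRITING_TOOLS = {"stylus", "finger"}   # input device types that can write (assumed names)

def in_handwriting_state(events):
    """Decide from recently reported input events whether the ink screen
    is in a handwriting state: a writing tool is currently pressed down,
    i.e. its last 'down' has not yet been followed by an 'up'."""
    pressed = False
    for tool, action in events:
        if tool not in WRITING_TOOLS:
            continue                   # e.g. mouse or trackball events are ignored
        if action == "down":
            pressed = True
        elif action == "up":
            pressed = False
        # "move" events leave the pressed state unchanged
    return pressed
```

The display unit would run a check like this before deciding whether to take the prediction path (S507) or the plain MIPI display path (S511).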
In some embodiments of this application, when the electronic device includes the above display unit and the first application, step S402 may be a step performed by the display unit. Further, when the display unit is the content display service shown in FIG. 3 above, step S402 may be executed by the frame buffer learning module in the display system.
S403:电子设备将所述第二图像帧对应的第一界面更新为所述第一图像帧对应的第二界面。
在本申请一些实施例中,作为一种可选的实施方式,电子设备可以采用全局更新方式,将所述第二图像帧对应的第一界面更新为所述第一图像帧对应的第二界面。具体的,电子设备在预测得到第一图像帧之后,可以直接根据第一图像帧,将墨水屏上所显示的第二图像帧对应的第一界面整体替换为所述第一图像帧对应的第二界面。
示例性的,当电子设备中包括上述的显示单元和第一应用时,所述显示单元通过执行上述步骤S401~S402,得到第一图像帧之后,显示单元可以通过调用MIPI接口将第一图像帧传输至墨水屏的驱动器,以使墨水屏的驱动器根据第一图像帧,驱动墨水屏显示第一图像帧对应的第二界面。
其中,上述全局更新方式中,显示单元需要将整个图像帧传输至显示屏进行显示,因此需要传输的数据量较大。而MIPI接口的优势在于能够支持传输较大的数据量,因此通过MIPI接口传输数据能够满足全局更新方式的数据传输需求,从而保证墨水屏上显示界面更新流程的正常执行。
作为另一种可选的实施方式,电子设备可以采用局部更新方式,将所述第二图像帧对应的第一界面更新为所述第一图像帧对应的第二界面。具体的,电子设备在预测得到第一图像帧之后,可以先根据第二图像帧和第一图像帧,确定第一图像帧相对第二图像帧发生变化的第一目标图像区域(即脏区区域),然后根据第一目标图像区域中的图像内容,对墨水屏上显示的第二图像帧对应的第一界面进行部分更新,使得更新后墨水屏上显示的界面为第一图像帧对应的第二界面,从而实现将第一界面更新为第二界面的效果。其中,电子设备可以将墨水屏上显示的第一界面中与所述第一目标图像区域相同的区域内的图像内容,替换为第一图像帧中所述第一目标图像区域内的图像内容,以使更新后墨水屏上显示的界面为第一图像帧对应的第二界面。
示例性的,当电子设备中包括上述的显示单元和第一应用时,所述显示单元通过执行上述步骤S401~S402,得到第一图像帧之后,显示单元可以通过比较第一图像帧和第二图像帧,确定第一图像帧相对第二图像帧发生变化的第一目标图像区域中的图像内容。然后,显示单元可以通过调用串行外设接口(serial peripheral interface,SPI)将第一目标图像区域中的图像内容传输至墨水屏的驱动器,以使墨水屏的驱动器根据第一目标图像区域中的图像内容,驱动墨水屏将第一界面中与所述第一目标图像区域对应的区域内的图像内容,替换为第一目标图像区域中的图像内容。
可选的,电子设备在通过调用SPI接口将第一目标图像区域中的图像内容传输至墨水屏的驱动器之前,还可以先确定第一目标图像区域中的图像内容满足SPI送显条件。其中,SPI送显条件为图像内容的数据量小于或等于设定的数据量阈值,设定的数据量阈值可以为SPI接口最大能够承载的数据量值。基于该方式,可以保证界面更新的顺利执行。
可选的,电子设备在确定第一目标图像区域中的图像内容后,若确定第一目标图像区域中的图像内容大于设定的数据量阈值,则可以利用MIPI接口对第一目标图像区域中的图像内容进行传输,以保证界面更新的顺利执行。当然,电子设备也可以不将第一目标图像区域中的图像内容传输至墨水屏,即停止当前流程,则可以由第一应用自行生成待显示的图像帧并进行对应界面的显示。
其中,SPI接口能够传输的数据量较小,但是传输的速度很快。而上述局部更新方式中,显示单元仅需将图像帧中发生变化的内容传输至显示屏进行更新显示即可,因此需要传输的数据量较小,因此利用SPI进行数据传输的方式在该场景中是可行的,同时还能保证较快的数据传输速度,有利于提高墨水屏更新显示内容时的刷新率,进而能够支持为用户提供流畅的手写服务,能够提高用户使用体验。
Optionally, in the above method, the display unit in the electronic device may also, according to actual service requirements, use other interfaces to transmit the image frame, or the image content of the target region of the image frame, to the display screen for display; this is not specifically limited in the embodiments of this application.
In some embodiments of this application, when the electronic device includes the above display unit and the first application, step S403 may be performed by the display unit. Further, when the display unit is the content display service shown in Figure 3 above, step S403 may be performed by the frame buffer update module in the display system.
In some embodiments of this application, the electronic device may begin executing the method provided by the embodiments of this application when it detects a handwriting operation performed by the user. While executing the method, the electronic device may continuously obtain the image frames generated by the first application until it detects that the user has stopped handwriting. In this process, each time the electronic device obtains an image frame generated by the first application, it may predict the next image frame according to that image frame and at least one image frame preceding it, and display the predicted next image frame on the e-ink screen. The at least one image frame is an image frame generated by the first application (i.e., an image frame to be displayed that the first application generates by invoking the system services of the electronic device).
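The continuous predict-and-display loop described above can be sketched as a small class that keeps a sliding window of the application's recent frames. The actual prediction model is not specified at this point in the text; pixel-wise linear extrapolation from the last two frames is used here purely as a placeholder predictor.

```python
from collections import deque
import numpy as np

class FramePredictor:
    """Keep the most recent application frames and predict the next one.
    Placeholder model: extrapolate pixel-wise from the last two frames."""
    def __init__(self, window: int = 3):
        self.frames = deque(maxlen=window)  # sliding window of recent frames

    def push(self, frame: np.ndarray):
        """Add a newly received frame; return the predicted next frame,
        or None while there is not yet enough history to predict."""
        self.frames.append(frame)
        if len(self.frames) < 2:
            return None
        prev, curr = self.frames[-2].astype(int), self.frames[-1].astype(int)
        # next ~ current + (current - previous), clipped to the valid pixel range
        return np.clip(curr + (curr - prev), 0, 255).astype(np.uint8)
```

Each predicted frame is what the display unit would push to the e-ink screen ahead of the application's own next frame, hiding part of the drawing latency.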
Compared with prior-art methods in which the changed region of the display interface is determined and refreshed during the drawing stage within the application process, the method provided by the above embodiments predicts the changed region of the display interface and refreshes it during the display stage of the system display process. The method therefore only needs to obtain the content that the application displays on the e-ink screen and does not require adapting each application as prior-art solutions do, which greatly reduces implementation difficulty and improves the generality and practicality of the solution. Unlike the prior art, in which third-party applications cannot use partial updates, the solution provided by the above embodiments can update displayed content with partial updates even for third-party applications, quickly refreshing the content they display, thereby reducing the latency of refreshing third-party application content on the e-ink screen and improving display smoothness.
The above method is described below with reference to a specific example.
Taking as an example the content display method provided by the embodiments of this application applied to the content display system described in Figure 3 above, with the application in the electronic device being a third-party application and the user drawing displayed content with a stylus, and referring to Figure 5, the flow of a content display method provided by an embodiment of this application may include:
S501: The scenario recognition system in the electronic device detects and identifies the type of the third-party application running in the foreground and the state of the stylus.
S502: When the scenario recognition system in the electronic device determines that the type of the third-party application running in the foreground is a set type and the stylus state is pen-down, it notifies the display system of this scenario information.
The set type may be, for example, handwriting/office applications.
In some embodiments of this application, when the electronic device adopts the Android system architecture and the display system determines, according to the information from the scenario recognition system, that the type of the third-party application currently running in the foreground is handwriting/office and the stylus state is pen-down, it may force the graphics processing unit (GPU) composition mode to be enabled. In this mode, the third-party application can obtain the complete image frame in the process of generating image frames and displaying the corresponding interfaces, and send the complete image frame to the display system.
S503: The display system in the electronic device instructs the third-party application to display its interface through the display system.
This step may be performed by the display control module in the display system. When the display control module determines that the type of the third-party application currently running in the foreground is the set type and the stylus state is pen-down, it may decide to adopt the method provided by the embodiments of this application, namely predicting the next image frame to display from multiple consecutive image frames generated by the third-party application and updating the interface display according to the predicted frame. When the display control module determines that the type of the third-party application currently running in the foreground is not the set type, or that the stylus state is not pen-down, it may decide to generate image frames and display the corresponding interfaces in the conventional way, i.e., the third-party application generates the image frames itself and invokes the system services to display the corresponding interfaces.
S504: After receiving the handwriting operation of the stylus acting on the e-ink screen, the third-party application in the electronic device invokes the graphics drawing system to draw the handwriting trajectory and generates multiple image frames corresponding to the trajectory.
Each of the multiple image frames includes part of the handwriting trajectory. The third-party application may draw the handwriting trajectory and generate the corresponding image frames by invoking the native drawing interfaces of the electronic device.
S505: The graphics drawing system in the electronic device sends the generated image frames to the display system in sequence.
Optionally, the graphics drawing system may send the image frames to the display system in sequence through a buffer queue mechanism.
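The buffer-queue handoff in step S505 can be sketched as a bounded FIFO between the drawing system (producer) and the display system (consumer). This is a minimal single-threaded stand-in; a real buffer queue also handles synchronization and buffer reuse.

```python
from collections import deque

class BufferQueue:
    """Bounded FIFO of frames. The oldest frame is dropped when the queue
    is full, so the consumer only ever sees a short, fresh backlog."""
    def __init__(self, capacity: int = 3):
        self.slots = deque(maxlen=capacity)

    def queue(self, frame):
        self.slots.append(frame)  # producer side (graphics drawing system)

    def acquire(self):
        return self.slots.popleft() if self.slots else None  # consumer side (display system)
```

The display system dequeues frames in generation order, which preserves the frame sequence the prediction step depends on.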
S506: The display system in the electronic device determines whether it is currently in a handwriting state; if so, step S507 is executed; otherwise, step S511 is executed.
This step may be performed by the frame buffer learning module in the display system.
S507: The display system in the electronic device predicts the next image frame to be displayed according to the multiple image frames.
This step may be performed by the frame buffer learning module in the display system.
S508: The display system in the electronic device computes the dirty region between the next image frame to be displayed and the frame preceding it.
This step may be performed by the frame buffer learning module in the display system.
S509: When the display system in the electronic device determines that the dirty region satisfies the SPI display condition, it sends the information of the dirty region to the e-ink screen driver over the SPI interface.
This step may be performed by the frame buffer update module in the display system.
S510: The e-ink screen driver in the electronic device partially updates the currently displayed interface according to the information of the dirty region, thereby updating the interface displayed on the e-ink screen.
S511: The electronic device sends the next image frame to be displayed to the e-ink screen driver over the MIPI interface.
S512: The e-ink screen driver performs a global update of the currently displayed interface according to the next image frame to be displayed, thereby updating the interface displayed on the e-ink screen.
In the above method, a third-party application (not adapted to vendor-customized APIs) can refresh its display interface using the system's partial-refresh display method, and the method requires no adaptation for each third-party application, which greatly reduces implementation difficulty and improves the generality and practicality of the solution. Moreover, when applied to e-ink screens with low refresh rates, the solution can greatly reduce handwriting latency and raise the refresh rate, thereby improving the writing experience.
For the specific implementation of each step in the above flow, reference may be made to the related descriptions in the foregoing embodiments, which are not repeated here.
It should be noted that the specific implementation flow provided by the above example is merely an illustration of a method flow to which the embodiments of this application are applicable; the execution order of the steps may be adjusted according to actual needs, and other steps may be added or some steps removed.
Based on the above embodiments and the same concept, an embodiment of this application further provides a content display method. As shown in Figure 6, the method may include:
S601: A display unit in the operating system of an electronic device obtains, in response to a handwriting operation acting on an e-ink screen, multiple consecutive image frames belonging to a first application.
Illustratively, for the electronic device, the display unit, and the first application, reference may be made to the related descriptions in the foregoing embodiments, which are not repeated here. The multiple image frames may be the image frames whose generation is controlled by the first application as described in the above embodiments. For the method by which the display unit obtains the multiple image frames, reference may be made to the method described in the above embodiments, which is not repeated here.
In some embodiments of this application, the handwriting operation is an operation acting within a first display region on the e-ink screen, where the first display region is the display region in which the second image frame is located.
S602: The display unit predicts a first image frame according to the multiple image frames, where the first image frame is a predicted image frame of the image frame following the last of the multiple image frames.
For the method by which the display unit predicts the first image frame according to the multiple image frames, reference may be made to the related descriptions in the above embodiments, which are not repeated here.
S603: The display unit updates a second image frame displayed on the e-ink screen to the first image frame.
In some embodiments of this application, the second image frame is a predicted image frame of the last of the multiple image frames.
As an optional implementation, the display unit may update the second image frame to the first image frame using partial updating. Specifically, the display unit may first determine first target image data according to the first image frame and the second image frame, where the first target image data indicates the image content of the first image frame that has changed relative to the second image frame; it may then update the second image frame displayed on the e-ink screen to the first image frame according to the first target image data. In the specific update, the display unit may send the first target image data to the e-ink screen over a serial peripheral interface and drive the e-ink screen to replace second target content in the second image frame with first target content, where the first target content is the image content indicated by the first target image data and the second target content is the content of the second image frame that differs from the first image frame.
Optionally, before adopting partial updating, the display unit may first determine that the data volume of the first target image data is less than or equal to a set data volume threshold. If the display unit determines that the data volume of the first target image data is greater than the set threshold, it may adopt global updating instead: the display unit may send the first image frame to the e-ink screen over a mobile industry processor interface and drive the e-ink screen to replace the second image frame with the first image frame.
For the specific implementation of each step performed by the display unit in the electronic device in the above method, reference may be made to the related descriptions in the foregoing embodiments, which are not repeated here.
Based on the above embodiments and the same concept, an embodiment of this application further provides an electronic device for implementing the content display method provided by the embodiments of this application. As shown in Figure 7, the electronic device 700 may include a display screen 701, a memory 702, one or more processors 703, and one or more computer programs (not shown in the figure). These components may be coupled through one or more communication buses 704.
The display screen 701 is an e-ink screen for displaying application interfaces and other related user interfaces.
The memory 702 stores one or more computer programs (code), which include computer instructions; the one or more processors 703 invoke the computer instructions stored in the memory 702 so that the electronic device 700 executes the content display method provided by the embodiments of this application.
In a specific implementation, the memory 702 may include high-speed random access memory and may also include non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state storage devices. The memory 702 may store an operating system (hereinafter referred to as the system), for example an embedded operating system such as ANDROID, IOS, WINDOWS, or LINUX. The memory 702 may be used to store the implementation programs of the embodiments of this application. The memory 702 may also store a network communication program, which may be used to communicate with one or more additional devices, one or more user devices, and one or more network devices.
The one or more processors 703 may be a general-purpose central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits for controlling the execution of the programs of the solutions of this application.
It should be noted that Figure 7 is merely one implementation of the electronic device 700 provided by the embodiments of this application; in practical applications, the electronic device 700 may include more or fewer components, which is not limited here.
Based on the above embodiments and the same concept, an embodiment of this application further provides a computer-readable storage medium storing a computer program which, when run on a computer, causes the computer to execute the methods provided by the above embodiments.
Based on the above embodiments and the same concept, an embodiment of this application further provides a computer program product comprising a computer program or instructions which, when run on a computer, cause the computer to execute the methods provided by the above embodiments.
The methods provided by the embodiments of this application may be implemented wholly or partly by software, hardware, firmware, or any combination thereof. When implemented in software, they may be implemented wholly or partly in the form of a computer program product. A computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the flows or functions according to the embodiments of the present invention are produced wholly or partly. The computer may be a general-purpose computer, a special-purpose computer, a computer network, a network device, user equipment, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired means (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless means (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium that the computer can access, or a data storage device such as a server or data center integrating one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., digital video disc (DVD)), or a semiconductor medium (e.g., SSD).
Obviously, those skilled in the art can make various changes and variations to this application without departing from its scope. Thus, if these modifications and variations of this application fall within the scope of the claims of this application and their technical equivalents, this application is also intended to encompass these changes and variations.

Claims (12)

  1. A content display method, applied to a display unit in an operating system of an electronic device, characterized in that the method comprises:
    in response to a handwriting operation acting on an e-ink screen, obtaining a plurality of consecutive image frames belonging to a first application;
    predicting a first image frame according to the plurality of image frames, wherein the first image frame is a predicted image frame of the image frame following the last of the plurality of image frames; and
    updating a second image frame displayed on the e-ink screen to the first image frame.
  2. The method according to claim 1, characterized in that updating the second image frame displayed on the e-ink screen to the first image frame comprises:
    determining first target image data according to the first image frame and the second image frame, wherein the first target image data indicates image content of the first image frame that has changed relative to the second image frame; and
    updating the second image frame displayed on the e-ink screen to the first image frame according to the first target image data.
  3. The method according to claim 2, characterized in that updating the second image frame displayed on the e-ink screen to the first image frame according to the first target image data comprises:
    sending the first target image data to the e-ink screen over a serial peripheral interface, and driving the e-ink screen to replace second target content in the second image frame with first target content,
    wherein the first target content is the image content indicated by the first target image data, and the second target content is content of the second image frame that differs from the first image frame.
  4. The method according to claim 3, characterized in that before the first target image data is sent to the e-ink screen over the serial peripheral interface, the method further comprises:
    determining that the data volume of the first target image data is less than or equal to a set data volume threshold.
  5. The method according to claim 2, characterized in that updating the second image frame displayed on the e-ink screen to the first image frame according to the first target image data comprises:
    when it is determined that the data volume of the first target image data is greater than a set data volume threshold, sending the first image frame to the e-ink screen over a mobile industry processor interface, and driving the e-ink screen to replace the second image frame with the first image frame.
  6. The method according to any one of claims 1 to 5, characterized in that
    the first application is any application in a set whitelist, the whitelist containing at least one application; and/or
    the first application is an application of a set type.
  7. The method according to any one of claims 1 to 6, characterized in that the second image frame is a predicted image frame of the last of the plurality of image frames.
  8. The method according to any one of claims 1 to 7, characterized in that the handwriting operation is an operation acting within a first display region on the e-ink screen, wherein the first display region is the display region in which the second image frame is located.
  9. The method according to any one of claims 1 to 8, characterized in that predicting the first image frame according to the plurality of image frames comprises:
    determining the first image frame according to the plurality of image frames and an image prediction model, wherein the image prediction model represents the relationship between a plurality of consecutive image frames and the image frame following the last of the plurality of image frames.
  10. An electronic device, characterized in that the electronic device comprises an e-ink screen, a memory, and one or more processors,
    wherein the memory is configured to store computer program code comprising computer instructions which, when executed by the one or more processors, cause the electronic device to perform the method according to any one of claims 1 to 9.
  11. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when run on an electronic device, causes the electronic device to perform the method according to any one of claims 1 to 9.
  12. A computer program product, characterized in that the computer program product comprises a computer program or instructions which, when run on a computer, cause the computer to perform the method according to any one of claims 1 to 9.
PCT/CN2023/115528 2022-08-30 2023-08-29 Content display method and electronic device WO2024046317A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211049177.1 2022-08-30
CN202211049177.1A CN117666911A (zh) 2022-08-30 2022-08-30 Content display method and electronic device

Publications (1)

Publication Number Publication Date
WO2024046317A1 true WO2024046317A1 (zh) 2024-03-07

Family

ID=90079424

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/115528 WO2024046317A1 (zh) Content display method and electronic device

Country Status (2)

Country Link
CN (1) CN117666911A (zh)
WO (1) WO2024046317A1 (zh)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130271487A1 (en) * 2012-04-11 2013-10-17 Research In Motion Limited Position lag reduction for computer drawing
CN106716331A (zh) * 2014-09-16 2017-05-24 微软技术许可有限责任公司 模拟触摸显示器的实时响应性
CN110764652A (zh) * 2019-10-25 2020-02-07 深圳市康冠商用科技有限公司 红外触摸屏及其触摸点预测方法
CN112764616A (zh) * 2021-01-22 2021-05-07 广州文石信息科技有限公司 一种电子墨水屏手写加速方法、装置、设备及存储介质
CN114035763A (zh) * 2022-01-11 2022-02-11 广州文石信息科技有限公司 一种电子墨水屏幕作为电脑显示器的抖动优化方法及装置


Also Published As

Publication number Publication date
CN117666911A (zh) 2024-03-08


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23859347

Country of ref document: EP

Kind code of ref document: A1