CN117666911A - Content display method and electronic equipment

Content display method and electronic equipment

Info

Publication number
CN117666911A
Authority
CN
China
Prior art keywords
image frame
image
application
display
ink screen
Legal status
Pending
Application number
CN202211049177.1A
Other languages
Chinese (zh)
Inventor
罗诚
刘开罩
华梦峥
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Application filed by Huawei Technologies Co Ltd
Priority to CN202211049177.1A
Priority to PCT/CN2023/115528 (published as WO2024046317A1)
Publication of CN117666911A

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser, using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F 3/04883 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, using a touch-screen or digitiser, for inputting data by handwriting, e.g. gesture or text
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/14 - Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G06F 3/147 - Digital output to display device; Cooperation and interconnection of the display device with other functional units, using display panels
    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09F - DISPLAYING; ADVERTISING; SIGNS; LABELS OR NAME-PLATES; SEALS
    • G09F 9/00 - Indicating arrangements for variable information in which the information is built-up on a support by selection or combination of individual elements
    • G09F 9/30 - Indicating arrangements for variable information in which the desired character or characters are formed by combining individual elements
    • G09F 9/37 - Indicating arrangements for variable information in which the desired character or characters are formed by combining individual elements being movable elements
    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G - ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G 3/00 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
    • G09G 3/20 - Control arrangements or circuits for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix, no fixed position being assigned to or needed to be assigned to the individual characters or partial characters
    • G09G 3/34 - Control arrangements or circuits for presentation of an assembly of a number of characters by combination of individual elements arranged in a matrix, by control of light from an independent source

Abstract

The application provides a content display method and an electronic device, applied to a display unit in an operating system of the electronic device. In response to a handwriting operation acting on the ink screen, the display unit acquires a plurality of image frames belonging to a first application and predicts a first image frame from them; after the first image frame is predicted, the display unit can update the second image frame displayed on the ink screen to the first image frame. In this process the display unit predicts and displays subsequent image frames from image frames belonging to the first application, so it does not have to wait for the actual image frames to be generated, which reduces display delay, and the application does not need to be adapted, which keeps implementation difficulty low. The method therefore improves display fluency and display efficiency while reducing implementation difficulty, and has high universality and practicability.

Description

Content display method and electronic equipment
Technical Field
The present disclosure relates to the field of electronic devices, and in particular, to a content display method and an electronic device.
Background
Ink screens offer ultra-low power consumption, a paper-like texture, eye comfort without blue light, and a light, thin build, so they are now widely used. However, the refresh rate of current ink screens is typically only 5-7 frames per second (FPS), far below that of a liquid crystal display (LCD), which reaches 60 FPS, 120 FPS, or more. This low refresh rate causes a large handwriting delay on the ink screen: when an application in the electronic device displays handwritten content on the ink screen in response to a user's handwriting operation, the display easily becomes less fluent, which degrades the user's handwriting experience.
Currently, to reduce handwriting delay on the ink screen, an application in the electronic device can be adapted and given a customized local drawing interface. While the application draws the handwriting track through this local drawing interface, the change in a local region of the display interface (that is, the region where the handwriting track is located) is determined by accurate calculation or prediction, and only the content in that local region is refreshed according to the change. This speeds up interface refreshing and reduces the delay of displaying the handwriting track on the ink screen. However, the approach requires adapting each application individually, so the implementation difficulty is high, the efficiency is low, and the universality and practicability of the method are limited.
Disclosure of Invention
The present application provides a content display method and an electronic device, which reduce the writing delay of an ink screen when handwritten content is displayed on it in a simple, convenient, and efficient way, and which improve the universality and practicability of the scheme.
In a first aspect, the present application provides a content display method applied to a display unit in an operating system of an electronic device, the method including: acquiring a plurality of image frames in response to a handwriting operation acting on the ink screen, wherein the plurality of image frames belong to a first application; predicting a first image frame from the plurality of image frames, wherein the first image frame is a predicted image frame of the next image frame following the last of the plurality of image frames; and updating the second image frame displayed on the ink screen to the first image frame.
In this method, when the display unit in the electronic device receives a handwriting operation acting on the ink screen and handwritten content needs to be displayed, it can predict and display subsequent image frames from the image frames already displayed on the ink screen, without waiting for those subsequent frames to be generated. The frames to be displayed are therefore obtained and shown in advance, which increases the rate at which the displayed image frame is updated, improves display fluency, and reduces the delay in displaying handwritten content. Moreover, when the handwritten content is content inside an application, the system service, that is, the display unit, can directly acquire the application's image frames and predict and display the following frames from them without changing the application's processing logic or methods, so no per-application adaptation is needed, the implementation difficulty is low, and the efficiency is high. In summary, when displaying handwritten content of an application, the method improves display fluency and display efficiency while keeping implementation difficulty low, and therefore has high universality and practicability.
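A minimal structural sketch of the three steps in this first aspect, expressed in Java; all class and method names here are hypothetical illustrations rather than interfaces defined by the application, and the concrete pieces (prediction, changed-region computation, transfer path) are sketched separately later in this description.

    import java.util.List;

    // Hypothetical outline of the first-aspect flow performed by the display unit.
    // Frames are modelled as int[] pixel arrays purely for illustration.
    public abstract class ContentDisplayFlow {
        // Step 1: in response to a handwriting operation on the ink screen, acquire
        // several consecutive image frames belonging to the first application.
        protected abstract List<int[]> acquireApplicationFrames();

        // Step 2: predict the first image frame, i.e. the frame expected to follow
        // the last of the acquired frames.
        protected abstract int[] predictFirstImageFrame(List<int[]> frames);

        // Step 3: update the second image frame currently shown on the ink screen
        // to the predicted first image frame.
        protected abstract void updateInkScreen(int[] firstImageFrame);

        public final void onHandwritingOperation() {
            List<int[]> frames = acquireApplicationFrames();
            int[] firstImageFrame = predictFirstImageFrame(frames);
            updateInkScreen(firstImageFrame);
        }
    }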
In one possible design, updating the second image frame displayed on the ink screen to the first image frame includes: determining first target image data from the first image frame and the second image frame, wherein the first target image data indicates the image content of the first image frame that has changed relative to the second image frame; and updating the second image frame displayed on the ink screen to the first image frame according to the first target image data.
In this method, the display unit updates the second image frame to the first image frame according to only the changed image content, that is, by a partial update, which increases the speed of updating the image frame, improves update efficiency, and reduces display delay.
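A sketch of how the first target image data might be computed, assuming frames are equally sized row-major pixel arrays; the bounding-rectangle representation and the 4-bytes-per-pixel figure are illustrative assumptions, not details taken from the text.

    // Hypothetical helper: the image content of the predicted (first) frame that
    // changed relative to the displayed (second) frame, as a dirty rectangle plus
    // the pixels of the first frame inside that rectangle.
    public final class TargetImageData {
        public final int left, top, right, bottom;   // inclusive dirty rectangle
        public final int[] pixels;                   // changed region, row-major

        private TargetImageData(int l, int t, int r, int b, int[] px) {
            left = l; top = t; right = r; bottom = b; pixels = px;
        }

        // Returns null when the two frames are identical.
        public static TargetImageData diff(int[] first, int[] second, int width, int height) {
            int l = width, t = height, r = -1, b = -1;
            for (int y = 0; y < height; y++) {
                for (int x = 0; x < width; x++) {
                    if (first[y * width + x] != second[y * width + x]) {
                        if (x < l) l = x;
                        if (x > r) r = x;
                        if (y < t) t = y;
                        if (y > b) b = y;
                    }
                }
            }
            if (r < 0) return null;
            int w = r - l + 1, h = b - t + 1;
            int[] px = new int[w * h];
            for (int y = 0; y < h; y++) {
                System.arraycopy(first, (t + y) * width + l, px, y * w, w);
            }
            return new TargetImageData(l, t, r, b, px);
        }

        public long byteSize() { return (long) pixels.length * 4; }  // assuming 4 bytes per pixel
    }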
In one possible design, updating the second image frame displayed on the ink screen to the first image frame according to the first target image data includes: transmitting the first target image data to the ink screen over a serial peripheral interface, and driving the ink screen to replace second target content in the second image frame with first target content; the first target content is the image content indicated by the first target image data, and the second target content is the content of the second image frame that differs from the first image frame.
In this method, a serial peripheral interface can carry only a limited amount of data, but it transfers that data quickly. Because the display unit performs a partial update and only the changed image content is transmitted to the display for refreshing, the amount of data to be transmitted is small, so the serial peripheral interface can be used as much as possible. This keeps the data transfer fast, raises the effective refresh rate of the ink screen when the displayed content is updated, supports a smooth handwriting service for the user, and improves the user experience.
In one possible design, before the first target image data is transmitted to the ink screen over the serial peripheral interface, the method further includes: determining that the data amount of the first target image data is less than or equal to a set data amount threshold.
In this method, when the data amount of the changed image content is less than or equal to the set threshold, the serial peripheral interface can carry that content, so the content can be transmitted to the ink screen over the serial peripheral interface for the update. This check ensures that the image frame update flow runs smoothly.
In one possible design, updating the second image frame displayed on the ink screen to the first image frame according to the first target image data includes: when the data amount of the first target image data is determined to be larger than the set data amount threshold, sending the first image frame to the ink screen over a mobile industry processor interface, and driving the ink screen to replace the second image frame with the first image frame.
In this method, the mobile industry processor interface supports transferring larger amounts of data, so transmitting the data over the MIPI interface meets the transfer requirement of a global update, in which the whole second image frame is replaced with the first image frame, and ensures that the image frame update flow runs smoothly.
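A sketch of this transfer-path choice in Java; the threshold value, the pixel size, and the SpiLink/MipiLink interfaces are illustrative assumptions standing in for whatever SPI and MIPI drivers the device actually exposes.

    // Hypothetical sketch: send the changed content over the serial peripheral
    // interface when it is small enough, otherwise send the whole predicted frame
    // over the mobile industry processor interface.
    public class FrameUpdater {
        public interface SpiLink { void sendRegion(int left, int top, int width, int height, int[] pixels); }
        public interface MipiLink { void sendFrame(int[] frame); }

        private static final long SPI_DATA_THRESHOLD_BYTES = 64 * 1024;  // illustrative threshold
        private static final int BYTES_PER_PIXEL = 4;                    // assumed pixel size

        private final SpiLink spi;    // small payloads, fast transfer
        private final MipiLink mipi;  // large payloads, full frames

        public FrameUpdater(SpiLink spi, MipiLink mipi) {
            this.spi = spi;
            this.mipi = mipi;
        }

        public void update(int[] firstImageFrame,
                           int dirtyLeft, int dirtyTop, int dirtyWidth, int dirtyHeight,
                           int[] changedPixels) {
            long dataAmount = (long) changedPixels.length * BYTES_PER_PIXEL;
            if (dataAmount <= SPI_DATA_THRESHOLD_BYTES) {
                // Partial update: replace only the changed content in the second image frame.
                spi.sendRegion(dirtyLeft, dirtyTop, dirtyWidth, dirtyHeight, changedPixels);
            } else {
                // Global update: replace the whole second image frame with the first image frame.
                mipi.sendFrame(firstImageFrame);
            }
        }
    }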
In one possible design, the first application is any application in a set white list, where the white list includes at least one application; and/or the first application is an application of a set type.
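A small sketch of this eligibility check; the package-name white list and the application-type enumeration are illustrative assumptions.

    import java.util.Set;

    // Hypothetical check for whether an application qualifies as the first application:
    // either it appears in a configured white list, or it belongs to a set type.
    public class ApplicationFilter {
        public enum AppType { HANDWRITING_OFFICE, OTHER }

        private final Set<String> whiteList;   // package names configured in the system
        private final AppType enabledType;

        public ApplicationFilter(Set<String> whiteList, AppType enabledType) {
            this.whiteList = whiteList;
            this.enabledType = enabledType;
        }

        public boolean isFirstApplication(String packageName, AppType type) {
            return whiteList.contains(packageName) || type == enabledType;
        }
    }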
In one possible design, the second image frame is a predicted image frame of a last image frame of the plurality of image frames.
In this method, both the first image frame and the second image frame are predicted image frames, so the image frames the display unit shows are predictions rather than the actually generated frames. The display unit can therefore display a predicted image frame as soon as it is determined, without waiting for the real frame to be generated, which reduces the delay of updating the display on the ink screen and improves the user experience.
In one possible design, the handwriting operation is an operation acting within a first display area of the ink screen, the first display area being the display area where the second image frame is located.
In this method, the display unit updates the content displayed in one display area of the ink screen according to a handwriting operation within that area, without affecting the content displayed in other display areas of the ink screen, which improves the user experience.
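A minimal sketch of the corresponding hit test; the rectangle representation of the first display area is an assumption made only for illustration.

    // Hypothetical hit test: the handwriting operation drives the update flow only
    // when its contact point lies inside the display area (full-screen window,
    // split-screen window, or floating window) where the second image frame is shown.
    public class DisplayArea {
        private final int left, top, right, bottom;   // bounds of the first display area

        public DisplayArea(int left, int top, int right, int bottom) {
            this.left = left; this.top = top; this.right = right; this.bottom = bottom;
        }

        public boolean containsTouch(int x, int y) {
            return x >= left && x < right && y >= top && y < bottom;
        }
    }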
In one possible design, the predicting the first image frame from the plurality of image frames includes: determining the first image frame according to the plurality of image frames and an image prediction model; wherein the image prediction model is used to represent a relationship between a consecutive plurality of image frames and a next image frame of a last image frame of the plurality of image frames.
In this method, predicting image frames with the image prediction model is highly accurate, which improves the accuracy of the updated image content and further improves the user experience.
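To make the model interface concrete, a sketch of a trivial predictor that extrapolates each pixel linearly from the last two frames; a real image prediction model would be learned from data, and this per-pixel extrapolation is only a placeholder assumption.

    import java.util.Deque;

    // Hypothetical predictor illustrating the relationship the image prediction model
    // captures: several consecutive image frames in, one predicted next frame out.
    public class LinearFramePredictor {
        public int[] predictNext(Deque<int[]> consecutiveFrames) {
            int[][] frames = consecutiveFrames.toArray(new int[0][]);
            if (frames.length == 0) {
                return new int[0];
            }
            int[] last = frames[frames.length - 1];
            int[] prev = frames.length > 1 ? frames[frames.length - 2] : last;
            int[] predicted = new int[last.length];
            for (int i = 0; i < last.length; i++) {
                int extrapolated = 2 * last[i] - prev[i];                 // continue the recent change
                predicted[i] = Math.max(0, Math.min(255, extrapolated));  // clamp to a grayscale range
            }
            return predicted;
        }
    }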
In a second aspect, the present application provides an electronic device comprising an ink screen, a memory, and one or more processors; wherein the memory is for storing computer program code, the computer program code comprising computer instructions; the computer instructions, when executed by one or more processors, cause an electronic device to perform the method described in the first aspect or any of the possible designs of the first aspect.
In a third aspect, the present application provides a computer readable storage medium storing a computer program which, when run on a computer, causes the computer to perform the method described in the first aspect or any one of the possible designs of the first aspect.
In a fourth aspect, the present application provides a computer program product comprising a computer program or instructions which, when run on a computer, cause the computer to perform the method described in the first aspect or any one of the possible designs of the first aspect.
The advantages of the second aspect to the fourth aspect are described with reference to the first aspect, and the detailed description is not repeated here.
Drawings
Fig. 1 is a schematic diagram of a hardware architecture of an electronic device according to an embodiment of the present application;
Fig. 2 is a schematic software architecture of an electronic device according to an embodiment of the present application;
Fig. 3 is a schematic architecture diagram of a content display system according to an embodiment of the present application;
Fig. 4 is a schematic diagram of a content display method according to an embodiment of the present application;
Fig. 5 is a flow chart of a content display method according to an embodiment of the present application;
Fig. 6 is a schematic diagram of a content display method according to an embodiment of the present application;
Fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application more apparent, the embodiments of the present application will be described in further detail with reference to the accompanying drawings.
In the description of the embodiments of the present application, the terms "first", "second", and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined by "first" or "second" may explicitly or implicitly include one or more such features.
For ease of understanding, a description of concepts related to the present application is given by way of example for reference.
The electronic device is a device with an ink screen. The electronic device in some embodiments of the present application may be a portable device with an ink screen, such as a mobile phone with an ink screen, a tablet computer, a wearable device with wireless communication capabilities (e.g., a watch, a bracelet, a helmet, an earphone, etc.), a vehicle-mounted terminal device, an augmented reality (AR)/virtual reality (VR) device, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a personal digital assistant (PDA), a smart home device (e.g., a smart television, a smart speaker, etc.), a smart robot, workshop equipment, a wireless terminal in self-driving, a wireless terminal in remote medical surgery, a wireless terminal in a smart grid, a wireless terminal in transportation safety, a wireless terminal in a smart city, a wireless terminal in a smart home, a flying device (e.g., a hot-air balloon, an aircraft, etc.), and the like.
Wherein the wearable device is a portable device that the user can wear directly on his or her clothing or accessories.
In some embodiments of the present application, the electronic device may also be a portable terminal device that contains other functions, such as personal digital assistant and/or music player functions. Exemplary embodiments of portable terminal devices include, but are not limited to, portable terminal devices carrying various operating systems. The portable terminal device may also be another portable terminal device, such as a laptop computer with a touch-sensitive surface (e.g., a touch panel). It should also be appreciated that in other embodiments of the present application, the electronic device described above may be a desktop computer with a touch-sensitive surface (e.g., a touch panel) instead of a portable terminal device.
It should be understood that in embodiments of the present application, "at least one" means one or more, and "a plurality" means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may indicate: A alone, both A and B, or B alone, where A and B may be singular or plural. The character "/" generally indicates that the associated objects before and after it are in an "or" relationship. "At least one of the following (items)" or similar expressions refer to any combination of these items, including any combination of a single item or plural items. For example, at least one of a, b, or c may represent: a; b; c; a and b; a and c; b and c; or a, b, and c, where a, b, and c may each be single or multiple.
Currently, in order to reduce the time delay of displaying handwritten content on an ink screen, customization optimization may generally be performed for an application displaying the handwritten content in an electronic device.
In one scheme, the system of the electronic device provides customized local drawing interfaces for the system applications, self-developed applications, and partner applications in the electronic device. These applications complete the drawing of handwriting tracks in cooperation with the local drawing interface, and the system service of the electronic device can accurately calculate the locally changed region of the handwriting track drawn by the application through the local drawing interface and refresh the handwriting track in that region through the SPI interface. For a third-party application, however, the system service of the electronic device can only obtain the whole frame drawn by the application and refresh the display interface over the full screen according to that frame, so the speed at which the electronic device displays the handwriting track on the ink screen is slower and the delay is larger.
In the above scheme, the partial refresh mode is only applicable to system applications, self-developed applications, and partner applications in the electronic device, and these applications need to be adapted one by one, so the processing efficiency is low and the scheme has low universality and practicability. For a third-party application, the electronic device cannot accurately identify whether it is in a handwriting state and cannot accurately determine the change of the handwriting track, so the handwriting track received by the third-party application cannot be displayed by partial refresh; only full-screen refresh according to the whole frame drawn by the third-party application is possible, which makes the writing delay of the ink screen larger and the display fluency lower.
In another scheme, when an application in the electronic device draws the handwriting track, a motion compensation algorithm may be used to predict the next input event from the input events of the ink screen over a recent period (including the coordinates, pressure, and so on of the contact point of the operation acting on the ink screen), thereby predicting the coordinates of the next contact point and updating the handwriting track according to the predicted coordinates. In this scheme, system applications, self-developed applications, and partner applications in the electronic device still need to be adapted one by one, so the processing efficiency is low and the scheme has low universality and practicability. For a third-party application, the electronic device cannot display the handwriting track received by the application by partial refresh, so the writing delay of the ink screen is still large and the display fluency is low.
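For reference, a sketch of the kind of input-event extrapolation this second scheme relies on; the event fields and the constant-velocity model are assumptions made only for illustration.

    // Hypothetical sketch of the motion-compensation idea: extrapolate the next
    // contact point from the two most recent input events on the ink screen.
    public class InputPredictor {
        public static final class InputEvent {
            public final float x, y, pressure;
            public final long timeMs;
            public InputEvent(float x, float y, float pressure, long timeMs) {
                this.x = x; this.y = y; this.pressure = pressure; this.timeMs = timeMs;
            }
        }

        // Assume the contact point keeps its current velocity for one more frame interval.
        public InputEvent predictNext(InputEvent prev, InputEvent curr, long frameIntervalMs) {
            long dt = Math.max(1, curr.timeMs - prev.timeMs);
            float vx = (curr.x - prev.x) / dt;
            float vy = (curr.y - prev.y) / dt;
            return new InputEvent(curr.x + vx * frameIntervalMs,
                                  curr.y + vy * frameIntervalMs,
                                  curr.pressure,
                                  curr.timeMs + frameIntervalMs);
        }
    }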
In view of the above problems, the embodiments of the present application provide a content display method and an electronic device. The solution can display handwritten content on an ink screen while reducing the writing delay of the ink screen in a simple and efficient way, improving the universality and practicability of the solution and improving the experience of a user using the ink screen.
Referring to fig. 1, a structure of an electronic device to which the method provided in the embodiment of the present application is applicable is described below.
As shown in fig. 1, the electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a usb interface 130, a charge management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a key 190, a motor 191, an indicator 192, a camera 193, a display 194, a SIM card interface 195, and the like.
The sensor module 180 may include a gyroscope sensor, an acceleration sensor, a proximity light sensor, a fingerprint sensor, a touch sensor, a temperature sensor, a pressure sensor, a distance sensor, a magnetic sensor, an ambient light sensor, a barometric pressure sensor, a bone conduction sensor, and the like.
It will be appreciated that the electronic device 100 shown in fig. 1 is merely an example and is not limiting of the electronic device, and that the electronic device may have more or fewer components than shown in the figures, may combine two or more components, or may have different configurations of components. The various components shown in fig. 1 may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application specific integrated circuits.
The processor 110 may include one or more processing units, such as: the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a memory, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural network processor (neural-network processing unit, NPU), etc. Wherein the different processing units may be separate devices or may be integrated in one or more processors. The controller may be a neural hub and a command center of the electronic device 100, among others. The controller can generate operation control signals according to the instruction operation codes and the time sequence signals to finish the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that the processor 110 has just used or recycled. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Repeated accesses are avoided and the latency of the processor 110 is reduced, thereby improving the efficiency of the system.
The execution of the content display method provided in the embodiment of the present application may be performed by the processor 110 controlling or calling other components, for example, calling a processing program of the embodiment of the present application stored in the internal memory 121, or calling a processing program of the embodiment of the present application stored in a third party device through the external memory interface 120, so as to control the wireless communication module 160 to perform data communication with other devices, thereby improving the intelligence and convenience of the electronic device 100 and improving the user experience. The processor 110 may include different devices, such as an integrated CPU and a GPU, where the CPU and the GPU may cooperate to execute the content display method provided in the embodiments of the present application, such as a part of algorithms in the content display method are executed by the CPU, and another part of algorithms are executed by the GPU, so as to obtain a faster processing efficiency.
The display screen 194 is used to display images, videos, and the like. In this embodiment, the display 194 is an ink screen. The display 194 includes a display panel. In some embodiments, the electronic device 100 may include 1 or N display screens 194, N being a positive integer greater than 1. The display 194 may be used to display information entered by or provided to a user as well as various graphical user interfaces (graphical user interface, GUI). For example, the display 194 may display photographs, videos, web pages, or files, etc.
In the embodiment of the present application, the display 194 may be an integral flexible display, or a tiled display formed of two rigid screens and a flexible screen located between the two rigid screens may be used.
The camera 193 (front camera or rear camera, or one camera may be used as both front camera and rear camera) is used to capture still images or video. In general, the camera 193 may include a photosensitive element such as a lens group including a plurality of lenses (convex lenses or concave lenses) for collecting optical signals reflected by an object to be photographed and transmitting the collected optical signals to an image sensor. The image sensor generates an original image of the object to be photographed according to the optical signal.
The internal memory 121 may be used to store computer executable program code including instructions. The processor 110 executes various functional applications of the electronic device 100 and data processing by executing instructions stored in the internal memory 121. The internal memory 121 may include a storage program area and a storage data area. The storage program area may store codes of an operating system and application programs (such as functions corresponding to the content display method provided in the present application). The storage data area may store data created during use of the electronic device 100, etc.
The internal memory 121 may also store one or more computer programs corresponding to algorithms of the content display method provided in the embodiments of the present application. The one or more computer programs are stored in the internal memory 121 and configured to be executed by the one or more processors 110, the one or more computer programs including instructions that can be used to perform the various steps in the following embodiments.
In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (universal flash storage, UFS), and the like.
Of course, the code of the algorithm of the content display method provided in the embodiment of the present application may also be stored in the external memory. In this case, the processor 110 may run code of an algorithm of the content display method stored in the external memory through the external memory interface 120.
The sensor module 180 may include a gyro sensor, an acceleration sensor, a proximity light sensor, a fingerprint sensor, a touch sensor, and the like.
Touch sensors, also known as "touch panels". The touch sensor may be disposed on the display screen 194, and the touch sensor and the display screen 194 form a touch display screen, which is also referred to as a "touch screen". The touch sensor is used to detect a touch operation acting on or near it. The touch sensor may communicate the detected touch operation to the application processor to determine the touch event type. Visual output related to touch operations may be provided through the display 194. In other embodiments, the touch sensor may also be disposed on a surface of the electronic device 100 at a different location than the display 194.
Illustratively, the display 194 of the electronic device 100 displays a main interface that includes icons of applications (such as camera applications, etc.). For example, the user may click on an icon of the camera application in the main interface by touching the sensor, triggering the processor 110 to launch the camera application, opening the camera 193. The display 194 displays an interface for the camera application, such as a viewfinder interface.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 100 may be used to cover a single or multiple communication bands. Different antennas may also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed into a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide a solution for wireless communication including 2G/3G/4G/5G, etc., applied to the electronic device 100. The mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (low noise amplifier, LNA), etc. The mobile communication module 150 may receive electromagnetic waves from the antenna 1, perform processes such as filtering, amplifying, and the like on the received electromagnetic waves, and transmit the processed electromagnetic waves to the modem processor for demodulation. The mobile communication module 150 can amplify the signal modulated by the modem processor, and convert the signal into electromagnetic waves through the antenna 1 to radiate. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be provided in the same device as at least some of the modules of the processor 110. In the embodiment of the present application, the mobile communication module 150 may also be used for information interaction with other devices.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating the low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then transmits the demodulated low frequency baseband signal to the baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs sound signals through audio means (not limited to the speaker 170A, the receiver 170B, etc.), or displays images or video through the display screen 194. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 150 or other functional module, independent of the processor 110.
The wireless communication module 160 may provide solutions for wireless communication including wireless local area network (wireless local area networks, WLAN) (e.g., wireless fidelity (wireless fidelity, wi-Fi) network), bluetooth (BT), global navigation satellite system (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), near field wireless communication technology (near field communication, NFC), infrared technology (IR), etc., as applied to the electronic device 100. The wireless communication module 160 may be one or more devices that integrate at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, modulates the electromagnetic wave signals, filters the electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, frequency modulate it, amplify it, and convert it to electromagnetic waves for radiation via the antenna 2. In this embodiment, the wireless communication module 160 is configured to establish a connection with another electronic device for data interaction. Or the wireless communication module 160 may be configured to access the access point device, send control instructions to other electronic devices, or receive data sent from other electronic devices.
In addition, the electronic device 100 may implement audio functions through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, and the application processor, etc. Such as music playing, recording, etc. The electronic device 100 may receive key 190 inputs, generating key signal inputs related to user settings and function control of the electronic device 100. The electronic device 100 may generate a vibration alert (such as an incoming call vibration alert) using the motor 191. The indicator 192 in the electronic device 100 may be an indicator light, may be used to indicate a state of charge, a change in power, may be used to indicate a message, a missed call, a notification, etc. The SIM card interface 195 in the electronic device 100 is used to connect a SIM card. The SIM card may be inserted into the SIM card interface 195, or removed from the SIM card interface 195 to enable contact and separation with the electronic device 100.
It should be understood that in practical applications, electronic device 100 may include more or fewer components than shown in fig. 1, and embodiments of the present application are not limited. The illustrated electronic device 100 is only one example, and the electronic device 100 may have more or fewer components than shown in the figures, may combine two or more components, or may have a different configuration of components. The various components shown in the figures may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application specific integrated circuits.
The software system of the electronic device 100 may employ a layered architecture, an event driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture. In the embodiment of the application, taking an Android system with a layered architecture as an example, a software structure of an electronic device is illustrated.
The layered architecture divides the software into several layers, each with distinct roles and branches. The layers communicate with each other through a software interface. As shown in fig. 2, the software architecture can be divided into four layers, from top to bottom, an application layer, an application framework layer (FWK), runtime and system libraries, and a Linux kernel layer.
The application layer is the top layer of the operating system, including native applications of the operating system, such as cameras, gallery, calendar, bluetooth, music, video, information, etc. An application program referred to in the embodiments of the present application is simply referred to as Application (APP), which is a software program capable of implementing one or more specific functions. Typically, a plurality of applications may be installed in an electronic device. Such as camera applications, mailbox applications, sports health applications, health use mobile phone applications, and the like. The application mentioned below may be a system application installed when the electronic device leaves the factory, or may be a third party application downloaded from a network or acquired from other electronic devices by a user during the process of using the electronic device.
Of course, for a developer, the developer may write an application and install it to that layer. In one possible implementation, the application may be developed using Java language, by calling an application programming interface (application programming interface, API) provided by the application framework layer, through which a developer may interact with the underlying layers of the operating system (e.g., kernel layer, etc.) to develop his own application.
The application framework layer is an API and a programming framework of the application layer. The application framework layer may include some predefined functions. The application framework layer may include a window manager, a content provider, a view system, a telephony manager, a resource manager, a notification manager, and the like.
The window manager is used for managing window programs. The window manager can acquire the size of the display screen, judge whether a status bar exists, lock the screen, intercept the screen and the like.
The content provider is used to store and retrieve data and make such data accessible to applications. The data may include information such as files (e.g., documents, video, images, audio), text, etc.
The view system includes visual controls, such as controls that display text, pictures, documents, and the like. The view system may be used to build applications. The interface in the display window may be composed of one or more views. For example, a display interface including a text message notification icon may include a view displaying text and a view displaying a picture.
The telephony manager is used to provide the communication functions of the electronic device. The notification manager allows an application to display notification information in the status bar; such notifications can be used to convey notification-type messages, can disappear automatically after a short stay, and require no user interaction.
The runtime includes a core library and a virtual machine. The runtime is responsible for the scheduling and management of the operating system.
The core library of the system comprises two parts: one part is a function which needs to be called by Java language, and the other part is a core library of the system. The application layer and the application framework layer run in a virtual machine. Taking Java as an example, the virtual machine executes Java files of the application layer and the application framework layer as binary files. The virtual machine is used for executing the functions of object life cycle management, stack management, thread management, security and exception management, garbage collection and the like.
The system library may include a plurality of functional modules. For example: a surface manager, a media library, a three-dimensional graphics processing library (e.g., OpenGL ES), a two-dimensional graphics engine (e.g., SGL), etc. The surface manager is used to manage the display subsystem and provides a fusion of two-dimensional and three-dimensional layers for multiple applications. The media library supports playback and recording of a variety of commonly used audio and video formats, still image files, and the like. The media library may support a variety of audio and video encoding formats, such as: MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, etc. The three-dimensional graphics processing library is used for three-dimensional graphics drawing, image rendering, composition, layer processing, and the like. A two-dimensional graphics engine is a drawing engine for two-dimensional drawing.
The kernel layer provides core system services of the operating system, such as security, memory management, process management, network protocol stacks, driving models and the like, which are realized based on the kernel layer. The kernel layer also acts as an abstraction layer between the hardware and software stacks. This layer has many drivers associated with the electronic device, the main drivers being: a display drive; a keyboard driver as an input device; flash driving based on memory technology equipment; a camera drive; an audio drive; bluetooth driving; wiFi drive, etc.
It should be understood that the functional services described above are only examples, and in practical applications, the electronic device may be divided into more or fewer functional services according to other factors, or the functions of the respective services may be divided in other manners, or the functional services may not be divided, but may operate as a whole.
The following describes in detail the solutions provided in the present application with reference to specific embodiments.
Fig. 3 is a schematic architecture diagram of a content display system according to an embodiment of the present application. Alternatively, as shown in FIG. 3, applications, a graphics rendering system, a scene recognition system, and a display system may be included in the content display system. Alternatively, as shown in FIG. 3, the content display system may be deployed in an electronic device, which may have a content display service therein, and the graphics rendering system, scene recognition system, and display system may be three subsystems in the content display service.
Wherein the graphics rendering system may be used as an interface in an electronic device to provide system rendering services. The application can draw the content such as graphic images (including user handwriting tracks) and the like which are required to be displayed by calling a graphic drawing system. In the embodiment of the application, the application can send the drawn content to the display system, and the display system displays the content. Alternatively, the application may be a system application, or may be a third party application.
The scene recognition system can be used for detecting the currently running application (service) in the electronic equipment, the foreground running application in the electronic equipment, the type of the foreground running application, the handwriting pen state, the ink screen touch state and the like, and sending the detected information to the display system so that the display system can determine whether to control the content display process of the foreground running application by adopting the method provided by the embodiment of the application according to the information.
The display system is used to compose the content that the application has drawn by calling the graphics rendering system and to output it to the ink screen for display. Specifically, the display system may include a display control module, a frame buffer learning module, and a frame buffer update module. The display control module controls content display aspects such as the composition mode and the transmission and display path. The frame buffer learning module predicts the image of the next frame to be displayed from the consecutive frame images (data) already displayed and calculates the dirty region between the predicted frame image and the previous frame image. The frame buffer update module refreshes the display with the predicted frame image according to the dirty region calculated by the frame buffer learning module.
In some embodiments of the present application, a display driver may be further included in the content display system, where the display driver is configured to drive the display to display content under the control of the frame buffer update module.
In one example, when the electronic device adopts an Android system, the graphics rendering system may be implemented as a rendering process (render thread) in the Android system, and the display system may be implemented as a display composition (surface flinger) service of the Android system.
It should be noted that the system architecture shown in fig. 3 is only an exemplary illustration of a system architecture applicable to the present application, and the system architecture shown in fig. 3 is not limited to the system architecture applicable to the present application. The system architecture to which the present application is applicable may include fewer or more modules than those shown in fig. 3, and the embodiments of the present application are not particularly limited.
The following describes in detail the solutions provided in the present application with reference to specific embodiments.
The content display method provided by the embodiment of the application may be applied to an electronic device with an ink screen, and referring to fig. 4, the content display method provided by the embodiment of the application may include:
s401: the electronic device acquires a plurality of continuous image frames, wherein the plurality of image frames are image frames generated by a first application control in the electronic device.
Optionally, the first application may be an application of a set type, such as a handwriting/office application; alternatively, the first application may be any application in a set white list, where the white list may record at least one application to which the solution provided in the present application applies.
Alternatively, the first application may be a third party application installed in the electronic device.
Alternatively, the number of the plurality of image frames may be a set number.
In some embodiments of the present application, before the electronic device acquires the plurality of image frames, the first application in the electronic device may, according to the received handwriting operation, call a system service in the electronic device to sequentially generate the consecutive image frames to be displayed and to display the generated frames. During this process the electronic device can acquire the corresponding image frames frame by frame, thereby obtaining the plurality of image frames. The system service may include a service for drawing image frames in the electronic device and a service for displaying image frames; for example, the system service may be the content display service shown in fig. 3, or it may include at least the graphics rendering system and the display system described in fig. 3.
As an optional implementation, after the system service generates the image frames under the control of the first application, the generated image frames (or the interfaces corresponding to the image frames) may be displayed sequentially on the ink screen of the electronic device.
In some embodiments of the present application, the electronic device may include a display unit and the first application, and step S401 may be performed by the display unit in the electronic device. Optionally, the display unit may act as the system service described above, but it need not; if the display unit does not act as the system service, then after the first application controls the system service to sequentially generate the image frames, the display unit may acquire those image frames from the system service, thereby obtaining the plurality of image frames.
Each time the display unit acquires an image frame, if it can assemble a set number of image frames ending with that frame (that frame being the last of the set number), it may take the set number of image frames as the plurality of image frames, start performing step S401, and continue with the subsequent steps; otherwise, the display unit may not perform step S401 and the subsequent steps, and only the first application performs its method of controlling the generation and display of image frames, that is, the first application displays image frames by calling the system service of the electronic device to draw and display them.
The first application may be the application shown in fig. 3 and the display unit may be the content display service shown in fig. 3.
In some embodiments of the present application, the display unit may be a system level service of the electronic device. The first application may generate the plurality of image frames and display the plurality of image frames by invoking a system interface of the electronic device. For example, when the first application is an application shown in fig. 3 and the display unit is a content display service shown in fig. 3, the first application may draw an interface to be displayed by calling a graphics drawing system in the display unit, thereby obtaining the plurality of image frames corresponding to the first application, and displaying the corresponding interface according to the plurality of image frames.
After receiving the user's handwriting operation on the ink screen, the first application may, in response to the received handwriting operation, control the sequential generation of a plurality of consecutive image frames and the sequential display of the corresponding frames. Optionally, the handwriting operation may be an operation whose contact point traces a continuous (changing) track on the ink screen, such as a touch-and-slide operation, for example a writing operation or a drawing operation. The handwriting operation may be performed by the user directly on the ink screen (for example, writing on the ink screen with a finger) or with a stylus (for example, holding a stylus to write, draw, or annotate on the ink screen).
In some embodiments of the present application, the first application may sequentially display the generated image frames in a full-screen window, a split-screen window, or a floating window of the ink screen. When the first application sequentially displays the image frames in a full-screen window of the ink screen, the handwriting operation may be an operation acting on any region of the ink screen; when the first application sequentially displays the image frames in a split-screen window of the ink screen, the handwriting operation may be an operation acting in the area of the ink screen corresponding to the split-screen window; and when the first application sequentially displays the image frames in a floating window of the ink screen, the handwriting operation is an operation acting in the area of the ink screen corresponding to the floating window.
It should be noted that, in the embodiments of the present application, the first application generating an image frame means that the first application invokes a service, among the system services of the electronic device, for generating image frames, so as to generate the image frame to be displayed; and the first application displaying an image frame on the ink screen of the electronic device means that the first application invokes a service, among the system services of the electronic device, for displaying image frames, so as to display the image frame on the ink screen.
In some embodiments of the present application, when the first application is an application running in the foreground of the electronic device, before acquiring the plurality of image frames and while the first application sequentially displays image frames, the electronic device (or the display unit in the electronic device) may identify the foreground running state of the first application, identify the class of the first application, and may also identify the state of the user operation received by the first application. If it is determined that the first application is currently running in the foreground, that the type of the first application is a set type (for example, a handwriting office type), and that the user has performed a control operation on the first application (or on the content displayed by the first application), the electronic device may start performing step S401; otherwise, the electronic device may not start performing step S401. When a touch operation of the user on the ink screen (for example, the user's finger pressing the ink screen, or a handwriting pen being pressed down) is detected in the area of the ink screen corresponding to the first application, the electronic device may determine that the user has performed a control operation on the first application.
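The foreground, type, and control-operation checks above can be summarized by a sketch such as the following; the field names and the "handwriting_office" type string are hypothetical placeholders, not values defined by the embodiment.

```python
from dataclasses import dataclass

@dataclass
class ForegroundAppState:
    is_foreground: bool          # first application currently running in the foreground
    app_type: str                # e.g. "handwriting_office" (the set type)
    touch_in_app_area: bool      # finger or handwriting pen pressed in the app's screen area

def should_start_step_s401(state: ForegroundAppState, set_type: str = "handwriting_office") -> bool:
    """Start step S401 only when all three conditions described above hold."""
    return state.is_foreground and state.app_type == set_type and state.touch_in_app_area
```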
In some embodiments of the present application, when the electronic device includes the display unit and the first application, step S401 may be performed by the display unit. Further, when the display unit is the content display service shown in fig. 3, step S401 may be performed by the display control module or the frame buffer learning module in the display system.
S402: The electronic device predicts a first image frame according to the plurality of image frames, where the first image frame is the image frame displayed on the ink screen by the electronic device after displaying a second image frame, and the second image frame is the image frame currently displayed on the ink screen.
The first image frame serves as a predicted image frame of a third image frame and is displayed on the ink screen instead of the third image frame, where the third image frame is generated by the first application and is the next image frame after the last image frame of the plurality of image frames. On this basis, the electronic device can obtain the predicted image frame of the third image frame, i.e., the first image frame, before the first application generates the third image frame, and display the first image frame, so that the display speed of image frames can be increased and the time delay reduced. After the first application generates the third image frame, the electronic device may acquire a plurality of continuous image frames generated by the first application and ending with the third image frame, predict a fourth image frame according to these image frames, and update the first image frame with the fourth image frame. The fourth image frame serves as the predicted image frame of the next image frame after the third image frame generated by the first application.
Optionally, the second image frame may be a last image frame of the plurality of image frames; alternatively, the second image frame may be an image frame predicted by the electronic device, in which case the second image frame is used as a predicted frame for a last image frame of the plurality of image frames and displayed on the ink screen instead of the last image frame.
Based on the above method, the electronic device may acquire a plurality of consecutive image frames actually generated by the application and predict a next image frame displayed on the ink screen based on the plurality of image frames.
For example, assume that the set number of image frames is 5. The 1st to 5th frame images displayed on the ink screen by the electronic device are the 1st to 5th frame images generated after the first application receives the user operation. After the first application generates the 5th frame image, the plurality of image frames acquired by the electronic device may be the 1st to 5th frame images, and the first image frame serves as the 6th frame image displayed on the ink screen by the electronic device, while the 6th frame image (i.e., the third image frame) generated by the first application is not displayed on the ink screen. When the plurality of image frames acquired by the electronic device are the 2nd to 6th frame images generated by the first application, the first image frame serves as the 7th frame image displayed on the ink screen by the electronic device, and the 7th frame image (i.e., the third image frame) generated by the first application is not displayed on the ink screen. And so on: each time the electronic device obtains an image frame generated by the first application, it can predict the next frame image from that image frame and the image frames generated by the first application before it, and display the predicted next frame image.
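The frame-by-frame example above corresponds to a loop of the following shape. This is a simplified sketch in which the refresh timing is abstracted away, and the predict and display callables are placeholders for the prediction and display steps of the embodiment.

```python
from collections import deque

def run_prediction_display(app_frames, predict, display, set_number=5):
    """Frames 1..5 shown on the ink screen are the application's own frames; once
    frame 5 exists, each newly generated application frame completes a window of
    `set_number` consecutive frames whose prediction is displayed instead of the
    application's next frame (1..5 -> predicted 6, 2..6 -> predicted 7, ...)."""
    window = deque(maxlen=set_number)
    for index, frame in enumerate(app_frames, start=1):   # frames generated by the application
        window.append(frame)
        if index < set_number:
            display(frame)                 # not enough history yet: show the real frame
            continue
        if index == set_number:
            display(frame)                 # frame 5 itself is still the application's frame
        display(predict(list(window)))     # predicted next frame shown in place of the app's next
```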
In some embodiments of the present application, the electronic device may predict the first image frame from the acquired plurality of image frames using a trained image prediction model, where the image prediction model represents the relationship between a plurality of consecutive image frames and the next image frame after the last image frame of those frames. After the electronic device acquires the plurality of image frames, it can input them into the trained image prediction model to obtain an output image frame, which can be used as the next image frame after the last image frame of the plurality of image frames.
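The embodiment does not disclose the structure of the image prediction model; the sketch below substitutes a simple linear extrapolation of the last two frames as a placeholder so that the data flow is concrete. A real implementation would instead run inference on the trained model.

```python
import numpy as np

def predict_first_image_frame(frames):
    """Placeholder for the trained image prediction model: extrapolate the newest
    change (f_n + (f_n - f_{n-1})), clipped to valid pixel values. For a growing
    handwriting stroke this simply extends the most recent ink."""
    last = frames[-1].astype(np.int16)
    prev = frames[-2].astype(np.int16)
    return np.clip(last + (last - prev), 0, 255).astype(np.uint8)
```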
In some embodiments of the present application, before predicting the first image frame according to the plurality of image frames, the electronic device may further determine whether the ink screen is in a handwriting state. If so, the electronic device may predict the first image frame according to the plurality of image frames; otherwise, the electronic device may not perform the step of predicting the first image frame according to the plurality of image frames and the subsequent steps, and only the first application performs the task of displaying the image frames.
When the electronic device includes the display unit and the first application, and it is determined that the ink screen is not in a handwriting state, the display unit may transmit the image frame to be displayed to the driver of the ink screen by calling the mobile industry processor interface (MIPI), so that the driver of the ink screen drives the ink screen to display the corresponding interface according to the image frame.
The electronic device can detect whether the ink screen is in a handwriting state through the system service. For example, an input subsystem in the system service of the electronic device may detect input operations acting on the ink screen and report corresponding information to the system. The display unit in the electronic device can query the information reported by the input subsystem and determine, based on this information, whether the ink screen is in a handwriting state. The information reported by the input subsystem may include the type of input device (e.g., finger, stylus, trackball, mouse, etc.), the type of input event (e.g., down, up, slide, etc.), and so on. Of course, the electronic device may also determine whether the ink screen is in a handwriting state in other manners, which is not specifically limited in the embodiments of the present application.
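For illustration only, a check built on the reported input-subsystem information might look like the sketch below; the field names and the rule itself (a pressed or sliding finger/stylus means handwriting) are assumptions, since the embodiment leaves the exact criterion open.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class InputEventInfo:
    device_type: str    # e.g. "finger", "stylus", "trackball", "mouse"
    event_type: str     # e.g. "down", "up", "slide"

def ink_screen_in_handwriting_state(latest: Optional[InputEventInfo]) -> bool:
    """Treat the ink screen as being in a handwriting state while a finger or
    stylus is down or sliding."""
    if latest is None:
        return False
    return latest.device_type in ("finger", "stylus") and latest.event_type in ("down", "slide")
```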
In some embodiments of the present application, when the electronic device includes the display unit and the first application, step S402 may be performed by the display unit. Further, when the display unit is the content display service shown in fig. 3, step S402 may be performed by the frame buffer learning module in the display system.
S403: The electronic device updates the first interface corresponding to the second image frame to the second interface corresponding to the first image frame.
In some embodiments of the present application, as an optional implementation manner, the electronic device may update the first interface corresponding to the second image frame to the second interface corresponding to the first image frame in a global update manner. Specifically, after the electronic device predicts the first image frame, the electronic device may directly replace, according to the first image frame, the first interface corresponding to the second image frame displayed on the ink screen with the second interface corresponding to the first image frame.
When the electronic device includes the display unit and the first application, the display unit may transmit the first image frame to the driver of the ink screen by calling the MIPI interface after the display unit obtains the first image frame by executing the steps S401 to S402, so that the driver of the ink screen drives the ink screen to display the second interface corresponding to the first image frame according to the first image frame.
In the global updating manner, the display unit needs to transmit the whole image frame to the display screen for displaying, so that the data amount required to be transmitted is large. The MIPI interface has the advantage of supporting the transmission of larger data volume, so that the data transmission requirement of a global updating mode can be met by transmitting data through the MIPI interface, and the normal execution of the updating flow of the display interface on the ink screen is ensured.
As another optional implementation, the electronic device may update the first interface corresponding to the second image frame to the second interface corresponding to the first image frame in a local update manner. Specifically, after predicting the first image frame, the electronic device may determine, according to the second image frame and the first image frame, a first target image area (i.e., the dirty region) in which the first image frame changes relative to the second image frame. The electronic device then partially updates, according to the image content in the first target image area, the first interface corresponding to the second image frame displayed on the ink screen, so that the updated interface displayed on the ink screen is the second interface corresponding to the first image frame, thereby achieving the effect of updating the first interface to the second interface. The electronic device may replace the image content in the area of the first interface that corresponds to the first target image area with the image content of the first target image area in the first image frame, so that the updated interface displayed on the ink screen is the second interface corresponding to the first image frame.
For example, when the electronic device includes the display unit and the first application, after obtaining the first image frame by performing steps S401 to S402, the display unit may determine the image content in the first target image area, in which the first image frame changes relative to the second image frame, by comparing the first image frame and the second image frame. The display unit may then transmit the image content in the first target image area to the driver of the ink screen by calling the serial peripheral interface (SPI), so that the driver of the ink screen drives the ink screen to replace the image content of the first interface in the area corresponding to the first target image area with the image content in the first target image area.
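One simple way to determine the first target image area is to take the bounding rectangle of all pixels that differ between the two frames, as in the sketch below; representing the dirty region as a single rectangle is an assumption for illustration, and the frames are treated as numpy arrays.

```python
import numpy as np

def compute_dirty_region(second_frame, first_frame):
    """Return (left, top, right, bottom) of the smallest rectangle covering every
    pixel in which the predicted first frame differs from the displayed second
    frame, or None if the frames are identical."""
    diff = second_frame != first_frame
    if diff.ndim == 3:                       # collapse the channel axis of RGB frames
        diff = np.any(diff, axis=-1)
    ys, xs = np.nonzero(diff)
    if ys.size == 0:
        return None
    return int(xs.min()), int(ys.min()), int(xs.max()) + 1, int(ys.max()) + 1
```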
Optionally, before transmitting the image content in the first target image area to the driver of the ink screen through the SPI interface, the electronic device may also determine that the image content in the first target image area satisfies the SPI transmission condition. The SPI transmission condition is that the data amount of the image content is smaller than or equal to a set data amount threshold, and the set data amount threshold may be the maximum data amount that the SPI interface can carry. In this way, smooth execution of the interface update can be ensured.
Optionally, after determining the image content in the first target image area, if the data amount of the image content in the first target image area is greater than the set data amount threshold, the electronic device may transmit the image content in the first target image area by using the MIPI interface, so as to ensure smooth execution of the interface update. Alternatively, the electronic device may not transmit the image content in the first target image area to the ink screen, that is, stop the current flow, and the first application may then generate the image frame to be displayed by itself and display the corresponding interface.
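The choice between the SPI and MIPI paths can be sketched as below; spi_max_bytes stands in for the set data amount threshold, whose actual value is not given in the embodiment, and the two send callables are placeholders for the respective driver paths.

```python
def transmit_update(region, region_bytes, full_frame_bytes, spi_send, mipi_send,
                    spi_max_bytes=64 * 1024):     # assumed threshold for illustration
    """Send the dirty region over SPI when it fits under the threshold; otherwise
    fall back to sending the whole predicted frame over MIPI (global update)."""
    if region is not None and len(region_bytes) <= spi_max_bytes:
        spi_send(region, region_bytes)            # partial update: small payload, fast
    else:
        mipi_send(full_frame_bytes)               # global update over the MIPI interface
```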
The SPI interface can only transmit a small amount of data, but its transmission speed is high. In the local update mode, the display unit only needs to transmit the changed content in the image frame to the display screen for update and display, so the amount of data to be transmitted is small. Transmitting the data through the SPI interface is therefore feasible in this scenario, and the faster transmission speed can increase the refresh rate of the ink screen when updating the displayed content, thereby supporting a smooth handwriting service for the user and improving the user experience.
Optionally, in the above method, the display unit in the electronic device may also transmit the image frame or the image content of the target area in the image frame to the display screen for display by using other interfaces according to the actual service requirement, which is not specifically limited in the embodiment of the present application.
In some embodiments of the present application, when the electronic device includes the display unit and the first application, step S403 may be performed by the display unit. Further, when the display unit is the content display service shown in fig. 3, step S403 may be performed by the frame buffer update module in the display system.
In some embodiments of the present application, the electronic device may start executing the method provided by the embodiments of the present application when it detects a handwriting operation performed by the user. While executing the method, the electronic device can continuously acquire the image frames generated by the first application until it detects that the user stops the handwriting operation. In this process, each time the electronic device obtains an image frame generated by the first application, it may predict the next image frame from that image frame and at least one image frame preceding it, and display the predicted next image frame on the ink screen. The at least one image frame is an image frame generated by the first application (i.e., an image frame to be displayed that the first application generates by invoking the system service of the electronic device).
In the prior art, the changed area of the display interface is determined and refreshed in the drawing stage of the application process. In contrast, in the method provided by this embodiment, the changed area of the display interface is predicted and refreshed in the display stage of the system display process, so only the content displayed on the ink screen needs to be obtained, and no application-specific adaptation is required as in the prior art. The implementation difficulty can therefore be greatly reduced, and the universality and practicability of the scheme are improved. In the scheme provided by the embodiments of the present application, the display content of a third party application can be updated in the local update manner, so that the content displayed by the third party application can be refreshed quickly, which further reduces the time delay of refreshing the content of the third party application on the ink screen and improves display smoothness.
The above method will be described with reference to specific examples.
Take the case in which the content display method provided in the embodiments of the present application is applied to the content display system described in fig. 3, the application in the electronic device is a third party application, and the user draws display content with a handwriting pen. Referring to fig. 5, the flow of the content display method provided in the embodiments of the present application may include:
S501: a scene recognition system in the electronic device detects and recognizes the type of the third party application running in the foreground and the state of the stylus.
S502: when the scene recognition system in the electronic equipment determines that the type of the third party application running in the foreground is a set type and the handwriting pen state is falling, the scene information is notified to the display system.
The set type may be a handwriting office type or the like.
In some embodiments of the present application, when the electronic device adopts the Android system architecture and the display system determines, according to the information from the scene recognition system, that the type of the third party application currently running in the foreground is a handwriting office type and the handwriting pen is in the pen-down state, a graphics processing unit (GPU) composition mode may be forcibly enabled. In this mode, the complete image frames of the third party application can be obtained and sent to the display system in the process of generating the image frames and displaying the corresponding interfaces.
S503: a display system in the electronic device instructs a third party application to display an interface through the display system.
Wherein this step may be performed by the display control module in the display system. When the display control module determines that the type of the third party application running in the current foreground is a set type and the handwriting pen is in the pen-down state, it may decide to use the method of predicting the next image frame to be displayed from a plurality of continuous image frames generated by the third party application and updating the interface display according to the predicted image frame. When the display control module determines that the type of the third party application running in the current foreground is not the set type, or that the handwriting pen is not in the pen-down state, the image frames may be generated and the corresponding interfaces displayed in the conventional manner, that is, the third party application generates the image frames itself and calls the system service to display the corresponding interfaces.
S504: and after receiving handwriting operation of the handwriting pen on the ink screen, a third party application in the electronic equipment invokes the graph drawing system to draw a handwriting track and generates a plurality of image frames corresponding to the handwriting track.
Wherein each image frame of the plurality of image frames includes a portion of the handwriting track. The third party application can draw the handwriting track and generate the corresponding image frames by calling the native drawing interface of the electronic device.
S505: the graphics rendering system in the electronic device sequentially sends the generated plurality of image frames to the display system.
Alternatively, the graphics rendering system may send multiple image frames sequentially to the display system via a buffer queue (buffer queue) mechanism.
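As a rough analogue of the buffer queue mechanism (the real Android BufferQueue is a native producer/consumer component; the sketch below only mirrors its ordering behavior with a standard-library queue and hypothetical callables):

```python
import queue
import threading

frame_queue = queue.Queue(maxsize=3)   # assumed queue depth for illustration

def producer_loop(render_next_frame, stop: threading.Event):
    """Graphics drawing side: enqueue each newly drawn frame in generation order."""
    while not stop.is_set():
        frame_queue.put(render_next_frame())

def consumer_loop(handle_frame, stop: threading.Event):
    """Display system side: dequeue frames in order for prediction and display."""
    while not stop.is_set():
        try:
            handle_frame(frame_queue.get(timeout=0.1))
        except queue.Empty:
            continue
```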
S506: the display system in the electronic device determines whether the electronic device is currently in a handwriting state, if so, step S507 is executed, and if not, step S511 is executed.
Wherein this step may be performed by a frame buffer learning module in the display system.
S507: and predicting the image frame to be displayed of the next frame according to the plurality of image frames by a display system in the electronic equipment.
Wherein this step may be performed by a frame buffer learning module in the display system.
S508: a display system in the electronic device calculates a dirty region of the image to be displayed of the next frame and an image of a frame preceding the frame.
Wherein this step may be performed by a frame buffer learning module in the display system.
S509: when the display system in the electronic equipment judges that the dirty area meets the SPI display sending condition, the information of the dirty area is sent to the ink screen driver by utilizing the SPI interface.
Wherein this step may be performed by a frame buffer update module in the display system.
S510: and the ink screen driver in the electronic equipment locally updates the currently displayed interface according to the information of the dirty area so as to update the display interface on the ink display screen.
S511: and the electronic equipment sends the image frame to be displayed of the next frame to the ink screen driver through the MIPI interface.
S512: and the ink screen driver globally updates the currently displayed interface according to the image frame to be displayed in the next frame, so as to update the display interface on the ink screen.
In the above method, a third party application (i.e., one that has not adapted to the manufacturer-customized API) can refresh its display interface by using the system-level partial refresh display method, and the method does not need to be adapted to each third party application, so the implementation difficulty can be greatly reduced, and the universality and practicability of the scheme are improved. Meanwhile, when the scheme is applied to an ink screen with a low refresh rate, the writing time delay can be greatly reduced and the refresh rate improved, thereby improving the writing experience.
The specific implementation of each step in the above flow may refer to the related description in the foregoing embodiments, which is not repeated here.
It should be noted that, the specific implementation process provided by the above embodiment is merely an illustration of a process flow applicable to the embodiment of the present application, where the execution sequence of each step may be adjusted accordingly according to actual needs, and other steps may be added or some steps may be reduced.
Based on the above embodiments and the same concept, the embodiments of the present application further provide a content display method, as shown in fig. 6, the method may include:
S601: A display unit in an operating system of the electronic device acquires a continuous plurality of image frames belonging to a first application in response to a handwriting operation acting on the ink screen.
For example, for the electronic device, the display unit, and the first application, reference may be made to the relevant description in the previous embodiments, which is not repeated here. The plurality of image frames may be the image frames whose generation is controlled by the first application as described in the above embodiments. For the method by which the display unit acquires the plurality of image frames, reference may be made to the method described in the foregoing embodiments, which is not repeated here.
In some embodiments of the present application, the handwriting operation is an operation within a first display area acting on the ink screen; the first display area is a display area where the second image frame is located.
S602: the display unit predicts a first image frame from the plurality of image frames; wherein the first image frame is a predicted image frame of a next image frame to a last image frame of the plurality of image frames.
The method for predicting the first image frame by the display unit according to the plurality of image frames may be described with reference to the related description in the above embodiment, which is not repeated herein.
S603: the display unit updates a second image frame displayed on the ink screen to the first image frame.
In some embodiments of the present application, the second image frame is a predicted image frame of a last image frame of the plurality of image frames.
As an alternative implementation, the display unit may update the second image frame to the first image frame in a local update manner. Specifically, the display unit may determine first target image data according to the first image frame and the second image frame, where the first target image data indicates the image content of the first image frame that changes relative to the second image frame; the second image frame displayed on the ink screen may then be updated to the first image frame based on the first target image data. During the specific update, the display unit may use the serial peripheral interface to send the first target image data to the ink screen and drive the ink screen to replace second target content in the second image frame with first target content, where the first target content is the image content indicated by the first target image data, and the second target content is the content of the second image frame that differs from the first image frame.
Optionally, before adopting the local update manner, the display unit may determine that the data amount of the first target image data is less than or equal to a set data amount threshold. If the display unit determines that the data amount of the first target image data is greater than the set data amount threshold, a global update manner may be adopted. Specifically, the display unit may send the first image frame to the ink screen using the mobile industry processor interface and drive the ink screen to replace the second image frame with the first image frame.
In the above method, the specific implementation of each step executed by the display unit in the electronic device may refer to the description related to the foregoing embodiment, which is not repeated herein.
Based on the above embodiments and the same concept, the embodiments of the present application further provide an electronic device, where the electronic device is used to implement the content display method provided by the embodiments of the present application. As shown in fig. 7, an electronic device 700 may include: a display 701, a memory 702, one or more processors 703, and one or more computer programs (not shown). The devices described above may be coupled by one or more communication buses 704.
The display 701 is an ink screen, and is used for displaying a user interface such as an application interface.
The memory 702 has stored therein one or more computer programs (code) comprising computer instructions; the one or more processors 703 invoke computer instructions stored in the memory 702 to cause the electronic device 700 to perform the content display methods provided by embodiments of the present application.
In specific implementations, the memory 702 may include a high-speed random access memory, and may also include a non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state storage devices. The memory 702 may store an operating system (hereinafter referred to as a system), such as ANDROID, IOS, or WINDOWS, or an embedded operating system such as LINUX. The memory 702 may be used to store the implementation programs of the embodiments of the present application, and may also store network communication programs that may be used to communicate with one or more additional devices, one or more user devices, and one or more network devices.
The one or more processors 703 may be a general-purpose central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits for controlling the execution of programs according to the solutions of the present application.
It should be noted that fig. 7 is merely an implementation of the electronic device 700 provided in the embodiment of the present application, and in practical application, the electronic device 700 may further include more or fewer components, which is not limited herein.
Based on the above embodiments and the same conception, the present application embodiment also provides a computer-readable storage medium storing a computer program, which when run on a computer, causes the computer to perform the method provided by the above embodiments.
Based on the above embodiments and the same conception, the present application embodiment also provides a computer program product comprising a computer program or instructions for causing a computer to perform the method provided by the above embodiments when the computer program or instructions are run on the computer.
The methods provided in the embodiments of the present application may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions according to the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, a network device, a user device, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by a wired medium (e.g., coaxial cable, optical fiber, or digital subscriber line (DSL)) or a wireless medium (e.g., infrared, radio, or microwave). The usable medium may be any available medium that the computer can access, or a data storage device such as a server or data center that integrates one or more usable media, for example, a magnetic medium (e.g., a floppy disk, hard disk, or magnetic tape), an optical medium (e.g., a digital video disc (DVD)), or a semiconductor medium (e.g., an SSD).
It will be apparent to those skilled in the art that various modifications and variations can be made in the present application without departing from the scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims and the equivalents thereof, the present application is intended to cover such modifications and variations.

Claims (11)

1. A content display method applied to a display unit in an operating system of an electronic device, the method comprising:
acquiring a continuous plurality of image frames belonging to a first application in response to a handwriting operation acting on the ink screen;
predicting a first image frame from the plurality of image frames; wherein the first image frame is a predicted image frame of a next image frame to a last image frame of the plurality of image frames;
and updating the second image frame displayed on the ink screen to the first image frame.
2. The method of claim 1, wherein the updating the second image frame displayed on the ink screen to the first image frame comprises:
determining first target image data from the first image frame and the second image frame; wherein the first target image data is used for indicating image content of the first image frame which changes relative to the second image frame;
And updating the second image frame displayed on the ink screen to the first image frame according to the first target image data.
3. The method of claim 2, wherein the updating the second image frame displayed on the ink screen to the first image frame based on the first target image data comprises:
transmitting the first target image data to the ink screen by using a serial peripheral interface, and driving the ink screen to replace second target content in the second image frame by first target content;
the first target content is the image content indicated by the first target image data, and the second target content is the content different from the first image frame in the second image frame.
4. The method of claim 3, wherein prior to transmitting the first target image data to the ink screen using a serial peripheral interface, the method further comprises:
and determining that the data amount of the first target image data is smaller than or equal to a set data amount threshold.
5. The method of claim 2, wherein the updating the second image frame displayed on the ink screen to the first image frame based on the first target image data comprises:
When the data amount of the first target image data is determined to be larger than the set data amount threshold, the first image frame is sent to the ink screen by using a mobile industry processor interface, and the ink screen is driven to replace the second image frame by the first image frame.
6. The method according to any one of claims 1 to 5,
the first application is any application in a set white list, and the white list comprises at least one application; and/or
The first application is a set type of application.
7. The method of any of claims 1-6, wherein the second image frame is a predicted image frame of a last image frame of the plurality of image frames.
8. The method of any one of claims 1 to 7, wherein the handwriting operation is an operation within a first display area acting on the ink screen; the first display area is a display area where the second image frame is located.
9. The method of any of claims 1-8, wherein predicting the first image frame from the plurality of image frames comprises:
determining the first image frame according to the plurality of image frames and an image prediction model; wherein the image prediction model is used to represent a relationship between a consecutive plurality of image frames and a next image frame of a last image frame of the plurality of image frames.
10. An electronic device comprising an ink screen, a memory, and one or more processors;
wherein the memory is for storing computer program code, the computer program code comprising computer instructions; the computer instructions, when executed by the one or more processors, cause the electronic device to perform the method of any of claims 1-9.
11. A computer readable storage medium, characterized in that the computer readable storage medium stores a computer program which, when run on an electronic device, causes the electronic device to perform the method according to any one of claims 1-9.