EP4664280A1 - Image display method, image display apparatus, and electronic device - Google Patents
- Publication number
- EP4664280A1 (application number EP24841947.5A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- window
- frame
- composite
- interface layer
- hardware interface
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/451—Execution arrangements for user interfaces
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—Two-dimensional [2D] image generation
- G06T11/20—Drawing from basic elements
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—Two-dimensional [2D] image generation
- G06T11/20—Drawing from basic elements
- G06T11/26—Drawing of charts or graphs
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G5/00—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
- G09G5/14—Display of multiple viewports
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G5/00—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
- G09G5/36—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
- G09G5/39—Control of the bit-mapped memory
- G09G5/395—Arrangements specially adapted for transferring the contents of the bit-mapped memory to the screen
- G09G5/397—Arrangements specially adapted for transferring the contents of two or more bit-mapped memories to the screen simultaneously, e.g. for mixing or overlay
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2340/00—Aspects of display data processing
- G09G2340/04—Changes in size, position or resolution of an image
- G09G2340/0407—Resolution change, inclusive of the use of different resolutions for different screen areas
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2340/00—Aspects of display data processing
- G09G2340/04—Changes in size, position or resolution of an image
- G09G2340/0407—Resolution change, inclusive of the use of different resolutions for different screen areas
- G09G2340/0435—Change or adaptation of the frame rate of the video stream
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2340/00—Aspects of display data processing
- G09G2340/10—Mixing of images, i.e. displayed pixel being the result of an operation, e.g. adding, on the corresponding input pixels
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2354/00—Aspects of interface with display user
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2360/00—Aspects of the architecture of display systems
- G09G2360/08—Power processing, i.e. workload management for processors involved in display operations, such as CPUs or GPUs
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/443—OS processes, e.g. booting an STB, implementing a Java virtual machine in an STB or power management in an STB
- H04N21/4438—Window management, e.g. event handling following interaction with the user interface
Definitions
- Embodiments of this application relate to the field of electronic devices, and more specifically, to an image display method, an image display apparatus, and an electronic device.
- Embodiments of this application provide an image display method, an image display apparatus, and an electronic device, to ensure that a focus window is preferentially visible, and reduce occurrence of lagging and freezing of the focus window in a heavy-load scenario.
- an image display method includes: obtaining a first window and a second window on a first interface, where the first window is a focus window, and the second window is a non-focus window; drawing the first window in a first frame buffer, and drawing the second window in a second frame buffer, where the first window and the second window are drawn by a process of an operating system; compositing the first frame buffer through a first hardware interface layer to obtain a first composite frame, and compositing the second frame buffer through a second hardware interface layer to obtain a second composite frame; and controlling, based on the first composite frame and the second composite frame, a display apparatus to display a second interface.
- the first window and the second window on the first interface may be stacked.
- a z-order of the first window is higher than that of the second window.
- the first interface may further include a third window.
- the third window may be a mouse and/or a system user interface (user interface, UI) window.
- the first window and the third window may be drawn in the first frame buffer.
- the first interface may include one second window, or may include a plurality of second windows.
- when the first interface includes a plurality of second windows, all of the plurality of second windows may be drawn in the second frame buffer during window drawing.
- the image display method may be applied to an intelligent device such as a mobile phone, a tablet computer, or a personal computer.
- the first window may be a window corresponding to an application 1 of the intelligent device
- the second window may be a window corresponding to an application 2 of the intelligent device.
- drawing a window in a frame buffer may be understood as follows: in a graphics rendering procedure, drawing visible content of the window in the frame buffer.
- the frame buffer may be a memory area for storing image data.
- the first frame buffer and the second frame buffer may be understood as two different memory areas for storing image data, and have different memory addresses.
- the process of the operating system may be a rendering service process, and the rendering service process may be used to receive a rendering task sent by each application, convert the rendering task into a command that can be executed by a GPU, and perform data transmission with a CPU, to implement efficient graphics rendering and window drawing.
- the rendering service process is merely an example for description. Any process in the operating system that can be used to draw the first window and the second window may be understood as the rendering service process in this application.
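The role of such a rendering service process can be pictured with a minimal sketch. This is an illustrative model only: `rendering_service`, `translate`, and `submit` are hypothetical names standing in for the task-to-command conversion and the CPU/GPU data transfer described above, not real OS APIs.

```python
import queue

def rendering_service(task_queue, translate, submit):
    """Hypothetical sketch of a rendering service process: it receives
    rendering tasks sent by applications, converts each task into
    GPU-executable commands, and submits them for execution."""
    results = []
    while not task_queue.empty():
        app, task = task_queue.get()
        commands = translate(task)          # convert the task into GPU commands
        results.append(submit(app, commands))  # hand the commands to the GPU
    return results

# Usage: two applications each enqueue one rendering task.
tasks = queue.Queue()
tasks.put(("app1", "draw_rect"))
tasks.put(("app2", "draw_text"))
out = rendering_service(tasks, str.upper, lambda app, cmds: (app, cmds))
```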
- compositing a frame buffer through a hardware interface layer may be understood as follows:
- the hardware interface layer combines and processes pixel data in the frame buffer, to generate a final display frame (for example, the first composite frame and the second composite frame).
- the display frame may be displayed on the display apparatus. Therefore, compositing the first frame buffer through the first hardware interface layer may be understood as follows: the first hardware interface layer processes the pixel data corresponding to the first window in the first frame buffer.
- when the first interface further includes the third window, compositing the first frame buffer through the first hardware interface layer may be understood as follows: the first hardware interface layer processes the pixel data corresponding to the first window and the third window in the first frame buffer.
- a focus window is drawn in the first frame buffer, and a non-focus window is drawn in the second frame buffer.
- This window drawing manner helps avoid drawing all windows in a same frame buffer, thereby avoiding lagging or impact of a rendering procedure of the non-focus window on drawing of the focus window.
- the first frame buffer is composited through the first hardware interface layer, and the second frame buffer is composited through the second hardware interface layer. This composition manner ensures preferential visibility of the focus window. Even if the non-focus window lags or freezes, drawing and rendering procedures of the focus window are not affected.
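The layered drawing and composition scheme above can be sketched as follows. `FrameBuffer`, `HardwareLayer`, and `render_interface` are illustrative stand-ins for the two frame buffers and the two hardware interface layers, not real OS or HAL APIs.

```python
class FrameBuffer:
    """A distinct memory area holding the pixel data of one group of windows."""
    def __init__(self, name):
        self.name = name
        self.windows = []

    def draw(self, window):
        self.windows.append(window)

class HardwareLayer:
    """Composites one frame buffer into a display frame."""
    def __init__(self, fb):
        self.fb = fb

    def composite(self):
        # Combine the pixel data of all windows drawn in this buffer.
        return {"source": self.fb.name, "content": list(self.fb.windows)}

def render_interface(focus_window, non_focus_windows):
    fb1, fb2 = FrameBuffer("first"), FrameBuffer("second")
    fb1.draw(focus_window)          # the focus window goes to the first buffer
    for w in non_focus_windows:     # all non-focus windows share the second buffer
        fb2.draw(w)
    layer1, layer2 = HardwareLayer(fb1), HardwareLayer(fb2)
    return layer1.composite(), layer2.composite()
```

Because the focus window never shares a buffer or a compositing path with the non-focus windows, a stall in the second path cannot block the first.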
- compositing the first frame buffer through the first hardware interface layer to obtain the first composite frame, and compositing the second frame buffer through the second hardware interface layer to obtain the second composite frame include: compositing the first frame buffer at a first frame rate through the first hardware interface layer, to obtain the first composite frame; and compositing the second frame buffer at a second frame rate through the second hardware interface layer, to obtain the second composite frame, where the second frame rate is less than or equal to the first frame rate.
- the first frame rate may be 60 frames per second (frames per second, FPS), and the second frame rate may be less than or equal to 60 FPS.
- when the drawing load of the non-focus window is heavy, the second hardware interface layer may composite the second frame buffer at a lower second frame rate, while the first hardware interface layer may always composite the first frame buffer at the higher first frame rate.
- the system can preferentially ensure a rendering and composition frame rate of the focus window. Because the first frame buffer is composited through the first hardware interface layer and is operated at a high frame rate, rendering and display of the focus window are not affected by the drawing load of the non-focus window.
- the second hardware interface layer composites the second frame buffer at a low frame rate. In this way, drawing pressure of the non-focus window can be alleviated, and it is ensured that rendering and interaction of the focus window can be continuously and smoothly performed.
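The frame-rate decoupling can be sketched as a vsync-driven schedule. This is a hypothetical illustration: the function and its parameters are assumed names, and the example assumes the second rate evenly divides the first.

```python
def composite_schedule(total_vsyncs, first_fps=60, second_fps=30):
    """Hypothetical schedule: the first hardware interface layer composites
    on every vsync; the second composites only often enough to meet its
    lower frame rate, alleviating non-focus drawing pressure."""
    stride = first_fps // second_fps  # composite FB2 once every `stride` vsyncs
    schedule = []
    for vsync in range(total_vsyncs):
        schedule.append({
            "vsync": vsync,
            "composite_first": True,                   # focus buffer: always
            "composite_second": vsync % stride == 0,   # non-focus: reduced rate
        })
    return schedule
```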
- the first composite frame includes a frame composited by the first hardware interface layer in an (N+1)th frame
- the second composite frame includes a frame composited by the second hardware interface layer in an Nth frame, where N is a positive integer
- controlling, based on the first composite frame and the second composite frame, the display apparatus to display the second interface includes: when a frame rate at which the second hardware interface layer performs composition in the (N+1)th frame is less than or equal to a preset threshold, controlling, based on the frame composited by the first hardware interface layer in the (N+1)th frame and the frame composited by the second hardware interface layer in the Nth frame, the display apparatus to display the second interface.
- the case in which the frame rate at which the second hardware interface layer composites the second frame buffer is less than the preset threshold may be understood as follows: when the non-focus window is rendered and composited, the load of the electronic device is heavy, resulting in lagging.
- for example, the preset threshold may be 60 FPS or 30 FPS.
- when the second hardware interface layer composites the second frame buffer in the (N+1)th frame and lagging occurs, the second hardware interface layer may send the composite frame of the Nth frame to the display apparatus for display, while the first hardware interface layer still sends the composite frame of the (N+1)th frame to the display apparatus for display.
- This processing manner can ensure that the display apparatus can correctly display the second interface when the non-focus window lags.
- the second hardware interface layer sends the composite frame of the Nth frame to the display apparatus for display, so that the user can still view the latest visible content when rendering of the non-focus window lags, and the displayed picture does not stall.
- the first hardware interface layer still sends the composite frame of the (N+1)th frame to the display apparatus for display. This means that continuity and smoothness of rendering and display of the focus window can still be maintained, and rendering and display of the focus window are not affected by lagging of the non-focus window. The user may continue to interact with the focus window, and the operation response speed of the focus window is not significantly reduced.
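The fallback logic above reduces to a simple frame-selection rule. The sketch below is an assumed illustration (function and parameter names are hypothetical): the first layer always presents its (N+1)th frame, while the second layer falls back to its Nth frame when its composition rate drops to or below the preset threshold.

```python
def select_display_frames(n, second_rate, threshold=60):
    """Hypothetical frame selection when non-focus composition lags.
    Returns (frame shown by first layer, frame shown by second layer)."""
    first_frame = n + 1                                 # focus: always (N+1)th
    second_frame = n if second_rate <= threshold else n + 1  # non-focus fallback
    return first_frame, second_frame
```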
- obtaining the first window and the second window on the first interface includes: detecting, on the first interface, a first input for the first window; and obtaining the first window and the second window in response to the first input.
- the first input may be moving or scaling the first window.
- when the user operates the focus window, the electronic device may obtain the first window and the second window, to perform subsequent layered drawing and rendering operations. In this way, the electronic device can avoid the impact of displacement and a size change of the focus window on the dirty region of the first interface, thereby avoiding fluctuation of the rendering frame rate of the focus window. This manner ensures the drawing priority of the focus window, and improves the operation response and visual effect during user interaction.
- the method further includes: detecting, on the second interface, a second input for the second window; in response to the second input, drawing the second window in the first frame buffer, and drawing the first window in the second frame buffer; compositing the first frame buffer through the first hardware interface layer to obtain a third composite frame, and compositing the second frame buffer through the second hardware interface layer to obtain a fourth composite frame; and controlling, based on the third composite frame and the fourth composite frame, the display apparatus to display a third interface.
- the second input may be moving or scaling the second window.
- the second window is switched to a focus window, and the first window is switched to a non-focus window.
- when detecting that the user operates the second window, the electronic device may draw the second window in the first frame buffer, and draw the first window in the second frame buffer. In this way, when the user operates the second window, the electronic device can ensure preferential visibility of the second window, and reduce the occurrence of lagging and freezing of the second window. In addition, the user can interact more smoothly with the window of interest, thereby improving user experience and real-time responsiveness to operations.
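The focus switch amounts to reassigning windows between the two buffers. A minimal sketch, with assumed names (`reassign_buffers` is not a real API): whichever window currently has focus is drawn in the first frame buffer, and every other window goes to the second.

```python
def reassign_buffers(windows, focused):
    """Hypothetical reassignment on a focus change: the focused window is
    drawn in the first frame buffer; all other windows go to the second."""
    first_fb = [w for w in windows if w == focused]
    second_fb = [w for w in windows if w != focused]
    return first_fb, second_fb

# Usage: after the second input, the second window becomes the focus window.
first_fb, second_fb = reassign_buffers(["window1", "window2"], focused="window2")
```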
- an image display apparatus includes: an obtaining unit, configured to obtain a first window and a second window on a first interface, where the first window is a focus window, and the second window is a non-focus window; and a processing unit, configured to: draw the first window in a first frame buffer, and draw the second window in a second frame buffer, where the first window and the second window are drawn by a process of an operating system; composite the first frame buffer through a first hardware interface layer to obtain a first composite frame, and composite the second frame buffer through a second hardware interface layer to obtain a second composite frame; and control, based on the first composite frame and the second composite frame, a display apparatus to display a second interface.
- the processing unit is specifically configured to: composite the first frame buffer at a first frame rate through the first hardware interface layer, to obtain the first composite frame; and composite the second frame buffer at a second frame rate through the second hardware interface layer, to obtain the second composite frame, where the second frame rate is less than or equal to the first frame rate.
- the first composite frame includes a frame composited by the first hardware interface layer in an (N+1)th frame
- the second composite frame includes a frame composited by the second hardware interface layer in an Nth frame, where N is a positive integer
- the processing unit is specifically configured to: when a frame rate at which the second hardware interface layer performs composition in the (N+1)th frame is less than or equal to a preset threshold, control, based on the frame composited by the first hardware interface layer in the (N+1)th frame and the frame composited by the second hardware interface layer in the Nth frame, the display apparatus to display the second interface.
- the processing unit is further configured to detect, on the first interface, a first input for the first window; and the obtaining unit is specifically configured to obtain the first window and the second window in response to the first input.
- the processing unit is further configured to: detect, on the second interface, a second input for the second window; in response to the second input, draw the second window in the first frame buffer, and draw the first window in the second frame buffer; composite the first frame buffer through the first hardware interface layer to obtain a third composite frame, and composite the second frame buffer through the second hardware interface layer to obtain a fourth composite frame; and control, based on the third composite frame and the fourth composite frame, the display apparatus to display a third interface.
- a rendering method includes: obtaining first information sent by a CPU, where the first information includes a first window and a second window, the first window is a focus window, and the second window is a non-focus window; drawing the first window in a first frame buffer, and drawing the second window in a second frame buffer, where the first window and the second window are drawn by a process of an operating system; and sending data in the first frame buffer and the second frame buffer to a hardware composer.
- a rendering apparatus includes: an obtaining unit, configured to obtain first information sent by a CPU, where the first information includes a first window and a second window, the first window is a focus window, and the second window is a non-focus window; a processing unit, configured to: draw the first window in a first frame buffer, and draw the second window in a second frame buffer, where the first window and the second window are drawn by a process of an operating system; and a transceiver unit, configured to send data in the first frame buffer and the second frame buffer to a hardware composer.
- an image display apparatus includes at least one processor and a memory.
- the at least one processor is coupled to the memory, and is configured to read and execute instructions in the memory, to enable the apparatus to implement the method in any implementation of the first aspect.
- a computer-readable storage medium stores computer program code, and when the computer program code is run on a computer, the computer is enabled to perform the method in any implementation of the first aspect or the third aspect.
- a chip is provided.
- the chip includes a circuit, and the circuit is configured to perform the method in any implementation of the first aspect or the third aspect.
- a computer program product includes a computer program.
- when the computer program is run, a computer is enabled to perform the method in any implementation of the first aspect or the third aspect.
- an electronic device includes the apparatus in any implementation of the second aspect or the fourth aspect.
- a method provided in embodiments of this application is applied to an electronic device, and the electronic device includes but is not limited to a mobile phone, a tablet computer, a vehicle-mounted device, a wearable device, an augmented reality (augmented reality, AR)/virtual reality (virtual reality, VR) device, a notebook computer, an ultra-mobile personal computer (ultra-mobile personal computer, UMPC), a netbook, a personal digital assistant (personal digital assistant, PDA), a smart screen, and another electronic device having a display.
- a specific type of the electronic device is not limited in embodiments of this application.
- FIG. 1 is a diagram of a hardware structure of an electronic device according to embodiments of this application.
- an electronic device 100 may include a processor 110, a memory 120, a universal serial bus (universal serial bus, USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a headset jack 170D, a sensor module 180, a camera 191, a display 192, a button 193, and the like.
- the processor 110 may include one or more processing units.
- the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processing unit (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a memory, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural-network processing unit (neural-network processing unit, NPU).
- Different processing units may be independent components, or may be integrated into one or more processors.
- the controller may be a nerve center and a command center of the electronic device 100.
- the controller may generate an operation control signal based on an instruction operation code and a time sequence signal, to control instruction reading and instruction execution.
- a memory may be further disposed in the processor 110, and is configured to store instructions and data.
- the memory in the processor 110 is a cache memory.
- the memory may store instructions or data recently used or cyclically used by the processor 110. If the processor 110 needs to use the instructions or the data again, the processor 110 may directly invoke the instructions or the data from the memory, to avoid repeated access. This reduces waiting time of the processor 110, and improves system efficiency.
- the processor 110 may include one or more interfaces.
- the interface may include an inter-integrated circuit (inter-integrated circuit, I2C) interface, an inter-integrated circuit sound (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous receiver/transmitter (universal asynchronous receiver/transmitter, UART) interface, a mobile industry processor interface (mobile industry processor interface, MIPI), a general-purpose input/output (general-purpose input/output, GPIO) interface, a subscriber identity module (subscriber identity module, SIM) interface, a universal serial bus (universal serial bus, USB) interface, and/or the like.
- the processor 110 and a touch sensor 180E may communicate with each other through the I2C bus interface, to implement a touch function of the electronic device 100.
- the processor 110 and the camera 191 may communicate with each other through the CSI interface, to implement a photographing function of the electronic device 100.
- the processor 110 and the display 192 may communicate with each other through the DSI interface, to implement a display function of the electronic device 100.
- an interface connection relationship between the modules shown in this embodiment of this application is merely an example for description, and does not constitute a limitation on the structure of the electronic device 100.
- the electronic device 100 may alternatively use an interface connection manner different from that in the foregoing embodiment, or use a combination of a plurality of interface connection manners.
- the wireless communication module 160 may provide a wireless communication solution applied to the electronic device 100, and the wireless communication solution includes a solution for a wireless local area network (wireless local area network, WLAN) (for example, a wireless fidelity (wireless fidelity, Wi-Fi) network), Bluetooth (Bluetooth, BT), a global navigation satellite system (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), a near field communication (near field communication, NFC) technology, an infrared (infrared, IR) technology, or the like.
- the wireless communication module 160 may be one or more components integrating at least one communication processing module.
- the wireless communication module 160 receives an electromagnetic wave through the antenna 2, performs frequency modulation and filtering processing on an electromagnetic wave signal, and sends a processed signal to the processor 110.
- the wireless communication module 160 may further receive a to-be-sent signal from the processor 110, perform frequency modulation and amplification on the signal, and convert the signal into an electromagnetic wave for radiation through the antenna 2.
- the electronic device 100 may implement the display function through the GPU, the display 192, the application processor, and the like.
- the GPU is a microprocessor for image processing, and is connected to the display 192 and the application processor.
- the GPU is configured to: perform mathematical and geometric computation, and render an image.
- the processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
- the display 192 is configured to display an image, a video, and the like.
- the display 192 includes a display panel.
- the display panel may be a liquid crystal display (liquid crystal display, LCD), an organic light-emitting diode (organic light-emitting diode, OLED), an active-matrix organic light-emitting diode (active-matrix organic light-emitting diode, AMOLED), a flexible light-emitting diode (flex light-emitting diode, FLED), a mini-LED, a micro-LED, a micro-OLED, a quantum dot light-emitting diode (quantum dot light-emitting diode, QLED), or the like.
- the electronic device 100 may include one or N displays 192, where N is a positive integer greater than 1.
- the electronic device 100 may implement the photographing function through the ISP, the camera 191, the video codec, the GPU, the display 192, the application processor, and the like.
- the ISP is configured to process data fed back by the camera 191.
- the camera 191 is configured to capture a static image or a video.
- An optical image of an object is generated through a lens, and is projected onto a photosensitive element.
- the photosensitive element may be a charge coupled device (charge coupled device, CCD) or a complementary metal-oxide-semiconductor (complementary metal-oxide-semiconductor, CMOS) phototransistor.
- the electronic device 100 may include one or N cameras 191, where N is a positive integer greater than 1.
- the memory 120 is configured to store data and/or instructions.
- the memory 120 may include an internal memory.
- the internal memory is configured to store computer-executable program code, and the executable program code includes the instructions.
- the processor 110 performs various function applications and data processing of the electronic device 100 by running the instructions stored in the internal memory.
- the internal memory may include a program storage area and a data storage area.
- the program storage area may store an operating system.
- the program storage area may further store one or more applications (for example, Gallery or Contacts), and the like.
- the data storage area may store data (for example, an image or a contact) or the like created during use of the electronic device 100.
- the internal memory may include a high-speed random access memory, and may further include a non-volatile memory, for example, one or more disk storage devices, a flash device, or a universal flash storage (universal flash storage, UFS).
- the processor 110 may run the instructions stored in the internal memory and/or the instructions stored in the memory disposed in the processor 110, to enable the electronic device 100 to perform the image display method provided in embodiments of this application.
- the memory 120 may further include an external memory, for example, a micro SD card, to extend a storage capability of the electronic device 100.
- the external memory may communicate with the processor 110 through an external memory interface, to implement a data storage function. For example, files such as music and videos are stored in the external memory.
- the type of sensor included in the sensor module 180 is not limited in embodiments of this application.
- the sensor module 180 may include more or fewer sensors, which may be determined based on an actual requirement. Details are not described herein.
- the structure shown in this embodiment of this application does not constitute a specific limitation on the electronic device.
- the electronic device may include more or fewer components than those shown in the figure, some components may be combined, some components may be split, or different component arrangements may be used.
- the components shown in the figure may be implemented as hardware, software, or a combination of software and hardware.
- the following describes, by using an example in which the electronic device 100 is an intelligent device, a technical problem that needs to be resolved in this application and a technical solution used in this application.
- rendering technologies such as partial refreshing and occlusion culling can effectively alleviate repeated drawing of a redundant window in a heavy-load scenario.
- Partial refreshing may be understood as follows: In split rendering, each window has its own layer, and an intelligent device may record the drawing area of each window and redraw only the area that needs to be updated, thereby reducing the amount of content drawn per window.
- Occlusion culling may be understood as follows: In split rendering, the intelligent device may remove an invisible window from a drawing list, to reduce a quantity of drawn windows.
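Occlusion culling as described above can be sketched in a few lines. The `Window` model, its field names, and the rectangle-containment test below are illustrative assumptions, not part of the embodiment:

```python
from dataclasses import dataclass

@dataclass
class Window:
    name: str
    z: int          # stacking order; a higher z is drawn on top
    rect: tuple     # (x, y, w, h) in screen coordinates
    opaque: bool = True

def contains(outer, inner):
    """True if rectangle `outer` fully covers rectangle `inner`."""
    ox, oy, ow, oh = outer
    ix, iy, iw, ih = inner
    return ox <= ix and oy <= iy and ix + iw <= ox + ow and iy + ih <= oy + oh

def cull_occluded(windows):
    """Occlusion culling: remove from the drawing list any window that is
    fully covered by an opaque window with a higher z-order."""
    return [w for w in windows
            if not any(o.opaque and o.z > w.z and contains(o.rect, w.rect)
                       for o in windows)]

wins = [Window("background", 0, (0, 0, 1920, 1080)),
        Window("document", 1, (100, 100, 800, 600)),
        Window("fullscreen-video", 2, (0, 0, 1920, 1080))]
draw_list = cull_occluded(wins)  # only the fullscreen video still needs drawing
```

In this toy scene both lower windows are fully covered by an opaque full-screen window, so only one window remains in the draw list.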
- a focus window still lags or freezes when a user operates the focus window.
- each application may independently draw a window, and send a drawing result to a rendering service process, and after converting a format of received data, the rendering service process may transmit the data to a video RAM of a GPU for processing by the GPU.
- occlusion culling may be performed on an occluded window, that is, the occluded window may not be rendered, thereby improving rendering efficiency to some extent.
- a frame rate of a focus window is still lower than 60 FPS, that is, the focus window still lags or freezes when the user operates the focus window. This severely affects interaction experience of the user.
- a unified rendering architecture may be used to replace the split rendering architecture in FIG. 2 .
- unified rendering changes the original logic of rendering each application process separately
- unified rendering may integrate different types of rendering tasks into a unified rendering framework, and in the unified rendering architecture, the application process may directly send a rendering instruction to a rendering service (render service) process of an operating system (operating system, OS) for unified rendering processing. Therefore, the unified rendering architecture can provide a more flexible rendering procedure and higher rendering efficiency, and can also reduce development and maintenance costs.
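The unified-rendering idea (every application forwarding its rendering instructions to one OS render service instead of rendering in its own process) can be sketched as follows; the class and method names are hypothetical:

```python
class RenderService:
    """Sketch of unified rendering: applications do not render themselves;
    each one submits rendering instructions to a single OS-level render
    service, which processes them centrally."""

    def __init__(self):
        self.pending = []

    def submit(self, app, instruction):
        # An application process sends a rendering instruction directly
        # to the render service instead of drawing in its own process.
        self.pending.append((app, instruction))

    def render_frame(self):
        # One unified pass over everything submitted since the last frame.
        frame = [f"{app}:{ins}" for app, ins in self.pending]
        self.pending.clear()
        return frame

rs = RenderService()
rs.submit("Gallery", "draw_rect")
rs.submit("Contacts", "draw_text")
frame = rs.render_frame()
```

Centralizing the instruction stream is what allows the rendering procedure to be scheduled flexibly across applications.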
- a displacement or a size change of a focus window directly affects the occlusion culling benefit of a lower-layer non-focus window, and the dirty region (dirty region, DR) of the entire screen also changes.
- a drawing refresh rate of the focus window fluctuates, that is, the focus window lags or freezes when the user operates the focus window. This severely affects interaction experience of the user.
- the dirty region is introduced to reduce the performance requirement that rendering imposes on the computer.
- when each frame is drawn, only the changed part is redrawn. This saves a large amount of rendering resources in software.
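A minimal dirty-region sketch, assuming rectangles are given as (x0, y0, x1, y1) and that the union is taken as a single bounding rectangle (real implementations may track finer-grained region lists):

```python
def union(a, b):
    """Bounding rectangle of a and b; rectangles are (x0, y0, x1, y1)."""
    if a is None:
        return b
    return (min(a[0], b[0]), min(a[1], b[1]), max(a[2], b[2]), max(a[3], b[3]))

class Surface:
    def __init__(self):
        self.dirty = None  # accumulated dirty region for the next frame

    def invalidate(self, rect):
        """Record that `rect` changed since the last frame."""
        self.dirty = union(self.dirty, rect)

    def flush(self):
        """Return the region that actually needs redrawing, then clear it."""
        region, self.dirty = self.dirty, None
        return region

s = Surface()
s.invalidate((10, 10, 50, 50))    # the cursor moved
s.invalidate((30, 30, 200, 120))  # a small widget updated
redraw = s.flush()                # everything outside this region is untouched
```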
- Embodiments of this application provide an image display method, an image display apparatus, and an electronic device, to ensure that a focus window is preferentially visible, and reduce occurrence of lagging and freezing of the focus window in a heavy-load scenario.
- FIG. 3 is a system architecture to which an image display method is applicable according to an embodiment of this application.
- the system architecture may be applied to the electronic device 100 in FIG. 1 .
- the system architecture 300 includes an application layer, a framework layer, a system service layer, and a kernel layer. Each layer has a clear role and task. Layers may communicate with each other through a software interface.
- the application layer may provide a network interface for an application, so that the application directly provides a service for a user.
- the application layer may include a series of applications, for example, a system application, Desktop, Settings, and Phone.
- the framework layer may provide an application programming interface (application programming interface, API) and a programming framework for the applications at the application layer.
- the framework layer may include some predefined functions. As shown in FIG. 3 , the framework layer may include a user program framework, a UI development framework, a capability framework, and a graphics subsystem. A rendering service component in the graphics subsystem may perform the image display method provided in embodiments of this application.
- the system service layer may implement scheduling and data management of a rendering task.
- the system service layer may include a distributed scheduling module, a distributed management module, and a graphics subsystem.
- the kernel layer may be an intermediate layer between software and hardware, may transfer a request of an application to hardware, and may, through underlying drivers, operate the various devices and components in the system.
- the kernel layer may include a kernel subsystem and a driver subsystem.
- FIG. 4 is a schematic flowchart of an image display method according to an embodiment of this application.
- the method 400 may be performed by the electronic device 100.
- the method 400 may be performed by the processor 110 in the electronic device 100.
- the method 400 may include step S401 to step S404.
- S401: Obtain a first window and a second window on a first interface.
- the first window is a focus window
- the second window is a non-focus window
- the first window and the second window on the first interface may be stacked.
- a z-order of the first window is higher than a z-order of the second window.
- the first interface may further include a third window.
- the third window may be a mouse and/or a system user interface (user interface, UI) window.
- the first window and the third window may be drawn in a first frame buffer.
- the first interface may include one second window, or may include a plurality of second windows.
- when the first interface includes a plurality of second windows, all of the plurality of second windows may be drawn in the second frame buffer during window drawing.
- the image display method may be applied to an intelligent device such as a mobile phone, a tablet computer, or a personal computer.
- the first window may be a window corresponding to an application 1 of the intelligent device
- the second window may be a window corresponding to an application 2 of the intelligent device.
- the focus window may be a window that is currently interacting with a user, and the window is responsible for receiving a key event and a touch event.
- when a new activity (activity) is started, a new window is added, an old window is removed, or screen splitting or restoration from screen splitting is performed, the focus window may be updated.
- the non-focus window may be a window that is currently in an inactive state or a window that does not interact with the user currently.
- step S401 includes: detecting, on the first interface, a first input for the first window; and obtaining the first window and the second window in response to the first input. In this way, when the user operates the first window, the electronic device may perform the method 400.
- the first input may be moving or scaling the first window.
- S402: Draw the first window in the first frame buffer, and draw the second window in the second frame buffer.
- drawing a window in a frame buffer may be understood as follows: in a graphics rendering procedure, drawing visible content of the window in the frame buffer.
- the frame buffer may be a memory area for storing image data, and is configured to: temporarily store rendered image data, and then send the rendered image data to a display apparatus for display.
- the first frame buffer and the second frame buffer may be understood as two different memory areas for storing image data, and have different memory addresses.
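The notion of two frame buffers as distinct memory areas can be illustrated with a toy sketch in which each buffer is a separate block of bytes; the tiny screen size and the drawing helper are illustrative only:

```python
W, H = 8, 4  # a tiny "screen" for illustration

def new_frame_buffer():
    """A frame buffer here is simply a distinct block of memory holding
    one pixel value per screen position."""
    return bytearray(W * H)

def draw_window(fb, rect, value):
    """Draw a window's visible content into one frame buffer."""
    x, y, w, h = rect
    for row in range(y, y + h):
        for col in range(x, x + w):
            fb[row * W + col] = value

# Two different memory areas with different addresses.
fb_first, fb_second = new_frame_buffer(), new_frame_buffer()
draw_window(fb_first, (0, 0, 4, 2), 1)    # first (focus) window
draw_window(fb_second, (2, 1, 4, 2), 2)   # second (non-focus) window
```

Because the two windows live in separate buffers, a stall while filling one buffer never blocks drawing into the other.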
- the first window and the second window are drawn by a process of an operating system.
- the process of the operating system may be a rendering service process, and the rendering service process may receive a rendering task sent by each application, convert the rendering task into a command that can be executed by a GPU, and perform data transmission with a CPU, to implement efficient graphics rendering and window drawing.
- a hardware interface layer may be an interface between the operating system or a window manager and underlying hardware, and is configured to send drawn image data to the display apparatus for display.
- the hardware interface layer may be responsible for communicating with graphics hardware (for example, a graphics card or an integrated graphics card), and transferring the drawn image data to the display apparatus.
- S403: Composite the first frame buffer through a first hardware interface layer to obtain a first composite frame, and composite the second frame buffer through a second hardware interface layer to obtain a second composite frame.
- compositing a frame buffer through a hardware interface layer may be understood as follows:
- the hardware interface layer combines and processes pixel data in the frame buffer, to generate a final display frame (for example, the first composite frame and the second composite frame).
- the display frame may be displayed on the display apparatus. Therefore, compositing the first frame buffer through the first hardware interface layer may be understood as follows:
- the first hardware interface layer processes pixel data corresponding to the first window in the first frame buffer.
- when the first interface further includes the third window, compositing the first frame buffer through the first hardware interface layer may be understood as follows:
- the first hardware interface layer processes pixel data corresponding to the first window and the third window in the first frame buffer.
- the first composite frame includes a frame composited by the first hardware interface layer in an (N+1) th frame
- the second composite frame includes a frame composited by the second hardware interface layer in an N th frame, where N is a positive integer.
- Step S403 includes: when a frame rate at which the second hardware interface layer performs composition in the (N+1) th frame is less than or equal to a preset threshold, controlling, based on the frame composited by the first hardware interface layer in the (N+1) th frame and the frame composited by the second hardware interface layer in the N th frame, the display apparatus to display a second interface. This processing manner can ensure that the display apparatus can correctly display the second interface when the non-focus window lags.
- the second hardware interface layer sends the composite frame of the N th frame to the display apparatus for display, so that the user can still view the latest visible content when the non-focus window lags during rendering, and the displayed picture does not freeze.
- the first hardware interface layer still sends the composite frame in the (N+1) th frame to the display apparatus for display. This means that continuity and smoothness of rendering and display of the focus window can still be maintained, and rendering and display of the focus window are not affected by lagging of the non-focus window. The user may continue to interact with the focus window, and an operation response speed of the focus window is not significantly reduced.
- that the frame rate at which the second hardware interface layer composites the second frame buffer is less than or equal to the preset threshold may be understood as follows: when the non-focus window is rendered and composited, the load of the electronic device is heavy, resulting in lagging.
- the preset threshold is 60 FPS, or the preset threshold is 30 FPS.
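The fallback described above (the first hardware interface layer always presents its newest composite frame, while the second layer's previous frame is reused when its composition frame rate drops to or below the threshold) can be sketched as follows; the data layout and names are assumptions for illustration:

```python
PRESET_THRESHOLD_FPS = 60  # example value from the description

def pick_display_frames(n, focus_frames, nonfocus_frames, nonfocus_fps):
    """Choose the composite frames shown in display frame N+1.

    The first hardware interface layer always contributes its (N+1)-th
    composite frame; if the second layer's composition frame rate is at or
    below the threshold, its N-th composite frame is reused instead.
    """
    focus = focus_frames[n + 1]
    if nonfocus_fps <= PRESET_THRESHOLD_FPS:
        nonfocus = nonfocus_frames[n]      # hold the last finished frame
    else:
        nonfocus = nonfocus_frames[n + 1]  # non-focus layer kept up
    return focus, nonfocus

focus_frames = {1: "F1", 2: "F2"}
nonfocus_frames = {1: "B1", 2: "B2"}
lagging = pick_display_frames(1, focus_frames, nonfocus_frames, nonfocus_fps=30)
smooth = pick_display_frames(1, focus_frames, nonfocus_frames, nonfocus_fps=90)
```

In the lagging case the focus layer still advances to its newest frame, which is exactly the "preferentially visible" property claimed for the focus window.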
- S404: Control, based on the first composite frame and the second composite frame, the display apparatus to display the second interface.
- step S404 may be replaced with sending a first instruction to the display apparatus, where the first instruction is used to instruct the display apparatus to display the first composite frame and the second composite frame.
- the focus window is drawn in the first frame buffer, and the non-focus window is drawn in the second frame buffer.
- This window drawing manner helps avoid drawing all windows in a same frame buffer, thereby avoiding lagging or impact of a rendering procedure of the non-focus window on drawing of the focus window.
- the first frame buffer is composited through the first hardware interface layer, and the second frame buffer is composited through the second hardware interface layer. This composition manner ensures preferential visibility of the focus window. Even if the non-focus window lags or freezes, drawing and rendering procedures of the focus window are not affected.
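How the two hardware interface layers' outputs combine into one display frame can be approximated in software. Real hardware composers handle alpha blending and plane assignment; this toy overlay simply treats nonzero focus pixels as opaque:

```python
def composite_display(focus_frame, nonfocus_frame):
    """Overlay the focus composite frame on the non-focus composite frame.
    A zero pixel in the focus frame is treated as transparent here."""
    return bytes(f if f else b for f, b in zip(focus_frame, nonfocus_frame))

# Two 4-pixel composite frames: the focus layer covers pixels 1 and 2.
frame = composite_display(bytes([0, 7, 7, 0]), bytes([3, 3, 3, 3]))
```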
- steps S401, S403, and S404 may be performed by a CPU, and step S402 may be performed by a GPU.
- the electronic device may identify a new focus window, draw the new focus window in the first frame buffer, and then perform a subsequent composition operation.
- the method 400 further includes: detecting, on the second interface, a second input for the second window; in response to the second input, drawing the second window in the first frame buffer, and drawing the first window in the second frame buffer; compositing the first frame buffer through the first hardware interface layer to obtain a third composite frame, and compositing the second frame buffer through the second hardware interface layer to obtain a fourth composite frame; and controlling, based on the third composite frame and the fourth composite frame, the display apparatus to display a third interface.
- the electronic device may draw the second window in the first frame buffer, and draw the first window in the second frame buffer.
- the electronic device can ensure preferential visibility of the second window, and occurrence of lagging and freezing of the second window is reduced.
- the user can more smoothly interact with a window of interest, thereby improving user experience and real-time responsiveness to an operation.
- the second input may be moving or scaling the second window.
- the second window is switched to a focus window, and the first window is switched to a non-focus window.
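The focus switch reduces to reassigning windows to frame buffers; a minimal sketch, assuming windows are identified by name:

```python
def assign_buffers(windows, focus):
    """The current focus window is drawn in the first frame buffer; every
    other window is drawn in the second frame buffer."""
    first = [w for w in windows if w == focus]
    second = [w for w in windows if w != focus]
    return first, second

# Initially the first window "A" has focus.
before = assign_buffers(["A", "B"], focus="A")
# A second input targets window "B": the buffer assignment simply swaps.
after = assign_buffers(["A", "B"], focus="B")
```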
- FIG. 5(a) , FIG. 5(b) , and FIG. 5(c) are scenarios to which an image display method is applicable according to an embodiment of this application.
- the method 400 may be applicable to this scenario.
- a focus window and a window whose z-order is higher than that of the focus window may be composited into a first hardware interface layer (which may also be referred to as a focus layer and correspond to FIG. 5(b) ); and a non-focus window and a window whose z-order is lower than that of the focus window are composited into a second hardware interface layer (which may also be referred to as a non-focus layer and correspond to FIG. 5(c) ).
- the focus window is drawn in a first frame buffer
- the non-focus window is drawn in a second frame buffer.
- the first frame buffer is composited through the first hardware interface layer
- the second frame buffer is composited through the second hardware interface layer.
- a composite interface may be displayed on the display apparatus of the electronic device.
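The z-order split of FIG. 5 (the focus window and everything above it into the focus layer, everything below into the non-focus layer) can be sketched as a partition by z-order; the window names here are illustrative:

```python
def split_layers(windows, focus_z):
    """Partition stacked windows by z-order: the focus window and anything
    above it (e.g. a cursor or system UI window) go to the focus layer;
    everything below the focus window goes to the non-focus layer."""
    focus_layer = {name for name, z in windows.items() if z >= focus_z}
    nonfocus_layer = {name for name, z in windows.items() if z < focus_z}
    return focus_layer, nonfocus_layer

wins = {"wallpaper": 0, "app2": 1, "app1": 2, "cursor": 3}
focus_layer, nonfocus_layer = split_layers(wins, wins["app1"])
```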
- the application scenarios shown in FIG. 5(a) to FIG. 5(c) are merely examples for description, and should not be construed as a limitation on this application.
- the method 400 may be further applicable to an application scenario of another operating system (for example, a Linux system, an Android system, or an iOS system).
- FIG. 6(a)-1 , FIG. 6(a)-2 , FIG. 6(b)-1 , FIG. 6(b)-2 , FIG. 6(c)-1 , and FIG. 6(c)-2 are diagrams of independently drawing a focus layer and a non-focus layer at different frame rates according to an embodiment of this application.
- FIG. 6(a)-1 and FIG. 6(a)-2 are an N th frame of picture of a display apparatus
- FIG. 6(b)-1 and FIG. 6(b)-2 are an (N+1) th frame of picture of the display apparatus
- FIG. 6(c)-1 and FIG. 6(c)-2 are an (N+2) th frame of picture of the display apparatus, where N is a positive integer.
- the electronic device may control, based on a frame composited by the second hardware interface layer in the N th frame and a frame composited by the first hardware interface layer in the (N+1) th frame, the display apparatus to display a composite picture (that is, a picture displayed by the display apparatus in the (N+1) th frame).
- the electronic device may control, based on a frame composited by the second hardware interface layer in the (N+2) th frame and a frame composited by the first hardware interface layer in the (N+2) th frame, the display apparatus to display a composite picture (that is, a picture displayed by the display apparatus in the (N+2) th frame).
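The per-frame behavior shown in FIG. 6 can be modeled as picking, for each display frame, the newest non-focus composite frame that was finished in time; the `nonfocus_ready` bookkeeping is an assumption for illustration:

```python
def displayed_pair(n, nonfocus_ready):
    """Composite frames shown in display frame n: the focus layer always
    shows frame n, while the non-focus layer shows the newest frame it
    managed to finish by frame n."""
    newest = max(k for k, done in nonfocus_ready.items() if k <= n and done)
    return (f"focus[{n}]", f"nonfocus[{newest}]")

# The non-focus layer misses frame 11 but catches up in frame 12.
ready = {10: True, 11: False, 12: True}
shown_11 = displayed_pair(11, ready)
shown_12 = displayed_pair(12, ready)
```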
- the apparatus 700 further includes a transceiver unit, configured to receive/send instructions and/or data.
- the processing unit 730 is specifically configured to: composite the first frame buffer at a first frame rate through the first hardware interface layer, to obtain the first composite frame; composite the second frame buffer at a second frame rate through the second hardware interface layer, to obtain the second composite frame, where the second frame rate is less than or equal to the first frame rate.
- the first composite frame includes a frame composited by the first hardware interface layer in an (N+1) th frame
- the second composite frame includes a frame composited by the second hardware interface layer in an N th frame, where N is a positive integer
- the processing unit 730 is specifically configured to: when a frame rate at which the second hardware interface layer performs composition is less than or equal to a preset threshold in the (N+1) th frame, control, based on the frame composited by the first hardware interface layer in the (N+1) th frame and the frame composited by the second hardware interface layer in the N th frame, the display apparatus to display the second interface.
- the processing unit 730 is further configured to detect, on the first interface, a first input for the first window; and the obtaining unit 710 is specifically configured to obtain the first window and the second window in response to the first input.
- the processing unit 730 is further configured to: detect, on the second interface, a second input for the second window; in response to the second input, draw the second window in the first frame buffer, and draw the first window in the second frame buffer; composite the first frame buffer through the first hardware interface layer to obtain a third composite frame, and composite the second frame buffer through the second hardware interface layer to obtain a fourth composite frame; and control, based on the third composite frame and the fourth composite frame, the display apparatus to display a third interface.
- the apparatus 700 includes: the obtaining unit 710, configured to obtain first information sent by a CPU, where the first information includes a first window and a second window, the first window is a focus window, and the second window is a non-focus window; the processing unit 730, configured to: draw the first window in a first frame buffer, and draw the second window in a second frame buffer, where the first window and the second window are drawn by a process of an operating system; and the transceiver unit, configured to send data in the first frame buffer and the second frame buffer to a hardware composer.
- the processing unit 730 may be the processor 110 shown in FIG. 1 .
- FIG. 8 is a diagram of another apparatus 800 according to an embodiment of this application.
- the apparatus 800 includes a memory 810, a processor 820, and a communication interface 830.
- the memory 810, the processor 820, and the communication interface 830 are connected to each other through an internal connection path.
- the memory 810 is configured to store instructions.
- the processor 820 is configured to execute the instructions stored in the memory 810, to control the communication interface 830 to obtain information, to enable the apparatus 800 to implement the foregoing image display method or rendering method.
- the memory 810 may be coupled to the processor 820 through an interface, or may be integrated with the processor 820.
- the communication interface 830 uses a transceiver apparatus, for example, but not limited to, a transceiver.
- the communication interface 830 may further include an input/output interface (input/output interface).
- the processor 820 stores one or more computer programs, and the one or more computer programs include instructions.
- the apparatus 800 is enabled to perform the image display method or the rendering method in the foregoing embodiments.
- the processor 820 includes a CPU and a GPU.
- the CPU is configured to obtain a first window and a second window on a first interface, where the first window is a focus window, and the second window is a non-focus window;
- the GPU is configured to: draw the first window in a first frame buffer, and draw the second window in a second frame buffer, where the first window and the second window are drawn by a process of an operating system;
- the CPU is configured to: control a hardware composer to composite the first frame buffer through a first hardware interface layer to obtain a first composite frame, and control the hardware composer to composite the second frame buffer through a second hardware interface layer to obtain a second composite frame; and the CPU is further configured to control, based on the first composite frame and the second composite frame, a display apparatus to display a second interface.
- the processor 820 includes a GPU.
- the GPU is configured to: obtain first information sent by a CPU, where the first information includes a first window and a second window, the first window is a focus window, and the second window is a non-focus window; draw the first window in a first frame buffer, and draw the second window in a second frame buffer, where the first window and the second window are drawn by a process of an operating system; and send data in the first frame buffer and the second frame buffer to a hardware composer.
- the processor may be a central processing unit (central processing unit, CPU), or the processor may be another general-purpose processor, a digital signal processor (digital signal processor, DSP), an application-specific integrated circuit (application-specific integrated circuit, ASIC), a field programmable gate array (field programmable gate array, FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like.
- the general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
- the memory may include a read-only memory and a random access memory, and provide instructions and data for the processor.
- a part of the processor may further include a non-volatile random access memory.
- the processor may further store information of a device type.
- steps of the foregoing methods may be performed by using a hardware integrated logic circuit in the processor 820 or by using instructions in a form of software.
- the methods disclosed with reference to embodiments of this application may be directly performed by a hardware processor, or may be performed by using a combination of hardware and a software module in the processor.
- the software module may be located in a mature storage medium in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register.
- the storage medium is located in the memory 810.
- the processor 820 reads information in the memory 810, and performs the steps of the foregoing methods in combination with the hardware of the processor. To avoid repetition, details are not described herein again.
- the communication interface 830 in FIG. 8 may implement the obtaining unit 710 in FIG. 7
- the memory 810 in FIG. 8 may implement the storage unit 720 in FIG. 7
- the processor 820 in FIG. 8 may implement the processing unit 730 in FIG. 7 .
- the apparatus 700 or the apparatus 800 may be located in the electronic device 100 in FIG. 1 .
- An embodiment of this application further provides a computer-readable storage medium.
- the computer-readable storage medium stores computer program code, and when the computer program code is run on a computer, the computer is enabled to perform any one of the methods in FIG. 4 to FIG. 6(a)-1 , FIG. 6(a)-2 , FIG. 6(b)-1 , FIG. 6(b)-2 , FIG. 6(c)-1 , and FIG. 6(c)-2 .
- An embodiment of this application further provides a computer program product.
- the computer program product includes a computer program.
- when the computer program is run on a computer, the computer is enabled to perform any one of the methods in FIG. 4 to FIG. 6(a)-1 , FIG. 6(a)-2 , FIG. 6(b)-1 , FIG. 6(b)-2 , FIG. 6(c)-1 , and FIG. 6(c)-2 .
- An embodiment of this application further provides a chip, including a circuit.
- the circuit is configured to perform any one of the methods in FIG. 4 to FIG. 6(a)-1 , FIG. 6(a)-2 , FIG. 6(b)-1 , FIG. 6(b)-2 , FIG. 6(c)-1 , and FIG. 6(c)-2 .
- the chip may include one or more of a CPU chip, a GPU chip, an ASIC chip, and an NPU chip.
- An embodiment of this application further provides an electronic device, including any image display apparatus or rendering apparatus shown in FIG. 7 or FIG. 8 .
- the disclosed system, apparatus, and method may be implemented in other manners.
- the described apparatus embodiment is merely an example.
- division into the units is merely logical function division and may be other division during actual implementation.
- a plurality of units or components may be combined or integrated into another system, or some features may be ignored or not performed.
- the displayed or discussed mutual couplings or direct couplings or communication connections may be implemented through some interfaces.
- the indirect couplings or communication connections between the apparatuses or units may be implemented in electrical, mechanical, or other forms.
- the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one position, or may be distributed on a plurality of network units. Some or all of the units may be selected based on actual requirements to achieve the objectives of the solutions of embodiments.
- When the functions are implemented in a form of a software functional unit and sold or used as an independent product, the functions may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of this application essentially, or the part contributing to the conventional technology, or some of the technical solutions may be implemented in a form of a software product.
- the computer software product is stored in a storage medium, and includes several instructions for instructing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in embodiments of this application.
- the foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, or an optical disc.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Software Systems (AREA)
- Human Computer Interaction (AREA)
- General Engineering & Computer Science (AREA)
- Computer Hardware Design (AREA)
- Controls And Circuits For Display Device (AREA)
- User Interface Of Digital Computer (AREA)
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202310883194.3A CN119336421A (zh) | 2023-07-18 | 2023-07-18 | 图像显示方法、图像显示装置以及电子设备 |
| PCT/CN2024/080962 WO2025015944A1 (zh) | 2023-07-18 | 2024-03-11 | 图像显示方法、图像显示装置以及电子设备 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| EP4664280A1 (de) | 2025-12-17 |
Family
ID=94267925
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| EP24841947.5A Pending EP4664280A1 (de) | 2023-07-18 | 2024-03-11 | Bildanzeigeverfahren, bildanzeigevorrichtung und elektronische vorrichtung |
Country Status (3)
| Country | Link |
|---|---|
| EP (1) | EP4664280A1 (de) |
| CN (1) | CN119336421A (de) |
| WO (1) | WO2025015944A1 (de) |
Family Cites Families (8)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US7274370B2 (en) * | 2003-12-18 | 2007-09-25 | Apple Inc. | Composite graphics rendered using multiple frame buffers |
| CN103247068B (zh) * | 2013-04-03 | 2016-03-30 | 上海晨思电子科技有限公司 | 一种渲染方法和装置 |
| US10643381B2 (en) * | 2016-01-12 | 2020-05-05 | Qualcomm Incorporated | Systems and methods for rendering multiple levels of detail |
| CN105955687B (zh) * | 2016-04-29 | 2019-12-17 | 华为技术有限公司 | 图像处理的方法、装置和系统 |
| CN116055786B (zh) * | 2020-07-21 | 2023-09-29 | 华为技术有限公司 | 一种显示多个窗口的方法及电子设备 |
| CN113050899B (zh) * | 2021-02-07 | 2022-09-27 | 厦门亿联网络技术股份有限公司 | 一种基于Wayland协议的视频和UI的drm直接显示方法及系统 |
| CN115629697A (zh) * | 2022-10-21 | 2023-01-20 | 展讯半导体(南京)有限公司 | 多窗口的显示画面处理方法与装置、电子设备 |
| CN116347166A (zh) * | 2022-12-27 | 2023-06-27 | Vidaa国际控股(荷兰)公司 | 显示设备及窗口显示方法 |
Also Published As
| Publication number | Publication date |
|---|---|
| CN119336421A (zh) | 2025-01-21 |
| WO2025015944A1 (zh) | 2025-01-23 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN114648951B (zh) | 控制屏幕刷新率动态变化的方法及电子设备 | |
| US12061833B2 (en) | Multi-window display method, electronic device, and system | |
| EP4083792B1 (de) | Bildverarbeitungsverfahren und elektronische vorrichtung | |
| WO2021227770A1 (zh) | 应用窗口显示方法和电子设备 | |
| EP4178213A1 (de) | Verfahren zur anzeige mehrerer fenster und elektronische vorrichtung | |
| WO2021023021A1 (zh) | 一种显示方法及电子设备 | |
| WO2021047251A1 (zh) | 显示方法及电子设备 | |
| WO2023005751A1 (zh) | 渲染方法及电子设备 | |
| WO2022252816A1 (zh) | 显示方法及电子设备 | |
| CN115361468B (zh) | 屏幕旋转时的显示优化方法、设备及存储介质 | |
| US20250349234A1 (en) | Electronic Device Display Method, Apparatus, and Storage Medium | |
| CN116700655A (zh) | 一种界面显示方法及电子设备 | |
| CN115268809A (zh) | 多屏协同过程中恢复窗口的方法、电子设备和系统 | |
| WO2024017145A1 (zh) | 显示方法和电子设备 | |
| WO2023169276A1 (zh) | 投屏方法、终端设备及计算机可读存储介质 | |
| WO2023011215A1 (zh) | 一种显示方法及电子设备 | |
| CN119649723B (zh) | 显示屏刷新率的切换方法、电子设备及存储介质 | |
| EP4664280A1 (de) | Bildanzeigeverfahren, bildanzeigevorrichtung und elektronische vorrichtung | |
| EP4657248A1 (de) | Anzeigeverfahren für elektronische vorrichtung sowie elektronische vorrichtung und speichermedium | |
| CN116719587A (zh) | 屏幕显示方法、电子设备及计算机可读存储介质 | |
| WO2023051354A1 (zh) | 一种分屏显示方法及电子设备 | |
| EP4579415A1 (de) | Anzeigeverfahren, anzeigevorrichtung und elektronische vorrichtung | |
| EP4664896A1 (de) | Videoumschaltverfahren und elektronische vorrichtung | |
| CN118550497B (zh) | 环境光的确定方法、屏幕亮度调节方法及电子设备 | |
| EP4560450A1 (de) | Datenverarbeitungsverfahren, vorrichtung und speichermedium |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE |
|
| PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
| STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE |
|
| 17P | Request for examination filed |
Effective date: 20250911 |
|
| AK | Designated contracting states |
Kind code of ref document: A1 Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC ME MK MT NL NO PL PT RO RS SE SI SK SM TR |
|
| RAP3 | Party data changed (applicant data changed or rights of an application transferred) |
Owner name: HUAWEI TECHNOLOGIES CO., LTD. |