WO2023016191A1 - Image display method and apparatus, computer device, and storage medium - Google Patents

Image display method and apparatus, computer device, and storage medium

Info

Publication number
WO2023016191A1
Authority
WO
WIPO (PCT)
Prior art keywords
layer
target
display
enhancement
scene
Application number
PCT/CN2022/106166
Other languages
English (en)
French (fr)
Inventor
胡杰
Original Assignee
Oppo广东移动通信有限公司
Application filed by Oppo广东移动通信有限公司 filed Critical Oppo广东移动通信有限公司
Publication of WO2023016191A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/14 - Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G06F 3/1407 - General aspects irrespective of display type, e.g. determination of decimal point position, display with fixed or driving decimal point, suppression of non-significant zeros
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 1/00 - General purpose image data processing
    • G06T 1/20 - Processor architectures; Processor configuration, e.g. pipelining
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 1/00 - General purpose image data processing
    • G06T 1/60 - Memory management

Definitions

  • the embodiments of the present application relate to the field of image display technologies, and in particular, to an image display method, device, computer equipment, and storage medium.
  • image processing may be performed on image frames before image display, and the processed image frames may be transmitted to a display component for display, thereby improving the final image display quality.
  • Embodiments of the present application provide an image display method, device, computer equipment, and storage medium. The technical solution is as follows:
  • an embodiment of the present application provides an image display method, the method comprising:
  • the display enhancement processing and layer synthesis are performed on the layer by a graphics processor (Graphics Processing Unit, GPU), to obtain the second composite layer;
  • an image display device comprising:
  • the first synthesis module is used to perform display enhancement processing and layer synthesis on the layer through the hardware synthesizer to obtain the first synthesis layer;
  • a display module configured to display an image of the first composite layer through a display component
  • the second compositing module is configured to respond to a layer compositing mode switching instruction, perform display enhancement processing and layer compositing on the layer through the GPU, and obtain a second composite layer;
  • the display module is configured to display an image of the second composite layer through the display component.
  • an embodiment of the present application provides a computer device, the computer device includes a processor, a memory, and a display component, at least one program is stored in the memory, and the at least one program is loaded and executed by the processor to implement the image display method as described in the above aspect.
  • an embodiment of the present application provides a computer-readable storage medium, the storage medium stores at least one program, and the at least one program is used to be executed by a processor to implement the image display method as described in the above aspect.
  • a computer program product includes computer instructions stored in a computer-readable storage medium; a processor of a computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device executes the image display method provided in the above optional implementation manners.
  • Fig. 1 is a schematic diagram of the principle of an image display method shown in an exemplary embodiment
  • Fig. 2 shows a structural block diagram of a computer device provided by an exemplary embodiment of the present application
  • Fig. 3 is a flowchart of an image display method shown in an exemplary embodiment of the present application.
  • Fig. 4 is a schematic diagram of the implementation of the layer composition and display process in the picture flipping process
  • Fig. 5 is a flowchart of an image display method shown in another exemplary embodiment of the present application.
  • Fig. 6 is a schematic diagram of layer display enhancement and synthesis process shown in an exemplary embodiment
  • Fig. 7 is a flowchart of an image display method shown in another exemplary embodiment of the present application.
  • Fig. 8 is a schematic diagram of layer display enhancement and synthesis process shown in an exemplary embodiment
  • Fig. 9 is an implementation schematic diagram of an enhanced parameter multiplexing process shown in an exemplary embodiment of the present application.
  • Fig. 10 is a structural block diagram of an image display device provided by an exemplary embodiment of the present application.
  • the "plurality" mentioned herein means two or more.
  • "And/or" describes the association relationship of associated objects, indicating that there may be three types of relationships; for example, A and/or B may indicate: A exists alone, A and B exist simultaneously, or B exists alone.
  • the character "/" generally indicates that the contextual objects are in an "or" relationship.
  • Display enhancement technology is an image processing technology that enhances the display effect of images to improve image perception.
  • display enhancement can be divided into two implementation methods: the decoding end and the display end. When display enhancement is implemented at the decoding end, the computer device performs display enhancement processing on the image frame during the image rendering stage; when display enhancement is implemented at the display end, the computer device does not perform display enhancement processing on the image frame during the image rendering stage, but instead performs display enhancement processing on the layers in the layer composition stage, so that the composited image is transmitted to the display component for image display.
  • when display enhancement is implemented on the display side, in order to reduce the processing pressure of the GPU (especially in games and other display scenarios that place high demands on the GPU), computer equipment usually uses a hardware synthesizer for layer synthesis, and uses the hardware synthesizer to perform display enhancement on the layers during layer composition.
  • when the computer device cannot perform layer synthesis through the hardware synthesizer, it can only perform layer synthesis through the GPU. Since the layers are no longer processed by the hardware synthesizer, display enhancement cannot be performed on the layers, which in turn leads to display enhancement failure.
  • the computer device performs display enhancement and layer synthesis processing on the layer 11 through the hardware synthesizer 12 to obtain the first composite layer 13, and sends the first composite layer 13 for image display by the display component 14. When the layer synthesis mode is switched, in order to ensure that display enhancement continues, the computer device continues to perform display enhancement processing on the layer 11 while performing layer synthesis on it through the GPU 15, and sends the synthesized second composite layer 16 for display, so as to ensure that the images displayed by the display component 14 before and after switching the layer composition mode are both display enhanced, and the image display effect is optimized.
  • FIG. 2 shows a structural block diagram of a computer device provided by an exemplary embodiment of the present application.
  • the computer device may be an electronic device such as a smart phone, a tablet computer, a portable personal computer, or the like.
  • a computer device in this application may include one or more of the following components: a processor 210 , a memory 220 and a display component 230 .
  • Processor 210 may include one or more processing cores.
  • the processor 210 uses various interfaces and lines to connect various parts of the entire computer device, and executes various functions of the computer device and processes data by running or executing instructions, programs, code sets or instruction sets stored in the memory 220, and by invoking data stored in the memory 220.
  • the processor 210 may be implemented in hardware in the form of at least one of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), and Programmable Logic Array (PLA).
  • the processor 210 may integrate one or a combination of a central processing unit (Central Processing Unit, CPU), a GPU, a neural network processor (Neural-network Processing Unit, NPU), a modem, and the like.
  • the CPU mainly handles the operating system, user interface and application programs, etc.
  • the GPU is used for rendering and drawing of display content
  • the NPU is used for data processing related to artificial intelligence
  • the modem is used for wireless communication. It can be understood that, the above-mentioned modem may not be integrated into the processor 210, but may be realized by a communication chip alone.
  • the processor 210 in the embodiment of the present application includes a hardware synthesizer 211, a GPU 212, and an NPU 213.
  • the hardware synthesizer 211 is used for layer synthesis in the default state
  • the GPU 212 is used for layer synthesis when the hardware synthesizer 211 cannot perform layer synthesis
  • the NPU 213 is used to perform scene recognition on layers through a neural network model, so that the hardware synthesizer 211 and the GPU 212 subsequently perform display enhancement based on the enhancement parameters corresponding to the scene in the layer.
  • the aforementioned hardware synthesizer 211 and NPU 213 may be set in a coprocessor independent of the processor 210, which is not limited in this embodiment.
  • the memory 220 may include random access memory (Random Access Memory, RAM), and may also include read-only memory (Read-Only Memory, ROM).
  • the memory 220 includes a non-transitory computer-readable storage medium.
  • the memory 220 may be used to store instructions, programs, codes, sets of codes, or sets of instructions.
  • the memory 220 may include a program storage area and a data storage area, wherein the program storage area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a touch function, a sound playback function, an image playback function, etc.), and the like.
  • this operating system can be the Android system (including systems developed in depth based on the Android system), the iOS system developed by Apple Inc. (including systems developed in depth based on the iOS system), or other systems.
  • the data storage area can also store data created by the computer device during use, and the like.
  • the display component 230 is a component for performing image display.
  • the display component 230 also has a touch function, which is used to receive a user's touch operation on or near it using any suitable object such as a finger or a touch pen.
  • the display component 230 may be designed as a combination of one or more of a full screen, a curved screen, or a special-shaped screen, which is not limited in this embodiment of the present application.
  • the structure of the computer equipment shown in the above drawings does not constitute a limitation on the computer equipment, and the computer equipment may include more or fewer components than those illustrated, combine certain components, or adopt a different arrangement of components.
  • the computer equipment also includes radio frequency circuits, camera components, sensors, audio circuits, wireless fidelity (Wireless Fidelity, WiFi) components, power supplies, Bluetooth components and other components, which will not be repeated here.
  • FIG. 3 shows a flowchart of an image display method according to an exemplary embodiment of the present application.
  • This embodiment uses the method applied to computer equipment as an example for illustration, and the method includes:
  • a hardware synthesizer is used to perform display enhancement processing and layer synthesis on layers to obtain a first synthesized layer.
  • the computer device performs display enhancement processing on multiple layers (surfaces) corresponding to the current image frame through a hardware synthesizer, and composites the display-enhanced layers to obtain the first composite layer. Since the layers have undergone display enhancement processing, the subsequent image display effect can be improved.
  • the layer compositing process is a process of stacking the layers according to each layer's corresponding display area and display order. For example, when the layers corresponding to the current image frame include a first layer corresponding to the status bar, a second layer corresponding to the navigation bar, and a third layer corresponding to the wallpaper, the hardware compositor overlays the first layer on the top of the third layer and the second layer on the bottom of the third layer to obtain the composite layer.
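The stacking described above can be sketched as follows. This is an illustrative model only, not the patent's implementation: the layer structure (a dict with a z-index and a sparse pixel map) is assumed purely for clarity.

```python
def composite_layers(layers):
    """Stack layers bottom-to-top by z-index; a higher layer's pixels win."""
    framebuffer = {}
    for layer in sorted(layers, key=lambda l: l["z"]):
        for pos, pixel in layer["pixels"].items():
            framebuffer[pos] = pixel  # higher layer overwrites lower
    return framebuffer

# Illustrative layers: wallpaper at the bottom, status bar and navigation
# bar stacked above it (names, z values, and pixel contents are invented).
wallpaper  = {"name": "wallpaper",  "z": 0, "pixels": {(0, 0): "W", (0, 1): "W", (1, 0): "W"}}
status_bar = {"name": "status_bar", "z": 1, "pixels": {(0, 0): "S"}}
nav_bar    = {"name": "nav_bar",    "z": 1, "pixels": {(1, 0): "N"}}

fb = composite_layers([status_bar, wallpaper, nav_bar])
```

Where the status bar and navigation bar overlap the wallpaper, their pixels replace the wallpaper's; elsewhere the wallpaper shows through.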
  • the hardware synthesizer is a mobile display processor (Mobile Display Processor, MDP), and the MDP can be set in a processor, or a coprocessor independent of the processor.
  • the computer device performs display enhancement processing on each layer corresponding to the current image frame through the hardware synthesizer, or only performs display enhancement processing on specific layers in the current image frame, and the display enhancement methods corresponding to different layers can be the same or different.
  • Step 302 displaying the image of the first composite layer through the display component.
  • the synthesized first composite layer is stored in a frame buffer (FrameBuffer), the computer device further sends the frame buffer to the display component, and the display component performs image display based on the first composite layer in the frame buffer.
  • Step 303 in response to the layer compositing mode switching instruction, perform display enhancement processing and layer compositing on the layers through the GPU to obtain a second compositing layer.
  • the layer composition mode switching instruction is triggered when the layer composition mode switching condition is met, and the computer device switches the execution subject of layer composition from the hardware synthesizer to the GPU based on the instruction. Since the hardware synthesizer performed display enhancement processing on the layers before the switch, in order to ensure the consistency of the image display effect before and after switching the layer composition mode, the computer device still first performs display enhancement processing on the layers of the current image frame, and then composites the display-enhanced layers to obtain the second composite layer. Optionally, the GPU and the hardware synthesizer perform display enhancement processing on the same layer in the image frame, and the parameters used for the display enhancement processing are the same.
  • the layer compositing mode switching instruction may be triggered by a user operation, or may be automatically triggered by the computer device according to its own operating state.
  • when the hardware synthesizer cannot perform layer composition, or in a scene that requires layer composition through the GPU, the computer device switches the layer composition mode.
  • the hardware synthesizer can only perform layer synthesis when the image display direction is 90° or 180°, and cannot perform layer synthesis in other image display directions. Therefore, when the image display direction of the computer device changes (triggered manually by the user, or triggered automatically based on the gravity sensor), the computer device triggers a layer composition mode switching instruction, and during the process of changing the image display direction, the computer device switches to using the GPU for layer composition.
  • the smart phone uses a hardware synthesizer for layer synthesis, and when the image display direction is switched from portrait to landscape, the smart phone switches to use GPU for layer synthesis.
  • in other cases, the computer device also needs to switch to using the GPU for layer synthesis. For example, the hardware synthesizer is only used to synthesize simple image frames (such as 2D image frames); when complex image frames (such as 3D image frames) need to be synthesized, the computer device switches to using the GPU for layer composition.
  • for example, when displaying a 2D image, the smart phone uses the hardware synthesizer for layer synthesis, and when the displayed image is switched from a 2D image to a 3D image, the smart phone switches to using the GPU for layer synthesis.
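As an illustration only, the switching conditions discussed above (display rotation in progress, or 3D content in the frame) can be condensed into a small decision function. The condition names are assumptions for the sketch, not terms from the application:

```python
def choose_compositor(rotating, has_3d_content):
    """Return which component should composite (and display-enhance) a frame.

    rotating: True while the image display direction is changing.
    has_3d_content: True when the frame contains complex (e.g. 3D) content.
    """
    if rotating or has_3d_content:
        return "GPU"        # fallback path: GPU composites and enhances
    return "hardware"       # default path: hardware synthesizer does both
```

The key point of the embodiment is that display enhancement runs on whichever path is chosen, so the enhancement effect is continuous across switches.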
  • Step 304 performing image display on the second composite layer through the display component.
  • the second composite layer obtained by synthesis is stored in the frame buffer, the computer device further sends the frame buffer to the display component, and the display component performs image display based on the second composite layer in the frame buffer. Since both the first composite layer and the second composite layer have undergone display enhancement (that is, the layers can still be display-enhanced without passing through the hardware synthesizer), the continuity of the display enhancement effect before and after the composition mode switch is ensured, avoiding a sudden change in the image (such as screen flicker) caused by the display effect no longer being enhanced after the switch.
  • the computer device when the layer compositing mode is switched again, the computer device performs display enhancement and layer compositing through the hardware compositor again.
  • when the layer compositing mode switching instruction is triggered again, the computer device performs display enhancement processing and layer compositing on the layers through the hardware synthesizer again to obtain a third composite layer, and displays the image of the third composite layer through the display component.
  • for example, when the smartphone is playing video, the hardware synthesizer 42 performs display enhancement and layer synthesis on the layer 41, and sends the synthesis result for display by the display component 43.
  • since the hardware synthesizer 42 cannot synthesize the layers during the flipping process, the smart phone performs display enhancement and layer synthesis on the layer 41 through the GPU 44 during the flipping process, and sends the synthesis result for display by the display component 43.
  • after the flipping is completed, the smart phone switches back to the hardware synthesizer 42 to perform display enhancement and layer synthesis on the layer 41, so as to ensure the consistency of the display enhancement effect of the video picture during the picture flipping process.
  • both the hardware synthesizer 42 and the GPU 44 perform display enhancement processing by acquiring the enhancement parameter 45.
  • to sum up, display enhancement and layer synthesis are performed on the layer through the hardware synthesizer, and the synthesized first composite layer is displayed through the display component, optimizing the image display effect. When the layer composition mode is switched and the GPU is used for layer composition, display enhancement of the layer continues to be performed through the GPU, and the synthesized second composite layer is displayed through the display component. This ensures that display enhancement continues before and after the layer composition mode is switched, avoids a sudden change in the image display effect across the switch, and further improves the image display quality.
  • the display enhancement processing and layer synthesis are performed on the layer by a hardware synthesizer to obtain the first composite layer, including:
  • Display enhancement processing and layer synthesis are performed on the layers through the GPU to obtain the second composite layer, including:
  • the method includes:
  • determine the target layer from the layers to be synthesized including:
  • the layer corresponding to the identifier of the layer to be enhanced is determined as the target layer.
  • a hardware compositor including:
  • display enhancement processing is performed on the target layer through a hardware synthesizer
  • display enhancement processing is performed on the target layer through the GPU.
  • the method also includes:
  • perform scene recognition on the target layer to obtain the target scene including:
  • the target scene is determined from the candidate scenes.
  • perform scene recognition on the target layer to obtain the target scene including:
  • Scene recognition is performed on the target layer in the nth image frame to obtain the target scene
  • the method also includes:
  • the target enhancement parameter is determined as the enhancement parameter corresponding to the target layer in the n+1th to n+mth image frames, where n and m are positive integers.
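The parameter-reuse scheme of this clause (recognize the scene on the nth frame, reuse the resulting enhancement parameters for the n+1th to n+mth frames) can be sketched as a small cache. The `recognize` callback below stands in for the NPU scene recognition model and is hypothetical:

```python
class EnhancementParamCache:
    """Recognize the scene on frame n, then reuse its parameters for m frames."""

    def __init__(self, recognize, m):
        self.recognize = recognize   # frame -> (scene, params); stand-in for the NPU model
        self.m = m                   # number of following frames that reuse the parameters
        self.cached_params = None
        self.valid_until = -1        # last frame index the cached parameters cover

    def params_for(self, frame_index, frame):
        if frame_index > self.valid_until:
            _scene, params = self.recognize(frame)     # run recognition on this frame
            self.cached_params = params
            self.valid_until = frame_index + self.m    # reuse for the next m frames
        return self.cached_params

# Usage: with m=2, recognition runs on frames 0 and 3; frames 1-2 and 4-5
# reuse the cached parameters (values below are invented).
calls = []
def npu_recognize(frame):
    calls.append(frame)
    return "sky", {"saturation": 1.2, "contrast": 1.1}

cache = EnhancementParamCache(npu_recognize, m=2)
params = [cache.params_for(i, frame=None) for i in range(6)]
```

This shows why the scheme reduces load: six frames require only two recognition passes.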
  • the display enhancement processing is performed on the target layer through the GPU, including:
  • display enhancement processing is performed on the i-th scene area in the target layer through the GPU.
  • the target enhancement parameter includes at least one of saturation, contrast and sharpness.
  • the GPU is used to perform display enhancement processing and layer composition on the layer to obtain a second composition layer, including:
  • display enhancement processing and layer synthesis are performed on the layers through the GPU to obtain a second composite layer.
  • the method also includes:
  • a layer compositing mode switching command is triggered, and the layer is subjected to display enhancement processing and layer compositing through a hardware compositor to obtain a third compositing layer;
  • An image display is performed on the third composite layer through a display component.
  • the hardware synthesizer is a mobile display processor MDP.
  • An image frame is composed of multiple layers to be synthesized, where different layers to be synthesized display different content. Since not all of the content displayed in each layer to be synthesized needs display enhancement (for example, the layers corresponding to the status bar and the navigation bar do not need display enhancement, but the layer corresponding to the wallpaper does), before performing display enhancement processing on the layers, the computer device first needs to determine the target layer from the layers to be synthesized, so as to subsequently perform display enhancement on the target layer.
  • the target layer is a layer with display enhancement requirements
  • the number of target layers is at least one.
  • FIG. 5 shows a flowchart of an image display method according to another exemplary embodiment of the present application.
  • This embodiment uses the method applied to computer equipment as an example for illustration, and the method includes:
  • Step 501 Determine the identifier of the layer to be enhanced based on the foreground application, where the identifier of the layer to be enhanced is the identifier of a layer in the foreground application that requires display enhancement.
  • the layers that have display enhancement requirements in different applications are determined in advance, and the correspondence between applications and the identifiers of layers to be enhanced is set. The correspondence is pre-configured in the computer device and supports updating.
  • the number of layers with display enhancement requirements in different applications may be the same or different.
  • the computer device obtains the application identifier of the foreground application (which may be the package name of the application program), so as to query the corresponding relationship between the application program and the layer identifier to be enhanced based on the application identifier.
  • if the identifier of the layer to be enhanced is determined based on the foreground application, the computer device performs the following step 502; if the identifier of the layer to be enhanced is not determined based on the foreground application (that is, the computer device does not support display enhancement for the current foreground application), the computer device only composites the layers without performing display enhancement.
  • the computer device can also identify each layer to be synthesized to determine the layer to be enhanced. For example, the computer device can determine the layer to be enhanced according to the content richness of the layer to be synthesized, or according to the degree of change of the layer to be synthesized (such as the content difference of the same layer to be synthesized in adjacent image frames), which is not limited in this embodiment.
  • step 502 among the layers to be synthesized, the layer corresponding to the identifier of the layer to be enhanced is determined as the target layer.
  • the computer device matches the identifier of the layer to be enhanced with the layer identifier corresponding to each layer to be synthesized, so as to determine the matched layer to be synthesized as the target layer.
  • the image frame currently displayed by the foreground application "App B" is composed of surface009, surface003, and surface010; based on the correspondence shown in Table 1, the computer device determines the layer indicated by surface003 as the target layer.
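The Table 1 lookup described above can be sketched as a dictionary from application identifiers (e.g. package names) to to-be-enhanced surface identifiers. The package names here are invented for illustration; only surface003 follows the "App B" example:

```python
# Hypothetical correspondence table (the patent's Table 1 is not reproduced
# here); keys are assumed package names, values are layer identifier sets.
APP_TO_ENHANCED_SURFACES = {
    "com.example.appA": {"surface001"},
    "com.example.appB": {"surface003"},
}

def target_layers(foreground_app, layers_to_composite):
    """Return the identifiers of the layers that should be display-enhanced."""
    wanted = APP_TO_ENHANCED_SURFACES.get(foreground_app, set())
    return [lid for lid in layers_to_composite if lid in wanted]
```

For a frame composed of surface009, surface003, and surface010 while "App B" is in the foreground, this selects only surface003; an empty result corresponds to compositing without display enhancement.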
  • Step 503 performing display enhancement processing on the target layer through the hardware synthesizer.
  • the computer device performs pixel-level display enhancement on the target layer through the hardware synthesizer to obtain a display-enhanced target layer. For example, the computer device performs contrast enhancement, saturation enhancement, and sharpness enhancement on each pixel in the target layer through the hardware synthesizer; or performs contrast reduction, saturation reduction, and sharpness reduction on each pixel in the target layer through the hardware synthesizer.
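A minimal per-pixel sketch of such enhancement is shown below. It scales saturation around the pixel's luma and contrast around mid-gray; this is a simplified stand-in for the hardware synthesizer's pixel-level processing, not the actual algorithm used:

```python
def enhance_pixel(rgb, saturation=1.0, contrast=1.0):
    """Adjust one RGB pixel; factors > 1 enhance, factors < 1 reduce."""
    r, g, b = rgb
    gray = 0.299 * r + 0.587 * g + 0.114 * b          # luma approximation
    out = []
    for c in (r, g, b):
        c = gray + saturation * (c - gray)            # push away from / toward gray
        c = 128 + contrast * (c - 128)                # stretch / squeeze around mid-gray
        out.append(max(0, min(255, round(c))))        # clamp to valid 8-bit range
    return tuple(out)
```

A neutral gray pixel is unchanged by any factor, while a saturated red pixel is clamped when over-enhanced.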
  • the computer device enhances the display of the layer indicated by surface003 through a hardware synthesizer to obtain a layer of surface003 after display enhancement.
  • step 504 layer synthesis is performed on the enhanced target layer and other layers to obtain a first composite layer.
  • after the display enhancement is completed, the hardware synthesizer performs layer synthesis on the display-enhanced target layer and the other layers that have not undergone display enhancement processing, to obtain the first composite layer.
  • the specific process of synthesizing layers by the hardware synthesizer is not described in detail.
  • the computer device performs layer synthesis on the display-enhanced surface003 layer and the surface009 and surface010 layers that have not undergone display enhancement, to obtain the first composite layer, wherein the surface009 layer is at the bottom of the surface003 layer and the surface010 layer is on top of the surface003 layer.
  • Step 505 in response to the layer compositing mode switching instruction, perform display enhancement processing on the target layer through the GPU.
  • the computer device continues to perform display enhancement processing on the target layer through the GPU.
  • step 506 layer synthesis is performed on the enhanced target layer and other layers to obtain a second composite layer.
  • after the display enhancement is completed, the GPU performs layer composition on the display-enhanced target layer and the other layers that have not undergone display enhancement processing, to obtain the second composite layer.
  • the specific process of combining layers by the GPU is not described in detail.
  • the layers with display enhancement requirements in different applications are preset, so that the target layer can be determined from the multiple layers to be synthesized according to the foreground application, so as to perform display enhancement on the target layer and avoid the unnecessary processing consumption caused by performing display enhancement on all layers in the image frame.
  • when performing display enhancement on the target layer, uniform enhancement parameters may be used. However, since different scenes (related to the display content in the layer) have different enhancement requirements for the display effect, display enhancement using uniform enhancement parameters does not produce good results.
  • for example, when the target layer contains sky, sea, or food, the display effect needs to be improved by increasing saturation; when the target layer contains buildings, the display effect needs to be improved by increasing sharpness; and when the target layer contains human faces, the display effect needs to be improved by reducing saturation and sharpness.
  • the computer device needs to determine different enhancement parameters, so as to perform targeted display enhancement on the layer based on the enhancement parameters, so as to improve image display effects in different scenarios.
  • the following uses an exemplary embodiment for description.
  • FIG. 7 shows a flowchart of an image display method according to another exemplary embodiment of the present application.
  • This embodiment uses the method applied to computer equipment as an example for illustration, and the method includes:
  • Step 701 Determine the identifier of the layer to be enhanced based on the foreground application, where the identifier of the layer to be enhanced is the identifier of a layer in the foreground application that requires display enhancement.
  • Step 702 among the layers to be synthesized, the layer corresponding to the identifier of the layer to be enhanced is determined as the target layer.
  • For implementation manners of steps 701 to 702, reference may be made to steps 501 to 502, which will not be repeated in this embodiment.
  • Step 703: perform scene recognition on the target layer to obtain the target scene.
  • In one possible implementation, after the target layer is determined, the computer device performs scene recognition on the target layer in every image frame, or performs scene recognition on the target layer at a target recognition frequency (for example, once every 500 ms), to obtain the target scene.
  • Optionally, the target scene belongs to preset candidate scenes, which may include any one of a portrait scene, a sky scene, a grass scene, a food scene, and a building scene, or a combination of at least two of them (such as a person + grass scene).
  • The embodiments of this application do not limit the specific types of candidate scenes.
  • the computer device is provided with an NPU and a scene recognition model.
  • When performing scene recognition on the target layer, the computer device uses the scene recognition model (running on the NPU). This step may include the following steps:
  • Because target layers differ in size across applications while the scene recognition model has input size requirements, the target layer must first be scaled to the target size before scene recognition, so that the scaled layer conforms to the model input size of the scene recognition model.
  • In an illustrative example, the computer device uniformly scales the target layer to 256 px × 256 px.
  • Optionally, the input of the scene recognition model is an image, and the output is the probability of each candidate scene.
  • In one possible implementation, the backbone of the scene recognition model is a convolutional neural network (for extracting features from the image), followed by a classification network (for classifying the scene according to the image features); the scene probabilities output by the classification network are the probabilities that the input image belongs to each candidate scene.
  • the embodiment of the present application does not limit the specific model structure of the scene recognition model.
  • Optionally, the scene recognition model in the computer device supports updating: when the supported candidate scenes change, or the model is optimized (to improve recognition accuracy), the model parameters of the scene recognition model need to be updated.
  • In one possible implementation, if the highest scene probability exceeds a probability threshold (for example, 0.6), the computer device determines the candidate scene corresponding to the highest probability as the target scene.
  • Combining the example above, the computer device determines that the target scene is a food scene.
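  • The selection rule above (highest probability above a confidence threshold) can be sketched as follows. The scene names, threshold value, and function name are illustrative assumptions; a real implementation would obtain the probabilities from the scene recognition model running on the NPU.

```python
# Illustrative candidate scenes, in the order the model outputs probabilities.
CANDIDATE_SCENES = ["portrait", "sky", "grass", "food", "building"]

def pick_scene(probabilities, threshold=0.6):
    """Map model output probabilities to a target scene.

    Returns the candidate scene with the highest probability if it exceeds
    `threshold`, otherwise None (no confident scene, so fall back to plain
    composition or a default enhancement).
    """
    best = max(range(len(probabilities)), key=lambda i: probabilities[i])
    return CANDIDATE_SCENES[best] if probabilities[best] > threshold else None
```

  • With the example probabilities (0.2, 0.05, 0.04, 0.7, 0.01) and a threshold of 0.6, this selects the food scene.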
  • the computer device provides a scene setting entry, and the user can manually set the display scene through the scene setting entry.
  • the computer device determines the target scene based on the scene setting operation, and the scene setting operation may be a trigger operation on a scene setting option, which is not limited in this embodiment.
  • Step 704: determine the target enhancement parameters corresponding to the target scene, where different scenes correspond to different enhancement parameters, and different enhancement parameters correspond to different display enhancement effects.
  • In one possible implementation, different enhancement parameters are set in advance for different scenes, with different parameters corresponding to different display enhancement effects, providing targeted enhancement for each scene.
  • After determining the target scene, the computer device determines the target enhancement parameters corresponding to that scene, so that targeted display enhancement processing can subsequently be performed.
  • Optionally, the enhancement parameters include at least one of saturation, contrast, and sharpness, and an enhancement parameter can be a positive value (such as increasing saturation) or a negative value (such as reducing saturation).
  • The embodiments of this application do not limit the specific types of enhancement parameters.
  • Optionally, when there are multiple target layers, the computer device needs to determine the target scene of each target layer and then obtain the corresponding target enhancement parameters; details are not repeated in this embodiment.
  • It should be noted that when the target scene of the target layer changes, the computer device needs to re-determine the target enhancement parameters, to keep display enhancement processing accurate and up to date.
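  • The scene-to-parameter mapping of step 704 can be sketched as a simple lookup. The concrete numbers and dictionary layout are assumptions for illustration (the actual Table 2 values are published only as an image); the direction of each adjustment follows the examples in the description (raise saturation for sky/food, raise sharpness for buildings, lower both for portraits).

```python
# Illustrative per-scene enhancement parameters; positive values raise a
# property, negative values lower it. The concrete numbers are assumptions.
SCENE_PARAMS = {
    "sky":      {"saturation": 0.2,  "sharpness": 0.0},
    "food":     {"saturation": 0.3,  "sharpness": 0.0},
    "building": {"saturation": 0.0,  "sharpness": 0.2},
    "portrait": {"saturation": -0.1, "sharpness": -0.1},
}

def params_for(scene):
    """Return the enhancement parameters for `scene`.

    Unknown scenes fall back to neutral parameters (no enhancement). When the
    recognized scene changes, the caller re-invokes this lookup so that the
    parameters stay current.
    """
    return SCENE_PARAMS.get(scene, {"saturation": 0.0, "sharpness": 0.0})
```

  • Keeping the mapping as data (rather than branching code) also makes it straightforward to update the parameter set together with the scene recognition model.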
  • Step 705: based on the target enhancement parameters, perform display enhancement processing on the target layer through the hardware compositor.
  • Further, the hardware compositor acquires the target enhancement parameters and processes the pixels in the target layer according to the display enhancement mode indicated by those parameters, obtaining the display-enhanced target layer.
  • In some embodiments, when at least two target scenes are determined (for example, when photos of different scenes are viewed at the same time), the computer device can apply different target enhancement parameters to different scene areas of the target layer.
  • the computer device determines the i-th scene area corresponding to the i-th target scene, and displays the i-th scene area in the target layer through the GPU based on the i-th target enhancement parameter corresponding to the i-th target scene Enhanced processing.
  • the i-th target scene is any one of at least two target scenes.
  • Optionally, when the scene recognition model performs scene recognition on the target layer, it can output not only the target scene but also the area information of the scene area corresponding to the target scene; based on this area information, the computer device applies different display enhancement processing to different target scenes.
  • In an illustrative example, when the user views two photos at the same time, with the target scene of the left photo being "person" and that of the right photo being "landscape", the computer device uses the display enhancement parameters corresponding to "person" for the left half of the target layer and those corresponding to "landscape" for the right half.
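  • The region-wise application of parameters can be sketched as follows. The layer is modeled as a 2D list of scalar pixel values and the enhancement as a per-region gain; this is a stand-in for real pixel data and real enhancement math, and the region format is an assumption (in practice the region information would come from the scene recognition model).

```python
def enhance_regions(layer, regions):
    """Apply per-region enhancement to a layer.

    `layer` is a 2D list of scalar pixel values; `regions` maps (x0, x1)
    column ranges to a gain, echoing the split where the left half uses
    "person" parameters and the right half "landscape" parameters. The input
    layer is left untouched; an enhanced copy is returned.
    """
    out = [row[:] for row in layer]  # copy so the source layer is preserved
    for (x0, x1), gain in regions.items():
        for row in out:
            for x in range(x0, x1):
                row[x] = row[x] * gain
    return out
```

  • A GPU or hardware-compositor implementation would perform the same per-region parameterization in parallel rather than per pixel in Python; the sketch only shows the control flow.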
  • Step 706: compose the display-enhanced target layer with the other layers to obtain the first composite layer.
  • Schematically, the image frame currently displayed by the foreground application consists of surface009, surface003, and surface010, and the surface003 layer is the target layer.
  • The computer device performs scene recognition on the surface003 layer using the scene recognition model and, based on the recognized target scene, determines the corresponding target enhancement parameters from the enhancement parameter set.
  • Based on the target enhancement parameters, the hardware compositor performs display enhancement on the layer indicated by surface003 to obtain the display-enhanced surface003 layer, and then composes the enhanced surface003 layer with the non-enhanced surface009 and surface010 layers to obtain the first composite layer.
  • Step 707: in response to the layer composition mode switching instruction, perform display enhancement processing on the target layer through the GPU based on the target enhancement parameters.
  • In one possible implementation, the determined target enhancement parameters are stored in a designated storage area, and both the hardware compositor and the GPU have read permission for that area; when the layer composition mode is switched, the GPU reads the target enhancement parameters and performs display enhancement processing on the target layer based on them.
  • Step 708: compose the display-enhanced target layer with the other layers to obtain the second composite layer.
  • In this embodiment, the scene recognition model performs scene recognition on the target layer, and the target enhancement parameters for display enhancement of the target layer are determined based on the recognized target scene, realizing targeted display enhancement that helps improve the image display effect in different scenarios.
  • Since the target scene of the target layer remains unchanged over a short period, to reduce power consumption the computer device performs scene recognition on the target layer in the n-th image frame and, after obtaining the target scene, applies the corresponding target enhancement parameters to the m consecutive image frames that follow. That is, the target enhancement parameters are used as the enhancement parameters of the target layer in the (n+1)-th to (n+m)-th image frames, so scene recognition on the target layer can be skipped for those frames (because the interval is short, their recognition result would very likely be the same as for the n-th frame).
  • Optionally, m is a fixed value, or is dynamically adjusted based on the scene recognition result of the target layer in the n-th image frame (for example, m is 5 for a sky scene and 3 for a portrait scene); this embodiment does not limit it.
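  • The reuse of recognition results for m consecutive frames can be sketched as a small cache. The class and method names are illustrative assumptions; `recognize` stands in for the NPU-side scene recognition plus parameter lookup.

```python
class EnhancementCache:
    """Reuse scene-recognition results for m consecutive frames.

    Recognition runs on frame n; its parameters are reused for frames n+1
    through n+m, since the scene is unlikely to change over a short interval.
    `recognize` is any callable mapping a layer to enhancement parameters.
    """

    def __init__(self, recognize, m=5):
        self.recognize = recognize
        self.m = m
        self.cached = None
        self.remaining = 0  # frames left before recognition runs again

    def params_for_frame(self, layer):
        if self.remaining > 0:
            self.remaining -= 1
            return self.cached
        self.cached = self.recognize(layer)
        self.remaining = self.m
        return self.cached
```

  • As the description notes, `m` could itself depend on the recognized scene (for example 5 for sky, 3 for portrait), in which case `self.m` would be reset from the recognition result instead of being fixed.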
  • FIG. 10 shows a structural block diagram of an image display device provided by an exemplary embodiment of the present application.
  • The apparatus can be implemented as all or part of a computer device through software, hardware, or a combination of the two.
  • The apparatus includes:
  • the first compositing module 1001 is configured to perform display enhancement processing and layer compositing on layers through a hardware compositor to obtain a first composite layer;
  • a display module 1002 configured to display an image of the first composite layer through a display component
  • the second compositing module 1003 is configured to respond to a layer compositing mode switching instruction, perform display enhancement processing and layer compositing on the layers through the graphics processor GPU, to obtain a second composite layer;
  • the display module 1002 is configured to display an image of the second composite layer through the display component.
  • the first synthesis module 1001 includes:
  • a first enhancement unit configured to perform display enhancement processing on the target layer through the hardware synthesizer
  • a first synthesis unit configured to perform layer synthesis on the enhanced target layer and other layers to obtain the first composite layer
  • the second synthesis module 1003 includes:
  • a second enhancement unit configured to perform display enhancement processing on the target layer through the GPU
  • the second compositing unit is configured to composite the target layer and other layers after display enhancement to obtain the second composite layer.
  • the device includes:
  • the layer determination module is used to determine the target layer from the layers to be synthesized.
  • the layer determination module includes:
  • the first determining unit is configured to determine the identity of the layer to be enhanced based on the foreground application, where the layer identity to be enhanced is the identity of a layer in the foreground application that has display enhancement requirements;
  • the second determining unit is configured to determine, among the layers to be synthesized, a layer corresponding to the identifier of the layer to be enhanced as the target layer.
  • the first synthesis unit is used for:
  • the second synthesis unit is used for:
  • Based on the target enhancement parameters, the GPU performs display enhancement processing on the target layer.
  • the device also includes:
  • a scene determination module configured to perform scene recognition on the target layer to obtain a target scene, or determine the target scene based on a scene setting operation
  • a parameter determination module configured to determine the target enhancement parameters corresponding to the target scene, wherein different scenes correspond to different enhancement parameters, and different enhancement parameters correspond to different display enhancement effects.
  • the scene recognition module is used for:
  • the target layer of the target size is input into the scene recognition model to obtain the output scene probability, the target size conforms to the model input size of the scene recognition model, and the scene probability includes the probability corresponding to the candidate scene;
  • the target scene is determined from the candidate scenes based on the scene probabilities.
  • the identification module is used for:
  • the device also includes:
  • a multiplexing module configured to determine the target enhancement parameter as an enhancement parameter corresponding to the target layer in the n+1th to n+mth image frames, where n and m are positive integers.
  • the second synthesis unit is used for:
  • Based on the i-th target enhancement parameters corresponding to the i-th target scene, the GPU performs display enhancement processing on the i-th scene area in the target layer.
  • the target enhancement parameters include at least one of saturation, contrast and sharpness.
  • the second synthesis module 1003 is configured to:
  • the GPU performs display enhancement processing and layer synthesis on the layers to obtain the second composite layer.
  • the device also includes:
  • the third compositing module is configured to trigger the layer composition mode switching instruction when the image display direction stops changing, and to perform display enhancement processing and layer composition on the layers through the hardware compositor to obtain a third composite layer;
  • a third display module configured to display an image of the third composite layer through the display component.
  • the hardware synthesizer is a mobile display processor MDP.
  • To sum up, by default, display enhancement and layer composition are performed on the layers through the hardware compositor, and the composed first composite layer is displayed through the display component, optimizing the image display effect. When the layer composition mode is switched and the GPU takes over layer composition, display enhancement of the layers continues on the GPU, and the composed second composite layer is displayed through the display component. This ensures that display enhancement continues before and after the switch of layer composition mode, avoids an abrupt change in the image display effect across the switch, and further improves image display quality.
  • The layers with display enhancement requirements in different applications are preset, so that the target layer can be determined from the multiple layers to be composed according to the foreground application. Display enhancement is then applied only to the target layer, avoiding the unnecessary processing consumption of enhancing every layer in the image frame.
  • The scene recognition model performs scene recognition on the target layer, and the target enhancement parameters for display enhancement of the target layer are determined based on the recognized target scene, realizing targeted display enhancement that helps improve the image display effect in different scenarios.
  • An embodiment of this application further provides a computer-readable storage medium storing at least one program, the at least one program being executed by a processor to implement the image display method described in the foregoing embodiments.
  • A computer program product is also provided, including computer instructions stored in a computer-readable storage medium; a processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, so that the computer device performs the image display method provided in the above optional implementations.
  • the functions described in the embodiments of the present application may be implemented by hardware, software, firmware or any combination thereof.
  • the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium.
  • Computer-readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another.
  • A storage medium may be any available medium that can be accessed by a general-purpose or special-purpose computer.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Processing (AREA)

Abstract

An image display method and apparatus, a computer device, and a storage medium, belonging to the technical field of image display. The method comprises: performing display enhancement processing and layer composition on layers through a hardware compositor to obtain a first composite layer (301); displaying an image of the first composite layer through a display component (302); in response to a layer composition mode switching instruction, performing display enhancement processing and layer composition on layers through a GPU to obtain a second composite layer (303); and displaying an image of the second composite layer through the display component (304). The solution provided in the embodiments of this application ensures that display enhancement continues before and after the layer composition mode is switched, avoids an abrupt change in the image display effect across the switch, and further improves image display quality.

Description

Image display method and apparatus, computer device, and storage medium
This application claims priority to Chinese Patent Application No. 202110914167.9, entitled "Image display method and apparatus, computer device and storage medium" and filed on August 10, 2021, the entire contents of which are incorporated herein by reference.
Technical Field
Embodiments of this application relate to the technical field of image display, and in particular to an image display method and apparatus, a computer device, and a storage medium.
Background
With the development of display technology, people have increasingly high requirements for image display effects; for example, in video playback scenarios, users expect the displayed video to be clear, vivid, and pleasing to the eye.
In the related art, to improve the image display effect, image frames may be processed before display, and the processed frames are transmitted to the display component for display, thereby improving the final image display quality.
Summary
Embodiments of this application provide an image display method and apparatus, a computer device, and a storage medium. The technical solution is as follows:
In one aspect, an embodiment of this application provides an image display method, the method including:
performing display enhancement processing and layer composition on layers through a hardware compositor to obtain a first composite layer;
displaying an image of the first composite layer through a display component;
in response to a layer composition mode switching instruction, performing display enhancement processing and layer composition on layers through a graphics processing unit (GPU) to obtain a second composite layer;
displaying an image of the second composite layer through the display component.
In another aspect, an embodiment of this application provides an image display apparatus, the apparatus including:
a first composition module, configured to perform display enhancement processing and layer composition on layers through a hardware compositor to obtain a first composite layer;
a display module, configured to display an image of the first composite layer through a display component;
a second composition module, configured to, in response to a layer composition mode switching instruction, perform display enhancement processing and layer composition on layers through a GPU to obtain a second composite layer;
the display module being configured to display an image of the second composite layer through the display component.
In another aspect, an embodiment of this application provides a computer device, the computer device including a processor, a memory, and a display component, the memory storing at least one program, the at least one program being loaded and executed by the processor to implement the image display method described in the above aspect.
In another aspect, an embodiment of this application provides a computer-readable storage medium storing at least one program, the at least one program being executed by a processor to implement the image display method described in the above aspect.
According to another aspect of this application, a computer program product is provided, the computer program product including computer instructions stored in a computer-readable storage medium; a processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, causing the computer device to perform the image display method provided in the optional implementations above.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of the principle of an image display method according to an exemplary embodiment;
FIG. 2 is a structural block diagram of a computer device according to an exemplary embodiment of this application;
FIG. 3 is a flowchart of an image display method according to an exemplary embodiment of this application;
FIG. 4 is a schematic diagram of layer composition and display during screen rotation;
FIG. 5 is a flowchart of an image display method according to another exemplary embodiment of this application;
FIG. 6 is a schematic diagram of layer display enhancement and composition according to an exemplary embodiment;
FIG. 7 is a flowchart of an image display method according to another exemplary embodiment of this application;
FIG. 8 is a schematic diagram of layer display enhancement and composition according to an exemplary embodiment;
FIG. 9 is a schematic diagram of an enhancement parameter reuse process according to an exemplary embodiment of this application;
FIG. 10 is a structural block diagram of an image display apparatus according to an exemplary embodiment of this application.
Detailed Description
To make the objectives, technical solutions, and advantages of this application clearer, the implementations of this application are described in further detail below with reference to the accompanying drawings.
As used herein, "multiple" means two or more. "And/or" describes an association between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A alone, both A and B, or B alone. The character "/" generally indicates an "or" relationship between the associated objects.
Display enhancement is an image processing technology that enhances the display effect of an image to improve its visual impression. Divided by the stage at which it is applied, display enhancement can be implemented at the decoding end or at the display end. When implemented at the decoding end, the computer device performs display enhancement on image frames during the image rendering stage; when implemented at the display end, the computer device does not enhance the frames during rendering but instead performs display enhancement on the layers during the layer composition stage, and the composed image is then transmitted to the display component for display.
Further, when display enhancement is implemented at the display end, to reduce the processing load on the GPU (especially in display scenarios with high GPU demand, such as games), the computer device usually uses a hardware compositor for layer composition and performs display enhancement on the layers during that composition. However, in certain usage scenarios, when the computer device cannot compose layers through the hardware compositor, it can only compose them through the GPU; since the layers do not pass through the hardware compositor, they cannot be display-enhanced, and display enhancement therefore fails.
In the technical solution provided by the embodiments of this application, by default, the computer device performs display enhancement and layer composition on layer 11 through hardware compositor 12 to obtain first composite layer 13, and sends first composite layer 13 for display by display component 14. When the layer composition mode is switched, to ensure display enhancement continues, the computer device continues to perform display enhancement on layer 11 while composing it through GPU 15, and sends the composed second composite layer 16 for display, ensuring that the images displayed by display component 14 before and after the switch are both display-enhanced, optimizing the image display effect.
Referring to FIG. 2, which shows a structural block diagram of a computer device according to an exemplary embodiment of this application. The computer device may be an electronic device such as a smartphone, a tablet computer, or a portable personal computer. The computer device in this application may include one or more of the following components: a processor 210, a memory 220, and a display component 230.
The processor 210 may include one or more processing cores. The processor 210 connects the various parts of the computer device using various interfaces and lines, and performs the functions of the computer device and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 220 and invoking data stored in the memory 220. Optionally, the processor 210 may be implemented in at least one hardware form of digital signal processing (DSP), field-programmable gate array (FPGA), and programmable logic array (PLA). The processor 210 may integrate one or a combination of a central processing unit (CPU), a GPU, a neural-network processing unit (NPU), a modem, and the like. The CPU mainly handles the operating system, user interface, and applications; the GPU is responsible for rendering and drawing display content; the NPU handles artificial-intelligence-related data processing; and the modem handles wireless communication. It is understood that the modem may instead be implemented on a separate communication chip rather than integrated into the processor 210.
In one possible design, the processor 210 in the embodiments of this application includes a hardware compositor 211, a GPU 212, and an NPU 213. The hardware compositor 211 performs layer composition by default; the GPU 212 performs layer composition when the hardware compositor 211 cannot; and the NPU 213 performs scene recognition on layers through a neural network model, so that the hardware compositor 211 and the GPU 212 can subsequently perform display enhancement based on the enhancement parameters corresponding to the scenes in the layers.
Optionally, the hardware compositor 211 and the NPU 213 may be provided in a coprocessor independent of the processor 210, which is not limited in this embodiment.
The memory 220 may include random access memory (RAM) or read-only memory (ROM). Optionally, the memory 220 includes a non-transitory computer-readable storage medium. The memory 220 may be used to store instructions, programs, code, code sets, or instruction sets. The memory 220 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing the operating system, instructions for at least one function (such as touch, audio playback, and image playback), and instructions for implementing the foregoing method embodiments; the operating system may be Android (including systems deeply developed on Android), the iOS system developed by Apple (including systems deeply developed on iOS), or other systems. The data storage area may also store data created by the computer device during use.
The display component 230 is a component for image display. Optionally, the display component 230 also has a touch function for receiving touch operations on or near it performed by the user with a finger, a stylus, or any other suitable object. The display component 230 may be designed as one or a combination of a full screen, a curved screen, and a shaped screen, which is not limited in the embodiments of this application.
In addition, those skilled in the art will understand that the structure of the computer device shown in the above figures does not limit the computer device; the computer device may include more or fewer components than shown, combine certain components, or use a different arrangement of components. For example, the computer device may also include a radio frequency circuit, a camera component, sensors, an audio circuit, a wireless fidelity (WiFi) component, a power supply, and a Bluetooth component, which are not described here.
Referring to FIG. 3, which shows a flowchart of an image display method according to an exemplary embodiment of this application. This embodiment is described using an example in which the method is applied to a computer device. The method includes:
Step 301: perform display enhancement processing and layer composition on layers through a hardware compositor to obtain a first composite layer.
In one possible implementation, by default, the computer device performs display enhancement on the multiple layers (surfaces) of the current image frame through the hardware compositor and composes the enhanced layers to obtain a composite layer. Because the layers have undergone display enhancement, the subsequent image display effect is improved.
Layer composition is the process of stacking the layers according to their display areas and display order. For example, when the layers of the current image frame include a first layer for the status bar, a second layer for the navigation bar, and a third layer for the wallpaper, the hardware compositor stacks the first layer on top of the third layer and the second layer at the bottom of the third layer to obtain the composite layer.
Optionally, the hardware compositor is a mobile display processor (MDP), and the MDP may be provided in the processor or in a coprocessor independent of the processor.
In some embodiments, the computer device performs display enhancement on all layers of the current image frame through the hardware compositor, or only on specific layers of the current frame, and the enhancement modes of different layers may be the same or different.
Step 302: display an image of the first composite layer through the display component.
In some embodiments, the composed first composite layer is stored in a frame buffer (FrameBuffer); the computer device further sends the frame buffer to the display component, which displays the image based on the first composite layer in the frame buffer.
Step 303: in response to a layer composition mode switching instruction, perform display enhancement processing and layer composition on layers through the GPU to obtain a second composite layer.
In one possible implementation, the layer composition mode switching instruction is triggered when a switching condition is met, and the computer device switches the executor of layer composition from the hardware compositor to the GPU based on the instruction. Because the hardware compositor performed display enhancement before composing the layers, to keep the image display effect consistent before and after the switch, the computer device likewise first performs display enhancement on the layers of the current image frame and then composes the enhanced layers to obtain the second composite layer. Optionally, the GPU and the hardware compositor perform display enhancement on the same layers of the image frame, using the same parameters.
The layer composition mode switching instruction may be triggered by a user operation or automatically triggered by the computer device according to its operating state.
In some embodiments, the computer device switches the layer composition mode when the hardware compositor cannot perform layer composition, or in scenarios that require composition through the GPU.
In one possible application scenario, the hardware compositor can compose layers only when the image display direction is 90° or 180°, and cannot do so in other display directions; therefore, when the image display direction of the computer device changes (triggered manually by the user, or automatically based on a gravity sensor), the computer device triggers the layer composition mode switching instruction and, during the change of display direction, switches to composition through the GPU.
Schematically, in portrait mode, a smartphone uses the hardware compositor for layer composition; while the display direction switches from portrait to landscape, the smartphone switches to GPU composition. Likewise, when the display direction rotates 180° (for example, when the phone is flipped), or when switching from landscape back to portrait, the computer device also needs to switch to GPU composition.
In another possible application scenario, the hardware compositor is used to compose simple image frames (such as 2D frames); when complex frames (such as 3D frames) need to be composed, the computer device switches to GPU composition for a better composition result.
Schematically, when displaying a 2D picture, a smartphone uses the hardware compositor for layer composition; when the displayed picture switches from 2D to 3D, the smartphone switches to GPU composition.
Of course, besides the above scenarios, any other scenario requiring layer composition through the GPU may be regarded as a composition switching scenario; this embodiment does not limit how the layer composition mode switch is triggered.
Step 304: display an image of the second composite layer through the display component.
Similar to hardware-compositor composition, the composed second composite layer is stored in the frame buffer, and the computer device sends the frame buffer to the display component, which displays the image based on the second composite layer. Because both the first and second composite layers have been display-enhanced (that is, layers can still be enhanced without passing through the hardware compositor), the continuity of the enhancement effect across the switch is guaranteed, avoiding abrupt image changes (which could appear as flicker) caused by losing enhancement after the switch.
Optionally, when the layer composition mode switches again, the computer device resumes display enhancement and layer composition through the hardware compositor.
In one possible application scenario, when the image display direction stops changing, the layer composition mode switching instruction is triggered again, and the computer device again performs display enhancement and layer composition through the hardware compositor to obtain a third composite layer, which is displayed through the display component.
Schematically, as shown in FIG. 4, when a smartphone plays video, it performs display enhancement and layer composition on layer 41 through hardware compositor 42 and sends the result for display by display component 43. When the smartphone is flipped 180°, the displayed video picture must be flipped, and hardware compositor 42 cannot compose the layers during the flip; therefore, during the flip the smartphone performs display enhancement and layer composition on layer 41 through GPU 44 and sends the result to display component 43. When the flip is complete, the smartphone switches back to hardware compositor 42 for display enhancement and layer composition of layer 41, ensuring a consistent enhancement effect throughout the flip. Both hardware compositor 42 and GPU 44 obtain enhancement parameters 45 to perform display enhancement.
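The switching flow illustrated above (hardware compositor by default, GPU during the rotation, both reading the same enhancement parameters) can be sketched in Python. The backend interface, function name, and data shapes are illustrative assumptions rather than the actual implementation:

```python
def compose_frame(layers, target_ids, params, rotating, hw, gpu):
    """Pick the composition path while keeping enhancement continuous.

    `hw` and `gpu` are objects exposing the same enhance/compose interface
    (stand-ins for the hardware compositor and the GPU path). During a
    display-direction change (`rotating` is True) the GPU path is used;
    otherwise the hardware compositor is used. Both paths read the same
    `params`, so the enhancement effect does not jump at the switch.
    """
    backend = gpu if rotating else hw
    enhanced = [
        backend.enhance(layer, params) if lid in target_ids else layer
        for lid, layer in layers
    ]
    return backend.compose(enhanced)
```

Because both backends read the same parameters, the enhanced output is the same regardless of which path composes the frame, which is exactly the continuity property the embodiment aims for.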
To sum up, in the embodiments of this application, by default, display enhancement and layer composition are performed on the layers through the hardware compositor, and the composed first composite layer is displayed through the display component, optimizing the image display effect; when the layer composition mode is switched and the GPU takes over layer composition, display enhancement of the layers continues on the GPU, and the composed second composite layer is displayed through the display component. This ensures that display enhancement continues before and after the switch of layer composition mode, avoids an abrupt change in the image display effect across the switch, and further improves image display quality.
Optionally, performing display enhancement processing and layer composition on layers through the hardware compositor to obtain the first composite layer includes:
performing display enhancement processing on a target layer through the hardware compositor;
composing the display-enhanced target layer with other layers to obtain the first composite layer;
and performing display enhancement processing and layer composition on layers through the GPU to obtain the second composite layer includes:
performing display enhancement processing on the target layer through the GPU;
composing the display-enhanced target layer with other layers to obtain the second composite layer.
Optionally, the method includes:
determining the target layer from the layers to be composed.
Optionally, determining the target layer from the layers to be composed includes:
determining a to-be-enhanced layer identifier based on the foreground application, the to-be-enhanced layer identifier being the identifier of a layer in the foreground application that requires display enhancement;
determining, among the layers to be composed, the layer corresponding to the to-be-enhanced layer identifier as the target layer.
Optionally, performing display enhancement processing on the target layer through the hardware compositor includes:
performing display enhancement processing on the target layer through the hardware compositor based on target enhancement parameters;
and performing display enhancement processing on the target layer through the GPU includes:
performing display enhancement processing on the target layer through the GPU based on the target enhancement parameters.
Optionally, the method further includes:
performing scene recognition on the target layer to obtain a target scene, or determining the target scene based on a scene setting operation;
determining the target enhancement parameters corresponding to the target scene, where different scenes correspond to different enhancement parameters, and different enhancement parameters correspond to different display enhancement effects.
Optionally, performing scene recognition on the target layer to obtain the target scene includes:
scaling the target layer to a target size;
inputting the target layer of the target size into a scene recognition model to obtain output scene probabilities, the target size conforming to the model input size of the scene recognition model, and the scene probabilities including the probabilities of candidate scenes;
determining the target scene from the candidate scenes based on the scene probabilities.
Optionally, performing scene recognition on the target layer to obtain the target scene includes:
performing scene recognition on the target layer in the n-th image frame to obtain the target scene;
and the method further includes:
determining the target enhancement parameters as the enhancement parameters of the target layer in the (n+1)-th to (n+m)-th image frames, n and m being positive integers.
Optionally, there are at least two target scenes;
and performing display enhancement processing on the target layer through the GPU based on the target enhancement parameters includes:
determining an i-th scene area corresponding to an i-th target scene, i being a positive integer;
performing display enhancement processing on the i-th scene area of the target layer through the GPU based on i-th target enhancement parameters corresponding to the i-th target scene.
Optionally, the target enhancement parameters include at least one of saturation, contrast, and sharpness.
Optionally, in response to the layer composition mode switching instruction, performing display enhancement processing and layer composition on layers through the GPU to obtain the second composite layer includes:
triggering the layer composition mode switching instruction when the image display direction changes;
performing display enhancement processing and layer composition on layers through the GPU during the change of the image display direction, to obtain the second composite layer.
Optionally, the method further includes:
triggering the layer composition mode switching instruction when the image display direction stops changing, and performing display enhancement processing and layer composition on layers through the hardware compositor to obtain a third composite layer;
displaying an image of the third composite layer through the display component.
Optionally, the hardware compositor is a mobile display processor (MDP).
An image frame is composed of multiple layers to be composed, with different layers displaying different content. Since not every layer needs display enhancement (for example, the layers of the status bar and navigation bar do not, while the wallpaper layer does), before enhancing layers the computer device first needs to determine the target layer from the layers to be composed, so that enhancement is subsequently applied to the target layer. The target layer is a layer with display enhancement requirements, and there is at least one target layer. An exemplary embodiment is described below.
Referring to FIG. 5, which shows a flowchart of an image display method according to another exemplary embodiment of this application. This embodiment is described using an example in which the method is applied to a computer device. The method includes:
Step 501: determine a to-be-enhanced layer identifier based on the foreground application, the identifier being that of a layer in the foreground application that requires display enhancement.
Since the layers contained in image frames differ between applications, to ensure accurate layer enhancement, in one possible implementation the layers with display enhancement requirements in different applications are determined in advance, and a correspondence between applications and to-be-enhanced layer identifiers is configured. This correspondence is preconfigured in the computer device and supports updating.
Optionally, the number of layers with display enhancement requirements may be the same or different across applications.
In an illustrative example, the correspondence between applications and to-be-enhanced layer identifiers is shown in Table 1.
Table 1
Application    To-be-enhanced layer identifiers
App A          surface001, surface002
App B          surface003
App C          surface004, surface005
In one possible implementation, the computer device obtains the application identifier of the foreground application (which may be the application's package name) and, based on that identifier, queries the to-be-enhanced layer identifier from the correspondence between applications and to-be-enhanced layer identifiers.
In some embodiments, if a to-be-enhanced layer identifier is determined based on the foreground application, the computer device performs step 502 below; if not (that is, the computer device does not support display enhancement for the current foreground application), the computer device only composes the layers without display enhancement.
Of course, to broaden the applicability of display enhancement, in some other embodiments, if no to-be-enhanced layer identifier is determined based on the foreground application, the computer device may analyze the layers to be composed to identify a layer to enhance, for example according to the content richness of a layer or its degree of change (such as content differences of the same layer between adjacent image frames), which is not limited in this embodiment.
Step 502: among the layers to be composed, determine the layer corresponding to the to-be-enhanced layer identifier as the target layer.
Further, the computer device matches the to-be-enhanced layer identifier against the identifiers of the layers to be composed, and determines the matching layer as the target layer.
In an illustrative example, as shown in FIG. 6, the image frame currently displayed by the foreground application "App B" is composed of surface009, surface003, and surface010; based on the correspondence in Table 1, the computer device determines the layer indicated by surface003 as the target layer.
Step 503: perform display enhancement processing on the target layer through the hardware compositor.
The computer device performs (pixel-level) display enhancement on the target layer through the hardware compositor to obtain the enhanced target layer. For example, the computer device enhances the contrast, saturation, and sharpness of each pixel in the target layer through the hardware compositor; or enhances contrast while reducing saturation and sharpness.
Schematically, as shown in FIG. 6, the computer device performs display enhancement on the layer indicated by surface003 through the hardware compositor to obtain the enhanced surface003 layer.
Step 504: compose the display-enhanced target layer with the other layers to obtain the first composite layer.
After enhancement is complete, the hardware compositor composes the enhanced target layer with the other, non-enhanced layers to obtain the first composite layer. The specific composition process of the hardware compositor is not described in detail in the embodiments of this application.
Schematically, as shown in FIG. 6, the computer device composes the enhanced surface003 layer with the non-enhanced surface009 and surface010 layers to obtain the first composite layer, where the surface009 layer is at the bottom of the surface003 layer and the surface010 layer is on top of the surface003 layer.
Step 505: in response to a layer composition mode switching instruction, perform display enhancement processing on the target layer through the GPU.
Likewise, when the layer composition mode is switched, the computer device continues to perform display enhancement on the target layer through the GPU.
Step 506: compose the display-enhanced target layer with the other layers to obtain the second composite layer.
After enhancement is complete, the GPU composes the enhanced target layer with the other, non-enhanced layers to obtain the second composite layer. The specific GPU composition process is not described in detail in the embodiments of this application.
In this embodiment, the layers with display enhancement requirements in different applications are preset, so that the target layer can be determined from the multiple layers to be composed according to the foreground application; display enhancement is then applied only to the target layer, avoiding the unnecessary processing consumption of enhancing every layer in the image frame.
When enhancing the target layer, in one possible implementation, uniform enhancement parameters may be used. However, since the enhancement requirements for display effects differ between scenes (depending on the content displayed in the layer), display enhancement with uniform parameters yields poor results.
For example, when the target layer contains sky, sea, or food, the display effect should be improved by increasing saturation; when the target layer contains buildings, by increasing sharpness; and when the target layer contains human faces, by reducing saturation and sharpness.
In one possible implementation, for different scenes, the computer device needs to determine different enhancement parameters and perform targeted display enhancement on the layer based on those parameters, thereby improving the image display effect in each scenario. An exemplary embodiment is described below.
Referring to FIG. 7, which shows a flowchart of an image display method according to another exemplary embodiment of this application. This embodiment is described using an example in which the method is applied to a computer device. The method includes:
Step 701: determine a to-be-enhanced layer identifier based on the foreground application, the identifier being that of a layer in the foreground application that requires display enhancement.
Step 702: among the layers to be composed, determine the layer corresponding to the to-be-enhanced layer identifier as the target layer.
For implementations of steps 701 to 702, reference may be made to steps 501 to 502; details are not repeated in this embodiment.
Step 703: perform scene recognition on the target layer to obtain a target scene.
In one possible implementation, after the target layer is determined, the computer device performs scene recognition on the target layer in every image frame, or performs scene recognition on the target layer at a target recognition frequency (for example, once every 500 ms), to obtain the target scene.
Optionally, the target scene belongs to preset candidate scenes, which may include any one of a portrait scene, a sky scene, a grass scene, a food scene, and a building scene, or a combination of at least two of them (such as a person + grass scene). The embodiments of this application do not limit the specific types of candidate scenes.
In some embodiments, the computer device is provided with an NPU and a scene recognition model; when performing scene recognition on the target layer, the computer device uses the scene recognition model (running on the NPU). This step may include the following steps:
1. Scale the target layer to a target size.
Because target layers differ in size across applications while the scene recognition model has input size requirements, before scene recognition the target layer must first be scaled to the target size, so that the scaled layer conforms to the model input size of the scene recognition model.
In an illustrative example, the computer device uniformly scales the target layer to 256 px × 256 px.
2. Input the target layer of the target size into the scene recognition model to obtain the output scene probabilities; the target size conforms to the model input size of the scene recognition model, and the scene probabilities include the probabilities of the candidate scenes.
Optionally, the input of the scene recognition model is an image and the output is the probability of each candidate scene. In one possible implementation, the backbone of the scene recognition model is a convolutional neural network (for extracting features from the image), followed by a classification network (for classifying the scene according to the image features); the scene probabilities output by the classification network are the probabilities that the input image belongs to each candidate scene. The embodiments of this application do not limit the specific model structure of the scene recognition model.
Optionally, the scene recognition model in the computer device supports updating: when the supported candidate scenes change, or the model is optimized (to improve recognition accuracy), the model parameters of the scene recognition model need to be updated.
In an illustrative example, when the candidate scenes include portrait, sky, grass, food, and building scenes, the scene probabilities given by the scene recognition model are: P1 = 0.2, P2 = 0.05, P3 = 0.04, P4 = 0.7, P5 = 0.01.
3. Determine the target scene from the candidate scenes based on the scene probabilities.
In one possible implementation, if the highest scene probability exceeds a probability threshold (for example, 0.6), the computer device determines the candidate scene corresponding to the highest probability as the target scene.
Combining the example in the above step, the computer device determines that the target scene is the food scene.
Besides automatic scene recognition by the computer device, in other possible implementations the computer device provides a scene setting entry through which the user can manually set the display scene. Correspondingly, the computer device determines the target scene based on a scene setting operation, which may be a trigger operation on a scene setting option; this embodiment does not limit it.
Step 704: determine the target enhancement parameters corresponding to the target scene, where different scenes correspond to different enhancement parameters, and different enhancement parameters correspond to different display enhancement effects.
In one possible implementation, different enhancement parameters are set in advance for different scenes, with different parameters corresponding to different display enhancement effects, providing targeted enhancement for each scene. After determining the target scene, the computer device determines the corresponding target enhancement parameters, so that targeted display enhancement processing can subsequently be performed.
Optionally, the enhancement parameters include at least one of saturation, contrast, and sharpness, and an enhancement parameter may be a positive value (such as increasing saturation) or a negative value (such as reducing saturation); the embodiments of this application do not limit the specific types of enhancement parameters.
In an illustrative example, the correspondence between scenes and enhancement parameters is shown in Table 2.
Table 2 (published as an image in the original document; not reproduced here)
Optionally, when there are multiple target layers, the computer device needs to determine the target scene of each target layer and then obtain the corresponding target enhancement parameters; details are not repeated in this embodiment.
It should be noted that when the target scene of the target layer changes, the computer device needs to re-determine the target enhancement parameters, to keep display enhancement processing accurate and up to date.
Step 705: perform display enhancement processing on the target layer through the hardware compositor based on the target enhancement parameters.
Further, the hardware compositor acquires the target enhancement parameters and processes the pixels in the target layer according to the display enhancement mode indicated by those parameters, obtaining the display-enhanced target layer.
In some embodiments, when at least two target scenes are determined (for example, when photos of different scenes are viewed at the same time), the computer device may apply different target enhancement parameters to different scene areas of the target layer.
In one possible implementation, the computer device determines the i-th scene area corresponding to the i-th target scene and, based on the i-th target enhancement parameters corresponding to the i-th target scene, performs display enhancement on the i-th scene area of the target layer through the GPU. The i-th target scene is any one of the at least two target scenes.
Optionally, when the scene recognition model performs scene recognition on the target layer, it can output not only the target scene but also the area information of the scene area corresponding to the target scene; based on this area information, the computer device applies different display enhancement processing to different target scenes.
In an illustrative example, when the user views two photos at the same time, with the target scene of the left photo being "person" and that of the right photo being "landscape", the computer device uses the display enhancement parameters corresponding to "person" for the left half of the target layer and those corresponding to "landscape" for the right half.
Step 706: compose the display-enhanced target layer with the other layers to obtain the first composite layer.
Schematically, as shown in FIG. 8, the image frame currently displayed by the foreground application is composed of surface009, surface003, and surface010, with the surface003 layer being the target layer. The computer device performs scene recognition on the surface003 layer using the scene recognition model and, based on the recognized target scene, determines the corresponding target enhancement parameters from the enhancement parameter set. Based on these parameters, the hardware compositor performs display enhancement on the layer indicated by surface003 to obtain the enhanced surface003 layer, and then composes the enhanced surface003 layer with the non-enhanced surface009 and surface010 layers to obtain the first composite layer.
Step 707: in response to a layer composition mode switching instruction, perform display enhancement processing on the target layer through the GPU based on the target enhancement parameters.
In one possible implementation, the determined target enhancement parameters are stored in a designated storage area, and both the hardware compositor and the GPU have read permission for that area; when the layer composition mode is switched, the GPU reads the target enhancement parameters and performs display enhancement on the target layer based on them.
Step 708: compose the display-enhanced target layer with the other layers to obtain the second composite layer.
For the implementation of this step, reference may be made to step 506 above; details are not repeated in this embodiment.
In this embodiment, the scene recognition model performs scene recognition on the target layer, and the target enhancement parameters for display enhancement of the target layer are determined based on the recognized target scene, realizing targeted display enhancement that helps improve the image display effect in different scenarios.
Since the target scene of the target layer remains unchanged over a short period, to reduce the power consumption of the computer device, the computer device performs scene recognition on the target layer in the n-th image frame and, after obtaining the target scene, applies the corresponding target enhancement parameters to the m consecutive image frames that follow; that is, the target enhancement parameters are used as the enhancement parameters of the target layer in the (n+1)-th to (n+m)-th image frames, so scene recognition on the target layer can be skipped for those frames (because the interval is short, their recognition result would very likely be the same as for the n-th frame).
Optionally, m is a fixed value, or is dynamically adjusted based on the scene recognition result of the target layer in the n-th image frame (for example, m is 5 for a sky scene and 3 for a portrait scene); this embodiment does not limit it.
Schematically, as shown in FIG. 9, after performing scene recognition on the target layer of the n-th image frame and determining the target enhancement parameters, the computer device does not need to perform scene recognition on the target layer of frames n+1 to n+m; it directly applies the parameters of the n-th frame to the display enhancement of the target layer in those frames. For frame n+m+1, the computer device performs scene recognition and parameter determination again, and applies the parameters of frame n+m+1 to the display enhancement of the target layer in frames n+m+2 to n+2m.
Referring to FIG. 10, which shows a structural block diagram of an image display apparatus according to an exemplary embodiment of this application. The apparatus may be implemented as all or part of a computer device through software, hardware, or a combination of the two. The apparatus includes:
a first composition module 1001, configured to perform display enhancement processing and layer composition on layers through a hardware compositor to obtain a first composite layer;
a display module 1002, configured to display an image of the first composite layer through a display component;
a second composition module 1003, configured to, in response to a layer composition mode switching instruction, perform display enhancement processing and layer composition on layers through a graphics processing unit (GPU) to obtain a second composite layer;
the display module 1002 being configured to display an image of the second composite layer through the display component.
Optionally, the first composition module 1001 includes:
a first enhancement unit, configured to perform display enhancement processing on a target layer through the hardware compositor;
a first composition unit, configured to compose the display-enhanced target layer with other layers to obtain the first composite layer;
and the second composition module 1003 includes:
a second enhancement unit, configured to perform display enhancement processing on the target layer through the GPU;
a second composition unit, configured to compose the display-enhanced target layer with other layers to obtain the second composite layer.
Optionally, the apparatus includes:
a layer determination module, configured to determine the target layer from the layers to be composed.
Optionally, the layer determination module includes:
a first determination unit, configured to determine a to-be-enhanced layer identifier based on the foreground application, the identifier being that of a layer in the foreground application with display enhancement requirements;
a second determination unit, configured to determine, among the layers to be composed, the layer corresponding to the to-be-enhanced layer identifier as the target layer.
Optionally, the first composition unit is configured to:
perform display enhancement processing on the target layer through the hardware compositor based on target enhancement parameters;
and the second composition unit is configured to:
perform display enhancement processing on the target layer through the GPU based on the target enhancement parameters.
Optionally, the apparatus further includes:
a scene determination module, configured to perform scene recognition on the target layer to obtain a target scene, or determine the target scene based on a scene setting operation;
a parameter determination module, configured to determine the target enhancement parameters corresponding to the target scene, where different scenes correspond to different enhancement parameters, and different enhancement parameters correspond to different display enhancement effects.
Optionally, the scene recognition module is configured to:
scale the target layer to a target size;
input the target layer of the target size into a scene recognition model to obtain output scene probabilities, the target size conforming to the model input size of the scene recognition model, and the scene probabilities including the probabilities of candidate scenes;
determine the target scene from the candidate scenes based on the scene probabilities.
Optionally, the recognition module is configured to:
perform scene recognition on the target layer in the n-th image frame to obtain the target scene;
and the apparatus further includes:
a reuse module, configured to determine the target enhancement parameters as the enhancement parameters of the target layer in the (n+1)-th to (n+m)-th image frames, n and m being positive integers.
Optionally, there are at least two target scenes;
and the second composition unit is configured to:
determine an i-th scene area corresponding to an i-th target scene, i being a positive integer;
perform display enhancement processing on the i-th scene area of the target layer through the GPU based on i-th target enhancement parameters corresponding to the i-th target scene.
Optionally, the target enhancement parameters include at least one of saturation, contrast, and sharpness.
Optionally, the second composition module 1003 is configured to:
trigger the layer composition mode switching instruction when the image display direction changes;
perform display enhancement processing and layer composition on layers through the GPU during the change of the image display direction, to obtain the second composite layer.
Optionally, the apparatus further includes:
a third composition module, configured to trigger the layer composition mode switching instruction when the image display direction stops changing, and to perform display enhancement processing and layer composition on layers through the hardware compositor to obtain a third composite layer;
a third display module, configured to display an image of the third composite layer through the display component.
Optionally, the hardware compositor is a mobile display processor (MDP).
In summary, in the embodiments of the present application, in the default state, display enhancement and layer composition are performed on layers through the hardware composer, and the composited first composited layer is displayed through the display component, optimizing the image display effect. When the layer-composition mode is switched and the GPU takes over layer composition, display enhancement continues to be performed through the GPU, and the composited second composited layer is displayed through the display component. This keeps display enhancement running continuously before and after the switch of the layer-composition mode, avoids an abrupt change in the image display effect across the switch, and further improves image display quality.
In this embodiment, the layers requiring display enhancement in different applications are configured in advance, so that the target layer can be determined from the multiple layers to be composited according to the foreground application. Display enhancement is then applied only to the target layer, avoiding the unnecessary processing cost of enhancing every layer in an image frame.
In this embodiment, scene recognition is performed on the target layer through a scene recognition model, and the target enhancement parameter used for the display enhancement of the target layer is determined based on the recognized target scene, achieving targeted display enhancement in different scenes and helping improve the image display effect in each scene.
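The scene-driven parameter selection summarized above can be sketched as follows. This is a sketch under stated assumptions: the `resize` and `model` callables stand in for the scaling step and the scene recognition model, and the per-scene parameter values are invented; only saturation, contrast, and sharpness as parameter types come from the source.

```python
# Sketch: scale the layer to the model input size, get one probability per
# candidate scene, and pick enhancement parameters for the most probable scene.

# Example per-scene parameters (values invented for illustration).
SCENE_PARAMS = {
    "sky":      {"saturation": 1.2, "contrast": 1.1, "sharpness": 1.0},
    "portrait": {"saturation": 1.0, "contrast": 1.0, "sharpness": 1.1},
}

def select_params(layer, resize, model, candidates):
    """Return (scene, params) for a layer."""
    scaled = resize(layer)                  # match the model input size
    probs = model(scaled)                   # one probability per candidate scene
    scene = max(zip(probs, candidates))[1]  # most probable candidate scene
    return scene, SCENE_PARAMS[scene]
```

For example, probabilities of 0.2 for "sky" and 0.8 for "portrait" select the portrait parameters.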
An embodiment of the present application further provides a computer-readable storage medium storing at least one program, the at least one program being executed by a processor to implement the image display method described in the above embodiments.
According to another aspect of the present application, a computer program product is provided, comprising computer instructions stored in a computer-readable storage medium; a processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, causing the computer device to perform the image display method provided in the above optional implementations.
Those skilled in the art should appreciate that, in one or more of the above examples, the functions described in the embodiments of the present application may be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, these functions may be stored in a computer-readable medium or transmitted as one or more instructions or code on a computer-readable medium. Computer-readable media include computer storage media and communication media, where communication media include any medium that facilitates transfer of a computer program from one place to another. A storage medium may be any available medium accessible by a general-purpose or special-purpose computer.
The above are only optional embodiments of the present application and are not intended to limit it. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present application shall fall within its protection scope.

Claims (20)

  1. An image display method, the method comprising:
    performing display enhancement and layer composition on layers through a hardware composer to obtain a first composited layer;
    displaying the first composited layer through a display component;
    in response to a layer-composition-mode switching instruction, performing display enhancement and layer composition on layers through a graphics processing unit (GPU) to obtain a second composited layer; and
    displaying the second composited layer through the display component.
  2. The method according to claim 1, wherein performing display enhancement and layer composition on layers through a hardware composer to obtain a first composited layer comprises:
    performing display enhancement on a target layer through the hardware composer; and
    compositing the display-enhanced target layer with other layers to obtain the first composited layer;
    and wherein performing display enhancement and layer composition on layers through the GPU to obtain a second composited layer comprises:
    performing display enhancement on the target layer through the GPU; and
    compositing the display-enhanced target layer with other layers to obtain the second composited layer.
  3. The method according to claim 2, wherein the method comprises:
    determining the target layer from layers to be composited.
  4. The method according to claim 3, wherein determining the target layer from the layers to be composited comprises:
    determining, based on a foreground application, an identifier of a layer to be enhanced, the identifier being the identifier of a layer in the foreground application that requires display enhancement; and
    determining, among the layers to be composited, the layer corresponding to the identifier as the target layer.
  5. The method according to claim 2, wherein performing display enhancement on the target layer through the hardware composer comprises:
    performing display enhancement on the target layer through the hardware composer based on a target enhancement parameter;
    and wherein performing display enhancement on the target layer through the GPU comprises:
    performing display enhancement on the target layer through the GPU based on the target enhancement parameter.
  6. The method according to claim 5, wherein the method further comprises:
    performing scene recognition on the target layer to obtain a target scene, or determining a target scene based on a scene-setting operation; and
    determining the target enhancement parameter corresponding to the target scene, wherein different scenes correspond to different enhancement parameters, and different enhancement parameters correspond to different display-enhancement effects.
  7. The method according to claim 6, wherein performing scene recognition on the target layer to obtain a target scene comprises:
    scaling the target layer to a target size;
    inputting the target layer of the target size into a scene recognition model to obtain output scene probabilities, the target size matching the model input size of the scene recognition model, and the scene probabilities comprising a probability for each candidate scene; and
    determining the target scene from the candidate scenes based on the scene probabilities.
  8. The method according to claim 6, wherein performing scene recognition on the target layer to obtain a target scene comprises:
    performing scene recognition on the target layer in an n-th image frame to obtain the target scene;
    and the method further comprises:
    determining the target enhancement parameter as the enhancement parameter for the target layer in the (n+1)-th through (n+m)-th image frames, n and m being positive integers.
  9. The method according to claim 6, wherein there are at least two target scenes;
    and performing display enhancement on the target layer through the GPU based on the target enhancement parameter comprises:
    determining an i-th scene region corresponding to an i-th target scene, i being a positive integer; and
    performing display enhancement on the i-th scene region in the target layer through the GPU, based on an i-th target enhancement parameter corresponding to the i-th target scene.
  10. The method according to claim 5, wherein the target enhancement parameter comprises at least one of saturation, contrast, and sharpness.
  11. The method according to any one of claims 1 to 10, wherein, in response to the layer-composition-mode switching instruction, performing display enhancement and layer composition on layers through the GPU to obtain a second composited layer comprises:
    triggering the layer-composition-mode switching instruction when the image display orientation changes; and
    while the image display orientation is changing, performing display enhancement and layer composition on layers through the GPU to obtain the second composited layer.
  12. The method according to claim 11, wherein the method further comprises:
    when the image display orientation stops changing, triggering the layer-composition-mode switching instruction, and performing display enhancement and layer composition on layers through the hardware composer to obtain a third composited layer; and
    displaying the third composited layer through the display component.
  13. The method according to any one of claims 1 to 10, wherein the hardware composer is a mobile display processor (MDP).
  14. An image display apparatus, the apparatus comprising:
    a first composition module, configured to perform display enhancement and layer composition on layers through a hardware composer to obtain a first composited layer;
    a display module, configured to display the first composited layer through a display component;
    a second composition module, configured to, in response to a layer-composition-mode switching instruction, perform display enhancement and layer composition on layers through a graphics processing unit (GPU) to obtain a second composited layer;
    the display module being further configured to display the second composited layer through the display component.
  15. The apparatus according to claim 14, wherein the first composition module comprises:
    a first enhancement unit, configured to perform display enhancement on a target layer through the hardware composer; and
    a first composition unit, configured to composite the display-enhanced target layer with other layers to obtain the first composited layer;
    and the second composition module comprises:
    a second enhancement unit, configured to perform display enhancement on the target layer through the GPU; and
    a second composition unit, configured to composite the display-enhanced target layer with other layers to obtain the second composited layer.
  16. The apparatus according to claim 15, wherein the apparatus comprises:
    a layer determination module, configured to determine the target layer from layers to be composited.
  17. The apparatus according to claim 16, wherein the layer determination module comprises:
    a first determination unit, configured to determine, based on a foreground application, an identifier of a layer to be enhanced, the identifier being the identifier of a layer in the foreground application that requires display enhancement; and
    a second determination unit, configured to determine, among the layers to be composited, the layer corresponding to the identifier as the target layer.
  18. A computer device, comprising a processor, a memory, and a display component, the memory storing at least one program, the at least one program being loaded and executed by the processor to implement the image display method according to any one of claims 1 to 13.
  19. A computer-readable storage medium, storing at least one program, the at least one program being executed by a processor to implement the image display method according to any one of claims 1 to 13.
  20. A computer program product, comprising computer instructions stored in a computer-readable storage medium; a processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, causing the computer device to perform the image display method according to any one of claims 1 to 13.
PCT/CN2022/106166 2021-08-10 2022-07-18 Image display method and apparatus, computer device, and storage medium WO2023016191A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110914167.9A CN113625983B (zh) 2021-08-10 2021-08-10 Image display method and apparatus, computer device, and storage medium
CN202110914167.9 2021-08-10

Publications (1)

Publication Number Publication Date
WO2023016191A1 true WO2023016191A1 (zh) 2023-02-16

Family

ID=78383978

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/106166 WO2023016191A1 (zh) 2021-08-10 2022-07-18 Image display method and apparatus, computer device, and storage medium

Country Status (2)

Country Link
CN (1) CN113625983B (zh)
WO (1) WO2023016191A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118138832A (zh) * 2024-05-06 2024-06-04 Wuhan Lingjiu Microelectronics Co., Ltd. Network video stream display method based on GPU hard layers

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113625983B (zh) * 2021-08-10 2024-08-27 Oppo广东移动通信有限公司 图像显示方法、装置、计算机设备及存储介质
CN114510207B (zh) * 2022-02-28 2024-09-13 亿咖通(湖北)技术有限公司 图层合成方法、装置、设备、介质及程序产品

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2015097049A (ja) * 2013-11-15 2015-05-21 Nippon Telegraph and Telephone Corporation Image processing apparatus and image processing method
CN106331831A (zh) * 2016-09-07 2017-01-11 Zhuhai Meizu Technology Co., Ltd. Image processing method and apparatus
CN106331427A (zh) * 2016-08-24 2017-01-11 Beijing Xiaomi Mobile Software Co., Ltd. Saturation enhancement method and apparatus
CN110362186A (zh) * 2019-07-17 2019-10-22 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Layer processing method and apparatus, electronic device, and computer-readable medium
CN110413245A (zh) * 2019-07-17 2019-11-05 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image composition method and apparatus, electronic device, and storage medium
CN111338744A (zh) * 2020-05-22 2020-06-26 Beijing Xiaomi Mobile Software Co., Ltd. Image display method and apparatus, electronic device, and storage medium
CN113064727A (zh) * 2021-04-16 2021-07-02 Shanghai Zhonglian Technology Co., Ltd. Image display scheduling method applied to the Android system, terminal, and storage medium
CN113625983A (zh) * 2021-08-10 2021-11-09 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image display method and apparatus, computer device, and storage medium

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103839533B (zh) * 2014-01-23 2016-03-02 Huawei Technologies Co., Ltd. Display method for mobile terminal images and mobile terminal
CN106933525B (zh) * 2017-03-09 2019-09-20 Qingdao Hisense Mobile Communications Technology Co., Ltd. Method and apparatus for displaying an image
CN109685726B (zh) * 2018-11-27 2021-04-13 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Game scene processing method and apparatus, electronic device, and storage medium
CN109847352B (zh) * 2019-01-18 2022-12-20 NetEase (Hangzhou) Network Co., Ltd. Display control method for control icons in games, display device, and storage medium
CN112527220B (zh) * 2019-09-18 2022-08-26 Huawei Technologies Co., Ltd. Electronic device display method and electronic device
CN111565337A (zh) * 2020-04-26 2020-08-21 Huawei Technologies Co., Ltd. Image processing method and apparatus, and electronic device



Also Published As

Publication number Publication date
CN113625983A (zh) 2021-11-09
CN113625983B (zh) 2024-08-27


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22855174

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22855174

Country of ref document: EP

Kind code of ref document: A1