WO2022007862A1 - Image processing method, system, electronic device and computer-readable storage medium - Google Patents

Image processing method, system, electronic device and computer-readable storage medium

Info

Publication number
WO2022007862A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
resolution
electronic device
super
model
Prior art date
Application number
PCT/CN2021/105060
Other languages
English (en)
Chinese (zh)
Inventor
张梦然
陈泰雨
薛蓬
张运超
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司
Publication of WO2022007862A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00: Geometric image transformations in the plane of the image
    • G06T 3/40: Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 3/4053: Scaling based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • G06T 15/00: 3D [Three Dimensional] image rendering
    • G06T 15/005: General purpose rendering architectures
    • G06T 2200/00: Indexing scheme for image data processing or generation, in general
    • G06T 2200/28: Indexing scheme involving image processing hardware

Definitions

  • the present application belongs to the field of artificial intelligence, and in particular, relates to an image processing method, system, electronic device, and computer-readable storage medium.
  • the present application provides an image processing method, a system, an electronic device and a computer-readable storage medium, which solve the problem that, in existing image solutions, when the hardware resources of the electronic device are insufficient, the electronic device can only run high-image-quality products at a low image quality.
  • an image processing method is provided, applied to a first electronic device, including:
  • the first electronic device acquires native image data, where the native image data is unrendered image data generated by an application;
  • the first electronic device renders the native image data through the first graphics rendering hardware to obtain a first image
  • the first electronic device performs super-resolution reconstruction on the first image through second graphics rendering hardware to obtain a target image, and the first graphics rendering hardware and the second graphics rendering hardware are different graphics rendering hardware.
  • the application generates native image data during the running process.
  • the first electronic device may render the native image data through the first graphics rendering component, so that the native image data is converted into visible pixels to obtain the first image.
  • the first electronic device may perform super-resolution reconstruction on the first image through the second graphics rendering component to obtain a high-quality target image.
  • the image processing method of the present application obtains the target image by combining preliminary rendering and super-resolution reconstruction, which can reduce the rendering power consumption and the amount of computation required for electronic devices to render high-quality images.
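  • As an illustrative sketch only (the patent does not specify an implementation), the two-stage pipeline just described can be expressed as follows in Python, where "render" and "SuperResolutionModel" are hypothetical placeholders for the first-stage renderer and the learned super-resolution network:

        import numpy as np

        class SuperResolutionModel:
            """Stand-in for a learned super-resolution network."""
            def reconstruct(self, image: np.ndarray) -> np.ndarray:
                # A real model would run inference here; identity keeps the sketch runnable.
                return image

        def render(native_image_data: dict, resolution: tuple) -> np.ndarray:
            """Stand-in for the first stage: convert native image data into visible pixels."""
            height, width = resolution
            return np.zeros((height, width, 3), dtype=np.uint8)

        def process_frame(native_image_data: dict, model: SuperResolutionModel) -> np.ndarray:
            first_image = render(native_image_data, resolution=(540, 960))  # stage 1: cheap preliminary render
            target_image = model.reconstruct(first_image)                   # stage 2: super-resolution reconstruction
            return target_image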
  • the first graphics rendering component and the second graphics rendering component may each be one or more of a central processing unit (CPU), a graphics processing unit (GPU), and a neural-network processing unit (NPU).
  • the first graphics rendering component and the second graphics rendering component are different graphics rendering components, so as to make full use of the heterogeneous graphics rendering components inside the electronic device and avoid the situation in which, because the hardware resources of a single piece of graphics rendering hardware are insufficient, the electronic device can only run high-image-quality products at a lower image quality.
  • the first electronic device performs super-resolution reconstruction on the first image through the second graphics rendering hardware to obtain a target image, including:
  • the first electronic device obtains the identifier of the application
  • the first electronic device searches for a target super-resolution model associated with the identifier
  • the first electronic device performs super-resolution reconstruction on the first image through the second graphics rendering hardware and the found target super-resolution model to obtain a target image.
  • the first electronic device may perform super-resolution reconstruction on the first image by using the super-resolution model.
  • the super-resolution model may include a specific super-resolution model and a general super-resolution model.
  • a specific super-resolution model is only suitable for certain applications; its applicability is narrow, but its image quality optimization capability is high.
  • a general super-resolution model has wide applicability, but its image quality optimization capability is limited.
  • the electronic device may pre-establish an association relationship between the specific super-resolution model and its applicable application program.
  • the first electronic device may acquire the identifier of the application, and search for a target super-resolution model associated with the identifier.
  • if the first electronic device can find the target super-resolution model associated with the above identifier, it means that a specific super-resolution model (i.e., the target super-resolution model) suitable for the above application exists in the first electronic device.
  • the first electronic device may perform super-resolution reconstruction on the first image by using the above-mentioned second graphics rendering hardware and the above-mentioned target super-resolution model.
  • the electronic device uses the target super-resolution model to perform super-resolution reconstruction on the first image, which can better improve the image quality of the target image.
  • the identifier of the application program may be the package name of the application program, or the identifier of the application program may also be a user-defined identifier.
  • the method further includes:
  • the first electronic device performs super-resolution reconstruction on the first image by using the second graphics rendering hardware and a preset general super-resolution model to obtain the target image.
  • if the first electronic device cannot find a target super-resolution model associated with the above identifier, it means that no specific super-resolution model is associated with the application.
  • the first electronic device may perform super-resolution reconstruction on the first image by using the second graphics rendering hardware and the preset general super-resolution model to obtain the target image.
  • the method further includes:
  • the first electronic device establishes an association relationship between the identifier and the general super-resolution model.
  • when the first electronic device uses a general super-resolution model to perform super-resolution reconstruction on the first image, since the first electronic device may be provided with multiple general super-resolution models, the first electronic device may establish an association relationship between the above-mentioned identifier and the general super-resolution model, so that the next time the application runs, the first electronic device invokes the same general super-resolution model to perform super-resolution reconstruction on the first image.
  • in this way, the first electronic device can find the general super-resolution model according to the association relationship between the above-mentioned identifier and the general super-resolution model, determine the general super-resolution model as the target super-resolution model, and perform super-resolution reconstruction on the first image, so that the first electronic device can maintain the same level of image quality optimization when processing the images of the application.
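  • The selection logic described above can be sketched as follows; the registry names are hypothetical, and the patent only requires that the association be looked up and, for a general model, established on first use:

        specific_models = {"com.example.game": "game_specific_sr"}  # identifier -> specific super-resolution model
        associations = {}                                           # identifier -> previously chosen general model
        general_models = ["general_sr_v1", "general_sr_v2"]

        def select_super_resolution_model(app_identifier: str) -> str:
            # Prefer a specific model associated with this application's identifier.
            if app_identifier in specific_models:
                return specific_models[app_identifier]
            # Reuse the general model already associated with this application,
            # so the image quality optimization level stays consistent.
            if app_identifier in associations:
                return associations[app_identifier]
            # Otherwise pick a preset general model and record the association.
            model = general_models[0]
            associations[app_identifier] = model
            return model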
  • the first electronic device renders the native image data by using first graphics rendering hardware to obtain a first image, including:
  • the first electronic device renders the native image data by using the first graphics rendering hardware and a preset first image resolution to obtain a first image.
  • the first electronic device may render native image data according to the first image resolution.
  • the first image resolution is preset by the first electronic device.
  • the first electronic device performs super-resolution reconstruction on the first image by using second graphics rendering hardware to obtain a target image, including:
  • the first electronic device performs super-resolution reconstruction on the first image through the second graphics rendering hardware and the single-enhanced super-resolution model to obtain a target image, wherein the first image resolution is the same as the image resolution of the target image, and the single-enhanced super-resolution model is a super-resolution model in which the image resolution of the input image and the image resolution of the output image are the same.
  • the first electronic device may select the single-enhanced super-resolution model when selecting the super-resolution model to perform super-resolution reconstruction on the first image.
  • the image resolution of the input image of the model is the same as the image resolution of the output image of the model.
  • the first electronic device performs super-resolution reconstruction on the first image by using second graphics rendering hardware to obtain a target image, including:
  • the first electronic device performs super-resolution reconstruction on the first image by using the second graphics rendering hardware and the multiple-enhanced super-resolution model to obtain a target image, wherein the first image resolution is smaller than the image resolution of the target image, and the multiple-enhanced super-resolution model is a super-resolution model in which the image resolution of the input image is smaller than the image resolution of the output image.
  • the electronic device may select a multiple-enhanced super-resolution model to perform super-resolution reconstruction on the first image.
  • the image resolution of the input image of the model is smaller than the image resolution of the output image of the model. That is, the multiple-enhanced super-resolution model can increase the image resolution of the input image.
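  • The input/output contract of the two model types can be checked with a minimal shape-level sketch (a placeholder upscale stands in for a real network):

        import numpy as np

        def single_enhanced_sr(image: np.ndarray) -> np.ndarray:
            out = image.copy()                    # quality-only enhancement would happen here
            assert out.shape == image.shape       # input resolution == output resolution
            return out

        def multiple_enhanced_sr(image: np.ndarray, factor: int = 2) -> np.ndarray:
            h, w, c = image.shape
            out = np.repeat(np.repeat(image, factor, axis=0), factor, axis=1)  # placeholder upscale
            assert out.shape == (h * factor, w * factor, c)  # output resolution is larger by `factor`
            return out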
  • the first electronic device renders the native image data by using first graphics rendering hardware to obtain a first image, including:
  • the first electronic device renders the native image data through a graphics processor to obtain a first image
  • the first electronic device performs super-resolution reconstruction on the first image through the second graphics rendering hardware to obtain a target image, including:
  • the first electronic device performs super-resolution reconstruction on the first image through a neural network processor to obtain a target image.
  • the first electronic device processes the native image data through heterogeneous graphics rendering hardware to obtain the target image. Specifically, the first electronic device may render the native image data through a graphics processor to obtain a first image, and perform super-resolution reconstruction on the first image through a neural network processor.
  • the first electronic device selects appropriate graphics rendering hardware to perform corresponding operations, which can improve the image processing efficiency of the first electronic device.
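  • One way to picture this division of labor (purely illustrative; "gpu" and "npu" are hypothetical device handles, not a real platform API):

        def process_frame_heterogeneous(native_image_data, gpu, npu, sr_model):
            # Preliminary rendering runs on the graphics processor...
            first_image = gpu.render(native_image_data)
            # ...while super-resolution inference runs on the neural network processor,
            # so neither unit carries the whole high-quality rendering load alone.
            target_image = npu.infer(sr_model, first_image)
            return target_image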
  • an image processing method is provided, applied to a second electronic device, including:
  • the second electronic device receives a first image sent by the first electronic device, where the first image is an image obtained by the first electronic device rendering the native image data generated by the application;
  • the second electronic device performs super-resolution reconstruction on the first image to obtain a target image.
  • the first electronic device can locally render the native image data of the application program to obtain the first image.
  • the first electronic device sends the first image to the second electronic device.
  • the second electronic device receives the first image, and performs super-resolution reconstruction on the first image to obtain a high-quality target image.
  • the first electronic device and the second electronic device jointly perform image processing, which can make full use of the hardware resources of the graphics rendering hardware of different electronic devices, thereby reducing the load on the local hardware resources of the first electronic device when rendering high-quality images, reducing the rendering consumption of the first electronic device, and making full use of the hardware resources of different electronic devices to better improve image quality, thereby improving the user experience.
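  • A conceptual sketch of the cross-device flow follows; the transport and serialization (a plain TCP socket carrying a shape header plus raw bytes) are assumptions for illustration, not prescribed by the patent:

        import socket
        import numpy as np

        def send_first_image(image: np.ndarray, host: str, port: int) -> None:
            """First device: send the rendered first image as an (h, w, c) header plus raw bytes."""
            header = np.array(image.shape, dtype=np.int32).tobytes()
            with socket.create_connection((host, port)) as conn:
                conn.sendall(header + image.astype(np.uint8).tobytes())

        def _read_exact(conn: socket.socket, n: int) -> bytes:
            buf = bytearray()
            while len(buf) < n:
                chunk = conn.recv(n - len(buf))
                if not chunk:
                    raise ConnectionError("peer closed the connection")
                buf.extend(chunk)
            return bytes(buf)

        def receive_first_image(conn: socket.socket) -> np.ndarray:
            """Second device: rebuild the first image before running super-resolution on it."""
            h, w, c = np.frombuffer(_read_exact(conn, 12), dtype=np.int32)
            data = _read_exact(conn, int(h) * int(w) * int(c))
            return np.frombuffer(data, dtype=np.uint8).reshape(h, w, c)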
  • the second electronic device performs super-resolution reconstruction on the first image to obtain a target image, including:
  • the second electronic device searches for a target super-resolution model associated with the identifier
  • the second electronic device performs super-resolution reconstruction on the first image by using the found target super-resolution model to obtain a target image.
  • the second electronic device may perform super-resolution reconstruction on the second image by using the super-resolution model.
  • the super-resolution model may include a specific super-resolution model and a general super-resolution model.
  • a specific super-resolution model is only suitable for certain applications; its applicability is narrow, but its image quality optimization capability is high.
  • a general super-resolution model has wide applicability, but its image quality optimization capability is limited.
  • the electronic device may pre-establish an association relationship between the specific super-resolution model and its applicable application program.
  • the second electronic device may acquire the identifier of the application, and search for the target super-resolution model associated with the identifier.
  • if the second electronic device can find the target super-resolution model associated with the above identifier, it means that a specific super-resolution model (i.e., the target super-resolution model) suitable for the above application exists in the second electronic device, and the second electronic device can use its graphics rendering hardware and the above target super-resolution model to perform super-resolution reconstruction on the second image.
  • the electronic device uses the target super-resolution model to perform super-resolution reconstruction on the second image, which can better improve the image quality of the target image.
  • the identifier of the application program may be the package name of the application program, or the identifier of the application program may also be a user-defined identifier.
  • the method further includes:
  • the second electronic device performs super-resolution reconstruction on the first image by using a preset general super-resolution model to obtain a target image.
  • if the second electronic device cannot find a target super-resolution model associated with the above identifier, it means that no specific super-resolution model is associated with the application.
  • the second electronic device may perform super-resolution reconstruction on the second image by using the graphics rendering hardware and the preset general super-resolution model to obtain the target image.
  • the method further includes:
  • the second electronic device establishes an association relationship between the identifier and the general super-resolution model.
  • when the second electronic device uses a general super-resolution model to perform super-resolution reconstruction on the second image, since the second electronic device may be provided with multiple general super-resolution models, the second electronic device may establish an association relationship between the above-mentioned identifier and the general super-resolution model, so that the next time the application runs, the second electronic device invokes the same general super-resolution model to perform super-resolution reconstruction on the second image.
  • in this way, the second electronic device can find the general super-resolution model according to the association relationship between the above-mentioned identifier and the general super-resolution model, determine the general super-resolution model as the target super-resolution model, and perform super-resolution reconstruction on the second image, so that the second electronic device can maintain the same level of image quality optimization when processing the images of the application.
  • the first image resolution of the first image is consistent with the image resolution of the target image
  • the second electronic device performs super-resolution reconstruction on the first image to obtain a target image, including:
  • the second electronic device performs super-resolution reconstruction on the first image through a single-enhanced super-resolution model to obtain the target image, wherein the single-enhanced super-resolution model is a super-resolution model in which the image resolution of the input image is the same as the image resolution of the output image.
  • the second electronic device may select the single-enhanced super-resolution model when selecting the super-resolution model to perform super-resolution reconstruction on the first image.
  • the image resolution of the input image of the single-enhanced super-resolution model is the same as the image resolution of the output image.
  • the first image resolution of the first image is lower than the image resolution of the target image
  • the second electronic device performs super-resolution reconstruction on the first image to obtain a target image, including:
  • the second electronic device performs up-sampling processing on the first image to obtain a second image, and the image resolution of the second image is consistent with the image resolution of the target image;
  • the second electronic device performs super-resolution reconstruction on the second image through a single-enhanced super-resolution model to obtain the target image, wherein the single-enhanced super-resolution model is a super-resolution model in which the image resolution of the input image is the same as the image resolution of the output image.
  • the second electronic device may perform up-sampling processing on the first image to obtain a second image, so that the image resolution of the second image is the same as the image resolution of the target image.
  • the second electronic device uses the single-enhanced super-resolution model to process the second image to obtain the target image.
  • the algorithm applied for up-sampling can be any one of interpolation algorithms such as the nearest neighbor method, bilinear interpolation method, and cubic interpolation method.
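  • For example, the up-sampling step could be written with OpenCV's resize; the choice of library is an assumption, and any implementation of these interpolation methods would serve:

        import cv2

        def upsample(first_image, target_width: int, target_height: int, method: str = "bilinear"):
            interpolation = {
                "nearest": cv2.INTER_NEAREST,   # nearest neighbor method
                "bilinear": cv2.INTER_LINEAR,   # bilinear interpolation
                "cubic": cv2.INTER_CUBIC,       # cubic interpolation
            }[method]
            # cv2.resize takes the target size as (width, height).
            return cv2.resize(first_image, (target_width, target_height), interpolation=interpolation)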
  • the first image resolution of the first image is lower than the resolution of the target image
  • the second electronic device performs super-resolution reconstruction on the first image to obtain a target image, including:
  • the second electronic device performs super-resolution reconstruction on the first image through a multiple-enhanced super-resolution model to obtain the target image, wherein the multiple-enhanced super-resolution model is a super-resolution model in which the image resolution of the input image is smaller than the image resolution of the output image.
  • in addition to the up-sampling approach, the second electronic device may also perform super-resolution reconstruction on the first image directly through a multiple-enhanced super-resolution model.
  • the image resolution of the input image of the multiple-enhanced super-resolution model is smaller than the image resolution of the output image. That is, the multiple-enhanced super-resolution model can increase the image resolution of the input image.
  • an electronic device including:
  • a native data module used to obtain native image data, where the native image data is image data generated by an application and not rendered;
  • a preliminary rendering module configured to render the native image data through the first graphics rendering hardware to obtain a first image
  • the first super-resolution module is configured to perform super-resolution reconstruction on the first image through second graphics rendering hardware to obtain a target image, and the first graphics rendering hardware and the second graphics rendering hardware are different graphics rendering hardware.
  • the first super-resolution module includes:
  • a first identification submodule used to obtain the identification of the application
  • a first model submodule used to find the target super-resolution model associated with the identifier
  • the first reconstruction sub-module is configured to perform super-resolution reconstruction on the first image by using the second graphics rendering hardware and the found target super-resolution model to obtain a target image.
  • the first super-resolution module further includes:
  • the first general sub-module is configured to, if the target super-resolution model associated with the identifier is not found, perform super-resolution reconstruction on the first image through the second graphics rendering hardware and the preset general super-resolution model to obtain the target image.
  • the first super-resolution module further includes:
  • the first association submodule is used to establish an association relationship between the identifier and the general super-resolution model.
  • the preliminary rendering module is specifically configured to render the native image data by using the first graphics rendering hardware and a preset first image resolution to obtain the first image.
  • the first super-resolution module is specifically configured to perform super-resolution reconstruction on the first image by using the second graphics rendering hardware and the single-enhanced super-resolution model to obtain a target image, wherein the first image resolution is the same as the image resolution of the target image, and the single-enhanced super-resolution model is a super-resolution model in which the image resolution of the input image and the image resolution of the output image are the same.
  • the first super-resolution module is specifically configured to perform super-resolution reconstruction on the first image by using the second graphics rendering hardware and the multiple-enhanced super-resolution model to obtain a target image, wherein the resolution of the first image is smaller than the image resolution of the target image, and the multiple-enhanced super-resolution model is a super-resolution model in which the image resolution of the input image is smaller than the image resolution of the output image.
  • the preliminary rendering module is specifically configured to render the native image data through a graphics processor to obtain the first image
  • the first super-resolution module is specifically configured to perform super-resolution reconstruction on the first image through a neural network processor to obtain a target image.
  • an electronic device comprising:
  • an image receiving module configured to receive a first image sent by a first electronic device, where the first image is an image obtained by the first electronic device rendering native image data generated by an application;
  • the second super-resolution module is configured to perform super-resolution reconstruction on the first image to obtain a target image.
  • the second super-resolution module includes:
  • the second identification submodule is used to obtain the identification of the application
  • the second model submodule is used to find the target super-resolution model associated with the identifier
  • the second reconstruction sub-module is configured to perform super-resolution reconstruction on the first image by using the found target super-resolution model to obtain a target image.
  • the second super-resolution module further includes:
  • the second general sub-module is configured to perform super-resolution reconstruction on the first image by using a preset general super-resolution model to obtain a target image if the target super-resolution model associated with the identifier is not found.
  • the second super-resolution module further includes:
  • the second association module is configured to establish an association relationship between the identifier and the general super-resolution model.
  • the first image resolution of the first image is consistent with the image resolution of the target image
  • the second super-resolution module is specifically configured to perform super-resolution reconstruction on the first image through a single-enhanced super-resolution model to obtain the target image, wherein the single-enhanced super-resolution model is a super-resolution model in which the image resolution of the input image is the same as the image resolution of the output image.
  • the first image resolution of the first image is lower than the image resolution of the target image
  • the second super-resolution module includes:
  • an upsampling submodule configured to perform an upsampling process on the first image to obtain a second image, the image resolution of the second image is consistent with the image resolution of the target image;
  • an enhancement sub-module used for the second electronic device to perform super-resolution reconstruction on the second image through a single-enhanced super-resolution model to obtain the target image, wherein the single-enhanced super-resolution model is a super-resolution model in which the image resolution of the input image is the same as the image resolution of the output image.
  • the first image resolution of the first image is lower than the resolution of the target image
  • the second super-resolution module is specifically configured to perform super-resolution reconstruction on the first image through a multiple-enhanced super-resolution model to obtain the target image, wherein the multiple-enhanced super-resolution model is a super-resolution model in which the image resolution of the input image is smaller than the image resolution of the output image.
  • in a fifth aspect, an image processing system is provided, which includes a first electronic device and a second electronic device;
  • the first electronic device is configured to render the native image data generated by the application to obtain a first image, and send the first image to the second electronic device;
  • the second electronic device is configured to execute the image processing method mentioned in the second aspect above.
  • an electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, when the processor executes the computer program, the electronic device realizes the steps of the above method.
  • a computer-readable storage medium stores a computer program, and when the computer program is executed by a processor, enables an electronic device to implement the steps of the above method.
  • a chip system, which may be a single chip or a chip module composed of multiple chips, includes a memory and a processor, and the processor executes the computer program stored in the memory to implement the steps of the above method.
  • the electronic device first renders the native image data to obtain the first image, and then performs super-resolution reconstruction on the first image to obtain the target image displayed on the screen.
  • generating the target image by means of preliminary rendering and super-resolution reconstruction can reduce rendering power consumption, reduce the amount of calculation, and improve rendering efficiency.
  • the electronic device performs the preliminary rendering operation through the first graphics rendering hardware and performs the super-resolution reconstruction operation through the second graphics rendering hardware, which can make full use of the heterogeneous hardware resources in the electronic device and avoid the situation in which insufficient hardware resources of a single piece of graphics rendering hardware force a lower image quality.
  • the image processing method of the present application can reduce the rendering power consumption and calculation amount of rendering high-quality images, and can make full use of heterogeneous hardware resources in electronic devices, so as to solve the problem that, in existing image solutions, when the hardware resources of the electronic device are insufficient, the electronic device can only run high-image-quality products at a lower picture quality; the method has strong ease of use and practicability.
  • FIG. 1 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • FIG. 2 is a software structural block diagram of an electronic device provided by an embodiment of the present application.
  • FIG. 3 is a schematic flowchart of an image processing method provided by an embodiment of the present application.
  • FIG. 4 is a schematic diagram of an example image provided by an embodiment of the present application.
  • FIG. 5 is a schematic diagram of another example image provided by an embodiment of the present application.
  • FIG. 6 is a schematic diagram of another example image provided by an embodiment of the present application.
  • FIG. 7 is a schematic diagram of another electronic device provided by an embodiment of the present application.
  • FIG. 8 is a schematic diagram of an application scenario provided by an embodiment of the present application.
  • FIG. 9 is a schematic flowchart of another image processing method provided by an embodiment of the present application.
  • FIG. 10 is a schematic diagram of another application scenario provided by an embodiment of the present application.
  • FIG. 11 is a schematic flowchart of another image processing method provided by an embodiment of the present application.
  • FIG. 12 is a schematic diagram of another application scenario provided by an embodiment of the present application.
  • FIG. 13 is a schematic diagram of another application scenario provided by an embodiment of the present application.
  • FIG. 14 is a schematic diagram of another application scenario provided by an embodiment of the present application.
  • FIG. 15 is a schematic diagram of another application scenario provided by an embodiment of the present application.
  • FIG. 16 is a schematic diagram of another application scenario provided by an embodiment of the present application.
  • FIG. 17 is a schematic flowchart of another image processing method provided by an embodiment of the present application.
  • FIG. 18 is a schematic diagram of another application scenario provided by an embodiment of the present application.
  • FIG. 19 is a schematic flowchart of another image processing method provided by an embodiment of the present application.
  • FIG. 20 is a schematic diagram of another application scenario provided by an embodiment of the present application.
  • FIG. 21 is a schematic diagram of another application scenario provided by an embodiment of the present application.
  • FIG. 22 is a schematic diagram of another application scenario provided by an embodiment of the present application.
  • FIG. 23 is a schematic diagram of another application scenario provided by an embodiment of the present application.
  • FIG. 24 is a schematic diagram of another application scenario provided by an embodiment of the present application.
  • FIG. 25 is a schematic diagram of another application scenario provided by an embodiment of the present application.
  • FIG. 26 is a schematic diagram of another electronic device provided by an embodiment of the present application.
  • the term "if" may be contextually interpreted as "when" or "once" or "in response to determining" or "in response to detecting".
  • the phrases "if it is determined" or "if the [described condition or event] is detected" may be interpreted, depending on the context, to mean "once it is determined", "in response to the determination", "once the [described condition or event] is detected", or "in response to detection of the [described condition or event]".
  • references in this specification to "one embodiment” or “some embodiments” and the like mean that a particular feature, structure or characteristic described in connection with the embodiment is included in one or more embodiments of the present application.
  • appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," etc. in various places in this specification do not necessarily all refer to the same embodiment, but mean "one or more but not all embodiments" unless specifically emphasized otherwise.
  • the terms "comprising", "including", "having" and their variants mean "including but not limited to" unless specifically emphasized otherwise.
  • picture quality refers to the quality of the displayed image. There are many image quality indicators for evaluating picture quality; the most common is image resolution. When other image quality indicators are the same, the higher the image resolution, the higher the picture quality, and the lower the image resolution, the lower the picture quality.
  • image quality indicators can also include one or more of sharpness, clarity, lens distortion, dispersion, resolution, color gamut range, color purity (color brilliance), and color balance parameters.
  • Image resolution refers to the amount of information stored in the image, which can be understood as the number of pixels contained in the image.
  • the image resolution can be expressed as "the number of horizontal pixels ⁇ the number of vertical pixels".
  • for example, an image resolution of 2048 × 1080 means that each row of pixels in the image includes 2048 pixels and each column of pixels includes 1080 pixels.
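  • As a quick check of this definition:

        width, height = 2048, 1080
        print(width * height)  # 2211840 pixels stored in a 2048 x 1080 image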
  • Rendering in computer terms refers to the process of generating an image from an image model.
  • An image model is a description of a three-dimensional scene using a strictly defined language or data structure, including geometry, viewpoint, texture, lighting information, and rendering parameters.
  • the rendering parameters may include the above-mentioned image quality indicators.
  • the frame rate of the electronic device can remain around 90 frames per second .
  • the frame rate of the electronic device can usually only be maintained at about 40 frames per second; if the user turns down the image resolution, the frame rate of the electronic device is usually only kept at about 60 frames per second.
  • GPU-turbo technology reconstructs the traditional GPU architecture at the bottom layer of the electronic device system, realizes the coordination of software and hardware, and greatly improves the overall computing efficiency of the GPU.
  • GPU-turbo technology can detect the similarities and differences of image quality of adjacent frame images through artificial intelligence (AI) technology, render the difference parts of adjacent frames, and retain the same content of adjacent frames. In this way, GPU-turbo can save 80% of the calculation, greatly improving the rendering speed of the GPU.
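  • The adjacent-frame idea can be illustrated with a simple pixel-difference sketch (conceptual only; the actual GPU-turbo implementation is not public and certainly differs):

        import numpy as np

        def changed_mask(prev_frame: np.ndarray, curr_estimate: np.ndarray, tol: int = 4) -> np.ndarray:
            """Boolean mask of pixels that differ between adjacent frames beyond a tolerance."""
            diff = np.abs(prev_frame.astype(np.int16) - curr_estimate.astype(np.int16))
            return np.any(diff > tol, axis=-1)

        def compose_next_frame(prev_frame: np.ndarray, curr_estimate: np.ndarray,
                               rendered_diff: np.ndarray, tol: int = 4) -> np.ndarray:
            """Reuse unchanged pixels from the previous frame; take re-rendered pixels elsewhere."""
            mask = changed_mask(prev_frame, curr_estimate, tol)
            out = prev_frame.copy()
            out[mask] = rendered_diff[mask]   # only the difference regions were re-rendered
            return out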
  • the image rendering method of GPU-turbo technology is consistent with the traditional GPU rendering method.
  • the GPU directly renders the initial image file into the final image displayed on the screen. Therefore, even when GPU-turbo technology is used to run high-image-quality products, a large load is still placed on the GPU of the electronic device, and the rendering power consumption remains high.
  • embodiments of the present application provide an image processing method, apparatus, electronic device, and computer-readable storage medium, so as to solve the problems of high rendering power consumption and large computation amount when rendering high-quality products with existing image rendering methods.
  • the electronic device described in the embodiments of the present application may be a mobile phone, a tablet computer, a handheld computer, a personal digital assistant (PDA), an augmented reality (AR)/virtual reality (VR) device, a media player, a wearable device, or the like.
  • the embodiments of the present application do not impose any restrictions on the specific forms/types of the electronic devices.
  • FIG. 1 shows a schematic structural diagram of an electronic device 100 .
  • the electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a headphone jack 170D, a sensor module 180, buttons 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (SIM) card interface 195, and so on.
  • the sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, etc.
  • the structures illustrated in the embodiments of the present invention do not constitute a specific limitation on the electronic device 100 .
  • the electronic device 100 may include more or fewer components than shown, or combine some components, or split some components, or arrange the components differently.
  • the illustrated components may be implemented in hardware, software, or a combination of software and hardware.
  • the processor 110 may include one or more processing units; for example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc. Different processing units may be independent devices, or may be integrated in one or more processors.
  • the controller can generate an operation control signal according to the instruction operation code and timing signal, and complete the control of fetching and executing instructions.
  • a memory may also be provided in the processor 110 for storing instructions and data.
  • the memory in processor 110 is cache memory. This memory may hold instructions or data that have just been used or recycled by the processor 110 . If the processor 110 needs to use the instruction or data again, it can be called directly from the memory. Repeated accesses are avoided and the latency of the processor 110 is reduced, thereby increasing the efficiency of the system.
  • the processor 110 may include one or more interfaces.
  • the interface may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
  • the I2C interface is a bidirectional synchronous serial bus that includes a serial data line (SDA) and a serial clock line (SCL).
  • the processor 110 may contain multiple sets of I2C buses.
  • the processor 110 can be respectively coupled to the touch sensor 180K, the charger, the flash, the camera 193 and the like through different I2C bus interfaces.
  • the processor 110 may couple the touch sensor 180K through the I2C interface, so that the processor 110 and the touch sensor 180K communicate with each other through the I2C bus interface, so as to realize the touch function of the electronic device 100 .
  • the I2S interface can be used for audio communication.
  • the processor 110 may contain multiple sets of I2S buses.
  • the processor 110 may be coupled with the audio module 170 through an I2S bus to implement communication between the processor 110 and the audio module 170 .
  • the audio module 170 can transmit audio signals to the wireless communication module 160 through the I2S interface, so as to realize the function of answering calls through a Bluetooth headset.
  • the PCM interface can also be used for audio communications, sampling, quantizing and encoding analog signals.
  • the audio module 170 and the wireless communication module 160 may be coupled through a PCM bus interface.
  • the audio module 170 can also transmit audio signals to the wireless communication module 160 through the PCM interface, so as to realize the function of answering calls through the Bluetooth headset. Both the I2S interface and the PCM interface can be used for audio communication.
  • the UART interface is a universal serial data bus used for asynchronous communication.
  • the bus may be a bidirectional communication bus. It converts the data to be transmitted between serial communication and parallel communication.
  • a UART interface is typically used to connect the processor 110 with the wireless communication module 160 .
  • the processor 110 communicates with the Bluetooth module in the wireless communication module 160 through the UART interface to implement the Bluetooth function.
  • the audio module 170 can transmit audio signals to the wireless communication module 160 through the UART interface, so as to realize the function of playing music through the Bluetooth headset.
  • the MIPI interface can be used to connect the processor 110 with peripheral devices such as the display screen 194 and the camera 193 .
  • MIPI interfaces include camera serial interface (CSI), display serial interface (DSI), etc.
  • the processor 110 communicates with the camera 193 through a CSI interface, so as to realize the photographing function of the electronic device 100 .
  • the processor 110 communicates with the display screen 194 through the DSI interface to implement the display function of the electronic device 100 .
  • the GPIO interface can be configured by software.
  • the GPIO interface can be configured as a control signal or as a data signal.
  • the GPIO interface may be used to connect the processor 110 with the camera 193, the display screen 194, the wireless communication module 160, the audio module 170, the sensor module 180, and the like.
  • the GPIO interface can also be configured as I2C interface, I2S interface, UART interface, MIPI interface, etc.
  • the USB interface 130 is an interface that conforms to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type C interface, and the like.
  • the USB interface 130 can be used to connect a charger to charge the electronic device 100, and can also be used to transmit data between the electronic device 100 and peripheral devices. It can also be used to connect headphones to play audio through the headphones.
  • the interface can also be used to connect other electronic devices, such as AR devices.
  • the interface connection relationship between the modules illustrated in the embodiment of the present invention is only a schematic illustration, and does not constitute a structural limitation of the electronic device 100 .
  • the electronic device 100 may also adopt different interface connection manners in the foregoing embodiments, or a combination of multiple interface connection manners.
  • the charging management module 140 is used to receive charging input from the charger.
  • the charger may be a wireless charger or a wired charger.
  • the charging management module 140 may receive charging input from the wired charger through the USB interface 130 .
  • the charging management module 140 may receive wireless charging input through a wireless charging coil of the electronic device 100 . While the charging management module 140 charges the battery 142 , it can also supply power to the electronic device through the power management module 141 .
  • the power management module 141 is used for connecting the battery 142 , the charging management module 140 and the processor 110 .
  • the power management module 141 receives the input from the battery 142 and/or the charging management module 140 and supplies power to the processor 110 , the internal memory 121 , the display screen 194 , the camera 193 , and the wireless communication module 160 .
  • the power management module 141 can also be used to monitor parameters such as battery capacity, battery cycle times, battery health status (leakage, impedance).
  • the power management module 141 may also be provided in the processor 110 .
  • the power management module 141 and the charging management module 140 may also be provided in the same device.
  • the wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modulation and demodulation processor, the baseband processor, and the like.
  • Antenna 1 and Antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in electronic device 100 may be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve antenna utilization.
  • the antenna 1 can be multiplexed as a diversity antenna of the wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
  • the mobile communication module 150 may provide wireless communication solutions including 2G/3G/4G/5G etc. applied on the electronic device 100 .
  • the mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (LNA) and the like.
  • the mobile communication module 150 can receive electromagnetic waves from the antenna 1, filter and amplify the received electromagnetic waves, and transmit them to the modulation and demodulation processor for demodulation.
  • the mobile communication module 150 can also amplify the signal modulated by the modulation and demodulation processor, and then turn it into an electromagnetic wave for radiation through the antenna 1 .
  • at least part of the functional modules of the mobile communication module 150 may be provided in the processor 110 .
  • at least part of the functional modules of the mobile communication module 150 may be provided in the same device as at least part of the modules of the processor 110 .
  • the modem processor may include a modulator and a demodulator.
  • the modulator is used to modulate the low frequency baseband signal to be sent into a medium and high frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low frequency baseband signal. Then the demodulator transmits the demodulated low-frequency baseband signal to the baseband processor for processing.
  • the low frequency baseband signal is processed by the baseband processor and passed to the application processor.
  • the application processor outputs sound signals through audio devices (not limited to the speaker 170A, the receiver 170B, etc.), or displays images or videos through the display screen 194 .
  • the modem processor may be a stand-alone device.
  • the modem processor may be independent of the processor 110, and may be provided in the same device as the mobile communication module 150 or other functional modules.
  • the wireless communication module 160 can provide wireless communication solutions applied on the electronic device 100, including wireless local area network (WLAN) (such as a wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), and the like.
  • the wireless communication module 160 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 160 receives electromagnetic waves via the antenna 2 , frequency modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110 .
  • the wireless communication module 160 can also receive the signal to be sent from the processor 110, perform frequency modulation on it, amplify it, and convert it into electromagnetic waves for radiation through the antenna 2.
  • the antenna 1 of the electronic device 100 is coupled with the mobile communication module 150, and the antenna 2 is coupled with the wireless communication module 160, so that the electronic device 100 can communicate with the network and other devices through wireless communication technology.
  • the wireless communication technologies may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology, etc.
  • the GNSS may include a global positioning system (GPS), a global navigation satellite system (GLONASS), a BeiDou navigation satellite system (BDS), a quasi-zenith satellite system (QZSS), and/or a satellite-based augmentation system (SBAS).
  • the electronic device 100 implements a display function through a GPU, a display screen 194, an application processor, and the like.
  • the GPU is a microprocessor for image processing, and is connected to the display screen 194 and the application processor.
  • the GPU is used to perform mathematical and geometric calculations for graphics rendering.
  • Processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
  • Display screen 194 is used to display images, videos, and the like.
  • Display screen 194 includes a display panel.
  • the display panel can be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), and so on.
  • the electronic device 100 may include one or N display screens 194 , where N is a positive integer greater than one.
  • the electronic device 100 may implement a shooting function through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, and the like.
  • the ISP is used to process the data fed back by the camera 193 .
  • when the shutter is opened, light is transmitted through the lens to the camera photosensitive element, the optical signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP for processing, which converts it into an image visible to the naked eye.
  • ISP can also perform algorithm optimization on image noise, brightness, and skin tone.
  • ISP can also optimize the exposure, color temperature and other parameters of the shooting scene.
  • the ISP may be provided in the camera 193 .
  • Camera 193 is used to capture still images or video.
  • the object is projected through the lens to generate an optical image onto the photosensitive element.
  • the photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the optical signal into an electrical signal, and then transmits the electrical signal to the ISP to convert it into a digital image signal.
  • the ISP outputs the digital image signal to the DSP for processing.
  • the DSP converts the digital image signal into an image signal in a standard format such as RGB or YUV.
  • the electronic device 100 may include 1 or N cameras 193 , where N is a positive integer greater than 1.
  • the digital signal processor is used to process digital signals; in addition to digital image signals, it can also process other digital signals. For example, when the electronic device 100 selects a frequency point, the digital signal processor is used to perform Fourier transform on the frequency point energy, and so on.
  • Video codecs are used to compress or decompress digital video.
  • the electronic device 100 may support one or more video codecs.
  • the electronic device 100 can play or record videos of various encoding formats, such as: Moving Picture Experts Group (moving picture experts group, MPEG) 1, MPEG2, MPEG3, MPEG4 and so on.
  • the NPU is a neural-network (NN) computing processor.
  • Applications such as intelligent cognition of the electronic device 100 can be implemented through the NPU, such as image recognition, face recognition, speech recognition, text understanding, and the like.
  • the external memory interface 120 can be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device 100 .
• the external memory card communicates with the processor 110 through the external memory interface 120 to realize the data storage function. For example, files such as music and videos are saved in the external memory card.
  • Internal memory 121 may be used to store computer executable program code, which includes instructions.
  • the internal memory 121 may include a storage program area and a storage data area.
  • the storage program area can store an operating system, an application program required for at least one function (such as a sound playback function, an image playback function, etc.), and the like.
  • the storage data area may store data (such as audio data, phone book, etc.) created during the use of the electronic device 100 and the like.
  • the internal memory 121 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, universal flash storage (UFS), and the like.
  • the processor 110 executes various functional applications and data processing of the electronic device 100 by executing instructions stored in the internal memory 121 and/or instructions stored in a memory provided in the processor.
  • the electronic device 100 may implement audio functions through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, an application processor, and the like. Such as music playback, recording, etc.
  • the audio module 170 is used for converting digital audio information into analog audio signal output, and also for converting analog audio input into digital audio signal. Audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be provided in the processor 110 , or some functional modules of the audio module 170 may be provided in the processor 110 .
• The speaker 170A, also referred to as a "loudspeaker", is used to convert audio electrical signals into sound signals.
  • the electronic device 100 can listen to music through the speaker 170A, or listen to a hands-free call.
• The receiver 170B, also referred to as an "earpiece", is used to convert audio electrical signals into sound signals.
  • the voice can be answered by placing the receiver 170B close to the human ear.
• The microphone 170C, also called a "mic", is used to convert sound signals into electrical signals.
• The user can speak with the mouth close to the microphone 170C to input a sound signal into the microphone 170C.
  • the electronic device 100 may be provided with at least one microphone 170C. In other embodiments, the electronic device 100 may be provided with two microphones 170C, which can implement a noise reduction function in addition to collecting sound signals. In other embodiments, the electronic device 100 may further be provided with three, four or more microphones 170C to collect sound signals, reduce noise, identify sound sources, and implement directional recording functions.
  • the earphone jack 170D is used to connect wired earphones.
• the earphone interface 170D can be the USB interface 130, or can be a 3.5 mm open mobile terminal platform (OMTP) standard interface or a Cellular Telecommunications Industry Association of the USA (CTIA) standard interface.
  • the pressure sensor 180A is used to sense pressure signals, and can convert the pressure signals into electrical signals.
  • the pressure sensor 180A may be provided on the display screen 194 .
• the capacitive pressure sensor may comprise at least two parallel plates of conductive material. When a force is applied to the pressure sensor 180A, the capacitance between the electrodes changes.
  • the electronic device 100 determines the intensity of the pressure according to the change in capacitance. When a touch operation acts on the display screen 194, the electronic device 100 detects the intensity of the touch operation according to the pressure sensor 180A.
  • the electronic device 100 may also calculate the touched position according to the detection signal of the pressure sensor 180A.
  • touch operations acting on the same touch position but with different touch operation intensities may correspond to different operation instructions. For example, when a touch operation whose intensity is less than the first pressure threshold acts on the short message application icon, the instruction for viewing the short message is executed. When a touch operation with a touch operation intensity greater than or equal to the first pressure threshold acts on the short message application icon, the instruction to create a new short message is executed.
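• By way of a non-limiting illustration, the following Python sketch mirrors the pressure-threshold dispatch described above; the threshold value and the instruction names are assumptions for illustration only, not values from this embodiment.

```python
# Non-limiting sketch of pressure-threshold dispatch on the short message icon.
# FIRST_PRESSURE_THRESHOLD and the instruction names are illustrative assumptions.
FIRST_PRESSURE_THRESHOLD = 0.5  # hypothetical normalized touch intensity

def handle_message_icon_touch(intensity: float) -> str:
    """Map the touch operation intensity to an operation instruction."""
    if intensity < FIRST_PRESSURE_THRESHOLD:
        return "view_short_message"        # light press: view the short message
    return "create_new_short_message"      # firm press: create a new short message

print(handle_message_icon_touch(0.2))  # view_short_message
print(handle_message_icon_touch(0.8))  # create_new_short_message
```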
  • the gyro sensor 180B may be used to determine the motion attitude of the electronic device 100 .
• The angular velocity of the electronic device 100 about three axes (i.e., the x, y, and z axes) may be determined through the gyro sensor 180B.
  • the gyro sensor 180B can be used for image stabilization.
  • the gyro sensor 180B detects the angle at which the electronic device 100 shakes, calculates the distance that the lens module needs to compensate for according to the angle, and allows the lens to counteract the shake of the electronic device 100 through reverse motion to achieve anti-shake.
  • the gyro sensor 180B can also be used for navigation and somatosensory game scenarios.
  • the air pressure sensor 180C is used to measure air pressure.
  • the electronic device 100 calculates the altitude through the air pressure value measured by the air pressure sensor 180C to assist in positioning and navigation.
  • the magnetic sensor 180D includes a Hall sensor.
  • the electronic device 100 can detect the opening and closing of the flip holster using the magnetic sensor 180D.
  • the electronic device 100 can detect the opening and closing of the flip according to the magnetic sensor 180D. Further, according to the detected opening and closing state of the leather case or the opening and closing state of the flip cover, characteristics such as automatic unlocking of the flip cover are set.
  • the acceleration sensor 180E can detect the magnitude of the acceleration of the electronic device 100 in various directions (generally three axes).
  • the magnitude and direction of gravity can be detected when the electronic device 100 is stationary. It can also be used to identify the posture of electronic devices, and can be used in applications such as horizontal and vertical screen switching, pedometers, etc.
  • the electronic device 100 can measure the distance through infrared or laser. In some embodiments, when shooting a scene, the electronic device 100 can use the distance sensor 180F to measure the distance to achieve fast focusing.
  • Proximity light sensor 180G may include, for example, light emitting diodes (LEDs) and light detectors, such as photodiodes.
  • the light emitting diodes may be infrared light emitting diodes.
  • the electronic device 100 emits infrared light to the outside through the light emitting diode.
  • Electronic device 100 uses photodiodes to detect infrared reflected light from nearby objects. When sufficient reflected light is detected, it can be determined that there is an object near the electronic device 100 . When insufficient reflected light is detected, the electronic device 100 may determine that there is no object near the electronic device 100 .
  • the electronic device 100 can use the proximity light sensor 180G to detect that the user holds the electronic device 100 close to the ear to talk, so as to automatically turn off the screen to save power.
• The proximity light sensor 180G can also be used in holster mode and pocket mode to automatically unlock and lock the screen.
  • the ambient light sensor 180L is used to sense ambient light brightness.
  • the electronic device 100 can adaptively adjust the brightness of the display screen 194 according to the perceived ambient light brightness.
  • the ambient light sensor 180L can also be used to automatically adjust the white balance when taking pictures.
  • the ambient light sensor 180L can also cooperate with the proximity light sensor 180G to detect whether the electronic device 100 is in a pocket, so as to prevent accidental touch.
  • the fingerprint sensor 180H is used to collect fingerprints.
  • the electronic device 100 can use the collected fingerprint characteristics to realize fingerprint unlocking, accessing application locks, taking pictures with fingerprints, answering incoming calls with fingerprints, and the like.
  • the temperature sensor 180J is used to detect the temperature.
  • the electronic device 100 uses the temperature detected by the temperature sensor 180J to execute a temperature processing strategy. For example, when the temperature reported by the temperature sensor 180J exceeds a threshold value, the electronic device 100 reduces the performance of the processor located near the temperature sensor 180J in order to reduce power consumption and implement thermal protection.
• When the temperature is lower than another threshold, the electronic device 100 heats the battery 142 to avoid an abnormal shutdown of the electronic device 100 caused by the low temperature.
• In some other embodiments, when the temperature is lower than still another threshold, the electronic device 100 boosts the output voltage of the battery 142 to avoid an abnormal shutdown caused by the low temperature.
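• The temperature processing strategy above can be summarized by the following non-limiting sketch; the thresholds and action names are illustrative assumptions, not values from this embodiment.

```python
# Non-limiting sketch of the temperature processing strategy; thresholds and
# action names are illustrative assumptions, not values from this embodiment.
HIGH_TEMP_C = 45.0   # hypothetical throttling threshold
LOW_TEMP_C = -10.0   # hypothetical low-temperature threshold

def apply_thermal_policy(temp_c: float) -> list[str]:
    """Return the mitigation actions for a reported sensor temperature."""
    if temp_c > HIGH_TEMP_C:
        # Reduce performance of the processor near the sensor (thermal protection).
        return ["throttle_nearby_processor"]
    if temp_c < LOW_TEMP_C:
        # Heat the battery and boost its output voltage to avoid abnormal shutdown.
        return ["heat_battery", "boost_battery_output_voltage"]
    return []

print(apply_thermal_policy(50.0))   # ['throttle_nearby_processor']
print(apply_thermal_policy(-15.0))  # ['heat_battery', 'boost_battery_output_voltage']
```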
• The touch sensor 180K is also called a "touch device".
  • the touch sensor 180K may be disposed on the display screen 194 , and the touch sensor 180K and the display screen 194 form a touch screen, also called a “touch screen”.
  • the touch sensor 180K is used to detect a touch operation on or near it.
  • the touch sensor can pass the detected touch operation to the application processor to determine the type of touch event.
  • Visual output related to touch operations may be provided through display screen 194 .
  • the touch sensor 180K may also be disposed on the surface of the electronic device 100 , which is different from the location where the display screen 194 is located.
  • the bone conduction sensor 180M can acquire vibration signals.
  • the bone conduction sensor 180M can acquire the vibration signal of the vibrating bone mass of the human voice.
  • the bone conduction sensor 180M can also contact the pulse of the human body and receive the blood pressure beating signal.
  • the bone conduction sensor 180M can also be disposed in the earphone, combined with the bone conduction earphone.
  • the audio module 170 can analyze the voice signal based on the vibration signal of the vocal vibration bone block obtained by the bone conduction sensor 180M, so as to realize the voice function.
  • the application processor can analyze the heart rate information based on the blood pressure beat signal obtained by the bone conduction sensor 180M, and realize the function of heart rate detection.
• the keys 190 include a power-on key, a volume key, and the like. The keys 190 may be mechanical keys or touch keys.
  • the electronic device 100 may receive key inputs and generate key signal inputs related to user settings and function control of the electronic device 100 .
  • Motor 191 can generate vibrating cues.
  • the motor 191 can be used for vibrating alerts for incoming calls, and can also be used for touch vibration feedback.
  • touch operations acting on different applications can correspond to different vibration feedback effects.
  • the motor 191 can also correspond to different vibration feedback effects for touch operations on different areas of the display screen 194 .
  • Different application scenarios for example: time reminder, receiving information, alarm clock, games, etc.
  • the touch vibration feedback effect can also support customization.
  • the indicator 192 can be an indicator light, which can be used to indicate the charging state, the change of the power, and can also be used to indicate a message, a missed call, a notification, and the like.
  • the SIM card interface 195 is used to connect a SIM card.
  • the SIM card can be contacted and separated from the electronic device 100 by inserting into the SIM card interface 195 or pulling out from the SIM card interface 195 .
  • the electronic device 100 may support 1 or N SIM card interfaces, where N is a positive integer greater than 1.
  • the SIM card interface 195 can support Nano SIM card, Micro SIM card, SIM card and so on. Multiple cards can be inserted into the same SIM card interface 195 at the same time. The types of the plurality of cards may be the same or different.
  • the SIM card interface 195 can also be compatible with different types of SIM cards.
  • the SIM card interface 195 is also compatible with external memory cards.
  • the electronic device 100 interacts with the network through the SIM card to realize functions such as call and data communication.
• the electronic device 100 employs an eSIM, i.e., an embedded SIM card.
  • the eSIM card can be embedded in the electronic device 100 and cannot be separated from the electronic device 100 .
  • the software system of the electronic device 100 may adopt a layered architecture, an event-driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture.
• the embodiment of the present invention takes an Android system with a layered architecture as an example to illustrate the software structure of the electronic device 100.
  • FIG. 2 is a block diagram of a software structure of an electronic device 100 according to an embodiment of the present invention.
  • the layered architecture divides the software into several layers, and each layer has a clear role and division of labor. Layers communicate with each other through software interfaces.
  • the Android system is divided into four layers, which are, from top to bottom, an application layer, an application framework layer, an Android runtime (Android runtime) and a system library, and a kernel layer.
  • the application layer can include a series of application packages.
  • the application package can include applications such as camera, gallery, calendar, call, map, navigation, WLAN, Bluetooth, music, video, short message and so on.
  • the application framework layer provides an application programming interface (application programming interface, API) and a programming framework for applications in the application layer.
  • the application framework layer includes some predefined functions.
  • the application framework layer may include window managers, content providers, view systems, telephony managers, resource managers, notification managers, and the like.
  • a window manager is used to manage window programs.
  • the window manager can get the size of the display screen, determine whether there is a status bar, lock the screen, take screenshots, etc.
  • Content providers are used to store and retrieve data and make these data accessible to applications.
  • the data may include video, images, audio, calls made and received, browsing history and bookmarks, phone book, etc.
  • the view system includes visual controls, such as controls for displaying text, controls for displaying pictures, and so on. View systems can be used to build applications.
  • a display interface can consist of one or more views.
  • the display interface including the short message notification icon may include a view for displaying text and a view for displaying pictures.
  • the phone manager is used to provide the communication function of the electronic device 100 .
• including the management of call status (connecting, hanging up, etc.).
  • the resource manager provides various resources for the application, such as localization strings, icons, pictures, layout files, video files and so on.
  • the notification manager enables applications to display notification information in the status bar, which can be used to convey notification-type messages, and can disappear automatically after a brief pause without user interaction. For example, the notification manager is used to notify download completion, message reminders, etc.
  • the notification manager can also display notifications in the status bar at the top of the system in the form of graphs or scroll bar text, such as notifications of applications running in the background, and notifications on the screen in the form of dialog windows. For example, text information is prompted in the status bar, a prompt sound is issued, the electronic device vibrates, and the indicator light flashes.
  • Android Runtime includes core libraries and a virtual machine. Android runtime is responsible for scheduling and management of the Android system.
• the core library consists of two parts: one part is the functions that the Java language needs to call, and the other part is the core library of Android.
  • the application layer and the application framework layer run in virtual machines.
  • the virtual machine executes the java files of the application layer and the application framework layer as binary files.
  • the virtual machine is used to perform functions such as object lifecycle management, stack management, thread management, safety and exception management, and garbage collection.
  • a system library can include multiple functional modules. For example: surface manager (surface manager), media library (Media Libraries), 3D graphics processing library (eg: OpenGL ES), 2D graphics engine (eg: SGL), etc.
  • the Surface Manager is used to manage the display subsystem and provides a fusion of 2D and 3D layers for multiple applications.
  • the media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files.
  • the media library can support a variety of audio and video encoding formats, such as: MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, etc.
  • the 3D graphics processing library is used to implement 3D graphics drawing, image rendering, compositing, and layer processing.
  • 2D graphics engine is a drawing engine for 2D drawing.
  • the kernel layer is the layer between hardware and software.
  • the kernel layer contains at least display drivers, camera drivers, audio drivers, and sensor drivers.
• When the touch sensor 180K receives a touch operation, a corresponding hardware interrupt is sent to the kernel layer.
  • the kernel layer processes touch operations into raw input events (including touch coordinates, timestamps of touch operations, etc.). Raw input events are stored at the kernel layer.
• the application framework layer obtains the original input event from the kernel layer and identifies the control corresponding to the input event. Taking, as an example, a touch click operation whose corresponding control is the camera application icon: the camera application calls the interface of the application framework layer to start the camera application, which in turn starts the camera driver by calling the kernel layer.
  • the camera 193 captures still images or video.
  • the method includes:
  • the first electronic device renders the native image data through the first graphics rendering hardware to obtain a first image
  • the application program transmits the native image data to the first graphics rendering hardware of the first electronic device for rendering.
  • Native image data refers to unrendered image files generated by the application.
  • Rendering refers to a process of converting the image data stored in the first electronic device into visible pixels through techniques such as rasterization.
  • the first electronic device may obtain the target image of the target image quality index by means of image super-resolution reconstruction.
  • the first electronic device cannot directly perform super-resolution reconstruction on the native image data.
• the first electronic device needs to perform preliminary rendering on the native image data to obtain a first image on which super-resolution reconstruction can be performed.
  • the first electronic device may call the first graphics rendering hardware to render the native image data with the first image quality index to obtain the first image.
• the native image data may be rendered at a first image resolution that is lower than the preset image resolution; or, the first graphics rendering hardware may adjust other image quality indexes so that the first image quality index is lower than the preset image quality index, and then render the native image data.
  • the above-mentioned first image quality index may be a specific type of image quality index, or the above-mentioned first image quality index may be a collection of multiple image quality indexes.
• when the first image quality index is a set of multiple image quality indexes, "the first image quality index is lower than the preset image quality index" may be understood as meaning that some or all of the image quality indexes in the set are lower than the corresponding preset image quality indexes.
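• As a non-limiting illustration of this set-wise comparison, the following Python sketch treats the quality indexes as a dictionary; the index names and values are assumptions for illustration only.

```python
# Non-limiting sketch of the set-wise comparison: the first image quality index
# counts as "lower" when some or all of its entries fall below the preset ones.
# The index names and values are illustrative assumptions.
preset_indexes = {"resolution_scale": 1.0, "texture_quality": 1.0, "shadow_quality": 1.0}
first_indexes  = {"resolution_scale": 0.5, "texture_quality": 1.0, "shadow_quality": 0.7}

def is_lower(first: dict, preset: dict) -> bool:
    """True if some or all indexes in `first` are below the preset indexes."""
    return any(first[k] < preset[k] for k in preset)

print(is_lower(first_indexes, preset_indexes))  # True: two of three are lower
```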
  • the above-mentioned first graphics rendering hardware can be selected according to the actual situation.
• the first graphics rendering hardware may be one or more of a central processing unit (CPU), a graphics processing unit (GPU), and a neural-network processing unit (NPU).
  • the above-mentioned first graphics rendering hardware may be a combination of a CPU and a GPU.
  • the first electronic device performs super-resolution reconstruction on the first image through the second graphics rendering hardware to obtain a target image.
  • the first electronic device performs preliminary rendering on the native image data of the application through the first graphics rendering hardware to obtain a first image.
  • the image quality of the first image is low, and it is difficult to meet the user's requirements for product image quality.
  • Super-resolution reconstruction refers to the application of AI technology to map low-resolution images to high-resolution, with the expectation of enhancing image quality.
  • the first electronic device may perform super-resolution reconstruction on the first image by using the second graphics rendering hardware.
  • the first image can be input into a trained super-resolution model, and the image quality of the first image is enhanced by the super-resolution model to obtain the target image.
  • the above-mentioned super-score model can be selected according to the actual situation.
• the above-mentioned super-resolution model can be a super-resolution convolutional neural network (SRCNN) model, a fast super-resolution convolutional neural network (FSRCNN) model, an efficient sub-pixel convolutional neural network (ESPCN) model, a deeply-recursive convolutional network (DRCN) model, a very deep network for super-resolution (VDSR) model, or the like.
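• The embodiment does not specify any network structure or framework; as a non-limiting illustration, the following PyTorch sketch shows the standard three-layer SRCNN layout (9-1-5 kernels), whose output has the same spatial size as its input.

```python
import torch
import torch.nn as nn

class SRCNN(nn.Module):
    """Standard 9-1-5 SRCNN: patch extraction, non-linear mapping, reconstruction."""
    def __init__(self, channels: int = 3):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, 64, kernel_size=9, padding=4), nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=1),                  nn.ReLU(inplace=True),
            nn.Conv2d(32, channels, kernel_size=5, padding=2),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.body(x)

# Input and output have the same spatial size, i.e. a "single-enhanced" style model.
out = SRCNN()(torch.randn(1, 3, 360, 480))
print(out.shape)  # torch.Size([1, 3, 360, 480])
```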
• If the single-frame running time of the super-resolution model is too long, the application program cannot maintain a stable frame rate, which causes the screen of the first electronic device to freeze. Therefore, developers should limit the size of the super-resolution model when choosing it, so that the single-frame running time of the model meets the single-frame time required by the application's display frame rate. For example, if an application sends frames at 90 frames per second, the application's single-frame time is 1/90 of a second.
• In this case, the scale of the super-resolution model should be limited so that its single-frame running time is less than 1/90 of a second, thereby keeping the application's frame sending rate at 90 frames per second as far as possible and reducing the occurrence of screen freezes.
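• The frame-budget check described above can be sketched as follows (non-limiting; PyTorch is assumed, and the stand-in model and CPU timing are for illustration only, since on a real device the model would run on the GPU or NPU backend).

```python
import time
import torch

def single_frame_budget(fps: float) -> float:
    """Per-frame time budget: 1/90 s when the application sends 90 frames per second."""
    return 1.0 / fps

def fits_frame_budget(model: torch.nn.Module, frame: torch.Tensor, fps: float) -> bool:
    """Crude latency probe: one warm-up pass, then time a single forward pass."""
    model.eval()
    with torch.no_grad():
        model(frame)                              # warm-up
        start = time.perf_counter()
        model(frame)
        return (time.perf_counter() - start) < single_frame_budget(fps)

# Stand-in single-layer model; a real check would run on the device's GPU/NPU backend.
toy_model = torch.nn.Conv2d(3, 3, kernel_size=3, padding=1)
print(fits_frame_budget(toy_model, torch.randn(1, 3, 360, 480), fps=90))
```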
  • the first electronic device may acquire at least one set of image sample pairs, and train the super-score model by using the image sample pairs.
  • Image sample pairs refer to pairs of sample images.
  • Each set of image sample pairs includes a first sample image and a second sample image. The contents of the first sample image and the second sample image are consistent, but the image quality of the first sample image is lower than that of the second sample image.
  • the first electronic device may input the first sample image in the image sample pair into the super-score model to obtain a first output image.
  • the first electronic device calculates the loss value according to the first output image, the second sample image and the preset loss function, and updates the super-score model according to the loss value and the preset network update algorithm.
• After the super-resolution model is updated, the process returns to the previous step, and the image sample pairs are used to train the model in a loop until the number of iterations reaches the preset count threshold or the loss value is less than the preset loss threshold.
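• As a non-limiting illustration of this training loop, the following PyTorch sketch uses a stand-in model, a mean-squared-error loss as the preset loss function, and Adam as the preset network update algorithm; the data, thresholds, and model structure are assumptions for illustration only.

```python
import torch
import torch.nn as nn

# Hypothetical paired data: first samples are low quality, second samples high quality.
first_samples  = torch.randn(8, 3, 64, 64)
second_samples = torch.randn(8, 3, 64, 64)

model = nn.Sequential(                       # stand-in super-resolution model
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1),
)
loss_fn = nn.MSELoss()                              # the "preset loss function"
optimizer = torch.optim.Adam(model.parameters())    # the "preset update algorithm"

max_iterations, loss_threshold = 100, 1e-3          # preset thresholds (assumed values)
for iteration in range(max_iterations):
    first_output = model(first_samples)             # first output image
    loss = loss_fn(first_output, second_samples)    # compare with second sample image
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    if loss.item() < loss_threshold:                # stop early once the loss is small
        break
```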
  • the acquisition method of the above-mentioned image sample pair can be selected according to the actual situation.
• the above-mentioned image sample pairs may come from an image dataset that is originally paired, or may be obtained by degrading a high-quality image into a low-quality image to form a pair.
  • the source of image sample pairs will have a certain impact on the performance of the super-resolution model. If the developer wants to train a general super-score model, the developer can collect samples in an untargeted manner when sampling image sample pairs to obtain general image sample pairs.
  • the first electronic device can obtain the general super-score model by using the general image sample pair to train the super-score model.
  • the general super-score model has high applicability and can be applied to many application scenarios.
  • the image quality optimization capability of the general super-resolution model is limited, and it is difficult to optimize the image quality of each application scenario to a high degree.
• If the developer wants to train a specific super-resolution model, the developer should collect only image sample pairs related to the same product or the same type of product when sampling, to obtain specific image sample pairs.
  • the first electronic device uses a specific pair of image samples to train a super-scoring model, and can obtain a specific super-scoring model for a certain product or a certain type of product.
  • the applicability of specific super-score models is poor and can only be applied to specific products.
  • the image quality optimization capability of a specific super-scoring model is high, and a higher degree of image quality optimization can be performed on the image of a specific product.
  • the first electronic device uses the above-mentioned generic game image sample pair to train a super-score model, so that the trained super-score model can be applied to various game applications.
• When training a super-resolution model for "Arena of Valor", developers should use only the game images in "Arena of Valor" as specific game image sample pairs.
  • the above game images can be character images, terrain images, skill images, etc. in "Arena of Valor".
  • the first electronic device trains the super-scoring model according to the above-mentioned specific game image sample pair, so that the trained super-scoring model can specifically enhance the image quality of the game "Arena of Valor".
• the first electronic device can detect the identifier of the application when the user starts the application, and select the corresponding super-resolution model according to the identifier of the application to perform the operation of the above step S302.
  • the first electronic device may call the general super-score model to perform the steps of the above-mentioned step S302.
• After using the general super-resolution model to perform super-resolution reconstruction on the first image, the first electronic device can also establish an association relationship between the general super-resolution model and the identifier of the above application program, so that when the first electronic device processes the first image of the application program next time, it can find the same general super-resolution model according to the association relationship to process the first image.
  • the identifier of the application program may be the package name (packname) of the application program.
  • the first electronic device may associate the super-scoring model with the package name of "Arena of Valor”.
• the first electronic device obtains the package name of "Arena of Valor", searches for the corresponding super-resolution model according to the package name, and uses that model to process the to-be-processed image of "Arena of Valor".
  • the identifier of the application program may also be user-defined.
  • the user defines the ID of "Arena of Valor” as 0010.
  • the first electronic device associates "0010" with "Arena of Valor” and a super-resolution model for "Arena of Valor”.
• the first electronic device searches for the identifier corresponding to "Arena of Valor", and obtains the identifier "0010" of "Arena of Valor".
  • the first electronic device searches for a corresponding super-resolution model according to the identifier "0010”, and uses the corresponding super-resolution model to process the image of "Arena of Valor".
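• The identifier-based model selection and the association relationship described above can be sketched as follows; the identifiers and model names are illustrative assumptions, not values fixed by this embodiment.

```python
# Non-limiting sketch of identifier-based model selection with an association
# cache; the identifiers and model names are illustrative assumptions.
specific_models = {"0010": "superres_arena_of_valor"}  # specific models by app ID
GENERIC_MODEL = "superres_generic_game"
associations: dict[str, str] = {}                      # app ID -> model, learned lazily

def select_model(app_id: str) -> str:
    if app_id in specific_models:
        return specific_models[app_id]
    if app_id not in associations:
        # No specific model found: fall back to the generic model and remember the
        # association so the same model is reused for this application next time.
        associations[app_id] = GENERIC_MODEL
    return associations[app_id]

print(select_model("0010"))  # superres_arena_of_valor
print(select_model("0042"))  # superres_generic_game (association now cached)
```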
• the above-mentioned super-resolution model may be a single-enhanced super-resolution model, or a multiple-enhanced super-resolution model.
• the single-enhanced super-resolution model is a super-resolution model in which the image resolution of the input image and the image resolution of the output image are the same.
  • the multiple-enhanced super-resolution model is a super-resolution model in which the image resolution of the input image is smaller than that of the output image.
• When the super-resolution model is single-enhanced, the image resolution of the first image is consistent with the image resolution of the target image, and the single-enhanced super-resolution model enhances the image quality of the first image by improving its other image quality indexes, to obtain the target image.
  • the image resolution of the first image is smaller than that of the target image.
  • the first graphics rendering hardware of the first electronic device may render at a smaller image resolution, thereby reducing the hardware resources occupied by the first graphics rendering hardware in the rendering process , as well as reducing rendering power consumption.
• After the first graphics rendering hardware renders a first image with a smaller image resolution, the first electronic device enhances the image quality of the first image through the multiple-enhanced super-resolution model according to the target resolution configured by the user, and adapts the image resolution of the first image to the target image resolution to obtain the target image.
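• As a non-limiting sketch of a multiple-enhanced super-resolution model, the following PyTorch example uses ESPCN-style sub-pixel convolution; for simplicity it assumes a uniform 4× factor on a 480 × 270 frame (the embodiment's 480 × 360 to 1920 × 1080 example implies 4× width and 3× height).

```python
import torch
import torch.nn as nn

class MultipleEnhancedSR(nn.Module):
    """ESPCN-style sketch: output resolution is `scale` times the input resolution."""
    def __init__(self, scale: int = 4, channels: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 64, kernel_size=5, padding=2), nn.Tanh(),
            nn.Conv2d(64, 32, kernel_size=3, padding=1), nn.Tanh(),
            nn.Conv2d(32, channels * scale ** 2, kernel_size=3, padding=1),
            nn.PixelShuffle(scale),  # rearranges channel blocks into spatial detail
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

low_res = torch.randn(1, 3, 270, 480)  # render small to save rendering resources
print(MultipleEnhancedSR(scale=4)(low_res).shape)  # torch.Size([1, 3, 1080, 1920])
```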
  • the second graphics rendering hardware that runs the above-mentioned super-resolution model may be set according to the actual situation.
• the first electronic device may run the above super-resolution model on the CPU and enhance the image quality of the first image through the CPU; in other embodiments, it may run the super-resolution model on the GPU and enhance the image quality of the first image through the GPU; or, it may run the super-resolution model on the NPU and enhance the image quality of the first image through the NPU.
  • the present application does not limit the hardware for performing the above-mentioned super-resolution reconstruction operation in the first electronic device.
  • FIG. 7 is a schematic diagram of a first electronic device suitable for scenario 1, scenario 2, scenario 3, and scenario 4 according to this embodiment.
  • a GPU 701 and an NPU 702 may be provided in the first electronic device.
• In the NPU 702, a general game-type super-resolution model and a super-resolution model for game A are preset, and the super-resolution model for game A is associated with the application identifier of game A.
• a plurality of icons can be set on the main page of the first electronic device, including "clock", "calendar", "game B", "memo", "camera", "address book", "phone", "information", and other icons.
  • An icon represents an application.
• After the game B is started, the application program of the game B sends the native image data to the GPU 701 frame by frame.
  • the GPU 701 performs frame-by-frame rendering on the above-mentioned native image data, obtains a first image corresponding to each frame of native image data, and sends the first image to the NPU 702 frame by frame.
• After receiving the first image, the NPU 702 obtains the application identifier of game B; finding no super-resolution model corresponding to game B according to that identifier, it selects the general game-type super-resolution model as the target super-resolution model.
• the NPU 702 inputs the first image into the target super-resolution model frame by frame, enhances the image quality of the first image through the target super-resolution model to obtain the target image corresponding to each frame, and sends the target images frame by frame to the display screen of the first electronic device for on-screen display, so that the screen of the game B is displayed on the display screen.
• When the user triggers the game B again, the first electronic device still uses the same general game-type super-resolution model to process the images of the game B.
• the first electronic device can also randomly select a super-resolution model from the multiple general game-type super-resolution models to process the images of the game B.
• After using a general game-type super-resolution model to process the images of the game B for the first time, the first electronic device may also establish an association between the application identifier of the game B and that super-resolution model. When the user triggers the game B again, the first electronic device can obtain the application identifier of the game B, find the same general game-type super-resolution model according to the identifier, and process the images of the game B with it.
• a plurality of icons can be set on the main page of the first electronic device, including "clock", "calendar", "game A", "memo", "camera", "contact book", "phone", "information", and other icons.
  • An icon represents an application.
  • the user clicks the icon of the game A on the first electronic device, and the first electronic device starts the game A in response to the user's click operation.
  • the application program of the game A sends the native image data to the GPU 701 frame by frame.
  • the GPU 701 performs frame-by-frame rendering on the above-mentioned native image data, obtains a first image corresponding to each frame of native image data, and sends the first image to the NPU 702 frame by frame.
• After receiving the first image, the NPU 702 obtains the application identifier of game A, finds the super-resolution model for game A according to the identifier, and selects it as the target super-resolution model.
• the NPU 702 inputs the first image into the target super-resolution model frame by frame, enhances the image quality of the first image through the target super-resolution model to obtain the target image corresponding to each frame, and sends the target images frame by frame to the display screen of the first electronic device for on-screen display, so that the screen of the game A is displayed on the display screen.
  • the user clicks the icon of the game A on the first electronic device, and the first electronic device starts the game A in response to the user's click operation.
  • the application of the game A sends the native image data 1301 to the GPU 701 frame by frame.
  • the image resolution of the native image data 1301 is 480 ⁇ 360.
  • the GPU 701 performs frame-by-frame rendering on the above-mentioned native image data 1301 to obtain a first image 1302 corresponding to each frame of the native image data 1301, and the image resolution of the first image 1302 is 1920 ⁇ 1080.
  • the GPU 701 sends the initially rendered first image 1302 to the NPU 702 frame by frame.
• After receiving the first image 1302, the NPU 702 obtains the application identifier of game A, finds the super-resolution model for game A according to the identifier, and selects it as the target super-resolution model.
• the target super-resolution model is a single-enhanced super-resolution model, and the target resolution of the output image of the target super-resolution model is 1920 × 1080.
  • the NPU 702 inputs the first image 1302 into the target super-resolution model frame by frame, and enhances the image quality of the first image 1302 through the target super-resolution model to obtain a target image 1303 with an image resolution of 1920 ⁇ 1080.
• the image resolution of the target image 1303 is the same as that of the first image 1302, but the definition of the target image 1303 is higher than that of the first image 1302, and the image quality of the target image 1303 is higher than that of the first image 1302.
• After acquiring the target image 1303, the NPU 702 sends the target image 1303 frame by frame to the display screen of the first electronic device for on-screen display.
  • the application program of the game A sends the native image data 1501 to the GPU 701 frame by frame.
  • the image resolution of the native image data 1501 is 480 ⁇ 360.
  • the GPU 701 performs frame-by-frame rendering on the above-mentioned native image data 1501 to obtain a first image 1502 corresponding to each frame of native image data, and the image resolution of the first image 1502 is 480 ⁇ 360.
  • the GPU 701 sends the initially rendered first image 1502 to the NPU 702 frame by frame.
• After receiving the first image 1502, the NPU 702 obtains the application identifier of game A, finds the super-resolution model for game A according to the identifier, and selects it as the target super-resolution model.
  • the target super-resolution model is a multiple-enhanced super-resolution model, and the target resolution of the output image of the target super-resolution model is 1920 ⁇ 1080.
• the NPU 702 inputs the first image 1502 into the target super-resolution model frame by frame, enhances the image quality of the first image 1502 through the target super-resolution model, and performs resolution adaptation on the first image 1502 to obtain a target image 1503 with an image resolution of 1920 × 1080.
  • the definition of the target image 1503 is the same as that of the first image 1502, but the image resolution of the target image 1503 is higher than that of the first image 1502, and the image quality of the target image 1503 is higher than that of the first image 1502.
• After acquiring the target image 1503, the NPU 702 sends the target image 1503 frame by frame to the display screen of the first electronic device for on-screen display.
  • the first electronic device firstly renders the native image data of the application program to obtain the first image. Then, the first electronic device performs super-resolution reconstruction on the first image through the super-resolution model, so as to improve the image quality of the first image, and obtain a target image for display on the upper screen.
• the image processing method provided in this embodiment can reduce the requirements on the hardware resources of the first electronic device and reduce rendering power consumption, thereby solving the problems of high rendering power consumption and a large amount of computation that existing image processing methods face when rendering high-quality products.
  • the above-mentioned steps of preliminary rendering and super-resolution reconstruction may be performed by the same graphics rendering hardware in the first electronic device.
• the above-mentioned steps of preliminary rendering and super-resolution reconstruction may both be performed by a GPU in the first electronic device; in other embodiments, they may both be performed by an NPU in the first electronic device.
  • the above-mentioned steps of preliminary rendering and super-resolution reconstruction may also be performed by different graphics rendering hardware in the first electronic device, that is, the above-mentioned first graphics rendering hardware and second graphics rendering hardware are different graphics rendering hardware.
• the foregoing preliminary rendering step may be performed by the GPU of the first electronic device, and the foregoing super-resolution reconstruction step may be performed by the NPU of the first electronic device; in other embodiments, the preliminary rendering step may be performed by the CPU of the first electronic device, and the super-resolution reconstruction step may be performed by the NPU of the first electronic device.
  • the super-resolution model selected by the first electronic device may be a single-enhanced super-resolution model, or the super-resolution model selected by the first electronic device may also be a multiple-enhanced super-resolution model.
  • the super-resolution model selected by the first electronic device is the multiple-enhanced super-resolution model, the hardware resources occupied by the graphics rendering hardware in the preliminary rendering process and the rendering power consumption of the preliminary rendering can be reduced.
  • the above-mentioned super-scoring model may be a general super-scoring model, or, the above-mentioned super-scoring model may also be a specific super-scoring model for a certain application or a certain type of application.
  • the applicable range of the specific super-resolution model is smaller than that of the general super-resolution model, but usually the image quality enhancement effect of the specific super-resolution model is better than that of the general super-resolution model.
  • the method includes:
  • the first electronic device renders the native image data to obtain a first image
• When the first electronic device performs multi-screen interaction with other electronic devices, the first electronic device may need to project its display screen onto the other electronic devices.
  • the user may feel that the screen of the mobile phone is too small and the visual effect is not good.
• the user can control the mobile phone to perform multi-screen interaction with the smart TV, establish a communication connection between the mobile phone and the smart TV, and project the game screen of the mobile phone onto the smart TV for display, so that the user can watch the game screen on the smart TV and obtain a better visual experience.
• In existing screen projection solutions, the electronic device of the projecting party needs to independently complete the image rendering work and then transmits the rendered image to the electronic device of the projected party. Afterwards, the electronic device of the projected party performs resolution adaptation on the rendered image and displays the adapted image on its screen.
• screen projection technology is usually used to project the display screen of a small-screen device onto the screen of a large-screen device. In this case, an image with a low image resolution is difficult to adapt to a display device with a high image resolution, resulting in a poor user experience.
  • the first electronic device when the user starts the application program on the first electronic device and enables the multi-screen interaction function, the first electronic device can perform preliminary processing on the native image data generated by the application program Render to get the first image.
  • step S301 For the preliminary rendering process, reference may be made to the description of step S301 in the previous embodiment, and details are not repeated here.
  • the first electronic device sends the first image to the designated device to instruct the designated device to perform super-resolution reconstruction on the first image to obtain the target image.
• the designated device (i.e., the aforementioned second electronic device) is another electronic device selected by the user and located in the same local area network as the first electronic device.
  • the user when the user wishes to project the game screen of the mobile phone to the smart TV, the user can enable the "wireless screen projection" function of the mobile phone.
  • the mobile phone After the user turns on the "wireless screen projection" function of the mobile phone, the mobile phone starts to search for available electronic devices in the same local area network.
  • the mobile phone searches for electronic device 1, electronic device 2, and electronic device 3.
  • the mobile phone detects the user's click operation on the electronic device 1, it means that the electronic device 1 is selected by the user, and the mobile phone determines the electronic device 1 as the designated device.
  • the first electronic device After obtaining the first image, the first electronic device sends the first image to the designated device.
  • the above designated device is the electronic device to be projected.
  • the number of specified devices can be one, or the number of specified devices can be multiple.
  • the electronic device 1 is a smart TV and the electronic device 2 is a computer.
• the mobile phone is the electronic device of the projecting party, and the smart TV is the electronic device of the projected party (i.e., the designated device).
  • Users can turn on the "wireless screen projection" function of the mobile phone.
  • the mobile phone starts to search for available electronic devices in the same local area network.
  • the mobile phone searches for electronic device 1, electronic device 2 and electronic device 3.
  • the user clicks on the electronic device 1 and the mobile phone sets the electronic device 1 as a designated device in response to the user's operation, and the number of designated devices is one.
• When the user operates the mobile phone and wants to project the game screen of the mobile phone to the smart TV and the computer at the same time, the mobile phone is the electronic device of the projecting party and the smart TV and the computer are the electronic devices of the projected party (that is, the designated devices); the user can turn on the "wireless screen projection" function of the mobile phone.
• After the user enables the "wireless screen projection" function of the mobile phone, the mobile phone starts to search for available electronic devices in the same local area network. The mobile phone finds electronic device 1, electronic device 2, and electronic device 3. The user then clicks on electronic device 1 and electronic device 2, and the mobile phone sets electronic device 1 and electronic device 2 as designated devices in response to the user's operation; the number of designated devices is 2.
  • the designated device After receiving the first image, the designated device inputs the first image into the trained super-resolution model, performs super-resolution reconstruction on the first image, and obtains and displays the target image.
• the first electronic device can not only use local hardware resources to render the image, but can also use the hardware resources of the designated device to optimize the image quality of the image, making full use of the hardware resources of multiple electronic devices in the same local area network, thereby reducing the load on the local hardware resources of the first electronic device when rendering high-quality images and reducing its rendering power consumption.
• the super-resolution model on the designated device may be a single-enhanced super-resolution model, or a multiple-enhanced super-resolution model.
• the single-enhanced super-resolution model is a super-resolution model in which the image resolution of the input image and the image resolution of the output image are the same.
  • the multiple-enhanced super-resolution model is a super-resolution model in which the image resolution of the input image is smaller than that of the output image.
  • the first electronic device may acquire the target image resolution configured on the specified device.
  • the target image resolution is the image resolution of the screen display image set on the specified device.
  • the first electronic device renders the native image data generated by the application according to the target image resolution to obtain a first image.
  • the image resolution of the first image is the target image resolution.
  • the first electronic device transmits the first image to the designated device.
  • the designated device inputs the first image into the single-enhanced super-resolution model, improves the image quality of the first image through the single-enhanced super-score model, and obtains the target image and displays it on the screen.
  • the image resolution of the target image is the target image resolution.
• the first electronic device can also directly render the native image data generated by the application program according to the first image resolution configured by the user to obtain the first image. At this time, the image resolution of the first image is the first image resolution.
• the first electronic device transmits the first image to the designated device. Since the super-resolution model on the designated device is a single-enhanced super-resolution model, it cannot adapt the resolution of the first image to the target image resolution.
  • the target image resolution is the upper-screen display image resolution set by the specified device. Therefore, after acquiring the first image, the designated device can perform up-sampling processing on the first image, and adapt the resolution of the first image to the target image resolution configured on the designated device to obtain the second image.
  • the image resolution of the second image is the target image resolution.
  • the algorithm applied for up-sampling can be any one of interpolation algorithms such as the nearest neighbor method, bilinear interpolation method, and cubic interpolation method.
  • the designated device performs up-sampling processing on the first image through a preset up-sampling algorithm to obtain the second image.
  • the designated device inputs the second image into the single-enhanced super-score network, and enhances the image quality of the second image through the single-enhancement super-score network to obtain the target image.
  • the image resolution of the target image is the target image resolution.
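• A non-limiting sketch of this up-sample-then-enhance flow, assuming PyTorch: any standard interpolation mode ("nearest", "bilinear", "bicubic") adapts the first image to the target resolution before a single-enhanced model (a stand-in convolution here) enhances the image quality.

```python
import torch
import torch.nn.functional as F

first_image = torch.randn(1, 3, 360, 480)  # first image from the projecting party
target_size = (1080, 1920)                 # target resolution on the designated device

# Up-sample with a standard interpolation mode ("nearest", "bilinear" or "bicubic")
# to obtain the second image at the target image resolution.
second_image = F.interpolate(first_image, size=target_size,
                             mode="bicubic", align_corners=False)

single_enhanced = torch.nn.Conv2d(3, 3, kernel_size=3, padding=1)  # stand-in model
target_image = single_enhanced(second_image)
print(target_image.shape)  # torch.Size([1, 3, 1080, 1920])
```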
  • the first electronic device may render the native image data generated by the application according to the first image resolution configured by the user, Get the first image. At this time, the image resolution of the first image is the first image resolution.
  • the first electronic device transmits the first image to the designated device.
• the designated device inputs the first image into the multiple-enhanced super-resolution network, enhances the image quality of the first image through the network, and adapts the resolution of the first image to the target image resolution configured on the designated device to obtain the target image.
  • the image resolution of the target image is the target image resolution.
  • the super-scoring model on the specified device may be a general super-scoring model, or the above-mentioned super-scoring model may also be a specific super-scoring model trained for a certain product or a certain type of product.
  • the general super-resolution model and the specific super-resolution model reference may be made to the description of the previous embodiment, and details are not repeated here.
  • the designated device may, after receiving the first image, obtain the identifier of the application corresponding to the first image, and select the corresponding super-score model according to the identifier of the application. Perform super-resolution reconstruction processing on the first image.
  • the specified device can call the general super-resolution model to perform super-resolution reconstruction processing.
  • the designated device can also establish a relationship between the general super-resolution model and the identifier of the above-mentioned application after performing super-resolution reconstruction on the first image using the general super-resolution model. association relationship, so that when the specified device processes the first image of the application program next time, the same general super-score model can be found to process the first image according to the foregoing association relationship.
  • the first electronic device may select appropriate graphics rendering hardware to perform preliminary rendering on the native image data according to the actual situation to obtain the first image.
  • the above-mentioned graphics rendering hardware may be one or more of a central processing unit (central processing unit, CPU), a graphics processing unit (graphics processing unit, GPU), and a neural-network processing unit (neural-network processing units, NPU).
  • the above-mentioned graphics rendering hardware may be a combination of a CPU and a GPU.
  • the designated device can choose suitable hardware to run the above-mentioned super-resolution model according to the actual situation.
  • in some embodiments, the designated device may run the above-mentioned super-resolution model on the CPU, and enhance the image quality of the above-mentioned first image through the CPU; in other embodiments, the designated device may run the above-mentioned super-resolution model on the GPU, and enhance the image quality of the first image through the GPU; or, the designated device may run the super-resolution model on the NPU, and enhance the image quality of the first image through the NPU.
  • the present application does not limit the hardware that performs the above-mentioned super-resolution reconstruction operation in the designated device. One hedged way to choose the hardware is sketched below.
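One hedged way to express that hardware choice in PyTorch is sketched below. CPU and GPU selection use standard PyTorch calls; the NPU branch assumes a vendor plugin such as torch_npu is installed, since stock PyTorch does not expose an NPU device.

```python
import torch

def pick_device(prefer=("npu", "cuda", "cpu")):
    """Return the first available device from the preference list."""
    for name in prefer:
        if name == "npu" and hasattr(torch, "npu") and torch.npu.is_available():
            return torch.device("npu")   # only present with a vendor plugin (assumption)
        if name == "cuda" and torch.cuda.is_available():
            return torch.device("cuda")  # run the super-resolution model on the GPU
        if name == "cpu":
            return torch.device("cpu")   # always-available fallback
    return torch.device("cpu")

# usage: model.to(pick_device()) before feeding first images through it
```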
  • for example, the first electronic device (that is, the electronic device of the screen-casting party) may be a mobile phone, and a GPU may be provided in the mobile phone.
  • the designated device (that is, the electronic device of the screen-receiving party) may be a smart TV provided with an NPU; on the smart TV, a general game-type super-resolution model and a super-resolution model for game A are preset, and the super-resolution model for game A is associated with the application identifier of game A.
  • the user operates the mobile phone and projects the display screen of the mobile phone to the smart TV.
  • after game B is started, the application of game B sends native image data to the GPU of the mobile phone frame by frame.
  • the GPU of the mobile phone renders the above-mentioned native image data frame by frame, obtains the first image corresponding to each frame of native image data, and sends the first image frame by frame to the smart TV through the wireless communication module of the mobile phone.
  • after receiving the first image, the smart TV obtains the application identifier corresponding to the first image, and since no super-resolution model corresponding to game B is found according to the application identifier of game B, a general game-type super-resolution model is selected as the target super-resolution model.
  • the NPU of the smart TV inputs the first image into the target super-resolution model frame by frame, enhances the image quality of the first image through the target super-resolution model, obtains the target image corresponding to each frame of the first image, and sends the target image frame by frame to the display screen of the smart TV for on-screen display.
  • the user operates the mobile phone and projects the display screen of the mobile phone to the smart TV.
  • after game A starts, the application of game A sends native image data frame by frame to the GPU of the mobile phone.
  • the GPU of the mobile phone renders the above-mentioned native image data frame by frame, obtains a first image corresponding to each frame of native image data, and sends the first image to the smart TV through the wireless communication module of the mobile phone.
  • after receiving the first image, the smart TV obtains the application identifier corresponding to the first image, finds the super-resolution model for game A according to the application identifier of game A, and selects the super-resolution model for game A as the target super-resolution model.
  • the NPU of the smart TV inputs the first image into the target super-resolution model frame by frame, enhances the image quality of the first image through the target super-resolution model, obtains the target image corresponding to each frame of the first image, and sends the target image frame by frame to the display screen of the smart TV for on-screen display. A sketch of this frame-by-frame receiving loop follows.
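Combining the pieces above, the receiving side's frame loop could be sketched as follows; `stream` and `display` are hypothetical stand-ins for the wireless receive path and the on-screen path, and the model is assumed to already reside on the chosen device.

```python
import torch

def receive_and_display(stream, registry, device, display):
    """Sketch of the screen-receiving side: SR-reconstruct each incoming frame."""
    for frame, app_id in stream:              # first images arrive frame by frame
        model = registry.select(app_id)       # specific model if found, else general
        with torch.no_grad():
            target = model(frame.to(device))  # super-resolution reconstruction
        display.present(target)               # on-screen display of the target image
```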
  • the user operates the mobile phone 2001 to project the display screen of the mobile phone 2001 to the smart TV 2002 and the computer 2003.
  • the application of game A sends the native image data 20011 to the GPU of the mobile phone 2001 frame by frame.
  • the image resolution of the native image data 20011 is 480×360.
  • the mobile phone 2001 performs data interaction with the smart TV 2002 and the computer 2003 through the wireless communication module, and obtains the image resolution 1920×1080 configured on the smart TV 2002 and the image resolution 2560×1440 configured on the computer 2003.
  • the mobile phone 2001 renders the above-mentioned native image data 20011 frame by frame according to the image resolution 1920×1080 configured on the smart TV 2002, and obtains the image 20012; the resolution of the image 20012 is 1920×1080.
  • the mobile phone 2001 renders the above-mentioned native image data 20011 frame by frame according to the image resolution 2560×1440 configured on the computer 2003 to obtain the image 20013; the resolution of the image 20013 is 2560×1440.
  • the mobile phone 2001 sends the image 20012 to the smart TV 2002 through the wireless communication module, and sends the image 20013 to the computer 2003 through the wireless communication module.
  • after receiving the image 20012, the smart TV 2002 obtains the application identifier corresponding to the image 20012, finds the single-enhancement super-resolution model for game A according to the application identifier of game A, and selects the single-enhancement super-resolution model for game A as the target super-resolution model.
  • the NPU of the smart TV 2002 inputs the image 20012 into the target super-resolution model frame by frame, and performs super-resolution reconstruction on the image 20012 through the target super-resolution model to obtain the image 20021, whose image resolution is 1920×1080.
  • the image resolution of the image 20012 and the image resolution of the image 20021 are both 1920×1080, but the definition of the image 20021 is higher than that of the image 20012, so the image quality of the image 20021 is higher than that of the image 20012.
  • the image 20021 is then transmitted frame by frame to the display screen of the smart TV 2002 for on-screen display.
  • after receiving the image 20013, the computer 2003 obtains the application identifier corresponding to the image 20013, finds the single-enhancement super-resolution model for game A according to the application identifier of game A, and selects the single-enhancement super-resolution model for game A as the target super-resolution model.
  • the NPU of the computer 2003 inputs the image 20013 into the target super-resolution model frame by frame, and performs super-resolution reconstruction on the image 20013 through the target super-resolution model to obtain the image 20031, whose image resolution is 2560×1440.
  • the image resolution of the image 20013 and the image resolution of the image 20031 are both 2560×1440, but the definition of the image 20031 is higher than that of the image 20013, so the image quality of the image 20031 is higher than that of the image 20013.
  • after the computer 2003 obtains the image 20031, it transmits the image 20031 frame by frame to the display screen of the computer 2003 for on-screen display. The snippet below ties this example back to the single-enhancement sketch above.
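Tying this example to the single-enhancement sketch given earlier in this section (illustrative tensors only; the sizes are taken from the example, and SingleEnhanceSR is the assumed model class from that sketch):

```python
import torch

tv_frame = torch.rand(1, 3, 1080, 1920)  # stands in for image 20012 (1920x1080)
pc_frame = torch.rand(1, 3, 1440, 2560)  # stands in for image 20013 (2560x1440)
model = SingleEnhanceSR()                # from the earlier sketch
with torch.no_grad():
    tv_out = model(tv_frame)  # same 1920x1080 size, enhanced quality (cf. image 20021)
    pc_out = model(pc_frame)  # same 2560x1440 size, enhanced quality (cf. image 20031)
```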
  • the user operates the mobile phone 2201 to project the display screen of the mobile phone 2201 to the smart TV 2202 and the computer 2203.
  • the application of game A sends the native image data 22011 to the GPU of the mobile phone 2201 frame by frame; the image resolution of the native image data 22011 is 480×360.
  • the mobile phone 2201 renders the above-mentioned native image data 22011 frame by frame according to the pre-configured image resolution to obtain the image 22012; the resolution of the image 22012 is 480×360.
  • the mobile phone 2201 sends the image 22012 to the smart TV 2202 through the wireless communication module, and sends the image 22012 to the computer 2203 through the wireless communication module.
  • after receiving the image 22012, the smart TV 2202 obtains the application identifier corresponding to the image 22012, finds the single-enhancement super-resolution model for game A according to the application identifier of game A, and selects the single-enhancement super-resolution model for game A as the target super-resolution model.
  • the NPU of the smart TV 2202 upsamples the image 22012 frame by frame to obtain the image 22021, and the image resolution of the image 22021 is 1920×1080. Then, the NPU of the smart TV 2202 inputs the image 22021 into the target super-resolution model frame by frame, and performs super-resolution reconstruction on the image 22021 through the target super-resolution model to obtain the image 22022.
  • the image resolution of the image 22021 and the image resolution of the image 22022 are both 1920×1080, but the definition of the image 22022 is higher than that of the image 22021, so the image quality of the image 22022 is higher than that of the image 22021.
  • the image 22022 is transmitted frame by frame to the display screen of the smart TV 2202 for on-screen display.
  • after receiving the image 22012, the computer 2203 obtains the application identifier corresponding to the image 22012, finds the single-enhancement super-resolution model for game A according to the application identifier of game A, and selects the single-enhancement super-resolution model for game A as the target super-resolution model.
  • the NPU of the computer 2203 upsamples the image 22012 frame by frame to obtain the image 22031, and the image resolution of the image 22031 is 2560×1440. Then, the NPU of the computer 2203 inputs the image 22031 into the target super-resolution model frame by frame, and performs super-resolution reconstruction on the image 22031 through the target super-resolution model to obtain the image 22032.
  • the image resolution of the image 22031 and the image resolution of the image 22032 are both 2560×1440, but the definition of the image 22032 is higher than that of the image 22031, so the image quality of the image 22032 is higher than that of the image 22031.
  • after the computer 2203 obtains the image 22032, it transmits the image 22032 frame by frame to the display screen of the computer 2203 for on-screen display. A sketch of this upsample-then-enhance variant follows.
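The upsample-then-enhance variant used in this example could be sketched as below; the function name is illustrative, and `model` is assumed to be a same-size enhancement network like the SingleEnhanceSR sketch above.

```python
import torch
import torch.nn.functional as F

def upsample_then_enhance(frame, model, out_hw):
    """First plain upsampling to the receiver's resolution, then same-size SR."""
    up = F.interpolate(frame, size=out_hw, mode="bicubic", align_corners=False)
    with torch.no_grad():
        return model(up)

# smart TV 2202: upsample_then_enhance(frame, model, (1080, 1920))
# computer 2203: upsample_then_enhance(frame, model, (1440, 2560))
```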
  • the user operates the mobile phone 2401 to project the display screen of the mobile phone 2401 to the smart TV 2402 and the computer 2403.
  • the application of game A sends the native image data 24011 to the GPU of the mobile phone 2401 frame by frame.
  • the mobile phone 2401 renders the above-mentioned native image data 24011 frame by frame according to the pre-configured image resolution to obtain the image 24012; the resolution of the image 24012 is 480×360.
  • the mobile phone 2401 sends the image 24012 to the smart TV 2402 through the wireless communication module, and sends the image 24012 to the computer 2403 through the wireless communication module.
  • after receiving the image 24012, the smart TV 2402 obtains the application identifier corresponding to the image 24012, finds the multiple-enhancement super-resolution model for game A according to the application identifier of game A, and selects the multiple-enhancement super-resolution model for game A as the target super-resolution model.
  • the NPU of the smart TV 2402 inputs the image 24012 into the target super-resolution model frame by frame, and performs super-resolution reconstruction on the image 24012 through the target super-resolution model to obtain the image 24021, whose image resolution is 1920×1080.
  • the image resolution of the image 24021 is greater than that of the image 24012, and the definition of the image 24021 is higher than that of the image 24012, so the image quality of the image 24021 is higher than that of the image 24012.
  • after the smart TV 2402 obtains the image 24021, it transmits the image 24021 frame by frame to the display screen of the smart TV 2402 for on-screen display.
  • after receiving the image 24012, the computer 2403 obtains the application identifier corresponding to the image 24012, finds the multiple-enhancement super-resolution model for game A according to the application identifier of game A, and selects the multiple-enhancement super-resolution model for game A as the target super-resolution model.
  • the NPU of the computer 2403 inputs the image 24012 into the target super-resolution model frame by frame, and performs super-resolution reconstruction on the image 24012 through the target super-resolution model to obtain the image 24031, whose image resolution is 2560×1440.
  • the image resolution of the image 24031 is greater than that of the image 24012, and the definition of the image 24031 is higher than that of the image 24012, so the image quality of the image 24031 is higher than that of the image 24012.
  • the image 24031 is transmitted frame by frame to the display screen of the computer 2403 for on-screen display. A note on the scale factors in this example, with a sketch, follows.
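Note that this direct path maps 480×360 to targets whose scale factors differ per axis (4× wide, 3× tall for 1920×1080), so a fixed-integer PixelShuffle does not fit. Below is a hedged sketch of a multiple-enhancement variant for arbitrary output sizes that uses an in-network resize instead; all names and layer choices are illustrative assumptions, not the patent's model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AnySizeMultiEnhanceSR(nn.Module):
    """Multiple-enhancement for arbitrary output sizes via in-network resizing."""
    def __init__(self, channels=32):
        super().__init__()
        self.head = nn.Sequential(nn.Conv2d(3, channels, 3, padding=1),
                                  nn.ReLU(inplace=True))
        self.tail = nn.Conv2d(channels, 3, 3, padding=1)

    def forward(self, x, out_hw):
        f = self.head(x)                        # extract features at the input size
        f = F.interpolate(f, size=out_hw, mode="bilinear",
                          align_corners=False)  # resize features to the target size
        return self.tail(f)                     # reconstruct the target image

model = AnySizeMultiEnhanceSR()
frame = torch.rand(1, 3, 360, 480)   # stands in for image 24012
tv = model(frame, (1080, 1920))      # cf. image 24021 on the smart TV 2402
pc = model(frame, (1440, 2560))      # cf. image 24031 on the computer 2403
```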
  • the first electronic device first renders the native image data of the application program to obtain the first image. Then, the first electronic device transmits the first image to the designated device to which the screen is cast, and the designated device performs super-resolution reconstruction on the first image, improving the quality of the first image to obtain the target image for on-screen display. That is to say, in the image processing method provided in this embodiment, the image rendering process is divided into two steps: preliminary rendering and super-resolution reconstruction. The preliminary rendering step is performed by the first electronic device of the screen-casting party, and the super-resolution reconstruction step is performed by the designated device of the screen-receiving party, thereby reducing the load on the hardware resources of the first electronic device of the screen-casting party, improving the rendering efficiency, and making full use of the hardware resources of both the screen-casting party's first electronic device and the screen-receiving party's designated device.
  • in addition, the image resolution of the target image after image quality enhancement can be adapted to the display resolution of the designated device, thereby improving the user experience.
  • the super-resolution model selected by the designated device may be a single-enhancement super-resolution model, or may be a multiple-enhancement super-resolution model.
  • when the super-resolution model selected by the designated device is the multiple-enhancement super-resolution model, the hardware resources occupied by the preliminary rendering and the rendering power consumption of the preliminary rendering can be reduced.
  • the above-mentioned super-resolution model may be a general super-resolution model, or may be a specific super-resolution model for a certain application or a certain type of application.
  • the applicable range of the specific super-resolution model is smaller than that of the general super-resolution model, but the image quality enhancement effect of the specific super-resolution model is usually better than that of the general super-resolution model.
  • an embodiment of the present application further provides an electronic device.
  • the electronic device 26 of this embodiment includes a processor 260 , a memory 261 , and a computer program 262 stored in the memory 261 and executable on the processor 260 .
  • when the processor 260 executes the computer program 262, the steps in the above embodiments of the image processing method are implemented, for example, steps S301 to S302 shown in FIG. 1.
  • alternatively, when the processor 260 executes the computer program 262, the functions of the modules/units in the foregoing device embodiments are implemented, for example, the functions of the modules 2601 to 2602 shown in FIG. 26.
  • the computer program 262 may be divided into one or more modules/units, and the one or more modules/units are stored in the memory 261 and executed by the processor 260 to complete the present application.
  • the one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, and the instruction segments are used to describe the execution process of the computer program 262 in the electronic device 26 .
  • the computer program 262 can be divided into a native data module, a preliminary rendering module and a first super-resolution module; the specific functions of each module are as follows:
  • a native data module used to obtain native image data, where the native image data is image data generated by an application and not rendered;
  • a preliminary rendering module, configured to render the native image data through the first graphics rendering hardware to obtain a first image; and
  • a first super-resolution module, configured to perform super-resolution reconstruction on the first image through second graphics rendering hardware to obtain a target image, the first graphics rendering hardware and the second graphics rendering hardware being different graphics rendering hardware. A minimal sketch of this module split follows.
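A minimal sketch of this three-module split is given below; the module interfaces and the `hw` helpers are assumptions introduced purely to make the division of labour concrete, not the patent's actual implementation.

```python
class NativeDataModule:
    """Obtains unrendered image data from the application."""
    def get(self, app):
        return app.produce_native_image_data()  # hypothetical application hook

class PreliminaryRenderingModule:
    """Renders native image data on the first graphics rendering hardware."""
    def __init__(self, first_hw):  # e.g. the GPU
        self.hw = first_hw

    def render(self, native_data):
        return self.hw.render(native_data)  # -> first image

class FirstSuperResolutionModule:
    """Super-resolution reconstruction on the second graphics rendering hardware."""
    def __init__(self, second_hw, model):  # e.g. the NPU plus an SR model
        self.hw, self.model = second_hw, model

    def reconstruct(self, first_image):
        return self.hw.run(self.model, first_image)  # -> target image
```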
  • the electronic device 26 may be a computing device such as a desktop computer, a notebook computer, a palmtop computer, or a cloud server.
  • the electronic device may include, but is not limited to, the processor 260 and the memory 261.
  • those skilled in the art can understand that FIG. 26 is only an example of the electronic device 26 and does not constitute a limitation on the electronic device 26, which may include more or fewer components than shown, combine some components, or have different components.
  • the electronic device may further include an input and output device, a network access device, a bus, and the like.
  • the so-called processor 260 may be a central processing unit (Central Processing Unit, CPU), or another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, discrete hardware components, etc.
  • a general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
  • the memory 261 may be an internal storage unit of the electronic device 26 , such as a hard disk or a memory of the electronic device 26 .
  • the memory 261 may also be an external storage device of the electronic device 26, such as a plug-in hard disk, a smart media card (Smart Media Card, SMC), a secure digital (Secure Digital, SD) card, a flash card, etc., equipped on the electronic device 26.
  • the memory 261 may also include both an internal storage unit of the electronic device 26 and an external storage device.
  • the memory 261 is used to store the computer program and other programs and data required by the electronic device.
  • the memory 261 may also be used to temporarily store data that has been output or will be output.
  • the disclosed apparatus/electronic device and method may be implemented in other manners.
  • the above-described embodiments of the apparatus/electronic device are only illustrative.
  • the division of the modules or units is only a logical function division; in actual implementation, there may be other division methods, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented.
  • the shown or discussed mutual coupling or direct coupling or communication connection may be through some interfaces, indirect coupling or communication connection of devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution in this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
  • the above-mentioned integrated units may be implemented in the form of hardware, or may be implemented in the form of software functional units.
  • the integrated modules/units if implemented in the form of software functional units and sold or used as independent products, may be stored in a computer-readable storage medium.
  • the present application can implement all or part of the processes in the methods of the above embodiments by instructing the relevant hardware through a computer program. The computer program can be stored in a computer-readable storage medium, and when the computer program is executed by the processor, the steps of the foregoing method embodiments can be implemented.
  • the computer program includes computer program code, and the computer program code may be in the form of source code, object code, executable file or some intermediate form, and the like.
  • the computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), an electric carrier signal, a telecommunication signal, a software distribution medium, etc. It should be noted that the content contained in the computer-readable medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in the jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, computer-readable media do not include electric carrier signals and telecommunication signals.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Image Processing (AREA)

Abstract

Image processing method, system, electronic device and computer-readable storage medium. The present application is applicable to the field of artificial intelligence technology. In the image processing method, a first electronic device renders native image data by means of first graphics rendering hardware to obtain a first image (S301); and the first electronic device performs super-resolution reconstruction on the first image by means of second graphics rendering hardware to obtain a target image (S302), the first graphics rendering hardware and the second graphics rendering hardware being different graphics rendering hardware. Compared with directly rendering native image data into a target image using a single piece of graphics rendering hardware, the method reduces the rendering power consumption required to produce a high-quality image, reduces the amount of computation, improves rendering efficiency, and makes full use of the hardware resources of the heterogeneous graphics rendering hardware in an electronic device, thereby solving the problem in existing image solutions that, when the hardware resources of the electronic device are insufficient, the device can only run high-image-quality products at relatively low image quality.
PCT/CN2021/105060 2020-07-08 2021-07-07 Procédé de traitement d'image, système, dispositif électronique et support de stockage lisible par ordinateur WO2022007862A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010653106.7 2020-07-08
CN202010653106.7A CN113935898A (zh) 2020-07-08 2020-07-08 图像处理方法、系统、电子设备及计算机可读存储介质

Publications (1)

Publication Number Publication Date
WO2022007862A1 true WO2022007862A1 (fr) 2022-01-13

Family

ID=79273437

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/105060 WO2022007862A1 (fr) 2020-07-08 2021-07-07 Procédé de traitement d'image, système, dispositif électronique et support de stockage lisible par ordinateur

Country Status (2)

Country Link
CN (1) CN113935898A (fr)
WO (1) WO2022007862A1 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114638951A (zh) * 2022-03-29 2022-06-17 北京有竹居网络技术有限公司 房屋模型的展示方法、装置、电子设备和可读存储介质
CN116033065A (zh) * 2022-12-29 2023-04-28 维沃移动通信有限公司 播放方法、装置、电子设备及可读存储介质

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114860141A (zh) * 2022-05-23 2022-08-05 Oppo广东移动通信有限公司 图像显示方法、装置、电子设备及计算机可读介质
CN115474090B (zh) * 2022-08-31 2023-05-16 北京理工大学 一种支持视频目标检测跟踪的异构嵌入式实时处理架构的应用方法
CN116012474B (zh) * 2022-12-13 2024-01-30 昆易电子科技(上海)有限公司 仿真测试图像生成、回注方法及系统、工控机、装置

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107680042A (zh) * 2017-09-27 2018-02-09 杭州群核信息技术有限公司 渲染方法、装置、引擎及存储介质
CN107742317A (zh) * 2017-09-27 2018-02-27 杭州群核信息技术有限公司 一种渲染图像的获取方法、装置、渲染系统及存储介质
CN108694685A (zh) * 2017-04-01 2018-10-23 英特尔公司 改进的图形处理器微架构内的多分辨率图像平面渲染
WO2019187298A1 (fr) * 2018-03-29 2019-10-03 Mitsubishi Electric Corporation Système et procédé de traitement d'image
CN110827380A (zh) * 2019-09-19 2020-02-21 北京铂石空间科技有限公司 图像的渲染方法、装置、电子设备及计算机可读介质

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108694685A (zh) * 2017-04-01 2018-10-23 英特尔公司 改进的图形处理器微架构内的多分辨率图像平面渲染
CN107680042A (zh) * 2017-09-27 2018-02-09 杭州群核信息技术有限公司 渲染方法、装置、引擎及存储介质
CN107742317A (zh) * 2017-09-27 2018-02-27 杭州群核信息技术有限公司 一种渲染图像的获取方法、装置、渲染系统及存储介质
WO2019187298A1 (fr) * 2018-03-29 2019-10-03 Mitsubishi Electric Corporation Système et procédé de traitement d'image
CN110827380A (zh) * 2019-09-19 2020-02-21 北京铂石空间科技有限公司 图像的渲染方法、装置、电子设备及计算机可读介质

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114638951A (zh) * 2022-03-29 2022-06-17 北京有竹居网络技术有限公司 房屋模型的展示方法、装置、电子设备和可读存储介质
CN114638951B (zh) * 2022-03-29 2023-08-15 北京有竹居网络技术有限公司 房屋模型的展示方法、装置、电子设备和可读存储介质
CN116033065A (zh) * 2022-12-29 2023-04-28 维沃移动通信有限公司 播放方法、装置、电子设备及可读存储介质

Also Published As

Publication number Publication date
CN113935898A (zh) 2022-01-14

Similar Documents

Publication Publication Date Title
WO2020259452A1 (fr) Procédé d'affichage plein écran pour terminal mobile et appareil
US11669242B2 (en) Screenshot method and electronic device
WO2020253719A1 (fr) Procédé de d'enregistrement d'écran et dispositif électronique
JP7238115B2 (ja) 写真撮影シナリオおよび電子デバイスで画像を表示するための方法
CN115473957B (zh) 一种图像处理方法和电子设备
WO2022007862A1 (fr) Procédé de traitement d'image, système, dispositif électronique et support de stockage lisible par ordinateur
US11930130B2 (en) Screenshot generating method, control method, and electronic device
WO2020093988A1 (fr) Procédé de traitement d'image et dispositif électronique
CN113254120B (zh) 数据处理方法和相关装置
WO2022017261A1 (fr) Procédé de synthèse d'image et dispositif électronique
CN113448382B (zh) 多屏幕显示电子设备和电子设备的多屏幕显示方法
WO2022001258A1 (fr) Procédé et appareil d'affichage à écrans multiples, dispositif terminal et support de stockage
WO2022257474A1 (fr) Procédé de prédiction d'image, dispositif électronique et support d'enregistrement
CN113986162B (zh) 图层合成方法、设备及计算机可读存储介质
WO2022095744A1 (fr) Procédé de commande d'affichage vr, dispositif électronique et support de stockage lisible par ordinateur
WO2022143180A1 (fr) Procédé d'affichage collaboratif, dispositif terminal et support de stockage lisible par ordinateur
CN113542574A (zh) 变焦下的拍摄预览方法、终端、存储介质及电子设备
WO2023000746A1 (fr) Procédé de traitement vidéo à réalité augmentée et dispositif électronique
WO2022078116A1 (fr) Procédé de génération d'image à effet de pinceau, procédé et dispositif d'édition d'image et support de stockage
WO2022135195A1 (fr) Procédé et appareil permettant d'afficher une interface de réalité virtuelle, dispositif, et support de stockage lisible
WO2022033344A1 (fr) Procédé de stabilisation vidéo, dispositif de terminal et support de stockage lisible par ordinateur
WO2021204103A1 (fr) Procédé de prévisualisation d'images, dispositif électronique et support de stockage
CN115686403A (zh) 显示参数的调整方法、电子设备、芯片及可读存储介质
CN114827098A (zh) 合拍的方法、装置、电子设备和可读存储介质
CN116700578B (zh) 图层合成方法、电子设备以及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21838599

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21838599

Country of ref document: EP

Kind code of ref document: A1