CN113935898A - Image processing method, system, electronic device and computer readable storage medium

Info

Publication number
CN113935898A
Authority
CN
China
Legal status
Pending
Application number
CN202010653106.7A
Other languages
Chinese (zh)
Inventor
张梦然
陈泰雨
薛蓬
张运超
Current Assignee
Huawei Technologies Co Ltd
Priority: CN202010653106.7A
Related application: PCT/CN2021/105060 (WO2022007862A1)


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformation in the plane of the image
    • G06T 3/40 Scaling the whole image or part thereof
    • G06T 3/4053 Super resolution, i.e. output image resolution higher than sensor resolution
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/005 General purpose rendering architectures
    • G06T 2200/00 Indexing scheme for image data processing or generation, in general
    • G06T 2200/28 Indexing scheme involving image processing hardware


Abstract

The application is applicable to the technical field of artificial intelligence and provides an image processing method, an image processing system, an electronic device, and a computer-readable storage medium. In the image processing method, the electronic device renders native image data with first graphics rendering hardware to obtain a first image, and then performs super-resolution reconstruction on the first image with second graphics rendering hardware to obtain a target image, the first graphics rendering hardware and the second graphics rendering hardware being different. Compared with rendering the native image data directly into the target image on a single piece of graphics rendering hardware, this approach reduces the power consumption and amount of computation required to render a high-quality image, improves rendering efficiency, and makes full use of the heterogeneous graphics rendering hardware in the electronic device, thereby solving the problem in existing image schemes that an electronic device with insufficient hardware resources can run high-image-quality products only at reduced image quality.

Description

Image processing method, system, electronic device and computer readable storage medium
Technical Field
The present application relates to the field of artificial intelligence, and in particular, to an image processing method, system, electronic device, and computer-readable storage medium.
Background
High-image-quality products, such as high-definition games and videos, are increasingly common on the market.
When an electronic device renders the pictures of these products, rendering power consumption is high, the amount of computation is large, and the demands on the device's hardware resources are high. When those hardware resources are insufficient, the electronic device can run the products only at low image quality, which severely degrades their visual effect.
Disclosure of Invention
The application provides an image processing method, an image processing system, an electronic device, and a computer-readable storage medium, which address the problem in existing image schemes that an electronic device with insufficient hardware resources can run high-image-quality products only at reduced image quality.
To this end, the following technical solutions are adopted:
In a first aspect, an image processing method applied to a first electronic device is provided, including:
the first electronic device acquires native image data, wherein the native image data is generated by an application program and has not been rendered;
the first electronic device renders the native image data through first graphics rendering hardware to obtain a first image;
and the first electronic device performs super-resolution reconstruction on the first image through second graphics rendering hardware to obtain a target image, wherein the first graphics rendering hardware and the second graphics rendering hardware are different graphics rendering hardware.
It should be noted that the application program generates the native image data while it is running.
After acquiring the native image data, the first electronic device can render it through the first graphics rendering hardware, converting the native image data into visible pixels and obtaining the first image.
The first electronic device can then perform super-resolution reconstruction on the first image through the second graphics rendering hardware to obtain a high-image-quality target image.
Compared with rendering the native image data directly into a high-image-quality target image, obtaining the target image by combining preliminary rendering with super-resolution reconstruction reduces the rendering power consumption and the amount of computation the electronic device needs to render a high-quality image.
In addition, the first graphics rendering hardware and the second graphics rendering hardware may each be one or more of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), and a Neural-network Processing Unit (NPU).
Because the first graphics rendering hardware and the second graphics rendering hardware are different, the heterogeneous graphics rendering hardware in the electronic device is fully utilized, and the electronic device is prevented from having to run a high-image-quality product at low image quality because the hardware resources of a single piece of graphics rendering hardware are insufficient.
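As a concrete illustration of this two-stage flow, the following minimal Python sketch separates the preliminary rendering from the reconstruction step; the helper names `render_on_first_hardware` and `super_resolve_on_second_hardware` are assumptions used for illustration, not an API defined by the application.

```python
import numpy as np

def render_on_first_hardware(native_data: dict, resolution: tuple) -> np.ndarray:
    """Preliminary rendering: convert unrendered native image data into
    visible pixels at a deliberately modest first resolution."""
    h, w = resolution
    return np.zeros((h, w, 3), dtype=np.uint8)  # placeholder frame buffer

def super_resolve_on_second_hardware(first_image: np.ndarray) -> np.ndarray:
    """Super-resolution reconstruction on the second hardware unit; a real
    system would run a trained model here, not a naive 2x pixel repeat."""
    return first_image.repeat(2, axis=0).repeat(2, axis=1)

# Native image data as generated by an application program (contents elided).
native_data = {"geometry": None, "textures": None, "lighting": None}
first_image = render_on_first_hardware(native_data, resolution=(540, 960))
target_image = super_resolve_on_second_hardware(first_image)  # 1080 x 1920 output
```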
In a possible implementation manner of the first aspect, the performing, by the first electronic device through the second graphics rendering hardware, super-resolution reconstruction on the first image to obtain a target image includes:
the first electronic device acquires an identifier of the application program;
the first electronic device searches for a target super-resolution model associated with the identifier;
and the first electronic device performs super-resolution reconstruction on the first image through the second graphics rendering hardware and the found target super-resolution model to obtain the target image.
It should be noted that the first electronic device may perform the super-resolution reconstruction on the first image through a super-resolution model.
The super-resolution models may include specific super-resolution models and general super-resolution models. A specific super-resolution model is applicable only to some application programs; its applicability is narrow, but its image-quality optimization capability is high. A general super-resolution model is widely applicable, but its image-quality optimization capability is limited.
Because the application range of a specific super-resolution model is limited, the electronic device can establish in advance an association between each specific super-resolution model and the application programs to which it applies.
After acquiring the first image, the first electronic device may acquire the identifier of the application program and search for a target super-resolution model associated with that identifier.
If the first electronic device finds a target super-resolution model associated with the identifier, a specific super-resolution model (i.e., the target super-resolution model) applicable to the application program exists on the first electronic device, and the first electronic device can perform super-resolution reconstruction on the first image through the second graphics rendering hardware and the target super-resolution model.
Using the target super-resolution model for the reconstruction allows the image quality of the target image to be improved more effectively.
The identifier of the application program may be the package name of the application program or a user-defined identifier.
In a possible implementation manner of the first aspect, after the first electronic device searches for the target super-resolution model associated with the identifier, the method further includes:
if no target super-resolution model associated with the identifier is found, the first electronic device performs super-resolution reconstruction on the first image through the second graphics rendering hardware and a preset general super-resolution model to obtain the target image.
It should be noted that if the first electronic device cannot find a target super-resolution model associated with the identifier, no specific super-resolution model is associated with the application program.
In that case, the first electronic device may perform super-resolution reconstruction on the first image through the second graphics rendering hardware and the preset general super-resolution model to obtain the target image.
In a possible implementation manner of the first aspect, after the first electronic device performs super-resolution reconstruction on the first image through the preset general super-resolution model to obtain the target image, the method further includes:
the first electronic device establishes an association between the identifier and the general super-resolution model.
It should be noted that the first electronic device may be provided with several general super-resolution models. So that the same general super-resolution model is called the next time the application program is started, the first electronic device may establish an association between the identifier and the general super-resolution model it used.
When the application program is started again, the first electronic device can find that general super-resolution model through its association with the identifier, determine it as the target super-resolution model, and use it for super-resolution reconstruction of the first image, so that the first electronic device maintains the same image-quality optimization level when processing images of that application program.
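The lookup, fallback, and association steps above can be summarized in a short sketch; the registry layout and all names in it are assumptions used for illustration only.

```python
# Hypothetical model registry: specific models keyed by application
# identifier (e.g. package name), plus preset general models.
specific_models = {"com.example.game": "game_specific_sr_model"}
general_models = ["general_sr_v1", "general_sr_v2"]
associations: dict[str, str] = {}  # app identifier -> general model already used

def pick_super_resolution_model(app_id: str) -> str:
    # 1. Prefer a specific model pre-associated with this application.
    if app_id in specific_models:
        return specific_models[app_id]
    # 2. Reuse the general model recorded for this application, so the
    #    image-quality optimization level stays the same across launches.
    if app_id in associations:
        return associations[app_id]
    # 3. Otherwise fall back to a preset general model and record the
    #    association for the application's next launch.
    chosen = general_models[0]
    associations[app_id] = chosen
    return chosen
```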
In a possible implementation manner of the first aspect, the rendering, by the first electronic device, the native image data through the first graphics rendering hardware to obtain a first image includes:
the first electronic device renders the native image data through the first graphics rendering hardware at a preset first image resolution to obtain the first image.
It should be noted that, during the preliminary rendering, the first electronic device may render the native image data according to the first image resolution, which is preset on the first electronic device.
In a possible implementation manner of the first aspect, the performing, by the first electronic device through the second graphics rendering hardware, super-resolution reconstruction on the first image to obtain a target image includes:
the first electronic device performs super-resolution reconstruction on the first image through the second graphics rendering hardware and a single-enhanced super-resolution model to obtain the target image, wherein the resolution of the first image is the same as that of the target image, and the single-enhanced super-resolution model is a super-resolution model whose input image and output image have the same image resolution.
It should be noted that, if the resolution of the first image is the same as the resolution of the target image, the first electronic device may select the single-enhanced super-resolution model when performing super-resolution reconstruction on the first image.
For a single-enhanced super-resolution model, the image resolution of the model's input image equals the image resolution of its output image; the model improves image quality without changing the resolution.
In a possible implementation manner of the first aspect, the performing, by the first electronic device through the second graphics rendering hardware, super-resolution reconstruction on the first image to obtain a target image includes:
the first electronic device performs super-resolution reconstruction on the first image through the second graphics rendering hardware and a multiple-enhanced super-resolution model to obtain the target image, wherein the resolution of the first image is lower than that of the target image, and the multiple-enhanced super-resolution model is a super-resolution model whose input image resolution is lower than its output image resolution.
It should be noted that, if the resolution of the first image is lower than the image resolution of the target image, the electronic device may select the multiple-enhanced super-resolution model to perform super-resolution reconstruction on the first image.
For a multiple-enhanced super-resolution model, the image resolution of the model's input image is lower than that of its output image; that is, the model increases the image resolution of the input image.
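A short sketch of this resolution-driven choice between the two model types; the function and return values are assumptions used for illustration.

```python
def choose_sr_model_type(first_res: tuple[int, int], target_res: tuple[int, int]) -> str:
    """Pick the model type from the first-image and target resolutions,
    each given as (height, width)."""
    if first_res == target_res:
        return "single_enhanced"    # enhance quality, keep the resolution
    if all(f < t for f, t in zip(first_res, target_res)):
        return "multiple_enhanced"  # enhance quality and raise the resolution
    raise ValueError("the first image resolution should not exceed the target")

assert choose_sr_model_type((1080, 1920), (1080, 1920)) == "single_enhanced"
assert choose_sr_model_type((540, 960), (1080, 1920)) == "multiple_enhanced"
```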
In a possible implementation manner of the first aspect, the rendering, by the first electronic device, the native image data through the first graphics rendering hardware to obtain a first image includes:
the first electronic device renders the native image data through a graphics processor to obtain the first image;
correspondingly, the performing, by the first electronic device through the second graphics rendering hardware, super-resolution reconstruction on the first image to obtain the target image includes:
the first electronic device performs super-resolution reconstruction on the first image through a neural network processor to obtain the target image.
It should be noted that the first electronic device processes the native image data through heterogeneous graphics rendering hardware to obtain the target image: it may render the native image data through a graphics processor to obtain the first image, and then perform super-resolution reconstruction on the first image through a neural network processor.
By selecting appropriate graphics rendering hardware for each operation, the image processing efficiency of the first electronic device can be improved.
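One way to realize such a GPU-plus-NPU split in practice is to run the reconstruction model through an inference framework that can delegate execution to an NPU. The sketch below uses TensorFlow Lite purely as an illustration; the model file and delegate library names are assumptions, and the patent does not prescribe any particular framework.

```python
import numpy as np
import tflite_runtime.interpreter as tflite

# Hypothetical NPU delegate library and super-resolution model file.
delegate = tflite.load_delegate("libnpu_delegate.so")
interpreter = tflite.Interpreter(model_path="sr_model.tflite",
                                 experimental_delegates=[delegate])
interpreter.allocate_tensors()

def super_resolve(first_image: np.ndarray) -> np.ndarray:
    """Run the super-resolution model on the delegated hardware."""
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]
    batch = first_image[np.newaxis].astype(np.float32)  # add batch dimension
    interpreter.set_tensor(inp["index"], batch)
    interpreter.invoke()
    return interpreter.get_tensor(out["index"])[0]
```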
In a second aspect, an image processing method applied to a second electronic device is provided, including:
the second electronic device receives a first image sent by a first electronic device, wherein the first image is obtained by the first electronic device rendering native image data generated by an application program;
and the second electronic device performs super-resolution reconstruction on the first image to obtain a target image.
It should be noted that, when a user wishes to project the screen display of the first electronic device onto the second electronic device, the first electronic device may preliminarily render the application program's native image data locally to obtain the first image.
The first electronic device then sends the first image to the second electronic device.
The second electronic device receives the first image and performs super-resolution reconstruction on it to obtain a high-image-quality target image.
In this process the first electronic device and the second electronic device perform the image processing jointly, so the hardware resources of the graphics rendering hardware of different electronic devices are fully utilized. This reduces the load on the first electronic device's local hardware resources and its rendering consumption when producing a high-quality image, allows the image quality to be improved more effectively, and improves the user experience.
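A minimal sketch of this split pipeline: the first device frames and sends its preliminarily rendered image, and the second device receives it before reconstruction. The transport (a TCP socket with a fixed 12-byte header) and the assumption of 8-bit RGB pixels are illustrative choices, not part of the application.

```python
import socket
import struct
import numpy as np

def send_first_image(sock: socket.socket, image: np.ndarray) -> None:
    # Frame = header (height, width, payload length) + raw RGB888 pixels.
    payload = image.tobytes()
    header = struct.pack("!III", image.shape[0], image.shape[1], len(payload))
    sock.sendall(header + payload)

def _recv_exact(sock: socket.socket, n: int) -> bytes:
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed the connection")
        buf += chunk
    return buf

def recv_first_image(sock: socket.socket) -> np.ndarray:
    # Runs on the second electronic device, before super-resolution.
    h, w, n = struct.unpack("!III", _recv_exact(sock, 12))
    return np.frombuffer(_recv_exact(sock, n), dtype=np.uint8).reshape(h, w, 3)
```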
In a possible implementation manner of the second aspect, the performing, by the second electronic device, super-resolution reconstruction on the first image to obtain a target image includes:
the second electronic device acquires an identifier of the application program;
the second electronic device searches for a target super-resolution model associated with the identifier;
and the second electronic device performs super-resolution reconstruction on the first image through the found target super-resolution model to obtain the target image.
It should be noted that the second electronic device may perform the super-resolution reconstruction on the first image through a super-resolution model.
The super-resolution models may include specific super-resolution models and general super-resolution models. A specific super-resolution model is applicable only to some application programs; its applicability is narrow, but its image-quality optimization capability is high. A general super-resolution model is widely applicable, but its image-quality optimization capability is limited.
Because the application range of a specific super-resolution model is limited, the electronic device can establish in advance an association between each specific super-resolution model and the application programs to which it applies.
After acquiring the first image, the second electronic device may acquire the identifier of the application program and search for a target super-resolution model associated with that identifier.
If the second electronic device finds a target super-resolution model associated with the identifier, a specific super-resolution model (i.e., the target super-resolution model) applicable to the application program exists on the second electronic device, and the second electronic device can perform super-resolution reconstruction on the first image through its graphics rendering hardware and the target super-resolution model.
Using the target super-resolution model for the reconstruction allows the image quality of the target image to be improved more effectively.
The identifier of the application program may be the package name of the application program or a user-defined identifier.
In a possible implementation manner of the second aspect, after the second electronic device searches for the target super-resolution model associated with the identifier, the method further includes:
if no target super-resolution model associated with the identifier is found, the second electronic device performs super-resolution reconstruction on the first image through a preset general super-resolution model to obtain the target image.
It should be noted that if the second electronic device cannot find a target super-resolution model associated with the identifier, no specific super-resolution model is associated with the application program.
In that case, the second electronic device may perform super-resolution reconstruction on the first image through its graphics rendering hardware and the preset general super-resolution model to obtain the target image.
In a possible implementation manner of the second aspect, after the second electronic device performs super-resolution reconstruction on the first image through the preset general super-resolution model to obtain the target image, the method further includes:
the second electronic device establishes an association between the identifier and the general super-resolution model.
It should be noted that the second electronic device may be provided with several general super-resolution models. So that the same general super-resolution model is called the next time the application program is started, the second electronic device may establish an association between the identifier and the general super-resolution model it used.
When the application program is started again, the second electronic device can find that general super-resolution model through its association with the identifier, determine it as the target super-resolution model, and use it for super-resolution reconstruction of the first image, so that the second electronic device maintains the same image-quality optimization level when processing images of that application program.
In one possible implementation of the second aspect, the first resolution of the first image is the same as the image resolution of the target image;
the performing, by the second electronic device, super-resolution reconstruction on the first image to obtain the target image includes:
the second electronic device performs super-resolution reconstruction on the first image through a single-enhanced super-resolution model to obtain the target image, wherein the single-enhanced super-resolution model is a super-resolution model whose input image and output image have the same image resolution.
It should be noted that, if the resolution of the first image is the same as the resolution of the target image, the second electronic device may select the single-enhanced super-resolution model when performing super-resolution reconstruction on the first image.
The input image resolution and the output image resolution of a single-enhanced super-resolution model are the same.
In one possible implementation of the second aspect, the first resolution of the first image is lower than the image resolution of the target image;
the performing, by the second electronic device, super-resolution reconstruction on the first image to obtain the target image includes:
the second electronic device performs up-sampling on the first image to obtain a second image, wherein the image resolution of the second image is the same as the image resolution of the target image;
and the second electronic device performs super-resolution reconstruction on the second image through a single-enhanced super-resolution model to obtain the target image, wherein the single-enhanced super-resolution model is a super-resolution model whose input image and output image have the same image resolution.
It should be noted that, if the first image resolution is lower than the image resolution of the target image, the second electronic device may up-sample the first image to obtain a second image whose image resolution is the same as that of the target image.
The second electronic device then processes the second image with the single-enhanced super-resolution model to obtain the target image.
The up-sampling may use any interpolation algorithm, such as nearest-neighbor, bilinear, or cubic interpolation.
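The up-sampling step maps directly onto standard interpolation routines. The sketch below uses OpenCV's `cv2.resize`, whose interpolation flags cover the algorithms named above; treating the first image as an 8-bit RGB array is an assumption for illustration.

```python
import cv2
import numpy as np

INTERPOLATIONS = {
    "nearest": cv2.INTER_NEAREST,   # nearest-neighbor interpolation
    "bilinear": cv2.INTER_LINEAR,   # bilinear interpolation
    "cubic": cv2.INTER_CUBIC,       # cubic interpolation
}

def upsample_to_target(first_image: np.ndarray, target_hw: tuple[int, int],
                       method: str = "bilinear") -> np.ndarray:
    """Up-sample the first image so its resolution matches the target's,
    producing the second image fed to the single-enhanced model."""
    h, w = target_hw
    return cv2.resize(first_image, (w, h), interpolation=INTERPOLATIONS[method])
```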
In another possible implementation of the second aspect, the first resolution of the first image is lower than the resolution of the target image;
the performing, by the second electronic device, super-resolution reconstruction on the first image to obtain the target image includes:
the second electronic device performs super-resolution reconstruction on the first image through a multiple-enhanced super-resolution model to obtain the target image, wherein the multiple-enhanced super-resolution model is a super-resolution model whose input image resolution is lower than its output image resolution.
It should be noted that, if the resolution of the first image is lower than the resolution of the target image, the second electronic device may, instead of up-sampling the first image, select a multiple-enhanced super-resolution model to perform super-resolution reconstruction on it directly.
The input image resolution of a multiple-enhanced super-resolution model is lower than its output image resolution; that is, the model increases the image resolution of the input image.
In a third aspect, an electronic device is provided, including:
a native data module, used for acquiring native image data, wherein the native image data is generated by an application program and has not been rendered;
a preliminary rendering module, used for rendering the native image data through first graphics rendering hardware to obtain a first image;
and a first super-resolution module, used for performing super-resolution reconstruction on the first image through second graphics rendering hardware to obtain a target image, wherein the first graphics rendering hardware and the second graphics rendering hardware are different graphics rendering hardware.
In a possible implementation manner of the third aspect, the first super-resolution module includes:
a first identifier submodule, used for acquiring an identifier of the application program;
a first model submodule, used for searching for a target super-resolution model associated with the identifier;
and a first reconstruction submodule, used for performing super-resolution reconstruction on the first image through the second graphics rendering hardware and the found target super-resolution model to obtain the target image.
In a possible implementation manner of the third aspect, the first super-resolution module further includes:
a first general submodule, used for performing super-resolution reconstruction on the first image through the second graphics rendering hardware and a preset general super-resolution model to obtain the target image if no target super-resolution model associated with the identifier is found.
In a possible implementation manner of the third aspect, the first super-resolution module further includes:
a first association submodule, used for establishing an association between the identifier and the general super-resolution model.
In a possible implementation manner of the third aspect, the preliminary rendering module is specifically used for rendering the native image data through the first graphics rendering hardware at a preset first image resolution to obtain the first image.
In a possible implementation manner of the third aspect, the first super-resolution module is specifically used for performing super-resolution reconstruction on the first image through the second graphics rendering hardware and a single-enhanced super-resolution model to obtain the target image, wherein the resolution of the first image is the same as that of the target image, and the single-enhanced super-resolution model is a super-resolution model whose input image and output image have the same image resolution.
In another possible implementation manner of the third aspect, the first super-resolution module is specifically used for performing super-resolution reconstruction on the first image through the second graphics rendering hardware and a multiple-enhanced super-resolution model to obtain the target image, wherein the resolution of the first image is lower than the image resolution of the target image, and the multiple-enhanced super-resolution model is a super-resolution model whose input image resolution is lower than its output image resolution.
In a possible implementation manner of the third aspect, the preliminary rendering module is specifically used for rendering the native image data through a graphics processor to obtain the first image;
correspondingly, the first super-resolution module is specifically used for performing super-resolution reconstruction on the first image through a neural network processor to obtain the target image.
In a fourth aspect, an electronic device is provided, comprising:
an image receiving module, used for receiving a first image sent by a first electronic device, wherein the first image is obtained by the first electronic device rendering native image data generated by an application program;
and a second super-resolution module, used for performing super-resolution reconstruction on the first image to obtain a target image.
In a possible implementation manner of the fourth aspect, the second super-resolution module includes:
a second identifier submodule, used for acquiring an identifier of the application program;
a second model submodule, used for searching for a target super-resolution model associated with the identifier;
and a second reconstruction submodule, used for performing super-resolution reconstruction on the first image through the found target super-resolution model to obtain the target image.
In a possible implementation manner of the fourth aspect, the second super-resolution module further includes:
a second general submodule, used for performing super-resolution reconstruction on the first image through a preset general super-resolution model to obtain the target image if no target super-resolution model associated with the identifier is found.
In a possible implementation manner of the fourth aspect, the second super-resolution module further includes:
a second association submodule, used for establishing an association between the identifier and the general super-resolution model.
In one possible implementation manner of the fourth aspect, the first resolution of the first image is the same as the image resolution of the target image;
the second super-resolution module is specifically used for performing super-resolution reconstruction on the first image through a single-enhanced super-resolution model to obtain the target image, wherein the single-enhanced super-resolution model is a super-resolution model whose input image and output image have the same image resolution.
In another possible implementation of the fourth aspect, the first resolution of the first image is lower than the image resolution of the target image;
the second super-resolution module includes:
an up-sampling submodule, used for up-sampling the first image to obtain a second image, wherein the image resolution of the second image is the same as the image resolution of the target image;
and an enhancement submodule, used for performing super-resolution reconstruction on the second image through a single-enhanced super-resolution model to obtain the target image, wherein the single-enhanced super-resolution model is a super-resolution model whose input image and output image have the same image resolution.
In another possible implementation of the fourth aspect, the first resolution of the first image is lower than the resolution of the target image;
the second super-resolution module is specifically used for performing super-resolution reconstruction on the first image through a multiple-enhanced super-resolution model to obtain the target image, wherein the multiple-enhanced super-resolution model is a super-resolution model whose input image resolution is lower than its output image resolution.
In a fifth aspect, an image processing system is provided, the system comprising a first electronic device and a second electronic device;
the first electronic device is used for rendering native image data generated by an application program to obtain a first image, and for sending the first image to the second electronic device;
the second electronic device is used for executing the image processing method of the second aspect.
In a sixth aspect, an electronic device is provided, including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the electronic device implements the steps of the method described above when the processor executes the computer program.
In a seventh aspect, a computer-readable storage medium is provided, storing a computer program that, when executed by a processor, causes an electronic device to implement the steps of the method described above.
In an eighth aspect, a chip system is provided. The chip system may be a single chip or a chip module composed of multiple chips, and includes a memory and a processor, wherein the processor executes a computer program stored in the memory to implement the steps of the method described above.
Compared with the prior art, the embodiments of the present application have the following advantages:
In the image processing method of the application, the electronic device renders the native image data to obtain a first image and then performs super-resolution reconstruction on the first image to obtain the target image displayed on the screen. Compared with directly rendering the native image data generated by the application program into the target image finally displayed on the screen, generating the target image through preliminary rendering plus super-resolution reconstruction reduces rendering power consumption, reduces the amount of computation, and improves rendering efficiency.
Moreover, the electronic device performs the preliminary rendering through the first graphics rendering hardware and the super-resolution reconstruction through the second graphics rendering hardware, so heterogeneous hardware resources in the electronic device are fully utilized, and the device is prevented from running a high-image-quality product at low image quality because the hardware resources of a single piece of graphics rendering hardware are insufficient.
In summary, the image processing method of the present application reduces the rendering power consumption and the amount of computation required to render high-quality images and makes full use of heterogeneous hardware resources in the electronic device, thereby solving the problem in existing image schemes that an electronic device with insufficient hardware resources can run high-image-quality products only at reduced image quality; it therefore has strong usability and practicability.
Drawings
Fig. 1 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
Fig. 2 is a block diagram of a software structure of an electronic device according to an embodiment of the present application;
Fig. 3 is a schematic flowchart of an image processing method according to an embodiment of the present application;
Fig. 4 is a schematic diagram of an example image according to an embodiment of the present application;
Fig. 5 is a schematic diagram of another example image according to an embodiment of the present application;
Fig. 6 is a schematic diagram of another example image according to an embodiment of the present application;
Fig. 7 is a schematic diagram of another electronic device according to an embodiment of the present application;
Fig. 8 is a schematic diagram of an application scenario according to an embodiment of the present application;
Fig. 9 is a schematic flowchart of another image processing method according to an embodiment of the present application;
Fig. 10 is a schematic diagram of another application scenario according to an embodiment of the present application;
Fig. 11 is a schematic flowchart of another image processing method according to an embodiment of the present application;
Fig. 12 is a schematic diagram of another application scenario according to an embodiment of the present application;
Fig. 13 is a schematic diagram of another application scenario according to an embodiment of the present application;
Fig. 14 is a schematic diagram of another application scenario according to an embodiment of the present application;
Fig. 15 is a schematic diagram of another application scenario according to an embodiment of the present application;
Fig. 16 is a schematic diagram of another application scenario according to an embodiment of the present application;
Fig. 17 is a schematic flowchart of another image processing method according to an embodiment of the present application;
Fig. 18 is a schematic diagram of another application scenario according to an embodiment of the present application;
Fig. 19 is a schematic flowchart of another image processing method according to an embodiment of the present application;
Fig. 20 is a schematic diagram of another application scenario according to an embodiment of the present application;
Fig. 21 is a schematic diagram of another application scenario according to an embodiment of the present application;
Fig. 22 is a schematic diagram of another application scenario according to an embodiment of the present application;
Fig. 23 is a schematic diagram of another application scenario according to an embodiment of the present application;
Fig. 24 is a schematic diagram of another application scenario according to an embodiment of the present application;
Fig. 25 is a schematic diagram of another application scenario according to an embodiment of the present application;
Fig. 26 is a schematic diagram of another electronic device according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or hardware, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, hardware, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon", "in response to determining", or "in response to detecting". Similarly, the phrase "if it is determined" or "if [a described condition or event] is detected" may be interpreted contextually to mean "upon determining", "in response to determining", "upon detecting [the described condition or event]", or "in response to detecting [the described condition or event]".
Furthermore, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used for distinguishing between descriptions and not necessarily for describing or implying relative importance.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
Before describing the embodiments of the present application, some terms related to the embodiments of the present application will be explained first:
Picture quality refers to the quality of a displayed picture. Many indexes are used to evaluate picture quality; a common one is image resolution: with the other indexes equal, the higher the image resolution, the higher the picture quality, and the lower the image resolution, the lower the picture quality.
Besides image resolution, the picture-quality indexes may include one or more of sharpness, lens distortion, dispersion, resolving power, color gamut, color purity (color brilliance), color balance, and the like.
Image resolution refers to the amount of information stored in an image and can be understood as the number of pixels the image contains. It is commonly expressed as "number of horizontal pixels × number of vertical pixels". For example, an image resolution of 2048 × 1080 means that each row of the image contains 2048 pixels and each column contains 1080 pixels.
In computer terminology, rendering refers to the process of generating an image from an image model. An image model is a description of a three-dimensional scene in a well-defined language or data structure, including geometry, viewpoint, texture, lighting information, and rendering parameters; the rendering parameters may include the picture-quality indexes above.
High-image-quality products, such as high-definition games and videos, are increasingly common on the market.
These products place high demands on the hardware resources of electronic devices. When an electronic device renders their pictures, rendering power consumption is high; if the device's hardware resources are insufficient, it can run these products only at low image quality, which hurts the user experience.
For example, on an electronic device with hardware resources at the level of an RTX 2080 graphics card, a high-image-quality game can maintain a frame rate of around 90 frames per second even when the user sets the image resolution to 2048 × 1080.
On a mobile phone platform, however, running such a game at a high image resolution keeps the frame rate only around 40 frames per second, even on a relatively high-end platform such as the Mali-G76; the frame rate typically reaches only 60 frames per second if the user turns the image resolution down.
To address this, a graphics acceleration (GPU Turbo) technique has been proposed. GPU Turbo restructures the traditional GPU architecture at the bottom layer of the electronic device's system, achieving software-hardware cooperation and greatly improving the overall operating efficiency of the GPU. In addition, GPU Turbo can use Artificial Intelligence (AI) techniques to detect the image-quality differences between adjacent frames, render only the parts that differ, and reuse the content the adjacent frames share; in this way it can save 80% of the computation and greatly increase the rendering speed of the GPU.
However, the image rendering manner of GPU Turbo is the same as conventional GPU rendering: the GPU directly renders the initial image data into the final image displayed on the screen. Therefore, when a high-image-quality product runs with GPU Turbo, the GPU of the electronic device still bears a heavy load, and rendering power consumption remains high.
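For context, the inter-frame difference idea attributed to GPU Turbo above can be sketched conceptually as follows; the per-pixel masking and the threshold are illustrative assumptions, not Huawei's implementation.

```python
import numpy as np

def reuse_unchanged_pixels(prev_frame: np.ndarray, new_frame: np.ndarray,
                           threshold: int = 8) -> np.ndarray:
    """Keep pixels that match the previous frame and take only the
    differing region from the newly rendered frame, illustrating how
    rendering work can be limited to inter-frame differences."""
    diff = np.abs(prev_frame.astype(np.int16) - new_frame.astype(np.int16))
    changed = diff.max(axis=-1) > threshold  # per-pixel change mask
    out = prev_frame.copy()
    out[changed] = new_frame[changed]
    return out
```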
In view of this, embodiments of the present application provide an image processing method and system, an electronic device, and a computer-readable storage medium, to address the high rendering power consumption and large amount of computation incurred when existing image rendering manners render high-image-quality products.
It should be understood that the steps involved in the image processing method provided in the embodiments of the present application are only examples; not every step is mandatory, and the content of each item of information or message may be increased or decreased as needed in actual use.
Identical steps and messages with identical functions in the embodiments of the present application may be referred to interchangeably between embodiments.
The service scenarios described in the embodiments of the present application are intended to illustrate the technical solutions of the embodiments more clearly and do not limit them; as a person of ordinary skill in the art knows, with the evolution of network architectures and the emergence of new service scenarios, the technical solutions provided in the embodiments of the present application are equally applicable to similar technical problems.
The electronic device described in the embodiments of the present application may be a mobile phone, a tablet computer, a handheld computer, a Personal Digital Assistant (PDA), an Augmented Reality (AR)/Virtual Reality (VR) device, a media player, a wearable device, or the like; the embodiments of the present application place no restriction on the specific form or type of the electronic device.
Fig. 1 shows a schematic structural diagram of an electronic device 100.
The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a Universal Serial Bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a key 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a Subscriber Identification Module (SIM) card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It can be understood that the structure illustrated in the embodiments of the present application does not constitute a specific limitation on the electronic device 100. In other embodiments of the present application, the electronic device 100 may include more or fewer components than shown, or combine some components, or split some components, or arrange the components differently. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 110 may include one or more processing units, such as: the processor 110 may include an Application Processor (AP), a modem processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a neural-Network Processing Unit (NPU), etc. The different processing units may be separate devices or may be integrated into one or more processors.
The controller can generate an operation control signal according to the instruction operation code and the timing signal to complete the control of instruction fetching and instruction execution.
A memory may also be provided in processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that have just been used or recycled by the processor 110. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Avoiding repeated accesses reduces the latency of the processor 110, thereby increasing the efficiency of the system.
In some embodiments, processor 110 may include one or more interfaces. The interface may include an integrated circuit (I2C) interface, an integrated circuit built-in audio (I2S) interface, a Pulse Code Modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a Mobile Industry Processor Interface (MIPI), a general-purpose input/output (GPIO) interface, a Subscriber Identity Module (SIM) interface, and/or a Universal Serial Bus (USB) interface, etc.
The I2C interface is a bi-directional synchronous serial bus that includes a serial data line (SDA) and a Serial Clock Line (SCL). In some embodiments, processor 110 may include multiple sets of I2C buses. The processor 110 may be coupled to the touch sensor 180K, the charger, the flash, the camera 193, etc. through different I2C bus interfaces, respectively. For example: the processor 110 may be coupled to the touch sensor 180K via an I2C interface, such that the processor 110 and the touch sensor 180K communicate via an I2C bus interface to implement the touch functionality of the electronic device 100.
The I2S interface may be used for audio communication. In some embodiments, processor 110 may include multiple sets of I2S buses. The processor 110 may be coupled to the audio module 170 via an I2S bus to enable communication between the processor 110 and the audio module 170. In some embodiments, the audio module 170 may communicate audio signals to the wireless communication module 160 via the I2S interface, enabling answering of calls via a bluetooth headset.
The PCM interface may also be used for audio communication, sampling, quantizing and encoding analog signals. In some embodiments, the audio module 170 and the wireless communication module 160 may be coupled by a PCM bus interface. In some embodiments, the audio module 170 may also transmit audio signals to the wireless communication module 160 through the PCM interface, so as to implement a function of answering a call through a bluetooth headset. Both the I2S interface and the PCM interface may be used for audio communication.
The UART interface is a universal serial data bus used for asynchronous communications. The bus may be a bidirectional communication bus. It converts the data to be transmitted between serial communication and parallel communication. In some embodiments, a UART interface is generally used to connect the processor 110 with the wireless communication module 160. For example: the processor 110 communicates with a bluetooth module in the wireless communication module 160 through a UART interface to implement a bluetooth function. In some embodiments, the audio module 170 may transmit the audio signal to the wireless communication module 160 through a UART interface, so as to realize the function of playing music through a bluetooth headset.
MIPI interfaces may be used to connect processor 110 with peripheral devices such as display screen 194, camera 193, and the like. The MIPI interface includes a Camera Serial Interface (CSI), a Display Serial Interface (DSI), and the like. In some embodiments, processor 110 and camera 193 communicate through a CSI interface to implement the capture functionality of electronic device 100. The processor 110 and the display screen 194 communicate through the DSI interface to implement the display function of the electronic device 100.
The GPIO interface may be configured by software. The GPIO interface may be configured as a control signal and may also be configured as a data signal. In some embodiments, a GPIO interface may be used to connect the processor 110 with the camera 193, the display 194, the wireless communication module 160, the audio module 170, the sensor module 180, and the like. The GPIO interface may also be configured as an I2C interface, an I2S interface, a UART interface, a MIPI interface, and the like.
The USB interface 130 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type C interface, or the like. The USB interface 130 may be used to connect a charger to charge the electronic device 100, and may also be used to transmit data between the electronic device 100 and a peripheral device. And the earphone can also be used for connecting an earphone and playing audio through the earphone. The interface may also be used to connect other electronic devices, such as AR devices and the like.
It should be understood that the interface connection relationships between the modules illustrated in the embodiments of the present application are only schematic and do not constitute a limitation on the structure of the electronic device 100. In other embodiments of the present application, the electronic device 100 may also adopt an interface connection manner different from that in the above embodiments, or a combination of multiple interface connection manners.
The charging management module 140 is configured to receive charging input from a charger. The charger may be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 140 may receive charging input from a wired charger via the USB interface 130. In some wireless charging embodiments, the charging management module 140 may receive a wireless charging input through a wireless charging coil of the electronic device 100. The charging management module 140 may also supply power to the electronic device through the power management module 141 while charging the battery 142.
The power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140, and supplies power to the processor 110, the internal memory 121, the display 194, the camera 193, the wireless communication module 160, and the like. The power management module 141 may also be used to monitor parameters such as battery capacity, battery cycle count, battery state of health (leakage, impedance), etc. In some other embodiments, the power management module 141 may also be disposed in the processor 110. In other embodiments, the power management module 141 and the charging management module 140 may be disposed in the same device.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 100 may be used to cover a single or multiple communication bands. Different antennas can also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide a solution including 2G/3G/4G/5G wireless communication applied to the electronic device 100. The mobile communication module 150 may include at least one filter, a switch, a power amplifier, a Low Noise Amplifier (LNA), and the like. The mobile communication module 150 may receive the electromagnetic wave from the antenna 1, filter, amplify, etc. the received electromagnetic wave, and transmit the electromagnetic wave to the modem processor for demodulation. The mobile communication module 150 may also amplify the signal modulated by the modem processor, and convert the signal into electromagnetic wave through the antenna 1 to radiate the electromagnetic wave. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the same device as at least some of the modules of the processor 110.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating a low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then passes the demodulated low frequency baseband signal to a baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs a sound signal through an audio device (not limited to the speaker 170A, the receiver 170B, etc.) or displays an image or video through the display screen 194. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 150 or other functional modules, independent of the processor 110.
The wireless communication module 160 may provide a solution for wireless communication applied to the electronic device 100, including Wireless Local Area Networks (WLANs) (e.g., wireless fidelity (Wi-Fi) networks), bluetooth (bluetooth, BT), Global Navigation Satellite System (GNSS), Frequency Modulation (FM), Near Field Communication (NFC), Infrared (IR), and the like. The wireless communication module 160 may be one or more devices integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering processing on electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, perform frequency modulation and amplification on the signal, and convert the signal into electromagnetic waves through the antenna 2 to radiate the electromagnetic waves.
In some embodiments, antenna 1 of the electronic device 100 is coupled to the mobile communication module 150 and antenna 2 is coupled to the wireless communication module 160, so that the electronic device 100 can communicate with networks and other devices through wireless communication technologies. The wireless communication technologies may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technologies, and the like. The GNSS may include a global positioning system (GPS), a global navigation satellite system (GLONASS), a BeiDou navigation satellite system (BDS), a quasi-zenith satellite system (QZSS), and/or a satellite based augmentation system (SBAS).
The electronic device 100 implements display functions via the GPU, the display screen 194, and the application processor. The GPU is a microprocessor for image processing, and is connected to the display screen 194 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
The display screen 194 is used to display images, videos, and the like. The display screen 194 includes a display panel. The display panel may adopt a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the electronic device 100 may include 1 or N display screens 194, where N is a positive integer greater than 1.
The electronic device 100 may implement a shooting function through the ISP, the camera 193, the video codec, the GPU, the display 194, the application processor, and the like.
The ISP is used to process the data fed back by the camera 193. For example, when a photo is taken, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP for processing and converting into an image visible to naked eyes. The ISP can also carry out algorithm optimization on the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in camera 193.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image to the photosensitive element. The photosensitive element may be a Charge Coupled Device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The light sensing element converts the optical signal into an electrical signal, which is then passed to the ISP where it is converted into a digital image signal. And the ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into image signal in standard RGB, YUV and other formats. In some embodiments, the electronic device 100 may include 1 or N cameras 193, N being a positive integer greater than 1.
The digital signal processor is used for processing digital signals, and can process digital image signals and other digital signals. For example, when the electronic device 100 selects a frequency bin, the digital signal processor is used to perform fourier transform or the like on the frequency bin energy.
Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs. In this way, the electronic device 100 may play or record video in a variety of encoding formats, such as: moving Picture Experts Group (MPEG) 1, MPEG2, MPEG3, MPEG4, and the like.
The NPU is a neural-network (NN) computing processor. By drawing on the structure of biological neural networks, for example, the transfer mode between neurons of a human brain, it processes input information quickly and can also continuously learn by itself. Applications such as intelligent cognition of the electronic device 100, for example image recognition, face recognition, speech recognition, and text understanding, can be implemented through the NPU.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to extend the memory capability of the electronic device 100. The external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function. For example, files such as music, video, etc. are saved in an external memory card.
The internal memory 121 may be used to store computer-executable program code, which includes instructions. The internal memory 121 may include a program storage area and a data storage area. The program storage area may store an operating system, an application program required by at least one function (such as a sound playing function or an image playing function), and the like. The data storage area may store data (such as audio data and a phone book) created during use of the electronic device 100. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, or a universal flash storage (UFS). The processor 110 performs various functional applications and data processing of the electronic device 100 by executing instructions stored in the internal memory 121 and/or instructions stored in a memory disposed in the processor.
The electronic device 100 may implement audio functions via the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headphone interface 170D, and the application processor. Such as music playing, recording, etc.
The audio module 170 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or some functional modules of the audio module 170 may be disposed in the processor 110.
The speaker 170A, also called a "horn", is used to convert the audio electrical signal into an acoustic signal. The electronic apparatus 100 can listen to music through the speaker 170A or listen to a handsfree call.
The receiver 170B, also called "earpiece", is used to convert the electrical audio signal into an acoustic signal. When the electronic apparatus 100 receives a call or voice information, it can receive voice by placing the receiver 170B close to the ear of the person.
The microphone 170C, also called a "mike", is used to convert sound signals into electrical signals. When making a call or sending voice information, the user can input a sound signal into the microphone 170C by speaking with the mouth close to the microphone 170C. The electronic device 100 may be provided with at least one microphone 170C. In other embodiments, the electronic device 100 may be provided with two microphones 170C, which can implement a noise reduction function in addition to collecting sound signals. In other embodiments, the electronic device 100 may alternatively be provided with three, four, or more microphones 170C to collect sound signals, reduce noise, identify sound sources, implement directional recording, and so on.
The headphone interface 170D is used to connect a wired headphone. The headset interface 170D may be the USB interface 130, or may be a 3.5mm open mobile electronic device platform (OMTP) standard interface, a cellular telecommunications industry association (cellular telecommunications industry association of the USA, CTIA) standard interface.
The pressure sensor 180A is used to sense a pressure signal and convert it into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the display screen 194. There are many types of pressure sensors 180A, such as resistive pressure sensors, inductive pressure sensors, and capacitive pressure sensors. A capacitive pressure sensor may include at least two parallel plates made of conductive material. When a force acts on the pressure sensor 180A, the capacitance between the electrodes changes, and the electronic device 100 determines the strength of the pressure from the change in capacitance. When a touch operation acts on the display screen 194, the electronic device 100 detects the intensity of the touch operation through the pressure sensor 180A, and may also calculate the touch position from the detection signal of the pressure sensor 180A. In some embodiments, touch operations that act on the same touch position but have different intensities may correspond to different operation instructions. For example, when a touch operation whose intensity is smaller than a first pressure threshold acts on the short message application icon, an instruction for viewing the short message is executed; when a touch operation whose intensity is greater than or equal to the first pressure threshold acts on the short message application icon, an instruction for creating a new short message is executed.
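As a minimal illustration of the threshold logic above, the following Python sketch maps touch intensity on the short message icon to an operation instruction; the threshold value, function name, and instruction strings are hypothetical and are not part of this application.

```python
# Hypothetical sketch of pressure-threshold dispatch; the threshold and
# instruction names are illustrative only.
FIRST_PRESSURE_THRESHOLD = 0.5  # normalized touch intensity

def handle_message_icon_touch(intensity: float) -> str:
    """Map touch intensity on the short message icon to an instruction."""
    if intensity < FIRST_PRESSURE_THRESHOLD:
        return "view_short_message"        # light press: view the message
    return "create_new_short_message"      # deep press: create a new message

assert handle_message_icon_touch(0.2) == "view_short_message"
assert handle_message_icon_touch(0.8) == "create_new_short_message"
```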
The gyroscope sensor 180B may be used to determine the motion posture of the electronic device 100. In some embodiments, the angular velocities of the electronic device 100 about three axes (i.e., the x, y, and z axes) may be determined through the gyroscope sensor 180B. The gyroscope sensor 180B may be used for image stabilization during shooting. For example, when the shutter is pressed, the gyroscope sensor 180B detects the shake angle of the electronic device 100, calculates the distance that the lens module needs to compensate according to that angle, and lets the lens counteract the shake of the electronic device 100 through reverse motion, thereby implementing image stabilization. The gyroscope sensor 180B may also be used in navigation and somatosensory gaming scenarios.
The air pressure sensor 180C is used to measure air pressure. In some embodiments, electronic device 100 calculates altitude, aiding in positioning and navigation, from barometric pressure values measured by barometric pressure sensor 180C.
The magnetic sensor 180D includes a Hall sensor. The electronic device 100 may detect the opening and closing of a flip leather case using the magnetic sensor 180D. In some embodiments, when the electronic device 100 is a flip phone, the electronic device 100 may detect the opening and closing of the flip cover according to the magnetic sensor 180D, and then set features such as automatic unlocking upon flip-open according to the detected opening or closing state of the leather case or of the flip cover.
The acceleration sensor 180E may detect the magnitude of acceleration of the electronic device 100 in various directions (generally along three axes), and may detect the magnitude and direction of gravity when the electronic device 100 is stationary. It may also be used to recognize the posture of the electronic device, and is applied in applications such as landscape/portrait switching and pedometers.
The distance sensor 180F is used to measure distance. The electronic device 100 may measure distance by infrared light or laser. In some embodiments, in a shooting scenario, the electronic device 100 may use the distance sensor 180F to measure distance for fast focusing.
The proximity light sensor 180G may include, for example, a light-emitting diode (LED) and a light detector such as a photodiode. The light-emitting diode may be an infrared light-emitting diode. The electronic device 100 emits infrared light outward through the light-emitting diode and uses the photodiode to detect infrared light reflected from nearby objects. When sufficient reflected light is detected, it can be determined that there is an object near the electronic device 100; when insufficient reflected light is detected, the electronic device 100 may determine that there is no object nearby. The electronic device 100 can use the proximity light sensor 180G to detect that the user is holding the electronic device 100 close to the ear for a call, so as to automatically turn off the screen and save power. The proximity light sensor 180G may also be used in holster mode and pocket mode to automatically unlock and lock the screen.
The ambient light sensor 180L is used to sense the ambient light level. Electronic device 100 may adaptively adjust the brightness of display screen 194 based on the perceived ambient light level. The ambient light sensor 180L may also be used to automatically adjust the white balance when taking a picture. The ambient light sensor 180L may also cooperate with the proximity light sensor 180G to detect whether the electronic device 100 is in a pocket to prevent accidental touches.
The fingerprint sensor 180H is used to collect a fingerprint. The electronic device 100 can utilize the collected fingerprint characteristics to unlock the fingerprint, access the application lock, photograph the fingerprint, answer an incoming call with the fingerprint, and so on.
The temperature sensor 180J is used to detect temperature. In some embodiments, the electronic device 100 executes a temperature processing strategy using the temperature detected by the temperature sensor 180J. For example, when the temperature reported by the temperature sensor 180J exceeds a threshold, the electronic device 100 reduces the performance of a processor located near the temperature sensor 180J, so as to reduce power consumption and implement thermal protection. In other embodiments, when the temperature is below another threshold, the electronic device 100 heats the battery 142 to avoid an abnormal shutdown caused by low temperature. In still other embodiments, when the temperature is below yet another threshold, the electronic device 100 boosts the output voltage of the battery 142 to avoid an abnormal shutdown caused by low temperature.
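The tiered temperature strategy above can be sketched as follows; all threshold values and action names are illustrative assumptions, not values specified by this application.

```python
# Hypothetical sketch of the tiered temperature-processing strategy;
# every threshold below is an illustrative assumption.
OVERHEAT_C = 45.0     # above this, throttle the nearby processor
COLD_HEAT_C = 0.0     # below this, heat the battery
COLD_BOOST_C = -10.0  # below this, also boost the battery output voltage

def thermal_policy(temp_c: float) -> list[str]:
    actions = []
    if temp_c > OVERHEAT_C:
        actions.append("reduce_processor_performance")
    if temp_c < COLD_HEAT_C:
        actions.append("heat_battery")
    if temp_c < COLD_BOOST_C:
        actions.append("boost_battery_output_voltage")
    return actions

print(thermal_policy(50.0))   # ['reduce_processor_performance']
print(thermal_policy(-15.0))  # ['heat_battery', 'boost_battery_output_voltage']
```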
The touch sensor 180K is also called a "touch device". The touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touch screen, which is also called a "touch screen". The touch sensor 180K is used to detect a touch operation applied thereto or nearby. The touch sensor can communicate the detected touch operation to the application processor to determine the touch event type. Visual output associated with the touch operation may be provided through the display screen 194. In other embodiments, the touch sensor 180K may be disposed on a surface of the electronic device 100, different from the position of the display screen 194.
The bone conduction sensor 180M may acquire vibration signals. In some embodiments, the bone conduction sensor 180M may acquire the vibration signal of the vibrating bone mass of the human vocal part. The bone conduction sensor 180M may also contact the human pulse to receive a blood-pressure beating signal. In some embodiments, the bone conduction sensor 180M may also be disposed in a headset to form a bone conduction headset. The audio module 170 may parse out a voice signal based on the vibration signal of the vocal-part bone mass acquired by the bone conduction sensor 180M, so as to implement a voice function. The application processor may parse out heart rate information based on the blood-pressure beating signal acquired by the bone conduction sensor 180M, so as to implement a heart rate detection function.
The keys 190 include a power key, a volume key, and the like. The keys 190 may be mechanical keys or touch keys. The electronic device 100 may receive key inputs and generate key signal inputs related to user settings and function control of the electronic device 100.
The motor 191 may generate a vibration cue. The motor 191 may be used for incoming call vibration cues, as well as for touch vibration feedback. For example, touch operations applied to different applications (e.g., photographing, audio playing, etc.) may correspond to different vibration feedback effects. The motor 191 may also respond to different vibration feedback effects for touch operations applied to different areas of the display screen 194. Different application scenes (such as time reminding, receiving information, alarm clock, game and the like) can also correspond to different vibration feedback effects. The touch vibration feedback effect may also support customization.
Indicator 192 may be an indicator light that may be used to indicate a state of charge, a change in charge, or a message, missed call, notification, etc.
The SIM card interface 195 is used to connect a SIM card. A SIM card can be brought into contact with or separated from the electronic device 100 by being inserted into or pulled out of the SIM card interface 195. The electronic device 100 may support 1 or N SIM card interfaces, where N is a positive integer greater than 1. The SIM card interface 195 may support a Nano-SIM card, a Micro-SIM card, a SIM card, and the like. Multiple cards may be inserted into the same SIM card interface 195 at the same time, and the types of the cards may be the same or different. The SIM card interface 195 may also be compatible with different types of SIM cards, and with external memory cards. The electronic device 100 interacts with the network through the SIM card to implement functions such as calling and data communication. In some embodiments, the electronic device 100 employs an eSIM, namely an embedded SIM card. The eSIM card may be embedded in the electronic device 100 and cannot be separated from the electronic device 100.
The software system of the electronic device 100 may employ a layered architecture, an event-driven architecture, a micro-core architecture, a micro-service architecture, or a cloud architecture. The embodiment of the present invention uses an Android system with a layered architecture as an example to exemplarily illustrate a software structure of the electronic device 100.
Fig. 2 is a block diagram of a software configuration of the electronic apparatus 100 according to the embodiment of the present invention.
The layered architecture divides the software into several layers, each layer having a clear role and division of labor. The layers communicate with each other through a software interface. In some embodiments, the Android system is divided into four layers, an application layer, an application framework layer, an Android runtime (Android runtime) and system library, and a kernel layer from top to bottom.
The application layer may include a series of application packages.
As shown in fig. 2, the application package may include applications such as camera, gallery, calendar, phone call, map, navigation, WLAN, bluetooth, music, video, short message, etc.
The application framework layer provides an Application Programming Interface (API) and a programming framework for the application program of the application layer. The application framework layer includes a number of predefined functions.
As shown in FIG. 2, the application framework layers may include a window manager, content provider, view system, phone manager, resource manager, notification manager, and the like.
The window manager is used to manage window programs. The window manager can obtain the size of the display screen, determine whether there is a status bar, lock the screen, capture the screen, and the like.
The content provider is used to store and retrieve data and make it accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phone books, etc.
The view system includes visual controls such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, the display interface including the short message notification icon may include a view for displaying text and a view for displaying pictures.
The phone manager is used to provide the communication functions of the electronic device 100, for example, management of the call status (including connected, hung up, and the like).
The resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, and the like.
The notification manager enables an application to display notification information in the status bar, and can be used to convey notification-type messages that automatically disappear after a short stay without user interaction. For example, the notification manager is used to notify download completion, message alerts, and the like. The notification manager may also present notifications that appear in the top status bar of the system in the form of a chart or scroll-bar text, such as notifications of applications running in the background, or notifications that appear on the screen in the form of a dialog window. For example, text information is prompted in the status bar, a prompt tone is produced, the electronic device vibrates, or the indicator light blinks.
The Android runtime comprises a core library and a virtual machine, and is responsible for the scheduling and management of the Android system.
The core library comprises two parts: one part is the functions that the Java language needs to call, and the other part is the core library of Android.
The application layer and the application framework layer run in the virtual machine. The virtual machine executes the Java files of the application layer and the application framework layer as binary files, and is used to perform functions such as object lifecycle management, stack management, thread management, security and exception management, and garbage collection.
The system library may include a plurality of functional modules. For example: surface managers (surface managers), Media Libraries (Media Libraries), three-dimensional graphics processing Libraries (e.g., OpenGL ES), 2D graphics engines (e.g., SGL), and the like.
The surface manager is used to manage the display subsystem and provide fusion of 2D and 3D layers for multiple applications.
The media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files and the like. The media library may support a variety of audio and video encoding formats, such as MPEG4, H.264, MP3, AAC, AMR, JPG, and PNG.
The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is a layer between hardware and software. The kernel layer contains at least a display driver, a camera driver, an audio driver, and a sensor driver.
The following describes exemplary workflow of the software and hardware of the electronic device 100 in connection with capturing a photo scene.
When the touch sensor 180K receives a touch operation, a corresponding hardware interrupt is issued to the kernel layer. The kernel layer processes the touch operation into a raw input event (including information such as the touch coordinates and the timestamp of the touch operation). The raw input event is stored at the kernel layer. The application framework layer acquires the raw input event from the kernel layer and identifies the control corresponding to the input event. Taking an example in which the touch operation is a touch click operation and the control corresponding to the click operation is the control of the camera application icon: the camera application calls an interface of the application framework layer to start the camera application, then starts the camera driver by calling the kernel layer, and captures a still image or video through the camera 193.
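This event flow can be summarized in a schematic sketch; the layer classes and interfaces below are simplified assumptions for illustration, not actual Android framework code.

```python
# Schematic sketch of the touch-event flow from the kernel layer to an
# application; the classes and interfaces are illustrative simplifications.
from dataclasses import dataclass
import time

@dataclass
class RawInputEvent:
    x: float
    y: float
    timestamp: float

class KernelLayer:
    def on_hardware_interrupt(self, x: float, y: float) -> RawInputEvent:
        # Package the touch into a raw input event with coordinates and time.
        return RawInputEvent(x, y, time.time())

class FrameworkLayer:
    def __init__(self, icon_controls: dict):
        self.icon_controls = icon_controls  # grid cell -> application name

    def dispatch(self, event: RawInputEvent) -> str:
        # Identify the control at the event coordinates and start its app.
        app = self.icon_controls.get((round(event.x), round(event.y)), "none")
        return f"start {app}"

framework = FrameworkLayer({(1, 2): "camera"})
event = KernelLayer().on_hardware_interrupt(1.2, 2.1)
print(framework.dispatch(event))  # start camera
```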
Next, an image processing method provided by the present embodiment will be described from the perspective of the first electronic device. Please refer to the flowchart of the image processing method shown in fig. 3, which includes:
S301, rendering the native image data by the first electronic device through first graphics rendering hardware to obtain a first image;
When the user is using an application program on the first electronic device, the application program transfers native image data to the first graphics rendering hardware of the first electronic device for rendering. Native image data refers to image data that is generated by an application program and has not yet been rendered.
Rendering refers to a process of converting image data stored in the first electronic device into visible pixels through a rasterization technology or the like.
In the image processing method of this embodiment, the first electronic device may obtain a target image meeting a target image quality index through super-resolution image reconstruction. However, the first electronic device cannot directly perform super-resolution reconstruction on the native image data.
Therefore, the first electronic device needs to perform preliminary rendering on the native image data to obtain a first image that can be subjected to super-resolution reconstruction.
In the preliminary rendering process, the first electronic device may invoke first graphics rendering hardware to render the native image data according to the first image quality index, so as to obtain a first image.
When the first graphics rendering hardware performs preliminary rendering on the native image data according to the first image quality index, it may render the native image data at a first image resolution that is lower than a preset image resolution; alternatively, the first graphics rendering hardware may adjust other image quality indexes such that the first image quality index is lower than the preset image quality index, and then render the native image data.
The first image quality index may be a single specific image quality index, or it may be a set of multiple image quality indexes. When the first image quality index is a set of multiple image quality indexes, "the first image quality index being lower than the preset image quality index" may be understood as some or all of the image quality indexes in the set being lower than the corresponding preset image quality indexes.
For example, as in the example images shown in fig. 4 and 5, when the image quality index other than the image resolution is the same, and the image resolution of fig. 4 is lower than that of fig. 5, it is considered that the image quality of fig. 4 is lower than that of fig. 5. Referring to the example images shown in fig. 5 and 6, when the image resolutions of fig. 5 and 6 are the same, the image quality of fig. 6 may be considered to be lower than that of fig. 5 because the sharpness of fig. 6 is lower than that of fig. 5.
When the first graphics rendering hardware renders the image, the higher the image quality index of the rendered image is, the more hardware resources the first graphics rendering hardware occupies, and the higher the rendering power consumption is. Therefore, when the first graphics rendering hardware performs preliminary rendering on the native image data by the first image quality index lower than the preset image quality index, the hardware resources occupied by the first graphics rendering hardware during rendering can be reduced, and the rendering power consumption is reduced.
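The trade-off described above can be illustrated with a small sketch in which the first image resolution is chosen below the preset resolution before preliminary rendering, so that fewer pixels have to be rasterized; the scale factor, function names, and stand-in renderer are assumptions for illustration.

```python
# Sketch of preliminary rendering at a first image resolution below the
# preset resolution; the 0.25 scale factor and the byte-buffer stand-in
# for rasterized pixels are illustrative assumptions.
PRESET_RESOLUTION = (1920, 1080)

def choose_first_resolution(preset=PRESET_RESOLUTION, scale=0.25):
    """Pick a lower rendering resolution to cut rendering power consumption."""
    w, h = preset
    return (int(w * scale), int(h * scale))

def preliminary_render(native_data: bytes, resolution) -> dict:
    # Stand-in for the first graphics rendering hardware: rasterize the
    # native image data at the reduced resolution.
    width, height = resolution
    return {"pixels": b"\x00" * (width * height), "resolution": resolution}

first_image = preliminary_render(b"native image data", choose_first_resolution())
print(first_image["resolution"])  # (480, 270)
```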
The first graphics rendering hardware may be selected according to actual conditions. In some embodiments, the first graphics rendering hardware may be one or more of a central processing unit (CPU), a graphics processing unit (GPU), and a neural-network processing unit (NPU). For example, when the first electronic device preliminarily renders native image data using the GPU Turbo technique, the first graphics rendering hardware may be a combination of a CPU and a GPU.
S302, the first electronic device performs super-resolution reconstruction on the first image through second graphics rendering hardware to obtain a target image.
In S301, the first electronic device performs preliminary rendering on the native image data of the application program through the first graphics rendering hardware to obtain the first image. However, the first image has low image quality and can hardly satisfy the image quality that the user expects of the product.
Super-resolution reconstruction refers to applying AI techniques to map a low-resolution image to a high resolution, so as to achieve the effect of enhancing image quality.
After the first image is obtained by the first graphics rendering hardware, the first electronic device can perform super-resolution reconstruction on the first image by using the second graphics rendering hardware.
When the first electronic device performs super-resolution reconstruction on the first image, the first image may be input into a trained hyper-resolution model, and the first image is subjected to image quality enhancement through the hyper-resolution model to obtain a target image.
The type of the hyper-resolution model can be selected according to actual conditions. For example, the hyper-resolution model may be any one of a super-resolution convolutional neural network model (SRCNN model), a fast super-resolution convolutional neural network model (FSRCNN model), an efficient sub-pixel convolutional neural network model (ESPCN model), a deeply-recursive convolutional network model (DRCN model), and a very deep super-resolution network model (VDSR model).
Moreover, if the first electronic device spends too long processing a single frame of image, the application program cannot maintain its display frame rate, and the picture displayed by the first electronic device stutters. Therefore, when selecting the hyper-resolution model, developers should limit the scale of the hyper-resolution model so that its single-frame running time meets the single-frame time requirement of the application program's display frame rate. For example, if the application frame rate is 90 frames/second, the single-frame time at that frame rate is 1/90 second (about 11.1 ms). In this case, the scale of the hyper-resolution model should be limited so that its single-frame running time is less than 1/90 second, thereby keeping the display frame rate of the application program at 90 frames/second as far as possible and reducing picture stutter.
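This frame-time constraint can be made concrete with a short sketch; only the 90-frame/second budget comes from the example above, and the model runtimes are hypothetical.

```python
# Sketch of the single-frame time-budget check used when limiting the
# scale of the hyper-resolution model; runtimes are hypothetical.
def frame_budget_ms(frame_rate: float) -> float:
    return 1000.0 / frame_rate

def model_fits_budget(model_runtime_ms: float, frame_rate: float) -> bool:
    # The model's single-frame running time must stay within one frame period.
    return model_runtime_ms < frame_budget_ms(frame_rate)

print(round(frame_budget_ms(90), 2))  # 11.11 ms per frame
print(model_fits_budget(8.0, 90))     # True: the display can keep 90 fps
print(model_fits_budget(15.0, 90))    # False: the picture would stutter
```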
When training the hyper-resolution model, the first electronic device may obtain at least one set of image sample pairs and train the hyper-resolution model with them. An image sample pair refers to a pair of sample images. Each set of image sample pairs includes a first sample image and a second sample image; the contents of the two images are identical, but the image quality of the first sample image is lower than that of the second sample image.
When the first electronic device trains the hyper-resolution model using an image sample pair, it may input the first sample image of the pair into the hyper-resolution model to obtain a first output image.
Then, the first electronic device calculates a loss value according to the first output image, the second sample image, and a preset loss function, and updates the hyper-resolution model according to the loss value and a preset network update algorithm.
After the hyper-resolution model is updated, the previous step is executed again, and the image sample pairs are used cyclically to train the hyper-resolution model until the number of loops reaches a preset count threshold or the loss value is smaller than a preset loss threshold.
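A minimal PyTorch sketch of this loop, assuming a tiny convolutional stand-in for the hyper-resolution model and illustrative thresholds, learning rate, and random tensors in place of real sample images:

```python
# Training-loop sketch: forward the first (low-quality) sample image,
# compute a loss against the second (high-quality) sample image, update
# the model, and loop until a count or loss threshold is reached.
import torch
import torch.nn as nn

model = nn.Sequential(                      # stand-in hyper-resolution model
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 3, 3, padding=1),
)
loss_fn = nn.MSELoss()                      # preset loss function
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

MAX_LOOPS, LOSS_THRESHOLD = 100, 1e-3       # preset count/loss thresholds

first_sample = torch.rand(1, 3, 64, 64)     # first (low-quality) sample image
second_sample = torch.rand(1, 3, 64, 64)    # second (high-quality) sample image

for loop in range(MAX_LOOPS):
    first_output = model(first_sample)      # first output image
    loss = loss_fn(first_output, second_sample)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()                        # preset network update algorithm
    if loss.item() < LOSS_THRESHOLD:
        break
```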
The acquisition manner of the image sample pairs can be selected according to actual conditions. For example, an image sample pair may be a pair of images derived from the same native image data, or an image sample pair may be obtained by degrading a high-quality image into a low-quality image.
Moreover, the source of the image sample pairs has a certain influence on the performance of the hyper-resolution model. If developers want to train a general-purpose hyper-resolution model, they can collect samples without pertinence when sampling image sample pairs, so as to obtain general image sample pairs.
In this case, the first electronic device can train the hyper-resolution model with the general image sample pairs to obtain a general hyper-resolution model. The general hyper-resolution model has high applicability and can be applied to more application scenarios. However, its image quality optimization capability is limited, and it is difficult for it to optimize the images of every application scenario to a high degree.
Therefore, if developers want a hyper-resolution model to perform a high degree of image quality optimization for a certain product or class of products, they should collect only image sample pairs related to that product or class of products when sampling, so as to obtain specific image sample pairs.
In this case, the first electronic device trains the hyper-resolution model with the specific image sample pairs, so that a specific hyper-resolution model for that product or class of products can be obtained. The applicability of the specific hyper-resolution model is narrower, and it can only be applied to the specific products; however, its image quality optimization capability is higher, and it can perform a high degree of image quality optimization on the images of the specific products.
Take game-class applications as an example. When developers want to train a general game-class hyper-resolution model, they can obtain image sample pairs from various game applications. For example, developers may obtain general game image sample pairs from similar or different types of game applications such as "Arena of Valor", "PUBG Mobile", "Carrot Fantasy", "Plants vs. Zombies", "Minecraft", "Life After", and so on.
The first electronic device then trains the hyper-resolution model using the general game image sample pairs, so that the trained hyper-resolution model can be adapted to a variety of different game applications.
When developers want to train a targeted hyper-resolution model for a phenomenon-level game application, they may obtain only images of that game application as specific game image sample pairs. For example, Arena of Valor is currently a phenomenon-level game with many players; in order to give players a smoother experience when playing Arena of Valor, developers can train a hyper-resolution model dedicated to it.
When developers train the hyper-resolution model for Arena of Valor, only game images from Arena of Valor should be used as the specific game image sample pairs. The game images may be character images, terrain images, skill images, and the like in Arena of Valor.
Then, the first electronic device trains the hyper-resolution model with the specific game image sample pairs, so that the trained hyper-resolution model can enhance the image quality of Arena of Valor game images in a targeted manner.
When the first electronic device is provided with a plurality of trained hyper-resolution models, the first electronic device may detect the identifier of an application program when the user starts it, and select the corresponding hyper-resolution model according to that identifier to perform the operation of step S302.
If the first electronic device does not find a hyper-resolution model corresponding to the identifier of the application program, the first electronic device may call a general hyper-resolution model to perform step S302. Moreover, when the first electronic device is provided with a plurality of general hyper-resolution models, the first electronic device may further establish an association relationship between the chosen general hyper-resolution model and the identifier of the application program after performing super-resolution reconstruction on the first image with it, so that the next time the first electronic device processes a first image of that application program, it can find the same general hyper-resolution model according to the association relationship.
In some embodiments, the identifier of the application may be the package name of the application. For example, after the first electronic device trains the hyper-resolution model for Arena of Valor, the hyper-resolution model may be associated with the package name of Arena of Valor. When Arena of Valor is started or awakened, the first electronic device obtains its package name, searches for the corresponding hyper-resolution model according to the package name, and processes the images of Arena of Valor using that model.
Alternatively, in other embodiments, the identifier of the application may be user-defined. For example, the user defines the identifier of Arena of Valor as 0010. The first electronic device then associates "0010" with Arena of Valor and with the hyper-resolution model for Arena of Valor. When Arena of Valor is started or awakened, the first electronic device looks up the identifier corresponding to Arena of Valor, obtains the identifier "0010", searches for the corresponding hyper-resolution model according to "0010", and processes the images of Arena of Valor using that model.
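The selection logic of these two embodiments can be sketched as a lookup keyed by application identifier with a fallback to a general model; the package name, identifier, and model names below are hypothetical placeholders.

```python
# Sketch of hyper-resolution model selection by application identifier;
# all keys and model names are hypothetical.
HYPER_RESOLUTION_MODELS = {
    "com.example.arena_of_valor": "arena_of_valor_model",  # package-name key
    "0010": "arena_of_valor_model",                        # user-defined key
}
GENERAL_MODEL = "general_game_model"

def select_model(app_identifier: str) -> str:
    # Fall back to the general model when no specific model is registered.
    return HYPER_RESOLUTION_MODELS.get(app_identifier, GENERAL_MODEL)

print(select_model("0010"))             # arena_of_valor_model
print(select_model("com.example.b"))    # general_game_model
```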
In addition, the hyper-resolution model may be a single enhanced hyper-resolution model or a multiple enhanced hyper-resolution model. A single enhanced hyper-resolution model is a hyper-resolution model in which the image resolution of the input image equals that of the output image. A multiple enhanced hyper-resolution model is a hyper-resolution model in which the image resolution of the input image is smaller than that of the output image.
When the hyper-resolution model is a single enhanced hyper-resolution model, the image resolution of the first image is consistent with that of the target image, and the single enhanced hyper-resolution model enhances the image quality of the first image by improving its other image quality indexes to obtain the target image.
When the hyper-resolution model is a multiple enhanced hyper-resolution model, the image resolution of the first image is smaller than that of the target image. In this case, when the first graphics rendering hardware of the first electronic device performs preliminary rendering on the native image data of the application program, it can render at a smaller image resolution, thereby reducing the hardware resources occupied by the first graphics rendering hardware during rendering and reducing rendering power consumption.
After the first graphics rendering hardware renders the first image at the smaller image resolution, the first electronic device performs image quality enhancement on the first image through the multiple enhanced hyper-resolution model according to the target resolution configured by the user, and adapts the image resolution of the first image to the target image resolution to obtain the target image.
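The difference between the two model types can be sketched purely in terms of input and output resolutions; the sample resolutions are illustrative, and no real network is involved.

```python
# Resolution-only sketch contrasting single enhanced and multiple enhanced
# hyper-resolution models; the resolutions used below are illustrative.
def single_enhanced_output(first_image_resolution):
    # Output resolution equals input resolution; only other image quality
    # indexes (e.g. definition) are improved.
    return first_image_resolution

def multiple_enhanced_output(first_image_resolution, target=(1920, 1080)):
    # Output resolution is larger than the input resolution.
    assert first_image_resolution[0] < target[0]
    return target

print(single_enhanced_output((1920, 1080)))   # (1920, 1080): same resolution
print(multiple_enhanced_output((480, 360)))   # (1920, 1080): upscaled
```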
It should be noted that, when the first electronic device performs the super-resolution reconstruction operation, the second graphics rendering hardware that runs the hyper-resolution model may be set according to actual conditions. In some embodiments, the first electronic device may run the hyper-resolution model on a CPU and perform image quality enhancement on the first image through the CPU; in other embodiments, the first electronic device may run the hyper-resolution model on a GPU and perform image quality enhancement on the first image through the GPU; alternatively, the first electronic device may run the hyper-resolution model on an NPU and perform image quality enhancement on the first image through the NPU. The present application does not limit the hardware in the first electronic device that performs the super-resolution reconstruction operation.
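Since the application does not limit which hardware performs the reconstruction, one possible policy, sketched below under the assumption of a fixed preference order, is to place the hyper-resolution model on the most suitable available back end.

```python
# Sketch of choosing second graphics rendering hardware for the
# hyper-resolution model; the preference order is an assumption.
BACKEND_PREFERENCE = ["npu", "gpu", "cpu"]

def pick_second_rendering_hardware(available: set[str]) -> str:
    for backend in BACKEND_PREFERENCE:
        if backend in available:
            return backend
    raise RuntimeError("no graphics rendering hardware available")

print(pick_second_rendering_hardware({"cpu", "gpu", "npu"}))  # npu
print(pick_second_rendering_hardware({"cpu", "gpu"}))         # gpu
```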
In order to better explain the image processing method provided in the embodiments of the present application, the following description is given in conjunction with specific scenes.
Fig. 7 is a schematic diagram of the first electronic device applicable to scene one, scene two, scene three, and scene four provided in this embodiment. As shown in fig. 7, a GPU 701 and an NPU 702 may be disposed within the first electronic device. In the NPU 702, a general game-class hyper-resolution model and a hyper-resolution model for game A are preset, and the hyper-resolution model for game A is associated with the application identifier of game A.
Scene one:
As shown in fig. 8, a plurality of icons including "clock", "calendar", "game B", "memo", "camera", "address book", "telephone", "information", and the like may be provided on the home page of the first electronic device. One icon represents one application.
The user clicks the icon of game B on the first electronic device, and the first electronic device starts game B in response to the user's click operation.
As shown in fig. 9, after game B is started, the application program of game B sends native image data to the GPU 701 frame by frame.
The GPU 701 renders the native image data frame by frame to obtain a first image corresponding to each frame of native image data, and sends the first images to the NPU 702 frame by frame.
After receiving the first image, the NPU 702 obtains the application identifier of game B; if no hyper-resolution model corresponding to game B is found according to the application identifier of game B, the NPU 702 selects the general game-class hyper-resolution model as the target hyper-resolution model.
The NPU 702 inputs the first images into the target hyper-resolution model frame by frame, performs image quality enhancement on the first images through the target hyper-resolution model to obtain a target image corresponding to each frame of first image, and sends the target images to the display screen of the first electronic device frame by frame.
As shown in fig. 10, after acquiring the target images, the display screen displays them on-screen, presenting the picture of game B.
In addition, if only one general game-class hyper-resolution model is set in the first electronic device, when the user triggers game B again, the first electronic device still processes the images of game B using that same general game-class hyper-resolution model.
If there are multiple general game-class hyper-resolution models in the first electronic device, when the user triggers game B again, the first electronic device may randomly select one of them to process the images of game B.
Alternatively, if there are multiple general game-class hyper-resolution models in the first electronic device, the first electronic device may establish an association relationship between the application identifier of game B and the chosen general game-class hyper-resolution model after processing the images of game B with it for the first time. When the user triggers game B again, the first electronic device can acquire the application identifier of game B and find the same general game-class hyper-resolution model according to that identifier to process the images of game B.
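The reuse behavior described in the last two paragraphs can be sketched as follows: on the first launch a general model is chosen (randomly, when several exist) and the association is recorded, so that later launches resolve to the same model. The names and data structure are illustrative.

```python
# Sketch of associating an application with the general hyper-resolution
# model that first processed it; names and structure are illustrative.
import random

GENERAL_MODELS = ["general_game_model_1", "general_game_model_2"]
app_to_model: dict[str, str] = {}  # recorded association relationships

def model_for_app(app_id: str) -> str:
    if app_id not in app_to_model:
        # First trigger: randomly pick a general model, record the association.
        app_to_model[app_id] = random.choice(GENERAL_MODELS)
    return app_to_model[app_id]

first = model_for_app("game_b")
assert model_for_app("game_b") == first  # same model on the next trigger
```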
Scene two:
As shown in fig. 11, a plurality of icons including "clock", "calendar", "game A", "memo", "camera", "address book", "telephone", "information", and the like may be provided on the home page of the first electronic device. One icon represents one application.
The user clicks the icon of game A on the first electronic device, and the first electronic device starts game A in response to the user's click operation.
After game A is started, the application program of game A sends native image data to the GPU 701 frame by frame.
The GPU 701 renders the native image data frame by frame to obtain a first image corresponding to each frame of native image data, and sends the first images to the NPU 702 frame by frame.
After receiving the first image, the NPU 702 obtains the application identifier of game A, finds the hyper-resolution model for game A according to the application identifier, and selects the hyper-resolution model for game A as the target hyper-resolution model.
The NPU 702 inputs the first images into the target hyper-resolution model frame by frame, performs image quality enhancement on the first images through the target hyper-resolution model to obtain a target image corresponding to each frame of first image, and sends the target images to the display screen of the first electronic device frame by frame.
As shown in fig. 12, after acquiring the target images, the display screen displays them on-screen, presenting the picture of game A.
Scene three:
As shown in fig. 11, the user clicks the icon of game A on the first electronic device, and the first electronic device starts game A in response to the user's click operation.
As shown in fig. 13 and 14, after game A starts, the application program of game A sends native image data 1301 to the GPU 701 frame by frame. The image resolution of the native image data 1301 is 480 × 360.
The GPU 701 renders the native image data 1301 frame by frame to obtain a first image 1302 corresponding to each frame of native image data 1301; the image resolution of the first image 1302 is 1920 × 1080. The GPU 701 sends the preliminarily rendered first images 1302 to the NPU 702 frame by frame.
After receiving the first image 1302, the NPU 702 obtains the application identifier of game A, finds the hyper-resolution model for game A according to the application identifier, and selects it as the target hyper-resolution model. The target hyper-resolution model is a single enhanced hyper-resolution model, and the target resolution of its output image is 1920 × 1080.
The NPU 702 inputs the first image 1302 into the target hyper-resolution model frame by frame, and performs image quality enhancement on the first image 1302 by the target hyper-resolution model to obtain a target image 1303 with an image resolution of 1920 × 1080.
As shown in fig. 14, the image resolution of the target image 1303 is the same as that of the first image 1302, but the definition of the target image 1303 is higher than that of the first image 1302, so the image quality of the target image 1303 is higher than that of the first image 1302.
After the NPU 702 acquires the target image 1303, the target image 1303 is sent to the display screen of the first electronic device frame by frame.
Scene four:
As shown in fig. 11, the user clicks the icon of game A on the first electronic device, and the first electronic device starts game A in response to the user's click operation.
As shown in fig. 15 and 16, after game A is started, the application program of game A sends native image data 1501 to the GPU 701 frame by frame. The image resolution of the native image data 1501 is 480 × 360.
The GPU 701 renders the native image data 1501 frame by frame to obtain a first image 1502 corresponding to each frame of native image data; the image resolution of the first image 1502 is 480 × 360. The GPU 701 sends the preliminarily rendered first images 1502 to the NPU 702 frame by frame.
After receiving the first image 1502, the NPU 702 obtains the application identifier of game A, finds the hyper-resolution model for game A according to the application identifier, and selects it as the target hyper-resolution model. The target hyper-resolution model is a multiple enhanced hyper-resolution model, and the target resolution of its output image is 1920 × 1080.
The NPU 702 inputs the first image 1502 into the target hyper-resolution model frame by frame, performs image quality enhancement on the first image 1502 by the target hyper-resolution model, and performs resolution adaptation on the first image 1502 to obtain a target image 1503 with an image resolution of 1920 × 1080.
As shown in fig. 16, the definition of the target image 1503 is the same as that of the first image 1502, but the image resolution of the target image 1503 is higher than that of the first image 1502, and the image quality of the target image 1503 is higher than that of the first image 1502.
After acquiring the target image 1503, the NPU 702 sends the target image 1503 to the display screen of the first electronic device frame by frame for on-screen display.
In summary, in the image processing method provided in this embodiment, the first electronic device first performs preliminary rendering on the native image data of the application program to obtain the first image. Then, the first electronic device performs super-resolution reconstruction on the first image through the hyper-resolution model, thereby improving the image quality of the first image and obtaining a target image for on-screen display. Compared with the current scheme of rendering the native image data directly into a high-quality image, the image processing method provided in this embodiment can reduce the requirement on the hardware resources of the first electronic device and reduce rendering power consumption, and thus solves the problems of high rendering power consumption and a large amount of calculation when an existing image processing method renders a high-image-quality product.
Furthermore, the above-mentioned steps of preliminary rendering and super-resolution reconstruction may be performed by the same graphics rendering hardware within the first electronic device. For example, in some embodiments, the preliminary rendering step and the super-resolution reconstruction step described above may both be performed by a GPU within the first electronic device; in other embodiments, the preliminary rendering step and the super-resolution reconstruction step may both be performed by the NPU within the first electronic device.
Alternatively, the preliminary rendering step and the super-resolution reconstruction step may be performed by different graphics rendering hardware in the first electronic device, that is, the first graphics rendering hardware and the second graphics rendering hardware are different graphics rendering hardware. For example, in some embodiments, the preliminary rendering step may be performed by a GPU of the first electronic device and the super-resolution reconstruction step by an NPU of the first electronic device; in other embodiments, the preliminary rendering step may be performed by a CPU of the first electronic device and the super-resolution reconstruction step by an NPU of the first electronic device.
When the preliminary rendering step and the super-resolution reconstruction step are executed by different hardware, the heterogeneous hardware resources in the first electronic device can be fully utilized, the demand on the first electronic device's hardware resources is reduced, and image quality can be improved even when hardware resources are limited.
When performing super-resolution reconstruction, the super-resolution model selected by the first electronic device may be a single enhanced super-resolution model or a multiple enhanced super-resolution model. When a multiple enhanced super-resolution model is selected, both the hardware resources occupied by the graphics rendering hardware during preliminary rendering and the power consumption of preliminary rendering can be reduced.
In addition, the super-resolution model may be a general super-resolution model, or a specific super-resolution model for a certain application or class of applications. A specific super-resolution model has a narrower application range than a general one, but a better image quality enhancement effect.
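To make the two-stage division of labour concrete, the following minimal Python sketch mimics the flow described above. It is illustrative only: gpu_render() and npu_super_resolve() are hypothetical stand-ins for the first and second graphics rendering hardware, and the model registry stands in for the preset super-resolution models; the patent itself publishes no code.

```python
import numpy as np

SPECIFIC_MODELS = {"game_A": "gameA_multiple_enhanced_model"}  # app id -> model
GENERAL_MODEL = "general_game_model"

def gpu_render(native_frame, width, height):
    """Preliminary rendering on the first graphics rendering hardware (GPU)."""
    # Render the un-rendered native image data into a low-resolution RGB frame.
    return np.zeros((height, width, 3), dtype=np.uint8)

def select_model(app_id):
    """Application-specific super-resolution model if preset, else the general one."""
    return SPECIFIC_MODELS.get(app_id, GENERAL_MODEL)

def npu_super_resolve(first_image, model, target_hw):
    """Super-resolution reconstruction on the second hardware (NPU): a multiple
    enhanced model lifts image quality and upscales in a single pass."""
    target_h, target_w = target_hw
    return np.zeros((target_h, target_w, 3), dtype=np.uint8)

# Per-frame flow of scene four: 480x360 preliminary render, 1920x1080 target image.
first_image = gpu_render(None, width=480, height=360)
target_image = npu_super_resolve(first_image, select_model("game_A"), (1080, 1920))
print(target_image.shape)  # (1080, 1920, 3)
```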
The foregoing described the method executed within a single device. Another image processing method provided by this embodiment is described below from the perspective of the first electronic device and a designated device. Refer to the flowchart of the image processing method shown in fig. 8, which includes:
S1701: the first electronic device renders the native image data to obtain a first image.
This embodiment describes a method in which multiple electronic devices cooperate to perform image rendering.
When the first electronic device interacts with other electronic devices in a multi-screen manner, the first electronic device may need to project a display picture of the first electronic device to the other electronic devices.
For example, when a user plays a game on a mobile phone, the phone's screen may feel too small and the visual effect poor. If the user also owns a smart television, the user can have the phone and the television perform multi-screen interaction: a communication connection is established between the two, and the game picture on the phone is projected onto the television for display, so that the user can watch the game picture on the television and enjoy a better visual experience.
In the current screen projection scheme, the electronic device on the screen projection side must complete the image rendering work on its own and then transfer the rendered image to the electronic device on the projected side. The projected-side device then performs resolution adaptation on the rendered image and displays the adapted image on its screen.
Because the rendering work is handled entirely by the device on the screen projection side, this scheme occupies a large amount of that device's hardware resources, incurs high rendering power consumption, and leaves the hardware resources of the other electronic devices in the same local area network underutilized.
In addition, screen projection is typically used to project the display of a small-screen device onto a large-screen device, and an image with a low image resolution adapts poorly to a display device with a high image resolution, resulting in a poor user experience.
Therefore, in the image processing method provided in this embodiment, when a user starts an application program on the first electronic device and starts a multi-screen interaction function, the first electronic device may perform preliminary rendering on native image data generated by the application program to obtain a first image.
The process of preliminary rendering may refer to the description about step S301 in the previous embodiment, and is not repeated herein.
S1702: the first electronic device sends the first image to the designated device to instruct the designated device to perform super-resolution reconstruction on the first image to obtain a target image.
The designated device (i.e. the second electronic device) is another electronic device selected by the user and located in the same local area network as the first electronic device.
For example, as shown in fig. 18, when the user wishes to project the game screen of the mobile phone to the smart television, the user may turn on the phone's "wireless projection" function. After the user turns this function on, the phone begins searching for available electronic devices in the same local area network and finds electronic device 1, electronic device 2 and electronic device 3. When the phone detects the user's click on electronic device 1, indicating that the user has selected it, the phone determines electronic device 1 as the designated device.
After the first electronic device obtains the first image, it sends the first image to the designated device, i.e., the projected electronic device. There may be one designated device or several.
For example, referring to fig. 18, assume that electronic device 1 is a smart television and electronic device 2 is a computer. If the user wants to project the game picture of the phone to the smart television, the phone is the first electronic device on the projection side and the smart television is the projected device (i.e., the designated device). The user turns on the phone's wireless screen projection function; the phone then searches for available electronic devices in the same local area network and finds electronic device 1, electronic device 2 and electronic device 3. The user clicks electronic device 1, and the phone, in response, sets electronic device 1 as the designated device; the number of designated devices is 1.
If the user instead wants to project the game picture of the phone to the smart television and the computer at the same time, the phone is the first electronic device on the projection side, and the smart television and the computer are the projected devices (i.e., the designated devices). After the user turns on the phone's wireless screen projection function, the phone searches for available electronic devices in the same local area network and finds electronic device 1, electronic device 2 and electronic device 3. The user clicks electronic device 1 and electronic device 2, and the phone, in response, sets both as designated devices; the number of designated devices is 2.
After receiving the first image, the designated device inputs the first image into the trained super-resolution model and performs super-resolution reconstruction on it, obtaining and displaying the target image.
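A minimal sketch of step S1702 follows, with the network transport abstracted into a callable per device; make_designated_device() and the device names and resolutions are hypothetical, not taken from the patent.

```python
from typing import Callable, Dict

def make_designated_device(name: str, target_resolution) -> Callable:
    """Model a projected-side device that super-resolves whatever it receives."""
    def receive(first_image) -> None:
        # On real hardware this would run the device's trained super-resolution
        # model and hand the resulting target image to the local display screen.
        print(f"{name}: reconstructing the received frame to {target_resolution}")
    return receive

# One designated device or several may be selected (see the examples above).
designated_devices: Dict[str, Callable] = {
    "electronic_device_1": make_designated_device("smart_tv", (1920, 1080)),
    "electronic_device_2": make_designated_device("computer", (2560, 1440)),
}

def send_first_image(first_image) -> None:
    for device in designated_devices.values():
        device(first_image)  # stand-in for streaming over the local area network

send_first_image("frame_0")
```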
It can be understood that, with the image processing method of this embodiment, the first electronic device not only uses local hardware resources to render images but also uses the hardware resources of the designated device to optimize image quality, making full use of the hardware of multiple electronic devices in the same local area network. This reduces the load on the first electronic device's local hardware resources and its rendering power consumption when rendering high-quality images.
The super-resolution model on the designated device may be a single enhanced super-resolution model or a multiple enhanced super-resolution model. A single enhanced super-resolution model is one in which the input image and the output image have the same image resolution; a multiple enhanced super-resolution model is one in which the image resolution of the input image is smaller than that of the output image.
In some embodiments, when the super-resolution model on the designated device is a single enhanced model, the first electronic device may obtain the target image resolution configured on the designated device, i.e., the image resolution set on that device for on-screen display.
The first electronic device renders the native image data generated by the application program according to the target image resolution to obtain the first image; the image resolution of the first image is then the target image resolution.
The first electronic device then transmits the first image to the designated device, which inputs it into the single enhanced super-resolution model. The model improves the image quality of the first image, and the resulting target image, whose image resolution is the target image resolution, is displayed on screen.
In other embodiments, when the super-resolution model on the designated device is a single enhanced model, the first electronic device may instead directly render the native image data generated by the application program according to the first image resolution configured by the user to obtain the first image; the image resolution of the first image is then the first image resolution.
The first electronic device then transmits the first image to the designated device. Because the model on the designated device is a single enhanced super-resolution model, it cannot by itself adapt the resolution of the first image to the target image resolution, i.e., the on-screen display resolution set by the designated device. Therefore, after obtaining the first image, the designated device up-samples it so that its resolution matches the target image resolution configured on the device, obtaining a second image whose image resolution is the target image resolution.
The up-sampling may use any interpolation algorithm, such as nearest-neighbor, bilinear or cubic interpolation; the designated device up-samples the first image with a preset up-sampling algorithm to obtain the second image.
The designated device then inputs the second image into the single enhanced super-resolution network, which enhances the image quality of the second image to produce the target image; the image resolution of the target image is the target image resolution.
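As a concrete illustration of this adaptation step, the sketch below assumes OpenCV is available: cv2.resize's INTER_NEAREST, INTER_LINEAR and INTER_CUBIC flags correspond to the nearest-neighbor, bilinear and cubic options named above, while the enhance callable is a hypothetical stand-in for the single enhanced super-resolution network, which the patent does not specify in code.

```python
import cv2
import numpy as np

def adapt_and_enhance(first_image, target_w, target_h, enhance):
    # Up-sample the received first image to the target image resolution
    # configured on the designated device, yielding the second image.
    second_image = cv2.resize(first_image, (target_w, target_h),
                              interpolation=cv2.INTER_CUBIC)
    # The single enhanced model keeps the resolution fixed and lifts quality.
    return enhance(second_image)

first_image = np.zeros((360, 480, 3), dtype=np.uint8)      # a 480x360 first image
target_image = adapt_and_enhance(first_image, 1920, 1080, lambda x: x)
print(target_image.shape)                                   # (1080, 1920, 3)
```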
In other embodiments, when the super-resolution model on the designated device is a multiple enhanced model, the first electronic device may render the native image data generated by the application program according to the first image resolution configured by the user to obtain the first image; the image resolution of the first image is then the first image resolution.
The first electronic device then transmits the first image to the designated device, which inputs it into the multiple enhanced super-resolution network. The network enhances the image quality of the first image and adapts its resolution to the target image resolution configured on the designated device, producing a target image whose image resolution is the target image resolution.
In addition, the super-resolution model on the designated device may be a general super-resolution model, or a specific super-resolution model trained for a certain application or class of applications. The training of the general and specific super-resolution models may refer to the description in the foregoing embodiment and is not repeated here.
When the designated device holds multiple trained super-resolution models, it may, after receiving the first image, obtain the identifier of the application program corresponding to the first image and select the corresponding super-resolution model according to that identifier to perform super-resolution reconstruction on the first image.
If the designated device finds no super-resolution model corresponding to the application's identifier, it may call a general super-resolution model to perform the reconstruction. Moreover, when the designated device holds several general super-resolution models, it may, after reconstructing the first image with one of them, record an association between that general model and the application identifier, so that the next time it processes a first image from the same application it can find the same general model through the association.
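The lookup-with-fallback-and-association behaviour just described can be summarized in a few lines of Python; the registries, model names and application identifiers below are hypothetical placeholders.

```python
# Hypothetical registries; on a real designated device these would hold loaded models.
specific_models = {"game_A": "gameA_super_resolution_model"}
general_models = ["general_game_model", "general_video_model"]
associations = {}   # application identifier -> general model chosen previously

def resolve_model(app_id):
    if app_id in specific_models:      # 1. prefer the application-specific model
        return specific_models[app_id]
    if app_id in associations:         # 2. reuse the general model associated earlier
        return associations[app_id]
    model = general_models[0]          # 3. pick a general model and record the link
    associations[app_id] = model
    return model

print(resolve_model("game_B"))   # game_B has no specific model: a general one is used
print(resolve_model("game_B"))   # the recorded association yields the same model again
```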
It should be noted that the first electronic device may select appropriate graphics rendering hardware to perform the preliminary rendering of the native image data according to the actual situation, obtaining the first image. The graphics rendering hardware may be one or more of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU) and a Neural-network Processing Unit (NPU). For example, when the first electronic device preliminarily renders the native image data using GPU Turbo technology, the graphics rendering hardware may be a combination of a CPU and a GPU.
The designated device may likewise select appropriate hardware to run the super-resolution model according to the actual situation. In some embodiments, the designated device may run the super-resolution model on a CPU and enhance the image quality of the first image through the CPU; in other embodiments, it may run the model on a GPU and enhance the image quality through the GPU; or it may run the model on an NPU and enhance the image quality through the NPU. The present application does not limit which hardware within the designated device performs the super-resolution reconstruction.
To better explain the image processing method provided in this embodiment of the present application, the following description works through specific scenes.
In the following application scenarios, the first electronic device (i.e., the electronic device on the screen projection side) may be a mobile phone provided with a GPU. The designated devices (i.e., the projected electronic devices) may be a smart television and a computer, each provided with an NPU. A general game-class super-resolution model and a game A super-resolution model are preset in the NPU, and the game A super-resolution model is associated with the application identifier of game A.
Scene five:
The user operates the mobile phone to project the phone's display picture to the smart television.
As shown in figs. 8 and 19, the user clicks the icon of game B on the mobile phone, and the mobile phone starts game B in response to the click.
After game B is started, the application program of game B sends the native image data to the GPU of the mobile phone frame by frame.
The GPU of the mobile phone renders the native image data frame by frame to obtain a first image corresponding to each frame of native image data, and sends the first image frame by frame to the smart television through the wireless communication module of the mobile phone.
After receiving the first image, the smart television obtains the application identifier corresponding to the first image; since no super-resolution model for game B is found under that identifier, it selects the general game-class super-resolution model as the target super-resolution model.
The NPU of the smart television inputs the first image into the target super-resolution model frame by frame, enhances the image quality of each first image through the target super-resolution model to obtain the corresponding target image, and sends the target images frame by frame to the display screen of the smart television for display.
Scene six:
The user operates the mobile phone to project the phone's display picture to the smart television.
As shown in figs. 9 and 19, the user clicks the icon of game A on the mobile phone, and the mobile phone starts game A in response to the click.
After game A is started, the application program of game A sends the native image data to the GPU of the mobile phone frame by frame.
The GPU of the mobile phone renders the native image data frame by frame to obtain a first image corresponding to each frame of native image data, and sends the first image to the smart television through the wireless communication module of the mobile phone.
After receiving the first image, the smart television obtains the application identifier corresponding to the first image, finds the super-resolution model for game A according to that identifier, and selects it as the target super-resolution model.
The NPU of the smart television inputs the first image into the target super-resolution model frame by frame, enhances the image quality of each first image through the target super-resolution model to obtain the corresponding target image, and sends the target images frame by frame to the display screen of the smart television for display.
Scene seven:
Referring to figs. 20 and 21, the user operates the mobile phone 2001 to project the display screen of the mobile phone 2001 to the smart television 2002 and the computer 2003.
The user clicks the icon of game A on the mobile phone 2001, and the mobile phone 2001 starts game A in response to the click.
After game A is started, the application program of game A sends the native image data 20011 to the GPU of the mobile phone 2001 frame by frame. The image resolution of the native image data 20011 is 480 × 360.
In addition, the mobile phone 2001 exchanges data with the smart television 2002 and the computer 2003 through its wireless communication module, obtaining the image resolution of 1920 × 1080 configured on the smart television 2002 and the image resolution of 2560 × 1440 configured on the computer 2003.
The mobile phone 2001 renders the native image data 20011 frame by frame according to the image resolution of 1920 × 1080 configured on the smart television 2002 to obtain an image 20012 with a resolution of 1920 × 1080.
Meanwhile, the mobile phone 2001 renders the native image data 20011 frame by frame according to the image resolution of 2560 × 1440 configured on the computer 2003 to obtain an image 20013 with a resolution of 2560 × 1440.
The mobile phone 2001 transmits the image 20012 to the smart television 2002 and the image 20013 to the computer 2003 through its wireless communication module.
After receiving the image 20012, the smart television 2002 obtains the application identifier corresponding to the image 20012, finds the single enhanced super-resolution model for game A according to that identifier, and selects it as the target super-resolution model.
The NPU of the smart television 2002 inputs the image 20012 into the target super-resolution model frame by frame and performs super-resolution reconstruction on it to obtain an image 20021. The image resolution of the image 20021 is 1920 × 1080.
As shown in fig. 21, the image resolution of the image 20012 and that of the image 20021 are both 1920 × 1080, but the definition of the image 20021 is higher than that of the image 20012, and the image quality of the image 20021 is higher than that of the image 20012.
After the smart television 2002 obtains the image 20021, the image 20021 is transferred frame by frame to the display screen of the smart television 2002 for display.
After receiving the image 20013, the computer 2003 obtains the application identifier corresponding to the image 20013, finds the single enhanced super-resolution model for game A according to that identifier, and selects it as the target super-resolution model.
The NPU of the computer 2003 inputs the image 20013 into the target super-resolution model frame by frame and performs super-resolution reconstruction on it to obtain an image 20031. The image resolution of the image 20031 is 2560 × 1440.
As shown in fig. 21, the image resolution of the image 20013 and the image resolution of the image 20031 are both 2560 × 1440, but the definition of the image 20031 is higher than that of the image 20013, and the image quality of the image 20031 is higher than that of the image 20013.
After the computer 2003 obtains the image 20031, the image 20031 is transferred frame by frame to the display screen of the computer 2003 for display.
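The projection-side behaviour of scene seven can be pictured with a short Python sketch. It is illustrative only: query_resolution() and render() are hypothetical stand-ins for the wireless data exchange and the GPU render pass, and the device names simply mirror the scene.

```python
import numpy as np

def query_resolution(device_name):
    """Stand-in for the wireless data exchange that reads a device's setting."""
    return {"smart_tv_2002": (1080, 1920), "computer_2003": (1440, 2560)}[device_name]

def render(native_frame, height, width):
    """Stand-in for the GPU's frame-by-frame preliminary render."""
    return np.zeros((height, width, 3), dtype=np.uint8)

def project_frame(native_frame, device_names):
    frames = {}
    for name in device_names:
        h, w = query_resolution(name)
        frames[name] = render(native_frame, h, w)   # one render per device
    return frames

out = project_frame(None, ["smart_tv_2002", "computer_2003"])
print({name: frame.shape for name, frame in out.items()})
# {'smart_tv_2002': (1080, 1920, 3), 'computer_2003': (1440, 2560, 3)}
```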
Scene eight:
Referring to figs. 22 and 23, the user operates the mobile phone 2201 to project the display screen of the mobile phone 2201 to the smart television 2202 and the computer 2203.
The user clicks the icon of game A on the mobile phone 2201, and the mobile phone 2201 starts game A in response to the click.
After game A is started, the application program of game A sends the native image data 22011 to the GPU of the mobile phone 2201 frame by frame. The image resolution of the native image data 22011 is 480 × 360.
The mobile phone 2201 renders the native image data 22011 frame by frame at the preset image resolution to obtain an image 22012 with a resolution of 480 × 360.
The mobile phone 2201 transmits the image 22012 to the smart television 2202 through the wireless communication module, and transmits the image 22012 to the computer 2203 through the wireless communication module.
After receiving the image 22012, the smart television 2202 obtains the application identifier corresponding to the image 22012, finds the single enhanced super-resolution model for game A according to that identifier, and selects it as the target super-resolution model.
The NPU of the smart television 2202 up-samples the image 22012 frame by frame to obtain an image 22021 with an image resolution of 1920 × 1080. The NPU of the smart television 2202 then inputs the image 22021 into the target super-resolution model frame by frame and performs super-resolution reconstruction on it to obtain an image 22022.
As shown in fig. 23, the image resolution of the image 22021 and that of the image 22022 are both 1920 × 1080, but the definition of the image 22022 is higher than that of the image 22021, and the image quality of the image 22022 is higher than that of the image 22021.
After the smart television 2202 obtains the image 22022, the image 22022 is transferred frame by frame to the display screen of the smart television 2202 for display.
After receiving the image 22012, the computer 2203 obtains the application identifier corresponding to the image 22012, finds the single enhanced super-resolution model for game A according to that identifier, and selects it as the target super-resolution model.
The NPU of the computer 2203 up-samples the image 22012 frame by frame to obtain an image 22031 with an image resolution of 2560 × 1440. The NPU of the computer 2203 then inputs the image 22031 into the target super-resolution model frame by frame and performs super-resolution reconstruction on it to obtain an image 22032.
As shown in fig. 23, the image resolution of the image 22031 and that of the image 22032 are both 2560 × 1440, but the definition of the image 22032 is higher than that of the image 22031, and the image quality of the image 22032 is higher than that of the image 22031.
After the computer 2203 obtains the image 22032, the image 22032 is transferred to the display screen of the computer 2203 frame by frame.
Scene nine:
Referring to figs. 24 and 25, the user operates the mobile phone 2401 to project the display screen of the mobile phone 2401 to the smart television 2402 and the computer 2403.
The user clicks the icon of game A on the mobile phone 2401, and the mobile phone 2401 starts game A in response to the click.
After game A is started, the application program of game A sends the native image data 24011 to the GPU of the mobile phone 2401 frame by frame.
The mobile phone 2401 renders the native image data 24011 frame by frame at the preset image resolution to obtain an image 24012 with a resolution of 480 × 360.
The mobile phone 2401 sends the image 24012 to the smart television 2402 through the wireless communication module, and sends the image 24012 to the computer 2403 through the wireless communication module.
After receiving the image 24012, the smart television 2402 obtains the application identifier corresponding to the image 24012, finds the multiple enhanced super-resolution model for game A according to that identifier, and selects it as the target super-resolution model.
The NPU of the smart television 2402 inputs the image 24012 into the target super-resolution model frame by frame and performs super-resolution reconstruction on it to obtain an image 24021 with an image resolution of 1920 × 1080.
As shown in fig. 25, the image resolution of the image 24021 is greater than that of the image 24012, and the definition of the image 24021 is higher than that of the image 24012, so the image quality of the image 24021 is higher than that of the image 24012.
After the smart television 2402 obtains the image 24021, the image 24021 is transferred to the display screen of the smart television 2402 for display frame by frame.
After receiving the image 24012, the computer 2403 obtains the application identifier corresponding to the image 24012, finds the multiple enhanced super-resolution model for game A according to that identifier, and selects it as the target super-resolution model.
The NPU of the computer 2403 inputs the image 24012 into the target super-resolution model frame by frame and performs super-resolution reconstruction on it to obtain an image 24031 with an image resolution of 2560 × 1440.
The image resolution of the image 24031 is greater than that of the image 24012, and the definition of the image 24031 is higher than that of the image 24012, so the image quality of the image 24031 is higher than that of the image 24012.
After the computer 2403 obtains the image 24031, the image 24031 is transferred to the display screen of the computer 2403 frame by frame for display.
In summary, in the image processing method provided in this embodiment, the first electronic device first performs a preliminary rendering of the application program's native image data to obtain a first image, then transmits the first image to the designated device to be projected to, and the designated device performs super-resolution reconstruction on the first image, improving its image quality and producing a target image for on-screen display. That is, the image rendering process is divided into two steps, preliminary rendering and super-resolution reconstruction: the preliminary rendering step is performed by the first electronic device on the screen projection side, and the super-resolution reconstruction step is performed by the designated device on the projected side. This reduces the load on the hardware resources of the projection-side device, lowers its rendering power consumption, and makes full use of the hardware resources of both the projection-side device and the projected-side designated device.
In addition, with the image processing method provided in this embodiment, the resolution of the quality-enhanced target image is adapted to the display resolution of the designated device, improving the user experience.
When super-resolution reconstruction is performed on the designated device, the super-resolution model it selects may be a single enhanced super-resolution model or a multiple enhanced super-resolution model. When a multiple enhanced model is selected, both the hardware resources occupied by the preliminary rendering and the power consumption of the preliminary rendering can be reduced.
In addition, the super-resolution model may be a general super-resolution model, or a specific super-resolution model for a certain application or class of applications. A specific model has a narrower application range than a general one, but a better image quality enhancement effect.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Referring to fig. 26, an embodiment of the present application further provides an electronic device. As shown in fig. 26, the electronic device 26 of this embodiment includes: a processor 260, a memory 261, and a computer program 262 stored in the memory 261 and executable on the processor 260. When executing the computer program 262, the processor 260 implements the steps in the image processing method embodiments described above, such as steps S301 to S302 shown in fig. 1. Alternatively, when executing the computer program 262, the processor 260 implements the functions of the modules/units in the device embodiments described above, such as the functions of the modules 2601 to 2602 shown in fig. 26.
Illustratively, the computer program 262 may be divided into one or more modules/units, which are stored in the memory 261 and executed by the processor 260 to complete the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, the segments being used to describe the execution of the computer program 262 in the electronic device 26. For example, the computer program 262 may be divided into a native data module, a preliminary rendering module, and a first super-resolution module, each of which functions as follows (a sketch of this division follows the list):
the native data module is used for acquiring native image data, wherein the native image data is generated by an application program and is not subjected to rendering;
the preliminary rendering module is used for rendering the native image data through first graphic rendering hardware to obtain a first image;
and the first super-resolution module is used for performing super-resolution reconstruction on the first image through second graphics rendering hardware to obtain a target image, wherein the first graphics rendering hardware and the second graphics rendering hardware are different graphics rendering hardware.
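The three-module division above can be pictured with the following Python sketch; the class internals are hypothetical stand-ins, since the computer program 262 itself is not published.

```python
import numpy as np

class NativeDataModule:
    """Acquires native image data generated by an application, prior to rendering."""
    def acquire(self):
        return np.zeros((360, 480, 3), dtype=np.uint8)  # stand-in native frame

class PreliminaryRenderingModule:
    """Renders native image data through the first graphics rendering hardware."""
    def render(self, native):
        return native  # stand-in for the GPU/CPU render pass

class FirstSuperResolutionModule:
    """Reconstructs the first image through the second graphics rendering hardware."""
    def reconstruct(self, first_image, scale=3):
        h, w, c = first_image.shape
        return np.zeros((h * scale, w * scale, c), dtype=first_image.dtype)

native_data = NativeDataModule()
preliminary = PreliminaryRenderingModule()
super_res = FirstSuperResolutionModule()
target = super_res.reconstruct(preliminary.render(native_data.acquire()))
print(target.shape)  # (1080, 1440, 3): a 3x example scale
```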
The electronic device 26 may be a computing device such as a desktop computer, a notebook, a palmtop computer, or a cloud server. The electronic device may include, but is not limited to, the processor 260 and the memory 261. Those skilled in the art will appreciate that fig. 26 is merely an example of the electronic device 26 and does not constitute a limitation on it; the device may include more or fewer components than shown, combine some components, or use different components; for example, it may also include input and output devices, network access devices, buses, and so on.
The processor 260 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 261 may be an internal storage unit of the electronic device 26, such as a hard disk or memory of the electronic device 26. The memory 261 may also be an external storage device of the electronic device 26, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card provided on the electronic device 26. Further, the memory 261 may include both an internal storage unit and an external storage device of the electronic device 26. The memory 261 is used to store the computer program and other programs and data required by the electronic device, and may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/electronic device and method may be implemented in other ways. For example, the above-described apparatus/electronic device embodiments are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or hardware may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer readable storage medium. Based on such understanding, all or part of the flow in the method of the embodiments described above can be realized by a computer program, which can be stored in a computer-readable storage medium and can realize the steps of the embodiments of the methods described above when the computer program is executed by a processor. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, recording medium, usb disk, removable hard disk, magnetic disk, optical disk, computer Memory, Read-Only Memory (ROM), Random Access Memory (RAM), electrical carrier wave signals, telecommunications signals, software distribution medium, and the like. It should be noted that the computer readable medium may contain content that is subject to appropriate increase or decrease as required by legislation and patent practice in jurisdictions, for example, in some jurisdictions, computer readable media does not include electrical carrier signals and telecommunications signals as is required by legislation and patent practice.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (17)

1. An image processing method applied to a first electronic device, comprising:
the first electronic device acquires native image data, wherein the native image data is generated by an application program and has not yet been rendered;
the first electronic device renders the native image data through first graphics rendering hardware to obtain a first image;
and the first electronic device performs super-resolution reconstruction on the first image through second graphics rendering hardware to obtain a target image, wherein the first graphics rendering hardware and the second graphics rendering hardware are different graphics rendering hardware.
2. The image processing method of claim 1, wherein the first electronic device performing super-resolution reconstruction on the first image through the second graphics rendering hardware to obtain a target image comprises:
the first electronic device acquires the identifier of the application program;
the first electronic device searches for a target super-resolution model associated with the identifier;
and the first electronic device performs super-resolution reconstruction on the first image through the second graphics rendering hardware and the found target super-resolution model to obtain the target image.
3. The image processing method of claim 2, further comprising, after the first electronic device searches for the target super-resolution model associated with the identifier:
if the target super-resolution model associated with the identifier is not found, the first electronic device performs super-resolution reconstruction on the first image through the second graphics rendering hardware and a preset general super-resolution model to obtain the target image.
4. The image processing method of claim 1, wherein the first electronic device rendering the native image data through the first graphics rendering hardware to obtain a first image comprises:
the first electronic device renders the native image data through the first graphics rendering hardware at a preset first image resolution to obtain the first image.
5. The image processing method of claim 4, wherein the first electronic device performing super-resolution reconstruction on the first image through the second graphics rendering hardware to obtain a target image comprises:
the first electronic device performs super-resolution reconstruction on the first image through the second graphics rendering hardware and a single enhanced super-resolution model to obtain the target image, wherein the resolution of the first image is consistent with that of the target image, and the single enhanced super-resolution model is a super-resolution model in which the input image and the output image have the same image resolution.
6. The image processing method of claim 4, wherein the first electronic device performing super-resolution reconstruction on the first image through the second graphics rendering hardware to obtain a target image comprises:
the first electronic device performs super-resolution reconstruction on the first image through the second graphics rendering hardware and a multiple enhanced super-resolution model to obtain the target image, wherein the resolution of the first image is smaller than that of the target image, and the multiple enhanced super-resolution model is a super-resolution model in which the image resolution of the input image is smaller than that of the output image.
7. The image processing method of claim 1, wherein the first electronic device rendering the native image data through the first graphics rendering hardware to obtain a first image comprises:
the first electronic device renders the native image data through a graphics processor to obtain the first image;
correspondingly, the first electronic device performing super-resolution reconstruction on the first image through the second graphics rendering hardware to obtain a target image comprises:
the first electronic device performs super-resolution reconstruction on the first image through a neural network processor to obtain the target image.
8. An image processing method applied to a second electronic device, comprising:
the second electronic device receives a first image sent by a first electronic device, wherein the first image is an image obtained by the first electronic device rendering native image data generated by an application program;
and the second electronic device performs super-resolution reconstruction on the first image to obtain a target image.
9. The image processing method of claim 8, wherein the second electronic device performing super-resolution reconstruction on the first image to obtain a target image comprises:
the second electronic device acquires the identifier of the application program;
the second electronic device searches for a target super-resolution model associated with the identifier;
and the second electronic device performs super-resolution reconstruction on the first image through the found target super-resolution model to obtain the target image.
10. The image processing method of claim 9, further comprising, after the second electronic device searches for the target super-resolution model associated with the identifier:
if the target super-resolution model associated with the identifier is not found, the second electronic device performs super-resolution reconstruction on the first image through a preset general super-resolution model to obtain the target image.
11. The image processing method of claim 8, wherein a first resolution of the first image and the image resolution of the target image are identical;
the second electronic device performing super-resolution reconstruction on the first image to obtain a target image comprises:
the second electronic device performs super-resolution reconstruction on the first image through a single enhanced super-resolution model to obtain the target image, wherein the single enhanced super-resolution model is a super-resolution model in which the input image and the output image have the same image resolution.
12. The image processing method of claim 8, wherein a first resolution of the first image is lower than the image resolution of the target image;
the second electronic device performing super-resolution reconstruction on the first image to obtain a target image comprises:
the second electronic device up-samples the first image to obtain a second image, wherein the image resolution of the second image is consistent with the image resolution of the target image;
and the second electronic device performs super-resolution reconstruction on the second image through a single enhanced super-resolution model to obtain the target image, wherein the single enhanced super-resolution model is a super-resolution model in which the input image and the output image have the same image resolution.
13. The image processing method of claim 8, wherein a first resolution of the first image is lower than the resolution of the target image;
the second electronic device performing super-resolution reconstruction on the first image to obtain a target image comprises:
the second electronic device performs super-resolution reconstruction on the first image through a multiple enhanced super-resolution model to obtain the target image, wherein the multiple enhanced super-resolution model is a super-resolution model in which the image resolution of the input image is smaller than that of the output image.
14. An image processing system, characterized in that the system comprises a first electronic device and a second electronic device;
the first electronic device is used for rendering native image data generated by an application program to obtain a first image, and sending the first image to the second electronic device;
the second electronic device is configured to perform the image processing method of any one of claims 8 to 13.
15. An electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the method of any of claims 1 to 7 or the method of any of claims 8 to 13 when executing the computer program.
16. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the method of any one of claims 1 to 7 or carries out the method of any one of claims 8 to 13.
17. A chip system, characterized in that the chip system comprises a memory and a processor, the processor executing a computer program stored in the memory to implement the method of any of claims 1 to 7 or to implement the method of any of claims 8 to 13.
CN202010653106.7A 2020-07-08 2020-07-08 Image processing method, system, electronic device and computer readable storage medium Pending CN113935898A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010653106.7A CN113935898A (en) 2020-07-08 2020-07-08 Image processing method, system, electronic device and computer readable storage medium
PCT/CN2021/105060 WO2022007862A1 (en) 2020-07-08 2021-07-07 Image processing method, system, electronic device and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010653106.7A CN113935898A (en) 2020-07-08 2020-07-08 Image processing method, system, electronic device and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN113935898A true CN113935898A (en) 2022-01-14

Family

ID=79273437

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010653106.7A Pending CN113935898A (en) 2020-07-08 2020-07-08 Image processing method, system, electronic device and computer readable storage medium

Country Status (2)

Country Link
CN (1) CN113935898A (en)
WO (1) WO2022007862A1 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114860141A (en) * 2022-05-23 2022-08-05 Oppo广东移动通信有限公司 Image display method, image display device, electronic equipment and computer readable medium
CN115474090A (en) * 2022-08-31 2022-12-13 北京理工大学 Heterogeneous embedded real-time processing architecture supporting video target detection and tracking and application thereof
CN116012474A (en) * 2022-12-13 2023-04-25 昆易电子科技(上海)有限公司 Simulation test image generation and reinjection method and system, industrial personal computer and device
CN117130766A (en) * 2023-01-17 2023-11-28 荣耀终端有限公司 Thread processing method and electronic equipment

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114638951B (en) * 2022-03-29 2023-08-15 北京有竹居网络技术有限公司 House model display method and device, electronic equipment and readable storage medium
CN116033065A (en) * 2022-12-29 2023-04-28 维沃移动通信有限公司 Playing method, playing device, electronic equipment and readable storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10453241B2 (en) * 2017-04-01 2019-10-22 Intel Corporation Multi-resolution image plane rendering within an improved graphics processor microarchitecture
CN107680042B (en) * 2017-09-27 2020-03-31 杭州群核信息技术有限公司 Rendering method, device, engine and storage medium combining texture and convolution network
CN107742317B (en) * 2017-09-27 2020-11-03 杭州群核信息技术有限公司 Rendering method, device and system combining light sensation and convolution network and storage medium
US10540749B2 (en) * 2018-03-29 2020-01-21 Mitsubishi Electric Research Laboratories, Inc. System and method for learning-based image super-resolution
CN110827380B (en) * 2019-09-19 2023-10-17 北京铂石空间科技有限公司 Image rendering method and device, electronic equipment and computer readable medium

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114860141A (en) * 2022-05-23 2022-08-05 Oppo广东移动通信有限公司 Image display method, image display device, electronic equipment and computer readable medium
CN115474090A (en) * 2022-08-31 2022-12-13 北京理工大学 Heterogeneous embedded real-time processing architecture supporting video target detection and tracking and application thereof
CN116012474A (en) * 2022-12-13 2023-04-25 昆易电子科技(上海)有限公司 Simulation test image generation and reinjection method and system, industrial personal computer and device
CN116012474B (en) * 2022-12-13 2024-01-30 昆易电子科技(上海)有限公司 Simulation test image generation and reinjection method and system, industrial personal computer and device
CN117130766A (en) * 2023-01-17 2023-11-28 荣耀终端有限公司 Thread processing method and electronic equipment

Also Published As

Publication number Publication date
WO2022007862A1 (en) 2022-01-13

Similar Documents

Publication Publication Date Title
CN110231905B (en) Screen capturing method and electronic equipment
WO2020253719A1 (en) Screen recording method and electronic device
CN112130742B (en) Full screen display method and device of mobile terminal
CN109559270B (en) Image processing method and electronic equipment
CN115473957B (en) Image processing method and electronic equipment
WO2022007862A1 (en) Image processing method, system, electronic device and computer readable storage medium
CN112887583B (en) Shooting method and electronic equipment
CN112532892B (en) Image processing method and electronic device
CN113838490B (en) Video synthesis method and device, electronic equipment and storage medium
WO2022001258A1 (en) Multi-screen display method and apparatus, terminal device, and storage medium
WO2022095744A1 (en) Vr display control method, electronic device, and computer readable storage medium
CN114756184A (en) Collaborative display method, terminal device and computer-readable storage medium
CN113542574A (en) Shooting preview method under zooming, terminal, storage medium and electronic equipment
CN113438366A (en) Information notification interaction method, electronic device and storage medium
WO2022078116A1 (en) Brush effect picture generation method, image editing method and device, and storage medium
CN115686403A (en) Display parameter adjusting method, electronic device, chip and readable storage medium
CN115964231A (en) Load model-based assessment method and device
CN114827098A (en) Method and device for close shooting, electronic equipment and readable storage medium
CN113495733A (en) Theme pack installation method and device, electronic equipment and computer readable storage medium
CN116051351B (en) Special effect processing method and electronic equipment
CN116048831B (en) Target signal processing method and electronic equipment
CN116700578A (en) Layer synthesis method, electronic device and storage medium
CN117850989A (en) Service calling method, system and electronic equipment
CN116414493A (en) Image processing method, electronic device and storage medium
CN117692693A (en) Multi-screen display method and related equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination