CN114240745A - Image rendering method, device, equipment and medium

Image rendering method, device, equipment and medium

Info

Publication number
CN114240745A
CN114240745A
Authority
CN
China
Prior art keywords
resolution
image
layer
white
rendering
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111300910.8A
Other languages
Chinese (zh)
Inventor
唐立军
余力
唐忠樑
钱虔
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Meiping Meiwu Shanghai Technology Co ltd
Original Assignee
Meiping Meiwu Shanghai Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Meiping Meiwu Shanghai Technology Co ltd filed Critical Meiping Meiwu Shanghai Technology Co ltd
Priority to CN202111300910.8A
Publication of CN114240745A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00: Geometric image transformations in the plane of the image
    • G06T 3/40: Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 3/4053: Scaling of whole images or parts thereof, e.g. expanding or contracting, based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • G06T 3/4076: Scaling of whole images or parts thereof, e.g. expanding or contracting, based on super-resolution, i.e. the output image resolution being higher than the sensor resolution, using the original low-resolution images to iteratively correct the high-resolution images
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00: Image enhancement or restoration
    • G06T 5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/90: Determination of colour characteristics
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20212: Image combination
    • G06T 2207/20221: Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The embodiments of the present disclosure relate to an image rendering method, apparatus, device, and medium. In at least one embodiment of the present disclosure, for a first-resolution rendering task of a target image, the renderer does not directly perform the first-resolution rendering; instead, the renderer performs a second-resolution rendering, the second resolution being smaller than the first resolution. A plurality of layers generated during the second-resolution rendering are then obtained, and the plurality of layers are processed to obtain a first-resolution image corresponding to the target image. Because the second-resolution rendering takes less than half the time of the first-resolution rendering, and processing the plurality of layers takes less than half the time of the second-resolution rendering, the time to render the first-resolution image can be shortened and the user experience improved.

Description

Image rendering method, device, equipment and medium
Technical Field
The embodiment of the disclosure relates to the technical field of image processing, in particular to an image rendering method, an image rendering device, image rendering equipment and an image rendering medium.
Background
For an image rendering task at ultra-high-definition resolution (e.g., 4K or 10K resolution), an existing renderer computes a large number of ray samples for each pixel based on Monte Carlo ray-tracing rendering. The rendering time is long and roughly proportional to the resolution; for example, rendering at 4K resolution takes at least twice as long as at 2K resolution. Therefore, it is desirable to provide an image rendering scheme that reduces rendering time.
Disclosure of Invention
At least one embodiment of the present disclosure provides an image rendering method, apparatus, device, and medium.
In a first aspect, an embodiment of the present disclosure provides an image rendering method, including:
responding to a first resolution rendering task of the target image, and performing second resolution rendering on the target image through a renderer, wherein the second resolution is smaller than the first resolution;
obtaining a plurality of layers from the renderer, wherein the layers are generated by the renderer in the process of rendering at the second resolution;
and processing the plurality of image layers based on the first resolution to obtain a first resolution image corresponding to the target image.
In some embodiments, processing the plurality of image layers based on the first resolution to obtain a first resolution image corresponding to the target image includes:
performing super-resolution reconstruction on each first type of image layer in the plurality of image layers to obtain a first resolution image layer corresponding to each first type of image layer;
amplifying each second type of image layer in the plurality of image layers to obtain a first resolution image layer corresponding to each second type of image layer;
merging the first resolution image layers corresponding to the first type image layers and the first resolution image layers corresponding to the second type image layers to obtain first resolution images corresponding to the target images;
the first type of layer is a layer with the detail information amount larger than or equal to a preset threshold value; the second type of layer is a layer with the detail information amount smaller than a preset threshold value.
In some embodiments, the first type of layer comprises at least one of: a diffuse reflection layer, an illumination layer, a reflection layer and a refraction layer;
the second type of layer comprises at least one of the following layers: a reflection intensity layer, a refraction intensity layer, a highlight layer, a self-luminous layer and a background layer.
In some embodiments, after the processing is performed on the plurality of image layers based on the first resolution to obtain the first resolution image corresponding to the target image, the image rendering method further includes:
performing super-resolution reconstruction on the target image to obtain a first-resolution RGB image;
and removing white edges of the first resolution image corresponding to the target image based on the illumination image layer and the first resolution RGB image to obtain the first resolution image corresponding to the target image after the white edges are removed.
In some embodiments, based on the illumination layer and the first-resolution RGB image, removing a white edge from the first-resolution image corresponding to the target image, and obtaining the first-resolution image after removing the white edge and corresponding to the target image includes:
determining a first resolution white edge mask map corresponding to the illumination layer;
converting the RGB image with the first resolution ratio from an RGB space to an LAB space to obtain an L-channel high-frequency image corresponding to the RGB image with the first resolution ratio;
and removing the white edge of the first resolution image corresponding to the target image based on the first resolution white edge mask graph and the L-channel high-frequency image corresponding to the first resolution RGB image to obtain the first resolution image corresponding to the target image after the white edge is removed.
In some embodiments, determining the first resolution white-edge mask map corresponding to the illumination layer includes:
determining a white edge in the edge of the illumination layer;
setting the value of each pixel point of the white edge as 1, and setting the value of each pixel point of the non-white edge in the edge of the illumination layer as 0 to obtain a second resolution white edge mask map corresponding to the illumination layer;
and amplifying the second-resolution white-edge mask map based on the first resolution, and setting the pixel point with the value greater than 0 obtained after amplification as 1 to obtain the first-resolution white-edge mask map corresponding to the illumination layer.
In some embodiments, determining a white edge in the edges of the illumination layer comprises:
determining edges in the illumination layer;
determining the brightness difference of pixel points on two sides of the edge;
and determining the edge with the brightness difference value larger than or equal to the preset brightness threshold value as a white edge.
In some embodiments, based on the first-resolution white-edge mask map and the L-channel high-frequency image corresponding to the first-resolution RGB image, removing a white edge from the first-resolution image corresponding to the target image, and obtaining the white-edge-removed first-resolution image corresponding to the target image includes:
determining a first-resolution non-white edge mask map corresponding to the illumination layer based on the first-resolution white edge mask map;
converting a first resolution ratio image corresponding to the target image from an RGB space to an LAB space to obtain an L-channel high-frequency image corresponding to the first resolution ratio image corresponding to the target image;
merging the L-channel high-frequency image corresponding to the RGB image with the first resolution and the white edge mask image with the first resolution to obtain a first synthetic image with the first resolution;
combining an L-channel high-frequency image corresponding to a first resolution image corresponding to the target image with the first resolution non-white edge mask image to obtain a first resolution second composite image;
merging the first synthetic image with the first resolution, the second synthetic image with the first resolution and the L-channel low-frequency image corresponding to the RGB image with the first resolution to obtain an L-channel image with the first resolution after white edges are removed;
and converting the white-edge-removed first-resolution L-channel image and the A-channel and B-channel images corresponding to the first-resolution RGB image into RGB space, to obtain the white-edge-removed first-resolution image corresponding to the target image.
In a second aspect, an embodiment of the present disclosure further provides an image rendering apparatus, including:
the rendering unit is used for responding to a first resolution rendering task of the target image and performing second resolution rendering on the target image through the renderer, wherein the second resolution is smaller than the first resolution;
the obtaining unit is used for obtaining a plurality of layers from the renderer, and the layers are generated by the renderer in the process of rendering at the second resolution;
and the processing unit is used for processing the layers based on the first resolution to obtain a first resolution image corresponding to the target image.
In a third aspect, an embodiment of the present disclosure further provides an electronic device, including: a processor and a memory; the processor is adapted to perform the steps of the image rendering method according to the first aspect by calling a program or instructions stored in the memory.
In a fourth aspect, the disclosed embodiments also provide a computer-readable storage medium, where the computer-readable storage medium stores a program or instructions for causing a computer to execute the steps of the image rendering method according to the first aspect.
It can be seen that in at least one embodiment of the present disclosure, for a first-resolution rendering task of a target image, the renderer does not directly perform the first-resolution rendering; instead, the renderer performs a second-resolution rendering, the second resolution being smaller than the first resolution. A plurality of layers generated during the second-resolution rendering are then obtained, and the plurality of layers are processed to obtain a first-resolution image corresponding to the target image. Because the second-resolution rendering takes less than half the time of the first-resolution rendering, and processing the plurality of layers takes less than half the time of the second-resolution rendering, the time to render the first-resolution image can be shortened and the user experience improved.
Drawings
To more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings used in the description of the embodiments or the prior art are briefly described below. It is obvious that the drawings in the following description show only some embodiments of the present disclosure, and that other drawings can be derived from them by those skilled in the art without creative effort.
Fig. 1 is an exemplary flowchart of an image rendering method provided by an embodiment of the present disclosure;
Fig. 2 is an exemplary flowchart for processing a plurality of layers according to an embodiment of the present disclosure;
Fig. 3 is an exemplary flowchart of an image optimization method provided by an embodiment of the present disclosure;
Fig. 4 is an exemplary flowchart for removing white edges according to an embodiment of the present disclosure;
Fig. 5 is an exemplary flowchart for determining a white-edge mask map according to an embodiment of the present disclosure;
Fig. 6 is another exemplary flowchart for removing white edges according to an embodiment of the present disclosure;
Fig. 7 is an exemplary block diagram of an image rendering apparatus provided by an embodiment of the present disclosure;
Fig. 8 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure;
Fig. 9 is a schematic flowchart of super-resolution reconstruction based on the ESRGAN model.
Detailed Description
In order that the above objects, features and advantages of the present disclosure can be more clearly understood, the present disclosure will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the embodiments described are only a few embodiments of the present disclosure, and not all embodiments. The specific embodiments described herein are merely illustrative of the disclosure and are not intended to be limiting. All other embodiments derived by one of ordinary skill in the art from the described embodiments of the disclosure are intended to be within the scope of the disclosure.
It is noted that, in this document, relational terms such as "first" and "second," and the like, may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions.
At present, for an image rendering task at ultra-high-definition resolution, for example a 4K villa panorama, a renderer executes the 4K image rendering task to obtain the corresponding 4K rendered image, and the rendering takes about 500 seconds. Some prior-art solutions perform super-resolution reconstruction of a high-dynamic-range image to a higher resolution based on a generative adversarial network, but artifacts are easily produced at image edges. Other prior-art solutions perform ultra-high-definition reconstruction based on the gradient information of the image, but the effect is weak in fine-detail regions, such as hair, which remain blurry.
In order to reduce rendering time, an embodiment of the present disclosure provides an image rendering scheme. For a 4K image rendering task, the renderer does not directly execute the 4K rendering; instead, it performs 2K rendering, the plurality of layers generated during the 2K rendering are obtained, and these layers are processed to obtain the 4K image. Since the 2K rendering takes about 120 seconds and processing the layers takes about 30 seconds, the 4K image is rendered in about 150 seconds (about 200 seconds in the rare case that the rendering cluster is fully loaded), shortening the rendering time by at least half.
Fig. 1 is an exemplary flowchart of an image rendering method provided by an embodiment of the present disclosure. The execution subject of the image rendering method is any type of electronic device, for example a portable mobile device such as a smartphone, tablet computer, or notebook computer, or a stationary device such as a desktop computer or smart television. The image rendering method may include, but is not limited to, the following steps 101 to 103:
in step 101, in response to a task of rendering the target image at a first resolution, rendering the target image at a second resolution by a renderer, wherein the second resolution is smaller than the first resolution.
The target image may be understood as an original image to be rendered, and the original image is an RGB (red, green, and blue) image.
The first resolution is an ultra high definition resolution, such as 4K resolution, 8K resolution, 10K resolution, 12K resolution, and the like. The first-resolution rendering task may be understood as a task of rendering the original image to be rendered into the first-resolution image.
The second resolution is lower than the first resolution, which may be understood as a low resolution, e.g. a 2K resolution, compared to the first resolution.
In this embodiment, in response to the first-resolution rendering task, the renderer does not directly execute the task; instead, the renderer performs second-resolution rendering, thereby avoiding the long time consumed by rendering directly at the first resolution.
For example, directly performing 4K rendering of the original image would take about 500 seconds. In this embodiment, the electronic device responds to the 4K rendering task by having the renderer perform 2K rendering of the original image instead, which takes about 120 seconds, avoiding the long time required to render the 4K image directly.
In step 102, a plurality of layers are obtained from the renderer, and the plurality of layers are layers generated by the renderer in the process of rendering at the second resolution.
The renderer can generate a plurality of layers in the rendering process, and the plurality of layers comprise at least one of the following layers: diffuse reflection layer, illumination layer, reflection intensity layer, refraction intensity layer, highlight layer, self-luminous layer and background layer. The image output by the renderer is the image obtained by combining the layers by the renderer.
Taking VRay (a renderer) as an example, VRay generates a plurality of layers during rendering: VRayDiffuseFilter (diffuse reflection layer), VRayRawTotalLighting (illumination layer), VRayRawReflection (reflection layer), VRayReflectionFilter (reflection intensity layer), VRayRawRefraction (refraction layer), VRayRefractionFilter (refraction intensity layer), VRaySpecular (highlight layer), VRaySelfIllumination (self-illumination layer), and VRayBackground (background layer).
The image output by VRay is merged according to the following formula:
ComposeImage (the image output by VRay) = VRayDiffuseFilter (diffuse reflection layer) × VRayRawTotalLighting (illumination layer) + VRayRawReflection (reflection layer) × VRayReflectionFilter (reflection intensity layer) + VRayRawRefraction (refraction layer) × VRayRefractionFilter (refraction intensity layer) + VRaySpecular (highlight layer) + VRaySelfIllumination (self-illumination layer) + VRayBackground (background layer).
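For illustration only, this merge is plain elementwise array arithmetic. The following is a minimal sketch assuming each render element has been loaded as a float32 H×W×3 NumPy array keyed by its VRay element name; the dictionary layout and the loading step are assumptions for illustration, not part of the patent:

```python
import numpy as np

def compose_image(layers: dict) -> np.ndarray:
    """Merge VRay render elements into the final image (formula above)."""
    return (layers["VRayDiffuseFilter"] * layers["VRayRawTotalLighting"]
            + layers["VRayRawReflection"] * layers["VRayReflectionFilter"]
            + layers["VRayRawRefraction"] * layers["VRayRefractionFilter"]
            + layers["VRaySpecular"]
            + layers["VRaySelfIllumination"]
            + layers["VRayBackground"])
```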
The layers generated by the renderer during the second-resolution rendering and the image output by the renderer are both at the second resolution.
In step 103, the plurality of image layers are processed based on the first resolution to obtain a first resolution image corresponding to the target image.
Because step 101 performs second-resolution rendering, the long time of direct first-resolution rendering is avoided; however, both the layers generated during rendering and the image output by the renderer are at the second resolution rather than the first resolution. This embodiment therefore processes the plurality of layers based on the first resolution to obtain the first-resolution image corresponding to the target image.
In this embodiment, because the renderer's second-resolution rendering takes less than half the time of first-resolution rendering, and processing the plurality of layers takes less than half the time of the second-resolution rendering, the embodiment of the present disclosure shortens the time to render the first-resolution image and improves the user experience.
For example, for the 4K rendering task of the original image, the electronic device responds by having the renderer perform 2K rendering, which takes about 120 seconds; processing the plurality of layers takes about 30 seconds. The 4K image is therefore rendered in about 150 seconds (about 200 seconds in the rare case that the rendering cluster is fully loaded), compared with about 500 seconds for direct 4K rendering, shortening the rendering time by at least half.
Fig. 2 is an exemplary flowchart for processing a plurality of layers according to an embodiment of the present disclosure. In some embodiments, the plurality of layers are generated by the renderer during second-resolution (e.g., 2K) rendering and are therefore 2K-resolution layers. The electronic device processes these layers based on the first resolution (e.g., 4K) to obtain the first-resolution image.
In step 201, super-resolution reconstruction is performed on each first-type layer in the plurality of layers to obtain a first resolution layer corresponding to each first-type layer.
The first type of layer is a layer whose amount of detail information is greater than or equal to a preset threshold; the detail information includes, but is not limited to, information representing image details, such as texture information. The first type of layer includes at least one of the following: a diffuse reflection layer, an illumination layer, a reflection layer, and a refraction layer. Taking VRay (a renderer) as an example, the first type of layer includes VRayDiffuseFilter (diffuse reflection layer), VRayRawTotalLighting (illumination layer), VRayRawReflection (reflection layer), and VRayRawRefraction (refraction layer).
Since the layers generated by the renderer are HDR (High Dynamic Range) layers, in order to reduce the amount of data to be processed, high-dynamic-range compression is performed on each first-type layer before its super-resolution reconstruction. High-dynamic-range compression is a mature technique in image processing and is not described further here.
After the high-dynamic-range compression, super-resolution reconstruction is performed on each first-type layer through a super-resolution model. The super-resolution model may be any of various existing models, such as the Enhanced Super-Resolution Generative Adversarial Network (ESRGAN) model. In this embodiment, performing super-resolution reconstruction through the super-resolution model makes the output layer contain clearer image detail information.
Because each first-type layer is compressed before being reconstructed by the super-resolution model, high-dynamic-range restoration must then be performed on the layer output by the model to obtain the corresponding first-resolution layer.
For example, the renderer generates a 2K-resolution (1920 × 1080) HDR illumination layer. The electronic device performs high-dynamic-range compression on the illumination layer and inputs it into the ESRGAN model for 4× super-resolution reconstruction; it then performs high-dynamic-range restoration on the 4K-resolution (3840 × 2160) LDR (Low Dynamic Range) illumination layer output by the ESRGAN model, obtaining a 4K-resolution HDR illumination layer.
Fig. 9 is a schematic flowchart of super-resolution reconstruction based on the ESRGAN model. In Fig. 9, the ESRGAN model comprises a plurality of convolutional layers (Conv), a plurality of basic blocks (Basic Block), and an upsampling layer (Upsampling); the connections between the layers are as shown in Fig. 9, and their functions can be found in the ESRGAN literature and are not described again.
In Fig. 9, the renderer generates a 2K-resolution (1920 × 1080) HDR illumination layer. First, the HDR illumination layer is high-dynamic-range compressed to obtain an LR (Low Resolution) LDR illumination layer, that is, a 2K-resolution LDR illumination layer. This is then input into the ESRGAN model for 4× super-resolution reconstruction, and the model outputs an SR (Super Resolution) LDR illumination layer, that is, a 4K-resolution LDR illumination layer. Finally, high-dynamic-range restoration is performed on the 4K-resolution LDR illumination layer to obtain a 4K-resolution HDR illumination layer.
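A minimal sketch of this flow is given below. The Reinhard-style compression curve and the `esrgan_upscale` callable (e.g., a loaded PyTorch ESRGAN generator) are assumptions for illustration; the patent does not fix the compression function or the model interface.

```python
import numpy as np

def hdr_compress(hdr: np.ndarray) -> np.ndarray:
    # HDR -> LDR in [0, 1); assumed Reinhard-style tone mapping
    return hdr / (1.0 + hdr)

def hdr_restore(ldr: np.ndarray) -> np.ndarray:
    # inverse of the compression above (LDR -> HDR)
    ldr = np.clip(ldr, 0.0, 1.0 - 1e-6)
    return ldr / (1.0 - ldr)

def super_resolve_layer(hdr_layer_2k: np.ndarray, esrgan_upscale) -> np.ndarray:
    ldr_lr = hdr_compress(hdr_layer_2k)   # 2K LDR input to the model
    ldr_sr = esrgan_upscale(ldr_lr)       # 4K LDR output of the model
    return hdr_restore(ldr_sr)            # 4K HDR layer
```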
In step 202, each second type of map layer in the plurality of map layers is enlarged to obtain a first resolution map layer corresponding to each second type of map layer.
The second type of layer is a layer whose amount of detail information is smaller than the preset threshold. The second type of layer includes at least one of the following: a reflection intensity layer, a refraction intensity layer, a highlight layer, a self-illumination layer, and a background layer. Taking VRay (a renderer) as an example, the second type of layer includes VRayReflectionFilter (reflection intensity layer), VRayRefractionFilter (refraction intensity layer), VRaySpecular (highlight layer), VRaySelfIllumination (self-illumination layer), and VRayBackground (background layer).
In this embodiment, each second type of map layer in the plurality of map layers is subjected to amplification (Resize) processing based on the first resolution, so as to obtain a first resolution map layer corresponding to each second type of map layer. For example, for the second type of layer with 2K resolution, the layer can be enlarged to 4K resolution through Resize processing.
In step 203, the first resolution image layers corresponding to the first type image layers and the first resolution image layers corresponding to the second type image layers are merged to obtain a first resolution image corresponding to the target image.
Taking VRay (a renderer) as an example, the first-resolution image corresponding to the target image is merged by the following formula:
ComposeImage (first-resolution image corresponding to the target image) = first-resolution VRayDiffuseFilter (diffuse reflection layer) × first-resolution VRayRawTotalLighting (illumination layer) + first-resolution VRayRawReflection (reflection layer) × first-resolution VRayReflectionFilter (reflection intensity layer) + first-resolution VRayRawRefraction (refraction layer) × first-resolution VRayRefractionFilter (refraction intensity layer) + first-resolution VRaySpecular (highlight layer) + first-resolution VRaySelfIllumination (self-illumination layer) + first-resolution VRayBackground (background layer).
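Putting steps 201 to 203 together, a sketch under the same assumptions as the earlier snippets (reusing super_resolve_layer and compose_image) might look as follows; the FIRST_TYPE set mirrors the first-type layers named above, and OpenCV's resize stands in for the Resize processing:

```python
import cv2
import numpy as np

FIRST_TYPE = {"VRayDiffuseFilter", "VRayRawTotalLighting",
              "VRayRawReflection", "VRayRawRefraction"}

def render_first_resolution(layers_2k: dict, esrgan_upscale,
                            size: tuple = (3840, 2160)) -> np.ndarray:
    layers_hi = {}
    for name, layer in layers_2k.items():
        if name in FIRST_TYPE:
            # detail-rich layers: super-resolution reconstruction (step 201)
            layers_hi[name] = super_resolve_layer(layer, esrgan_upscale)
        else:
            # detail-poor layers: plain enlargement suffices (step 202)
            layers_hi[name] = cv2.resize(layer, size, interpolation=cv2.INTER_LINEAR)
    return compose_image(layers_hi)       # merge (step 203)
```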
Fig. 3 is an exemplary flowchart of an image optimization method provided by an embodiment of the present disclosure. The method may optimize the first-resolution image corresponding to the target image so as to remove white edges from it. A white edge is a white artifact caused by misalignment of the edges of the plurality of first-resolution layers when they are merged.
In step 301, super-resolution reconstruction is performed on the target image to obtain a first-resolution RGB image.
In this embodiment, the super-resolution reconstruction of the target image is similar to that described in step 201 of Fig. 2 and is not repeated here.
In some embodiments, in order to reduce the data processing amount, the target image may be subjected to high-Dynamic compression and then super-resolution reconstruction, and the obtained first-resolution RGB image is an LDR (Low-Dynamic Range) image.
Note that there is no white edge on a hard edge (e.g., an edge of a table) in the first resolution RGB image.
In step 302, based on the illumination layer and the first resolution RGB image, a white edge of the first resolution image corresponding to the target image is removed, so as to obtain a first resolution image corresponding to the target image after the white edge is removed.
The inventors found that white edges occur at edges of the illumination layer where the luminance difference is large, while the first-resolution RGB image has no white edges. Therefore, in this embodiment, the white edges in the first-resolution image corresponding to the target image are removed based on the illumination layer and the first-resolution RGB image.
Fig. 4 is an exemplary flowchart for removing white edges according to an embodiment of the present disclosure, applicable to step 302 in Fig. 3. As shown in Fig. 4, removing white edges from the first-resolution image corresponding to the target image based on the illumination layer and the first-resolution RGB image, to obtain the white-edge-removed first-resolution image, includes steps 401 to 403:
in step 401, a first resolution white-edge mask map corresponding to the illumination layer is determined.
The illumination layer is generated during the renderer's second-resolution rendering, and its resolution is therefore the second resolution.
The resolution of the first-resolution white-side mask graph is the first resolution, the value of each pixel point corresponding to a white side in the first-resolution white-side mask graph is 1, and the value of each pixel point corresponding to a non-white side is 0.
In step 402, the first-resolution RGB image is converted from RGB space to LAB (a device-independent color model) space to obtain an L (luminance) channel high-frequency image corresponding to the first-resolution RGB image.
The L-channel high-frequency image is a luminance image with a large gradient change, and corresponds to details such as edges and textures in the first-resolution RGB image. The L-channel low-frequency image corresponds to a non-edge portion in the first-resolution RGB image.
The L-channel high-frequency image is determined as follows. If the first-resolution RGB image is an HDR (High Dynamic Range) image, it is first high-dynamic-range compressed and then converted from RGB space to LAB space; if it is an LDR (Low Dynamic Range) image, it is converted directly. After conversion to LAB space, the L-channel image is separated and mean-filtered to obtain the L-channel low-frequency image; the L-channel high-frequency image is then obtained by subtracting the L-channel low-frequency image from the L-channel image.
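As a minimal sketch, the split can be written as follows, assuming an LDR RGB image stored as float32 in [0, 1]; the 5×5 mean-filter kernel size is an assumption, since the patent does not specify it:

```python
import cv2
import numpy as np

def split_l_channel(rgb_ldr: np.ndarray, ksize: int = 5):
    lab = cv2.cvtColor(rgb_ldr.astype(np.float32), cv2.COLOR_RGB2LAB)
    l, a, b = cv2.split(lab)
    l_low = cv2.blur(l, (ksize, ksize))   # mean filtering -> L-channel low-frequency image
    l_high = l - l_low                    # residual -> L-channel high-frequency image
    return l_high, l_low, a, b
```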
In step 403, based on the first-resolution white-edge mask map and the L-channel high-frequency image corresponding to the first-resolution RGB image, a white edge is removed from the first-resolution image corresponding to the target image, so as to obtain a white-edge-removed first-resolution image corresponding to the target image.
In this embodiment, by separating the luminance image (i.e., the L-channel high-frequency image) and applying the first-resolution white-edge mask map, the white edges of the first-resolution image corresponding to the target image can be removed, yielding the white-edge-removed first-resolution image.
Fig. 5 is an exemplary flowchart for determining a white-edge mask map according to an embodiment of the present disclosure, applicable to step 401 in Fig. 4. As shown in Fig. 5, determining the first-resolution white-edge mask map corresponding to the illumination layer includes steps 501 to 503:
in step 501, a white edge among the edges of the illumination layer is determined.
In this embodiment, first, an edge in an illumination layer is determined; then determining the brightness difference of pixel points on two sides of the edge; and finally, determining the edge with the brightness difference value larger than or equal to the preset brightness threshold value as a white edge.
The preset brightness threshold is, for example, 0.6 candela per square meter, and a person skilled in the art can set the preset brightness threshold according to actual needs, and the specific value of the preset brightness threshold is not limited in this embodiment.
In some embodiments, the edges in the illumination layer may be determined by the Laplacian edge operator or another edge-detection algorithm.
In step 502, the value of each pixel point of the white edge is set to 1, and the value of each pixel point of the non-white edge in the edge of the illumination layer is set to 0, so as to obtain a second resolution white edge mask map corresponding to the illumination layer.
The illumination layer is generated during the renderer's second-resolution rendering, so its resolution is the second resolution, and the white-edge mask map determined from it is also at the second resolution.
In step 503, the second resolution white-edge mask map is amplified based on the first resolution, and the pixel points with the value greater than 0 obtained after the amplification processing are set to 1, so as to obtain the first resolution white-edge mask map corresponding to the illumination layer.
In this embodiment, after the second-resolution white-edge mask map is enlarged (Resize) based on the first resolution, there are a number of pixels whose values are greater than 0 but less than 1, located on both sides of the white edge. To improve the reliability of white-edge removal, every pixel with a value greater than 0 after enlargement is set to 1, yielding the first-resolution white-edge mask map corresponding to the illumination layer.
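A sketch of steps 501 to 503 follows, assuming the illumination layer is a single-channel luminance image. The Laplacian operator and the re-binarization after Resize follow the text; comparing "pixel points on two sides of the edge" is simplified here to a local max-min luminance difference, which is an assumption for illustration:

```python
import cv2
import numpy as np

def white_edge_mask(light_2k: np.ndarray, size: tuple = (3840, 2160),
                    luma_thresh: float = 0.6) -> np.ndarray:
    kernel = np.ones((3, 3), np.uint8)
    edge_px = np.abs(cv2.Laplacian(light_2k, cv2.CV_32F)) > 1e-3   # step 501: edges
    # luminance difference across the edge, approximated by local max - local min
    luma_diff = cv2.dilate(light_2k, kernel) - cv2.erode(light_2k, kernel)
    mask_2k = (edge_px & (luma_diff >= luma_thresh)).astype(np.float32)  # step 502
    mask_hi = cv2.resize(mask_2k, size, interpolation=cv2.INTER_LINEAR)
    return (mask_hi > 0).astype(np.float32)   # step 503: values > 0 are set to 1
```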
Fig. 6 is another exemplary flowchart for removing white edges according to an embodiment of the present disclosure, applicable to step 403 in Fig. 4. As shown in Fig. 6, removing the white edges from the first-resolution image corresponding to the target image, based on the first-resolution white-edge mask map and the L-channel high-frequency image corresponding to the first-resolution RGB image, to obtain the white-edge-removed first-resolution image, includes steps 601 to 606:
in step 601, a first-resolution non-white-edge mask map corresponding to the illumination layer is determined based on the first-resolution white-edge mask map.
Each pixel with a value of 1 in the first-resolution white-edge mask map is set to 0, and each pixel with a value of 0 is set to 1, yielding the first-resolution non-white-edge mask map corresponding to the illumination layer.
In step 602, the first resolution image corresponding to the target image is converted from the RGB space to the LAB space, so as to obtain an L-channel high-frequency image corresponding to the first resolution image corresponding to the target image.
The L-channel high-frequency image is a luminance image with a large gradient change, and corresponds to details such as edges and textures in the first-resolution image. The L-channel low-frequency image corresponds to a non-edge portion in the first resolution image.
The L-channel high-frequency image is determined as follows. If the first-resolution image is an HDR (High Dynamic Range) image, it is first high-dynamic-range compressed and then converted from RGB space to LAB space; if it is an LDR (Low Dynamic Range) image, it is converted directly. After conversion to LAB space, the L-channel image is separated and mean-filtered to obtain the L-channel low-frequency image; the L-channel high-frequency image is then obtained by subtracting the L-channel low-frequency image from the L-channel image.
In step 603, the L-channel high-frequency image corresponding to the RGB image with the first resolution and the white-edge mask map with the first resolution are merged to obtain a first composite image with the first resolution.
First-resolution first composite image = L-channel high-frequency image corresponding to the first-resolution RGB image × first-resolution white-edge mask map.
As can be seen, there is no white edge in the first composite image with the first resolution, that is, the white edge is removed by using the L-channel high-frequency image corresponding to the RGB image with the first resolution.
In step 604, the L-channel high-frequency image corresponding to the first-resolution image corresponding to the target image and the first-resolution non-white-edge mask map are merged to obtain a first-resolution second composite image.
First-resolution second composite image = L-channel high-frequency image corresponding to the first-resolution image of the target image × first-resolution non-white-edge mask map.
As can be seen, in this embodiment, the non-white-edge regions retain their detail information from the L-channel high-frequency image corresponding to the first-resolution image of the target image.
In step 605, the first-resolution first synthesized image, the first-resolution second synthesized image, and the L-channel low-frequency image corresponding to the first-resolution RGB image are merged to obtain the first-resolution L-channel image without the white edge.
White-edge-removed first-resolution L-channel image = first-resolution first composite image + first-resolution second composite image + L-channel low-frequency image corresponding to the first-resolution RGB image.
As can be seen, in this embodiment, the white edges are removed by using the L-channel high-frequency image corresponding to the first-resolution RGB image, the non-white-edge regions retain detail from the L-channel high-frequency image corresponding to the first-resolution image of the target image, and the L-channel low-frequency image corresponding to the first-resolution RGB image is added back, yielding the white-edge-removed first-resolution L-channel image.
In step 606, the white-edge-removed first-resolution L-channel image and the A-channel and B-channel images corresponding to the first-resolution RGB image are converted back into RGB space, obtaining the white-edge-removed first-resolution image corresponding to the target image.
If the white-edge-removed first-resolution image corresponding to the target image is an LDR (Low Dynamic Range) image, high-dynamic-range restoration is then performed.
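Combining the helpers sketched earlier, steps 601 to 606 reduce to a masked blend in the L channel, as in the following sketch (same assumptions as before; `img_hi` is the merged first-resolution image and `rgb_sr` the super-resolved RGB image of the target image, both LDR):

```python
import cv2

def remove_white_edges(img_hi, rgb_sr, light_2k):
    mask = white_edge_mask(light_2k)              # first-resolution white-edge mask
    hf_img, _, _, _ = split_l_channel(img_hi)     # step 602
    hf_rgb, lf_rgb, a, b = split_l_channel(rgb_sr)
    # steps 601, 603-605: white edges take the artifact-free RGB high frequency,
    # other regions keep the target image's high frequency; add the low frequency back
    l_clean = hf_rgb * mask + hf_img * (1.0 - mask) + lf_rgb
    lab = cv2.merge([l_clean, a, b])              # step 606: LAB -> RGB
    return cv2.cvtColor(lab, cv2.COLOR_LAB2RGB)
```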
In some embodiments, the time consumed for 4K rendering of different scenes by the common 4K rendering scheme in the prior art (i.e., the renderer directly executes the 4K rendering task) and by the image rendering scheme disclosed in the above embodiments (denoted the super-resolution 4K rendering scheme) is as follows:
| Scene name | Common 4K rendering scheme (seconds) | Super-resolution 4K rendering scheme (seconds) |
|---|---|---|
| Nezha 3.0 (daytime) | 412 | 145 |
| Luxurious villa 3.0 | 394 | 191 |
| Villa 3.0 | 383 | 200 |
By contrast, the time to generate a 4K-resolution image based on the image rendering scheme disclosed in the above embodiments is within 200 seconds (the 200-second case occurring only in the rare event that the rendering cluster is fully loaded), shortening the rendering time by at least half.
It is noted that, for simplicity of description, the foregoing method embodiments are described as a series of acts or combination of acts, but those skilled in the art will appreciate that the disclosed embodiments are not limited by the order of acts described, as some steps may occur in other orders or concurrently with other steps in accordance with the disclosed embodiments. In addition, those skilled in the art can appreciate that the embodiments described in the specification all belong to alternative embodiments.
Fig. 7 is an exemplary block diagram of an image rendering apparatus according to an embodiment of the present disclosure. The image rendering device can be applied to any type of electronic equipment, such as portable mobile equipment like a smart phone, a tablet computer and a notebook computer, and fixed equipment like a desktop computer and a smart television. As shown in fig. 7, the image rendering apparatus may include, but is not limited to, the following units: a rendering unit 71, an acquisition unit 72, and a processing unit 73.
A rendering unit 71, configured to perform a second-resolution rendering on the target image through the renderer in response to a first-resolution rendering task of the target image, where the second resolution is smaller than the first resolution;
an obtaining unit 72, configured to obtain a plurality of layers from the renderer, where the plurality of layers are layers generated by the renderer in a rendering process at a second resolution;
the processing unit 73 is configured to process the plurality of image layers based on the first resolution to obtain a first resolution image corresponding to the target image.
In some embodiments, the processing unit 73 is configured to: performing super-resolution reconstruction on each first type of image layer in the plurality of image layers to obtain a first resolution image layer corresponding to each first type of image layer; amplifying each second type of image layer in the plurality of image layers to obtain a first resolution image layer corresponding to each second type of image layer; merging the first resolution image layers corresponding to the first type image layers and the first resolution image layers corresponding to the second type image layers to obtain first resolution images corresponding to the target images; the first type of layer is a layer with the detail information amount larger than or equal to a preset threshold value; the second type of layer is a layer with the detail information amount smaller than a preset threshold value.
In some embodiments, the first type of layer comprises at least one of: a diffuse reflection layer, an illumination layer, a reflection layer and a refraction layer; the second type of layer comprises at least one of the following layers: a reflection intensity layer, a refraction intensity layer, a highlight layer, a self-luminous layer and a background layer.
In some embodiments, the image rendering apparatus may further include a reconstruction unit 74 and an optimization unit 75, which are not shown in Fig. 7:
and the reconstruction unit 74 is configured to perform super-resolution reconstruction on the target image to obtain a first-resolution RGB image.
The optimizing unit 75 is configured to remove a white edge from the first resolution image corresponding to the target image based on the illumination layer and the first resolution RGB image, so as to obtain a first resolution image corresponding to the target image after the white edge is removed.
In some embodiments, optimization unit 75 includes a determine subunit, a convert subunit, and a remove subunit:
the determining subunit is used for determining a first resolution ratio white edge mask map corresponding to the illumination layer;
the conversion subunit is configured to convert the first-resolution RGB image from an RGB space to an LAB space, so as to obtain an L-channel high-frequency image corresponding to the first-resolution RGB image;
and the removing subunit is used for removing the white edge of the first resolution image corresponding to the target image based on the first resolution white edge mask map and the L-channel high-frequency image corresponding to the first resolution RGB image to obtain the first resolution image corresponding to the target image after the white edge is removed.
In some embodiments, the determining subunit is to: determining a white edge in the edge of the illumination layer; setting the value of each pixel point of the white edge as 1, and setting the value of each pixel point of the non-white edge in the edge of the illumination layer as 0 to obtain a second resolution white edge mask map corresponding to the illumination layer; and amplifying the second-resolution white-edge mask map based on the first resolution, and setting the pixel point with the value greater than 0 obtained after amplification as 1 to obtain the first-resolution white-edge mask map corresponding to the illumination layer.
In some embodiments, the determining subunit is to: determining edges in the illumination layer; determining the brightness difference of pixel points on two sides of the edge; and determining the edge with the brightness difference value larger than or equal to the preset brightness threshold value as a white edge.
In some embodiments, the removal subunit is to:
determining a first-resolution non-white edge mask map corresponding to the illumination layer based on the first-resolution white edge mask map;
converting a first resolution ratio image corresponding to the target image from an RGB space to an LAB space to obtain an L-channel high-frequency image corresponding to the first resolution ratio image corresponding to the target image;
merging the L-channel high-frequency image corresponding to the RGB image with the first resolution and the white edge mask image with the first resolution to obtain a first synthetic image with the first resolution;
combining an L-channel high-frequency image corresponding to a first resolution image corresponding to the target image with the first resolution non-white edge mask image to obtain a first resolution second composite image;
merging the first synthetic image with the first resolution, the second synthetic image with the first resolution and the L-channel low-frequency image corresponding to the RGB image with the first resolution to obtain an L-channel image with the first resolution after white edges are removed;
and converting the white-edge-removed first-resolution L-channel image and the A-channel and B-channel images corresponding to the first-resolution RGB image into RGB space, to obtain the white-edge-removed first-resolution image corresponding to the target image.
For details of the embodiments of the image rendering apparatus disclosed above, refer to the embodiments of the image rendering method; they are not repeated here to avoid repetition.
In some embodiments, the division of each unit in the above apparatus embodiments is only one logic function division, and there may be another division manner in actual implementation, for example, at least two units may be implemented as one unit; each unit may also be divided into a plurality of sub-units. It will be understood that the various units or sub-units may be implemented in electronic hardware, or a combination of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application.
Fig. 8 is a schematic structural diagram of an electronic device provided in an embodiment of the present disclosure. The electronic device is, for example, a portable mobile device such as a smart phone, a tablet computer, and a notebook computer, and is, for example, a stationary device such as a desktop computer and a smart television.
As shown in fig. 8, the electronic apparatus includes: at least one processor 801, at least one memory 802, and at least one communication interface 803. Various components in the electronic device are coupled together by a bus system 804. A communication interface 803 for information transmission with an external device. Understandably, the bus system 804 is used to enable connective communication between these components. The bus system 804 includes a power bus, a control bus, and a status signal bus in addition to a data bus. For clarity of illustration, the various buses are labeled as bus system 804 in fig. 8.
It will be appreciated that the memory 802 in this embodiment can be either volatile memory or nonvolatile memory, or can include both volatile and nonvolatile memory.
In some embodiments, memory 802 stores elements, executable units or data structures, or a subset thereof, or an expanded set thereof as follows: an operating system and an application program.
The operating system includes various system programs, such as a framework layer, a core library layer, a driver layer, and the like, and is used for implementing various basic tasks and processing hardware-based tasks. The application programs, including various application programs such as a Media Player (Media Player), a Browser (Browser), etc., are used to implement various application tasks. The program for implementing the image rendering method provided by the embodiment of the present disclosure may be included in an application program.
In the embodiment of the present disclosure, the processor 801 is configured to execute the steps of the embodiments of the image rendering method provided by the embodiment of the present disclosure by calling a program or an instruction stored in the memory 802, specifically, a program or an instruction stored in an application program.
The image rendering method provided by the embodiment of the present disclosure may be applied to the processor 801 or implemented by the processor 801. The processor 801 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware or by instructions in the form of software in the processor 801. The processor 801 may be a general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The steps of the image rendering method provided by the embodiment of the present disclosure may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software units in the decoding processor. The software units may be located in RAM, flash memory, ROM, PROM, EPROM, registers, or other storage media well known in the art. The storage medium is located in the memory 802, and the processor 801 reads the information in the memory 802 and completes the steps of the method in combination with its hardware.
The embodiments of the present disclosure also provide a computer-readable storage medium, where the computer-readable storage medium stores a program or an instruction, where the program or the instruction causes a computer to execute steps of various embodiments of an image rendering method, and in order to avoid repeated descriptions, the steps are not described herein again. In some embodiments, the computer-readable storage medium is a non-transitory computer-readable storage medium.
The embodiments of the present disclosure also provide a computer program product, where the computer program product includes a computer program, the computer program is stored in a non-transitory computer-readable storage medium, and at least one processor of the computer reads and executes the computer program from the storage medium, so that the computer executes the steps of the embodiments of the image rendering method, and details are not repeated here in order to avoid repeated descriptions.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but also other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
Those skilled in the art will appreciate that while some embodiments described herein include some features included in other embodiments, rather than others, combinations of features of different embodiments are meant to be within the scope of the disclosure and form different embodiments.
Those skilled in the art will appreciate that the description of each embodiment has a respective emphasis, and reference may be made to the related description of other embodiments for those parts of an embodiment that are not described in detail.
Although the embodiments of the present disclosure have been described in conjunction with the accompanying drawings, those skilled in the art may make various modifications and variations without departing from the spirit and scope of the present disclosure, and such modifications and variations fall within the scope defined by the appended claims.

Claims (10)

1. A method of image rendering, the method comprising:
performing second-resolution rendering on a target image through a renderer in response to a first-resolution rendering task of the target image, wherein the second resolution is smaller than the first resolution;
obtaining a plurality of layers from the renderer, wherein the layers are layers generated by the renderer in the process of rendering at the second resolution;
and processing the layers based on the first resolution to obtain a first resolution image corresponding to the target image.
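By way of illustration only, the following non-limiting Python sketch outlines the flow of claim 1. The renderer interface (render/get_layers) is a hypothetical stand-in rather than an API defined by this disclosure, and process_layers is sketched under claim 2 below.

```python
# Illustrative sketch of claim 1 (hypothetical renderer API, not from this disclosure).
def render_at_first_resolution(renderer, scene, first_res):
    # Perform the rendering at a lower "second" resolution,
    # here assumed to be half the target size in each dimension.
    second_res = (first_res[0] // 2, first_res[1] // 2)
    renderer.render(scene, resolution=second_res)   # assumed renderer API
    layers = renderer.get_layers()                  # assumed renderer API
    # Process the layers up to the first resolution (see claim 2).
    return process_layers(layers, first_res)
```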
2. The method of claim 1, wherein the processing the plurality of image layers based on the first resolution to obtain a first resolution image corresponding to the target image comprises:
performing super-resolution reconstruction on each first type of image layer in the plurality of image layers to obtain a first resolution image layer corresponding to each first type of image layer;
magnifying each second type of image layer in the plurality of image layers to obtain a first resolution image layer corresponding to each second type of image layer;
merging the first resolution image layer corresponding to each first type of image layer and the first resolution image layer corresponding to each second type of image layer to obtain the first resolution image corresponding to the target image;
wherein the first type of layer is a layer whose amount of detail information is greater than or equal to a preset threshold, and the second type of layer is a layer whose amount of detail information is smaller than the preset threshold.
3. The method of claim 1, wherein the plurality of layers comprises an illumination layer;
after the processing is performed on the plurality of image layers based on the first resolution to obtain the first resolution image corresponding to the target image, the method further includes:
performing super-resolution reconstruction on the target image to obtain a first resolution RGB image;
and removing white edges from the first resolution image corresponding to the target image based on the illumination layer and the first resolution RGB image, to obtain the white-edge-removed first resolution image corresponding to the target image.
4. The method of claim 3, wherein the removing white edges from the first resolution image corresponding to the target image based on the illumination layer and the first resolution RGB image, to obtain the white-edge-removed first resolution image corresponding to the target image, comprises:
determining a first resolution white edge mask map corresponding to the illumination layer;
converting the first resolution RGB image from RGB space to LAB space to obtain an L-channel high-frequency image corresponding to the first resolution RGB image;
and removing white edges from the first resolution image corresponding to the target image based on the first resolution white edge mask map and the L-channel high-frequency image corresponding to the first resolution RGB image, to obtain the white-edge-removed first resolution image corresponding to the target image.
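As a non-limiting sketch of the conversion step: the claim does not state how the high-frequency component of the L channel is obtained, so a Gaussian low-pass and its residual are assumed here (the low-frequency part and the LAB array are also returned for reuse in claim 7).

```python
import cv2
import numpy as np

def l_channel_high_freq(bgr_image, blur_sigma=2.0):
    # Convert from RGB (here OpenCV's BGR ordering) to LAB and take the
    # lightness channel; split it into low and high frequencies using a
    # Gaussian blur and its residual (an assumed, not claimed, decomposition).
    lab = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2LAB).astype(np.float32)
    l_chan = lab[:, :, 0]
    low = cv2.GaussianBlur(l_chan, (0, 0), blur_sigma)
    high = l_chan - low
    return high, low, lab
```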
5. The method of claim 4, wherein the determining the first resolution white edge mask map corresponding to the illumination layer comprises:
determining white edges among the edges of the illumination layer;
setting the value of each pixel of a white edge to 1, and the value of each pixel of a non-white edge among the edges of the illumination layer to 0, to obtain a second resolution white edge mask map corresponding to the illumination layer;
and magnifying the second resolution white edge mask map based on the first resolution, and setting each pixel whose value after magnification is greater than 0 to 1, to obtain the first resolution white edge mask map corresponding to the illumination layer.
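A minimal sketch of the mask magnification, assuming bilinear interpolation; the function name and interpolation choice are illustrative only.

```python
import cv2
import numpy as np

def first_res_white_mask(mask_2nd_res, first_res):
    # mask_2nd_res: 0/1 uint8 map at the second resolution, with 1 on
    # white-edge pixels and 0 on non-white edge pixels.
    w, h = first_res
    up = cv2.resize(mask_2nd_res, (w, h), interpolation=cv2.INTER_LINEAR)
    # Interpolation yields fractional values along the mask border;
    # per the claim, every pixel greater than 0 is set back to 1.
    return (up > 0).astype(np.uint8)
```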
6. The method of claim 5, wherein the determining white edges among the edges of the illumination layer comprises:
determining the edges in the illumination layer;
determining the brightness difference between the pixels on the two sides of each edge;
and determining an edge whose brightness difference is greater than or equal to a preset brightness threshold as a white edge.
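One possible reading of this claim, sketched below: the edge detector and the way the brightness difference across an edge is measured are not fixed, so a Canny detector and a local max-min brightness range around each edge pixel are assumed.

```python
import cv2
import numpy as np

BRIGHTNESS_THRESHOLD = 40  # hypothetical preset brightness threshold

def white_edges(illum_gray):
    # illum_gray: single-channel uint8 illumination layer.
    edges = cv2.Canny(illum_gray, 50, 150)  # assumed edge detector
    # Approximate the brightness difference between the two sides of an
    # edge by the max-min range in a 3x3 neighborhood of each edge pixel.
    kernel = np.ones((3, 3), np.uint8)
    diff = cv2.subtract(cv2.dilate(illum_gray, kernel),
                        cv2.erode(illum_gray, kernel))
    return ((edges > 0) & (diff >= BRIGHTNESS_THRESHOLD)).astype(np.uint8)
```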
7. The method according to claim 4, wherein the removing white edges from the first resolution image corresponding to the target image based on the first resolution white edge mask map and the L-channel high-frequency image corresponding to the first resolution RGB image, to obtain the white-edge-removed first resolution image corresponding to the target image, comprises:
determining a first resolution non-white edge mask map corresponding to the illumination layer based on the first resolution white edge mask map;
converting the first resolution image corresponding to the target image from RGB space to LAB space to obtain an L-channel high-frequency image corresponding to the first resolution image corresponding to the target image;
merging the L-channel high-frequency image corresponding to the first resolution RGB image and the first resolution white edge mask map to obtain a first composite image at the first resolution;
merging the L-channel high-frequency image corresponding to the first resolution image corresponding to the target image and the first resolution non-white edge mask map to obtain a second composite image at the first resolution;
merging the first composite image, the second composite image, and the L-channel low-frequency image corresponding to the first resolution RGB image to obtain a first resolution L-channel image without white edges;
and converting the first resolution L-channel image without white edges, together with the A-channel image and the B-channel image corresponding to the first resolution RGB image, back to RGB space to obtain the white-edge-removed first resolution image corresponding to the target image.
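Tying claims 4 to 7 together, a non-limiting sketch of the final composite. It reuses the l_channel_high_freq helper sketched under claim 4; interpreting the three merging steps as a masked sum of high-frequency components over a shared low-frequency base is an assumption, not the only possible reading.

```python
import cv2
import numpy as np

def remove_white_edges(layer_result_bgr, sr_rgb_bgr, white_mask):
    # layer_result_bgr: first-resolution image merged from layers (claim 2),
    #                   which may contain white edges.
    # sr_rgb_bgr:       super-resolved whole image (claim 3), edge-clean.
    # white_mask:       0/1 first-resolution white-edge mask map (claim 5).
    # l_channel_high_freq is the helper sketched under claim 4.
    high_sr, low_sr, lab_sr = l_channel_high_freq(sr_rgb_bgr)
    high_layer, _, _ = l_channel_high_freq(layer_result_bgr)
    m = white_mask.astype(np.float32)  # the non-white mask is (1 - m)
    # First composite: clean high frequencies where white edges occur;
    # second composite: layer-result high frequencies elsewhere;
    # both merged with the low-frequency base to form the L channel.
    l_fixed = high_sr * m + high_layer * (1.0 - m) + low_sr
    lab_out = lab_sr.copy()
    lab_out[:, :, 0] = np.clip(l_fixed, 0, 255)
    # The A and B channels are taken from the first-resolution RGB image.
    return cv2.cvtColor(lab_out.astype(np.uint8), cv2.COLOR_LAB2BGR)
```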
8. An image rendering apparatus, the apparatus comprising:
a rendering unit, configured to perform, in response to a first resolution rendering task of a target image, second resolution rendering on the target image through a renderer, where the second resolution is smaller than the first resolution;
an obtaining unit, configured to obtain a plurality of layers from the renderer, where the plurality of layers are layers generated by the renderer in the process of rendering at the second resolution;
and the processing unit is used for processing the layers based on the first resolution to obtain a first resolution image corresponding to the target image.
9. An electronic device, comprising: a processor and a memory;
the processor is adapted to perform the steps of the image rendering method of any of claims 1 to 7 by calling a program or instructions stored in the memory.
10. A computer-readable storage medium, wherein the computer-readable storage medium stores a program or instructions for causing a computer to perform the steps of the image rendering method according to any one of claims 1 to 7.
CN202111300910.8A 2021-11-04 2021-11-04 Image rendering method, device, equipment and medium Pending CN114240745A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111300910.8A CN114240745A (en) 2021-11-04 2021-11-04 Image rendering method, device, equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111300910.8A CN114240745A (en) 2021-11-04 2021-11-04 Image rendering method, device, equipment and medium

Publications (1)

Publication Number Publication Date
CN114240745A (en) 2022-03-25

Family

ID=80743759

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111300910.8A Pending CN114240745A (en) 2021-11-04 2021-11-04 Image rendering method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN114240745A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination