CN114298922A - Image processing method and device and electronic equipment

Image processing method and device and electronic equipment

Info

Publication number
CN114298922A
CN114298922A
Authority
CN
China
Prior art keywords
image
texture
regions
enhanced
reconstructed
Prior art date
Legal status
Pending
Application number
CN202111511051.7A
Other languages
Chinese (zh)
Inventor
孙宇乐
陈焕浜
杨海涛
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd
Priority to CN202111511051.7A
Publication of CN114298922A
Priority to PCT/CN2022/131594 (WO2023103715A1)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/73 Deblurring; Sharpening
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/60 Image enhancement or restoration using machine learning, e.g. neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/40 Analysis of texture
    • G06T 7/41 Analysis of texture based on statistical description of texture
    • G06T 7/45 Analysis of texture based on statistical description of texture using co-occurrence matrix computation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The embodiment of the application provides an image processing method and device and electronic equipment. The method comprises the following steps: firstly, acquiring a reconstructed image, and then, carrying out image enhancement on the reconstructed image to obtain an intermediate image; and then, determining texture complexity weights corresponding to the regions in the intermediate image, wherein the texture complexity weights are numbers between 0 and 1, and then respectively attenuating the texture intensities corresponding to the regions in the intermediate image according to the texture complexity weights corresponding to the regions in the intermediate image to obtain an enhanced image corresponding to the reconstructed image. Therefore, the texture of the texture region in the intermediate image can be reserved, and the texture in the non-texture region is attenuated, so that the texture of the texture region in the reconstructed image is enhanced, and the non-texture region is prevented from generating false texture, thereby reducing the visual distortion of the reconstructed image, increasing the sense of reality of the image and improving the subjective quality of the image.

Description

Image processing method and device and electronic equipment
Technical Field
The embodiment of the application relates to the technical field of image processing, in particular to an image processing method and device and electronic equipment.
Background
Generally, before transmitting video, the video is compressed to improve the efficiency of video transmission; the larger the video compression rate is, the smaller the data amount of the compressed video is. However, as the compression rate increases, the video will appear visually distorted; in order to reduce the visual distortion of the reconstructed image, the reconstructed image may be post-processed to improve the quality under the condition of unchanged code rate.
Although the post-processing of the reconstructed image can significantly improve the subjective quality of the reconstructed image, false textures are easily generated in non-texture regions, which affects the subjective quality of the image.
Disclosure of Invention
In order to solve the technical problem, the present application provides an image processing method, an image processing apparatus and an electronic device.
In a first aspect, an embodiment of the present application provides an image processing method, including: firstly, acquiring a reconstructed image; and then, carrying out image enhancement on the reconstructed image to obtain an intermediate image. Then, determining texture complexity weights respectively corresponding to all the areas in the intermediate image, wherein the texture complexity weights are numbers between 0 and 1; and then, respectively attenuating the texture intensity corresponding to each region in the intermediate image according to the texture complexity weight corresponding to each region in the intermediate image, so as to obtain an enhanced image corresponding to the reconstructed image. Therefore, the texture of the texture region in the intermediate image can be reserved, and the texture in the non-texture region is attenuated, so that the texture of the texture region in the reconstructed image is enhanced, and the non-texture region is prevented from generating false texture, thereby reducing the visual distortion of the reconstructed image, increasing the sense of reality of the image and improving the subjective quality of the image.
In addition, the texture complexity weight is controlled to be in the range of 0 to 1, so that the textures corresponding to the adjacent regions are softer, and the problem of inconsistent enhanced texture effects of the adjacent regions can be avoided.
Illustratively, the texture complexity weight is proportional to the texture complexity, i.e., the higher the texture complexity, the larger the texture complexity weight. In this way, the texture effect of the textured regions in the intermediate image can be maintained as much as possible, while the non-textured regions are attenuated according to their texture complexity.
Illustratively, the intermediate image is a texture enhanced image or a residual image.
Illustratively, the resolution of both the intermediate image and the enhanced image is the same as the resolution of the reconstructed image.
For example, the reconstructed image may be image-enhanced by using a GANEF (Generative Adversarial Network Enhancement Filter, i.e., an enhancement filter implemented with a generative adversarial network), so as to obtain the intermediate image.
Illustratively, the image output by the GANEF may be a texture enhanced image or a residual image.
It should be noted that although the enhanced image is obtained by attenuating the texture enhancement of each region in the intermediate image, the texture intensity of the intermediate image is only partially attenuated, and the texture intensity of some regions of the enhanced image is still greater than that of the reconstructed image; that is, the enhanced image remains texture-enhanced relative to the reconstructed image.
According to a first aspect, the intermediate image is a residual image; respectively attenuating the texture intensity corresponding to each region in the intermediate image according to the texture complexity weight corresponding to each region in the intermediate image to obtain an enhanced image corresponding to the reconstructed image, comprising: multiplying the pixel value of each pixel point in the residual image by the texture complexity weight corresponding to the region to which each pixel point belongs respectively to obtain a residual updated image; and generating an enhanced image according to the residual updating image and the reconstructed image.
Illustratively, the residual image is obtained by subtracting the texture enhanced image from the reconstructed image.
According to the first aspect, or any implementation manner of the first aspect above, generating an enhanced image according to a residual updated image and a reconstructed image, includes: and adding the residual updating image and the reconstructed image to obtain an enhanced image.
Illustratively, the residual updated image and the reconstructed image may be added pixel by pixel to obtain an enhanced image.
According to the first aspect, or any implementation manner of the first aspect above, the intermediate image is a texture enhanced image, and the method further includes: and performing image fidelity on the reconstructed image to obtain a basic fidelity image.
Illustratively, the base fidelity image is the same resolution as the reconstructed image.
Illustratively, a non-GANEF (an enhancement filter not based on a generative adversarial network) may be used to perform image fidelity processing on the reconstructed image, resulting in the basic fidelity image.
According to the first aspect or any implementation manner of the first aspect, determining a texture complexity weight corresponding to each region in the intermediate image includes: dividing the basic fidelity image and the texture enhanced image into N areas according to a preset partition rule, wherein N is a positive integer; respectively determining the texture complexity of N areas in the basic fidelity image; and determining texture complexity weights respectively corresponding to the N regions in the texture enhanced image based on the texture complexity of the N regions in the basic fidelity image.
According to the first aspect or any implementation manner of the first aspect, attenuating the texture intensities corresponding to the regions in the intermediate image respectively according to the texture complexity weights corresponding to the regions in the intermediate image respectively to obtain an enhanced image corresponding to the reconstructed image includes: determining weighting calculation weights respectively corresponding to the N areas in the texture enhanced image and weighting calculation weights respectively corresponding to the N areas in the basic fidelity image according to texture complexity weights corresponding to the N areas in the texture enhanced image; and according to the weighting calculation weights respectively corresponding to the N areas in the texture enhanced image and the weighting calculation weights respectively corresponding to the N areas in the basic fidelity image, carrying out weighting calculation on the N areas in the texture enhanced image and the N areas in the basic fidelity image to obtain an enhanced image corresponding to the reconstructed image.
According to the first aspect, or any implementation manner of the first aspect, performing weighted calculation on N regions in the texture enhanced image and N regions in the basic fidelity image according to weighted calculation weights corresponding to the N regions in the texture enhanced image and the weighted calculation weights corresponding to the N regions in the basic fidelity image, to obtain an enhanced image corresponding to the reconstructed image, includes: respectively multiplying the weighted calculation weights respectively corresponding to the N areas in the texture enhanced image with the N areas in the texture enhanced image to obtain a first product; respectively multiplying the weighted calculation weights respectively corresponding to the N areas in the basic fidelity image with the N areas in the basic fidelity image to obtain a second product; and adding the first product and the second product to obtain an enhanced image corresponding to the reconstructed image.
According to the first aspect, or any implementation manner of the first aspect, determining, according to the texture complexity weights respectively corresponding to the N regions in the texture enhanced image, the weighting calculation weights respectively corresponding to the N regions in the texture enhanced image and the weighting calculation weights respectively corresponding to the N regions in the basic fidelity image, includes: determining the texture complexity weights respectively corresponding to the N regions in the texture enhanced image as the weighting calculation weights respectively corresponding to the N regions in the texture enhanced image; and determining, for each of the N regions, the difference between 1 and the texture complexity weight of that region in the texture enhanced image as the weighting calculation weight of the corresponding region in the basic fidelity image.
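The weighted fusion described in the preceding paragraphs can be illustrated with a minimal numpy sketch, assuming grayscale images of equal resolution, an evenly tiled region layout, and illustrative function and parameter names not taken from the patent:

```python
import numpy as np

def fuse_regions(texture_enhanced, base_fidelity, weights, region_h, region_w):
    # weights: 2-D array of per-region texture complexity weights in [0, 1],
    # laid out as (region rows, region cols)
    out = np.empty_like(texture_enhanced, dtype=np.float32)
    for r in range(weights.shape[0]):
        for c in range(weights.shape[1]):
            ys = slice(r * region_h, (r + 1) * region_h)
            xs = slice(c * region_w, (c + 1) * region_w)
            w_i = weights[r, c]
            # weighting calculation: w_i for the texture-enhanced region,
            # (1 - w_i) for the corresponding basic fidelity region
            out[ys, xs] = (w_i * texture_enhanced[ys, xs]
                           + (1.0 - w_i) * base_fidelity[ys, xs])
    return out
```

Regions with a high texture complexity weight are taken mostly from the texture enhanced image, while flat regions fall back to the basic fidelity image, which matches the attenuation behaviour described above.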
According to the first aspect or any implementation manner of the first aspect, determining the texture complexity weight corresponding to each region in the intermediate image includes: decoding, from the received code stream, the texture complexity weights respectively corresponding to the N regions in the intermediate image, where N is a positive integer. In this way, the accuracy of the texture complexity weights corresponding to the regions in the intermediate image can be improved, the attenuation of the texture intensity of the image can be controlled more accurately, and the image quality is further improved.
According to the first aspect or any implementation manner of the first aspect, determining the texture complexity weight corresponding to each region in the intermediate image includes: according to a preset partition rule, respectively dividing the reconstructed image and the intermediate image into N areas, wherein N is a positive integer; respectively determining the texture complexity of N areas in a reconstructed image; and determining texture complexity weights respectively corresponding to the N areas in the intermediate image based on the texture complexity of the N areas in the reconstructed image.
In a second aspect, an embodiment of the present application provides an image processing apparatus, including:
the image acquisition module is used for acquiring a reconstructed image;
the image enhancement module is used for carrying out image enhancement on the reconstructed image to obtain an intermediate image;
the texture weight determining module is used for determining the texture complexity weight corresponding to each region in the intermediate image, and the texture complexity weight is a number between 0 and 1;
and the texture attenuation module is used for respectively attenuating the texture intensity corresponding to each region in the intermediate image according to the texture complexity weight corresponding to each region in the intermediate image so as to obtain an enhanced image corresponding to the reconstructed image.
According to a second aspect, the intermediate image is a residual image; a texture attenuation module comprising:
the residual error updating module is used for multiplying the pixel value of each pixel point in the residual error image by the texture complexity weight corresponding to the region to which each pixel point belongs respectively to obtain a residual error updated image;
and the image generation module is used for generating an enhanced image according to the residual updated image and the reconstructed image.
According to the second aspect or any implementation manner of the second aspect, the image generating module is specifically configured to add the residual updated image and the reconstructed image to obtain an enhanced image.
According to a second aspect, or any implementation manner of the second aspect above, the intermediate image is a texture enhanced image, and the apparatus further includes: and the image fidelity module is used for performing image fidelity on the reconstructed image to obtain a basic fidelity image.
According to the second aspect or any implementation manner of the second aspect, the texture weight determining module is specifically configured to divide the basic fidelity image and the texture enhanced image into N regions according to a preset partition rule, where N is a positive integer; respectively determining the texture complexity of N areas in the basic fidelity image; and determining texture complexity weights respectively corresponding to the N regions in the texture enhanced image based on the texture complexity of the N regions in the basic fidelity image.
According to a second aspect, or any implementation form of the second aspect above, the texture attenuation module comprises:
the weighted weight determining module is used for determining weighted calculation weights respectively corresponding to the N areas in the texture enhanced image and weighted calculation weights respectively corresponding to the N areas in the basic fidelity image according to texture complexity weights corresponding to the N areas in the texture enhanced image;
and the weighting calculation module is used for performing weighting calculation on the N areas in the texture enhanced image and the N areas in the basic fidelity image according to the weighting calculation weights respectively corresponding to the N areas in the texture enhanced image and the weighting calculation weights respectively corresponding to the N areas in the basic fidelity image to obtain an enhanced image corresponding to the reconstructed image.
According to the second aspect, or any implementation manner of the second aspect, the weighting calculation module is specifically configured to multiply the weighting calculation weights respectively corresponding to the N regions in the texture enhanced image by the N regions in the texture enhanced image, respectively, to obtain a first product; respectively multiplying the weighted calculation weights respectively corresponding to the N areas in the basic fidelity image with the N areas in the basic fidelity image to obtain a second product; and adding the first product and the second product to obtain an enhanced image corresponding to the reconstructed image.
According to the second aspect, or any implementation manner of the second aspect, the weighted weight determining module is specifically configured to determine the texture complexity weights respectively corresponding to the N regions in the texture enhanced image as the weighting calculation weights respectively corresponding to the N regions in the texture enhanced image; and determine, for each of the N regions, the difference between 1 and the texture complexity weight of that region in the texture enhanced image as the weighting calculation weight of the corresponding region in the basic fidelity image.
According to the second aspect, or any implementation manner of the second aspect, the texture weight determining module is specifically configured to decode texture complexity weights corresponding to N regions in the intermediate image from the received code stream, where N is a positive integer.
According to the second aspect, or any implementation manner of the second aspect, the texture weight determining module is specifically configured to divide the reconstructed image and the intermediate image into N regions according to a preset partition rule, where N is a positive integer; respectively determine the texture complexity of the N regions in the reconstructed image; and determine texture complexity weights respectively corresponding to the N regions in the intermediate image based on the texture complexity of the N regions in the reconstructed image.
Any one implementation manner of the second aspect and the second aspect corresponds to any one implementation manner of the first aspect and the first aspect, respectively. For technical effects corresponding to any one implementation manner of the second aspect and the second aspect, reference may be made to the technical effects corresponding to any one implementation manner of the first aspect and the first aspect, and details are not repeated here.
In a third aspect, an embodiment of the present application provides an electronic device, including: a memory and a processor, the memory coupled with the processor; the memory stores program instructions that, when executed by the processor, cause the electronic device to perform the image processing method of the first aspect or any possible implementation manner of the first aspect.
Any one implementation manner of the third aspect corresponds to any one implementation manner of the first aspect. For technical effects corresponding to any one implementation manner of the third aspect and the third aspect, reference may be made to the technical effects corresponding to any one implementation manner of the first aspect and the first aspect, and details are not repeated here.
In a fourth aspect, embodiments of the present application provide a chip, including one or more interface circuits and one or more processors; the interface circuit is used for receiving signals from a memory of the electronic equipment and sending the signals to the processor, and the signals comprise computer instructions stored in the memory; the computer instructions, when executed by a processor, cause an electronic device to perform the image processing method of the first aspect or any possible implementation of the first aspect.
Any one implementation manner of the fourth aspect and the fourth aspect corresponds to any one implementation manner of the first aspect and the first aspect, respectively. For technical effects corresponding to any one implementation manner of the fourth aspect and the fourth aspect, reference may be made to the technical effects corresponding to any one implementation manner of the first aspect and the first aspect, and details are not repeated here.
In a fifth aspect, embodiments of the present application provide a computer-readable storage medium, which stores a computer program, and when the computer program runs on a computer or a processor, the computer or the processor is caused to execute the image processing method according to the first aspect or any possible implementation manner of the first aspect.
Any one implementation manner of the fifth aspect and the fifth aspect corresponds to any one implementation manner of the first aspect and the first aspect, respectively. For technical effects corresponding to any one of the implementation manners of the fifth aspect and the fifth aspect, reference may be made to the technical effects corresponding to any one of the implementation manners of the first aspect and the first aspect, and details are not repeated here.
In a sixth aspect, embodiments of the present application provide a computer-readable storage medium, which stores a computer program, and when the computer program runs on a computer or a processor, the computer or the processor is caused to execute the image processing method in the first aspect or any possible implementation manner of the first aspect.
Any one implementation form of the sixth aspect and the sixth aspect corresponds to any one implementation form of the first aspect and the first aspect, respectively. For technical effects corresponding to any one implementation manner of the sixth aspect and the sixth aspect, reference may be made to the technical effects corresponding to any one implementation manner of the first aspect and the first aspect, and details are not described here again.
Drawings
FIG. 1a is a schematic diagram of an exemplary application scenario;
FIG. 1b is a schematic diagram of an exemplary process;
FIG. 2 is a schematic diagram of an exemplary process;
FIG. 3a is a schematic diagram illustrating an exemplary image enhancement process;
FIG. 3b is a schematic diagram of an exemplary process;
FIG. 3c is a schematic diagram illustrating an exemplary image enhancement effect;
FIG. 4 is a schematic diagram of an exemplary process;
FIG. 5 is a schematic diagram of an exemplary process;
FIG. 6 is a schematic diagram illustrating an exemplary image enhancement process;
FIG. 7 is a schematic diagram of an exemplary process;
FIG. 8 is a schematic diagram of an exemplary illustrative image processing apparatus;
fig. 9 is a schematic structural diagram of an exemplary illustrated apparatus.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The term "and/or" herein is merely an association describing an associated object, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone.
The terms "first" and "second," and the like, in the description and in the claims of the embodiments of the present application are used for distinguishing between different objects and not for describing a particular order of the objects. For example, the first target object and the second target object, etc. are specific sequences for distinguishing different target objects, rather than describing target objects.
In the embodiments of the present application, words such as "exemplary" or "for example" are used to mean serving as an example, instance, or illustration. Any embodiment or design described herein as "exemplary" or "e.g.," is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Rather, use of the word "exemplary" or "such as" is intended to present concepts related in a concrete fashion.
In the description of the embodiments of the present application, the meaning of "a plurality" means two or more unless otherwise specified. For example, a plurality of processing units refers to two or more processing units; the plurality of systems refers to two or more systems.
Fig. 1a is a schematic diagram of an exemplary application scenario.
Referring to fig. 1a, for example, the encoding end may encode an image to be encoded to obtain a code stream; and then transmitting the code stream to a decoding end. After the decoding end acquires the code stream, the decoding end can decode the code stream to obtain a reconstructed image.
For example, after obtaining the reconstructed image, the decoding end may perform post-processing on the reconstructed image to enhance the reconstructed image, obtain an enhanced image, and then output the enhanced image to reduce visual distortion of the reconstructed image.
Illustratively, aiming at the post-processing stage, the application provides an image processing method which can enhance the texture of a texture region in a reconstructed image and avoid the false texture generated by a non-texture region so as to reduce the visual distortion of the reconstructed image, further increase the sense of reality of the image and improve the subjective quality of the image.
Fig. 1b is a schematic diagram of an exemplary process.
And S101, acquiring a reconstructed image.
Illustratively, after the decoding end obtains the code stream, the decoding end can decode the code stream to obtain a reconstructed image.
And S102, carrying out image enhancement on the reconstructed image to obtain an intermediate image.
For example, in order to reduce the visual distortion of the reconstructed image, the reconstructed image may be image enhanced, resulting in an intermediate image.
For example, the reconstructed image may be image-enhanced using an image enhancement model. Illustratively, the image enhancement model may be a GANEF (Generative Adversarial Network Enhancement Filter), that is, an enhancement filter implemented with a generative adversarial network. A GAN (generative adversarial network) may include a generator and a discriminator; accordingly, the GANEF also includes a generator and a discriminator, which may be trained alternately and iteratively. The discriminator aims at distinguishing whether an image was generated by the generator or is the original image, while the generator aims at generating images close to the original image so as to deceive the discriminator; through such adversarial training, the generator can generate images as close to the original image as possible.
Illustratively, the training process for the GANEF may be as follows: a plurality of sets of training data are generated, where a set of training data comprises an image to be encoded and the reconstructed image obtained by decoding the code stream of that image to be encoded. Illustratively, the generator and the discriminator may be trained alternately in turn using the training data, where the weight parameters of the discriminator may be fixed while training the generator, and the weight parameters of the generator may be fixed while training the discriminator.
Illustratively, the process of training the generator may be as follows: first, the image to be encoded in the training data is input into the discriminator, which performs a forward calculation on it and outputs a discrimination result. Then, the weight parameters of the discriminator are adjusted with the goal of making the probability that the discrimination result is "true" approach a preset probability. The preset probability may be set as required, which is not limited in this application. When the difference between the probability that the discriminator judges the input image to be encoded as "true" and the preset probability is smaller than a probability threshold, the weight parameters of the discriminator can be fixed. The probability threshold may be set as required, which is not limited in this application. At this point, the reconstructed image in the training data is input into the generator, which performs a forward calculation on it and outputs an intermediate image to the discriminator. Next, on one hand, the discriminator performs a forward calculation on the intermediate image and outputs a discrimination result; on the other hand, a loss function value is generated based on the intermediate image and the image to be encoded. Then, the weight parameters of the generator are adjusted with the goal of maximizing the probability that the discrimination result output by the discriminator is "true" and minimizing the loss function value.
Illustratively, the process of training the discriminator may be as follows: first, the image to be encoded in the training data is input into the discriminator, which performs a forward calculation on it and outputs a discrimination result. Then, the weight parameters of the discriminator are adjusted with the goal of making the probability that the discrimination result is "true" approach the preset probability. When the difference between the probability that the discriminator judges the input image to be encoded as "true" and the preset probability is smaller than the probability threshold, the reconstructed image in the training data is input into the generator, which performs a forward calculation on it and outputs an intermediate image to the discriminator. Then, the discriminator performs a forward calculation on the intermediate image and outputs a discrimination result, and the weight parameters of the discriminator are adjusted with the goal of maximizing the probability that the discrimination result is "false".
In this way, the generator and the discriminator in the GANEF are trained alternately and iteratively in the above manner until the discriminator can no longer reliably distinguish the intermediate image output by the generator from the image to be encoded.
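The alternating training described above can be illustrated with a simplified PyTorch-style sketch. It does not reproduce the exact preset-probability schedule of the patent; the generator, discriminator, optimizers and loss weighting here are assumed placeholders:

```python
import torch
import torch.nn.functional as F

def train_step(generator, discriminator, g_opt, d_opt, reconstructed, original):
    # --- train the discriminator (generator weights fixed) ---
    d_opt.zero_grad()
    with torch.no_grad():
        fake = generator(reconstructed)              # intermediate (texture-enhanced) image
    real_logit = discriminator(original)             # should be judged "true"
    fake_logit = discriminator(fake)                 # should be judged "false"
    d_loss = (F.binary_cross_entropy_with_logits(real_logit, torch.ones_like(real_logit))
              + F.binary_cross_entropy_with_logits(fake_logit, torch.zeros_like(fake_logit)))
    d_loss.backward()
    d_opt.step()

    # --- train the generator (discriminator weights not updated in this half-step) ---
    g_opt.zero_grad()
    fake = generator(reconstructed)
    fake_logit = discriminator(fake)
    adv_loss = F.binary_cross_entropy_with_logits(fake_logit, torch.ones_like(fake_logit))
    fid_loss = F.l1_loss(fake, original)             # loss between intermediate image and original
    g_loss = adv_loss + fid_loss                     # equal weighting of the two terms is an assumption
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()
```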
Illustratively, the intermediate image may be a residual image or a texture enhanced image. In the training process, when the intermediate image output by the generator is a residual image, a texture enhanced image can be generated based on the residual image and the reconstructed image; and then inputting the texture enhanced image into a discriminator, and calculating a loss function value according to the texture enhanced image and the image to be coded. When the intermediate image output by the generator is a texture enhanced image, the texture enhanced image may be directly input into the discriminator.
Illustratively, the residual image and the reconstructed image may be added to generate the texture enhanced image. For example, the residual image and the reconstructed image may be added pixel by pixel, that is, the pixel values of the pixel points at corresponding positions of the residual image and the reconstructed image are added, to obtain the texture enhanced image corresponding to the reconstructed image.
For example, after the GANEF training is completed, the reconstructed image may be image-enhanced by using only the generator trained in GANEF to obtain an intermediate image.
Wherein the resolution of the intermediate image is the same as the resolution of the reconstructed image.
And S103, determining texture complexity weights corresponding to the regions in the intermediate image respectively.
In one possible approach, after the intermediate image is determined, texture complexity weights corresponding to respective regions in the intermediate image may be determined according to texture complexity of the respective regions in the reconstructed image.
In a possible manner, after the intermediate image is determined, texture complexity weights respectively corresponding to regions in the intermediate image may be determined according to texture complexity of the regions in the image to be encoded corresponding to the reconstructed image.
Illustratively, the texture complexity weight is proportional to the texture complexity, i.e., the higher the texture complexity, the larger the texture complexity weight; the lower the texture complexity, the smaller the texture complexity weight.
Illustratively, the texture complexity weight is a fraction between 0 and 1.
And S104, respectively attenuating the texture intensity corresponding to each region in the intermediate image according to the texture complexity weight corresponding to each region in the intermediate image, so as to obtain an enhanced image corresponding to the reconstructed image.
For example, for each region in the intermediate image, the pixel values of the pixels in the region may be attenuated based on the texture complexity weight corresponding to the region, so as to adjust the texture intensity corresponding to the region. Since the texture complexity weight is proportional to the texture complexity, after the texture intensity is adjusted, regions with higher texture complexity in the intermediate image undergo a smaller attenuation of texture intensity, and regions with lower texture complexity undergo a larger attenuation. In this way, the texture of texture regions (with higher texture complexity) can be enhanced while avoiding the generation of false textures in non-texture regions (with lower texture complexity). That is to say, the present application performs texture intensity attenuation control on the GANEF result according to the texture complexity of local regions of the image: texture regions keep the original GANEF filtering effect as much as possible, while non-texture regions attenuate the GANEF filtering effect according to their texture complexity, and the smaller the texture complexity of a local region, the larger the attenuation; therefore, the false textures of non-texture regions can be effectively removed. In addition, the texture complexity weight is controlled to be in the range of 0 to 1, so that the textures corresponding to adjacent regions are softer, and the problem of inconsistent enhanced texture effects in adjacent regions can be avoided.
For example, the specific implementation of fig. 1b may refer to fig. 2, fig. 4, fig. 5, and fig. 7, and the corresponding description.
Fig. 2 is a schematic diagram of an exemplary process. In the embodiment of fig. 2, firstly, the GANEF is adopted to filter the reconstructed image, so as to obtain a residual image; then determining texture complexity weights corresponding to the regions in the residual image respectively based on the texture complexity of the regions in the reconstructed image; and updating the residual image according to the texture complexity weight corresponding to each region, and finally adding the updated residual image and the reconstructed image to obtain a final enhanced image.
S201, image enhancement is carried out on the reconstructed image to obtain a residual image.
Illustratively, after the decoding end acquires the code stream, the decoding end can decode the code stream to obtain a reconstructed image; and then, carrying out image enhancement on the reconstructed image by adopting an image enhancement model to obtain an intermediate image.
Illustratively, the intermediate image is a residual image.
Fig. 3a is a schematic diagram illustrating an exemplary image enhancement process.
In one possible approach, the output of the trained image enhancement model is a residual image. In this way, the reconstructed image is input to the image enhancement model, and the residual image can be directly obtained, as shown in fig. 3a (1).
In one possible approach, the output of the trained image enhancement model is a texture enhanced image. In this case, after the reconstructed image is input into the image enhancement model, the texture enhanced image output by the image enhancement model is obtained; a residual image is then determined based on the texture enhanced image and the reconstructed image. Optionally, pixel-by-pixel subtraction may be performed between the texture enhanced image and the reconstructed image, that is, the pixel values of corresponding pixel points of the texture enhanced image and the reconstructed image are subtracted, so as to obtain the residual image, as shown in fig. 3a (2).
Illustratively, the residual image is of the same resolution as the reconstructed image.
S202, determining texture complexity weights corresponding to all the areas in the residual image based on the reconstructed image.
For example, the texture complexity weight corresponding to each region in the residual image may be determined according to the texture complexity corresponding to each region in the reconstructed image. Illustratively, S202 may include S2021 to S2023:
s2021, dividing the reconstructed image and the residual image into N areas respectively according to a preset partition rule.
Illustratively, the reconstructed image may be divided into N regions according to a preset partition rule set in advance; and dividing the residual image into N areas according to a preset partition rule. That is to say, the reconstructed image and the residual image have the same region division mode, and since the resolution of the reconstructed image and the resolution of the residual image are the same, N regions in the reconstructed image and N regions in the residual image are in one-to-one correspondence.
For example, the preset partition rule may be set as required, which is not limited in this application; wherein N is a positive integer and is determined according to the preset partition rule. For example, the preset partition rule is: the image is divided into N regions of size w × h, where w = W/n1, h = H/n2 and N = n1 × n2; W and H are the width and height of the image, and n1 and n2 are positive integers.
It should be noted that the sizes of the N regions may be the same or different, and the present application is not limited thereto. Likewise, the shapes of the N regions may be the same or different, and the present application is not limited thereto.
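The partition step above can be sketched as follows; this is a minimal Python illustration under the assumed rule w = W/n1, h = H/n2 (integer division, with the image dimensions assumed divisible), and the function name is illustrative rather than taken from the patent:

```python
def region_slices(W, H, n1, n2):
    """Return (row_slice, col_slice) for each of the N = n1 * n2 regions."""
    w, h = W // n1, H // n2
    slices = []
    for r in range(n2):          # n2 rows of regions
        for c in range(n1):      # n1 columns of regions
            slices.append((slice(r * h, (r + 1) * h), slice(c * w, (c + 1) * w)))
    return slices                # len(slices) == n1 * n2
```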
S2022, determining the texture complexity of the N areas in the reconstructed image respectively.
The following takes the determination of the texture complexity of the i-th region in the reconstructed image (where i is an integer between 1 and N, inclusive) as an example.
For example, for the ith region of the reconstructed image, the texture complexity of the ith region may be determined according to pixel values of pixel points included in the ith region.
In one possible approach, the texture complexity of the ith region of the reconstructed image may be determined based on the co-occurrence matrix of the ith region in the reconstructed image. Exemplarily, the co-occurrence matrix of the ith region in the reconstructed image can be determined according to the pixel values of the pixel points included in the ith region in the reconstructed image. Then, extracting characteristic quantities (such as energy, contrast, entropy, inverse variance and the like) corresponding to the co-occurrence matrix of the ith area in the reconstructed image, and determining the image texture complexity of the ith area in the reconstructed image according to the characteristic quantities of the co-occurrence matrix of the ith area in the reconstructed image. For example, the feature quantity of the co-occurrence matrix of the ith region in the reconstructed image is used as the texture complexity of the ith region in the reconstructed image.
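As one hedged illustration of the co-occurrence-matrix option, the following sketch builds a simple horizontal-offset gray-level co-occurrence matrix for one region and uses its entropy as the texture complexity; the quantization to 16 gray levels and the choice of entropy as the feature quantity are assumptions of this sketch, not values specified by the patent:

```python
import numpy as np

def glcm_entropy(region, levels=16):
    # quantize an 8-bit grayscale region to a small number of gray levels
    q = np.clip((region.astype(np.float32) / 256.0 * levels).astype(np.int32), 0, levels - 1)
    glcm = np.zeros((levels, levels), dtype=np.float64)
    # count co-occurrences of horizontally adjacent pixel pairs
    np.add.at(glcm, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1.0)
    p = glcm / max(glcm.sum(), 1.0)
    nz = p[p > 0]
    return float(-(nz * np.log2(nz)).sum())   # larger entropy => more complex texture
```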
In one possible approach, the texture complexity of the ith region in the reconstructed image may be determined based on the edge proportion of the ith region in the reconstructed image. For example, the gradient strength corresponding to each pixel point in the ith region in the reconstructed image may be calculated according to the pixel value of the pixel point included in the ith region of the reconstructed image. And then, taking the proportion of the pixel points with the gradient intensity larger than the gradient intensity threshold value in the ith area in the reconstructed image as the texture complexity of the ith area of the reconstructed image. The gradient strength threshold may be set as required, which is not limited in this application.
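A minimal sketch of the edge-proportion option, assuming an 8-bit grayscale region; the central-difference gradient and the threshold value are illustrative assumptions:

```python
import numpy as np

def edge_proportion(region, grad_threshold=30.0):
    r = region.astype(np.float32)
    gx = np.zeros_like(r)
    gy = np.zeros_like(r)
    gx[:, 1:-1] = r[:, 2:] - r[:, :-2]     # horizontal central difference
    gy[1:-1, :] = r[2:, :] - r[:-2, :]     # vertical central difference
    grad = np.hypot(gx, gy)                # gradient magnitude per pixel
    return float((grad > grad_threshold).mean())   # proportion of "edge" pixels in [0, 1]
```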
It should be understood that other ways may be used, such as determining the texture complexity of the ith region in the reconstructed image according to the gray histogram distribution of the ith region in the reconstructed image, which is not limited in this application.
In this way, the texture complexity of the N regions in the reconstructed image can be determined in the manner described above.
S2023, determining texture complexity weights respectively corresponding to the N areas in the residual image based on the texture complexity of the N areas in the reconstructed image.
The following illustrates the determination of the texture complexity weight corresponding to the i-th region in the residual image (where i is an integer between 1 and N, inclusive).
In a possible manner, the texture complexity of the ith region in the reconstructed image may be used as the texture complexity weight corresponding to the ith region in the residual image.
In a possible manner, the texture complexity of the ith region in the reconstructed image may be mapped according to a preset mapping rule, so as to obtain a texture complexity weight corresponding to the ith region in the residual image. For example, the preset mapping rule may be set according to a requirement, such as normalization, which is not limited in this application. For example, normalization processing may be performed on the texture complexity of the ith region in the reconstructed image, so as to obtain a texture complexity weight corresponding to the ith region in the residual image.
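A small sketch of the normalization option, assuming simple min-max normalization over the N region complexities of one image; the patent does not fix a particular mapping rule, and returning 1.0 for every region when all complexities are equal is purely an assumption of this sketch:

```python
import numpy as np

def normalize_complexities(complexities):
    c = np.asarray(complexities, dtype=np.float32)
    span = c.max() - c.min()
    if span == 0:
        return np.ones_like(c)       # all regions equally complex (assumed fallback)
    return (c - c.min()) / span      # texture complexity weights in [0, 1]
```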
Illustratively, the texture complexity weight may be a fraction between 0 and 1.
Illustratively, the texture complexity weight is proportional to the texture complexity, i.e., the higher the texture complexity, the larger the texture complexity weight; the lower the texture complexity, the smaller the texture complexity weight.
For example, after obtaining texture complexity weights corresponding to N regions in the residual image, the texture intensity corresponding to each region in the residual image may be attenuated according to the texture complexity weight corresponding to each region in the residual image, so as to perform image enhancement on the reconstructed image, and obtain an enhanced image corresponding to the reconstructed image, which may refer to S203 to S204:
and S203, performing residual error updating on each region in the residual error image based on the texture complexity weight corresponding to each region in the residual error image to obtain a residual error updated image.
Illustratively, residual updating may be performed on N regions in the residual image respectively based on texture complexity weights corresponding to the N regions of the residual image, so as to obtain a residual updated image.
In a possible manner, the residual image may be updated by multiplying the pixel value of each pixel point in the residual image by the texture complexity weight corresponding to the region to which each pixel point belongs, respectively, to obtain a residual updated image.
The following illustrates the residual update of the i-th region in the residual image (where i is an integer between 1 and N, inclusive).
For example, assume that the texture complexity weight corresponding to the i-th region in the residual image is ratio_i. For a pixel (k, j) in the i-th region, its pixel value R1(k, j) may be multiplied by ratio_i to obtain the updated pixel value R2(k, j) = R1(k, j) × ratio_i, where (k, j) is the integer coordinate index of a pixel in the residual image, k and j are the horizontal and vertical coordinate indices respectively, and the pixel at the upper-left corner of the residual image has index (0, 0). After the pixel values of all pixels in the residual image have been updated in this way, the residual updated image is obtained, in which the pixel value of each pixel is R2(k, j).
Illustratively, the resolution of the residual update image and the reconstructed image is the same.
Since the texture complexity weight is proportional to the texture complexity, the corresponding texture intensity attenuation is smaller for regions with higher texture complexity of the residual image, and the corresponding texture enhancement attenuation is larger for regions with lower texture complexity.
And S204, adding the reconstructed image and the residual error updating image to obtain an enhanced image.
Illustratively, the pixel values of the pixel points at the corresponding positions of the reconstructed image and the residual updated image may be added to obtain an enhanced image corresponding to the reconstructed image.
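A minimal numpy sketch of S203 to S204 above, reusing the region_slices helper and per-region weights from the earlier sketches; all names are illustrative, and 8-bit output with clipping is an assumption:

```python
import numpy as np

def attenuate_and_add(residual, reconstructed, slices, ratios):
    updated = residual.astype(np.float32).copy()
    for (ys, xs), ratio_i in zip(slices, ratios):
        updated[ys, xs] *= ratio_i            # R2(k, j) = R1(k, j) * ratio_i
    enhanced = reconstructed.astype(np.float32) + updated   # pixel-by-pixel addition (S204)
    return np.clip(enhanced, 0, 255).astype(np.uint8)
```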
Fig. 3b is a schematic diagram of an exemplary process.
Referring to fig. 3b, for example, A1 is a reconstructed image, and the reconstructed image is filtered by the GANEF to obtain a texture-enhanced image, as shown in A2. The reconstructed image A1 may then be subtracted from the texture-enhanced image A2 to obtain a residual image, as shown in A4.
For example, the gradient of each pixel in the reconstructed image a1 may be calculated, and then the reconstructed image may be binarized based on the gradient of each pixel in the reconstructed image a1 to obtain a binary image, as shown in A3. Then dividing the binary image A3 into N regions, and determining the texture complexity corresponding to each region according to the proportion of black pixel points in the region for each region; and then, texture complexity corresponding to the N regions in the binary image a3 can be obtained. And then determining texture complexity weights corresponding to the N regions in the residual image A4 according to the texture complexity corresponding to the N regions in the binary image. And performing residual updating on the residual image A4 based on texture complexity weights respectively corresponding to the N areas in the residual image A4 to obtain a residual updated image, as shown in A5.
Illustratively, the reconstructed image a1 may be added to the residual updated image a5 to yield an enhanced image a 6.
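The Fig. 3b flow can be tied together with the helpers assumed in the earlier sketches (the ganef callable, region_slices, edge_proportion and attenuate_and_add are all illustrative names, not the patent's implementation): A1 → A2 via the GANEF, A4 = A2 − A1, per-region weights computed from A1, A5 = weighted A4, and A6 = A1 + A5.

```python
import numpy as np

def enhance_reconstructed(a1, ganef, n1, n2):
    a2 = ganef(a1)                                        # texture-enhanced image (A2)
    a4 = a2.astype(np.float32) - a1.astype(np.float32)    # residual image (A4)
    slices = region_slices(a1.shape[1], a1.shape[0], n1, n2)
    ratios = [edge_proportion(a1[ys, xs]) for ys, xs in slices]  # weights in [0, 1]
    return attenuate_and_add(a4, a1, slices, ratios)      # A6: final enhanced image
```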
Fig. 3c is a schematic diagram illustrating an exemplary image enhancement effect.
Referring to fig. 3c, fig. 3c (1) is a schematic diagram of a local region in the texture enhanced image a2 of fig. 3b, and fig. 3c (2) is a schematic diagram of a local region in the enhanced image a6 of fig. 3 b.
Illustratively, the road surface in fig. 3c (1) and fig. 3c (2) is a non-texture region. As can be seen by comparing the elliptical regions in fig. 3c (1) and fig. 3c (2), and likewise the rectangular regions in fig. 3c (1) and fig. 3c (2), the non-texture regions in the enhanced image obtained by the present application have no false stripes.
Therefore, the texture complexity weight is controlled to be in the range of 0 to 1 according to the texture complexity of the reconstructed image, so that the texture intensity corresponding to each region in the reconstructed image is attenuated, the texture of the texture region in the intermediate image can be reserved, and the texture in the non-texture region is attenuated, so that the false stripes generated by the non-texture region (with lower texture complexity) in the reconstructed image are avoided while the texture of the texture region (with higher texture complexity) in the reconstructed image is enhanced; thereby reducing the visual distortion of the reconstructed image. In addition, the texture complexity weight is controlled to be in the range of 0 to 1, so that the corresponding texture of the adjacent region is softer, and the problem of inconsistent enhanced texture effects of the adjacent region can be avoided.
In addition, compared with the prior art that two models are adopted for post-processing, only one model is used, and the computational complexity is reduced.
Fig. 4 is a schematic diagram of an exemplary process. In the embodiment of fig. 4, firstly, the GANEF is adopted to filter the reconstructed image, so as to obtain a residual image; then, texture complexity weights respectively corresponding to the regions of the residual image are determined based on the texture complexity of the regions in the image to be encoded that corresponds to the reconstructed image; the residual image is then updated according to the texture complexity weights corresponding to the regions, and finally the updated residual image and the reconstructed image are added to obtain the final enhanced image.
S401, decoding the code stream to obtain the reconstructed image and the texture complexity weights respectively corresponding to the regions in the image to be encoded.
For example, the encoding end may generate the texture complexity weights respectively corresponding to the regions in the image to be encoded based on the image to be encoded; this process may include S4011 to S4013:
s4011, according to a preset partition rule, dividing an image to be coded into N regions.
S4012, texture complexity of N regions in the image to be coded is respectively determined.
For example, S4011 to S4012 can refer to the descriptions of S2021 to S2022, which are not described herein again.
S4013, determining texture complexity weights corresponding to N regions in the image to be coded respectively based on the texture complexity of the N regions in the image to be coded.
The following illustrates the determination of the texture complexity weight corresponding to the i-th region in the image to be encoded (where i is an integer between 1 and N, inclusive).
In one possible approach, the texture complexity of the ith region in the image to be encoded may be used as the texture complexity weight corresponding to the ith region in the image to be encoded.
In a possible manner, the texture complexity of the ith region in the image to be encoded may be mapped according to a preset mapping rule, so as to obtain a texture complexity weight corresponding to the ith region in the image to be encoded. For example, normalization processing may be performed on the texture complexity of the ith region in the image to be encoded, so as to obtain the texture complexity weight corresponding to the ith region in the image to be encoded.
Illustratively, the texture complexity weight may be a fraction between 0 and 1.
Illustratively, the texture complexity weight is proportional to the texture complexity, i.e., the higher the texture complexity, the larger the texture complexity weight; the lower the texture complexity, the smaller the texture complexity weight.
Exemplarily, the encoding end may encode the image to be encoded to obtain a corresponding code stream, and may also encode the texture complexity weights respectively corresponding to the N regions in the image to be encoded to obtain a corresponding code stream. Both code streams may then be sent to the decoding end. In this way, after acquiring the code streams, the decoding end can decode them to obtain the reconstructed image and the texture complexity weights respectively corresponding to the N regions in the image to be encoded.
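As a very rough illustration of this signalling idea, the sketch below carries the per-region weights as simple 8-bit side information; the actual patent would use the video codec's own syntax, which is not specified here, so both functions are purely assumed:

```python
def encode_weights(weights):
    # weights: iterable of floats in [0, 1] -> one byte per region
    return bytes(int(round(w * 255)) for w in weights)

def decode_weights(payload):
    # inverse mapping at the decoding end
    return [b / 255.0 for b in payload]
```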
S402, image enhancement is carried out on the reconstructed image to obtain a residual image.
For example, the decoding end may perform image enhancement on the reconstructed image by using an image enhancement model, so as to obtain an intermediate image. Illustratively, the intermediate image is a residual image.
S402 may refer to the description of S201, which is not described herein again.
Illustratively, the resolution of the residual image is the same as the resolution of the reconstructed image and the resolution of the image to be encoded.
And S403, generating texture complexity weights corresponding to the regions in the residual image according to the texture complexity weights corresponding to the regions in the image to be coded.
For example, after the residual image is obtained, the residual image may be divided into N regions according to the preset partition rule set in advance. Since the residual image and the image to be encoded are divided in the same way and have the same resolution, the N regions in the residual image correspond one-to-one to the N regions in the image to be encoded. Furthermore, the texture complexity weight corresponding to the i-th region of the image to be encoded may be used as the texture complexity weight corresponding to the i-th region of the residual image. In this way, the texture complexity weights corresponding to the N regions in the residual image can be determined.
Illustratively, the texture complexity weight may be a fraction between 0 and 1.
For example, after obtaining texture complexity weights corresponding to N regions in the residual image, texture intensities corresponding to the regions in the residual image may be adjusted according to the texture complexity weights corresponding to the regions in the residual image, so as to obtain an enhanced image corresponding to the reconstructed image, which may refer to S404 to S405:
S404, perform residual update on the residual image based on the texture complexity weights corresponding to the regions in the residual image, to obtain a residual updated image.
S405, add the reconstructed image and the residual updated image to obtain an enhanced image.
For example, S404 to S405, the descriptions of S203 to S204 above may be referred to, and are not repeated herein.
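Purely as an illustrative sketch of S403 to S405 (not the actual implementation of this embodiment), the following assumes the N regions form a regular block grid, expands the per-region weights into a per-pixel map, attenuates the residual image with it, and adds the result to the reconstructed image; the block size and helper names are assumptions.

import numpy as np

def expand_region_weights(region_weights, block_h, block_w):
    """Expand a (rows, cols) grid of per-region weights into a per-pixel map."""
    return np.kron(region_weights, np.ones((block_h, block_w), dtype=np.float32))

def attenuate_and_add(reconstructed, residual, region_weights, block_h, block_w):
    """S404: scale the residual per region; S405: add it to the reconstruction."""
    weight_map = expand_region_weights(region_weights, block_h, block_w)
    residual_updated = residual * weight_map      # S404: residual update
    enhanced = reconstructed + residual_updated   # S405: enhanced image
    return np.clip(enhanced, 0.0, 255.0)

# Toy example: a 4x4 image split into four 2x2 regions.
recon = np.full((4, 4), 100.0, dtype=np.float32)
resid = np.random.uniform(-10, 10, size=(4, 4)).astype(np.float32)
weights = np.array([[1.0, 0.2],
                    [0.0, 0.7]], dtype=np.float32)  # texture complexity weights
enhanced = attenuate_and_add(recon, resid, weights, 2, 2)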
Therefore, the texture enhancement of each region in the reconstructed image is adjusted by controlling the texture complexity weight within the range of 0 to 1 according to the texture complexity of the image to be encoded. In this way, the texture of texture regions (with higher texture complexity) in the reconstructed image is enhanced while non-texture regions (with lower texture complexity) are prevented from generating false stripes, thereby reducing the visual distortion of the reconstructed image. In addition, because the texture complexity weight is controlled within the range of 0 to 1, the texture transition between adjacent regions is softer, which avoids the problem of inconsistent texture enhancement effects between adjacent regions.
In addition, compared with the prior art in which two models are used for post-processing, only one model is used here, which reduces the computational complexity.
Moreover, compared with a texture complexity weight determined from the reconstructed image, a texture complexity weight determined from the image to be encoded is more accurate, so the attenuation of the image texture intensity can be controlled more precisely, further improving the image quality.
Fig. 5 is a schematic diagram of an exemplary process. In the embodiment of fig. 5, a non-generative adversarial network (non-GANEF) and a generative adversarial network (GANEF) are first used to filter the reconstructed image, so as to obtain a basic fidelity image and a texture enhanced image. Then, based on the texture complexity corresponding to each region in the reconstructed image or the basic fidelity image, weighting calculation factors corresponding to the basic fidelity image and the texture enhanced image are determined. Finally, based on the weighting calculation factors, the basic fidelity image and the texture enhanced image are weighted and fused to obtain the final enhanced image.
S501, performing image enhancement on the reconstructed image to obtain a texture enhanced image.
Illustratively, after acquiring the code stream, the decoding end can decode the code stream to obtain a reconstructed image, and then perform image enhancement on the reconstructed image by using an image enhancement model to obtain an intermediate image. Illustratively, the intermediate image is a texture enhanced image.
Fig. 6 is a schematic diagram illustrating an exemplary image enhancement process.
In one possible approach, the output of the trained image enhancement model is a residual image. In this way, the reconstructed image is input to the image enhancement model, a residual image output by the image enhancement model can be obtained, and then the texture enhanced image can be determined according to the residual image and the reconstructed image. Optionally, the pixel values of the corresponding pixel points of the residual image and the reconstructed image may be added to obtain a texture-enhanced image, as shown in fig. 6 (1).
In one possible approach, the output of the trained image enhancement model is a texture enhanced image. In this way, after the reconstructed image is input to the image enhancement model, the texture enhanced image can be directly obtained, as shown in fig. 6 (2).
Illustratively, the texture enhanced image is at the same resolution as the reconstructed image.
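A minimal sketch (for illustration only) contrasting the two conventions of fig. 6: the enhancement model may output a residual that is added back to the reconstruction, or it may output the texture enhanced image directly; enhancement_model is an assumed placeholder for the trained network.

import numpy as np

def enhance_residual_style(reconstructed, enhancement_model):
    # Fig. 6 (1): the model predicts a residual, which is added to the reconstruction.
    residual = enhancement_model(reconstructed)
    return reconstructed + residual

def enhance_direct_style(reconstructed, enhancement_model):
    # Fig. 6 (2): the model directly outputs the texture enhanced image.
    return enhancement_model(reconstructed)

# Usage with a dummy model standing in for the trained network.
dummy_model = lambda x: np.zeros_like(x)
recon = np.ones((8, 8), dtype=np.float32)
out_residual_style = enhance_residual_style(recon, dummy_model)
out_direct_style = enhance_direct_style(recon, dummy_model)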
S502, perform image fidelity processing on the reconstructed image to obtain a basic fidelity image.
For example, a preset image fidelity model may be used to perform image fidelity processing on the reconstructed image, resulting in a basic fidelity image.
For example, the network used by the image fidelity model may be a non-generative adversarial network, such as a convolutional neural network, which is not limited in this application.
Illustratively, the training process for the image fidelity model may be as follows: collect multiple sets of training data, where one set of training data includes an image to be encoded and a reconstructed image obtained by decoding the code stream of that image to be encoded. For a set of training data, input the set of training data into the image fidelity model; the image fidelity model performs forward calculation on the reconstructed image and outputs a basic fidelity image. A loss function value is calculated based on the basic fidelity image output by the image fidelity model and the image to be encoded in the training data, and the weight parameters of the image fidelity model are adjusted with the goal of minimizing the loss function value. Then, in the above manner, the image fidelity model is trained with the collected sets of training data until the number of training iterations of the image fidelity model equals a preset number of training iterations, or the loss function value of the image fidelity model is less than or equal to a loss function threshold, or the effect of the image fidelity model meets a preset effect condition; training is then stopped to obtain the trained image fidelity model. For example, the preset number of training iterations, the loss function threshold and the preset effect condition may all be set as required, which is not limited in this application.
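The following PyTorch-style sketch only illustrates the kind of training loop described above; the network architecture, the L1 loss, the optimizer settings, the dummy data and the stopping rule are assumptions for the example, not choices prescribed by this embodiment.

import torch
import torch.nn as nn

# A small convolutional network standing in for the non-GAN image fidelity model.
fidelity_model = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 3, 3, padding=1),
)
optimizer = torch.optim.Adam(fidelity_model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()                          # assumed fidelity-oriented loss
loss_threshold, max_steps = 1e-3, 100          # assumed stopping conditions

# training_pairs: (reconstructed image, image to be encoded) tensors, NCHW in [0, 1];
# random tensors are used here only so the sketch runs end to end.
training_pairs = [(torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)) for _ in range(4)]

for step, (recon, target) in enumerate(training_pairs * (max_steps // len(training_pairs))):
    base_fidelity = fidelity_model(recon)      # forward calculation on the reconstructed image
    loss = loss_fn(base_fidelity, target)      # compare with the image to be encoded
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    if loss.item() <= loss_threshold or step + 1 >= max_steps:
        break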
It should be noted that the present application does not limit the execution order of S501 and S502.
Illustratively, the texture enhanced image and the base fidelity image are both at the same resolution as the reconstructed image.
S503, generating texture complexity weights corresponding to the regions in the texture enhanced image based on the basic fidelity image.
Illustratively, S503 may include S5031 a-S5034 a:
S5031a, according to the preset partition rule, dividing the basic fidelity image and the texture enhanced image into N regions, respectively.
S5032a, determining the texture complexity of the N regions in the base fidelity image, respectively.
S5033a, determining texture complexity weights corresponding to the N regions in the texture enhanced image based on the texture complexity of the N regions in the underlying fidelity image.
For example, S5031 a-S5033 a can refer to the descriptions of S2021-S2023 above, and are not described herein again.
In one possible embodiment, the texture complexity weight corresponding to each region in the texture enhanced image may be generated based on the reconstructed image, and reference may be made to S5031b to S5033 b:
S5031b, dividing the reconstructed image and the texture enhanced image into N regions according to a preset partition rule.
S5032b, determining the texture complexity of the N regions in the reconstructed image respectively.
S5033b, determining texture complexity weights corresponding to the N regions in the texture enhanced image based on the texture complexity of the N regions in the reconstructed image.
For example, S5031 b-S5033 b can refer to the descriptions of S2021-S2023 above, and are not described herein again.
S504, according to the texture complexity weight corresponding to each region in the texture enhanced image, the basic fidelity image and the texture enhanced image are subjected to weighted fusion to obtain an enhanced image.
Illustratively, after the texture enhanced image and the basic fidelity image are both divided into N regions, the regions of the texture enhanced image and the basic fidelity image are in one-to-one correspondence. In this way, each region in the texture enhanced image can be weighted and fused with the corresponding region in the basic fidelity image, and the enhanced image can be obtained.
Illustratively, S504 may include S5041-S5042:
S5041, determining the weighting calculation weights corresponding to the N regions in the texture enhanced image and the weighting calculation weights corresponding to the N regions in the basic fidelity image according to the texture complexity weights corresponding to the N regions in the texture enhanced image.
For example, the texture complexity weights corresponding to the N regions in the texture enhanced image may be used as the weighting calculation weights of the N regions in the texture enhanced image, and the differences between 1 and the texture complexity weights of the N regions in the texture enhanced image may be used as the weighting calculation weights of the N regions in the basic fidelity image.
For example, for the ith region, if the texture complexity weight corresponding to the ith region of the texture enhanced image is ratio_i, the weighting calculation weight of the ith region of the texture enhanced image may be ratio_i, and the weighting calculation weight of the ith region of the basic fidelity image is 1 - ratio_i.
S5042, carrying out weighted calculation on the N areas in the texture enhanced image and the N areas in the basic fidelity image according to the weighted calculation weights respectively corresponding to the N areas in the texture enhanced image and the weighted calculation weights respectively corresponding to the N areas in the basic fidelity image, and obtaining an enhanced image.
Illustratively, for the ith region, weighting calculation is performed on each pixel point of the ith region in the texture enhanced image and the corresponding pixel point of the ith region in the basic fidelity image according to the weighting calculation weight of the ith region in the texture enhanced image and the weighting calculation weight of the ith region in the basic fidelity image, so as to obtain the ith region of the enhanced image.
For example, the weighting calculation weights corresponding to the N regions in the texture enhanced image may be multiplied by the N regions in the texture enhanced image, respectively, to obtain a first product; respectively multiplying the weighted calculation weights respectively corresponding to the N areas in the basic fidelity image with the N areas in the basic fidelity image to obtain a second product; and adding the first product and the second product to obtain an enhanced image corresponding to the reconstructed image.
For example, assume that the weighting calculation weight of the ith region in the texture enhanced image is ratio_i and the pixel value of a pixel point (j, k) in the ith region of the texture enhanced image is E1(j, k); and that the weighting calculation weight of the ith region in the basic fidelity image is 1 - ratio_i and the pixel value of the pixel point (j, k) in the ith region of the basic fidelity image is E2(j, k). Then the pixel value of the pixel point (j, k) in the ith region of the reconstructed image after image enhancement is R(j, k) = ratio_i × E1(j, k) + (1 - ratio_i) × E2(j, k).
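Restating the formula above as a runnable toy example (an illustrative sketch only), the per-region weighted fusion R(j, k) = ratio_i × E1(j, k) + (1 - ratio_i) × E2(j, k) can be implemented as below, assuming the N regions form a regular block grid; the block size and function names are assumptions.

import numpy as np

def fuse_regions(texture_enhanced, base_fidelity, region_ratios, block_h, block_w):
    """Blend the two images region by region: ratio_i * E1 + (1 - ratio_i) * E2."""
    ratio_map = np.kron(region_ratios, np.ones((block_h, block_w), dtype=np.float32))
    return ratio_map * texture_enhanced + (1.0 - ratio_map) * base_fidelity

# Toy example: a 4x4 image split into four 2x2 regions.
E1 = np.full((4, 4), 200.0, dtype=np.float32)   # texture enhanced image
E2 = np.full((4, 4), 100.0, dtype=np.float32)   # basic fidelity image
ratios = np.array([[1.0, 0.5],
                   [0.0, 0.8]], dtype=np.float32)
enhanced = fuse_regions(E1, E2, ratios, 2, 2)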
Therefore, the texture enhancement of each region in the reconstructed image is adjusted by controlling the texture complexity weight within the range of 0 to 1 according to the texture complexity of the reconstructed image. In this way, the texture of texture regions (with higher texture complexity) in the reconstructed image is enhanced while non-texture regions (with lower texture complexity) are prevented from generating false stripes, thereby reducing the visual distortion of the reconstructed image. In addition, because the texture complexity weight is controlled within the range of 0 to 1, the texture transition between adjacent regions is softer, which avoids the problem of inconsistent texture enhancement effects between adjacent regions.
Fig. 7 is a schematic diagram of an exemplary process. In the embodiment of fig. 7, a non-generative adversarial network (non-GANEF) and a generative adversarial network (GANEF) are first used to filter the reconstructed image, so as to obtain a basic fidelity image and a texture enhanced image. Then, based on the texture complexity corresponding to each region in the image to be encoded corresponding to the reconstructed image, or in the basic fidelity image, weighting calculation factors corresponding to the basic fidelity image and the texture enhanced image are determined. Finally, based on the weighting calculation factors, the basic fidelity image and the texture enhanced image are weighted and fused to obtain the final enhanced image.
S701, decode the code stream to obtain the reconstructed image and the texture complexity weights corresponding to the regions in the image to be encoded.
For example, the encoding end may generate the texture complexity weights based on the image to be encoded, which may include S7011 to S7013:
S7011, according to a preset partition rule, dividing the image to be encoded into N regions.
S7012, texture complexity of N areas in the image to be coded is respectively determined.
S7013, determining texture complexity weights corresponding to the N areas in the image to be coded respectively based on the texture complexity of the N areas in the image to be coded.
For example, S7011 to S7013 may refer to the descriptions of S2021 to S2023, which are not described herein again.
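As a rough sketch of the encoder-side steps S7011 to S7013 (for illustration only), the snippet below partitions the image into a regular block grid, uses local variance as a stand-in texture-complexity measure (the actual measure is described in S2022 and is not reproduced here), and normalizes the complexities into weights in [0, 1]; the block size, the variance measure and the function names are assumptions.

import numpy as np

def encoder_side_weights(image, block_h, block_w, eps=1e-8):
    """Sketch of S7011-S7013: partition, measure complexity, normalize to weights."""
    rows, cols = image.shape[0] // block_h, image.shape[1] // block_w
    complexity = np.zeros((rows, cols), dtype=np.float32)
    for r in range(rows):                               # S7011: N = rows * cols regions
        for c in range(cols):
            block = image[r * block_h:(r + 1) * block_h, c * block_w:(c + 1) * block_w]
            complexity[r, c] = block.var()              # S7012: assumed complexity measure
    c_min, c_max = complexity.min(), complexity.max()   # S7013: map complexities to [0, 1]
    return (complexity - c_min) / (c_max - c_min + eps)

img = np.random.randint(0, 256, size=(64, 64)).astype(np.float32)
weights = encoder_side_weights(img, 16, 16)             # 4x4 grid of per-region weights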
S702, carrying out image enhancement on the reconstructed image to obtain a texture enhanced image.
For example, S702 may refer to the description of S502 above, and is not described herein again.
S703, determine texture complexity weights respectively corresponding to the regions in the texture enhanced image according to the texture complexity weights corresponding to the regions in the image to be encoded.
For example, after the texture enhanced image is obtained, the texture enhanced image may be divided into N regions according to the preset partition rule. Because the texture enhanced image is divided into regions in the same manner as the image to be encoded, and the resolution of the texture enhanced image is the same as that of the image to be encoded, the regions in the texture enhanced image correspond one-to-one to the regions in the image to be encoded. Furthermore, the texture complexity weight corresponding to the ith region of the image to be encoded may be used as the texture complexity weight corresponding to the ith region of the texture enhanced image. In this way, the texture complexity weights corresponding to the N regions in the texture enhanced image may be determined.
Illustratively, the texture complexity weight may be a fraction between 0 and 1.
For example, after obtaining the texture complexity weights corresponding to the N regions in the texture enhanced image, the texture intensity corresponding to each region in the texture enhanced image may be adjusted according to the texture complexity weight corresponding to each region in the texture enhanced image, so as to obtain an enhanced image corresponding to the reconstructed image, and reference may be made to S704 to S705:
S704, image fidelity is performed on the reconstructed image, and a basic fidelity image is obtained.
For example, S704 may refer to the description of S502 above, and will not be described herein again.
For example, the present application does not limit the execution order of S704 and S702.
S705, according to the texture complexity weight corresponding to each region in the texture enhanced image, carrying out weighted fusion on the basic fidelity image and the texture enhanced image to obtain an enhanced image.
For example, S705 may refer to the description of S504 above, and is not described herein again.
Therefore, the texture enhancement of each region in the reconstructed image is adjusted by controlling the texture complexity weight within the range of 0 to 1 according to the texture complexity of the image to be encoded. In this way, the texture of texture regions (with higher texture complexity) in the reconstructed image is enhanced while non-texture regions (with lower texture complexity) are prevented from generating false stripes, thereby reducing the visual distortion of the reconstructed image. In addition, because the texture complexity weight is controlled within the range of 0 to 1, the texture transition between adjacent regions is softer, which avoids the problem of inconsistent texture enhancement effects between adjacent regions.
Moreover, compared with a texture complexity weight determined from the reconstructed image, a texture complexity weight determined from the image to be encoded is more accurate, so the attenuation of the image texture intensity can be controlled more precisely, further improving the image quality.
Fig. 8 is a schematic diagram of an exemplary image processing apparatus.
Referring to fig. 8, the image processing apparatus illustratively includes: an image acquisition module 801, an image enhancement module 802, a texture weight determination module 803, and a texture attenuation module 804, wherein:
an image obtaining module 801, configured to obtain a reconstructed image;
an image enhancement module 802, configured to perform image enhancement on the reconstructed image to obtain an intermediate image;
a texture weight determining module 803, configured to determine a texture complexity weight corresponding to each region in the intermediate image, where the texture complexity weight is a number between 0 and 1;
and the texture attenuation module 804 is configured to attenuate the texture intensity corresponding to each region in the intermediate image according to the texture complexity weight corresponding to each region in the intermediate image, so as to obtain an enhanced image corresponding to the reconstructed image.
Illustratively, the intermediate image is a residual image;
a texture attenuation module 804 comprising:
the residual error updating module is used for multiplying the pixel value of each pixel point in the residual error image by the texture complexity weight corresponding to the region to which each pixel point belongs respectively to obtain a residual error updated image;
and the image generation module is used for updating the image and reconstructing the image according to the residual error to generate an enhanced image.
Illustratively, the image generation module is specifically configured to add the residual updated image and the reconstructed image to obtain an enhanced image.
Illustratively, the intermediate image is a texture enhanced image, the apparatus further comprising:
and the image fidelity module is used for performing image fidelity on the reconstructed image to obtain a basic fidelity image.
Illustratively, the texture weight determining module 803 is specifically configured to divide the basic fidelity image and the texture enhanced image into N regions according to a preset partition rule, where N is a positive integer; respectively determining the texture complexity of N areas in the basic fidelity image; and determining texture complexity weights respectively corresponding to the N regions in the texture enhanced image based on the texture complexity of the N regions in the basic fidelity image.
Illustratively, the texture attenuation module 804 includes:
the weighted weight determining module is used for determining weighted calculation weights respectively corresponding to the N areas in the texture enhanced image and weighted calculation weights respectively corresponding to the N areas in the basic fidelity image according to texture complexity weights corresponding to the N areas in the texture enhanced image;
and the weighting calculation module is used for performing weighting calculation on the N areas in the texture enhanced image and the N areas in the basic fidelity image according to the weighting calculation weights respectively corresponding to the N areas in the texture enhanced image and the weighting calculation weights respectively corresponding to the N areas in the basic fidelity image to obtain an enhanced image corresponding to the reconstructed image.
Exemplarily, the weighting calculation module is specifically configured to multiply the weighting calculation weights respectively corresponding to the N regions in the texture enhanced image by the N regions in the texture enhanced image, respectively, to obtain a first product; respectively multiplying the weighted calculation weights respectively corresponding to the N areas in the basic fidelity image with the N areas in the basic fidelity image to obtain a second product; and adding the first product and the second product to obtain an enhanced image corresponding to the reconstructed image.
Exemplarily, the weighting weight determining module is specifically configured to determine the texture complexity weights corresponding to the N regions in the texture enhanced image as the weighting calculation weights corresponding to the N regions in the texture enhanced image; and determine the differences between 1 and the texture complexity weights corresponding to the N regions in the texture enhanced image as the weighting calculation weights corresponding to the N regions in the basic fidelity image.
Illustratively, the texture weight determining module 803 is specifically configured to decode texture complexity weights corresponding to N regions in the intermediate image from the received code stream, where N is a positive integer.
Illustratively, the texture weight determining module 803 is specifically configured to divide the reconstructed image and the intermediate image into N regions according to a preset partition rule, where N is a positive integer; respectively determining the texture complexity of N areas in a reconstructed image; and determining texture complexity weights respectively corresponding to the N areas in the intermediate image based on the texture complexity of the N areas in the reconstructed image.
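Only to illustrate how the module decomposition of fig. 8 might fit together, a skeletal Python composition is given below; the class name, method names and processing order mirror modules 801 to 804 as described above, while the method bodies are placeholders rather than the apparatus's actual logic.

class ImageProcessingApparatus:
    """Skeleton mirroring modules 801-804; the bodies are placeholders."""

    def acquire_reconstructed_image(self, code_stream):   # image acquisition module 801
        raise NotImplementedError

    def enhance(self, reconstructed):                      # image enhancement module 802
        raise NotImplementedError

    def texture_weights(self, intermediate):               # texture weight determination module 803
        raise NotImplementedError

    def attenuate(self, intermediate, weights, reconstructed):  # texture attenuation module 804
        raise NotImplementedError

    def process(self, code_stream):
        reconstructed = self.acquire_reconstructed_image(code_stream)
        intermediate = self.enhance(reconstructed)
        weights = self.texture_weights(intermediate)
        return self.attenuate(intermediate, weights, reconstructed)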
In one example, fig. 9 shows a schematic block diagram of an apparatus 900 of an embodiment of the present application, where the apparatus 900 may include: a processor 901 and transceiver/transceiver pins 902, and optionally, memory 903.
The various components of the device 900 are coupled together by a bus 904, where the bus 904 includes a power bus, a control bus and a status signal bus in addition to a data bus. However, for clarity of illustration, the various buses are all referred to in the figure as the bus 904.
Optionally, the memory 903 may be used for instructions in the aforementioned method embodiments. The processor 901 is operable to execute instructions in the memory 903 and to control the receive pin to receive signals and the transmit pin to transmit signals.
The apparatus 900 may be an electronic device or a chip of an electronic device in the above method embodiments.
All relevant contents of each step related to the above method embodiment may be referred to the functional description of the corresponding functional module, and are not described herein again.
The present embodiment also provides a computer-readable storage medium, in which computer instructions are stored, and when the computer instructions are executed on an electronic device, the electronic device executes the above related method steps to implement the image processing method in the above embodiment.
The present embodiment also provides a computer program product, which when run on a computer causes the computer to execute the above-mentioned related steps to implement the image processing method in the above-mentioned embodiment.
In addition, embodiments of the present application also provide an apparatus, which may be specifically a chip, a component or a module, and may include a processor and a memory connected to each other; the memory is used for storing computer execution instructions, and when the device runs, the processor can execute the computer execution instructions stored in the memory, so that the chip can execute the image processing method in the above-mentioned method embodiments.
The electronic device, the computer-readable storage medium, the computer program product, or the chip provided in this embodiment are all configured to execute the corresponding method provided above, so that the beneficial effects achieved by the electronic device, the computer-readable storage medium, the computer program product, or the chip may refer to the beneficial effects in the corresponding method provided above, and are not described herein again.
Through the description of the above embodiments, those skilled in the art will understand that, for convenience and simplicity of description, only the division of the above functional modules is used as an example, and in practical applications, the above function distribution may be completed by different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the above described functions.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative: the division into modules or units is merely a division by logical function, and there may be other division manners in actual implementation; for example, a plurality of units or components may be combined or integrated into another apparatus, or some features may be omitted or not executed. In addition, the shown or discussed mutual coupling, direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
Units described as separate parts may or may not be physically separate, and parts displayed as units may be one physical unit or a plurality of physical units, may be located in one place, or may be distributed to a plurality of different places. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
Any content of the various embodiments of the present application, as well as any content within the same embodiment, may be freely combined. Any such combination falls within the scope of the present application.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application essentially, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions to enable a device (which may be a single-chip microcomputer, a chip or the like) or a processor to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disc.
The steps of a method or algorithm described in connection with the disclosure of the embodiments of the application may be embodied in hardware or in software instructions executed by a processor. The software instructions may consist of corresponding software modules that may be stored in random access memory (RAM), flash memory, read-only memory (ROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), registers, a hard disk, a removable disk, a compact disc read-only memory (CD-ROM), or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. Of course, the storage medium may also be integral to the processor. The processor and the storage medium may reside in an ASIC.
Those skilled in the art will recognize that, in one or more of the examples described above, the functions described in the embodiments of the present application may be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes both computer-readable storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage media may be any available media that can be accessed by a general purpose or special purpose computer.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (24)

1. An image processing method, characterized in that the method comprises:
acquiring a reconstructed image;
carrying out image enhancement on the reconstructed image to obtain an intermediate image;
determining texture complexity weights respectively corresponding to all regions in the intermediate image, wherein the texture complexity weights are numbers between 0 and 1;
and respectively attenuating the texture intensity corresponding to each region in the intermediate image according to the texture complexity weight corresponding to each region in the intermediate image, so as to obtain an enhanced image corresponding to the reconstructed image.
2. The method of claim 1, wherein the intermediate image is a residual image;
the attenuating the texture intensity corresponding to each region in the intermediate image respectively according to the texture complexity weight corresponding to each region in the intermediate image to obtain an enhanced image corresponding to the reconstructed image includes:
multiplying the pixel value of each pixel point in the residual image by the texture complexity weight corresponding to the region to which each pixel point belongs to obtain a residual updated image;
and generating the enhanced image according to the residual updating image and the reconstructed image.
3. The method of claim 2, wherein generating the enhanced image from the residual updated image and the reconstructed image comprises:
and adding the residual updating image and the reconstructed image to obtain the enhanced image.
4. The method of claim 1, wherein the intermediate image is a texture enhanced image, the method further comprising:
and performing image fidelity on the reconstructed image to obtain a basic fidelity image.
5. The method of claim 4, wherein determining the texture complexity weight corresponding to each region in the intermediate image comprises:
according to a preset partition rule, dividing the basic fidelity image and the texture enhanced image into N regions respectively, wherein N is a positive integer;
respectively determining the texture complexity of N areas in the basic fidelity image;
and determining texture complexity weights corresponding to the N regions in the texture enhanced image respectively based on the texture complexity of the N regions in the basic fidelity image.
6. The method according to claim 5, wherein the attenuating the texture intensities corresponding to the regions in the intermediate image respectively according to the texture complexity weights corresponding to the regions in the intermediate image to obtain the enhanced image corresponding to the reconstructed image comprises:
determining weighting calculation weights respectively corresponding to the N regions in the texture enhanced image and weighting calculation weights respectively corresponding to the N regions in the basic fidelity image according to texture complexity weights corresponding to the N regions in the texture enhanced image;
and carrying out weighted calculation on the N regions in the texture enhanced image and the N regions in the basic fidelity image according to the weighted calculation weights respectively corresponding to the N regions in the texture enhanced image and the weighted calculation weights respectively corresponding to the N regions in the basic fidelity image to obtain an enhanced image corresponding to the reconstructed image.
7. The method according to claim 6, wherein the performing weighted computation on the N regions in the texture-enhanced image and the N regions in the basic fidelity image according to the weighted computation weights corresponding to the N regions in the texture-enhanced image and the weighted computation weights corresponding to the N regions in the basic fidelity image, respectively, to obtain an enhanced image corresponding to the reconstructed image comprises:
respectively multiplying the weighted calculation weights corresponding to the N areas in the texture enhanced image with the N areas in the texture enhanced image to obtain a first product;
respectively multiplying the weighting calculation weights respectively corresponding to the N areas in the basic fidelity image with the N areas in the basic fidelity image to obtain a second product;
and adding the first product and the second product to obtain an enhanced image corresponding to the reconstructed image.
8. The method according to claim 6, wherein the determining the weighted computation weights corresponding to the N regions in the texture enhanced image and the weighted computation weights corresponding to the N regions in the basic fidelity image according to the texture complexity weights corresponding to the N regions in the texture enhanced image respectively comprises:
determining texture complexity weights respectively corresponding to the N areas in the texture enhanced image as weighted calculation weights respectively corresponding to the N areas in the texture enhanced image;
and determining the differences between 1 and the texture complexity weights respectively corresponding to the N areas in the texture enhanced image as the weighted calculation weights respectively corresponding to the N areas in the basic fidelity image.
9. The method according to any one of claims 1 to 8, wherein the determining the texture complexity weight corresponding to each region in the intermediate image comprises:
and decoding texture complexity weights respectively corresponding to N areas in the intermediate image from the received code stream, wherein N is a positive integer.
10. The method according to claim 1, 2, 3, 4, 6, 7, 8 or 9, wherein the determining the texture complexity weight corresponding to each region in the intermediate image comprises:
according to a preset partition rule, dividing the reconstructed image and the intermediate image into N areas respectively, wherein N is a positive integer;
respectively determining the texture complexity of N areas in the reconstructed image;
and determining texture complexity weights corresponding to the N regions in the intermediate image respectively based on the texture complexity of the N regions in the reconstructed image.
11. An image processing apparatus, characterized in that the apparatus comprises:
the image acquisition module is used for acquiring a reconstructed image;
the image enhancement module is used for carrying out image enhancement on the reconstructed image to obtain an intermediate image;
a texture weight determining module, configured to determine a texture complexity weight corresponding to each region in the intermediate image, where the texture complexity weight is a number between 0 and 1;
and the texture attenuation module is used for respectively attenuating the texture intensity corresponding to each region in the intermediate image according to the texture complexity weight corresponding to each region in the intermediate image, so as to obtain an enhanced image corresponding to the reconstructed image.
12. The apparatus of claim 11, wherein the intermediate image is a residual image;
the texture attenuation module comprising:
the residual error updating module is used for multiplying the pixel value of each pixel point in the residual error image by the texture complexity weight corresponding to the region to which each pixel point belongs respectively to obtain a residual error updated image;
and the image generation module is used for generating the enhanced image according to the residual error updating image and the reconstructed image.
13. The apparatus of claim 12,
the image generation module is specifically configured to add the residual updated image and the reconstructed image to obtain the enhanced image.
14. The apparatus of claim 11, wherein the intermediate image is a texture enhanced image, the apparatus further comprising:
and the image fidelity module is used for performing image fidelity on the reconstructed image to obtain a basic fidelity image.
15. The apparatus of claim 14,
the texture weight determining module is specifically configured to divide the basic fidelity image and the texture enhanced image into N regions according to a preset partition rule, where N is a positive integer; respectively determining the texture complexity of N areas in the basic fidelity image; and determining texture complexity weights corresponding to the N regions in the texture enhanced image respectively based on the texture complexity of the N regions in the basic fidelity image.
16. The apparatus of claim 15, wherein the texture attenuation module comprises:
a weighted weight determining module, configured to determine, according to texture complexity weights corresponding to N regions in the texture enhanced image, weighted calculation weights corresponding to the N regions in the texture enhanced image and weighted calculation weights corresponding to the N regions in the basic fidelity image;
and the weighting calculation module is used for performing weighting calculation on the N areas in the texture enhanced image and the N areas in the basic fidelity image according to the weighting calculation weights respectively corresponding to the N areas in the texture enhanced image and the weighting calculation weights respectively corresponding to the N areas in the basic fidelity image to obtain an enhanced image corresponding to the reconstructed image.
17. The apparatus of claim 16,
the weighting calculation module is specifically configured to multiply the weighting calculation weights respectively corresponding to the N regions in the texture enhanced image by the N regions in the texture enhanced image, respectively, to obtain a first product; respectively multiplying the weighting calculation weights respectively corresponding to the N areas in the basic fidelity image with the N areas in the basic fidelity image to obtain a second product; and adding the first product and the second product to obtain an enhanced image corresponding to the reconstructed image.
18. The apparatus of claim 16,
the weighting weight determining module is specifically configured to determine texture complexity weights corresponding to the N regions in the texture enhanced image as weighting calculation weights corresponding to the N regions in the texture enhanced image; and determine the differences between 1 and the texture complexity weights corresponding to the N regions in the texture enhanced image as the weighting calculation weights corresponding to the N regions in the basic fidelity image.
19. The apparatus of any one of claims 11 to 18,
the texture weight determining module is specifically configured to decode texture complexity weights corresponding to N regions in the intermediate image from the received code stream, where N is a positive integer.
20. The apparatus of claim 11 or 12 or 13 or 14 or 16 or 17 or 18 or 19,
the texture weight determining module is specifically configured to divide the reconstructed image and the intermediate image into N regions according to a preset partition rule, where N is a positive integer; respectively determining the texture complexity of N areas in the reconstructed image; and determining texture complexity weights corresponding to the N regions in the intermediate image respectively based on the texture complexity of the N regions in the reconstructed image.
21. An electronic device, comprising:
a memory and a processor, the memory coupled with the processor;
the memory stores program instructions that, when executed by the processor, cause the electronic device to perform the image processing method of any one of claims 1 to 10.
22. A chip comprising one or more interface circuits and one or more processors; the interface circuit is configured to receive signals from a memory of an electronic device and to transmit the signals to the processor, the signals including computer instructions stored in the memory; the computer instructions, when executed by the processor, cause the electronic device to perform the image processing method of any of claims 1 to 10.
23. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when run on a computer or a processor, causes the computer or the processor to execute an image processing method according to any one of claims 1 to 10.
24. A computer program product, characterized in that it contains a software program which, when executed by a computer or processor, causes the steps of the method of any one of claims 1 to 10 to be performed.
CN202111511051.7A 2021-12-10 2021-12-10 Image processing method and device and electronic equipment Pending CN114298922A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202111511051.7A CN114298922A (en) 2021-12-10 2021-12-10 Image processing method and device and electronic equipment
PCT/CN2022/131594 WO2023103715A1 (en) 2021-12-10 2022-11-14 Image processing method and apparatus, and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111511051.7A CN114298922A (en) 2021-12-10 2021-12-10 Image processing method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN114298922A true CN114298922A (en) 2022-04-08

Family

ID=80967411

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111511051.7A Pending CN114298922A (en) 2021-12-10 2021-12-10 Image processing method and device and electronic equipment

Country Status (2)

Country Link
CN (1) CN114298922A (en)
WO (1) WO2023103715A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023103715A1 (en) * 2021-12-10 2023-06-15 华为技术有限公司 Image processing method and apparatus, and electronic device

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116567286B (en) * 2023-07-10 2023-09-22 武汉幻忆信息科技有限公司 Online live video processing method and system based on artificial intelligence

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109982082B (en) * 2019-05-05 2022-11-15 山东大学 HEVC multi-distortion criterion rate-distortion optimization method based on local texture characteristics
CN110246093B (en) * 2019-05-05 2021-05-04 北京大学 Method for enhancing decoded image
CN112634278B (en) * 2020-10-30 2022-06-14 上海大学 Super-pixel-based just noticeable distortion method
CN113744235A (en) * 2021-08-30 2021-12-03 河南工业大学 Knee MRI reconstruction technology based on SC-GAN
CN114298922A (en) * 2021-12-10 2022-04-08 华为技术有限公司 Image processing method and device and electronic equipment

Also Published As

Publication number Publication date
WO2023103715A1 (en) 2023-06-15

Similar Documents

Publication Publication Date Title
CN113658051B (en) Image defogging method and system based on cyclic generation countermeasure network
WO2023103715A1 (en) Image processing method and apparatus, and electronic device
CN109949219B (en) Reconstruction method, device and equipment of super-resolution image
CN111079764B (en) Low-illumination license plate image recognition method and device based on deep learning
CN112365418B (en) Image distortion evaluation method and device and computer equipment
CN110675334A (en) Image enhancement method and device
CN111340866A (en) Depth image generation method, device and storage medium
CN111598796A (en) Image processing method and device, electronic device and storage medium
CN114782564B (en) Point cloud compression method and device, electronic equipment and storage medium
CN110830808A (en) Video frame reconstruction method and device and terminal equipment
CN107908998A (en) Quick Response Code coding/decoding method, device, terminal device and computer-readable recording medium
CN111046893A (en) Image similarity determining method and device, and image processing method and device
CN115278257A (en) Image compression method and device, electronic equipment and storage medium
CN108986210B (en) Method and device for reconstructing three-dimensional scene
CN108491747B (en) Method for beautifying QR (quick response) code after image fusion
CN111784699A (en) Method and device for carrying out target segmentation on three-dimensional point cloud data and terminal equipment
WO2022067790A1 (en) Point cloud layering method, decoder, encoder, and storage medium
CN110458754B (en) Image generation method and terminal equipment
CN116503508A (en) Personalized model construction method, system, computer and readable storage medium
CN112541972A (en) Viewpoint image processing method and related equipment
GB2571818A (en) Selecting encoding options
CN115359170A (en) Scene data generation method and device, electronic equipment and storage medium
CN112767539B (en) Image three-dimensional reconstruction method and system based on deep learning
CN111179326B (en) Monocular depth estimation method, system, equipment and storage medium
CN113205579A (en) Three-dimensional reconstruction method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination