CN111415311B - Resource-saving image quality enhancement model

Resource-saving image quality enhancement model

Info

Publication number
CN111415311B
CN111415311B
Authority
CN
China
Prior art keywords
quality
image
quality enhancement
enhancement
enhanced
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010231334.5A
Other languages
Chinese (zh)
Other versions
CN111415311A (en)
Inventor
徐迈
幸群亮
关振宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Innovation Research Institute of Beihang University
Original Assignee
Hangzhou Innovation Research Institute of Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Innovation Research Institute of Beihang University filed Critical Hangzhou Innovation Research Institute of Beihang University
Priority to CN202010231334.5A priority Critical patent/CN111415311B/en
Publication of CN111415311A publication Critical patent/CN111415311A/en
Application granted granted Critical
Publication of CN111415311B publication Critical patent/CN111415311B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30168 Image quality inspection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the invention relates to a resource-saving image quality enhancement model. The model includes a quality enhancement module and a quality evaluation module. The quality enhancement module comprises quality enhancement units at a plurality of levels and progressively optimizes the input image level by level. The quality evaluation module judges the image optimization result of each quality enhancement unit and outputs the enhanced image once the optimization result meets a preset condition, so that the image is enhanced progressively while its quality is evaluated, and enhancement terminates with the enhanced image output once the image quality reaches a preset quality threshold. The invention solves the technical problem in the related art that separate models must be trained for compressed images of different qualities, so that blind quality enhancement of images cannot be achieved, and at the same time saves computing resources.

Description

Resource-saving image quality enhancement model
Technical Field
The invention relates to the field of computers, in particular to a resource-saving image quality enhancement model.
Background
At present, visual media (images and videos) are growing explosively. To alleviate the resulting pressure on limited bandwidth resources, lossy compression techniques for images and videos, such as JPEG, JPEG2000 and HEVC, are widely applied. However, compressed images and videos suffer from compression artifacts such as blocking, ringing and blurring. These artifacts degrade the user experience and reduce the accuracy of visual tasks such as compressed-image classification. Quality enhancement of compressed images has therefore become a research hotspot in recent years.
To enhance the quality of compressed images, most existing methods have to train a number of models simultaneously: when the quality of the compressed images differs, the corresponding enhancement models also differ. For example, in the HEVC video compression standard, the Quantization Parameter (QP) is an important parameter that controls the degree of compression and thus the compressed quality, so for HEVC-compressed images most approaches train separate models for different QPs. This has two main drawbacks: (1) training a large number of models consumes a large amount of computing resources; (2) the quality differences among compressed images are not taken into account, since every model has the same structure and the same computational complexity. Intuitively, a high-quality compressed image needs only simple enhancement to reach a satisfactory result, whereas a low-quality compressed image requires more sophisticated enhancement. Furthermore, these enhancement inference processes (or network structures) of varying complexity can be shared to some extent.
In view of the above problems, no effective solution has been proposed.
Disclosure of Invention
The embodiment of the invention provides a resource-saving image quality enhancement model, which at least solves the technical problem in the related art that separate models must be trained for compressed images of different qualities, so that blind quality enhancement of images cannot be achieved, and at the same time saves computing resources.
According to an aspect of the embodiments of the present invention, there is provided a resource-saving image quality enhancement model, which includes a quality enhancement module and a quality evaluation module, wherein: the quality enhancement module comprises a plurality of levels of quality enhancement units, and performs progressive optimization on the input image according to the levels of the quality enhancement units; and the quality evaluation module is used for respectively judging the image optimization results of the quality enhancement units and outputting enhanced images when the image optimization results meet preset conditions.
Further, the quality enhancement module comprises a nested (N+1)-level U-Net, in which N U-Nets are nested, where N is a positive integer.
Further, there are dense connections between different levels of U-Net.
Further, the quality evaluation module is configured to judge the image optimization result of the U-Net, and when the image optimization result of the quality enhancement unit of the current level does not meet a preset condition, judge the image optimization result of the quality enhancement unit of the next level.
Further, the quality assessment module is specifically configured to: obtain the enhanced image according to the image optimization result and the image characteristics of the input image, and output the enhanced image when the quality score of the enhanced image is greater than or equal to a preset score threshold.
Further, the quality assessment module is specifically configured to: obtain a quality score by evaluating the smoothness of the image texture blocks and the block effect strength of the smooth blocks, and judge whether to output the enhanced image according to the quality score and a preset score threshold.
In the resource-saving image quality enhancement model of the embodiment of the present invention, the model includes a quality enhancement module and a quality evaluation module. The quality enhancement module comprises quality enhancement units at a plurality of levels and progressively optimizes the input image level by level; the quality evaluation module judges the image optimization result of each quality enhancement unit and outputs the enhanced image once the optimization result meets a preset condition. In this way the quality of the enhanced image is evaluated while the image is enhanced progressively, and enhancement terminates with the enhanced image output once the image quality reaches a preset quality threshold. This achieves adaptive quality enhancement for images of different qualities, solves the technical problem in the related art that separate models must be trained for compressed images of different qualities, so that blind quality enhancement cannot be achieved, and at the same time saves computing resources.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required to be used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without inventive labor.
FIG. 1 is a schematic diagram of an alternative resource-saving image quality enhancement model according to an embodiment of the invention;
FIG. 2 is a schematic diagram of an alternative U-Net configuration according to embodiments of the present invention;
FIG. 3 is a schematic diagram of an alternative quality enhancement module according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of another alternative resource-saving image quality enhancement model according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be obtained by a person skilled in the art without any inventive step based on the embodiments of the present invention, are within the scope of the present invention.
It is noted that, in this document, relational terms such as "first" and "second," and the like, may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions.
According to an embodiment of the present invention, a resource-saving image quality enhancement model is provided, as shown in fig. 1, the model includes a quality enhancement module 10 and a quality evaluation module 20, wherein:
1) The quality enhancing module 10 includes a plurality of levels of quality enhancing units 100, and performs progressive optimization on the input image according to the levels of the quality enhancing units 100;
2) The quality evaluation module 20 is configured to respectively determine the image optimization result of each quality enhancement unit 100, and output an enhanced image when the image optimization result meets a preset condition.
Note that, in fig. 1, the subscript of the quality enhancement unit 100 is the current level of the quality enhancement unit, and n is a positive integer. In this embodiment, an input image is first input into the quality enhancement module 10, the input image is sequentially subjected to progressive optimization according to the hierarchy of the quality enhancement units 100, each quality enhancement unit 100 optimizes the input image and then inputs the image optimization result into the quality evaluation module 20, the quality evaluation module 20 judges the image optimization result, and when the image optimization result meets a preset condition, an enhanced image is output.
In a specific application scenario, when the quality evaluation module 20 determines that the image optimization result of the current-level quality enhancement unit 100 does not meet the preset condition, it obtains the image optimization result of the next-level quality enhancement unit 100; if that result meets the preset condition, the image is output, otherwise the process continues. If none of the quality enhancement units 100 meets the preset condition, the image optimization result of the last-level quality enhancement unit 100 is output.
In this embodiment, each quality enhancement unit 100 outputs an image enhancement residual for enhancing the input image. The residual is added to the input image to obtain an enhanced image, and the quality evaluation module 20 determines whether the enhanced image reaches a preset quality threshold. If it does, the quality enhancement unit 100 is deemed to meet the preset condition, image enhancement ends, and the enhanced image is output; if not, the image optimization result of the next-level quality enhancement unit 100 is obtained.
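For illustration only, the progressive enhancement with early exit described above can be summarized by the following minimal PyTorch-style sketch; the attribute names (enhance_units, exit_convs, quality_score) and the way features are passed between units are assumptions of the sketch, not the reference implementation of this embodiment.

```python
import torch


def enhance_with_early_exit(model: torch.nn.Module,
                            img: torch.Tensor,
                            score_threshold: float) -> torch.Tensor:
    """Progressively enhance `img` and exit as soon as the quality score of an
    intermediate result reaches `score_threshold`.

    Assumed (illustrative) interface of `model`:
      - model.enhance_units: quality enhancement units for levels 2..N+1
      - model.exit_convs:    one convolution per unit, turning features into
                             an enhancement residual
      - model.quality_score: no-reference quality scoring function
    """
    features = img
    enhanced = img
    num_units = len(model.enhance_units)
    for idx, (unit, exit_conv) in enumerate(zip(model.enhance_units,
                                                model.exit_convs)):
        features = unit(features)          # progressive optimization
        residual = exit_conv(features)     # enhancement residual R_j
        enhanced = img + residual          # S_(out,j) = S_in + R_j
        last_unit = idx == num_units - 1
        # The last exit is output directly; earlier exits must pass the check.
        if last_unit or model.quality_score(enhanced) >= score_threshold:
            break
    return enhanced
```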
Further, in this embodiment, the quality enhancement module comprises a nested (N+1)-level U-Net, in which N U-Nets are nested, where N is a positive integer.
In a specific application scenario, the U-Net is a CNN-based image segmentation network. In one example, for the minimal 2-level U-Net structure shown in FIG. 2, the input image is first fed into convolutional layer C(1,1) to obtain feature map F(1,1); F(1,1) is down-sampled and fed into convolutional layer C(2,1) to obtain feature map F(2,1); F(2,1) is up-sampled and fed into convolutional layer C(1,2) to obtain feature map F(1,2), which is the output of the U-Net.
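For illustration only, such a minimal 2-level U-Net data path can be sketched in PyTorch as follows; the channel count, the kernel sizes and the use of a stride-2 convolution for down-sampling are assumptions of the sketch, and skip/dense connections are added separately as described further below.

```python
import torch.nn as nn
import torch.nn.functional as F


class TwoLevelUNet(nn.Module):
    """Minimal 2-level U-Net path: C(1,1) -> down -> C(2,1) -> up -> C(1,2)."""

    def __init__(self, in_ch: int = 3, ch: int = 32):
        super().__init__()
        self.c11 = nn.Conv2d(in_ch, ch, kernel_size=3, padding=1)      # C(1,1)
        self.down = nn.Conv2d(ch, ch, kernel_size=1, stride=2)         # down-sampling
        self.c21 = nn.Conv2d(ch, ch, kernel_size=3, padding=1)         # C(2,1)
        self.up = nn.ConvTranspose2d(ch, ch, kernel_size=2, stride=2)  # up-sampling
        self.c12 = nn.Conv2d(ch, ch, kernel_size=3, padding=1)         # C(1,2)

    def forward(self, x):
        f11 = F.relu(self.c11(x))               # feature map F(1,1)
        f21 = F.relu(self.c21(self.down(f11)))  # feature map F(2,1)
        f12 = F.relu(self.c12(self.up(f21)))    # feature map F(1,2): the output
        return f12
```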
In an example of this embodiment, as shown in FIG. 3, N is 5 in the quality enhancement module. The quality enhancement module comprises a nested 6-level U-Net structure whose trunk is the nested U-Net: convolutional layers C(1,1) and C(2,1) can be regarded as the minimal 2-level U-Net structure; convolutional layers C(1,1), C(2,1) and C(3,1) can be regarded as a 3-level U-Net structure; and so on, until convolutional layers C(1,1) through C(6,1) can be regarded as the 6-level U-Net structure.
Further, in the present embodiment, there are dense connections between different levels of U-Net.
In a specific application scenario, dense connections are added among the U-Net structures of different levels on top of the nested U-Net structure. Still taking the quality enhancement module shown in FIG. 3, and its 2-level and 3-level U-Nets, as an example: feature maps F(1,1) and F(1,2) are first concatenated along the channel dimension and then fed together into convolution C(1,3), generating feature map F(1,3). The remaining dense connections in the quality enhancement module work in the same way: all feature maps pointing to a given convolutional layer are first concatenated along the channel dimension and then input into that layer as a whole.
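For illustration only, the dense-connection rule (concatenate along the channel axis all feature maps pointing to a convolutional layer, then convolve) can be expressed as the short helper below; the helper name, the ReLU activation and the channel counts are assumptions of the sketch.

```python
import torch
import torch.nn as nn


def dense_conv(conv_layer: nn.Conv2d, *incoming) -> torch.Tensor:
    """Concatenate along the channel axis all feature maps pointing to
    `conv_layer`, then apply the convolution (the dense-connection rule)."""
    return torch.relu(conv_layer(torch.cat(incoming, dim=1)))


# Example for C(1,3): it receives F(1,1) and F(1,2), each with 32 channels,
# so its input has 64 channels.
c13 = nn.Conv2d(64, 32, kernel_size=3, padding=1)
f11 = torch.randn(1, 32, 64, 64)   # F(1,1)
f12 = torch.randn(1, 32, 64, 64)   # F(1,2)
f13 = dense_conv(c13, f11, f12)    # F(1,3), shape (1, 32, 64, 64)
```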
Optionally, in this embodiment, the quality evaluation module is configured to determine an image optimization result of the U-Net, and when the image optimization result of the quality enhancement unit of the current level does not meet a preset condition, determine an image optimization result of the quality enhancement unit of the next level.
Further optionally, in this embodiment, the quality evaluation module is specifically configured to: obtain an enhanced image according to the image optimization result and the image characteristics of the input image, and output the enhanced image when the quality score of the enhanced image is greater than or equal to a preset score threshold.
In a preferred scheme, the resource-saving image quality enhancement model shown in FIG. 4 includes a quality enhancement module 40 and a quality evaluation module 42. The quality enhancement module 40 is an (N+1)-level nested U-Net structure, and the quality evaluation module 42 includes N convolutional layers and quality evaluation units, where the N convolutional layers are connected one-to-one to the N quality enhancement units in the quality enhancement module 40. Each convolutional layer processes the image optimization result of its quality enhancement unit, the result is added point-wise to the input image to obtain an enhanced image, and the enhanced image is then fed into the quality evaluation unit for evaluation.
In one example, in the resource-saving image quality enhancement model shown in FIG. 4, N is 5. Taking the 2-level U-Net structure as an example, the input image S_in is first fed into convolutional layer C(1,1) to obtain feature map F(1,1); F(1,1) is down-sampled and fed into convolutional layer C(2,1) to obtain feature map F(2,1); F(2,1) is up-sampled and fed into convolutional layer C(1,2) to obtain feature map F(1,2). F(1,2) then passes through convolutional layer C(0,2) in the quality evaluation module 42 to obtain feature map F(0,2), i.e. the enhancement residual R_2, which is added point-wise to the input image S_in to obtain the enhanced image:

S_(out,2) = S_in + R_2        (1)
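For illustration only, the exit branch corresponding to formula (1), i.e. a convolutional layer C(0,j) in the quality evaluation module that maps a feature map to an enhancement residual which is then added point-wise to the input image, might look like the sketch below; mapping back to the image channels with a single 3x3 convolution is an assumption of the sketch.

```python
import torch
import torch.nn as nn


class ExitBranch(nn.Module):
    """C(0,j): maps a feature map to an enhancement residual R_j and adds it
    point-wise to the input image, i.e. S_(out,j) = S_in + R_j (formula (1))."""

    def __init__(self, feat_ch: int = 32, img_ch: int = 3):
        super().__init__()
        self.to_residual = nn.Conv2d(feat_ch, img_ch, kernel_size=3, padding=1)

    def forward(self, s_in: torch.Tensor, feature_map: torch.Tensor) -> torch.Tensor:
        r = self.to_residual(feature_map)   # enhancement residual R_j
        return s_in + r                     # enhanced image S_(out,j)
```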
The quality evaluation module 42 evaluates the enhanced image S_(out,2). If S_(out,2) passes the evaluation, it is output; otherwise, the level-3 U-Net derivation process is executed to obtain S_(out,3), which is sent to the quality evaluation module 42 for evaluation. If S_(out,3) also fails the evaluation, the U-Net derivation processes of levels 4, 5 and 6 can be performed step by step in the same manner.
As a preferred embodiment, if the level-6 U-Net derivation process is performed, the resulting S_(out,6) is no longer evaluated by the quality evaluation module 42 but is output directly. This preferred scheme realizes an early-exit mechanism and saves computing resources.
Optionally, in this embodiment, during the training process of the resource-saving image quality enhancement model, all the output ends are supervised simultaneously:
Loss = Σ_j w_j · loss_j
where loss_j is the loss at the j-th exit, defined as the mean square error between the output image S_(out,j) of that exit and the original uncompressed image S_raw:

loss_j = MSE(S_(out,j), S_raw)
and w_j is the relative weight of the j-th exit, which should be determined empirically. For example, if the enhanced version of a high-quality compressed image is expected to exit at a shallow exit of a low-level U-Net, the weight of that shallow exit is set larger than the weights of deeper exits when the training sample is a high-quality compressed image. Training may employ an optimization algorithm such as Adam, and the initial learning rate may be set to 1×10⁻⁴.
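For illustration only, the jointly supervised multi-exit objective and a training step can be sketched as follows; the example weight values and the assumption that the model returns all exit outputs at once are illustrative, not prescribed by this embodiment.

```python
import torch
import torch.nn.functional as F


def multi_exit_loss(exit_outputs, target, weights):
    """Weighted sum of per-exit MSE losses: Loss = sum_j w_j * loss_j.

    exit_outputs: list of enhanced images S_(out,j), one per exit
    target:       the original uncompressed image
    weights:      relative weight w_j of each exit (chosen empirically)
    """
    return sum(w * F.mse_loss(out, target)
               for w, out in zip(weights, exit_outputs))


# Illustrative training step, assuming the model returns all exit outputs:
# optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
# outputs = model(compressed_batch)                    # [S_(out,2), ..., S_(out,6)]
# loss = multi_exit_loss(outputs, raw_batch, weights=[1.0, 0.8, 0.6, 0.4, 0.2])
# optimizer.zero_grad(); loss.backward(); optimizer.step()
```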
In addition, in a specific application scenario, taking the quality enhancement module in FIG. 3 as an example, each convolutional layer C(i,j) consists of two 32-channel 3×3 convolutional layers. The down-sampling is implemented by a 32-channel 1×1 convolutional layer with a stride of 2, and the up-sampling by a transposed convolutional layer with a stride of 2 and a filter size of 32×2×2. ReLU nonlinear activation layers are placed between the convolutional layers.
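For illustration only, the building blocks described above could be declared as follows; the exact kernel size of the down-sampling convolution and the placement of the ReLU layers are assumptions of the sketch.

```python
import torch.nn as nn


def conv_block(in_ch: int, ch: int = 32) -> nn.Sequential:
    """One C(i,j) block: two 32-channel 3x3 convolutions with ReLU activations."""
    return nn.Sequential(
        nn.Conv2d(in_ch, ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(ch, ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
    )


def downsample(ch: int = 32) -> nn.Conv2d:
    """Stride-2 convolution between levels (1x1 kernel assumed)."""
    return nn.Conv2d(ch, ch, kernel_size=1, stride=2)


def upsample(ch: int = 32) -> nn.ConvTranspose2d:
    """Stride-2 transposed convolution with a 2x2 filter."""
    return nn.ConvTranspose2d(ch, ch, kernel_size=2, stride=2)
```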
Further, in this embodiment, the quality evaluation module is specifically configured to: obtain a quality score by evaluating the smoothness of the image texture blocks and the block effect strength of the smooth blocks, and judge whether to output the enhanced image according to the quality score and a preset score threshold.
In a specific application scenario, for the image quality evaluation task, Chebyshev (Tchebichef) moments are adopted to evaluate the smoothness of texture blocks and the block effect strength of smooth blocks, and the two measures are integrated to obtain the quality score of the compressed image.
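For illustration only, the sketch below shows how 2-D Tchebichef (Chebyshev) moments of 8×8 blocks can be computed and how their non-DC energy can separate texture blocks from smooth blocks; the fusion of the smoothness and block effect measures into the final quality score used by this embodiment is not specified here and is omitted.

```python
import numpy as np


def tchebichef_basis(n: int = 8) -> np.ndarray:
    """Orthonormal discrete Chebyshev (Tchebichef) polynomial basis on n points,
    obtained by QR-orthonormalizing the monomial Vandermonde matrix."""
    x = np.arange(n, dtype=np.float64)
    q, _ = np.linalg.qr(np.vander(x, n, increasing=True))
    return q.T                                    # row k = order-k polynomial


def block_tchebichef_energy(img: np.ndarray, bs: int = 8) -> np.ndarray:
    """Non-DC 2-D Tchebichef moment energy of each bs x bs block: high values
    indicate texture blocks, low values indicate smooth blocks."""
    t = tchebichef_basis(bs)
    h, w = (img.shape[0] // bs) * bs, (img.shape[1] // bs) * bs
    energies = []
    for i in range(0, h, bs):
        for j in range(0, w, bs):
            block = img[i:i + bs, j:j + bs].astype(np.float64)
            m = t @ block @ t.T                   # 2-D Tchebichef moments
            energies.append(float(np.sum(m ** 2) - m[0, 0] ** 2))
    return np.asarray(energies)


# A complete scorer would additionally measure the block effect strength at the
# boundaries of smooth blocks and fuse both cues into one quality score, as this
# embodiment describes; that fusion is omitted from this sketch.
```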
The resource-saving image quality enhancement model provided in this embodiment includes a quality enhancement module and a quality evaluation module. The quality enhancement module comprises quality enhancement units at a plurality of levels and progressively optimizes the input image level by level; the quality evaluation module judges the image optimization result of each quality enhancement unit and outputs the enhanced image once the optimization result meets a preset condition. In this way the quality of the enhanced image is evaluated while the image is enhanced progressively, and enhancement terminates with the enhanced image output once the image quality reaches a preset quality threshold. This achieves adaptive quality enhancement for images of different qualities, solves the technical problem in the related art that separate models must be trained for compressed images of different qualities, so that blind quality enhancement cannot be achieved, and at the same time saves computing resources.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the invention. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required by the invention.
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
The integrated unit in the above embodiments, if implemented in the form of a software functional unit and sold or used as a separate product, may be stored in the above computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing one or more computer devices (which may be personal computers, servers, network devices, etc.) to execute all or part of the steps of the method according to the embodiments of the present invention.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed client may be implemented in other manners. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one type of division of logical functions, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The foregoing is only a preferred embodiment of the present invention, and it should be noted that, for those skilled in the art, various modifications and amendments can be made without departing from the principle of the present invention, and these modifications and amendments should also be considered as the protection scope of the present invention.

Claims (5)

1. A resource-saving image quality enhancement model, comprising a quality enhancement module and a quality assessment module, wherein:
the quality enhancement module comprises a plurality of levels of quality enhancement units, and performs progressive optimization on the input image according to the levels of the quality enhancement units;
the quality evaluation module is used for respectively judging the image optimization results of the quality enhancement units and outputting enhanced images when the image optimization results meet preset conditions;
the quality enhancement module comprises a nested (N+1)-level U-Net in which N U-Nets are nested, and the (N+1)-level U-Net structure comprises all convolutional layers of the N-level U-Net structure together with the newly added convolutional layers of the (N+1)-th level, wherein N is a positive integer.
2. The model of claim 1, wherein there are dense connections between different levels of U-nets.
3. The model according to claim 1, wherein the quality evaluation module is configured to determine the image optimization result of the U-Net, and determine the image optimization result of the quality enhancement unit of the next hierarchy when the image optimization result of the quality enhancement unit of the current hierarchy does not meet a preset condition.
4. The model of claim 1, wherein the quality assessment module is specifically configured to:
obtain the enhanced image according to the image optimization result and the image characteristics of the input image;
and output the enhanced image when the quality score of the enhanced image is greater than or equal to a preset score threshold.
5. The model of claim 4, wherein the quality assessment module is specifically configured to:
obtain a quality score by evaluating the smoothness of the image texture blocks and the block effect strength of the smooth blocks;
and judge whether to output the enhanced image according to the quality score and a preset score threshold.
CN202010231334.5A 2020-03-27 2020-03-27 Resource-saving image quality enhancement model Active CN111415311B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010231334.5A CN111415311B (en) 2020-03-27 2020-03-27 Resource-saving image quality enhancement model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010231334.5A CN111415311B (en) 2020-03-27 2020-03-27 Resource-saving image quality enhancement model

Publications (2)

Publication Number Publication Date
CN111415311A CN111415311A (en) 2020-07-14
CN111415311B true CN111415311B (en) 2023-03-14

Family

ID=71493369

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010231334.5A Active CN111415311B (en) 2020-03-27 2020-03-27 Resource-saving image quality enhancement model

Country Status (1)

Country Link
CN (1) CN111415311B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112819707B (en) * 2021-01-15 2022-05-03 电子科技大学 End-to-end anti-blocking effect low-illumination image enhancement method
CN112906721B (en) * 2021-05-07 2021-07-23 腾讯科技(深圳)有限公司 Image processing method, device, equipment and computer readable storage medium
CN114255226A (en) * 2021-12-22 2022-03-29 北京安德医智科技有限公司 Doppler whole-flow quantitative analysis method and device, electronic equipment and storage medium

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110088799A (en) * 2016-11-09 2019-08-02 三星电子株式会社 Image processing equipment and image processing method
WO2018227105A1 (en) * 2017-06-08 2018-12-13 The United States Of America, As Represented By The Secretary, Department Of Health And Human Services Progressive and multi-path holistically nested networks for segmentation
CN107403415A (en) * 2017-07-21 2017-11-28 深圳大学 Compression depth plot quality Enhancement Method and device based on full convolutional neural networks
CN107481209A (en) * 2017-08-21 2017-12-15 北京航空航天大学 A kind of image or video quality Enhancement Method based on convolutional neural networks
CN108537743A (en) * 2018-03-13 2018-09-14 杭州电子科技大学 A kind of face-image Enhancement Method based on generation confrontation network
WO2020020809A1 (en) * 2018-07-26 2020-01-30 Koninklijke Philips N.V. Ultrasound system with an artificial neural network for guided liver imaging
CN110675335A (en) * 2019-08-31 2020-01-10 南京理工大学 Superficial vein enhancement method based on multi-resolution residual error fusion network
CN110889813A (en) * 2019-11-15 2020-03-17 安徽大学 Low-light image enhancement method based on infrared information

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Effectiveness of u-net in denoising rgb images; Rina Komatsu et al.; 《Computer Science & Information Technology》; 20191231; Vol. 9 (No. 2); full text *
Overview of the high efficiency video coding (HEVC) standard; Gary J. Sullivan et al.; 《IEEE Transactions on Circuits and Systems for Video Technology》; 20120928; Vol. 22 (No. 12); full text *
Quality enhancement algorithm for video reconstructed images with multi-feature incremental learning; 丁丹丹 et al.; 《Journal of South China University of Technology (Natural Science Edition)》; 20181231; Vol. 46 (No. 12); full text *

Also Published As

Publication number Publication date
CN111415311A (en) 2020-07-14

Similar Documents

Publication Publication Date Title
CN111415311B (en) Resource-saving image quality enhancement model
Zhou et al. End-to-end Optimized Image Compression with Attention Mechanism.
CN108932697B (en) Distortion removing method and device for distorted image and electronic equipment
CN109120937B (en) Video encoding method, decoding method, device and electronic equipment
EP3746944A1 (en) Use of non-linear function applied to quantization parameters in machine-learning models for video coding
EP1968326A2 (en) Motion compensated frame rate upconversion in a video decoder
US20190294931A1 (en) Systems and Methods for Generative Ensemble Networks
CN107481209B (en) Image or video quality enhancement method based on convolutional neural network
CN110136057B (en) Image super-resolution reconstruction method and device and electronic equipment
US11928843B2 (en) Signal processing apparatus and signal processing method
KR20220137076A (en) Image processing method and related device
WO2020062074A1 (en) Reconstructing distorted images using convolutional neural network
CN111247797A (en) Method and apparatus for image encoding and decoding
CN110753225A (en) Video compression method and device and terminal equipment
CN111105357B (en) Method and device for removing distortion of distorted image and electronic equipment
Jin et al. Quality enhancement for intra frame coding via cnns: An adversarial approach
Jin et al. Post-processing for intra coding through perceptual adversarial learning and progressive refinement
WO2016101663A1 (en) Image compression method and device
TWI826160B (en) Image encoding and decoding method and apparatus
CN110276728B (en) Human face video enhancement method based on residual error generation countermeasure network
CN110072104B (en) Perceptual image compression method based on image-level JND prediction
JP2013258465A (en) Processing system, preprocessing device, postprocessing device, preprocessing program and postprocessing program
JP4083670B2 (en) Image coding apparatus and image coding method
KR102066012B1 (en) Motion prediction method for generating interpolation frame and apparatus
Hitha et al. Comparison of image compression analysis using deep autoencoder and deep cnn approach

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant