CN110047044B - Image processing model construction method and device and terminal equipment - Google Patents


Info

Publication number
CN110047044B
CN110047044B (application CN201910217717.4A)
Authority
CN
China
Prior art keywords: model, degradation, layer, image processing, level
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910217717.4A
Other languages
Chinese (zh)
Other versions
CN110047044A (en)
Inventor
乔宇 (Yu Qiao)
何静雯 (Jingwen He)
董超 (Chao Dong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Institute of Advanced Technology of CAS
Original Assignee
Shenzhen Institute of Advanced Technology of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Institute of Advanced Technology of CAS filed Critical Shenzhen Institute of Advanced Technology of CAS
Priority to CN201910217717.4A priority Critical patent/CN110047044B/en
Publication of CN110047044A publication Critical patent/CN110047044A/en
Priority to PCT/CN2019/130876 priority patent/WO2020186888A1/en
Application granted granted Critical
Publication of CN110047044B publication Critical patent/CN110047044B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00: Image enhancement or restoration

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention belongs to the technical field of image processing and provides a method, an apparatus and a terminal device for constructing an image processing model. A degradation start level parameter of a base model built on residual modules is configured and the configured base model is trained; a feature adjustment layer is then added to the base model to generate an adaptive model, a degradation end level parameter of the adaptive model is configured, and the configured adaptive model is trained; finally, an interpolation operation is performed on the feature adjustment layer of the adaptive model. The resulting image processing model can perform image processing at any degradation level between the degradation start level and the degradation end level, so image restoration at an arbitrary degradation level is achieved and the restoration strength is continuously adjustable. Because no new image noise is introduced, a user can adjust the coefficient of the feature adjustment layer according to preference to obtain a satisfactory image processing result, giving a better user experience.

Description

Image processing model construction method and device and terminal equipment
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a method and a device for constructing an image processing model and terminal equipment.
Background
Existing deep-learning-based image restoration techniques train a single model for a specific degradation level. If a model whose degradation level does not match is used to restore a degraded image, the result is over-smoothed or over-sharpened, the quality of the restored image is poor, and the user's requirements cannot be met.
In real life, image degradation levels are continuous. To restore degraded images spanning a wide range of degradation degrees, conventional methods must either train multiple restoration models, one per degradation level, or train a single model large enough to cover them all. Either approach incurs a very large amount of computation and lacks flexibility.
Disclosure of Invention
In view of this, embodiments of the present invention provide a method and an apparatus for constructing an image processing model, and a terminal device, to solve the problem that conventional image restoration handles only a single degradation level and lacks flexibility when processing degraded images of multiple degradation levels.
A first aspect of an embodiment of the present invention provides a method for constructing an image processing model, including:
configuring a degradation start level parameter of a base model built on residual modules, and training the configured base model to adjust its network parameters, wherein the base model comprises a convolutional layer, an activation function layer and an image upsampling layer;
adding a feature adjustment layer to the trained base model to generate an adaptive model, wherein the feature adjustment layer is composed of a plurality of convolution kernels;
configuring a degradation end level parameter of the adaptive model, and training the configured adaptive model to adjust the parameters of the feature adjustment layer; and
performing an interpolation operation on the feature adjustment layer of the trained adaptive model, so that the resulting image processing model can perform image processing at any degradation level between the degradation start level and the degradation end level.
A second aspect of an embodiment of the present invention provides an apparatus for constructing an image processing model, including:
a first configuration training unit, configured to configure a degradation start level parameter of a base model built on residual modules and to train the configured base model to adjust its network parameters, where the base model comprises a convolutional layer, an activation function layer and an image upsampling layer;
a model generation unit, configured to add a feature adjustment layer to the trained base model to generate an adaptive model, where the feature adjustment layer is composed of a plurality of convolution kernels;
a second configuration training unit, configured to configure a degradation end level parameter of the adaptive model and to train the configured adaptive model to adjust the parameters of the feature adjustment layer; and
a model adjustment unit, configured to perform an interpolation operation on the feature adjustment layer of the trained adaptive model, so that the resulting image processing model can perform image processing at any degradation level between the degradation start level and the degradation end level.
A third aspect of an embodiment of the present invention provides a terminal device, including:
a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the image processing model construction method provided by the first aspect of the embodiments of the present invention.
Wherein the computer program comprises:
a first configuration training unit, configured to configure a degradation start level parameter of a base model built on residual modules and to train the configured base model to adjust its network parameters, where the base model comprises a convolutional layer, an activation function layer and an image upsampling layer;
a model generation unit, configured to add a feature adjustment layer to the trained base model to generate an adaptive model, where the feature adjustment layer is composed of a plurality of convolution kernels;
a second configuration training unit, configured to configure a degradation end level parameter of the adaptive model and to train the configured adaptive model to adjust the parameters of the feature adjustment layer; and
a model adjustment unit, configured to perform an interpolation operation on the feature adjustment layer of the trained adaptive model, so that the resulting image processing model can perform image processing at any degradation level between the degradation start level and the degradation end level.
A fourth aspect of the embodiments of the present invention provides a computer-readable storage medium, which stores a computer program, wherein the computer program, when executed by a processor, implements the steps of the method for constructing an image processing model provided by the first aspect of the embodiments of the present invention.
Wherein the computer program comprises:
a first configuration training unit, configured to configure a degradation start level parameter of a base model built on residual modules and to train the configured base model to adjust its network parameters, where the base model comprises a convolutional layer, an activation function layer and an image upsampling layer;
a model generation unit, configured to add a feature adjustment layer to the trained base model to generate an adaptive model, where the feature adjustment layer is composed of a plurality of convolution kernels;
a second configuration training unit, configured to configure a degradation end level parameter of the adaptive model and to train the configured adaptive model to adjust the parameters of the feature adjustment layer; and
a model adjustment unit, configured to perform an interpolation operation on the feature adjustment layer of the trained adaptive model, so that the resulting image processing model can perform image processing at any degradation level between the degradation start level and the degradation end level.
Compared with the prior art, the embodiments of the present invention have the following beneficial effects. A degradation start level parameter of a base model built on residual modules is configured and the configured base model is trained to adjust its network parameters; a feature adjustment layer is added to the trained base model to generate an adaptive model; a degradation end level parameter of the adaptive model is configured and the configured adaptive model is trained to adjust the parameters of the feature adjustment layer; and an interpolation operation is then performed on the feature adjustment layer of the trained adaptive model. The resulting image processing model can perform image processing at any degradation level between the degradation start level and the degradation end level, so image restoration at an arbitrary degradation level is realized and the restoration strength is continuously adjustable. Because no new image noise is introduced, a user can adjust the coefficient of the feature adjustment layer according to preference to achieve a satisfactory image processing result, giving a better user experience.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the embodiments or the prior art descriptions will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive exercise.
FIG. 1 is a flowchart of an implementation of a method for constructing an image processing model according to an embodiment of the present invention;
FIG. 2 is a diagram illustrating a network architecture of a base model according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a network architecture of an adaptive model according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of an apparatus for constructing an image processing model according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a terminal device according to an embodiment of the present invention.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, circuits, and methods are omitted so as not to obscure the description of the present invention with unnecessary detail.
In order to explain the technical means of the present invention, the following description will be given by way of specific examples. Referring to fig. 1, fig. 1 shows an implementation flow of a method for constructing an image processing model according to an embodiment of the present invention, which is detailed as follows:
in step S101, a degradation start level parameter of a base model based on a residual module is configured, and the configured base model is trained to adjust a network parameter of the base model.
In the embodiment of the invention, the base model built on residual modules is mainly used to solve image restoration at the degradation start level, and comprises a convolutional layer, an activation function layer and an image upsampling layer.
The embodiment of the present invention preferably uses a network architecture similar to SRResNet, with residual modules as its core, mainly comprising convolutional layers, activation function layers and an image upsampling layer; see fig. 2. The stride of the first convolutional layer in the base model is 2, so the length and width of an image passing through that layer are halved, which improves the processing efficiency of the residual modules. The image processed by the residual modules is input to the convolutional layer connected to them, and the output of that convolutional layer is fed to the image upsampling layer, which restores the image to its original size. The model performs well on Gaussian denoising, super-resolution and JPEG lossy-compression restoration tasks.
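The residual module at the core of such a base model can be sketched as follows. This is a minimal illustration only: for brevity the convolutions are 1 × 1 (a matrix multiply over the channel axis) rather than the 3 × 3 kernels an SRResNet-style network would actually use, and the weights are random rather than trained.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """One residual module: conv -> activation -> conv, plus a skip
    connection. The 1x1 convolutions here are a simplification of the
    patent's 3x3 kernels; only the residual structure is illustrated."""
    # x: (channels, height, width); w1, w2: (out_channels, in_channels)
    h = relu(np.einsum('oc,chw->ohw', w1, x))
    return x + np.einsum('oc,chw->ohw', w2, h)

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 4, 4))
w1 = rng.standard_normal((8, 8)) * 0.1
w2 = rng.standard_normal((8, 8)) * 0.1
y = residual_block(x, w1, w2)
# Residual modules preserve the feature-map size, so stacking 16 of
# them (as in the residual structure described below) is straightforward.
```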
After the base model with residual modules as its core is constructed, the preconfigured degradation level parameters are obtained, which include a degradation start level parameter and a degradation end level parameter. The degradation start level parameter of the base model is configured according to these preconfigured parameters.
Here, the degradation start level parameter and the degradation end level parameter each include a Gaussian noise parameter, a JPEG compression quality parameter and a bicubic downsampling parameter, where:
the degradation start level parameters are: Gaussian noise, σ = 15; JPEG compression quality, q = 80; bicubic downsampling, ×3;
the corresponding degradation end level parameters are: Gaussian noise, σ = 75; JPEG compression quality, q = 10; bicubic downsampling, ×4.
It is to be understood that the degradation start and end level parameters are not limited to the specific values above, which are given only as an example.
In step S102, a feature adjustment layer is added to the trained base model, and an adaptive model is generated.
In the embodiment of the present invention, the feature adjustment layer is composed of a plurality of convolution kernels, preferably depthwise convolution kernels (depthwise convolution filters): because a depthwise kernel operates on a single feature map, it runs much faster than a general convolution kernel.
It is to be understood that, if a particular operation speed is desired for the base model, the feature adjustment layer may be built from convolution kernels of the corresponding speed and is not limited to depthwise kernels.
Here, the size of the convolution kernel may be 1 × 1, 3 × 3, 5 × 5, 7 × 7, etc., and is not specifically limited. In general, the larger the kernel, the better the adaptive model performs on the image restoration task at the degradation end level, but the improvement becomes marginal as the kernel grows. Experiments show that on JPEG lossy-image restoration and Gaussian denoising tasks, setting the kernel size to 1 × 1 already gives the desired effect, while on the image super-resolution task the kernel size should be set to at least 5 × 5.
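As a sketch of why a 1 × 1 depthwise kernel is so cheap: each channel is filtered independently, so the whole layer reduces to a per-channel scale. The bias term below is included purely for illustration and is an assumption, not something the text specifies.

```python
import numpy as np

def depthwise_conv_1x1(x, weight, bias):
    """A 1x1 depthwise convolution: every channel is processed
    independently, so with a 1x1 kernel the layer collapses to a
    per-channel scale and shift. No cross-channel mixing occurs,
    which is why so few parameters are added."""
    # x: (channels, height, width); weight, bias: (channels,)
    return x * weight[:, None, None] + bias[:, None, None]

x = np.ones((3, 2, 2))
y = depthwise_conv_1x1(x, np.array([1.0, 2.0, 0.5]), np.array([0.0, 0.0, 1.0]))
```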
Here, the purpose of the adaptive model is to handle the image restoration task at the degradation end level by adding the feature adjustment layer on top of the base model. Since only a feature adjustment layer composed of convolution kernels is added, few network parameters are introduced: for kernel sizes of 1 × 1 and 5 × 5, only about 1.5% and 3.65% additional network parameters respectively, and the computational cost of the network barely increases. For the specific structure of the adaptive model, refer to fig. 3: it is formed by inserting a feature adjustment layer after each convolutional layer of the base model, or after the convolutional layers of the residual structure, where the residual structure consists of 16 residual modules, each comprising a convolutional layer, an activation layer connected to it, and another convolutional layer connected to the activation layer.
Optionally, step S102 specifically includes:
placing the feature adjustment layer after all convolutional layers of the trained base model to generate the adaptive model.
Optionally, step S102 specifically includes:
placing the feature adjustment layer after the convolutional layers in the residual structure of the trained base model to generate the adaptive model.
In the embodiment of the present invention, in order to adjust the feature map after the convolution operation, the feature adjustment layer must be placed after a convolutional layer; specifically, it may be placed after all convolutional layers of the base model, or only after the convolutional layers in its residual structure, which is not specifically limited here.
Optionally, step S102 specifically includes:
step S1021, configuring the size parameter of the convolution kernel of the characteristic adjusting layer.
And step S1022, adding the configured characteristic adjusting layer into the trained base model to generate an adaptive model.
In the embodiment of the present invention, different image processing tasks call for different kernel sizes in the feature adjustment layer, so the kernel size can be configured according to the actual situation: for example, when the image processing model built from the adaptive model is to handle JPEG lossy-image restoration or Gaussian denoising, the kernel size is set to 1 × 1; when it is to handle image super-resolution, the kernel size is set to 5 × 5 or larger.
Here, the kernel size parameter of the feature adjustment layer may be configured in advance: for example, the image processing task selected by the user is shown on the adaptive-model interface and the corresponding configuration parameter is found in a comparison table mapping image processing tasks to configuration parameters. Alternatively, it may be configured directly by the user, which is not specifically limited here.
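The comparison table described above might be sketched as a simple lookup. The task names and the function below are hypothetical; the kernel sizes are taken from the experimental observations reported earlier.

```python
# Hypothetical task-to-kernel-size table for the feature adjustment
# layer; keys are illustrative names, values follow the text above.
KERNEL_SIZE_BY_TASK = {
    "jpeg_restoration": 1,    # 1x1 reported to suffice
    "gaussian_denoising": 1,  # 1x1 reported to suffice
    "super_resolution": 5,    # at least 5x5 recommended
}

def kernel_size_for(task):
    """Look up the configured kernel size for an image processing task."""
    return KERNEL_SIZE_BY_TASK[task]
```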
In step S103, an end degradation level parameter of the adaptive model is configured, and the configured adaptive model is trained to adjust a parameter of the feature adjusting layer.
In the embodiment of the invention, after the adaptive model is generated, its degradation end level parameter is configured according to the obtained preconfigured degradation level parameters, and the configured adaptive model is then trained so that it can output restored images of better quality.
During training of the adaptive model, only the parameters of the feature adjustment layer are adjusted and all other network parameters are kept unchanged; that is, the network parameters shared with the base model are frozen, and the adaptive model is trained solely by adjusting the feature adjustment layer. After repeated rounds of training and adjustment of these parameters, the desired image restoration performance is reached.
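The freeze-and-finetune idea can be illustrated with a deliberately tiny toy model: one frozen "base" parameter `a` and one trainable "adjustment" parameter `g`, fitted by gradient descent on a mean-squared-error objective. This is only a sketch of the training scheme, not the actual network.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(64)
a = 2.0            # pretrained base parameter: frozen, never updated
target = 3.0 * x   # behaviour wanted at the degradation end level
g = 1.0            # feature-adjustment parameter: the only trainable one

for _ in range(200):
    pred = g * (a * x)
    grad_g = np.mean(2.0 * (pred - target) * (a * x))  # d(MSE)/dg
    g -= 0.05 * grad_g  # gradient step on g alone; a stays fixed

# g converges toward 1.5, so that the combined mapping g * a matches 3.0
```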
Here, in the training of the base model and the adaptive model, the training samples used are images corresponding to respective degradation levels.
In step S104, interpolation operation is performed on the feature adjustment layer in the trained adaptive model, so that the finally formed image processing model can realize image processing at any degradation level from the start degradation level to the end degradation level.
In the embodiment of the present invention, once the adaptive model is trained and the parameters of the feature adjustment layer have reached their optimal state, an adjustment test is performed on the model. Specifically, an interpolation operation is applied to all feature adjustment layers in the adaptive model, so that the resulting image processing model can restore images at any degradation level from "start" to "end", where "start" is the degradation start level and "end" is the degradation end level. The interpolation operation can essentially be understood as multiplying the parameters of all feature adjustment layers by an adjustment coefficient ranging between 0 and 1; by changing the magnitude of this coefficient, the restoration strength applied to an image of a given degradation level can be varied continuously. If the degradation at the start level is lighter than that at the end level, increasing the adjustment coefficient increases the degree to which the degraded image is restored.
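One plausible realization of this interpolation is sketched below, under the assumption that the feature adjustment layer is parameterized as a per-channel deviation from the identity, so that scaling its parameters by the coefficient moves the model continuously from the start-level behaviour (coefficient 0, layer is a no-op) to the end-level behaviour (coefficient 1, full learned adjustment). This identity-residual form is an assumption for illustration, not a claim about the exact implementation.

```python
import numpy as np

def adjusted_features(x, w, b, lam):
    """Interpolated feature adjustment: the learned per-channel
    deviation (w * x + b) is scaled by the coefficient lam in [0, 1]
    and added back to the input, so lam = 0 reproduces the base
    model's features and lam = 1 applies the full adjustment."""
    # x: (channels, height, width); w, b: (channels,); lam: scalar
    return x + lam * (w[:, None, None] * x + b[:, None, None])

x = np.ones((2, 2, 2))
w = np.array([0.5, -0.25])
b = np.array([0.1, 0.0])
y0 = adjusted_features(x, w, b, 0.0)  # identical to base-model output
y1 = adjusted_features(x, w, b, 1.0)  # full end-level adjustment
```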
During training of the base model and the adaptive model, the convolutional neural network performs convolution on the degraded images in the training samples and their corresponding clean images to extract the images' internal features, and the network parameters of the base model and the parameters of the feature adjustment layer in the adaptive model are trained and adjusted so that the mapping from degraded image to clean image is learned. During training, the error between the target image and the learned image is computed with the mean squared error, the network parameters are then adjusted and updated by back-propagation, and after multiple optimization iterations the model converges and the network parameters reach their optimal values.
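The mean-squared-error objective referred to above is simply the average squared pixel difference between the restored output and the clean target; a restoration closer to the target yields a lower loss.

```python
import numpy as np

def mse_loss(restored, clean):
    """Mean squared error between the model output and the clean
    target image, the training objective described above."""
    return np.mean((restored - clean) ** 2)

clean = np.ones((4, 4))
far = np.zeros((4, 4))       # a poor restoration, far from the target
near = np.full((4, 4), 0.9)  # a better restoration, close to the target
```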
In the embodiment of the invention, a degradation start level parameter of a base model built on residual modules is configured and the configured base model is trained to adjust its network parameters; a feature adjustment layer is added to the trained base model to generate an adaptive model; a degradation end level parameter of the adaptive model is configured and the configured adaptive model is trained to adjust the parameters of the feature adjustment layer; and an interpolation operation is then performed on the feature adjustment layer of the trained adaptive model. The resulting image processing model can perform image processing at any degradation level between the degradation start level and the degradation end level, so image restoration at an arbitrary degradation level is realized and the restoration strength is continuously adjustable. Because no new image noise is introduced, the user can adjust the coefficient of the feature adjustment layer according to preference to achieve a satisfactory image processing result, giving a better user experience.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an execution order; the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present invention.
Fig. 4 is a schematic diagram of an image processing model constructing apparatus according to an embodiment of the present invention, corresponding to the image processing model construction method described in the foregoing embodiment; for convenience of description, only the parts related to the embodiment of the present invention are shown.
Referring to fig. 4, the apparatus includes:
a first configuration training unit 41, configured to configure a degradation start level parameter of a base model built on residual modules and to train the configured base model to adjust its network parameters, where the base model comprises a convolutional layer, an activation function layer and an image upsampling layer;
a model generation unit 42, configured to add a feature adjustment layer to the trained base model to generate an adaptive model, where the feature adjustment layer is composed of a plurality of convolution kernels;
a second configuration training unit 43, configured to configure a degradation end level parameter of the adaptive model and to train the configured adaptive model to adjust the parameters of the feature adjustment layer; and
a model adjustment unit 44, configured to perform an interpolation operation on the feature adjustment layer of the trained adaptive model, so that the resulting image processing model can perform image processing at any degradation level between the degradation start level and the degradation end level.
Optionally, the model generating unit 42 is specifically configured to:
placing the feature adjustment layer after all convolutional layers of the trained base model to generate the adaptive model.
Optionally, the model generating unit 42 is specifically configured to:
placing the feature adjustment layer after the convolutional layers in the residual structure of the trained base model to generate the adaptive model.
Optionally, the starting degradation level parameter and the ending degradation level parameter each include: gaussian noise parameters, JPEG compression quality parameters, and bicubic downsampling parameters.
Optionally, the convolution kernels are depthwise convolution kernels.
Optionally, the model generating unit 42 includes:
a convolution kernel configuration subunit, configured to configure a size parameter of a convolution kernel of the feature adjusting layer;
and a model generation subunit, configured to add the configured feature adjustment layer to the trained base model to generate the adaptive model.
In the embodiment of the invention, a degradation start level parameter of a base model built on residual modules is configured and the configured base model is trained to adjust its network parameters; a feature adjustment layer is added to the trained base model to generate an adaptive model; a degradation end level parameter of the adaptive model is configured and the configured adaptive model is trained to adjust the parameters of the feature adjustment layer; and an interpolation operation is then performed on the feature adjustment layer of the trained adaptive model. The resulting image processing model can perform image processing at any degradation level between the degradation start level and the degradation end level, so image restoration at an arbitrary degradation level is realized and the restoration strength is continuously adjustable. Because no new image noise is introduced, the user can adjust the coefficient of the feature adjustment layer according to preference to achieve a satisfactory image processing result, giving a better user experience.
Fig. 5 is a schematic diagram of a terminal device according to an embodiment of the present invention. As shown in fig. 5, the terminal device 5 of this embodiment includes a processor 50, a memory 51, and a computer program 52 stored in the memory 51 and executable on the processor 50. When executing the computer program 52, the processor 50 implements the steps in the above embodiments of the image processing model construction method, such as steps S101 to S104 shown in fig. 1; alternatively, the processor 50 implements the functions of the units in the above apparatus embodiments, such as units 41 to 44 shown in fig. 4.
Illustratively, the computer program 52 may be divided into one or more units, which are stored in the memory 51 and executed by the processor 50 to accomplish the present invention. The one or more units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution process of the computer program 52 in the terminal device 5. For example, the computer program 52 may be divided into a first configuration training unit 41, a model generation unit 42, a second configuration training unit 43, and a model adjustment unit 44, and the specific functions of each unit are as follows:
a first configuration training unit 41, configured to configure a degradation starting level parameter of a base model based on a residual module, and to train the configured base model to adjust the network parameters of the base model, where the base model includes a convolution layer, an activation function layer, and an image upsampling layer;
a model generating unit 42, configured to add a feature adjusting layer to the trained base model to generate an adaptive model, where the feature adjusting layer is composed of a plurality of convolution kernels;
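A minimal NumPy sketch of such a feature adjusting layer, assuming one depthwise kernel per channel and identity initialization so that the freshly inserted layer does not disturb the trained base model (the kernel size and the identity initialization are illustrative assumptions, not specified in this passage):

```python
import numpy as np

def depthwise_conv(x, kernels):
    """Apply one k x k kernel per channel with 'same' zero padding.
    x: (C, H, W) feature maps; kernels: (C, k, k)."""
    C, H, W = x.shape
    k = kernels.shape[-1]
    p = k // 2
    xp = np.pad(x, ((0, 0), (p, p), (p, p)))
    out = np.zeros_like(x, dtype=float)
    for c in range(C):
        for i in range(H):
            for j in range(W):
                out[c, i, j] = np.sum(xp[c, i:i + k, j:j + k] * kernels[c])
    return out

class FeatureAdjustLayer:
    """Feature adjusting layer built from a plurality of convolution
    kernels: one depthwise kernel per feature channel, initialised to
    the identity (centre tap = 1)."""
    def __init__(self, channels, kernel_size=3):
        self.kernels = np.zeros((channels, kernel_size, kernel_size))
        self.kernels[:, kernel_size // 2, kernel_size // 2] = 1.0

    def __call__(self, x):
        return depthwise_conv(x, self.kernels)
```

With identity-initialised kernels the layer passes features through unchanged; training then moves the kernels away from the identity.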
a second configuration training unit 43, configured to configure the degradation-ending level parameter of the adaptive model, and train the configured adaptive model to adjust the parameter of the feature adjusting layer;
a model adjusting unit 44, configured to perform an interpolation operation on the feature adjusting layer in the trained adaptive model, so that the finally formed image processing model can realize image processing at any degradation level from the starting degradation level to the ending degradation level.
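This interpolation can be sketched as a per-kernel blend between an identity kernel (the base model's behaviour at the starting degradation level) and the trained adjusting kernel (the ending degradation level). The 0-to-1 coefficient convention is an assumption for illustration:

```python
import numpy as np

def interpolate_adjust_kernels(trained_kernels, coeff):
    """trained_kernels: (C, k, k) depthwise kernels learned in the second
    training stage. coeff = 0.0 reproduces the identity kernels (starting
    degradation level); coeff = 1.0 reproduces the trained kernels (ending
    degradation level); values in between give intermediate levels."""
    k = trained_kernels.shape[-1]
    identity = np.zeros_like(trained_kernels)
    identity[:, k // 2, k // 2] = 1.0
    return (1.0 - coeff) * identity + coeff * trained_kernels
```

Sweeping the coefficient continuously is what makes the restoration strength continuously adjustable.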
Optionally, the model generating unit 42 is specifically configured to:
place a feature adjusting layer after each convolution layer of the trained base model to generate the adaptive model.
Optionally, the model generating unit 42 is specifically configured to:
place a feature adjusting layer after each convolution layer within the residual structure of the trained base model to generate the adaptive model.
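This placement inside a residual structure can be sketched as follows. To keep the ordering visible, the two convolutions are reduced to per-channel scalings and the adjusting layer to a per-channel scale and bias (a 1x1 special case of the adjusting kernels); these simplifications are assumptions of the sketch:

```python
import numpy as np

class ResidualBlockWithAdjust:
    """Residual structure with a feature adjusting layer placed directly
    after each convolution layer:
    conv -> adjust -> ReLU -> conv -> adjust -> skip addition."""

    def __init__(self, channels):
        self.conv1 = np.ones(channels)        # stand-in conv weights
        self.conv2 = np.ones(channels)
        self.scale = np.ones((2, channels))   # adjust layers, identity init
        self.bias = np.zeros((2, channels))

    def _adjust(self, h, i):
        return h * self.scale[i][:, None, None] + self.bias[i][:, None, None]

    def forward(self, x):                     # x: (C, H, W)
        h = x * self.conv1[:, None, None]     # first convolution
        h = self._adjust(h, 0)                # adjusting layer after the conv
        h = np.maximum(h, 0.0)                # activation function layer
        h = h * self.conv2[:, None, None]     # second convolution
        h = self._adjust(h, 1)                # adjusting layer after the conv
        return x + h                          # residual skip connection
```

With identity-initialised adjusting layers the block behaves like the base model's residual block, which is why the insertion does not require retraining the base network.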
Optionally, the starting degradation level parameter and the ending degradation level parameter each include: gaussian noise parameters, JPEG compression quality parameters, and bicubic downsampling parameters.
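For instance, the Gaussian-noise component of a degradation level can be applied to training images as below; JPEG compression and bicubic downsampling would need an image codec and resampler (e.g. Pillow) and are left out of this sketch, and the [0, 1] value range is an assumption:

```python
import numpy as np

def apply_gaussian_noise(img, sigma, seed=None):
    """Degrade an image (float array with values in [0, 1]) with additive
    Gaussian noise of standard deviation sigma, one of the degradation
    level parameters listed above."""
    rng = np.random.default_rng(seed)
    noisy = img + rng.normal(0.0, sigma, img.shape)
    return np.clip(noisy, 0.0, 1.0)
```

Training pairs for a given level are then (degraded image, clean image), with sigma set to the starting level for stage one and the ending level for stage two.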
Optionally, the convolution kernel is a depthwise convolution kernel.
Optionally, the model generating unit 42 includes:
a convolution kernel configuration subunit, configured to configure a size parameter of a convolution kernel of the feature adjusting layer;
and a model generating subunit, configured to add the configured feature adjusting layer to the trained base model to generate the adaptive model.
The terminal device 5 may include, but is not limited to, the processor 50 and the memory 51. Those skilled in the art will appreciate that fig. 5 is merely an example of the terminal device 5 and does not constitute a limitation thereof; the terminal device may include more or fewer components than those shown, combine certain components, or use different components. For example, the terminal device may also include input and output devices, network access devices, a bus, and the like.
The processor 50 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 51 may be an internal storage unit of the terminal device 5, such as a hard disk or a memory of the terminal device 5. The memory 51 may also be an external storage device of the terminal device 5, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card equipped on the terminal device 5. Further, the memory 51 may include both an internal storage unit and an external storage device of the terminal device 5. The memory 51 is used to store the computer program and other programs and data required by the terminal device, and may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the system is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed system/terminal device and method can be implemented in other ways. For example, the above-described system/terminal device embodiments are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, systems or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as independent products, may be stored in a computer-readable storage medium. Based on such understanding, all or part of the flow of the methods in the above embodiments of the present invention may also be implemented by a computer program; the computer program may be stored in a computer-readable storage medium, and when executed by a processor, the computer program implements the steps of the above method embodiments. The computer program includes computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or apparatus capable of carrying the computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunication signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be appropriately added or removed according to the requirements of legislation and patent practice in a jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, computer-readable media do not include electrical carrier signals and telecommunication signals.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present invention, and are intended to be included within the scope of the present invention.

Claims (10)

1. A method of constructing an image processing model, the method comprising:
configuring a degradation starting level parameter of a base model based on a residual module, and training the configured base model to adjust a network parameter of the base model, wherein the base model comprises a convolution layer, an activation function layer and an image up-sampling layer;
adding a feature adjusting layer to the trained base model to generate an adaptive model, wherein the feature adjusting layer is placed after a convolution layer, and the feature adjusting layer is composed of a plurality of convolution kernels;
configuring the degradation finishing level parameter of the self-adaptive model, and training the configured self-adaptive model to adjust the parameter of the characteristic adjusting layer;
and performing interpolation operation on the characteristic adjusting layer in the trained adaptive model so that the finally formed image processing model can realize image processing at any degradation level from the degradation starting level to the degradation ending level.
2. The method of claim 1, wherein the step of adding a feature adjustment layer to the trained base model to generate an adaptive model comprises:
placing the feature adjusting layer after each convolution layer of the trained base model to generate the adaptive model.
3. The method of claim 1, wherein the step of adding a feature adjustment layer to the trained base model to generate an adaptive model comprises:
placing the feature adjusting layer after the convolution layer in the residual structure of the trained base model to generate the adaptive model.
4. The method of claim 1, wherein each of the starting degradation level parameter and the ending degradation level parameter comprises: gaussian noise parameters, JPEG compression quality parameters, and bicubic downsampling parameters.
5. The method of claim 1, wherein the convolution kernel is a depthwise convolution kernel.
6. The method of any one of claims 1 to 5, wherein the step of adding a feature adjustment layer to the trained base model to generate an adaptive model comprises:
configuring a size parameter of a convolution kernel of the characteristic adjusting layer;
and adding the configured characteristic adjusting layer into the trained base model to generate the self-adaptive model.
7. An apparatus for constructing an image processing model, the apparatus comprising:
the first configuration training unit is used for configuring the initial degradation level parameters of the base model based on the residual error module and training the configured base model to adjust the network parameters of the base model, wherein the base model comprises a convolution layer, an activation function layer and an image up-sampling layer;
a model generating unit, configured to add a feature adjusting layer to the trained base model to generate an adaptive model, wherein the feature adjusting layer is placed after a convolution layer, and the feature adjusting layer is composed of a plurality of convolution kernels;
the second configuration training unit is used for configuring the degradation finishing level parameters of the self-adaptive model and training the configured self-adaptive model to adjust the parameters of the characteristic adjusting layer;
and the model adjusting unit is used for carrying out interpolation operation on the characteristic adjusting layer in the trained adaptive model so that the finally formed image processing model can realize image processing at any degradation level from the degradation starting level to the degradation ending level.
8. The apparatus for constructing an image processing model according to claim 7, wherein the model generating unit is specifically configured to:
placing the feature adjusting layer after each convolution layer of the trained base model to generate the adaptive model.
9. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method of constructing an image processing model according to any one of claims 1 to 6 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method of constructing an image processing model according to any one of claims 1 to 6.
CN201910217717.4A 2019-03-21 2019-03-21 Image processing model construction method and device and terminal equipment Active CN110047044B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910217717.4A CN110047044B (en) 2019-03-21 2019-03-21 Image processing model construction method and device and terminal equipment
PCT/CN2019/130876 WO2020186888A1 (en) 2019-03-21 2019-12-31 Method and apparatus for constructing image processing model, and terminal device

Publications (2)

Publication Number Publication Date
CN110047044A CN110047044A (en) 2019-07-23
CN110047044B true CN110047044B (en) 2021-01-29

Family

ID=67274921

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910217717.4A Active CN110047044B (en) 2019-03-21 2019-03-21 Image processing model construction method and device and terminal equipment

Country Status (2)

Country Link
CN (1) CN110047044B (en)
WO (1) WO2020186888A1 (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110047044B (en) * 2019-03-21 2021-01-29 深圳先进技术研究院 Image processing model construction method and device and terminal equipment
CN111028174B (en) * 2019-12-10 2023-08-04 深圳先进技术研究院 Multi-dimensional image restoration method and device based on residual connection
CN111275620B (en) * 2020-01-17 2023-08-01 金华青鸟计算机信息技术有限公司 Image super-resolution method based on Stacking integrated learning
CN111539337A (en) * 2020-04-26 2020-08-14 上海眼控科技股份有限公司 Vehicle posture correction method, device and equipment
CN112669240B (en) * 2021-01-22 2024-05-10 深圳市格灵人工智能与机器人研究院有限公司 High-definition image restoration method and device, electronic equipment and storage medium
CN112906554B (en) * 2021-02-08 2022-12-23 智慧眼科技股份有限公司 Model training optimization method and device based on visual image and related equipment
CN113222855B (en) * 2021-05-28 2023-07-11 北京有竹居网络技术有限公司 Image recovery method, device and equipment

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8907973B2 (en) * 2012-10-22 2014-12-09 Stmicroelectronics International N.V. Content adaptive image restoration, scaling and enhancement for high definition display
CN106251289A * 2016-07-21 2016-12-21 北京邮电大学 A video super-resolution reconstruction method based on deep learning and self-similarity
CN106709875A (en) * 2016-12-30 2017-05-24 北京工业大学 Compressed low-resolution image restoration method based on combined deep network
CN108537746A * 2018-03-21 2018-09-14 华南理工大学 A blind image restoration method for variable blur based on a deep convolutional network
WO2018214671A1 (en) * 2017-05-26 2018-11-29 杭州海康威视数字技术股份有限公司 Image distortion correction method and device and electronic device
CN109146788A (en) * 2018-08-16 2019-01-04 广州视源电子科技股份有限公司 Super-resolution image reconstruction method and device based on deep learning

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6772112B2 (en) * 2017-07-31 2020-10-21 株式会社日立製作所 Medical imaging device and medical image processing method
CN108288251A (en) * 2018-02-11 2018-07-17 深圳创维-Rgb电子有限公司 Image super-resolution method, device and computer readable storage medium
CN108765338A * 2018-05-28 2018-11-06 西华大学 Space target image restoration method based on convolutional auto-encoding convolutional neural networks
CN110047044B (en) * 2019-03-21 2021-01-29 深圳先进技术研究院 Image processing model construction method and device and terminal equipment

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
"Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network"; Christian Ledig et al.; 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR); 2017-11-09; 106-114 *
"Research on Image Restoration Methods Based on Convolutional Neural Networks"; Zhao Jianliang; China Masters' Theses Full-text Database, Information Science and Technology; 2018-08-15; Vol. 2018, No. 8; I138-485 *
"Research Progress of Deep-Learning-Based Image Super-Resolution Restoration"; Sun Xu et al.; Acta Automatica Sinica; May 2017; Vol. 43, No. 5; 697-709 *

Also Published As

Publication number Publication date
CN110047044A (en) 2019-07-23
WO2020186888A1 (en) 2020-09-24

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant