WO2020186888A1 - Method and apparatus for constructing an image processing model, and terminal device - Google Patents

Method and apparatus for constructing an image processing model, and terminal device

Info

Publication number
WO2020186888A1
WO2020186888A1 (PCT/CN2019/130876)
Authority
WO
WIPO (PCT)
Prior art keywords
model
degradation level
adjustment layer
image processing
feature adjustment
Prior art date
Application number
PCT/CN2019/130876
Other languages
English (en)
Chinese (zh)
Inventor
乔宇
何静雯
董超
Original Assignee
深圳先进技术研究院
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳先进技术研究院
Publication of WO2020186888A1

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration

Definitions

  • the invention belongs to the technical field of image processing, and in particular relates to a method, an apparatus, and terminal equipment for constructing an image processing model.
  • in practical applications, the degradation levels of images are continuous.
  • traditional methods can only train many image restoration models, one for each degradation level, or train a single image restoration model large enough to cover a wide range of degraded images.
  • either way, this brings a very large amount of computation and lacks flexibility.
  • the embodiments of the present invention provide an image processing model construction method, apparatus, and terminal equipment to solve the problem that existing image restoration handles only a single degradation level and lacks flexibility when processing degraded images with multiple degradation levels.
  • the first aspect of the embodiments of the present invention provides a method for constructing an image processing model, including:
  • the base model includes a convolution layer, an activation function layer, and an image upsampling layer;
  • Interpolation is performed on the feature adjustment layer in the trained adaptive model, so that the finally formed image processing model can realize image processing of any degradation level from the initial degradation level to the end degradation level.
  • a second aspect of the embodiments of the present invention provides an image processing model construction device, including:
  • the first configuration training unit is used to configure the initial degradation level parameters of the base model based on the residual module, and to train the configured base model to adjust the network parameters of the base model.
  • the base model includes a convolutional layer, an activation function layer, and an image upsampling layer;
  • a model generation unit configured to add a feature adjustment layer to the trained base model to generate an adaptive model, where the feature adjustment layer is composed of multiple convolution kernels;
  • the second configuration training unit is configured to configure the end degradation level parameters of the adaptive model, and train the configured adaptive model to adjust the parameters of the feature adjustment layer;
  • the model adjustment unit is used to perform interpolation operations on the feature adjustment layer in the trained adaptive model, so that the finally formed image processing model can realize image processing of any degradation level from the start degradation level to the end degradation level.
  • a third aspect of the embodiments of the present invention provides a terminal device, including:
  • the computer program includes:
  • the first configuration training unit is used to configure the initial degradation level parameters of the base model based on the residual module, and to train the configured base model to adjust the network parameters of the base model.
  • the base model includes a convolutional layer, an activation function layer, and an image upsampling layer;
  • a model generation unit configured to add a feature adjustment layer to the trained base model to generate an adaptive model, where the feature adjustment layer is composed of multiple convolution kernels;
  • the second configuration training unit is configured to configure the end degradation level parameters of the adaptive model, and train the configured adaptive model to adjust the parameters of the feature adjustment layer;
  • the model adjustment unit is used to perform interpolation operations on the feature adjustment layer in the trained adaptive model, so that the finally formed image processing model can realize image processing of any degradation level from the start degradation level to the end degradation level.
  • a fourth aspect of the embodiments of the present invention provides a computer-readable storage medium storing a computer program, wherein the computer program is executed by a processor to implement the first aspect of the embodiments of the present invention Provide the steps of the method for constructing the image processing model.
  • the computer program includes:
  • the first configuration training unit is used to configure the initial degradation level parameters of the base model based on the residual module, and to train the configured base model to adjust the network parameters of the base model.
  • the base model includes a convolutional layer, an activation function layer, and an image upsampling layer;
  • a model generation unit configured to add a feature adjustment layer to the trained base model to generate an adaptive model, where the feature adjustment layer is composed of multiple convolution kernels;
  • the second configuration training unit is configured to configure the end degradation level parameters of the adaptive model, and train the configured adaptive model to adjust the parameters of the feature adjustment layer;
  • the model adjustment unit is used to perform interpolation operations on the feature adjustment layer in the trained adaptive model, so that the finally formed image processing model can realize image processing of any degradation level from the start degradation level to the end degradation level.
  • the embodiment of the present invention has the following beneficial effects: the initial degradation level parameter of the base model based on the residual module is configured, and the configured base model is trained to adjust its network parameters; the feature adjustment layer is then added to the trained base model to generate an adaptive model, the end degradation level parameters of the adaptive model are configured, and the configured adaptive model is trained to adjust the parameters of the feature adjustment layer; finally, an interpolation operation is performed on the feature adjustment layer of the trained adaptive model, so that the resulting image processing model can perform image processing at any degradation level from the start degradation level to the end degradation level. This realizes the image restoration task at any degradation level and enables continuous adjustment of the restoration intensity; since no new image noise is introduced, users can tune the adjustment coefficient of the feature adjustment layer according to their preferences to achieve a satisfactory image processing effect, giving a better user experience.
  • FIG. 1 is an implementation flowchart of an image processing model construction method provided by an embodiment of the present invention
  • FIG. 2 is a schematic diagram of a network architecture of a base model provided by an embodiment of the present invention.
  • FIG. 3 is a schematic diagram of a network architecture of an adaptive model provided by an embodiment of the present invention.
  • FIG. 4 is a schematic diagram of an image processing model construction device provided by an embodiment of the present invention.
  • Fig. 5 is a schematic diagram of a terminal device provided by an embodiment of the present invention.
  • FIG. 1 shows an implementation process of an image processing model construction method provided by an embodiment of the present invention, which is detailed as follows:
  • in step S101, the initial degradation level parameter of the base model based on the residual module is configured, and the configured base model is trained to adjust the network parameters of the base model.
  • the base model based on the residual module is mainly used to solve the image restoration of the initial degradation level, which includes a convolution layer, an activation function layer, and an image upsampling layer.
  • the embodiment of the present invention preferably uses a network architecture similar to SRResNet, with a residual module as its core, composed mainly of convolutional layers, activation function layers, and an image up-sampling layer; see Figure 2 for details.
  • the stride of the first convolutional layer in the base model is 2, so that the length and width of the image passing through this layer become 1/2 of the original, improving the processing efficiency of the residual modules; after being processed by the residual modules, the features are input to the convolutional layer connected to them, and the output of that convolutional layer is passed to the image up-sampling layer to restore the original size.
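The shape bookkeeping described above can be sketched with the standard convolution output-size formula. The 64×64 input, 3×3 kernel, and padding of 1 below are assumptions for illustration, not values stated in the embodiment:

```python
def conv_out_size(n, kernel=3, stride=1, padding=1):
    """Spatial size after a convolution (standard output-size formula)."""
    return (n + 2 * padding - kernel) // stride + 1

h = w = 64                       # assumed input size, not from the embodiment
h = conv_out_size(h, stride=2)   # the first convolutional layer uses stride 2,
w = conv_out_size(w, stride=2)   # halving the height and width
print(h, w)                      # 32 32

# the residual modules work on the half-size feature maps;
# the image up-sampling layer then restores the original size
h, w = h * 2, w * 2
print(h, w)                      # 64 64
```

Running the residual modules at half resolution is where the efficiency gain comes from: each module touches a quarter as many pixels.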
  • the model performs well in Gaussian denoising, super-resolution and JPEG lossy compression and restoration tasks.
  • first, the pre-configured degradation level parameters are obtained; these include a start degradation level parameter and an end degradation level parameter. The start degradation level parameter of the base model is then configured accordingly.
  • both the start degradation level parameter and the end degradation level parameter include: Gaussian noise parameters, JPEG compression quality parameters and bi-cubic downsampling parameters, where:
  • the initial degradation level parameters are: Gaussian noise, σ = 15; JPEG compression quality, q = 80; bicubic downsampling, ×3;
  • the corresponding end degradation level parameters are: Gaussian noise, σ = 75; JPEG compression quality, q = 10; bicubic downsampling, ×4.
  • the start degradation level parameter and the end degradation level parameter are not limited to the specific values above, which are given only as an example.
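As an illustration only, the example start and end parameters above can be written as a configuration, with a linear blend giving an intermediate level. The linear blend here is a simplification for the sketch; the embodiment realizes intermediate levels by interpolating the feature adjustment layer, not the degradation parameters themselves:

```python
start_level = {"gaussian_noise_sigma": 15, "jpeg_quality": 80, "downsample_factor": 3}
end_level   = {"gaussian_noise_sigma": 75, "jpeg_quality": 10, "downsample_factor": 4}

def intermediate_level(coeff):
    """Blend the start and end degradation parameters linearly;
    coeff=0 gives the start level, coeff=1 gives the end level."""
    return {k: start_level[k] + coeff * (end_level[k] - start_level[k])
            for k in start_level}

print(intermediate_level(0.5))  # a degradation level halfway between the two
```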
  • in step S102, the feature adjustment layer is added to the trained base model to generate an adaptive model.
  • the feature adjustment layer is composed of multiple convolution kernels. Since a depthwise convolution kernel operates on a single feature map, it is much faster than a general convolution kernel.
  • the feature adjustment layer is composed of multiple depthwise convolution filters.
  • the convolution kernels constituting the feature adjustment layer may be any kernels with a suitable computational speed, and are not limited to depthwise convolution kernels.
  • the size of the convolution kernel can be 1 ⁇ 1, 3 ⁇ 3, 5 ⁇ 5, 7 ⁇ 7, etc., which is not specifically limited here.
  • the larger the convolution kernel, the better the adaptive model performs on image restoration tasks at the end degradation level; however, as the kernel grows, the improvement becomes marginal.
  • for image denoising tasks, setting the size of the convolution kernel to 1×1 already gives very good results.
  • for image super-resolution tasks, however, the size of the convolution kernel should be set to at least 5×5.
  • the adaptive model is mainly used to add a feature adjustment layer on the basis of the base model, so that the formed adaptive model can handle the image restoration task at the end degradation level. Because only convolution kernels are added, the feature adjustment layer introduces few network parameters: for example, feature adjustment layers with kernel sizes of 1×1 and 5×5 add only 0.15% and 3.65% of the network parameters, respectively, which does not significantly increase the computational cost of the network model.
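A back-of-envelope check of why the added layer is cheap, assuming one depthwise filter per channel and a base model built from 64-channel 3×3 convolutions. The channel count of 64 is an assumption, so the fractions come out near, not exactly equal to, the 0.15% and 3.65% quoted above:

```python
def added_param_fraction(channels, adafm_kernel, base_kernel=3):
    """Parameters added by one depthwise k x k adjustment filter per channel,
    relative to one standard base_kernel x base_kernel convolution with
    `channels` input and output channels."""
    depthwise = channels * adafm_kernel ** 2            # one filter per feature map
    standard  = channels * channels * base_kernel ** 2  # full convolution
    return depthwise / standard

C = 64  # assumed channel width of the base model
print(f"1x1 adjustment: {added_param_fraction(C, 1):.2%}")  # ~0.17%
print(f"5x5 adjustment: {added_param_fraction(C, 5):.2%}")  # ~4.34%
```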
  • see Figure 3 for the specific structure of the adaptive model: on the basis of the base model, the feature adjustment layer is added after the convolutional layers of the base model or after the convolutional layers of the residual structure. The residual structure referred to here consists of 16 residual modules, where each residual module includes a convolutional layer, an activation layer connected to it, and another convolutional layer connected to the activation layer.
  • step S102 is specifically:
  • the feature adjustment layer is placed after all the convolutional layers of the trained base model to generate an adaptive model.
  • step S102 is specifically:
  • the feature adjustment layer is placed after the convolutional layer in the residual structure of the trained base model to generate an adaptive model.
  • in order to facilitate adjustment of the feature map after the convolution operation, the feature adjustment layer needs to be placed after the convolution layer. Specifically, it can be placed after all the convolutional layers of the base model, or after the convolutional layers in the residual structure of the base model; there is no specific limitation here.
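The placement rule can be sketched schematically. The layer names below are illustrative labels for the sketch, not the embodiment's actual module names:

```python
def insert_adjustment_layers(base_layers):
    """Place a feature adjustment layer after every convolutional
    layer of a (schematic) base model, given as a list of labels."""
    adapted = []
    for layer in base_layers:
        adapted.append(layer)
        if layer == "conv":
            adapted.append("feature_adjust")
    return adapted

base = ["conv", "relu", "conv", "upsample"]  # illustrative layer list
print(insert_adjustment_layers(base))
# ['conv', 'feature_adjust', 'relu', 'conv', 'feature_adjust', 'upsample']
```

Restricting the insertion to the convolutional layers inside the residual structure would be the same idea with a narrower filter condition.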
  • step S102 specifically includes:
  • Step S1021: configure the size parameter of the convolution kernel of the feature adjustment layer.
  • Step S1022: add the configured feature adjustment layer to the trained base model to generate an adaptive model.
  • for different image processing tasks, the size of the convolution kernel in the corresponding feature adjustment layer differs.
  • the size of the convolution kernel in the feature adjustment layer can be configured according to the actual situation: for example, when the adaptive model is to handle an image denoising task, the size of the convolution kernel is set to 1×1; when an image super-resolution task is to be processed, the size of the convolution kernel is set to 5×5 or more.
  • the size parameters of the convolution kernel of the feature adjustment layer can be pre-configured: for the image processing task the user selects on the adaptive model interface, the corresponding configuration parameters can be looked up in a table mapping image processing tasks to configuration parameters; alternatively, they can be configured by the user. There is no specific limitation here.
  • in step S103, the end degradation level parameter of the adaptive model is configured, and the configured adaptive model is trained to adjust the parameters of the feature adjustment layer.
  • the end degradation level parameter of the adaptive model is configured according to the acquired pre-configured degradation level parameters, and the configured adaptive model is then trained so that it can output restored images of better quality.
  • the adaptive model is trained by adjusting only the parameters of the feature adjustment layer; the desired image restoration performance is achieved after multiple rounds of training and adjustment of those parameters.
  • during training, the samples used are images at the corresponding degradation levels.
  • in step S104, an interpolation operation is performed on the feature adjustment layer in the trained adaptive model, so that the finally formed image processing model can realize image processing at any degradation level from the start degradation level to the end degradation level.
  • the adjustment is performed on the adaptive model, specifically by performing an interpolation operation on all the feature adjustment layers in the adaptive model, so that the finally formed image processing model can realize image restoration at any degradation level between "start" and "end", where "start" is the start degradation level and "end" is the end degradation level.
  • in essence, the interpolation operation can be understood as multiplying the parameters of all the feature adjustment layers of the adaptive model by an adjustment coefficient.
  • the adjustment coefficient ranges from 0 to 1; by changing the adjustment coefficient, the restoration strength for an image at a given degradation level can be changed continuously. If the degradation at the start level is less severe than at the end level, the larger the adjustment coefficient, the stronger the restoration of the degraded image.
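A minimal sketch of the interpolation described above, assuming the feature adjustment parameters are stored as a flat list; the parameter values and the coefficient of 0.5 are made up for illustration:

```python
def interpolate_adjustment(filter_params, coeff):
    """Scale every feature adjustment parameter by the adjustment
    coefficient: 0 leaves only the base model's start-level behavior,
    1 applies the full end-level adaptation."""
    assert 0.0 <= coeff <= 1.0
    return [coeff * p for p in filter_params]

trained = [0.8, -0.2, 0.5]  # made-up trained adjustment parameters
print(interpolate_adjustment(trained, 0.5))  # [0.4, -0.1, 0.25]
```

Because the coefficient varies continuously in [0, 1], the restoration strength varies continuously with it, which is the basis of the user-tunable restoration described later.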
  • during training, the convolutional neural network performs convolution on the degraded images in the training samples and the corresponding clear images to extract the characteristic features inside the images; the network parameters of the base model and the parameters of the feature adjustment layer in the adaptive model are then trained and adjusted to learn the mapping from degraded images to clear images.
  • the mean square error is used to calculate the error between the target image and the learned image, and the network parameters are then adjusted and updated through backpropagation; as the model converges, the network parameters reach their optimal values after multiple optimization iterations.
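The MSE loss and one backpropagation-style update can be illustrated in plain Python. The scalar "pixels" and the learning rate are assumptions for the sketch; this is not the embodiment's actual training code:

```python
def mse(pred, target):
    """Mean squared error between predicted and target pixel values."""
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)

def mse_grad(pred, target):
    """Gradient of the MSE with respect to each prediction."""
    n = len(pred)
    return [2 * (p - t) / n for p, t in zip(pred, target)]

pred, target = [0.2, 0.9], [0.0, 1.0]
lr = 0.5  # assumed learning rate
step = [p - lr * g for p, g in zip(pred, mse_grad(pred, target))]
assert mse(step, target) < mse(pred, target)  # the update reduces the error
```

Repeating such updates over many iterations is what drives the network parameters toward their optimal values as the model converges.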
  • in summary, the initial degradation level parameter of the base model based on the residual module is configured, and the configured base model is trained to adjust the network parameters of the base model; the feature adjustment layer is then added to the trained base model to generate an adaptive model, the end degradation level parameters of the adaptive model are configured, and the configured adaptive model is trained to adjust the parameters of the feature adjustment layer; finally, an interpolation operation is performed on the feature adjustment layer in the trained adaptive model, so that the finally formed image processing model can realize image processing at any degradation level from the start degradation level to the end degradation level. This achieves the image restoration task at any degradation level and enables continuous adjustment of the restoration intensity; since no new image noise is introduced, users can adjust the adjustment coefficient of the feature adjustment layer according to their preferences to achieve a satisfactory image processing effect, giving a better user experience.
  • FIG. 4 shows a schematic diagram of a device for constructing an image processing model provided by an embodiment of the present invention; for ease of description, only the parts relevant to the embodiment of the present invention are shown.
  • the device includes:
  • the first configuration training unit 41 is configured to configure the initial degradation level parameters of the base model based on the residual module, and to train the configured base model to adjust the network parameters of the base model, the base model including a convolutional layer, an activation function layer, and an image upsampling layer;
  • the model generating unit 42 is configured to add a feature adjustment layer to the trained base model to generate an adaptive model, where the feature adjustment layer is composed of multiple convolution kernels;
  • the second configuration training unit 43 is configured to configure the end degradation level parameters of the adaptive model, and train the configured adaptive model to adjust the parameters of the feature adjustment layer;
  • the model adjustment unit 44 is configured to perform interpolation operations on the feature adjustment layer in the trained adaptive model, so that the finally formed image processing model can realize image processing of any degradation level from the start degradation level to the end degradation level .
  • model generating unit 42 is specifically configured to:
  • the feature adjustment layer is placed after all the convolutional layers of the trained base model to generate an adaptive model.
  • model generating unit 42 is specifically configured to:
  • the feature adjustment layer is placed after the convolutional layer in the residual structure of the trained base model to generate an adaptive model.
  • both the start degradation level parameter and the end degradation level parameter include: Gaussian noise parameters, JPEG compression quality parameters, and bi-cubic downsampling parameters.
  • the convolution kernel is a depthwise convolution kernel.
  • the model generating unit 42 includes:
  • the convolution kernel configuration subunit is used to configure the size parameters of the convolution kernel of the feature adjustment layer;
  • the model generation subunit is used to add the configured feature adjustment layer to the trained base model to generate an adaptive model.
  • in summary, the initial degradation level parameter of the base model based on the residual module is configured, and the configured base model is trained to adjust the network parameters of the base model; the feature adjustment layer is then added to the trained base model to generate an adaptive model, the end degradation level parameters of the adaptive model are configured, and the configured adaptive model is trained to adjust the parameters of the feature adjustment layer; finally, an interpolation operation is performed on the feature adjustment layer in the trained adaptive model, so that the finally formed image processing model can realize image processing at any degradation level from the start degradation level to the end degradation level. This achieves the image restoration task at any degradation level and enables continuous adjustment of the restoration intensity; since no new image noise is introduced, users can adjust the adjustment coefficient of the feature adjustment layer according to their preferences to achieve a satisfactory image processing effect, giving a better user experience.
  • Fig. 5 is a schematic diagram of a terminal according to an embodiment of the present invention.
  • the terminal device 5 of this embodiment includes a processor 50, a memory 51, and a computer program 52 stored in the memory 51 and running on the processor 50.
  • when the processor 50 executes the computer program 52, the steps in the foregoing embodiments of the image processing model construction method are implemented, for example, steps S101 to S104 shown in FIG. 1.
  • alternatively, when the processor 50 executes the computer program 52, the functions of the units in the foregoing system embodiments, such as the functions of modules 41 to 44 shown in FIG. 4, are realized.
  • the computer program 52 may be divided into one or more units, and the one or more units are stored in the memory 51 and executed by the processor 50 to complete the present invention.
  • the one or more units may be a series of computer program instruction segments capable of completing specific functions, and the instruction segments are used to describe the execution process of the computer program 52 in the terminal device 5.
  • the computer program 52 can be divided into a first configuration training unit 41, a model generation unit 42, a second configuration training unit 43, and a model adjustment unit 44.
  • the specific functions of each unit are as follows:
  • the first configuration training unit 41 is configured to configure the initial degradation level parameters of the base model based on the residual module, and to train the configured base model to adjust the network parameters of the base model, the base model including a convolutional layer, an activation function layer, and an image upsampling layer;
  • the model generating unit 42 is configured to add a feature adjustment layer to the trained base model to generate an adaptive model, where the feature adjustment layer is composed of multiple convolution kernels;
  • the second configuration training unit 43 is configured to configure the end degradation level parameters of the adaptive model, and train the configured adaptive model to adjust the parameters of the feature adjustment layer;
  • the model adjustment unit 44 is configured to perform interpolation operations on the feature adjustment layer in the trained adaptive model, so that the finally formed image processing model can realize image processing of any degradation level from the start degradation level to the end degradation level .
  • model generating unit 42 is specifically configured to:
  • the feature adjustment layer is placed after all the convolutional layers of the trained base model to generate an adaptive model.
  • model generating unit 42 is specifically configured to:
  • the feature adjustment layer is placed after the convolutional layer in the residual structure of the trained base model to generate an adaptive model.
  • both the start degradation level parameter and the end degradation level parameter include: Gaussian noise parameters, JPEG compression quality parameters, and bi-cubic downsampling parameters.
  • the convolution kernel is a depthwise convolution kernel.
  • the model generating unit 42 includes:
  • the convolution kernel configuration subunit is used to configure the size parameters of the convolution kernel of the feature adjustment layer;
  • the model generation subunit is used to add the configured feature adjustment layer to the trained base model to generate an adaptive model.
  • the terminal device 5 may include, but is not limited to, a processor 50 and a memory 51. Those skilled in the art can understand that FIG. 5 is only an example of the terminal device 5, and does not constitute a limitation on the terminal device 5. It may include more or less components than shown in the figure, or a combination of certain components, or different components. For example, the terminal may also include input and output devices, network access devices, buses, etc.
  • the so-called processor 50 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc.
  • the general-purpose processor may be a microprocessor or the processor may also be any conventional processor or the like.
  • the memory 51 may be an internal storage unit of the terminal device 5, such as a hard disk or a memory of the terminal device 5.
  • the memory 51 may also be an external storage device of the terminal device 5, for example, a plug-in hard disk, a smart media card (SMC), a Secure Digital (SD) card, or a flash card equipped on the terminal device 5. Further, the memory 51 may also include both an internal storage unit of the terminal device 5 and an external storage device.
  • the memory 51 is used to store the computer program and other programs and data required by the terminal.
  • the memory 51 can also be used to temporarily store data that has been output or will be output.
  • the disclosed system/terminal device and method may be implemented in other ways.
  • the system/terminal device embodiments described above are only illustrative.
  • the division of the modules or units is only a logical function division, and there may be other divisions in actual implementation; for example, multiple units or components can be combined or integrated into another system, or some features can be omitted or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, systems or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • the functional units in the various embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit can be implemented in the form of hardware or software functional unit.
  • if the integrated module/unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium.
  • all or part of the processes in the methods of the above embodiments of the present invention can also be completed by instructing the relevant hardware through a computer program.
  • the computer program can be stored in a computer-readable storage medium. When the program is executed by the processor, the steps of the foregoing method embodiments can be implemented.
  • the computer program includes computer program code, and the computer program code may be in the form of source code, object code, executable file, or some intermediate forms.
  • the computer-readable medium may include: any entity or system capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The present invention relates to a method and apparatus for building an image processing model, and a terminal device. The method comprises the steps of: configuring an initial degradation level parameter of a base model built from residual modules, and training the configured base model to adjust the network parameters of the base model (S101); adding a feature adjustment layer to the base model to generate an adaptive model (S102); configuring a final degradation level parameter of the adaptive model, and training the configured adaptive model to adjust the parameters of the feature adjustment layer (S103); and performing an interpolation operation on the feature adjustment layer of the trained adaptive model, so that the resulting image processing model can process images at any degradation level from the initial degradation level to the final degradation level (S104). The method enables continuous adjustment of the restoration strength and improves the user experience.
PCT/CN2019/130876 2019-03-21 2019-12-31 Method and apparatus for building an image processing model, and terminal device WO2020186888A1 (fr)
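Step S104 of the abstract interpolates the parameters of the feature adjustment layer between the values learned for the initial and final degradation levels, which is what yields a model for any intermediate level. A minimal sketch of such a linear parameter blend, assuming the layer's weights and biases are stored as NumPy arrays (the function name, parameter names, and shapes below are illustrative assumptions, not the patent's actual implementation):

```python
import numpy as np

def interpolate_adjustment_layer(theta_start, theta_end, alpha):
    """Blend feature adjustment layer parameters linearly.

    alpha = 0.0 reproduces the layer trained for the initial degradation
    level, alpha = 1.0 the layer trained for the final level; intermediate
    values give a continuously adjustable restoration strength.
    """
    return {name: (1.0 - alpha) * theta_start[name] + alpha * theta_end[name]
            for name in theta_start}

# Toy endpoint parameters (shapes chosen arbitrarily for illustration):
theta_start = {"weight": np.zeros((64, 1, 3, 3)), "bias": np.zeros(64)}
theta_end = {"weight": np.ones((64, 1, 3, 3)), "bias": np.ones(64)}

# A mid-strength layer, halfway between the two trained endpoints.
theta_mid = interpolate_adjustment_layer(theta_start, theta_end, 0.5)
```

In practice the blended parameters would be loaded back into the feature adjustment layer of the adaptive model before inference; only these layer parameters change while the base model's network parameters stay fixed, which is why a single model can cover the whole degradation range.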

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910217717.4A CN110047044B (zh) 2019-03-21 2019-03-21 Method, apparatus and terminal device for building an image processing model
CN201910217717.4 2019-03-21

Publications (1)

Publication Number Publication Date
WO2020186888A1 true WO2020186888A1 (fr) 2020-09-24

Family

ID=67274921

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/130876 WO2020186888A1 (fr) 2019-12-31 2019-03-21 Method and apparatus for building an image processing model, and terminal device

Country Status (2)

Country Link
CN (1) CN110047044B (fr)
WO (1) WO2020186888A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112669240A (zh) * 2021-01-22 2021-04-16 深圳市格灵人工智能与机器人研究院有限公司 High-definition image restoration method and apparatus, electronic device, and storage medium

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110047044B (zh) * 2019-03-21 2021-01-29 深圳先进技术研究院 Method, apparatus and terminal device for building an image processing model
CN111028174B (zh) * 2019-12-10 2023-08-04 深圳先进技术研究院 Multi-dimensional image restoration method and device based on residual connections
CN111275620B (zh) * 2020-01-17 2023-08-01 金华青鸟计算机信息技术有限公司 Image super-resolution method based on Stacking ensemble learning
CN111539337A (zh) * 2020-04-26 2020-08-14 上海眼控科技股份有限公司 Vehicle posture correction method, apparatus and device
CN112906554B (zh) * 2021-02-08 2022-12-23 智慧眼科技股份有限公司 Model training optimization method and apparatus based on visual images, and related devices
CN113222855B (zh) * 2021-05-28 2023-07-11 北京有竹居网络技术有限公司 Image restoration method, apparatus and device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106709875A (zh) * 2016-12-30 2017-05-24 北京工业大学 Compressed low-resolution image restoration method based on a joint deep network
CN108288251A (zh) * 2018-02-11 2018-07-17 深圳创维-Rgb电子有限公司 Image super-resolution method and apparatus, and computer-readable storage medium
CN108765338A (zh) * 2018-05-28 2018-11-06 西华大学 Space target image restoration method based on a convolutional auto-encoder convolutional neural network
WO2019026407A1 (fr) * 2017-07-31 2019-02-07 株式会社日立製作所 Medical imaging device and medical image processing method
CN110047044A (zh) * 2019-03-21 2019-07-23 深圳先进技术研究院 Method, apparatus and terminal device for building an image processing model

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8907973B2 (en) * 2012-10-22 2014-12-09 Stmicroelectronics International N.V. Content adaptive image restoration, scaling and enhancement for high definition display
CN106251289A (zh) * 2016-07-21 2016-12-21 北京邮电大学 Video super-resolution reconstruction method based on deep learning and self-similarity
CN108932697B (zh) * 2017-05-26 2020-01-17 杭州海康威视数字技术股份有限公司 De-distortion method and apparatus for distorted images, and electronic device
CN108537746B (zh) * 2018-03-21 2021-09-21 华南理工大学 Blind restoration method for variably blurred images based on a deep convolutional network
CN109146788B (zh) * 2018-08-16 2023-04-18 广州视源电子科技股份有限公司 Super-resolution image reconstruction method and apparatus based on deep learning

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112669240A (zh) * 2021-01-22 2021-04-16 深圳市格灵人工智能与机器人研究院有限公司 High-definition image restoration method and apparatus, electronic device, and storage medium
CN112669240B (zh) * 2021-01-22 2024-05-10 深圳市格灵人工智能与机器人研究院有限公司 High-definition image restoration method and apparatus, electronic device, and storage medium

Also Published As

Publication number Publication date
CN110047044A (zh) 2019-07-23
CN110047044B (zh) 2021-01-29

Similar Documents

Publication Publication Date Title
WO2020186888A1 (fr) Method and apparatus for building an image processing model, and terminal device
CN108830816B (zh) Image enhancement method and apparatus
WO2021208151A1 (fr) Model compression method, and image processing method and device
CN110490296A (zh) Method and system for constructing a convolutional neural network (CNN) model
CN109325928A (zh) Image reconstruction method, apparatus and device
CN108369494A (zh) Spectral correction of audio signals
WO2019194299A1 (fr) Learning device, learning method and learning program
CN109616102B (zh) Acoustic model training method, apparatus and storage medium
CN112768056A (zh) Disease prediction model establishment method and apparatus based on a joint learning framework
CN109102468B (zh) Image enhancement method, apparatus, terminal device and storage medium
CN109616103B (zh) Acoustic model training method, apparatus and storage medium
CN115034375B (zh) Data processing method and apparatus, neural network model, device, and medium
CN112201272A (zh) Audio data noise reduction method, apparatus, device and storage medium
CN108230253A (zh) Image restoration method, apparatus, electronic device and computer storage medium
US20220405561A1 (en) Electronic device and controlling method of electronic device
CN111158907A (zh) Data processing method and apparatus, electronic device and storage medium
CN112801882A (zh) Image processing method and apparatus, storage medium and electronic device
CN115511754A (zh) Low-light image enhancement method based on an improved Zero-DCE network
CN109271499A (zh) Recommendation method, apparatus and terminal device for answering users in knowledge question answering
CN112561822B (zh) Beautification method, apparatus, electronic device and storage medium
CN112184568A (zh) Image processing method, apparatus, electronic device and readable storage medium
CN111382772B (zh) Image processing method, apparatus and terminal device
TW202219750A (zh) Machine learning model training method, electronic device, controller and storage medium
TW202044125A (zh) Method for training sparsely connected neural networks
CN111062886A (zh) Super-resolution method, system, electronic product and medium for hotel pictures

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19919784

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19919784

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 22.02.2022)
