WO2023130650A1 - Image restoration method and apparatus, electronic device, and storage medium - Google Patents

Image restoration method and apparatus, electronic device, and storage medium

Info

Publication number
WO2023130650A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
processing model
extraction network
feature extraction
image processing
Prior art date
Application number
PCT/CN2022/095379
Other languages
English (en)
French (fr)
Inventor
张英杰
史宏志
赵雅倩
Original Assignee
苏州浪潮智能科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 苏州浪潮智能科技有限公司
Publication of WO2023130650A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/25 - Fusion techniques
    • G06F18/253 - Fusion techniques of extracted features
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20081 - Training; Learning

Definitions

  • the present application relates to the field of image processing, and in particular to an image restoration method and apparatus, an electronic device, and a storage medium.
  • Super Resolution (SR) is the process of recovering a high-resolution image from a given low-resolution image; it is a classic computer-vision application with important value in fields such as surveillance and satellite remote sensing.
  • the present application provides an image restoration method and apparatus, an electronic device, and a storage medium.
  • an image restoration method including:
  • the image processing model includes: a light feature extraction network and an image feature extraction network;
  • obtaining a pre-trained image processing model includes:
  • the training sample set includes a plurality of dark light sample images and bright sample images corresponding to the dark light sample images;
  • the dark-light sample image is input into the initial image processing model, so that the light feature extraction network and the image feature extraction network in the initial image processing model separately extract the image features of the dark-light sample image, and a bright image is generated based on the image features;
  • the initial image processing model is determined as the image processing model.
  • the method also includes:
  • the updated initial image processing model is trained using the dark light sample images in the training sample set until the loss function value between the bright image output by the updated initial image processing model and the bright sample image is less than a preset threshold.
  • the initial image processing model includes: a light feature extraction network and an image feature extraction network, wherein the light feature extraction network includes convolution parameters and convolutional layers determined based on the illumination encoding matrix, and the image feature extraction network includes multiple fully connected layers.
  • before the dark-light sample image is input into the initial image processing model, the method further comprises:
  • the convolution parameters of the light feature extraction network in the feature fusion process are determined, and the channel coefficients of the image feature extraction network in the feature fusion process are determined.
  • the dark-light sample image is input into the initial image processing model, so that the light feature extraction network and the image feature extraction network in the initial image processing model separately extract the image features of the dark-light sample image and generate a bright image, including:
  • the light feature extraction network generates the first image feature from the image features and the convolution parameters, the image feature extraction network generates the second image feature from the image features and the channel coefficients, and the first image feature and the second image feature are fused to generate a bright image.
  • an image restoration device including:
  • the first acquisition module is used to acquire the original dark light image to be restored
  • the second acquisition module is used to acquire a pre-trained image processing model, wherein the image processing model includes: a light feature extraction network and an image feature extraction network; and
  • the processing module is used to input the original dark-light image into the image processing model, so that the light feature extraction network in the image processing model extracts the illumination features in the original dark-light image and the image feature extraction network extracts the target image features in the original dark-light image, and a target bright image is generated based on the illumination features and the target image features.
  • a storage medium is further provided; the storage medium includes a stored program, and the above steps are executed when the program runs.
  • an electronic device is further provided, including a processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface, and the memory communicate with one another through the communication bus; wherein:
  • the memory is used to store computer-readable instructions, and the processor is used to execute the steps in the above method by running the program stored in the memory.
  • the present application also provides one or more non-volatile storage media storing computer-readable instructions which, when executed by one or more processors, cause the one or more processors to perform the steps of the image restoration method according to any one of the above items.
  • Fig. 1 is a schematic diagram of an application environment of an image restoration method according to one or more embodiments
  • Fig. 2 is a flowchart of an image restoration method according to one or more embodiments
  • Fig. 3 is a flowchart of an image restoration method according to one or more embodiments
  • Fig. 4 is a schematic diagram of training an illumination encoding matrix according to one or more embodiments
  • Fig. 5 is a schematic diagram of an image restoration process according to one or more embodiments.
  • Fig. 6 is a block diagram of an image restoration device according to one or more embodiments.
  • Fig. 7 is a schematic structural diagram of an electronic device according to one or more embodiments.
  • the image restoration method provided by the present application can be applied to the application environment shown in FIG. 1.
  • the server 100 communicates with the client 101 through the network 102 .
  • the server 100 is used to receive the image processing request sent by the client 101 and obtain the original dark-light image to be restored from the image processing request; acquire a pre-trained image processing model, wherein the image processing model includes: a light feature extraction network and an image feature extraction network; and input the original dark-light image into the image processing model, so that the light feature extraction network in the image processing model extracts the illumination features in the original dark-light image and the image feature extraction network extracts the target image features in the original dark-light image, and a target bright image is generated based on the illumination features and the target image features.
  • the server 100 can be implemented by an independent server or a server cluster composed of multiple servers.
  • the client 101 is used to send an image processing request to the server 100 .
  • the client 101 can be, but is not limited to, various personal computers, notebook computers, smart phones, tablet computers, and portable wearable devices.
  • the network 102 is used to establish the network connection between the client 101 and the server 100; specifically, the network 102 may include various types of wired or wireless networks.
  • Embodiments of the present application provide an image restoration method, device, electronic equipment, and storage medium.
  • the method provided by the embodiment of the present invention can be applied to any electronic device as required, for example a server, a terminal, or another electronic device; this is not specifically limited here, and for convenience of description it is hereinafter referred to as the electronic device.
  • Fig. 2 is a flow chart of an image restoration method provided by the embodiment of the present application.
  • the method is applied to the server in Fig. 1 as an example for illustration.
  • the method includes:
  • Step S11 acquiring the original dark-light image to be restored.
  • the method provided in the embodiment of this application is applied to the server, and the server is used to restore the original dark-light image.
  • specifically, the server receives the image processing request sent by the client and obtains the dark-light image from the image processing request; if the resolution of the dark-light image is smaller than a preset resolution, the dark-light image is determined as the original dark-light image to be restored.
  • Step S12 acquiring a pre-trained image processing model, wherein the image processing model includes: a light feature extraction network and an image feature extraction network.
  • the pre-trained image processing model includes: a light feature extraction network and an image feature extraction network, wherein the light feature extraction network includes convolution parameters and convolutional layers determined from the illumination encoding matrix, and the image feature extraction network includes multiple fully connected layers.
  • it should be noted that the convolution parameters of the convolutional layers in the light feature extraction network can be set according to the pre-obtained illumination encoding matrix, and the channel coefficients between the fully connected layers in the image feature extraction network can likewise be set according to the pre-obtained illumination encoding matrix.
  • as an example, first obtain the pre-trained illumination encoding matrix R, set the convolution parameter w in the light feature extraction network based on the illumination encoding matrix R, and then determine the convolution kernel of the first convolutional layer as 3×3 and the convolution kernel of the second layer as 1×1.
  • the channel coefficients in the image feature extraction network are likewise set based on the illumination encoding matrix R.
  • as shown in Fig. 3, step S12 of obtaining a pre-trained image processing model includes the following steps A1-A4:
  • Step A1 obtaining a training sample set, wherein the training sample set includes a plurality of dark light sample images and bright sample images corresponding to the dark light sample images.
  • the training sample set includes pairs of dark-light sample images and bright sample images, where the dark-light sample images are low-resolution images obtained with short exposures in a dark-light environment, and the bright sample images are high-resolution images obtained with long exposures in the dark-light environment.
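  • As an illustration only, such training pairs could be loaded with a minimal PyTorch dataset along the following lines; the directory layout and the matching filenames are assumptions rather than part of the application.

```python
import os
from PIL import Image
from torch.utils.data import Dataset
from torchvision import transforms

class DarkBrightPairDataset(Dataset):
    """Pairs a short-exposure dark-light sample with its long-exposure bright sample."""

    def __init__(self, dark_dir, bright_dir):
        # Assumes the two directories hold matching filenames (hypothetical layout).
        self.names = sorted(os.listdir(dark_dir))
        self.dark_dir, self.bright_dir = dark_dir, bright_dir
        self.to_tensor = transforms.ToTensor()

    def __len__(self):
        return len(self.names)

    def __getitem__(self, i):
        dark = Image.open(os.path.join(self.dark_dir, self.names[i])).convert("RGB")
        bright = Image.open(os.path.join(self.bright_dir, self.names[i])).convert("RGB")
        return self.to_tensor(dark), self.to_tensor(bright)
```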
  • Step A2 input the dark-light sample image into the initial image processing model, so that the image feature extraction network and the light feature extraction network in the initial image processing model separately extract the image features of the dark-light sample image, and generate a bright image based on the image features.
  • the initial image processing model includes: a light feature extraction network and an image feature extraction network, wherein the light feature extraction network includes convolution parameters and convolutional layers determined based on the illumination encoding matrix, and the image feature extraction network includes multiple fully connected layers.
  • the dark-light sample image is input into the initial image processing model, so that the initial image processing model extracts the image features of the dark-light sample image; the light feature extraction network generates the first image feature from the image features and the convolution parameters, the image feature extraction network generates the second image feature from the image features and the channel coefficients, and the first image feature and the second image feature are fused to generate a bright image.
  • fusing the first image feature and the second image feature to generate a bright image includes: adding the first image feature and the second image feature to obtain the fused image feature, and generating the bright image based on the fused image feature.
  • Step A3 calculating the loss function value between the bright image and the bright sample image.
  • the loss function value between the bright image and the bright sample image is calculated as follows:
  • Loss = ||GT − I_SR||₂², where GT is the image feature of the bright sample image (the ground truth) and I_SR is the image feature of the bright image output by the model.
  • Step A4 if the loss function value is less than the preset threshold, determine the initial image processing model as the image processing model.
  • when Loss is less than the preset threshold, the initial image processing model is determined as the target processing model.
  • the method further includes the following steps B1-B2:
  • Step B1 when the value of the loss function is greater than or equal to the preset threshold, update the model parameters in the initial image processing model to obtain an updated initial image processing model.
  • Step B2 use the dark-light sample images in the training sample set to train the updated initial image processing model until the loss function value between the bright image output by the updated initial image processing model and the bright sample image is less than the preset threshold.
  • according to the derivative of the loss function, the error is back-propagated through gradients and the parameters in the model are corrected to obtain new parameter values.
  • the model then uses the new parameter values to perform image processing again and obtain the loss function value of the new output image; when the loss function no longer decreases, the final image processing model is obtained.
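  • A minimal sketch of this training loop is given below, assuming a PyTorch model and data loader; the optimizer, learning rate, and stopping threshold are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def train(model, loader, epochs=100, lr=1e-4, threshold=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        total = 0.0
        for dark, bright_gt in loader:
            bright_pred = model(dark)                  # bright image generated by the model
            loss = F.mse_loss(bright_pred, bright_gt)  # mean-squared form of Loss = ||GT - I_SR||^2
            opt.zero_grad()
            loss.backward()                            # back-propagate the error as gradients
            opt.step()                                 # corrected parameters become the new values
            total += loss.item()
        if total / len(loader) < threshold:            # stop once the loss is below the preset threshold
            break
    return model
```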
  • before inputting the dark-light sample image into the initial image processing model, the method further includes the following steps C1-C3:
  • Step C1 acquire multiple real bright images with different exposure levels, and crop the real bright images to obtain multiple image blocks.
  • Step C2 perform image encoding based on the image blocks to obtain encoding information, and obtain the illumination encoding matrix from the encoding information.
  • Step C3 determine the convolution parameters of the light feature extraction network in the feature fusion process based on the illumination encoding matrix, and determine the channel coefficients of the image feature extraction network in the feature fusion process.
  • since there are multiple image blocks, each image block is encoded to obtain its corresponding encoding information; the illumination code is extracted from the encoding information, and the illumination encoding matrix is generated from the individual illumination codes. The illumination encoding matrix is then trained, and the convolution parameters in the light feature extraction network and the channel coefficients in the image feature extraction network can be preset according to the trained illumination encoding matrix.
  • images of the same scene obtained at different exposure levels have different illumination characteristics, whereas the illumination characteristics of different parts of the same image are the same.
  • for example, given Bayer raw images of the same scene at different exposures, converted into RGB images, the image blocks within the same image can be regarded as positive samples.
  • the encoder adopts a 6-layer CNN to extract illumination codes from the above image blocks, which are then input into a two-layer perceptron (two-layer MLP) to obtain the illumination encodings x⁺, x, and x⁻.
  • x⁺ and x should be close (the same illumination representation), while x⁺ and x⁻ should be far apart (different illumination representations).
  • InfoNCE is adopted to measure the similarity between the representations, defined as follows, where t is a temperature hyperparameter: L_x = −log [ exp(x·x⁺/t) / ( exp(x·x⁺/t) + Σ_j exp(x·x_j⁻/t) ) ]
  • L_x is the loss for a single illumination encoding.
  • during training, B images with different exposures are selected and two blocks are randomly cropped from each; the resulting 2×B image blocks are encoded, and the overall loss is computed over the 2B block encodings as L_illumination = Σ_x L_x, where L_illumination is the overall loss, the negative encodings x_j⁻ are drawn from the image-block encoding queue, and j is a random index.
  • when the overall loss is less than a preset threshold, the final illumination encoding matrix is obtained.
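  • A sketch of this contrastive pretraining step follows; the channel widths of the 6-layer CNN and the MLP sizes are assumptions, and `info_nce` implements the InfoNCE form above with queued encodings as negatives.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class IlluminationEncoder(nn.Module):
    """6-layer CNN followed by a two-layer MLP, as described above (widths are assumptions)."""

    def __init__(self, dim=128):
        super().__init__()
        layers, c_in = [], 3
        for c_out in (32, 32, 64, 64, 128, 128):   # six convolutional layers
            layers += [nn.Conv2d(c_in, c_out, 3, stride=2, padding=1), nn.ReLU(inplace=True)]
            c_in = c_out
        self.cnn = nn.Sequential(*layers, nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.mlp = nn.Sequential(nn.Linear(128, 128), nn.ReLU(inplace=True), nn.Linear(128, dim))

    def forward(self, blocks):                     # blocks: (N, 3, H, W) image blocks
        return F.normalize(self.mlp(self.cnn(blocks)), dim=-1)

def info_nce(x, x_pos, queue, t=0.07):
    """L_x = -log exp(x·x+/t) / (exp(x·x+/t) + sum_j exp(x·x_j-/t)), averaged over the batch."""
    pos = torch.exp((x * x_pos).sum(dim=-1) / t)   # similarity to the positive encoding
    neg = torch.exp(x @ queue.T / t).sum(dim=-1)   # similarities to queued negative encodings
    return (-torch.log(pos / (pos + neg))).mean()
```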
  • Step S13 inputting the original dark-light image into the image processing model, so that the light feature extraction network in the image processing model extracts the illumination features in the original dark-light image and the image feature extraction network extracts the target image features in the original dark-light image, and generating a target bright image based on the illumination features and the target image features.
  • the specific process of step S13, in which the original dark-light image is input into the image processing model so that the light feature extraction network extracts the illumination features and the image feature extraction network extracts the target image features, and a target bright image is generated based on both, is as follows:
  • the image processing model first extracts the original image feature F from the original dark-light image through a convolutional layer; the original image feature F is then transmitted to the light feature extraction network and the image feature extraction network respectively.
  • the fully connected (FC) layer in the light feature extraction network multiplies the original image feature F by the convolution parameter w to obtain the processed original image feature, which is then input into the convolutional layers of the light feature extraction network (a depthwise convolutional layer and a convolutional layer with a 1×1 kernel) to obtain the illumination feature F1.
  • meanwhile, the fully connected (FC) layer of the image feature extraction network performs feature processing on the original image feature F and multiplies the processed original image feature by the channel coefficient v to obtain the target image feature F2.
  • the illumination feature F1 and the target image feature F2 are fused, and the target bright image is generated based on the fused features.
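  • The forward pass just described might be sketched as follows; the feature width, the way w and v are produced from the illumination code R by FC layers, and the absence of an explicit upsampling stage are assumptions beyond what the text specifies.

```python
import torch
import torch.nn as nn

class RestorationModel(nn.Module):
    """Dual-branch sketch: light feature branch (F1) plus image feature branch (F2)."""

    def __init__(self, channels=64, code_dim=128):
        super().__init__()
        self.head = nn.Conv2d(3, channels, 3, padding=1)   # extracts the original image feature F
        # Light feature branch: FC layer producing the convolution parameter w from the code R,
        # then a depthwise convolution and a 1x1 convolution, per the description above.
        self.fc_w = nn.Linear(code_dim, channels)
        self.depthwise = nn.Conv2d(channels, channels, 3, padding=1, groups=channels)
        self.pointwise = nn.Conv2d(channels, channels, 1)
        # Image feature branch: feature processing plus a channel coefficient v derived from R.
        self.fc_img = nn.Conv2d(channels, channels, 1)     # acts as a per-pixel fully connected layer
        self.fc_v = nn.Linear(code_dim, channels)
        self.tail = nn.Conv2d(channels, 3, 3, padding=1)   # produces the target bright image

    def forward(self, dark, R):                            # R: illumination code vector (code_dim,)
        F0 = self.head(dark)
        w = self.fc_w(R).view(1, -1, 1, 1)                 # convolution parameter w
        F1 = self.pointwise(self.depthwise(F0 * w))        # illumination feature F1
        v = self.fc_v(R).view(1, -1, 1, 1)                 # channel coefficient v
        F2 = self.fc_img(F0) * v                           # target image feature F2
        return self.tail(F1 + F2)                          # fuse by addition, then restore
```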
  • This application uses the light feature extraction network and the image feature extraction network in the image processing model to process the image features of the original dark-light image separately to obtain the illumination features and the target image features, then performs image restoration on the features obtained by fusing the two, and thereby achieves dark-light enhancement of a dark-light image into a bright image with a single model.
  • the illumination encoding matrix of the image is learned in an unsupervised way, and the image processing model uses the illumination encoding matrix to fuse the image features, finally obtaining a well-lit super-resolution image and achieving visual dark-light enhancement.
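  • End to end, the pieces sketched above could be exercised as in the following hypothetical usage; the stand-in tensors take the place of a trained illumination code and a real input image.

```python
import torch

# Hypothetical inference-time usage of the sketches above: the illumination code R
# comes from the unsupervised pretraining stage and is held fixed when restoring images.
model = RestorationModel()
model.eval()
R = torch.zeros(128)                # stand-in for the pre-trained illumination code
dark = torch.rand(1, 3, 128, 128)   # stand-in for the original dark-light image to be restored
with torch.no_grad():
    bright = model(dark, R)         # target bright image, here of shape (1, 3, 128, 128)
```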
  • FIG. 6 is a block diagram of an image restoration device provided in an embodiment of the present application.
  • the image restoration device can be implemented as part or all of an electronic device through software, hardware, or a combination of the two. As shown in Fig. 6, the image restoration device includes:
  • the first acquiring module 31 is used to acquire the original dark-light image to be restored;
  • the second acquisition module 32 is used to acquire a pre-trained image processing model, wherein the image processing model includes: a light feature extraction network and an image feature extraction network; and
  • the processing module 33 is used to input the original dark-light image into the image processing model, so that the light feature extraction network in the image processing model extracts the illumination features in the original dark-light image and the image feature extraction network extracts the target image features in the original dark-light image, and a target bright image is generated based on the illumination features and the target image features.
  • the first acquisition module is configured to acquire a training sample set, wherein the training sample set includes a plurality of dark-light sample images and bright sample images corresponding to the dark-light sample images; input the dark-light sample images into the initial image processing model, so that the light feature extraction network and the image feature extraction network in the initial image processing model separately extract the image features of the dark-light sample images and generate a bright image based on the image features; calculate the loss function value between the bright image and the corresponding bright sample image; and, when the loss function value is less than the preset threshold, determine the initial image processing model as the image processing model.
  • the device further includes: a training module, configured to update the model parameters in the initial image processing model to obtain an updated initial image processing model when the loss function value is greater than or equal to a preset threshold, and to use the dark-light sample images in the training sample set to train the updated initial image processing model until the loss function value between the bright image output by the updated initial image processing model and the bright sample image is less than the preset threshold.
  • the initial image processing model includes: a light feature extraction network and an image feature extraction network, wherein the light feature extraction network includes convolution parameters and convolutional layers determined based on the illumination encoding matrix, and the image feature extraction network includes multiple fully connected layers.
  • the image restoration device further includes: a determination module, configured to obtain multiple real bright images with different exposure levels and crop the real bright images to obtain multiple image blocks; perform image encoding based on the image blocks to obtain encoding information, and obtain the illumination encoding matrix from the encoding information; and determine, based on the illumination encoding matrix, the convolution parameters of the light feature extraction network in the feature fusion process and the channel coefficients of the image feature extraction network in the feature fusion process.
  • the first acquisition module is used to input the dark-light sample image into the initial image processing model, so that the initial image processing model extracts the image features of the dark-light sample image; the light feature extraction network generates the first image features from the image features and the illumination encoding matrix, the image feature extraction network generates the second image features from the image features and the channel coefficients, and the first and second image features are fused to generate a bright image.
  • the processing module 33 is used to input the original dark-light image into the image processing model, so that the image processing model extracts the image features of the original dark-light image; the light feature extraction network in the image processing model generates the illumination features from the image features and the illumination encoding matrix, the image feature extraction network generates the target image features from the image features and the channel coefficients, and the illumination features and the target image features are fused to generate the target bright image.
  • a single model is used to perform dark-light enhancement on a low-resolution dark-light image and obtain a high-resolution bright image, which shortens the image restoration pipeline; at the same time, the illumination encoding matrix of the image is learned in an unsupervised manner and used to determine the parameters in the image processing model, so that the image processing model can finally restore the dark-light image into a super-resolution image, visually achieving dark-light enhancement.
  • the embodiment of the present application also provides an electronic device. As shown in FIG. 7, the electronic device may include a processor 1501, a communication interface 1502, a memory 1503, and a communication bus 1504, where the processor 1501, the communication interface 1502, and the memory 1503 communicate with one another through the communication bus 1504.
  • the memory 1503 is used to store computer-readable instructions;
  • the processor 1501 is configured to implement the steps of the above-mentioned embodiments when executing the computer-readable instructions stored in the memory 1503 .
  • the communication bus mentioned for the above terminal may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like.
  • the communication bus can be divided into an address bus, a data bus, a control bus, and the like. For ease of representation, only one thick line is used in the figure, but it does not mean that there is only one bus or one type of bus.
  • the communication interface is used for communication between the terminal and other devices.
  • the memory may include a random access memory (Random Access Memory, RAM for short), and may also include a non-volatile memory (non-volatile memory), such as at least one disk memory.
  • the memory may also be at least one storage device located far away from the aforementioned processor.
  • the above-mentioned processor can be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it can also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • the embodiment of the present specification also provides one or more non-volatile storage media storing computer-readable instructions; when the computer-readable instructions are executed by one or more processors, the one or more processors perform the steps of the image restoration method in any one of the above embodiments.
  • a computer-readable instruction product including instructions is also provided which, when run on a computer, causes the computer to execute the image restoration method in any one of the above-mentioned embodiments.
  • FIG. 7 is only a block diagram of part of the structure related to the solution of this application and does not constitute a limitation on the equipment to which the solution of this application is applied; the specific equipment may include more or fewer components than shown in the figure, combine certain components, or have a different arrangement of components.
  • Nonvolatile memory can include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
  • Volatile memory can include random access memory (RAM) or external cache memory.
  • RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

An image restoration method and apparatus, an electronic device, and a storage medium. The method includes: acquiring an original dark-light image to be restored (S11); acquiring a pre-trained image processing model; and inputting the original dark-light image into the image processing model, so that a light feature extraction network in the image processing model extracts illumination features from the original dark-light image and an image feature extraction network extracts target image features from the original dark-light image, and a target bright image is generated based on the illumination features and the target image features (S13). The light feature extraction network and the image feature extraction network process the image features of the original dark-light image separately to obtain the illumination features and the target image features, which are then fused for image restoration, so that dark-light enhancement of a dark-light image into a bright image is achieved with a single model and it is no longer necessary to use two separate models, a dark-light enhancement model and a super-resolution model, for image restoration.

Description

Image restoration method and apparatus, electronic device, and storage medium
CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims priority to the Chinese patent application filed with the China National Intellectual Property Administration on January 4, 2022, with application number 202210000435.0 and entitled "Image restoration method and apparatus, electronic device, and storage medium", the entire contents of which are incorporated herein by reference.
TECHNICAL FIELD
The present application relates to the field of image processing, and in particular to an image restoration method and apparatus, an electronic device, and a storage medium.
BACKGROUND
Super resolution (SR) is the process of recovering a high-resolution image from a given low-resolution image. It is a classic application of computer vision and has important application value in fields such as surveillance equipment and satellite remote sensing.
In surveillance, remote sensing, and similar scenarios, the images captured when illumination is lacking, for example at night or in heavy fog, are of very poor quality. Applying super resolution directly to such images produces dark and blurry results that do not improve the visual effect, so the images must additionally be restored with dark-light image enhancement techniques.
The applicant has realized that most existing super-resolution models are applied to images with sufficient illumination and provide no visual enhancement for dark-light images, which limits their use in real dark-light scenarios.
SUMMARY
The present application provides an image restoration method and apparatus, an electronic device, and a storage medium.
According to one aspect of the embodiments of the present application, an image restoration method is provided, including:
acquiring an original dark-light image to be restored;
acquiring a pre-trained image processing model, wherein the image processing model includes: a light feature extraction network and an image feature extraction network; and
inputting the original dark-light image into the image processing model, so that the light feature extraction network in the image processing model extracts illumination features from the original dark-light image and the image feature extraction network extracts target image features from the original dark-light image, and generating a target bright image based on the illumination features and the target image features.
In some embodiments, acquiring the pre-trained image processing model includes:
acquiring a training sample set, wherein the training sample set includes a plurality of dark-light sample images and bright sample images corresponding to the dark-light sample images;
inputting the dark-light sample images into an initial image processing model, so that the light feature extraction network and the image feature extraction network in the initial image processing model separately extract image features of the dark-light sample images, and generating a bright image based on the image features;
calculating a loss function value between the bright image and the bright sample image corresponding to the bright image; and
when the loss function value is less than a preset threshold, determining the initial image processing model as the image processing model.
In some embodiments, the method further includes:
when the loss function value is greater than or equal to the preset threshold, updating the model parameters in the initial image processing model to obtain an updated initial image processing model; and
training the updated initial image processing model with the dark-light sample images in the training sample set until the loss function value between the bright image output by the updated initial image processing model and the bright sample image is less than the preset threshold.
In some embodiments, the initial image processing model includes: a light feature extraction network and an image feature extraction network, wherein the light feature extraction network includes convolution parameters and convolutional layers determined based on an illumination encoding matrix, and the image feature extraction network includes a plurality of fully connected layers.
In some embodiments, before the dark-light sample images are input into the initial image processing model, the method further includes:
acquiring a plurality of real bright images with different exposure levels, and cropping the real bright images to obtain a plurality of image blocks;
performing image encoding based on the image blocks to obtain encoding information, and obtaining the illumination encoding matrix from the encoding information; and
determining, based on the illumination encoding matrix, the convolution parameters of the light feature extraction network in the feature fusion process, and determining the channel coefficients of the image feature extraction network in the feature fusion process.
In some embodiments, inputting the dark-light sample images into the initial image processing model, so that the light feature extraction network and the image feature extraction network in the initial image processing model separately extract the image features of the dark-light sample images, and generating a bright image based on the image features includes:
inputting the dark-light sample image into the initial image processing model, so that the initial image processing model extracts the image features of the dark-light sample image, generating first image features with the light feature extraction network from the image features and the illumination encoding matrix, generating second image features with the image feature extraction network from the image features and the channel coefficients, and fusing the first image features and the second image features to generate a bright image.
According to another aspect of the embodiments of the present application, an image restoration apparatus is further provided, including:
a first acquisition module, configured to acquire an original dark-light image to be restored;
a second acquisition module, configured to acquire a pre-trained image processing model, wherein the image processing model includes: a light feature extraction network and an image feature extraction network; and
a processing module, configured to input the original dark-light image into the image processing model, so that the light feature extraction network in the image processing model extracts illumination features from the original dark-light image and the image feature extraction network extracts target image features from the original dark-light image, and a target bright image is generated based on the illumination features and the target image features.
According to another aspect of the embodiments of the present application, a storage medium is further provided; the storage medium includes a stored program, and the above steps are performed when the program runs.
According to another aspect of the embodiments of the present application, an electronic device is further provided, including a processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface, and the memory communicate with one another through the communication bus; the memory is configured to store computer-readable instructions, and the processor is configured to perform the steps of the above method by running the program stored in the memory.
In some embodiments, the present application further provides one or more non-volatile storage media storing computer-readable instructions which, when executed by one or more processors, cause the one or more processors to perform the steps of any one of the image restoration methods described above.
Details of one or more embodiments of the present application are set forth in the accompanying drawings and the description below. Other features and advantages of the present application will become apparent from the specification, the drawings, and the claims.
BRIEF DESCRIPTION OF THE DRAWINGS
The accompanying drawings, which are incorporated into and constitute a part of this specification, illustrate embodiments consistent with the present application and serve, together with the specification, to explain the principles of the present application.
To describe the technical solutions in the embodiments of the present application or the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, a person of ordinary skill in the art can also obtain other drawings from these drawings without creative effort.
Fig. 1 is a schematic diagram of an application environment of an image restoration method according to one or more embodiments;
Fig. 2 is a flowchart of an image restoration method according to one or more embodiments;
Fig. 3 is a flowchart of an image restoration method according to one or more embodiments;
Fig. 4 is a schematic diagram of training an illumination encoding matrix according to one or more embodiments;
Fig. 5 is a schematic diagram of an image restoration process according to one or more embodiments;
Fig. 6 is a block diagram of an image restoration apparatus according to one or more embodiments; and
Fig. 7 is a schematic structural diagram of an electronic device according to one or more embodiments.
DETAILED DESCRIPTION
To make the objectives, technical solutions, and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some rather than all of the embodiments of the present application; the illustrative embodiments of the present application and their description are used to explain the present application and do not unduly limit it. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the protection scope of the present application.
It should be noted that relational terms such as "first" and "second" herein are used only to distinguish one entity or operation from another similar entity or operation, and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "include", "comprise", and any variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "including a ..." does not exclude the presence of additional identical elements in the process, method, article, or device that includes the element.
The image restoration method provided by the present application can be applied to the application environment shown in Fig. 1, in which the server 100 communicates with the client 101 through the network 102. The server 100 is configured to receive an image processing request sent by the client 101 and obtain the original dark-light image to be restored from the image processing request; to acquire a pre-trained image processing model, wherein the image processing model includes: a light feature extraction network and an image feature extraction network; and to input the original dark-light image into the image processing model, so that the light feature extraction network in the image processing model extracts illumination features from the original dark-light image and the image feature extraction network extracts target image features from the original dark-light image, and a target bright image is generated based on the illumination features and the target image features. The server 100 may be implemented as an independent server or as a server cluster composed of multiple servers.
The client 101 is configured to send image processing requests to the server 100. The client 101 may be, but is not limited to, various personal computers, notebook computers, smartphones, tablet computers, and portable wearable devices.
The network 102 is used to establish the network connection between the client 101 and the server 100; specifically, the network 102 may include various types of wired or wireless networks.
The embodiments of the present application provide an image restoration method and apparatus, an electronic device, and a storage medium. The method provided by the embodiments of the present invention can be applied to any electronic device as needed, for example a server or a terminal; this is not specifically limited here, and for convenience of description the device is hereinafter simply referred to as the electronic device.
According to one aspect of the embodiments of the present application, a method embodiment of an image restoration method is provided. Fig. 2 is a flowchart of an image restoration method provided by an embodiment of the present application, described here as applied to the server in Fig. 1 by way of example. The method includes:
Step S11: acquire the original dark-light image to be restored.
The method provided in the embodiments of the present application is applied to a server, and the server is used to perform the restoration of the original dark-light image. Specifically, the server receives an image processing request sent by the client and obtains a dark-light image from the request; if the resolution of the dark-light image is smaller than a preset resolution, the dark-light image is determined as the original dark-light image to be restored.
Step S12: acquire a pre-trained image processing model, wherein the image processing model includes: a light feature extraction network and an image feature extraction network.
In the embodiments of the present application, the pre-trained image processing model includes: a light feature extraction network and an image feature extraction network, wherein the light feature extraction network includes convolution parameters and convolutional layers determined from the illumination encoding matrix, and the image feature extraction network includes a plurality of fully connected layers. It should be noted that the convolution parameters of the convolutional layers in the light feature extraction network can be set according to the previously obtained illumination encoding matrix, and the channel coefficients between the fully connected layers in the image feature extraction network can likewise be set according to the previously obtained illumination encoding matrix.
As an example, first obtain the pre-trained illumination encoding matrix R, set the convolution parameter w in the light feature extraction network based on the illumination encoding matrix R, and then determine the convolution kernel of the first convolutional layer as 3×3 and the convolution kernel of the second layer as 1×1. The channel coefficients in the image feature extraction network are likewise set based on the illumination encoding matrix R.
In the embodiments of the present application, as shown in Fig. 3, step S12 of acquiring the pre-trained image processing model includes the following steps A1-A4:
Step A1: acquire a training sample set, wherein the training sample set includes a plurality of dark-light sample images and bright sample images corresponding to the dark-light sample images.
In the embodiments of the present application, the training sample set includes pairs of dark-light sample images and bright sample images, where a dark-light sample image is a low-resolution image obtained with a short exposure in a dark-light environment, and a bright sample image is a high-resolution image obtained with a long exposure in the dark-light environment.
Step A2: input the dark-light sample image into the initial image processing model, so that the image feature extraction network and the light feature extraction network in the initial image processing model separately extract the image features of the dark-light sample image, and generate a bright image based on the image features.
In the embodiments of the present application, the initial image processing model includes: a light feature extraction network and an image feature extraction network, wherein the light feature extraction network includes convolution parameters and convolutional layers determined based on the illumination encoding matrix, and the image feature extraction network includes a plurality of fully connected layers.
In the embodiments of the present application, the dark-light sample image is input into the initial image processing model, so that the initial image processing model extracts the image features of the dark-light sample image; the light feature extraction network generates first image features from the image features and the convolution parameters, the image feature extraction network generates second image features from the image features and the channel coefficients, and the first image features and the second image features are fused to generate a bright image.
In the embodiments of the present application, fusing the first image features and the second image features to generate a bright image includes: adding the first image features and the second image features to obtain fused image features, and generating the bright image based on the fused image features.
Step A3: calculate the loss function value between the bright image and the bright sample image.
In the embodiments of the present application, the loss function value between the bright image and the bright sample image is calculated as follows:
Loss = ||GT − I_SR||₂², where GT is the image feature of the bright sample image (the ground truth) and I_SR is the image feature of the bright image output by the model.
Step A4: when the loss function value is less than a preset threshold, determine the initial image processing model as the image processing model.
In the embodiments of the present application, when Loss is less than the preset threshold, the initial image processing model is determined as the target processing model.
In the embodiments of the present application, the method further includes the following steps B1-B2:
Step B1: when the loss function value is greater than or equal to the preset threshold, update the model parameters in the initial image processing model to obtain an updated initial image processing model.
Step B2: train the updated initial image processing model with the dark-light sample images in the training sample set until the loss function value between the bright image output by the updated initial image processing model and the bright sample image is less than the preset threshold.
In the embodiments of the present application, according to the derivative of the loss function, the error is back-propagated through gradients and the parameters in the model are corrected to obtain new parameter values; the model then performs image processing again with the new parameter values to obtain the loss function value for the new output image, and when the loss function no longer decreases, the final image processing model is obtained.
In the embodiments of the present application, before the dark-light sample images are input into the initial image processing model, the method further includes the following steps C1-C3:
Step C1: acquire a plurality of real bright images with different exposure levels, and crop the real bright images to obtain a plurality of image blocks.
Step C2: perform image encoding based on the image blocks to obtain encoding information, and obtain the illumination encoding matrix from the encoding information.
Step C3: determine, based on the illumination encoding matrix, the convolution parameters of the light feature extraction network in the feature fusion process, and determine the channel coefficients of the image feature extraction network in the feature fusion process.
In the embodiments of the present application, since there are multiple image blocks, each image block is encoded to obtain the encoding information corresponding to it; the illumination code is extracted from the encoding information, and the illumination encoding matrix is generated from the individual illumination codes. The illumination encoding matrix is then trained, and the convolution parameters in the light feature extraction network and the channel coefficients in the image feature extraction network can be preset according to the trained illumination encoding matrix.
It should be noted that images of the same scene obtained at different exposure levels have different illumination characteristics, whereas the illumination characteristics of different parts of the same image are the same. For example, as shown in Fig. 4, given Bayer raw images of the same scene at different exposures, converted into RGB images, the image blocks within the same image can be regarded as positive samples. The encoder adopts a 6-layer CNN to extract illumination codes from the above image blocks, which are then input into a two-layer perceptron (two-layer MLP) to obtain the illumination encodings x⁺, x, and x⁻. Among the resulting encodings, x⁺ and x should be close (the same illumination representation), while x⁺ and x⁻ should be far apart (different illumination representations). InfoNCE is adopted here to measure the similarity between representations, defined as follows, where t is a temperature hyperparameter:

    L_x = −log [ exp(x·x⁺/t) / ( exp(x·x⁺/t) + Σ_j exp(x·x_j⁻/t) ) ]

L_x is the loss for a single illumination encoding.
During training, B images (that is, B images with different exposures) are first selected and two blocks are randomly cropped from each image; these 2×B image blocks are then encoded, and the overall loss is computed over the 2B block encodings as follows:

    L_illumination = Σ_x L_x

where L_illumination is the overall loss, the negative encodings x_j⁻ are drawn from the image-block encoding queue, and j is a random index.
Then, when the overall loss is less than a preset threshold, the final illumination encoding matrix is obtained.
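As an illustrative sketch of this batch construction (the crop size below is an assumption), two blocks are randomly cropped from each of the B differently exposed images to form the 2×B block batch whose encodings the overall loss is computed over:

```python
import torch
from torchvision import transforms

def build_block_batch(images, crop=transforms.RandomCrop(48)):
    """Randomly crop two blocks from each of B differently exposed images (2B blocks total)."""
    blocks = []
    for img in images:                     # img: (3, H, W) tensor of one exposure
        blocks += [crop(img), crop(img)]   # the two crops from one image form a positive pair
    return torch.stack(blocks)             # shape (2B, 3, h, w)
```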
Step S13: input the original dark-light image into the image processing model, so that the light feature extraction network in the image processing model extracts the illumination features from the original dark-light image and the image feature extraction network extracts the target image features from the original dark-light image, and generate a target bright image based on the illumination features and the target image features.
In the embodiments of the present application, the specific process of step S13, in which the original dark-light image is input into the image processing model so that the light feature extraction network extracts the illumination features and the image feature extraction network extracts the target image features, and a target bright image is generated based on both, is as follows:
As shown in Fig. 5, the image processing model first extracts the original image feature F from the original dark-light image through a convolutional layer, and the original image feature F is transmitted to the light feature extraction network and the image feature extraction network respectively. The fully connected (FC) layer in the light feature extraction network multiplies the original image feature F by the convolution parameter w to obtain the processed original image feature, which is then input into the convolutional layers of the light feature extraction network (a depthwise convolutional layer and a convolutional layer with a 1×1 kernel) to obtain the illumination feature F1. At the same time, the fully connected (FC) layer of the image feature extraction network performs feature processing on the original image feature F and multiplies the processed original image feature by the channel coefficient v to obtain the target image feature F2. The illumination feature F1 and the target image feature F2 are fused, and the target bright image is generated based on the fused features.
The present application uses the light feature extraction network and the image feature extraction network in the image processing model to process the image features of the original dark-light image separately, obtaining the illumination features and the target image features, and then performs image restoration based on the features obtained by fusing the two. Dark-light enhancement of a dark-light image into a bright image is thereby achieved with a single model; compared with the prior art, it is no longer necessary to use two separate models, a dark-light enhancement model and a super-resolution model, for image restoration, which improves processing efficiency. At the same time, the illumination encoding matrix of the image is learned in an unsupervised manner, and the image processing model uses the illumination encoding matrix to fuse the image features, finally obtaining a well-lit super-resolution image and achieving visually perceptible dark-light enhancement.
Fig. 6 is a block diagram of an image restoration apparatus provided by an embodiment of the present application. The image restoration apparatus can be implemented as part or all of an electronic device through software, hardware, or a combination of the two. As shown in Fig. 6, the image restoration apparatus includes:
a first acquisition module 31, configured to acquire an original dark-light image to be restored;
a second acquisition module 32, configured to acquire a pre-trained image processing model, wherein the image processing model includes: a light feature extraction network and an image feature extraction network; and
a processing module 33, configured to input the original dark-light image into the image processing model, so that the light feature extraction network in the image processing model extracts illumination features from the original dark-light image and the image feature extraction network extracts target image features from the original dark-light image, and a target bright image is generated based on the illumination features and the target image features.
In the embodiments of the present application, the first acquisition module is configured to acquire a training sample set, wherein the training sample set includes a plurality of dark-light sample images and bright sample images corresponding to the dark-light sample images; input the dark-light sample images into the initial image processing model, so that the light feature extraction network and the image feature extraction network in the initial image processing model separately extract the image features of the dark-light sample images and generate a bright image based on the image features; calculate the loss function value between the bright image and the corresponding bright sample image; and, when the loss function value is less than the preset threshold, determine the initial image processing model as the image processing model.
In the embodiments of the present application, the apparatus further includes: a training module, configured to update the model parameters in the initial image processing model to obtain an updated initial image processing model when the loss function value is greater than or equal to the preset threshold, and to train the updated initial image processing model with the dark-light sample images in the training sample set until the loss function value between the bright image output by the updated initial image processing model and the bright sample image is less than the preset threshold.
In the embodiments of the present application, the initial image processing model includes: a light feature extraction network and an image feature extraction network, wherein the light feature extraction network includes convolution parameters and convolutional layers determined based on the illumination encoding matrix, and the image feature extraction network includes a plurality of fully connected layers.
In the embodiments of the present application, the image restoration apparatus further includes: a determination module, configured to acquire a plurality of real bright images with different exposure levels and crop the real bright images to obtain a plurality of image blocks; perform image encoding based on the image blocks to obtain encoding information, and obtain the illumination encoding matrix from the encoding information; and determine, based on the illumination encoding matrix, the convolution parameters of the light feature extraction network in the feature fusion process and the channel coefficients of the image feature extraction network in the feature fusion process.
In the embodiments of the present application, the first acquisition module is configured to input the dark-light sample image into the initial image processing model, so that the initial image processing model extracts the image features of the dark-light sample image; the light feature extraction network generates first image features from the image features and the illumination encoding matrix, the image feature extraction network generates second image features from the image features and the channel coefficients, and the first image features and the second image features are fused to generate a bright image.
In the embodiments of the present application, the processing module 33 is configured to input the original dark-light image into the image processing model, so that the image processing model extracts the image features of the original dark-light image; the light feature extraction network in the image processing model generates illumination features from the image features and the illumination encoding matrix, the image feature extraction network generates target image features from the image features and the channel coefficients, and the illumination features and the target image features are fused to generate the target bright image.
The embodiments of the present application use a single model to perform dark-light enhancement on a low-resolution dark-light image and finally obtain a high-resolution bright image. Compared with the prior art, it is no longer necessary to use two separate models, a dark-light enhancement model and a super-resolution model, for image restoration, which shortens the processing pipeline. At the same time, the illumination encoding matrix of the image is learned in an unsupervised manner and used to determine the parameters in the image processing model, so that the image processing model can finally restore the dark-light image into a super-resolution image, visually achieving dark-light enhancement.
An embodiment of the present application further provides an electronic device. As shown in Fig. 7, the electronic device may include a processor 1501, a communication interface 1502, a memory 1503, and a communication bus 1504, wherein the processor 1501, the communication interface 1502, and the memory 1503 communicate with one another through the communication bus 1504.
The memory 1503 is configured to store computer-readable instructions.
The processor 1501 is configured to implement the steps of the above embodiments when executing the computer-readable instructions stored in the memory 1503.
The communication bus mentioned for the above terminal may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of representation, only one thick line is used in the figure, but this does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the above terminal and other devices.
The memory may include a random access memory (RAM) or a non-volatile memory, for example at least one disk memory. Optionally, the memory may also be at least one storage device located away from the aforementioned processor.
The above processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
The embodiments of this specification further provide one or more non-volatile storage media storing computer-readable instructions; when the computer-readable instructions are executed by one or more processors, the one or more processors perform the steps of the image restoration method in any one of the above embodiments.
In yet another embodiment provided by the present application, a computer-readable instruction product containing instructions is further provided which, when run on a computer, causes the computer to perform the image restoration method of any one of the above embodiments.
A person skilled in the art can understand that the structure shown in Fig. 7 is only a block diagram of part of the structure related to the solution of the present application and does not limit the devices to which the solution of the present application is applied; a specific device may include more or fewer components than shown in the figure, combine certain components, or have a different arrangement of components.
A person of ordinary skill in the art can understand that all or part of the processes in the methods of the above embodiments can be completed by computer-readable instructions instructing the relevant hardware; the computer-readable instructions can be stored in a non-volatile computer-readable storage medium, and when executed, they may include the processes of the embodiments of the above methods. Any reference to memory, storage, a database, or other media used in the embodiments provided by the present application may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments can be combined arbitrarily. For conciseness of description, not all possible combinations of the technical features in the above embodiments are described; however, as long as the combinations of these technical features are not contradictory, they should be regarded as falling within the scope of this specification.
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the invention patent. It should be noted that a person of ordinary skill in the art can make several variations and improvements without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (9)

  1. An image restoration method, characterized by comprising:
    acquiring an original dark-light image to be restored;
    acquiring a pre-trained image processing model, wherein the image processing model comprises: a light feature extraction network and an image feature extraction network; and
    inputting the original dark-light image into the image processing model, so that the light feature extraction network in the image processing model extracts illumination features from the original dark-light image and the image feature extraction network extracts target image features from the original dark-light image, and generating a target bright image based on the illumination features and the target image features.
  2. The method according to claim 1, characterized in that acquiring the pre-trained image processing model comprises:
    acquiring a training sample set, wherein the training sample set comprises a plurality of dark-light sample images and bright sample images corresponding to the dark-light sample images;
    inputting the dark-light sample images into an initial image processing model, so that the light feature extraction network and the image feature extraction network in the initial image processing model separately extract image features of the dark-light sample images, and generating a bright image based on the image features;
    calculating a loss function value between the bright image and the bright sample image corresponding to the bright image; and
    when the loss function value is less than a preset threshold, determining the initial image processing model as the image processing model.
  3. The method according to claim 2, characterized in that the method further comprises:
    when the loss function value is greater than or equal to the preset threshold, updating the model parameters in the initial image processing model to obtain an updated initial image processing model; and
    training the updated initial image processing model with the dark-light sample images in the training sample set until the loss function value between the bright image output by the updated initial image processing model and the bright sample image is less than the preset threshold.
  4. The method according to claim 2, characterized in that the initial image processing model comprises: a light feature extraction network and an image feature extraction network, wherein the light feature extraction network comprises convolution parameters and convolutional layers determined based on an illumination encoding matrix, and the image feature extraction network comprises a plurality of fully connected layers.
  5. The method according to claim 4, characterized in that before the dark-light sample images are input into the initial image processing model, the method further comprises:
    acquiring a plurality of real bright images with different exposure levels, and cropping the real bright images to obtain a plurality of image blocks;
    performing image encoding based on the image blocks to obtain encoding information, and obtaining an illumination encoding matrix from the encoding information; and
    determining, based on the illumination encoding matrix, the convolution parameters of the light feature extraction network in the feature fusion process, and determining the channel coefficients of the image feature extraction network in the feature fusion process.
  6. The method according to claim 5, characterized in that inputting the dark-light sample images into the initial image processing model, so that the light feature extraction network and the image feature extraction network in the initial image processing model separately extract the image features of the dark-light sample images, and generating a bright image based on the image features comprises:
    inputting the dark-light sample image into the initial image processing model, so that the initial image processing model extracts the image features of the dark-light sample image, generating first image features with the light feature extraction network from the image features and the convolution parameters, generating second image features with the image feature extraction network from the image features and the channel coefficients, and fusing the first image features and the second image features to generate a bright image.
  7. An image restoration apparatus, characterized by comprising:
    a first acquisition module, configured to acquire an original dark-light image to be restored;
    a second acquisition module, configured to acquire a pre-trained image processing model, wherein the image processing model comprises: a light feature extraction network and an image feature extraction network; and
    a processing module, configured to input the original dark-light image into the image processing model, so that the light feature extraction network in the image processing model extracts illumination features from the original dark-light image and the image feature extraction network extracts target image features from the original dark-light image, and to generate a target bright image based on the illumination features and the target image features.
  8. One or more non-volatile computer-readable storage media storing computer-readable instructions which, when executed by one or more processors, cause the one or more processors to perform the steps of the method of any one of claims 1-6.
  9. An electronic device, characterized by comprising a processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface, and the memory communicate with one another through the communication bus; and wherein:
    the memory is configured to store computer-readable instructions; and
    the processor is configured to perform the method steps of any one of claims 1-6 by running the program stored in the memory.
PCT/CN2022/095379 2022-01-04 2022-05-26 Image restoration method and apparatus, electronic device, and storage medium WO2023130650A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210000435.0 2022-01-04
CN202210000435.0A CN114022394B (zh) 2022-01-04 2022-01-04 Image restoration method and apparatus, electronic device, and storage medium

Publications (1)

Publication Number Publication Date
WO2023130650A1 (zh)

Family

ID=80069488

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/095379 WO2023130650A1 (zh) 2022-01-04 2022-05-26 Image restoration method and apparatus, electronic device, and storage medium

Country Status (2)

Country Link
CN (1) CN114022394B (zh)
WO (1) WO2023130650A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117809202A (zh) * 2024-02-28 2024-04-02 中国地质大学(武汉) Dual-modality object detection method and system
CN117809202B (zh) * 2024-02-28 2024-05-31 中国地质大学(武汉) Dual-modality object detection method and system

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114022394B (zh) * 2022-01-04 2022-04-19 苏州浪潮智能科技有限公司 Image restoration method and apparatus, electronic device, and storage medium
CN117237248A (zh) * 2023-09-27 2023-12-15 中山大学 Exposure adjustment curve estimation method and apparatus, electronic device, and storage medium
CN117745595A (zh) * 2024-02-18 2024-03-22 珠海金山办公软件有限公司 Image processing method and apparatus, electronic device, and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108305236A (zh) * 2018-01-16 2018-07-20 腾讯科技(深圳)有限公司 Image enhancement processing method and apparatus
CN111242868A (zh) * 2020-01-16 2020-06-05 重庆邮电大学 Convolutional-neural-network-based image enhancement method for dark vision environments
US20210133932A1 (en) * 2019-11-01 2021-05-06 Lg Electronics Inc. Color restoration method and apparatus
CN113744169A (zh) * 2021-09-07 2021-12-03 讯飞智元信息科技有限公司 Image enhancement method and apparatus, electronic device, and storage medium
CN114022394A (zh) * 2022-01-04 2022-02-08 苏州浪潮智能科技有限公司 Image restoration method and apparatus, electronic device, and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109191388A (zh) * 2018-07-27 2019-01-11 上海爱优威软件开发有限公司 Dark image processing method and system

Also Published As

Publication number Publication date
CN114022394A (zh) 2022-02-08
CN114022394B (zh) 2022-04-19

Similar Documents

Publication Publication Date Title
WO2023130650A1 (zh) Image restoration method and apparatus, electronic device, and storage medium
WO2019233263A1 (zh) Video processing method, electronic device, and computer-readable storage medium
WO2019223594A1 (zh) Neural network model processing method and apparatus, image processing method, and mobile terminal
US20220139064A1 (en) Image recognition method and system based on deep learning
WO2022142009A1 (zh) Blurred image correction method and apparatus, computer device, and storage medium
EP3633991A1 (en) Method and system for optimized encoding
WO2019233271A1 (zh) Image processing method, computer-readable storage medium, and electronic device
CN111881737B (zh) Age prediction model training method and apparatus, and age prediction method and apparatus
WO2021164269A1 (zh) Attention-mechanism-based disparity map acquisition method and apparatus
WO2020103674A1 (zh) Method and apparatus for generating natural language description information
WO2023060434A1 (zh) Text-based image editing method and electronic device
US11836898B2 (en) Method and apparatus for generating image, and electronic device
WO2023077809A1 (zh) Neural network training method, electronic device, and computer storage medium
CN114511576A (zh) Image segmentation method and system using a scale-adaptive feature-enhanced deep neural network
CN114549913A (zh) Semantic segmentation method and apparatus, computer device, and storage medium
WO2020168807A1 (zh) Image brightness adjustment method and apparatus, computer device, and storage medium
WO2021037174A1 (zh) Neural network model training method and apparatus
WO2024041108A1 (zh) Image correction model training and image correction method, apparatus, and computer device
WO2022021304A1 (zh) Bullet-comment-based video highlight segment recognition method, terminal, and storage medium
CN112702607A (zh) Optical-flow-decision-based intelligent video compression method and apparatus
CN114239760B (zh) Multimodal model training and image recognition method and apparatus, and electronic device
CN115937121A (zh) No-reference image quality assessment method and system based on multi-dimensional feature fusion
WO2021164329A1 (zh) Image processing method and apparatus, communication device, and readable storage medium
CN113177566A (zh) Feature extraction model training method and apparatus, and computer device
CN112866692B (zh) HEVC-based coding unit partitioning method and apparatus, and electronic device

Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application

Ref document number: 22918108

Country of ref document: EP

Kind code of ref document: A1