TW202139140A - Image reconstruction method and apparatus, electronic device and storage medium - Google Patents


Info

Publication number
TW202139140A
Authority
TW
Taiwan
Prior art keywords
sample
network
image
feature
event
Prior art date
Application number
TW109125062A
Other languages
Chinese (zh)
Other versions
TWI765304B (en)
Inventor
張松
姜哲
張宇
邹冬青
任思捷
Original Assignee
大陸商北京市商湯科技開發有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 大陸商北京市商湯科技開發有限公司 filed Critical 大陸商北京市商湯科技開發有限公司
Publication of TW202139140A publication Critical patent/TW202139140A/en
Application granted granted Critical
Publication of TWI765304B publication Critical patent/TWI765304B/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00: 2D [Two Dimensional] image generation
    • G06T11/001: Texturing; Colouring; Generation of texture or colour
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/25: Fusion techniques
    • G06F18/253: Fusion techniques of extracted features
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods
    • G06N3/084: Backpropagation, e.g. using gradient descent
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00: Image enhancement or restoration
    • G06T5/77: Retouching; Inpainting; Scratch removal


Abstract

The present disclosure relates to an image reconstruction method and apparatus, an electronic device, and a storage medium. The method comprises: acquiring event information of a target scene, the event information being used to represent changes in brightness of the target scene within a first brightness range; performing feature extraction on the event information to obtain a first event feature of the target scene; and performing image reconstruction on the first event feature to obtain a reconstructed image of the target scene, wherein the brightness of the reconstructed image is within a second brightness range, and the second brightness range is higher than the first brightness range. The embodiments of the present disclosure may improve the effect of image reconstruction.

Description

Image reconstruction method, image reconstruction apparatus, electronic device, and computer-readable storage medium

The present disclosure relates to the field of computer technology, and in particular to an image reconstruction method, an image reconstruction apparatus, an electronic device, and a computer-readable storage medium. This application claims priority to Chinese patent application No. 202010243153.4, entitled "Image reconstruction method and apparatus, electronic device and storage medium", filed with the Chinese Patent Office on March 31, 2020, the entire contents of which are incorporated herein by reference.

Conventional image acquisition devices can capture images that match human viewing habits, such as RGB images or intensity images. However, limited by their relatively low dynamic range, such devices are prone to underexposure under dim lighting conditions and cannot produce high-quality, sharp images.

The present disclosure proposes a technical solution for image reconstruction.

According to one aspect of the present disclosure, an image reconstruction method is provided, including: acquiring event information of a target scene, the event information representing brightness changes of the target scene within a first brightness range; performing feature extraction on the event information to obtain a first event feature of the target scene; and performing image reconstruction on the first event feature to obtain a reconstructed image of the target scene, where the brightness of the reconstructed image is within a second brightness range, and the second brightness range is higher than the first brightness range.

In a possible implementation, performing image reconstruction on the first event feature to obtain a reconstructed image of the target scene includes: performing detail enhancement on the first event feature according to first noise information and the first event feature, to obtain a second event feature; fusing the first event feature with the second event feature to obtain a fused feature; and performing image reconstruction on the fused feature to obtain the reconstructed image of the target scene.
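The enhance-fuse-reconstruct flow described above can be sketched as follows. The three functions are hypothetical stand-ins (simple numpy operations chosen for illustration), not the disclosed network layers; only the data flow between them follows the description:

```python
import numpy as np

def enhance(first_feature: np.ndarray, noise: np.ndarray) -> np.ndarray:
    # Hypothetical detail enhancement: perturb the first event feature
    # with the first noise information to obtain a second event feature.
    return first_feature + 0.1 * noise

def fuse(first_feature: np.ndarray, second_feature: np.ndarray) -> np.ndarray:
    # Hypothetical fusion: channel-wise concatenation of the two features.
    return np.concatenate([first_feature, second_feature], axis=0)

def reconstruct(fused: np.ndarray) -> np.ndarray:
    # Hypothetical reconstruction head: collapse channels to one image.
    return fused.mean(axis=0)

rng = np.random.default_rng(0)
first = rng.standard_normal((8, 16, 16))   # first event feature (C, H, W)
noise = rng.standard_normal((8, 16, 16))   # first noise information
second = enhance(first, noise)             # second event feature
fused = fuse(first, second)                # fused feature (2C, H, W)
image = reconstruct(fused)                 # reconstructed image (H, W)
```

In this sketch the fusion doubles the channel count; the disclosure does not specify the fusion operator, so addition or a learned layer would fit the same slot.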

In a possible implementation, the image reconstruction method is implemented by an image processing network that includes a first feature extraction network and an image reconstruction network; the first feature extraction network performs feature extraction on the event information, and the image reconstruction network performs image reconstruction on the first event feature. The method further includes: training the image processing network on a preset training set, the training set including first sample event information of multiple first sample scenes, and second sample event information and sample scene images of multiple second sample scenes; where the first sample event information is acquired within a third brightness range, the second sample event information is acquired within a fourth brightness range, the sample scene images are acquired within the fourth brightness range, and the fourth brightness range is higher than the third brightness range.

In a possible implementation, the image processing network further includes a discrimination network, and training the image processing network on the preset training set includes: inputting the first sample event information of the first sample scene and the second sample event information of the second sample scene into the first feature extraction network respectively, to obtain a first sample event feature and a second sample event feature; inputting the first sample event feature and the second sample event feature into the discrimination network respectively, to obtain a first discrimination result and a second discrimination result; and adversarially training the image processing network according to the first discrimination result and the second discrimination result.
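As a rough illustration of this adversarial step, the sketch below scores the two sample event features with a toy discriminator and forms GAN-style logistic losses. The pooled linear discriminator and the non-saturating loss are assumptions for illustration; the disclosure does not specify the discrimination network's architecture or loss:

```python
import numpy as np

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + np.exp(-x))

def discriminate(feature: np.ndarray, w: np.ndarray) -> float:
    # Hypothetical discriminator: linear score on the spatially pooled feature.
    return float(feature.mean(axis=(1, 2)) @ w)

rng = np.random.default_rng(0)
w = rng.standard_normal(8)
feat_dark = rng.standard_normal((8, 16, 16))    # first sample event feature (dark scenes)
feat_bright = rng.standard_normal((8, 16, 16))  # second sample event feature (bright scenes)

d1 = discriminate(feat_dark, w)    # first discrimination result
d2 = discriminate(feat_bright, w)  # second discrimination result

# The discriminator is trained to tell the two domains apart...
disc_loss = -np.log(sigmoid(d2)) - np.log(1.0 - sigmoid(d1))
# ...while the feature extractor is trained so that dark-scene features
# become indistinguishable from bright-scene features.
gen_loss = -np.log(sigmoid(d1))
```

The point of the construction is that, at convergence, features extracted from dark-scene events share the distribution of features from bright-scene events, so the reconstruction network trained on the latter also works on the former.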

In a possible implementation, training the image processing network on the preset training set further includes: inputting the second sample event feature into the image reconstruction network to obtain a first reconstructed image of the second sample scene; and training the image processing network according to the first reconstructed image of the second sample scene and the sample scene image.

In a possible implementation, the image processing network further includes a detail enhancement network, and training the image processing network on the preset training set further includes: inputting the second sample event feature and third noise information into the detail enhancement network to obtain a fourth sample event feature; fusing the second sample event feature with the fourth sample event feature to obtain a second sample fused feature; inputting the second sample fused feature into the image reconstruction network to obtain a third reconstructed image of the second sample scene; and training the image processing network according to the first reconstructed image of the second sample scene, the third reconstructed image, and the sample scene image.

In a possible implementation, the image processing network further includes a second feature extraction network, and training the image processing network on the preset training set further includes: inputting the second sample event information of the second sample scene and second noise information into the second feature extraction network to obtain a third sample event feature; fusing the second sample event feature with the third sample event feature to obtain a first sample fused feature; inputting the first sample fused feature into the discrimination network to obtain a third discrimination result; and adversarially training the image processing network according to the first discrimination result and the third discrimination result.

In a possible implementation, training the image processing network on the preset training set further includes: inputting the first sample fused feature into the image reconstruction network to obtain a second reconstructed image of the second sample scene; and training the image processing network according to the second reconstructed image of the second sample scene and the sample scene image.

In a possible implementation, the image processing network further includes a detail enhancement network, and training the image processing network on the preset training set further includes: inputting the first sample fused feature and fourth noise information into the detail enhancement network to obtain a fifth sample event feature; fusing the first sample fused feature with the fifth sample event feature to obtain a third sample fused feature; inputting the third sample fused feature into the image reconstruction network to obtain a fourth reconstructed image of the second sample scene; and training the image processing network according to the second reconstructed image of the second sample scene, the fourth reconstructed image, and the sample scene image.

In a possible implementation, training the image processing network according to the second reconstructed image of the second sample scene, the fourth reconstructed image, and the sample scene image includes: determining an overall loss of the image processing network according to the second reconstructed image of the second sample scene, the fourth reconstructed image, and the sample scene image; determining gradient information of the image processing network according to the overall loss; and adjusting network parameters of the first feature extraction network, the second feature extraction network, the detail enhancement network, and the image reconstruction network according to the gradient information, where the gradient information of the detail enhancement network is not propagated to the second feature extraction network.
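The effect of not propagating the detail enhancement network's gradient back to the second feature extraction network can be shown with a scalar chain-rule toy model (in deep learning frameworks this is typically a detach/stop-gradient on the enhancement branch's input). All weights and the squared loss below are assumptions for illustration only:

```python
# Toy scalar pipeline: second feature extraction (w2) -> detail
# enhancement (wd) -> fusion (sum) -> reconstruction (wr) -> squared loss.
x, t = 1.0, 0.0          # input and target
w2, wd, wr = 0.5, 0.3, 2.0

s = w2 * x               # feature from the second feature extraction network
d = wd * s               # detail-enhanced feature
u = s + d                # fused feature
y = wr * u               # reconstructed value
dL_dy = 2.0 * (y - t)    # derivative of the loss (y - t)**2 w.r.t. y

# Full backpropagation: the enhancement branch contributes wd to du/ds,
# so du/ds = 1 + wd when the gradient reaches the extraction network.
grad_w2_full = dL_dy * wr * (1.0 + wd) * x
# With the stop: the enhancement branch's gradient is cut, so only the
# direct fusion path (du/ds = 1) reaches the second extraction network.
grad_w2_stopped = dL_dy * wr * 1.0 * x
```

With the numbers above, the full gradient is 6.76 while the stopped gradient is 5.2: the extraction network is updated only through the direct path, so the enhancement branch cannot steer what features it learns.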

According to an aspect of the present disclosure, an image reconstruction apparatus is provided, including:

an event acquisition module, configured to acquire event information of a target scene, the event information representing brightness changes of the target scene within a first brightness range; a feature extraction module, configured to perform feature extraction on the event information to obtain a first event feature of the target scene; and an image reconstruction module, configured to perform image reconstruction on the first event feature to obtain a reconstructed image of the target scene, where the brightness of the reconstructed image is within a second brightness range, and the second brightness range is higher than the first brightness range.

In a possible implementation, the image reconstruction module includes: a detail enhancement sub-module, configured to perform detail enhancement on the first event feature according to first noise information and the first event feature, to obtain a second event feature; a fusion sub-module, configured to fuse the first event feature with the second event feature to obtain a fused feature; and a reconstruction sub-module, configured to perform image reconstruction on the fused feature to obtain the reconstructed image of the target scene.

In a possible implementation, the image reconstruction apparatus is implemented by an image processing network that includes a first feature extraction network and an image reconstruction network; the first feature extraction network performs feature extraction on the event information, and the image reconstruction network performs image reconstruction on the first event feature. The image reconstruction apparatus further includes:

a training module, configured to train the image processing network on a preset training set, the training set including first sample event information of multiple first sample scenes, and second sample event information and sample scene images of multiple second sample scenes; where the first sample event information is acquired within a third brightness range, the second sample event information is acquired within a fourth brightness range, the sample scene images are acquired within the fourth brightness range, and the fourth brightness range is higher than the third brightness range.

In a possible implementation, the image processing network further includes a discrimination network, and the training module includes: a first extraction sub-module, configured to input the first sample event information of the first sample scene and the second sample event information of the second sample scene into the first feature extraction network respectively, to obtain a first sample event feature and a second sample event feature; a first discrimination sub-module, configured to input the first sample event feature and the second sample event feature into the discrimination network respectively, to obtain a first discrimination result and a second discrimination result; and a first adversarial training sub-module, configured to adversarially train the image processing network according to the first discrimination result and the second discrimination result.

In a possible implementation, the training module further includes: a first reconstruction sub-module, configured to input the second sample event feature into the image reconstruction network to obtain a first reconstructed image of the second sample scene; and a first training sub-module, configured to train the image processing network according to the first reconstructed image of the second sample scene and the sample scene image.

In a possible implementation, the image processing network further includes a detail enhancement network, and the training module further includes: a first enhancement sub-module, configured to input the second sample event feature and third noise information into the detail enhancement network to obtain a fourth sample event feature; a first fusion sub-module, configured to fuse the second sample event feature with the fourth sample event feature to obtain a second sample fused feature; a second reconstruction sub-module, configured to input the second sample fused feature into the image reconstruction network to obtain a third reconstructed image of the second sample scene; and a second training sub-module, configured to train the image processing network according to the first reconstructed image of the second sample scene, the third reconstructed image, and the sample scene image.

In a possible implementation, the image processing network further includes a second feature extraction network, and the training module further includes: a second extraction sub-module, configured to input the second sample event information of the second sample scene and second noise information into the second feature extraction network to obtain a third sample event feature; a second fusion sub-module, configured to fuse the second sample event feature with the third sample event feature to obtain a first sample fused feature; a second discrimination sub-module, configured to input the first sample fused feature into the discrimination network to obtain a third discrimination result; and a second adversarial training sub-module, configured to adversarially train the image processing network according to the first discrimination result and the third discrimination result.

In a possible implementation, the training module further includes: a third reconstruction sub-module, configured to input the first sample fused feature into the image reconstruction network to obtain a second reconstructed image of the second sample scene; and a third training sub-module, configured to train the image processing network according to the second reconstructed image of the second sample scene and the sample scene image.

In a possible implementation, the image processing network further includes a detail enhancement network, and the training module further includes: a second enhancement sub-module, configured to input the first sample fused feature and fourth noise information into the detail enhancement network to obtain a fifth sample event feature; a third fusion sub-module, configured to fuse the first sample fused feature with the fifth sample event feature to obtain a third sample fused feature; a fourth reconstruction sub-module, configured to input the third sample fused feature into the image reconstruction network to obtain a fourth reconstructed image of the second sample scene; and a fourth training sub-module, configured to train the image processing network according to the second reconstructed image of the second sample scene, the fourth reconstructed image, and the sample scene image.

In a possible implementation, the fourth training sub-module is configured to: determine an overall loss of the image processing network according to the second reconstructed image of the second sample scene, the fourth reconstructed image, and the sample scene image; determine gradient information of the image processing network according to the overall loss; and adjust network parameters of the first feature extraction network, the second feature extraction network, the detail enhancement network, and the image reconstruction network according to the gradient information, where the gradient information of the detail enhancement network is not propagated to the second feature extraction network.

According to an aspect of the present disclosure, an electronic device is provided, including: a processor; and a memory for storing processor-executable instructions; where the processor is configured to invoke the instructions stored in the memory to execute the above image reconstruction method.

According to an aspect of the present disclosure, a computer-readable storage medium is provided, on which computer program instructions are stored; when executed by a processor, the computer program instructions implement the above image reconstruction method.

According to an aspect of the present disclosure, a computer program is provided, including computer-readable code; when the computer-readable code runs in an electronic device, a processor in the electronic device executes the above image reconstruction method.

In the embodiments of the present disclosure, event information of a target scene within a lower first brightness range can be acquired; feature extraction is performed on the event information to obtain an event feature; and image reconstruction is performed on the event feature to obtain a reconstructed image of the target scene within a higher second brightness range. A high-quality image under normal lighting conditions can thus be reconstructed from events captured under dim lighting conditions, improving the effect of image reconstruction.

It should be understood that the above general description and the following detailed description are exemplary and explanatory only, and do not limit the present disclosure. Other features and aspects of the present disclosure will become clear from the following detailed description of exemplary embodiments with reference to the accompanying drawings.

Various exemplary embodiments, features, and aspects of the present disclosure are described in detail below with reference to the drawings. The same reference numerals in the drawings denote elements with the same or similar functions. Although various aspects of the embodiments are shown in the drawings, the drawings are not necessarily drawn to scale unless otherwise noted.

The word "exemplary" here means "serving as an example, embodiment, or illustration". Any embodiment described herein as "exemplary" is not necessarily to be construed as superior to or better than other embodiments.

The term "and/or" herein merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, "A and/or B" may mean: A exists alone, both A and B exist, or B exists alone. In addition, the term "at least one" herein means any one of multiple items, or any combination of at least two of them; for example, "including at least one of A, B, and C" may mean including any one or more elements selected from the set consisting of A, B, and C.

In addition, numerous specific details are given in the following detailed description to better illustrate the present disclosure. Those skilled in the art should understand that the present disclosure can also be implemented without certain specific details. In some instances, methods, means, elements, and circuits well known to those skilled in the art are not described in detail, so as to highlight the gist of the present disclosure.

In fields such as image capture, image processing, face recognition, and security, images usually need to be collected by an image acquisition device (for example, an intensity camera). Images captured by such devices under dim lighting conditions (for example, at night, in low light, or in other dark environments) are prone to underexposure and poor image quality. In this case, the low-quality image can be reconstructed to obtain a high-quality image under normal lighting conditions.

Fig. 1 shows a flowchart of an image reconstruction method according to an embodiment of the present disclosure. As shown in Fig. 1, the image reconstruction method includes:

In step S11, event information of a target scene is acquired, the event information representing brightness changes of the target scene within a first brightness range;

In step S12, feature extraction is performed on the event information to obtain a first event feature of the target scene;

In step S13, image reconstruction is performed on the first event feature to obtain a reconstructed image of the target scene, where the brightness of the reconstructed image is within a second brightness range, and the second brightness range is higher than the first brightness range.
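Steps S11 to S13 can be sketched end to end as follows. The two functions are hypothetical placeholders (a fixed random projection and a channel-mean head), standing in for the disclosed feature extraction and reconstruction networks only to show the shapes flowing through the pipeline:

```python
import numpy as np

def extract_features(event_info: np.ndarray) -> np.ndarray:
    # Hypothetical stand-in for the first feature extraction network:
    # a fixed random projection over the event channels.
    rng = np.random.default_rng(0)
    w = rng.standard_normal((8, event_info.shape[0]))
    return np.tensordot(w, event_info, axes=([1], [0]))

def reconstruct_image(event_feature: np.ndarray) -> np.ndarray:
    # Hypothetical stand-in for the image reconstruction network:
    # collapse feature channels to a single-channel image scaled to [0, 1].
    img = event_feature.mean(axis=0)
    return (img - img.min()) / (img.max() - img.min() + 1e-8)

event_info = np.random.default_rng(1).standard_normal((4, 32, 32))  # S11: 4-channel event information
feature = extract_features(event_info)                              # S12: first event feature
image = reconstruct_image(feature)                                  # S13: reconstructed image
```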

In a possible implementation, the image reconstruction method may be executed by an electronic device such as a terminal device or a server. The terminal device may be user equipment (UE), a mobile device, a user terminal, a terminal, a cellular phone, a cordless phone, a personal digital assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, or the like. The method may be implemented by a processor invoking computer-readable instructions stored in a memory. Alternatively, the method may be executed by a server.

In a possible implementation, the target scene may be a geographic area including scenes such as buildings, landscapes, people, and vehicles. The target scene may be under dim lighting conditions (for example, at night or in other dark environments), in which case images of the target scene captured by an image acquisition device (for example, an intensity camera) are underexposed and of poor quality. In this case, in step S11, event information of the target scene may be acquired by an event acquisition device (for example, an event camera) within a first brightness range corresponding to the dim lighting conditions; the event information represents brightness changes of the target scene within the first brightness range. The present disclosure does not limit the specific value of the first brightness range.

In one possible implementation, an event camera can asynchronously record brightness changes in the scene and output event data in the form of a stream (an event stream), whose data unit is:

$$e_k = (x_k, y_k, t_k, p_k) \qquad (1)$$

In formula (1), $x_k$ and $y_k$ denote the spatial coordinates of the event datum $e_k$ at the k-th position in the scene, $t_k$ denotes the time at which the event datum $e_k$ is generated, and $p_k$ denotes the polarity of the event datum $e_k$: a positive polarity indicates a brightness increase, and a negative polarity indicates a brightness decrease.

A conventional CNN can only process regular, grid-like data such as images and cannot be applied to an event stream directly. Therefore, when the target scene is within the first brightness range, the event acquisition device may capture the brightness changes of the target scene over one or more preset time periods to obtain event data, and the polarities of the event data may be integrated over the spatial dimensions to obtain single-channel or multi-channel event information.

The integration is performed as follows:

$$E_k = \sum_{t_i \in T} p_i \qquad (2)$$

In formula (2), $E_k$ denotes the event information of the k-th position within a preset time period $T$, that is, the sum of the polarities $p_i$ of the events occurring at that position whose timestamps $t_i$ fall within $T$. In this way, integrating the event data at every position in the scene yields single-channel event information (also called an event frame); integrating the event data at every position over multiple preset time periods yields multi-channel event information, for example four-channel event information. To keep the data range consistent, the event information of each channel may be standardized over the spatial dimensions, and the standardized event information serves as the event information of the target scene. The present disclosure does not limit the number of channels of the event information.
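As an illustrative sketch, the accumulation of formula (2) and the per-channel spatial standardization can be written as follows (the numpy event layout `(x, y, t, p)` and the equal-width time bins are assumptions for illustration, not the patent's exact procedure):

```python
import numpy as np

def events_to_frames(events, height, width, num_bins):
    """Accumulate the polarities of an event stream (x, y, t, p) into
    `num_bins` event frames, one per preset time period, then
    standardize each channel over the spatial dimensions."""
    x = events[:, 0].astype(int)
    y = events[:, 1].astype(int)
    t = events[:, 2]
    p = events[:, 3]
    frames = np.zeros((num_bins, height, width))
    # Split the recording into equal preset time periods.
    edges = np.linspace(t.min(), t.max(), num_bins + 1)
    bins = np.clip(np.searchsorted(edges, t, side="right") - 1, 0, num_bins - 1)
    # Sum polarities per position within each period (formula (2)).
    np.add.at(frames, (bins, y, x), p)
    # Standardize each channel spatially for a consistent data range.
    mean = frames.mean(axis=(1, 2), keepdims=True)
    std = frames.std(axis=(1, 2), keepdims=True) + 1e-8
    return (frames - mean) / std
```

With `num_bins=1` this produces a single event frame; with `num_bins=4` it produces the four-channel event information mentioned above.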

In one possible implementation, feature extraction may be performed on the event information in step S12 to obtain a first event feature of the target scene. The first event feature includes at least information representing the structure of the target scene. The features of the event information may be extracted, for example, by a convolutional neural network, which may include multiple convolutional layers, multiple residual layers, and so on. The present disclosure does not limit the network structure of the convolutional neural network.

In one possible implementation, image reconstruction may be performed on the first event feature in step S13 to obtain a reconstructed image of the target scene. The reconstructed image may be, for example, an intensity image whose brightness is within a second brightness range corresponding to a normal lighting condition, the second brightness range being higher than the first brightness range.

In one possible implementation, image reconstruction may be performed on the first event feature by, for example, a deconvolutional neural network, which may include multiple deconvolutional layers, multiple residual layers, convolutional layers, and so on. The present disclosure does not limit the specific value of the second brightness range or the network structure of the deconvolutional neural network.
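The encoder–decoder shape flow can be illustrated with simple stand-ins (average pooling in place of strided convolutions, nearest-neighbour upsampling in place of deconvolutions; the real networks are learned, and this sketch only mirrors the tensor shapes):

```python
import numpy as np

def downsample2(x):
    """Stride-2 average pooling: a stand-in for a stride-2 convolution."""
    c, h, w = x.shape
    return x[:, :h - h % 2, :w - w % 2].reshape(c, h // 2, 2, w // 2, 2).mean(axis=(2, 4))

def upsample2(x):
    """Nearest-neighbour upsampling: a stand-in for a stride-2 deconvolution."""
    return x.repeat(2, axis=1).repeat(2, axis=2)

def extract_features(event_info):
    # Sketch of the first feature extraction network: (C, H, W) -> (C, H/4, W/4).
    return downsample2(downsample2(event_info))

def reconstruct_image(event_feature):
    # Sketch of the image reconstruction network: back to (1, H, W).
    up = upsample2(upsample2(event_feature))
    return up.mean(axis=0, keepdims=True)  # collapse channels into one intensity image
```

Four-channel event information of size 64×64 would thus map to a 4×16×16 event feature and back to a 1×64×64 reconstructed intensity image.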

According to the embodiments of the present disclosure, event information of the target scene within a lower first brightness range can be acquired; feature extraction is performed on the event information to obtain an event feature; and image reconstruction is performed on the event feature to obtain a reconstructed image of the target scene within a higher second brightness range. A high-quality image under normal lighting conditions is thus reconstructed from events captured under dark-light conditions, improving the effect of image reconstruction.

In one possible implementation, step S13 may include:

performing detail enhancement on the first event feature according to first noise information and the first event feature, to obtain a second event feature;

fusing the first event feature with the second event feature to obtain a fused feature;

performing image reconstruction on the fused feature to obtain the reconstructed image of the target scene.

For example, event information acquired under dark-light conditions may suffer from considerable noise interference and missing local structural information. In this case, the first event feature may be enhanced so that more detailed information can be recovered.

In one possible implementation, random first noise information may be preset, and an additional noise channel is added to the first event feature according to the first noise information. The first event feature with the added noise channel is input into a detail enhancement network for detail enhancement, to obtain the second event feature. The detail enhancement network may be, for example, a residual network including a convolutional layer and multiple residual layers. The present disclosure does not limit the way the first noise information is obtained or the specific network structure of the detail enhancement network.

In one possible implementation, the first event feature and the second event feature may be fused, for example by superposition, to obtain a fused feature; the fused feature is input into the deconvolutional neural network for image reconstruction, to obtain the reconstructed image of the target scene.

In this way, the detailed information in the first event feature can be enhanced, further improving the quality of the reconstructed image.
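A minimal sketch of the noise channel, the detail enhancement, and the fusion by superposition (the 1×1 channel-mixing map stands in for the residual detail enhancement network and is an assumption for illustration):

```python
import numpy as np

def add_noise_channel(feature, rng):
    """Append one random noise channel to a (C, H, W) feature map."""
    noise = rng.standard_normal((1,) + feature.shape[1:])
    return np.concatenate([feature, noise], axis=0)

def detail_enhance(feature_with_noise, weight):
    """Stand-in for the detail enhancement network: a 1x1 'convolution',
    i.e. a per-pixel linear mix of the input channels."""
    return np.einsum("oc,chw->ohw", weight, feature_with_noise)

def fuse(first_feature, second_feature):
    """Fusion by superposition (element-wise addition)."""
    return first_feature + second_feature

rng = np.random.default_rng(0)
first = rng.standard_normal((8, 16, 16))    # first event feature
weight = rng.standard_normal((8, 9)) * 0.1  # 9 inputs = 8 channels + 1 noise channel
second = detail_enhance(add_noise_channel(first, rng), weight)
fused = fuse(first, second)                 # input to the image reconstruction network
```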

In one possible implementation, the image reconstruction method according to the embodiments of the present disclosure may be implemented by an image processing network. The image processing network includes at least a first feature extraction network and an image reconstruction network. The first feature extraction network is used for feature extraction of the event information and may be, for example, a convolutional neural network; the image reconstruction network is used for image reconstruction of the first event feature and may be, for example, a deconvolutional neural network.

It should be understood that the image processing network may adopt other types of networks or models, which those skilled in the art can set according to the actual situation; the present disclosure does not limit this.

Before the image processing network is applied, it may be trained.

In one possible implementation, the image reconstruction method according to the embodiments of the present disclosure further includes: training the image processing network according to a preset training set, where the training set includes first sample event information of multiple first sample scenes, and second sample event information and sample scene images of multiple second sample scenes,

where the first sample event information is acquired within a third brightness range, the second sample event information is acquired within a fourth brightness range, the sample scene images are acquired within the fourth brightness range, and the fourth brightness range is higher than the third brightness range.

For example, a training set may be preset and include multiple sample scenes, such as buildings, landscapes, people, and vehicles. The sample scenes may be divided into dark-light scenes (referred to as first sample scenes) and normally illuminated scenes (referred to as second sample scenes). Each first sample scene includes first sample event information; each second sample scene includes second sample event information and a sample scene image. The first sample scenes and the second sample scenes may be the same or different scenes, which the present disclosure does not limit.

In one possible implementation, when a first sample scene is within a third brightness range corresponding to the dark-light condition, the brightness changes of the first sample scene may be captured by an event acquisition device (for example, an event camera) to obtain the first sample event information, which serves as an input of the image processing network. The first sample event information includes information representing the overall structure of the first sample scene. The third brightness range may be the same as or different from the aforementioned first brightness range, which the present disclosure does not limit.

The first sample event information acquired under dark-light conditions includes information representing the overall structure of the first sample scene but lacks intensity information (that is, the brightness information of the image). In this case, event information of the second sample scenes under normal lighting conditions (referred to as second sample event information) may be introduced, so that the image processing network can learn the intensity information in the second sample event information.

In one possible implementation, when a second sample scene is within a fourth brightness range corresponding to the normal lighting condition, the brightness changes of the second sample scene may be captured by the event acquisition device to obtain the second sample event information. The fourth brightness range is higher than the third brightness range, and may be the same as or different from the aforementioned second brightness range, which the present disclosure does not limit.

The first sample event information of the first sample scenes and the second sample event information of the second sample scenes may be acquired in a manner similar to that of the event information of the target scene, and the description is not repeated here.

In addition, for a first sample scene under dark-light conditions, the image captured by an image acquisition device is of poor quality and cannot serve as supervision information. In this case, a sample scene image of a second sample scene under normal lighting conditions may be introduced as the supervision information of the image processing network. The sample scene image may be captured by an image acquisition device (for example, a camera) within the fourth brightness range corresponding to the normal lighting condition.

In this way, the training effect of the image processing network can be improved.
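The two kinds of training samples can be represented as follows (a hypothetical container, only to make the asymmetry of the supervision explicit: dark-light samples carry event information alone, while normally lit samples also carry a ground-truth image):

```python
from dataclasses import dataclass
from typing import Optional
import numpy as np

@dataclass
class TrainingSample:
    event_info: np.ndarray             # (C, H, W) event frames
    scene_image: Optional[np.ndarray]  # (1, H, W) image; None for dark-light scenes

# First sample scene (third, dark-light brightness range): events only.
dark_sample = TrainingSample(np.zeros((4, 64, 64)), None)
# Second sample scene (fourth, normal brightness range): events plus supervision image.
lit_sample = TrainingSample(np.zeros((4, 64, 64)), np.zeros((1, 64, 64)))
```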

In one possible implementation, the image processing network further includes a discrimination network, and the step of training the image processing network according to the preset training set includes:

inputting the first sample event information of the first sample scenes and the second sample event information of the second sample scenes into the first feature extraction network respectively, to obtain first sample event features and second sample event features;

inputting the first sample event features and the second sample event features into the discrimination network respectively, to obtain a first discrimination result and a second discrimination result;

adversarially training the image processing network according to the first discrimination result and the second discrimination result.

For example, the discrimination network in the image processing network is used to discriminate the output of the first feature extraction network. That is, the first feature extraction network may be trained adversarially, so that it learns the information of the distribution shared by the first sample event information under dark-light conditions and the second sample event information under normal lighting conditions.

In one possible implementation, the first sample event information of the first sample scenes and the second sample event information of the second sample scenes may be input into the first feature extraction network for processing, which outputs the first sample event features and the second sample event features; the first sample event features and the second sample event features are input into the discrimination network respectively, to obtain the first discrimination result and the second discrimination result; and the image processing network is adversarially trained according to the first discrimination result and the second discrimination result.

During adversarial training, the first feature extraction network tries to confuse the first sample event features with the second sample event features, while the discrimination network tries to distinguish them; the two oppose, and thereby improve, each other.

In this way, the first feature extraction network can be forced to extract the common distribution domain shared by the feature domain under normal lighting conditions and the feature domain under dark-light conditions, so that the first sample event features obtained under dark-light conditions exhibit the distribution characteristics of event information under normal lighting conditions, and the second sample event features obtained under normal lighting conditions exhibit the distribution characteristics of event information under dark-light conditions. That is, through domain adaptation, the first feature extraction network becomes suitable for feature extraction of the two differently distributed kinds of data at the same time. The present disclosure does not limit the choice of loss function for the adversarial training.

In this way, the first feature extraction network can better extract event features under dark light, improving the accuracy of the first feature extraction network, so that high-quality image reconstruction can be achieved from event information captured under dark light.
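The patent leaves the adversarial loss function open; the standard non-saturating GAN loss is one common choice, sketched here on scalar discriminator logits (labeling normal-light features 1 and dark-light features 0 is an assumption, as is the loss form):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def discriminator_loss(logit_dark, logit_normal):
    """BCE loss for the discrimination network: push dark-light
    features toward label 0 and normal-light features toward label 1."""
    return -(np.log(1 - sigmoid(logit_dark) + 1e-12)
             + np.log(sigmoid(logit_normal) + 1e-12))

def extractor_adv_loss(logit_dark, logit_normal):
    """Adversarial loss for the first feature extraction network: it
    tries to make the two feature domains indistinguishable, here via
    flipped labels (non-saturating form)."""
    return -(np.log(sigmoid(logit_dark) + 1e-12)
             + np.log(1 - sigmoid(logit_normal) + 1e-12))
```

When the discriminator separates the domains well, `discriminator_loss` is small and `extractor_adv_loss` is large, driving the extractor toward the common distribution domain.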

In one possible implementation, the step of training the image processing network according to the preset training set further includes:

inputting the second sample event features into the image reconstruction network to obtain a first reconstructed image of the second sample scene;

training the image processing network according to the first reconstructed image of the second sample scene and the sample scene image.

For example, after the adversarial training, the second sample event features extracted by the first feature extraction network exhibit the distribution characteristics of event information under dark-light conditions, and the corresponding second sample event information has supervision information (that is, the sample scene image under normal lighting conditions).

In one possible implementation, the image processing network further includes a second feature extraction network, and the step of training the image processing network according to the preset training set further includes:

inputting the second sample event information of the second sample scene and second noise information into the second feature extraction network to obtain third sample event features;

fusing the second sample event features with the third sample event features to obtain a first sample fused feature;

inputting the first sample fused feature into the discrimination network to obtain a third discrimination result;

adversarially training the image processing network according to the first discrimination result and the third discrimination result.

For example, the first sample event information acquired under dark-light conditions may contain a certain amount of noise interference, whereas the noise in the second sample event information acquired under normal lighting conditions is lower. In this case, an additional noise channel may be introduced for the second sample event information, to improve the generalization of the network.

In one possible implementation, the image processing network further includes a second feature extraction network, for example a convolutional network including multiple convolutional layers and multiple residual layers. The present disclosure does not limit the network structure of the second feature extraction network.

In one possible implementation, random second noise information may be preset, and a noise channel is added to the second sample event information according to the second noise information. The second sample event information with the added noise channel is input into the second feature extraction network for feature extraction, which outputs the third sample event features; the second sample event features are fused with the third sample event features to obtain the first sample fused feature. In this way, feature enhancement of the second sample event features can be achieved.

In one possible implementation, the first sample fused feature is input into the discrimination network to obtain the third discrimination result; then, the image processing network is adversarially trained according to the first discrimination result and the third discrimination result. The specific adversarial training process is not described again.

In this way, the accuracy of the first feature extraction network can be further improved.

In one possible implementation, the step of training the image processing network according to the preset training set further includes:

inputting the first sample fused feature into the image reconstruction network to obtain a second reconstructed image of the second sample scene;

training the image processing network according to the second reconstructed image of the second sample scene and the sample scene image.

For example, after the adversarial training, the first sample fused feature extracted by the first feature extraction network and the second feature extraction network exhibits the distribution characteristics of event information under dark-light conditions, and the corresponding second sample event information has supervision information (that is, the sample scene image under normal lighting conditions).

In one possible implementation, the first sample fused feature may be input into the image reconstruction network for processing, which outputs the second reconstructed image of the second sample scene. From the difference between the second reconstructed image of the second sample scene and the sample scene image, the network loss of the first feature extraction network, the second feature extraction network, and the image reconstruction network can be determined, for example an L1 loss; then, the network parameters of the first feature extraction network, the second feature extraction network, and the image reconstruction network can be adjusted by back-propagating this network loss, thereby training these three networks.
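The L1 network loss mentioned above can be sketched as:

```python
import numpy as np

def l1_loss(reconstructed, target):
    """Mean absolute difference between the second reconstructed image
    and the sample scene image; its gradient would be back-propagated
    into the two feature extraction networks and the image
    reconstruction network."""
    return float(np.abs(reconstructed - target).mean())
```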

In the actual training process, alternating training may likewise be performed. That is, in each iteration, the network parameters of the discrimination network are first adjusted by back-propagating the adversarial network loss; then the network parameters of the first feature extraction network, the second feature extraction network, and the image reconstruction network are adjusted by back-propagating their network loss. In this second step, the output of the discrimination network is still obtained as guidance information, but the parameters of the discrimination network are not updated. After multiple rounds of iteration, once the training condition (for example, network convergence) is satisfied, the trained image processing network is obtained.

In this way, the training process of the entire image processing network can be realized, and a high-precision image processing network can be obtained.
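The alternating schedule can be sketched at the control-flow level (the two update callables are placeholders; real steps would back-propagate the adversarial loss and the reconstruction loss respectively):

```python
def alternating_training(num_iters, update_discriminator, update_generator):
    """One discriminator update followed by one update of the feature
    extraction / reconstruction networks per iteration; the
    discriminator's output still guides the second step, but its
    parameters are only changed in the first step."""
    for _ in range(num_iters):
        update_discriminator()
        update_generator()

calls = {"d": 0, "g": 0}
alternating_training(3,
                     lambda: calls.__setitem__("d", calls["d"] + 1),
                     lambda: calls.__setitem__("g", calls["g"] + 1))
```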

In one possible implementation, the image processing network further includes a detail enhancement network, and the step of training the image processing network according to the preset training set may further include:

inputting the second sample event features and third noise information into the detail enhancement network to obtain fourth sample event features;

fusing the second sample event features with the fourth sample event features to obtain a second sample fused feature;

inputting the second sample fused feature into the image reconstruction network to obtain a third reconstructed image of the second sample scene;

training the image processing network according to the first reconstructed image of the second sample scene, the third reconstructed image, and the sample scene image.

For example, a detail enhancement network may be introduced to perform detail enhancement on the event features, so that more detailed image information (for example, local structural information) can be recovered. The detail enhancement network may be, for example, a residual network including a convolutional layer and multiple residual layers. The present disclosure does not limit the network structure of the detail enhancement network.

In one possible implementation, when the second feature extraction network is not introduced, the second sample event features may be used directly for detail enhancement. Random third noise information may be preset, and a noise channel is added to the second sample event features according to the third noise information. The second sample event features with the added noise channel are input into the detail enhancement network for processing to obtain the fourth sample event features; the second sample event features are fused with the fourth sample event features to obtain the second sample fused feature; and the second sample fused feature is input into the image reconstruction network to obtain the third reconstructed image of the second sample scene.

In one possible implementation, the image processing network is trained according to the first reconstructed image of the sample scene, the third reconstructed image, and the sample scene image.

From the difference between the third reconstructed image and the sample scene image, a first loss of the first feature extraction network, the detail enhancement network, and the image reconstruction network can be determined; from the difference between the third reconstructed image and the sample scene image together with the difference between the first reconstructed image and the sample scene image, a second loss of these networks can be determined. The second loss ensures that the quality of the third reconstructed image, obtained with detail enhancement, is better than that of the first reconstructed image obtained without detail enhancement, guaranteeing that the detail enhancement network plays its expected role.

In one possible implementation, an overall loss of the first feature extraction network, the detail enhancement network, and the image reconstruction network may be determined from the first loss and the second loss, for example as a weighted sum of the two; then, the network parameters of these three networks can be adjusted by back-propagating the overall loss, thereby training them.
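The text does not spell out the exact form of the second loss; a hinge that penalizes the detail-enhanced third reconstruction whenever it is no closer to the target than the first reconstruction is one plausible concrete reading, sketched below together with the weighted overall loss (the hinge form and the weights `w1`, `w2` are assumptions):

```python
import numpy as np

def l1(a, b):
    return float(np.abs(a - b).mean())

def overall_loss(first_recon, third_recon, target, w1=1.0, w2=0.5):
    """Weighted sum of the first loss (L1 of the third reconstructed
    image against the sample scene image) and an assumed hinge-form
    second loss enforcing that detail enhancement does not hurt."""
    first_loss = l1(third_recon, target)
    second_loss = max(0.0, l1(third_recon, target) - l1(first_recon, target))
    return w1 * first_loss + w2 * second_loss
```

When the third reconstruction already beats the first, the hinge vanishes and only the plain L1 term remains.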

In the actual training process, alternating training may likewise be performed. That is, in each iteration, the discrimination network is adversarially trained first; then the first feature extraction network, the detail enhancement network, and the image reconstruction network are trained, with the output of the discrimination network serving as guidance information while its parameters are not updated. After multiple rounds of iteration, once the training condition (for example, network convergence) is satisfied, the trained image processing network is obtained.

In this way, detail enhancement of the reconstructed image can be realized, further improving the quality of the reconstructed images produced by the trained image processing network.

在一種可能的實現方式中,所述根據預設的訓練集訓練所述圖像處理網路的步驟,還可包括:In a possible implementation, the step of training the image processing network according to a preset training set may further include:

將所述第一樣本融合特徵及第四雜訊資訊輸入所述細節增強網路,得到第五樣本事件特徵;Input the fusion feature of the first sample and the fourth noise information into the detail enhancement network to obtain the event feature of the fifth sample;

將所述第一樣本融合特徵與所述第五樣本事件特徵融合,得到第三樣本融合特徵;Fuse the first sample fusion feature with the fifth sample event feature to obtain a third sample fusion feature;

將所述第三樣本融合特徵輸入所述圖像重建網路,得到所述第二樣本場景的第四重建圖像;Input the third sample fusion feature into the image reconstruction network to obtain a fourth reconstructed image of the second sample scene;

根據所述第二樣本場景的第二重建圖像、所述第四重建圖像及所述樣本場景圖像,訓練所述圖像處理網路。Training the image processing network according to the second reconstructed image of the second sample scene, the fourth reconstructed image, and the sample scene image.

舉例來說，在已引入第二特徵提取網路的情況下，可使用第一樣本融合特徵進行細節增強。可預設隨機的第四雜訊資訊，根據該第四雜訊資訊為第一樣本融合特徵添加雜訊通道。將添加雜訊通道後的第一樣本融合特徵輸入細節增強網路中處理，得到第五樣本事件特徵；將第一樣本融合特徵與第五樣本事件特徵融合，得到第三樣本融合特徵；將所述第三樣本融合特徵輸入所述圖像重建網路，得到所述第二樣本場景的第四重建圖像。For example, when the second feature extraction network has been introduced, the first sample fusion feature can be used for detail enhancement. Random fourth noise information can be preset, and a noise channel is added to the first sample fusion feature according to this fourth noise information. The first sample fusion feature with the added noise channel is input into the detail enhancement network to obtain the fifth sample event feature; the first sample fusion feature is fused with the fifth sample event feature to obtain the third sample fusion feature; and the third sample fusion feature is input into the image reconstruction network to obtain the fourth reconstructed image of the second sample scene.
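A minimal sketch of this data flow (illustrative only: the feature maps are flat lists, and `add_noise_channel`, `detail_enhance`, `fuse`, and `reconstruct` are stand-ins for the actual networks and fusion operations, not the disclosed implementations):

```python
import random

def add_noise_channel(feature, noise):
    """Attach a noise channel to a feature (here: plain list concatenation)."""
    return feature + noise

def detail_enhance(feature_with_noise, out_len):
    """Stand-in for the detail enhancement network: any mapping back to a
    feature the same size as the input fusion feature."""
    return [0.1 * v for v in feature_with_noise[:out_len]]

def fuse(a, b):
    """Element-wise fusion of two equally sized features (here: addition)."""
    return [x + y for x, y in zip(a, b)]

def reconstruct(feature):
    """Stand-in for the image reconstruction network."""
    return [min(max(v, 0.0), 1.0) for v in feature]

first_fusion = [0.4, 0.6, 0.2, 0.9]              # first sample fusion feature
noise = [random.random() for _ in first_fusion]  # fourth noise information

fifth_event = detail_enhance(add_noise_channel(first_fusion, noise), len(first_fusion))
third_fusion = fuse(first_fusion, fifth_event)   # third sample fusion feature
fourth_recon = reconstruct(third_fusion)         # fourth reconstructed image
```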

在一種可能的實現方式中,根據所述第二樣本場景的第二重建圖像、所述第四重建圖像及所述樣本場景圖像,訓練圖像處理網路。該步驟可包括:In a possible implementation manner, an image processing network is trained based on the second reconstructed image of the second sample scene, the fourth reconstructed image, and the sample scene image. This step can include:

根據所述第二樣本場景的第二重建圖像、所述第四重建圖像及所述樣本場景圖像,確定所述圖像處理網路的總體損失;Determine the overall loss of the image processing network according to the second reconstructed image of the second sample scene, the fourth reconstructed image, and the sample scene image;

根據所述總體損失,確定所述圖像處理網路的梯度資訊;Determine the gradient information of the image processing network according to the total loss;

根據所述梯度資訊,調整所述第一特徵提取網路、所述第二特徵提取網路、所述細節增強網路及所述圖像重建網路的網路參數,Adjusting the network parameters of the first feature extraction network, the second feature extraction network, the detail enhancement network, and the image reconstruction network according to the gradient information,

其中,所述細節增強網路的梯度資訊不傳遞到所述第二特徵提取網路。Wherein, the gradient information of the detail enhancement network is not transmitted to the second feature extraction network.

舉例來說，根據第四重建圖像與樣本場景圖像之間的差異，可確定第一特徵提取網路、第二特徵提取網路、細節增強網路及圖像重建網路的第三損失；根據第四重建圖像與樣本場景圖像之間的差異，以及第二重建圖像與樣本場景圖像之間的差異，可確定第一特徵提取網路、第二特徵提取網路、細節增強網路及圖像重建網路的第四損失。該第四損失可保證引入細節增強後的第四重建圖像的品質優於未引入細節增強時的第二重建圖像的品質，保證細節增強網路能起到預期的作用。For example, the third loss of the first feature extraction network, the second feature extraction network, the detail enhancement network, and the image reconstruction network can be determined according to the difference between the fourth reconstructed image and the sample scene image; the fourth loss of these networks can be determined according to the difference between the fourth reconstructed image and the sample scene image together with the difference between the second reconstructed image and the sample scene image. The fourth loss ensures that the quality of the fourth reconstructed image, obtained with detail enhancement, is better than that of the second reconstructed image obtained without it, so that the detail enhancement network plays its expected role.

在一種可能的實現方式中，可根據第三損失和第四損失確定第一特徵提取網路、第二特徵提取網路、細節增強網路及圖像重建網路的總體損失，例如將第三損失與第四損失的加權和確定為總體損失；根據該總體損失，可確定第一特徵提取網路、第二特徵提取網路、細節增強網路及圖像重建網路的梯度資訊，進而，可在第一特徵提取網路、第二特徵提取網路、細節增強網路及圖像重建網路中反向傳遞該梯度資訊，從而調整第一特徵提取網路、第二特徵提取網路、細節增強網路及圖像重建網路的網路參數，實現第一特徵提取網路、第二特徵提取網路、細節增強網路及圖像重建網路的訓練。In a possible implementation, the overall loss of the first feature extraction network, the second feature extraction network, the detail enhancement network, and the image reconstruction network can be determined from the third loss and the fourth loss, for example by taking their weighted sum as the overall loss. From this overall loss, the gradient information of the four networks can be determined and back-propagated through them, so as to adjust their network parameters and thereby train the first feature extraction network, the second feature extraction network, the detail enhancement network, and the image reconstruction network.

在一種可能的實現方式中，由於第二特徵提取網路與細節增強網路的輸入均添加了雜訊通道，因此，為了降低早期訓練階段對學習效果的影響，在反向傳遞梯度資訊時，細節增強網路與第二特徵提取網路之間停止梯度傳遞（stop gradient），從而可降低細節增強網路與第二特徵提取網路之間的相互干擾，有效地減少資訊流中的迴圈，降低模式崩潰的概率。In a possible implementation, since noise channels are added to the inputs of both the second feature extraction network and the detail enhancement network, gradient propagation between the detail enhancement network and the second feature extraction network is stopped (stop gradient) during back-propagation, in order to reduce the influence of the early training stage on the learning effect. This reduces the mutual interference between the two networks, effectively removes loops in the information flow, and lowers the probability of mode collapse.
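The effect of stopping the gradient can be illustrated with a hand-derived scalar example (illustrative assumptions throughout: one parameter per network, linear maps, and a squared loss; real frameworks implement the same blocking with a detach/stop-gradient operation):

```python
def grads(w_p, w_e, z, stop_gradient=True):
    """Tiny hand-derived example: the second feature extraction network
    produces x_p = w_p * z, the detail enhancement branch produces
    d = w_e * x_p, and the loss is L = d**2.

    With stop_gradient=True, the gradient flowing from the detail
    enhancement branch back into the extractor parameter w_p is blocked,
    as in the training scheme described above.
    """
    x_p = w_p * z           # output of the second feature extraction network
    d = w_e * x_p           # output of the detail enhancement branch
    dL_dd = 2.0 * d         # dL/dd for L = d**2
    grad_w_e = dL_dd * x_p  # gradient reaching the detail-branch parameter
    # Gradient that would reach w_p through the detail branch:
    grad_w_p = 0.0 if stop_gradient else dL_dd * w_e * z
    return grad_w_p, grad_w_e

g_stop, _ = grads(w_p=0.5, w_e=2.0, z=1.0, stop_gradient=True)
g_full, _ = grads(w_p=0.5, w_e=2.0, z=1.0, stop_gradient=False)
```

With the stop-gradient in place, the detail branch still learns (`grad_w_e` is unchanged) while the extractor receives no gradient from this path.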

在實際訓練過程中，同樣可進行交替訓練。即在每輪反覆運算過程中，對抗訓練鑒別網路；再訓練第一特徵提取網路、第二特徵提取網路、細節增強網路及圖像重建網路，鑒別網路的輸出作為指導資訊，但不更新鑒別網路的參數。經過多輪反覆運算，在滿足訓練條件（例如網路收斂）的情況下，可得到訓練後的圖像處理網路。In the actual training process, alternating training can likewise be performed. That is, in each iteration, the discrimination network is first adversarially trained; then the first feature extraction network, the second feature extraction network, the detail enhancement network, and the image reconstruction network are trained, using the output of the discrimination network as guidance without updating its parameters. After multiple iterations, once the training condition (for example, network convergence) is met, the trained image processing network is obtained.
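The alternation described above can be sketched as the following training-loop skeleton (illustrative: `update_discriminator` and `update_generator_networks` stand in for one optimization step each, and `converged` for the training condition; none of these names come from the disclosure):

```python
def update_discriminator(state):
    """Stand-in for one adversarial update of the discrimination network D."""
    state["d_steps"] += 1
    return state

def update_generator_networks(state):
    """Stand-in for one update of the feature extraction, detail enhancement,
    and image reconstruction networks. D's output guides the loss here, but
    D's parameters are NOT updated in this step."""
    state["g_steps"] += 1
    return state

def converged(state, max_rounds=100):
    """Stand-in for the training condition (e.g. network convergence)."""
    return state["g_steps"] >= max_rounds

def train(state):
    while not converged(state):
        state = update_discriminator(state)       # 1) train D adversarially
        state = update_generator_networks(state)  # 2) train the other networks
    return state

final = train({"d_steps": 0, "g_steps": 0})
```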

通過這種方式,可以實現重建圖像的細節增強,進一步提高訓練後的圖像處理網路得到的重建圖像的品質。In this way, the details of the reconstructed image can be enhanced, and the quality of the reconstructed image obtained by the trained image processing network can be further improved.

圖2示出根據本公開實施例的圖像重建方法的網路訓練的處理過程的示意圖。如圖2所示，根據本公開實施例的圖像處理網路包括第一特徵提取網路EC、第二特徵提取網路EP、鑒別網路D、細節增強網路Te 及圖像重建網路R。Fig. 2 shows a schematic diagram of the network training process of the image reconstruction method according to an embodiment of the present disclosure. As shown in Fig. 2, the image processing network according to an embodiment of the present disclosure includes a first feature extraction network E_C, a second feature extraction network E_P, a discrimination network D, a detail enhancement network T_e, and an image reconstruction network R.

在示例中，對於任意一組第一樣本場景和第二樣本場景，可將暗光條件下的第一樣本事件資訊21輸入第一特徵提取網路EC 中處理，輸出第一樣本事件特徵XLE；將正常光照條件下的第二樣本事件資訊22輸入參數共用的第一特徵提取網路EC 中處理，輸出第二樣本事件特徵XC；對正常光照條件下的第二樣本事件資訊22添加雜訊資訊23後，輸入參數不共用的第二特徵提取網路EP 中處理，輸出第三樣本事件特徵Xp；將第二樣本事件特徵XC 與第三樣本事件特徵Xp 進行疊加，得到第一樣本融合特徵XDE；將第一樣本事件特徵XLE 和第一樣本融合特徵XDE 分別輸入鑒別網路D中進行鑒別，得到各自的鑒別結果（未示出）。In an example, for any pair of a first sample scene and a second sample scene, the first sample event information 21 acquired under dark light conditions is input into the first feature extraction network E_C, which outputs the first sample event feature X_LE; the second sample event information 22 acquired under normal lighting conditions is input into the same (parameter-sharing) first feature extraction network E_C, which outputs the second sample event feature X_C; after noise information 23 is added to the second sample event information 22, it is input into the second feature extraction network E_P, whose parameters are not shared, which outputs the third sample event feature X_p. The second sample event feature X_C and the third sample event feature X_p are superimposed to obtain the first sample fusion feature X_DE. The first sample event feature X_LE and the first sample fusion feature X_DE are each input into the discrimination network D, yielding respective discrimination results (not shown).

在示例中，根據鑒別結果對抗訓練鑒別網路D。網路損失 $L_D$ 例如可表示如下：

$L_D = \mathcal{L}_{X_{LE}} + \mathcal{L}_{X_{DE}}$　　(3)

公式(3)中，$\mathcal{L}_{X_{LE}}$ 與 $\mathcal{L}_{X_{DE}}$ 分別表示第一樣本事件特徵XLE 和第一樣本融合特徵XDE 對應的損失。In the example, the discrimination network D is adversarially trained according to the discrimination results. Its network loss $L_D$ can, for example, be expressed as in formula (3), where $\mathcal{L}_{X_{LE}}$ and $\mathcal{L}_{X_{DE}}$ respectively denote the losses corresponding to the first sample event feature X_LE and the first sample fusion feature X_DE.
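The disclosure does not fix a concrete form for these two per-feature losses; one common instantiation for an adversarial discriminator is binary cross-entropy, sketched below purely as an assumption (the 0/1 label assignment for X_LE versus X_DE is also illustrative):

```python
import math

def bce(prob, label):
    """Binary cross-entropy for a single scalar prediction in [0, 1]."""
    eps = 1e-12
    return -(label * math.log(prob + eps) + (1 - label) * math.log(1 - prob + eps))

def discriminator_loss(d_x_le, d_x_de):
    """L_D as the sum of the two per-feature losses: here D is assumed to
    label the dark-light feature X_LE as 0 and the fused normal-light
    feature X_DE as 1 (an illustrative choice, not from the disclosure)."""
    return bce(d_x_le, 0.0) + bce(d_x_de, 1.0)

# Hypothetical discriminator outputs for the two features:
loss = discriminator_loss(d_x_le=0.2, d_x_de=0.9)
```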

在示例中，將第一樣本融合特徵XDE 輸入圖像重建網路R中，輸出第二重建圖像 $\hat{I}$；同時，對第一樣本融合特徵XDE 添加雜訊資訊24後，輸入細節增強網路Te 中，輸出第五樣本事件特徵 $\hat{X}_p$；將第一樣本融合特徵XDE 與第五樣本事件特徵 $\hat{X}_p$ 融合後，輸入圖像重建網路R中，輸出第四重建圖像 $\hat{I}'$。In the example, the first sample fusion feature X_DE is input into the image reconstruction network R, which outputs the second reconstructed image $\hat{I}$; meanwhile, after noise information 24 is added to the first sample fusion feature X_DE, it is input into the detail enhancement network T_e, which outputs the fifth sample event feature $\hat{X}_p$; the first sample fusion feature X_DE is fused with the fifth sample event feature $\hat{X}_p$ and input into the image reconstruction network R, which outputs the fourth reconstructed image $\hat{I}'$.

在示例中，根據第二重建圖像 $\hat{I}$、第四重建圖像 $\hat{I}'$ 及樣本場景圖像 $I$（未示出），可確定第一特徵提取網路EC、第二特徵提取網路EP、細節增強網路Te 及圖像重建網路R的總體損失 $L_{rec}$（也可稱為重建損失），例如可表示如下：

$L_{rec} = L_{lum} + \beta L_{res} + \gamma L_{rank}$　　(4)

公式(4)中，$L_{lum}$ 表示亮度重建損失，可以為第四重建圖像 $\hat{I}'$ 與樣本場景圖像 $I$ 之間的L1損失，以及第二重建圖像 $\hat{I}$ 與樣本場景圖像 $I$ 之間的L1損失的和，即 $L_{lum}=\|\hat{I}'-I\|_1+\|\hat{I}-I\|_1$；$L_{res}$ 表示細節增強網路的殘差損失，可以為第五樣本事件特徵 $\hat{X}_p$ 與 $-X_p$ 之間的L1損失，即 $L_{res}=\|\hat{X}_p+X_p\|_1$；$L_{rank}$ 表示排名損失，可以為第四重建圖像 $\hat{I}'$ 與樣本場景圖像 $I$ 之間的L1損失，與第二重建圖像 $\hat{I}$ 與樣本場景圖像 $I$ 之間的L1損失的差，即 $L_{rank}=\|\hat{I}'-I\|_1-\|\hat{I}-I\|_1$。β和γ表示超參數項，本領域技術人員可根據實際情況設置。

In the example, from the second reconstructed image $\hat{I}$, the fourth reconstructed image $\hat{I}'$, and the sample scene image $I$ (not shown), the overall loss $L_{rec}$ (also called the reconstruction loss) of the first feature extraction network E_C, the second feature extraction network E_P, the detail enhancement network T_e, and the image reconstruction network R can be determined, for example as in formula (4). There, $L_{lum}$ denotes the luminance reconstruction loss, which may be the sum of the L1 loss between $\hat{I}'$ and $I$ and the L1 loss between $\hat{I}$ and $I$; $L_{res}$ denotes the residual loss of the detail enhancement network, which may be the L1 loss between the fifth sample event feature $\hat{X}_p$ and $-X_p$; and $L_{rank}$ denotes the ranking loss, which may be the difference between the L1 loss of $\hat{I}'$ against $I$ and the L1 loss of $\hat{I}$ against $I$. β and γ are hyperparameter terms that can be set by those skilled in the art according to the actual situation.

其中，重建損失 $L_{rec}$ 的第一項用於確保網路能夠恢復出正確的圖像，第二項用於保證細節增強網路的精度，第三項用於保證網路在引入細節增強網路Te 後的重構效果更好，使得細節增強網路Te 能真正地起到細節增強的作用。Here, the first term of the reconstruction loss $L_{rec}$ ensures that the network can recover a correct image, the second term ensures the accuracy of the detail enhancement network, and the third term ensures that the reconstruction is better after the detail enhancement network T_e is introduced, so that T_e truly performs detail enhancement.
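As a numeric sketch of this reconstruction loss (illustrative only: the three terms follow the description above, while the toy 1-D "images", features, and weights β, γ are assumed example values):

```python
def l1(a, b):
    """Mean absolute difference between two equally sized flat lists."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def reconstruction_loss(i_plain, i_enh, i_gt, x_p_hat, x_p, beta=1.0, gamma=1.0):
    """L_rec = L_lum + beta * L_res + gamma * L_rank, per the description above."""
    l_lum = l1(i_enh, i_gt) + l1(i_plain, i_gt)    # luminance reconstruction term
    l_res = l1(x_p_hat, [-v for v in x_p])         # residual term (target: -X_p)
    l_rank = l1(i_enh, i_gt) - l1(i_plain, i_gt)   # ranking term
    return l_lum + beta * l_res + gamma * l_rank

i_gt = [0.5, 0.5]      # sample scene image I
i_plain = [0.7, 0.3]   # second reconstructed image (no enhancement)
i_enh = [0.6, 0.45]    # fourth reconstructed image (with enhancement)
x_p = [0.2, -0.1]      # third sample event feature X_p
x_p_hat = [-0.2, 0.1]  # fifth sample event feature, ideally equal to -X_p

loss = reconstruction_loss(i_plain, i_enh, i_gt, x_p_hat, x_p)
```

Here the ranking term is negative (the enhanced image is closer to the ground truth than the plain one) and the residual term is zero, as the enhancement branch exactly predicts −X_p.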

在示例中，根據本公開實施例的圖像處理網路的總體優化目標例如可表示如下：

$\min_{\theta_{E_C},\,\theta_{E_P},\,\theta_R,\,\theta_{T_e}}\;\max_{\theta_D}\; L_{rec} + \lambda L_D$　　(5)

公式(5)中，$\theta_{E_C}$、$\theta_{E_P}$、$\theta_R$、$\theta_{T_e}$ 分別表示用於第一特徵提取網路EC、第二特徵提取網路EP、圖像重建網路R及細節增強網路Te 的參數；$\theta_D$ 表示鑒別網路D的參數；$\lambda$ 是相應的超參數權重，本領域技術人員可根據實際情況設置。根據本公開實施例，可使用對抗式訓練交替優化這兩類參數，可例如採用隨機批次處理梯度下降的方式進行訓練，本公開對此不作限制。經訓練後，可得到高精度的圖像處理網路。

In the example, the overall optimization objective of the image processing network according to the embodiments of the present disclosure can, for example, be expressed as formula (5), where $\theta_{E_C}$, $\theta_{E_P}$, $\theta_R$ and $\theta_{T_e}$ denote the parameters of the first feature extraction network E_C, the second feature extraction network E_P, the image reconstruction network R, and the detail enhancement network T_e, respectively; $\theta_D$ denotes the parameters of the discrimination network D; and $\lambda$ is the corresponding hyperparameter weight, which can be set by those skilled in the art according to the actual situation. According to the embodiments of the present disclosure, these two groups of parameters can be optimized alternately using adversarial training, for example by stochastic mini-batch gradient descent, which the present disclosure does not limit. After training, a high-precision image processing network is obtained.

根據本公開實施例的圖像重建方法，通過將域自我調整方法與事件相機結合，利用暗光條件下的事件資訊進行圖像重建，得到正常光照條件下的高品質圖像，提高了圖像重建的效果。該方法在訓練過程中無需暗光下強度圖像進行監督訓練，實現了無監督的網路框架，降低了資料集構建難度。該方法通過細節增強網路對事件特徵中的暗光分布域進行增強，降低其中的雜訊干擾、增強局部細節，提高了圖像重建的效果以及訓練效果。According to the image reconstruction method of the embodiments of the present disclosure, a domain adaptation ("domain self-adjustment") approach is combined with an event camera, and image reconstruction is performed using event information captured under dark light conditions, yielding high-quality images as under normal lighting conditions and improving the reconstruction effect. During training, the method requires no supervision from intensity images captured under dark light, realizing an unsupervised network framework and reducing the difficulty of dataset construction. Through the detail enhancement network, the method enhances the dark-light distribution domain of the event features, reduces the noise interference therein, strengthens local details, and improves both the reconstruction effect and the training effect.

根據本公開實施例的圖像重建方法的網路框架，不依賴於事件資訊，也適用於其它基於域自我調整方法的任務，比如圖像風格變換、語義分割域自我調整等。只需更改相應的輸入資料並將圖像重構網路替換成各自任務對應的網路結構即可。The network framework of the image reconstruction method according to the embodiments of the present disclosure does not depend on event information and is also applicable to other tasks based on domain adaptation ("domain self-adjustment") methods, such as image style transfer and semantic segmentation domain adaptation. It suffices to change the corresponding input data and replace the image reconstruction network with the network structure corresponding to the respective task.

根據本公開實施例的圖像重建方法,可應用於圖像拍攝、圖像處理、人臉識別、安防等領域,實現暗光條件下的圖像重建。The image reconstruction method according to the embodiment of the present disclosure can be applied to the fields of image shooting, image processing, face recognition, security, etc., to realize image reconstruction under dark light conditions.

例如，採用相關技術的電子設備（例如智慧手機）的拍攝系統以強度相機為基礎，在暗光條件下無法成像，使用閃光燈作為輔助進行拍照或錄製視頻會帶來極大的能耗提升，而且閃光燈的刺眼光芒對於場景中的人來說很不友好。高動態的事件相機不需要額外的光源輔助，而且能耗很低。可設置事件相機獲取暗光條件下的事件資訊，通過本公開實施例的圖像重建方法，根據該事件資訊生成清晰圖像，從而實現暗光條件下的圖像拍攝。For example, the shooting systems of electronic devices (such as smartphones) in the related art are based on intensity cameras, which cannot image under dark light conditions; using a flash as an aid for taking photos or recording video greatly increases energy consumption, and the dazzling flash is unfriendly to people in the scene. A high-dynamic event camera requires no additional light source and consumes very little energy. An event camera can be set up to acquire event information under dark light conditions, and a clear image can be generated from this event information through the image reconstruction method of the embodiments of the present disclosure, thereby realizing image shooting under dark light conditions.

例如,本公開實施例的圖像重建方法,可作為多種圖像處理演算法的上游演算法。如人臉識別、物體檢測、語義分割等圖像處理任務在暗光條件下都會因無法獲取高品質強度圖像而失效。該圖像重建方法能夠通過暗光條件下的事件資訊,重構出暗光下的強度圖像,使得以上演算法可以繼續應用。For example, the image reconstruction method of the embodiment of the present disclosure can be used as an upstream algorithm of various image processing algorithms. Image processing tasks such as face recognition, object detection, semantic segmentation, etc. will fail due to the inability to obtain high-quality intensity images in low light conditions. This image reconstruction method can reconstruct the intensity image under dark light based on the event information under dark light conditions, so that the above algorithm can continue to be applied.

例如,城市的安防領域應用了大量的強度相機攝像頭,陰影區域和暗光條件下會有很多死角無法清晰檢測。可設置事件相機獲取暗光條件下的事件資訊,通過本公開實施例的圖像重建方法,根據事件資訊生成清晰的圖像,從而提高安防檢測的效果,保障城市安全。For example, a large number of intensity cameras are used in the security field of cities, and there will be many blind spots that cannot be clearly detected in shadow areas and dark light conditions. The event camera can be set to obtain event information under dark light conditions, and through the image reconstruction method of the embodiment of the present disclosure, a clear image is generated according to the event information, thereby improving the effect of security detection and ensuring city safety.

可以理解，本公開提及的上述各個方法實施例，在不違背原理邏輯的情況下，均可以彼此相互結合形成結合後的實施例，限於篇幅，本公開不再贅述。本領域技術人員可以理解，在具體實施方式的上述圖像重建方法中，各步驟的具體執行順序應當以其功能和可能的內在邏輯確定。It can be understood that the foregoing method embodiments mentioned in the present disclosure can be combined with one another to form combined embodiments without departing from their principles and logic; for brevity, details are not repeated here. Those skilled in the art can understand that in the above image reconstruction method of the specific implementations, the specific execution order of the steps should be determined by their functions and possible internal logic.

此外，本公開還提供了圖像重建裝置、電子設備、電腦可讀儲存媒體、程式，上述均可用來實現本公開提供的任一種圖像重建方法，相應技術方案和描述參見方法部分的相應記載，不再贅述。In addition, the present disclosure further provides an image reconstruction apparatus, an electronic device, a computer-readable storage medium, and a program, all of which can be used to implement any of the image reconstruction methods provided in the present disclosure; for the corresponding technical solutions and descriptions, refer to the corresponding records in the method section, which are not repeated here.

圖3示出根據本公開實施例的圖像重建裝置的框圖,如圖3所示,所述圖像重建裝置包括:Fig. 3 shows a block diagram of an image reconstruction device according to an embodiment of the present disclosure. As shown in Fig. 3, the image reconstruction device includes:

事件獲取模組31,用於獲取目標場景的事件資訊,所述事件資訊用於表示所述目標場景在第一亮度範圍內的亮度變化;The event acquisition module 31 is configured to acquire event information of a target scene, where the event information is used to indicate a brightness change of the target scene within a first brightness range;

特徵提取模組32,用於對所述事件資訊進行特徵提取,得到所述目標場景的第一事件特徵;The feature extraction module 32 is configured to perform feature extraction on the event information to obtain the first event feature of the target scene;

圖像重建模組33，用於對所述第一事件特徵進行圖像重建，得到所述目標場景的重建圖像，所述重建圖像的亮度處於第二亮度範圍內，所述第二亮度範圍高於所述第一亮度範圍。The image reconstruction module 33 is configured to perform image reconstruction on the first event feature to obtain a reconstructed image of the target scene, the brightness of the reconstructed image being within a second brightness range, the second brightness range being higher than the first brightness range.

在一種可能的實現方式中，所述圖像重建模組33包括：細節增強子模組，用於根據第一雜訊資訊及所述第一事件特徵，對所述第一事件特徵進行細節增強，得到第二事件特徵；融合子模組，用於將所述第一事件特徵與所述第二事件特徵融合，得到融合特徵；重建子模組，用於對所述融合特徵進行圖像重建，得到所述目標場景的重建圖像。In a possible implementation, the image reconstruction module 33 includes: a detail enhancement sub-module configured to perform detail enhancement on the first event feature according to first noise information and the first event feature, to obtain a second event feature; a fusion sub-module configured to fuse the first event feature with the second event feature to obtain a fusion feature; and a reconstruction sub-module configured to perform image reconstruction on the fusion feature to obtain the reconstructed image of the target scene.

在一種可能的實現方式中，所述圖像重建裝置通過圖像處理網路實現，所述圖像處理網路包括第一特徵提取網路及圖像重建網路，所述第一特徵提取網路用於對所述事件資訊進行特徵提取，所述圖像重建網路用於對所述第一事件特徵進行圖像重建，所述圖像重建裝置還包括：In a possible implementation, the image reconstruction apparatus is implemented by an image processing network; the image processing network includes a first feature extraction network for feature extraction of the event information and an image reconstruction network for image reconstruction of the first event feature, and the image reconstruction apparatus further includes:

訓練模組，用於根據預設的訓練集訓練所述圖像處理網路，所述訓練集包括多個第一樣本場景的第一樣本事件資訊，多個第二樣本場景的第二樣本事件資訊及樣本場景圖像；其中，所述第一樣本事件資訊是在第三亮度範圍內獲取的，所述第二樣本事件資訊是在第四亮度範圍內獲取的，所述樣本場景圖像是在所述第四亮度範圍內獲取的，所述第四亮度範圍高於所述第三亮度範圍。The training module is configured to train the image processing network according to a preset training set, the training set including first sample event information of a plurality of first sample scenes, and second sample event information and sample scene images of a plurality of second sample scenes; the first sample event information is acquired within a third brightness range, the second sample event information is acquired within a fourth brightness range, the sample scene images are acquired within the fourth brightness range, and the fourth brightness range is higher than the third brightness range.

在一種可能的實現方式中，所述圖像處理網路還包括鑒別網路，所述訓練模組包括：第一提取子模組，用於將所述第一樣本場景的第一樣本事件資訊和所述第二樣本場景的第二樣本事件資訊分別輸入所述第一特徵提取網路，得到第一樣本事件特徵和第二樣本事件特徵；第一鑒別子模組，用於將所述第一樣本事件特徵和所述第二樣本事件特徵分別輸入所述鑒別網路，得到第一鑒別結果和第二鑒別結果；第一對抗訓練子模組，用於根據所述第一鑒別結果及所述第二鑒別結果，對抗訓練所述圖像處理網路。In a possible implementation, the image processing network further includes a discrimination network, and the training module includes: a first extraction sub-module configured to input the first sample event information of the first sample scene and the second sample event information of the second sample scene into the first feature extraction network respectively, to obtain a first sample event feature and a second sample event feature; a first discrimination sub-module configured to input the first sample event feature and the second sample event feature into the discrimination network respectively, to obtain a first discrimination result and a second discrimination result; and a first adversarial training sub-module configured to adversarially train the image processing network according to the first discrimination result and the second discrimination result.

在一種可能的實現方式中，所述訓練模組還包括：第一重建子模組，用於將所述第二樣本事件特徵輸入所述圖像重建網路，得到所述第二樣本場景的第一重建圖像；第一訓練子模組，用於根據所述第二樣本場景的第一重建圖像及所述樣本場景圖像，訓練所述圖像處理網路。In a possible implementation, the training module further includes: a first reconstruction sub-module configured to input the second sample event feature into the image reconstruction network to obtain a first reconstructed image of the second sample scene; and a first training sub-module configured to train the image processing network according to the first reconstructed image of the second sample scene and the sample scene image.

在一種可能的實現方式中，所述圖像處理網路還包括細節增強網路，所述訓練模組還包括：第一增強子模組，用於將所述第二樣本事件特徵及第三雜訊資訊輸入所述細節增強網路，得到第四樣本事件特徵；第一融合子模組，用於將所述第二樣本事件特徵與所述第四樣本事件特徵融合，得到第二樣本融合特徵；第二重建子模組，用於將所述第二樣本融合特徵輸入所述圖像重建網路，得到所述第二樣本場景的第三重建圖像；第二訓練子模組，用於根據所述第二樣本場景的第一重建圖像、所述第三重建圖像及所述樣本場景圖像，訓練所述圖像處理網路。In a possible implementation, the image processing network further includes a detail enhancement network, and the training module further includes: a first enhancement sub-module configured to input the second sample event feature and third noise information into the detail enhancement network to obtain a fourth sample event feature; a first fusion sub-module configured to fuse the second sample event feature with the fourth sample event feature to obtain a second sample fusion feature; a second reconstruction sub-module configured to input the second sample fusion feature into the image reconstruction network to obtain a third reconstructed image of the second sample scene; and a second training sub-module configured to train the image processing network according to the first reconstructed image of the second sample scene, the third reconstructed image, and the sample scene image.

在一種可能的實現方式中，所述圖像處理網路還包括第二特徵提取網路，所述訓練模組還包括：第二提取子模組，用於將所述第二樣本場景的第二樣本事件資訊及第二雜訊資訊輸入所述第二特徵提取網路，得到第三樣本事件特徵；第二融合子模組，用於將所述第二樣本事件特徵與所述第三樣本事件特徵融合，得到第一樣本融合特徵；第二鑒別子模組，用於將所述第一樣本融合特徵輸入所述鑒別網路，得到第三鑒別結果；第二對抗訓練子模組，用於根據所述第一鑒別結果及所述第三鑒別結果，對抗訓練所述圖像處理網路。In a possible implementation, the image processing network further includes a second feature extraction network, and the training module further includes: a second extraction sub-module configured to input the second sample event information of the second sample scene and second noise information into the second feature extraction network to obtain a third sample event feature; a second fusion sub-module configured to fuse the second sample event feature with the third sample event feature to obtain a first sample fusion feature; a second discrimination sub-module configured to input the first sample fusion feature into the discrimination network to obtain a third discrimination result; and a second adversarial training sub-module configured to adversarially train the image processing network according to the first discrimination result and the third discrimination result.

在一種可能的實現方式中，所述訓練模組還包括：第三重建子模組，用於將所述第一樣本融合特徵輸入所述圖像重建網路，得到所述第二樣本場景的第二重建圖像；第三訓練子模組，用於根據所述第二樣本場景的第二重建圖像及所述樣本場景圖像，訓練所述圖像處理網路。In a possible implementation, the training module further includes: a third reconstruction sub-module configured to input the first sample fusion feature into the image reconstruction network to obtain a second reconstructed image of the second sample scene; and a third training sub-module configured to train the image processing network according to the second reconstructed image of the second sample scene and the sample scene image.

在一種可能的實現方式中，所述圖像處理網路還包括細節增強網路，所述訓練模組還包括：第二增強子模組，用於將所述第一樣本融合特徵及第四雜訊資訊輸入所述細節增強網路，得到第五樣本事件特徵；第三融合子模組，用於將所述第一樣本融合特徵與所述第五樣本事件特徵融合，得到第三樣本融合特徵；第四重建子模組，用於將所述第三樣本融合特徵輸入所述圖像重建網路，得到所述第二樣本場景的第四重建圖像；第四訓練子模組，用於根據所述第二樣本場景的第二重建圖像、所述第四重建圖像及所述樣本場景圖像，訓練所述圖像處理網路。In a possible implementation, the image processing network further includes a detail enhancement network, and the training module further includes: a second enhancement sub-module configured to input the first sample fusion feature and fourth noise information into the detail enhancement network to obtain a fifth sample event feature; a third fusion sub-module configured to fuse the first sample fusion feature with the fifth sample event feature to obtain a third sample fusion feature; a fourth reconstruction sub-module configured to input the third sample fusion feature into the image reconstruction network to obtain a fourth reconstructed image of the second sample scene; and a fourth training sub-module configured to train the image processing network according to the second reconstructed image of the second sample scene, the fourth reconstructed image, and the sample scene image.

在一種可能的實現方式中，所述第四訓練子模組用於：根據所述第二樣本場景的第二重建圖像、所述第四重建圖像及所述樣本場景圖像，確定所述圖像處理網路的總體損失；根據所述總體損失，確定所述圖像處理網路的梯度資訊；根據所述梯度資訊，調整所述第一特徵提取網路、所述第二特徵提取網路、所述細節增強網路及所述圖像重建網路的網路參數，其中，所述細節增強網路的梯度資訊不傳遞到所述第二特徵提取網路。In a possible implementation, the fourth training sub-module is configured to: determine the overall loss of the image processing network according to the second reconstructed image of the second sample scene, the fourth reconstructed image, and the sample scene image; determine gradient information of the image processing network according to the overall loss; and adjust the network parameters of the first feature extraction network, the second feature extraction network, the detail enhancement network, and the image reconstruction network according to the gradient information, wherein the gradient information of the detail enhancement network is not transmitted to the second feature extraction network.

在一些實施例中，本公開實施例提供的圖像重建裝置具有的功能或包含的模組可以用於執行上文方法實施例描述的方法，其具體實現可以參照上文方法實施例的描述，為了簡潔，這裡不再贅述。In some embodiments, the functions of or modules included in the image reconstruction apparatus provided in the embodiments of the present disclosure can be used to execute the methods described in the above method embodiments; for specific implementation, refer to the description of the above method embodiments, which, for brevity, is not repeated here.

本公開實施例還提出一種電腦可讀儲存媒體,其上儲存有電腦程式指令,所述電腦程式指令被處理器執行時實現上述圖像重建方法。電腦可讀儲存媒體可以是非揮發性電腦可讀儲存媒體或揮發性電腦可讀儲存媒體。The embodiment of the present disclosure also provides a computer-readable storage medium on which computer program instructions are stored, and the computer program instructions are executed by a processor to realize the above-mentioned image reconstruction method. The computer-readable storage medium may be a non-volatile computer-readable storage medium or a volatile computer-readable storage medium.

本公開實施例還提出一種電子設備，包括：處理器；用於儲存處理器可執行指令的記憶體；其中，所述處理器被配置為調用所述記憶體儲存的指令，以執行上述圖像重建方法。An embodiment of the present disclosure further provides an electronic device, including: a processor; and a memory for storing processor-executable instructions, wherein the processor is configured to invoke the instructions stored in the memory to execute the above image reconstruction method.

本公開實施例還提供了一種電腦程式產品，包括電腦可讀代碼，當電腦可讀代碼在設備上運行時，設備中的處理器執行用於實現如上任一實施例提供的圖像重建方法的指令。The embodiments of the present disclosure further provide a computer program product, including computer-readable code; when the computer-readable code runs on a device, a processor in the device executes instructions for implementing the image reconstruction method provided in any of the above embodiments.

本公開實施例還提供了另一種電腦程式產品,用於儲存電腦可讀指令,指令被執行時使得電腦執行上述任一實施例提供的圖像重建方法的操作。The embodiments of the present disclosure also provide another computer program product for storing computer-readable instructions, which when executed cause the computer to perform the operation of the image reconstruction method provided by any of the above-mentioned embodiments.

電子設備可以被提供為終端、伺服器或其它形態的設備。Electronic devices can be provided as terminals, servers, or other types of devices.

圖4示出根據本公開實施例的一種電子設備800的框圖。例如,電子設備800可以是行動電話,電腦,數位廣播終端,消息收發設備,遊戲控制台,平板設備,醫療設備,健身設備,個人數位助理等終端。FIG. 4 shows a block diagram of an electronic device 800 according to an embodiment of the present disclosure. For example, the electronic device 800 may be a mobile phone, a computer, a digital broadcasting terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, and other terminals.

Referring to FIG. 4, the electronic device 800 may include one or more of the following components: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output interface 812, a sensor component 814, and a communication component 816.

The processing component 802 generally controls the overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communication, camera operation, and recording operation. The processing component 802 may include one or more processors 820 to execute instructions so as to complete all or part of the steps of the method described above. In addition, the processing component 802 may include one or more modules to facilitate interaction between the processing component 802 and other components. For example, the processing component 802 may include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.

The memory 804 is configured to store various types of data to support operation of the electronic device 800. Examples of such data include instructions of any application or method operated on the electronic device 800, contact data, phone book data, messages, pictures, videos, and the like. The memory 804 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as a static random access memory (SRAM), an electrically erasable programmable read-only memory (EEPROM), an erasable programmable read-only memory (EPROM), a programmable read-only memory (PROM), a read-only memory (ROM), a magnetic memory, a flash memory, a magnetic disk, or an optical disc.

The power component 806 provides power for the various components of the electronic device 800. The power component 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 800.

The multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and a user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensors may not only sense the boundary of a touch or swipe action, but also detect the duration and pressure associated with the touch or swipe operation. In some embodiments, the multimedia component 808 includes a front camera and/or a rear camera. When the electronic device 800 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera may receive external multimedia data. Each of the front camera and the rear camera may be a fixed optical lens system or have focal length and optical zoom capability.

The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a microphone (MIC). When the electronic device 800 is in an operation mode, such as a call mode, a recording mode, or a voice recognition mode, the microphone is configured to receive external audio signals. The received audio signals may be further stored in the memory 804 or sent via the communication component 816. In some embodiments, the audio component 810 further includes a speaker for outputting audio signals.

The input/output interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be a keyboard, a click wheel, buttons, or the like. These buttons may include, but are not limited to, a home button, a volume button, a start button, and a lock button.

The sensor component 814 includes one or more sensors for providing state evaluations of various aspects of the electronic device 800. For example, the sensor component 814 may detect the on/off state of the electronic device 800 and the relative positioning of components, such as the display and keypad of the electronic device 800. The sensor component 814 may also detect a change in position of the electronic device 800 or a component thereof, the presence or absence of contact between the user and the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and a change in the temperature of the electronic device 800. The sensor component 814 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 814 may also include a light sensor, such as a complementary metal-oxide-semiconductor (CMOS) or charge-coupled device (CCD) image sensor, for use in imaging applications. In some embodiments, the sensor component 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.

The communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices. The electronic device 800 may access a wireless network based on a communication standard, such as WiFi, second-generation mobile communication technology (2G), third-generation mobile communication technology (3G), or a combination thereof. In an exemplary embodiment, the communication component 816 receives broadcast signals or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a near field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.

In an exemplary embodiment, the electronic device 800 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements, for performing the image reconstruction method described above.

In an exemplary embodiment, a non-volatile computer-readable storage medium is also provided, such as the memory 804 including computer program instructions, which may be executed by the processor 820 of the electronic device 800 to complete the image reconstruction method described above.

FIG. 5 shows a block diagram of an electronic device 1900 according to an embodiment of the present disclosure. For example, the electronic device 1900 may be provided as a server. Referring to FIG. 5, the electronic device 1900 includes a processing component 1922, which further includes one or more processors, and memory resources represented by a memory 1932 for storing instructions executable by the processing component 1922, such as application programs. The application programs stored in the memory 1932 may include one or more modules, each corresponding to a set of instructions. In addition, the processing component 1922 is configured to execute the instructions to perform the image reconstruction method described above.

The electronic device 1900 may also include a power component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output (I/O) interface 1958. The electronic device 1900 may operate based on an operating system stored in the memory 1932, such as the Microsoft server operating system (Windows Server™), the graphical-user-interface-based operating system from Apple (Mac OS X™), the multi-user, multi-process computer operating system (Unix™), the free and open-source Unix-like operating system (Linux™), the open-source Unix-like operating system (FreeBSD™), or the like.

In an exemplary embodiment, a non-volatile computer-readable storage medium is also provided, such as the memory 1932 including computer program instructions, which may be executed by the processing component 1922 of the electronic device 1900 to complete the image reconstruction method described above.

The present disclosure may be a system, a method, and/or a computer program product. The computer program product may include a computer-readable storage medium loaded with computer-readable program instructions for causing a processor to implement various aspects of the present disclosure.

The computer-readable storage medium may be a tangible device that can hold and store instructions used by an instruction execution device. The computer-readable storage medium may be, for example, but is not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples of the computer-readable storage medium (a non-exhaustive list) include: a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disc (DVD), a memory stick, a floppy disk, a mechanical encoding device such as a punched card or a raised structure in a groove having instructions stored thereon, and any suitable combination of the foregoing. A computer-readable storage medium, as used herein, is not to be construed as a transient signal per se, such as a radio wave or other freely propagating electromagnetic wave, an electromagnetic wave propagating through a waveguide or other transmission medium (for example, a light pulse through a fiber-optic cable), or an electrical signal transmitted through a wire.

The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to respective computing/processing devices, or downloaded to an external computer or external storage device via a network, such as the Internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, optical fiber transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers. A network interface card or network interface in each computing/processing device receives the computer-readable program instructions from the network, and forwards the computer-readable program instructions for storage in a computer-readable storage medium within the respective computing/processing device.

The computer program instructions for performing the operations of the present disclosure may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-related instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The computer-readable program instructions may be executed entirely on a user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case involving a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, an electronic circuit, such as a programmable logic circuit, a field-programmable gate array (FPGA), or a programmable logic array (PLA), is personalized by using state information of the computer-readable program instructions, and the electronic circuit may execute the computer-readable program instructions so as to implement various aspects of the present disclosure.

Various aspects of the present disclosure are described herein with reference to flowcharts and/or block diagrams of image reconstruction methods, image reconstruction apparatuses (systems), and computer program products according to embodiments of the present disclosure. It should be understood that each block of the flowcharts and/or block diagrams, and combinations of blocks in the flowcharts and/or block diagrams, can be implemented by computer-readable program instructions.

These computer-readable program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, or another programmable data processing apparatus, thereby producing a machine, such that the instructions, when executed by the processor of the computer or other programmable data processing apparatus, produce an apparatus for implementing the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams. These computer-readable program instructions may also be stored in a computer-readable storage medium, and the instructions cause a computer, a programmable data processing apparatus, and/or other devices to work in a specific manner, such that the computer-readable medium storing the instructions includes an article of manufacture including instructions for implementing various aspects of the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams.

The computer-readable program instructions may also be loaded onto a computer, another programmable data processing apparatus, or another device, such that a series of operational steps are performed on the computer, other programmable data processing apparatus, or other device to produce a computer-implemented process, so that the instructions executed on the computer, other programmable data processing apparatus, or other device implement the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams.

The flowcharts and block diagrams in the drawings show possible implementation architectures, functions, and operations of systems, methods, and computer program products according to multiple embodiments of the present disclosure. In this regard, each block in the flowcharts or block diagrams may represent a module, a program segment, or a part of instructions, and the module, program segment, or part of instructions contains one or more executable instructions for implementing specified logical functions. In some alternative implementations, the functions noted in the blocks may also occur in an order different from that noted in the drawings. For example, two consecutive blocks may in fact be executed substantially in parallel, and they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or actions, or can be implemented by a combination of dedicated hardware and computer instructions.

The computer program product may be specifically implemented by hardware, software, or a combination thereof. In an optional embodiment, the computer program product is specifically embodied as a computer storage medium. In another optional embodiment, the computer program product is specifically embodied as a software product, such as a software development kit (SDK).

Without violating logic, different embodiments of the present disclosure may be combined with one another. The descriptions of the different embodiments have their respective emphases; for parts not described in detail in one embodiment, reference may be made to the descriptions of other embodiments.

The embodiments of the present disclosure have been described above. The above description is exemplary rather than exhaustive, and is not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terms used herein are chosen to best explain the principles of the embodiments, their practical applications, or improvements to technologies in the market, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

S11~S13: process steps
D: discrimination network
E_C: first feature extraction network
E_P: second feature extraction network
R: image reconstruction network
T_e: detail enhancement network
X_LE: first sample event feature
X_C: second sample event feature
X_p: third sample event feature
X_DE: first sample fusion feature
[symbol, see drawings]: second reconstructed image
[symbol, see drawings]: fourth reconstructed image
[symbol, see drawings]: fifth sample event feature
21: first sample event information
22: second sample event information
23: noise information
24: noise information
31: event acquisition module
32: feature extraction module
33: image reconstruction module
800: electronic device
802: processing component
804: memory
806: power component
808: multimedia component
810: audio component
812: input/output interface
814: sensor component
816: communication component
820: processor
1900: electronic device
1922: processing component
1926: power component
1932: memory
1950: network interface
1958: input/output interface

Other features and effects of the present invention will be clearly presented in the embodiments described with reference to the drawings, in which:
FIG. 1 shows a flowchart of an image reconstruction method according to an embodiment of the present disclosure.
FIG. 2 shows a schematic diagram of a network training process of the image reconstruction method according to an embodiment of the present disclosure.
FIG. 3 shows a block diagram of an image reconstruction apparatus according to an embodiment of the present disclosure.
FIG. 4 shows a block diagram of an electronic device according to an embodiment of the present disclosure.
FIG. 5 shows a block diagram of an electronic device according to an embodiment of the present disclosure.

S11~S13: process steps

Claims (13)

1. An image reconstruction method, comprising:
acquiring event information of a target scene, the event information indicating a brightness change of the target scene within a first brightness range;
performing feature extraction on the event information to obtain a first event feature of the target scene; and
performing image reconstruction on the first event feature to obtain a reconstructed image of the target scene, wherein a brightness of the reconstructed image is within a second brightness range, and the second brightness range is higher than the first brightness range.

2. The image reconstruction method according to claim 1, wherein performing image reconstruction on the first event feature to obtain the reconstructed image of the target scene comprises:
performing detail enhancement on the first event feature according to first noise information and the first event feature to obtain a second event feature;
fusing the first event feature with the second event feature to obtain a fused feature; and
performing image reconstruction on the fused feature to obtain the reconstructed image of the target scene.
3. The image reconstruction method according to claim 1 or 2, wherein the image reconstruction method is implemented by an image processing network, the image processing network comprising a first feature extraction network and an image reconstruction network, the first feature extraction network being used to perform feature extraction on the event information, and the image reconstruction network being used to perform image reconstruction on the first event feature;
the image reconstruction method further comprises: training the image processing network according to a preset training set, the training set comprising first sample event information of a plurality of first sample scenes, and second sample event information and sample scene images of a plurality of second sample scenes;
wherein the first sample event information is acquired within a third brightness range, the second sample event information is acquired within a fourth brightness range, the sample scene images are acquired within the fourth brightness range, and the fourth brightness range is higher than the third brightness range.
4. The image reconstruction method according to claim 3, wherein the image processing network further comprises a discrimination network, and training the image processing network according to the preset training set comprises:
inputting the first sample event information of the first sample scene and the second sample event information of the second sample scene into the first feature extraction network respectively, to obtain a first sample event feature and a second sample event feature;
inputting the first sample event feature and the second sample event feature into the discrimination network respectively, to obtain a first discrimination result and a second discrimination result; and
adversarially training the image processing network according to the first discrimination result and the second discrimination result.

5. The image reconstruction method according to claim 4, wherein training the image processing network according to the preset training set further comprises:
inputting the second sample event feature into the image reconstruction network to obtain a first reconstructed image of the second sample scene; and
training the image processing network according to the first reconstructed image of the second sample scene and the sample scene image.
6. The image reconstruction method according to claim 5, wherein the image processing network further comprises a detail enhancement network, and training the image processing network according to the preset training set further comprises:
inputting the second sample event feature and third noise information into the detail enhancement network to obtain a fourth sample event feature;
fusing the second sample event feature with the fourth sample event feature to obtain a second sample fusion feature;
inputting the second sample fusion feature into the image reconstruction network to obtain a third reconstructed image of the second sample scene; and
training the image processing network according to the first reconstructed image of the second sample scene, the third reconstructed image, and the sample scene image.
7. The image reconstruction method according to claim 4, wherein the image processing network further comprises a second feature extraction network, and training the image processing network according to the preset training set further comprises:
inputting the second sample event information of the second sample scene and second noise information into the second feature extraction network to obtain a third sample event feature;
fusing the second sample event feature with the third sample event feature to obtain a first sample fusion feature;
inputting the first sample fusion feature into the discrimination network to obtain a third discrimination result; and
adversarially training the image processing network according to the first discrimination result and the third discrimination result.

8. The image reconstruction method according to claim 7, wherein training the image processing network according to the preset training set further comprises:
inputting the first sample fusion feature into the image reconstruction network to obtain a second reconstructed image of the second sample scene; and
training the image processing network according to the second reconstructed image of the second sample scene and the sample scene image.
9. The image reconstruction method according to claim 8, wherein the image processing network further comprises a detail enhancement network, and training the image processing network according to the preset training set further comprises:
inputting the first sample fusion feature and fourth noise information into the detail enhancement network to obtain a fifth sample event feature;
fusing the first sample fusion feature with the fifth sample event feature to obtain a third sample fusion feature;
inputting the third sample fusion feature into the image reconstruction network to obtain a fourth reconstructed image of the second sample scene; and
training the image processing network according to the second reconstructed image of the second sample scene, the fourth reconstructed image, and the sample scene image.
10. The image reconstruction method according to claim 9, wherein training the image processing network according to the second reconstructed image of the second sample scene, the fourth reconstructed image, and the sample scene image includes:
determining an overall loss of the image processing network according to the second reconstructed image of the second sample scene, the fourth reconstructed image, and the sample scene image;
determining gradient information of the image processing network according to the overall loss; and
adjusting network parameters of the first feature extraction network, the second feature extraction network, the detail enhancement network, and the image reconstruction network according to the gradient information,
wherein the gradient information of the detail enhancement network is not propagated to the second feature extraction network.
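The gradient-blocking rule above is what deep-learning frameworks call a stop-gradient (e.g. `Tensor.detach()` in PyTorch): the detail-enhancement branch consumes the fusion feature as a constant, so its gradient never reaches the second feature-extraction network. A toy scalar illustration with hand-computed gradients, where every "network" is a hypothetical one-parameter multiplication and additive fusion is an assumption:

```python
# Forward pass through hypothetical scalar "networks".
x = 2.0          # input to the second feature-extraction network
w_feat = 3.0     # second feature-extraction network parameter
w_det = 0.5      # detail-enhancement network parameter

feat = w_feat * x       # stands in for the first sample fusion feature
detail = w_det * feat   # detail-enhancement output
out = feat + detail     # third sample fusion feature (additive fusion, an assumption)

# With the stop-gradient, d(out)/d(w_feat) counts only the direct path through
# `feat`; the path through `detail` treats `feat` as a constant.
grad_w_feat_blocked = x              # = d(feat)/d(w_feat), detail path blocked
# Without blocking, the detail path would contribute an extra w_det * x term.
grad_w_feat_full = x + w_det * x
```

The two gradients differ (2.0 vs 3.0 here), which is exactly the effect the claim requires: the detail-enhancement network is trained on top of the fusion feature without perturbing the feature extractor that produced it.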
11. An image reconstruction apparatus, including:
an event acquisition module, configured to acquire event information of a target scene, the event information indicating a brightness change of the target scene within a first brightness range;
a feature extraction module, configured to perform feature extraction on the event information to obtain a first event feature of the target scene; and
an image reconstruction module, configured to perform image reconstruction on the first event feature to obtain a reconstructed image of the target scene, wherein the brightness of the reconstructed image is within a second brightness range, and the second brightness range is higher than the first brightness range.

12. An electronic device, including:
a processor; and
a memory for storing processor-executable instructions,
wherein the processor is configured to call the instructions stored in the memory to execute the image reconstruction method according to any one of claims 1 to 10.

13. A computer-readable storage medium having computer program instructions stored thereon, wherein the computer program instructions, when executed by a processor, implement the image reconstruction method according to any one of claims 1 to 10.
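The apparatus claim names three modules but fixes neither their interfaces nor their internals. A minimal sketch of the acquire → extract → reconstruct pipeline, under assumed toy implementations (random events, a linear feature map, and a sigmoid decoder rescaled into a hypothetical second brightness range of [0.5, 1.0]):

```python
import numpy as np

rng = np.random.default_rng(1)

class EventAcquisition:
    """Stands in for the event-acquisition module: yields per-pixel brightness-change
    events of the target scene within the first (lower) brightness range."""
    def get_events(self, h=4, w=4):
        return rng.normal(size=(h, w))

class FeatureExtractor:
    """Stands in for the feature-extraction module: a toy linear map + ReLU
    producing the first event feature."""
    def __init__(self, h=4, w=4):
        self.W = rng.normal(size=(h * w, h * w)) * 0.1
    def extract(self, events):
        return np.maximum(0.0, events.reshape(-1) @ self.W)

class ImageReconstructor:
    """Stands in for the image-reconstruction module: maps the feature back to an
    image whose brightness lies in the second (higher) range."""
    def __init__(self, h=4, w=4, low=0.5, high=1.0):
        self.W = rng.normal(size=(h * w, h * w)) * 0.1
        self.low, self.high = low, high
        self.shape = (h, w)
    def reconstruct(self, feature):
        img = 1.0 / (1.0 + np.exp(-(feature @ self.W)))  # squash to (0, 1)
        # Rescale into the second brightness range.
        return (self.low + (self.high - self.low) * img).reshape(self.shape)

acq, ext, rec = EventAcquisition(), FeatureExtractor(), ImageReconstructor()
image = rec.reconstruct(ext.extract(acq.get_events()))
```

By construction every pixel of `image` falls inside the assumed second brightness range, mirroring the claim's requirement that the reconstructed brightness range sit above the acquisition range.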
TW109125062A 2020-03-31 2020-07-24 Image reconstruction method and image reconstruction device, electronic device and computer-readable storage medium TWI765304B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010243153.4A CN111462268B (en) 2020-03-31 2020-03-31 Image reconstruction method and device, electronic equipment and storage medium
CN202010243153.4 2020-03-31

Publications (2)

Publication Number Publication Date
TW202139140A true TW202139140A (en) 2021-10-16
TWI765304B TWI765304B (en) 2022-05-21

Family

ID=71682204

Family Applications (1)

Application Number Title Priority Date Filing Date
TW109125062A TWI765304B (en) 2020-03-31 2020-07-24 Image reconstruction method and image reconstruction device, electronic device and computer-readable storage medium

Country Status (3)

Country Link
CN (1) CN111462268B (en)
TW (1) TWI765304B (en)
WO (1) WO2021196401A1 (en)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114205646B (en) * 2020-09-18 2024-03-29 阿里巴巴达摩院(杭州)科技有限公司 Data processing method, device, electronic equipment and storage medium
CN112712170B (en) * 2021-01-08 2023-06-20 西安交通大学 Neuromorphic visual target classification system based on input weighted impulse neural network
CN112785672B (en) * 2021-01-19 2022-07-05 浙江商汤科技开发有限公司 Image processing method and device, electronic equipment and storage medium
CN112668557A (en) * 2021-01-29 2021-04-16 南通大学 Method for defending image noise attack in pedestrian re-identification system
CN112950497A (en) * 2021-02-22 2021-06-11 上海商汤智能科技有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN113506229B (en) * 2021-07-15 2024-04-12 清华大学 Neural network training and image generating method and device
CN113506320B (en) * 2021-07-15 2024-04-12 清华大学 Image processing method and device, electronic equipment and storage medium
CN113506325B (en) * 2021-07-15 2024-04-12 清华大学 Image processing method and device, electronic equipment and storage medium
CN113837938B (en) * 2021-07-28 2022-09-09 北京大学 Super-resolution method for reconstructing potential image based on dynamic vision sensor
CN114663842B (en) * 2022-05-25 2022-09-09 深圳比特微电子科技有限公司 Image fusion processing method and device, electronic equipment and storage medium
CN115661336A (en) * 2022-09-21 2023-01-31 华为技术有限公司 Three-dimensional reconstruction method and related device
CN115578295B (en) * 2022-11-17 2023-04-07 中国科学技术大学 Video rain removing method, system, equipment and storage medium
CN116456183B (en) * 2023-04-20 2023-09-26 北京大学 High dynamic range video generation method and system under guidance of event camera
CN117576522B (en) * 2024-01-18 2024-04-26 之江实验室 Model training method and device based on mimicry structure dynamic defense

Family Cites Families (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4424518B2 (en) * 2007-03-27 2010-03-03 セイコーエプソン株式会社 Image processing apparatus, image processing method, and image processing program
TWI369122B (en) * 2008-12-03 2012-07-21 Altek Corp Method for improving image resolution
TWI401963B (en) * 2009-06-25 2013-07-11 Pixart Imaging Inc Dynamic image compression method for face detection
US20110085729A1 (en) * 2009-10-12 2011-04-14 Miaohong Shi De-noising method and related apparatus for image sensor
MX352400B (en) * 2013-07-12 2017-11-23 Sony Corp Player device, play method, and recording medium.
EP3046319A1 (en) * 2015-01-19 2016-07-20 Thomson Licensing Method for generating an HDR image of a scene based on a tradeoff between brightness distribution and motion
KR101680602B1 (en) * 2015-06-03 2016-11-29 한국생산기술연구원 System, apparatus and method for reconstructing three dimensional internal image and non-transitory computer-readable recording medium
EP3504682B1 (en) * 2016-08-24 2020-07-15 Universität Zürich Simultaneous localization and mapping with an event camera
CN108073857B (en) * 2016-11-14 2024-02-27 北京三星通信技术研究有限公司 Dynamic visual sensor DVS event processing method and device
CN107395983B (en) * 2017-08-24 2020-04-07 维沃移动通信有限公司 Image processing method, mobile terminal and computer readable storage medium
CN108154474B (en) * 2017-12-22 2021-08-27 浙江大华技术股份有限公司 Super-resolution image reconstruction method, device, medium and equipment
CN108182670B (en) * 2018-01-15 2020-11-10 清华大学 Resolution enhancement method and system for event image
KR102083721B1 (en) * 2018-03-06 2020-03-02 한국과학기술원 Stereo Super-ResolutionImaging Method using Deep Convolutional Networks and Apparatus Therefor
CN109801214B (en) * 2018-05-29 2023-08-29 京东方科技集团股份有限公司 Image reconstruction device, image reconstruction method, image reconstruction device, image reconstruction apparatus, computer-readable storage medium
CN109087269B (en) * 2018-08-21 2020-08-04 厦门美图之家科技有限公司 Weak light image enhancement method and device
CN109118430B (en) * 2018-08-24 2023-05-09 深圳市商汤科技有限公司 Super-resolution image reconstruction method and device, electronic equipment and storage medium
CN109685746B (en) * 2019-01-04 2021-03-05 Oppo广东移动通信有限公司 Image brightness adjusting method and device, storage medium and terminal
CN109859144B (en) * 2019-02-22 2021-03-12 上海商汤智能科技有限公司 Image processing method and device, electronic equipment and storage medium
CN110070498A (en) * 2019-03-12 2019-07-30 浙江工业大学 A kind of image enchancing method based on convolution self-encoding encoder
CN109977876A (en) * 2019-03-28 2019-07-05 腾讯科技(深圳)有限公司 Image-recognizing method, calculates equipment, system and storage medium at device
CN109981991A (en) * 2019-04-17 2019-07-05 北京旷视科技有限公司 Model training method, image processing method, device, medium and electronic equipment
CN110533097B (en) * 2019-08-27 2023-01-06 腾讯科技(深圳)有限公司 Image definition recognition method and device, electronic equipment and storage medium
CN110769196A (en) * 2019-10-17 2020-02-07 天津大学 Video prediction method for discontinuous monitoring road section

Also Published As

Publication number Publication date
WO2021196401A1 (en) 2021-10-07
TWI765304B (en) 2022-05-21
CN111462268B (en) 2022-11-11
CN111462268A (en) 2020-07-28

Similar Documents

Publication Publication Date Title
TW202139140A (en) Image reconstruction method and apparatus, electronic device and storage medium
TWI766286B (en) Image processing method and image processing device, electronic device and computer-readable storage medium
TWI777162B (en) Image processing method and apparatus, electronic device and computer-readable storage medium
TWI717865B (en) Image processing method and device, electronic equipment, computer readable recording medium and computer program product
TWI724736B (en) Image processing method and device, electronic equipment, storage medium and computer program
US20210089799A1 (en) Pedestrian Recognition Method and Apparatus and Storage Medium
TWI759647B (en) Image processing method, electronic device, and computer-readable storage medium
TWI706379B (en) Method, apparatus and electronic device for image processing and storage medium thereof
TWI738172B (en) Video processing method and device, electronic equipment, storage medium and computer program
WO2021031609A1 (en) Living body detection method and device, electronic apparatus and storage medium
JP7106687B2 (en) Image generation method and device, electronic device, and storage medium
TW202105244A (en) Image processing method and device, electronic equipment and storage medium
TW202113757A (en) Target object matching method and apparatus, electronic device and storage medium
WO2021093375A1 (en) Method, apparatus, and system for detecting people walking together, electronic device and storage medium
CN111340731B (en) Image processing method and device, electronic equipment and storage medium
JP2022544893A (en) Network training method and apparatus, target detection method and apparatus, and electronic equipment
TW202107337A (en) Face image recognition method and device, electronic device and storage medium
CN110659690B (en) Neural network construction method and device, electronic equipment and storage medium
JP2022522551A (en) Image processing methods and devices, electronic devices and storage media
JP2022526381A (en) Image processing methods and devices, electronic devices and storage media
TW202032425A (en) Method, apparatus and electronic device for image processing and storage medium
WO2020220807A1 (en) Image generation method and apparatus, electronic device, and storage medium
CN107025441B (en) Skin color detection method and device
TWI738349B (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN113032627A (en) Video classification method and device, storage medium and terminal equipment