WO2020087607A1 - Bi-skip-net-based image deblurring method - Google Patents
- Publication number
- WO2020087607A1 (PCT/CN2018/117634)
- Authority
- WIPO (PCT)
Classifications
- G06T5/73
- G06N3/045 - Combinations of networks (G06N3/04 Architecture, e.g. interconnection topology; G06N3/02 Neural networks; G06N3/00 Computing arrangements based on biological models; G06N Computing arrangements based on specific computational models)
Abstract
The present invention relates to the field of digital image processing, and in particular to a Bi-Skip-Net-based image deblurring method that restores blurred images by means of a Bi-Skip-Net network. It aims to solve the problems of existing deep-learning deblurring algorithms: high time complexity, inaccurate texture restoration, and checkerboard artifacts in the restored image. In the present disclosure, a Bi-Skip-Net serves as the generative network of a GAN (Generative Adversarial Network) to overcome these defects. Compared with the best existing algorithms, the present invention reduces the runtime by 0.1 s and improves image restoration performance by an average of 1 dB.
Description
The invention relates to the field of digital image processing, and in particular to a Bi-Skip-Net-based image deblurring method that restores blurred images through a Bi-Skip-Net network.
Deblurring is a widely studied topic in image and video processing. Blur caused by camera shake seriously degrades the imaging quality and visual appearance of images. As an important branch of image preprocessing, advances in deblurring directly affect the performance of other computer vision algorithms, such as foreground segmentation, object detection, and behavior analysis; deblurring also affects image coding performance. A high-performance deblurring algorithm is therefore essential.
References 1-3 cover deblurring for image and video processing and deep-learning deblurring algorithms.
Reference 1: Kupyn O, Budzan V, Mykhailych M, et al. DeblurGAN: Blind Motion Deblurring Using Conditional Adversarial Networks [J]. arXiv preprint arXiv:1711.07064, 2017.
Reference 2: Nah S, Kim T H, Lee K M. Deep multi-scale convolutional neural network for dynamic scene deblurring [C] // CVPR. 2017, 1(2): 3.
Reference 3: Sun J, Cao W, Xu Z, et al. Learning a convolutional neural network for non-uniform motion blur removal [C] // Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2015: 769-777.
In general, image deblurring algorithms fall into traditional algorithms based on probability models and deblurring algorithms based on deep learning. Traditional algorithms use a convolution model to explain the cause of blur: the camera-shake trajectory maps to a blur kernel, the PSF (Point Spread Function). Restoring a sharp image when the blur kernel is unknown is an ill-posed problem, so the kernel is usually estimated first and the restored image is then obtained by deconvolving with the estimated kernel. Deep-learning deblurring algorithms instead use a deep network to capture the latent information of the image and then restore it; they can implement the two operations of blur-kernel estimation and non-blind deconvolution, or restore the image with a generative adversarial mechanism. This patent aims to solve the following shortcomings of existing deblurring algorithms:
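The convolutional blur model described above can be sketched numerically. The toy example below (an illustration only, not the patent's implementation) smears a 1-D signal with a normalized 3-tap motion PSF:

```python
import numpy as np

def apply_psf(signal, psf):
    """Blur a 1-D signal by convolving it with a point spread function (PSF)."""
    psf = np.asarray(psf, dtype=float)
    psf = psf / psf.sum()  # a PSF conserves energy, so normalize it to sum to 1
    return np.convolve(signal, psf, mode="same")

# A 3-tap horizontal motion-blur PSF spreads each pixel over its neighbors.
sharp = np.array([0.0, 0.0, 9.0, 0.0, 0.0])
blurred = apply_psf(sharp, [1, 1, 1])
print(blurred)  # the single bright sample is smeared across three samples
```

Deblurring inverts this operation; since many sharp signals blur to nearly the same observation, the inversion is ill-posed, which is why traditional methods estimate the kernel first.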
1) high time complexity;
2) inaccurate texture restoration;
3) checkerboard artifacts in the restored image.
Disclosure of the invention
The present invention proposes a Bi-Skip-Net network as the generative network of a GAN (Generative Adversarial Network), aiming to overcome the shortcomings of existing deep-learning deblurring algorithms. Compared with the best existing algorithms, the present invention reduces the runtime by 0.1 s and improves image restoration performance by an average of 1 dB.
The technical solution provided by the present invention is as follows:
The invention uses a generative adversarial network mechanism to restore blurred images and designs a Bi-Skip-Net network as its generator. The specific steps are as follows:
1) Input the blurred image and obtain shallow features through a convolutional layer with a 7x7 kernel and a stride of 1.
2) Pass the shallow features through 3 residual blocks to obtain the depth features at the current scale.
3) Downsample the depth features with a residual connection to obtain the shallow features at the next scale.
4) Repeat steps 2 and 3 for the prescribed number of downsampling operations n to obtain shallow and depth features at each scale; do not extract depth features at the smallest scale.
5) Take the shallow features at the smallest scale as the basic features.
6) Pass the shallow features of the previous scale through a convolutional layer with a 1x1 kernel and a stride of 1 to obtain shallow dimension-reduced features; pass the corresponding depth features through a convolutional layer with a 3x3 kernel and a stride of 2 to obtain depth dimension-reduced features, concatenate them with the basic features, and upsample; concatenate the upsampled features with the shallow dimension-reduced features to obtain the basic features at the current scale.
7) Repeat step 6 until the upsampling operations are complete.
8) Pass the resulting basic features through a convolutional layer with a 7x7 kernel and a stride of 1 to obtain residual features.
9) Add the residual features to the input image to obtain the restored image.
……
Bi-Skip-Net plus a residual connection is used as the generator.
In step 4), the prescribed number of downsampling operations is 5.
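With n = 5 downsamplings, the repeated steps 2-4 produce a six-level feature pyramid. The sketch below traces the spatial sizes, assuming each downsampling halves the resolution (the text prescribes only the count n = 5; the factor of 2 is an assumption for illustration):

```python
def pyramid_sizes(height, width, n_down=5):
    """Spatial sizes of the shallow-feature scales produced by the repeated
    downsampling steps, assuming each downsampling halves the resolution."""
    sizes = [(height, width)]
    for _ in range(n_down):
        height, width = height // 2, width // 2
        sizes.append((height, width))
    return sizes

print(pyramid_sizes(256, 256))
```

The smallest scale at the end of this list is where only shallow features are kept (step 4) and used as the basic features (step 5).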
The blurred image passes through the generator to obtain the restored image. The discriminator's task is to distinguish the restored image from the sharp image as well as possible, while the generator's task is to deceive the discriminator and reduce its ability to tell the two images apart.
The Bi-Skip-Net network consists of three parts: a contract path (D), a Skip path (S), and an expand path (U). The contract layers downsample to compress features, the Skip layers connect deep features with shallow features, and the expand layers upsample. D*, S*, and U* denote the features at the corresponding downsampling scales.
In the feature operations at each sampling scale: in the contract path, the current features pass through 3 residual blocks (3xResBlock) to obtain deep features, and a residual pattern that adds pooling and convolution outputs produces the features of the next scale; in the Skip path, shallow features are compressed by 1x1 convolutions and deep features by 3x3 convolutions; in the expand path, features are connected by concat and upsampled by 3x3 deconvolution.
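The kernel and stride choices above can be checked with the standard convolution output-size formula. The sketch below (the padding values are assumptions, chosen for "same"-style behavior) shows that the 1x1/stride-1 skip convolution preserves the shallow feature's resolution, while the 3x3/stride-2 convolution on the deep features halves it so both can be aligned for concatenation:

```python
def conv_out(size, kernel, stride, pad):
    """Output size of a convolution: floor((size + 2*pad - kernel) / stride) + 1."""
    return (size + 2 * pad - kernel) // stride + 1

# Skip path at a 64-pixel scale (padding values assumed):
shallow = conv_out(64, kernel=1, stride=1, pad=0)  # 1x1, stride 1: resolution kept
deep = conv_out(64, kernel=3, stride=2, pad=1)     # 3x3, stride 2: resolution halved
print(shallow, deep)  # 64 32
```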
The present invention has the following technical effects. Because it uses the Bi-Skip-Net network as the generative network of a GAN (Generative Adversarial Network), it offers the following advantages over the prior art:
1. Low time complexity. Traditional motion-deblurring methods use two steps, blur-kernel estimation and non-blind deconvolution, and both require many iterations to achieve a good restoration, so processing a single motion-blurred image takes a long time. The model designed by the present invention avoids the time cost of iterative optimization.
2. Accurate texture restoration. In traditional methods, inaccurate blur-kernel estimation corrupts the image information recovered during restoration, and the non-blind deconvolution step often produces ringing artifacts in textured regions. The double-skip connection network designed here extracts deep and shallow features at every scale; through these feature connections the network can, to a certain extent, recover more detail.
3. No checkerboard artifacts in the restored image. Most existing deep-learning methods implement upsampling with deconvolution layers, and each deconvolution introduces some aliasing, leaving jagged patterns in the final restored image, i.e., the checkerboard effect referred to in the present invention.
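The checkerboard effect mentioned above comes from uneven kernel overlap in strided transposed convolution. The sketch below counts how many kernel taps touch each output sample for an assumed 3-tap kernel with stride 2 (an illustration of the general mechanism, not the patent's layers); the alternating coverage counts are what produce the periodic artifact:

```python
def deconv_coverage(in_len, kernel, stride):
    """For a 1-D transposed convolution, count how many kernel taps contribute
    to each output sample; uneven counts cause checkerboard artifacts."""
    out_len = (in_len - 1) * stride + kernel
    cover = [0] * out_len
    for i in range(in_len):
        for k in range(kernel):
            cover[i * stride + k] += 1
    return cover

print(deconv_coverage(5, kernel=3, stride=2))  # coverage alternates between 1 and 2
```

When the kernel size is not a multiple of the stride, as here, no choice of weights can make the per-sample contribution uniform, so the output carries a stride-periodic pattern.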
To clarify the concept and principle of the present invention, it is described in detail below with reference to the drawings and embodiments. The description of specific embodiments does not limit the scope of protection of the present invention in any way.
Brief description of the drawings
Figure 1 shows the generative adversarial network mechanism of the present invention.
Figure 2 shows the Bi-Skip-Net network structure of the present invention.
Figure 3 shows the feature operations at one sampling scale.
Figure 4 shows the generator design: Bi-Skip-Net + residual.
Figures 5a-d show a subjective comparison between the present invention and other algorithms, where:
Figure 5a: blurred image;
Figure 5b: restoration by Nah et al.;
Figure 5c: restoration by Kupyn et al.;
Figure 5d: restoration by the Bi-Skip-Net of the present invention.
Best mode for carrying out the invention
Figure 1 shows the generative adversarial network mechanism adopted by the present invention. The blurred image passes through the generator to obtain the restored image; the discriminator's task is to distinguish the restored image from the sharp image as well as possible, while the generator's task is to deceive the discriminator and reduce its ability to tell the two images apart.
The specific steps of the embodiment of the present invention are as follows:
(1) Design the generator and discriminator. The principle is shown in Figure 4: a blurred image of a building passes through the Bi-Skip-Net generator to yield a sharp picture of the building; any other blurred image can likewise be restored with this model.
(2) Train the network with the following loss function, which combines an adversarial loss with a conditional loss whose weight is λ. The generator G is optimized by minimizing Equation 3. In the conditional loss, L and S denote the model output and the ground truth at each level respectively, α takes the value 1 or 2, and the whole conditional loss is normalized by the number of channels c, the width w, and the height h.
(3) The trained network serves as the final restoration model.
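The conditional-loss formula itself appears only as an image in the original publication, so the sketch below is an illustrative reconstruction from the description: an L-α difference between the model output L and the ground truth S at each level, normalized by channels c, width w, and height h (the exact form of Equation 3 may differ):

```python
import numpy as np

def conditional_loss(outputs, targets, alpha=2):
    """Illustrative reconstruction of the conditional loss: per-level L-alpha
    difference between output L and ground truth S, normalized by c*w*h."""
    total = 0.0
    for L, S in zip(outputs, targets):
        c, h, w = L.shape
        total += np.sum(np.abs(L - S) ** alpha) / (c * w * h)
    return total

out = [np.ones((3, 4, 4))]   # hypothetical single-level output
gt = [np.zeros((3, 4, 4))]   # hypothetical ground truth
print(conditional_loss(out, gt, alpha=2))  # 1.0: every element differs by 1
```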
As shown in Figure 1, the method of this embodiment uses a generative adversarial network mechanism to restore blurred images. Figure 2 shows the Bi-Skip-Net network structure; with this structure, a Bi-Skip-Net network is designed as the generator.
The discriminator parameters for this Bi-Skip-Net network structure are listed in Table 1.
Table 1. Discriminator parameters
| Layer | Type | Parameter dimensions | Stride |
|-------|------|----------------------|--------|
| 1 | conv | 32x3x5x5 | 2 |
| 2 | conv | 64x32x5x5 | 1 |
| 3 | conv | 64x64x5x5 | 2 |
| 4 | conv | 128x64x5x5 | 1 |
| 5 | conv | 128x128x5x5 | 4 |
| 6 | conv | 256x128x5x5 | 1 |
| 7 | conv | 256x256x5x5 | 4 |
| 8 | conv | 512x256x5x5 | 1 |
| 9 | conv | 512x512x4x4 | 4 |
| 10 | fc | 512x1x1x1 | - |
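Reading the stride column of Table 1, the nine convolutional layers reduce the spatial resolution by a factor of 2·2·4·4·4 = 256 in each dimension (assuming padding such that each layer's reduction equals its stride), so an input on the order of 256x256 reaches the final fully-connected layer as a 1x1 map:

```python
# Strides of the nine conv layers in Table 1; their product is the total
# spatial reduction before the fully-connected layer.
strides = [2, 1, 2, 1, 4, 1, 4, 1, 4]
total = 1
for s in strides:
    total *= s
print(total)  # 256
```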
As shown in Figure 2, the Bi-Skip-Net network designed in this embodiment consists of three parts: a contract path (D), comprising D0, D1, D2, and D3; a Skip path (S), comprising S0, S1, S2, and S3; and an expand path (U), comprising U0, U1, U2, and U3. The contract layers downsample to compress features, the Skip layers connect deep features with shallow features, and the expand layers upsample. D* (D0-D3), S* (S0-S3), and U* (U0-U3) denote the features at the corresponding downsampling scales.
Figure 3 shows the feature operations at one sampling scale. In the contract path, i.e., the compression path, the current features pass through 3 residual blocks (3xResBlock) to obtain deep features, and a residual pattern that adds pooling and convolution outputs produces the features of the next scale. In the Skip path, i.e., the cross-connection path, shallow features are compressed by 1x1 convolutions and deep features by 3x3 convolutions. In the expand path, features are connected by concat, i.e., concatenation, and upsampled by 3x3 deconvolution.
Figure 4 shows the generator design: Bi-Skip-Net + residual. As shown in Figure 4, Bi-Skip-Net plus a residual connection is finally adopted as the generator.
The comparison between this embodiment of the present invention and other algorithms is given in Table 2, a test comparison on the GoPro dataset.
Table 2. Test comparison between the present invention and other algorithms on the GoPro dataset
Figures 5a-d show a subjective comparison between the present invention and other algorithms. Figure 5a is the blurred image, Figure 5b the restoration by Nah et al., Figure 5c the restoration by Kupyn et al., and Figure 5d the restoration by the Bi-Skip-Net of the present invention. The text "HARDWARE" in the lower-left corner of the picture is unrecognizable or blurred in the other three images, while the present invention restores it clearly and legibly. The subjective comparison shows that the present invention noticeably repairs blurred images.
Note that the disclosed embodiments are intended to aid understanding of the present invention. Those skilled in the art will appreciate that various replacements and modifications are possible without departing from the spirit and scope of the present invention and the appended claims. The present invention is therefore not limited to what the embodiments disclose; the scope of protection claimed is defined by the claims.
Industrial applicability
The invention is applied in the field of digital image processing, where a Bi-Skip-Net-based image deblurring method restores blurred images.
Claims (5)
- A Bi-Skip-Net-based image deblurring method, comprising the following steps:
1) inputting a blurred image and obtaining shallow features through a convolutional layer with a 7x7 kernel and a stride of 1;
2) passing the shallow features through 3 residual blocks to obtain the depth features at the current scale;
3) downsampling the depth features with a residual connection to obtain the shallow features at the next scale;
4) repeating steps 2 and 3 for the prescribed number of downsampling operations n to obtain shallow and depth features at each scale, without extracting depth features at the smallest scale;
5) taking the shallow features at the smallest scale as the basic features;
6) passing the shallow features of the previous scale through a convolutional layer with a 1x1 kernel and a stride of 1 to obtain shallow dimension-reduced features; passing the corresponding depth features through a convolutional layer with a 3x3 kernel and a stride of 2 to obtain depth dimension-reduced features, concatenating them with the basic features, and upsampling; concatenating the upsampled features with the shallow dimension-reduced features to obtain the basic features at the current scale;
7) repeating step 6 until the upsampling operations are complete;
8) passing the resulting basic features through a convolutional layer with a 7x7 kernel and a stride of 1 to obtain residual features;
9) adding the residual features to the input image to obtain the restored image;
10) using Bi-Skip-Net plus a residual connection as the generator.
- The image deblurring method according to claim 1, characterized in that the specified number of downsampling operations in step 4) is 5.
- The image deblurring method according to claim 1, characterized in that the Bi-Skip-Net consists of three parts: a contract path (D), a skip path (S), and an expand path (U); the contract layers downsample to compress features, the skip layers connect deep features with shallow features, and the expand layers upsample; D*, S*, and U* denote the features at the corresponding downsampling scales.
- The image deblurring method according to claim 3, characterized in that the feature operations at each sampling scale are as follows: in the contract path, the current features pass through 3 residual blocks (3xResBlock) to obtain deep features, and the features at the next scale are obtained in a residual mode that adds the outputs of pooling and convolution; in the skip path, shallow features are compressed by 1x1 convolutions and deep features by 3x3 convolutions; in the expand path, features are connected by concatenation (concat) and upsampled by 3x3 deconvolution.
- The image deblurring method according to claim 1, characterized in that the generator of step 10) is designed as follows:
  ① The network is trained with a loss of the form L_total = L_adv + λ·L_cond, where L_adv is the adversarial loss, L_cond is the conditional loss, and λ is the weight of the conditional loss; the generator G is optimized by minimizing this total loss. In the conditional loss, L and S denote the model outputs and the ground truth at the different levels, respectively, α takes the value 1 or 2, and the whole conditional loss is normalized by the number of channels c, the width w, and the height h;
  ② The trained network is taken as the final restoration model.
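The multi-scale schedule of claim 1 (steps 1-4) can be sketched as plain shape bookkeeping: the 7x7 stride-1 convolution preserves spatial size, each downsampling halves it, and the smallest scale carries shallow features only. This is an illustrative pure-Python sketch; the function and field names are ours, not the patent's.

```python
# Scale schedule of the Bi-Skip-Net encoder (claim 1, steps 1-4).
# Shape bookkeeping only; names are illustrative, not from the patent.

def scale_pyramid(height, width, n_down=5):
    """Return the (H, W) spatial size at each scale.

    Scale 0 holds the shallow features from the 7x7 stride-1 conv
    (spatial size unchanged); each downsampling step halves H and W.
    """
    sizes = [(height, width)]
    for _ in range(n_down):
        h, w = sizes[-1]
        sizes.append((h // 2, w // 2))
    return sizes

def encoder_features(height, width, n_down=5):
    """List which features exist at each scale.

    Every scale has shallow features; every scale except the smallest
    also has deep features (produced by the 3 residual blocks).
    """
    sizes = scale_pyramid(height, width, n_down)
    return [{"scale": i,
             "size": (h, w),
             "shallow": True,
             "deep": i < n_down}  # no deep features at the smallest scale
            for i, (h, w) in enumerate(sizes)]

pyramid = encoder_features(256, 256)
print(pyramid[0]["size"])   # (256, 256)
print(pyramid[-1]["size"])  # (8, 8) -- smallest scale after 5 halvings
```

With the n = 5 of claim 2, a 256x256 input yields six scales (256 down to 8), and only the 8x8 scale omits the deep-feature branch.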
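The skip path of claim 4 compresses shallow features with a 1x1 convolution, which is simply a per-pixel linear map over channels. The following dependency-free sketch makes that explicit; it is our illustration of the operation, not the patent's implementation.

```python
# 1x1 convolution as per-pixel channel mixing (skip path, claim 4).
# Illustrative pure-Python code; real systems would use a DL framework.

def conv1x1(feature, weight):
    """Apply a 1x1 convolution.

    feature: nested list [C_in][H][W]
    weight:  nested list [C_out][C_in]
    returns: nested list [C_out][H][W]
    """
    c_in = len(feature)
    h, w = len(feature[0]), len(feature[0][0])
    out = []
    for w_row in weight:  # one output channel per weight row
        channel = [[sum(w_row[c] * feature[c][y][x] for c in range(c_in))
                    for x in range(w)]
                   for y in range(h)]
        out.append(channel)
    return out

# Compress 4 input channels down to 2 by averaging pairs of channels.
feat = [[[float(c)] * 3 for _ in range(3)] for c in range(4)]  # channel c filled with value c
wt = [[0.5, 0.5, 0.0, 0.0],
      [0.0, 0.0, 0.5, 0.5]]
out = conv1x1(feat, wt)
print(len(out))      # 2 output channels
print(out[0][0][0])  # 0.5 (average of channels 0 and 1)
print(out[1][0][0])  # 2.5 (average of channels 2 and 3)
```

Because a 1x1 kernel never mixes neighbouring pixels, it reduces channel dimensionality without altering spatial structure, which is why the claim uses it on the shallow (detail-bearing) features.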
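Claim 5 states that the conditional loss compares output L with truth S under an α-power error (α = 1 or 2) and is normalized by channels c, width w and height h. A minimal single-level sketch, with flat lists standing in for tensors (our assumption of the formula's shape, since the patent's equation images are not reproduced here):

```python
# Single-level conditional loss sketch (claim 5): mean alpha-power error
# between model output and ground truth, normalized by c * w * h.
# Illustrative code under stated assumptions, not the patent's exact formula.

def conditional_loss(output, target, c, w, h, alpha=1):
    """Sum of |L - S|^alpha over all elements, divided by c * w * h."""
    assert len(output) == len(target) == c * w * h
    total = sum(abs(o - t) ** alpha for o, t in zip(output, target))
    return total / (c * w * h)

pred = [0.0, 1.0, 2.0, 3.0]
true = [0.0, 0.0, 0.0, 0.0]
print(conditional_loss(pred, true, c=1, w=2, h=2, alpha=1))  # (0+1+2+3)/4 = 1.5
print(conditional_loss(pred, true, c=1, w=2, h=2, alpha=2))  # (0+1+4+9)/4 = 3.5
```

In the full method this term would be summed over the model's levels and added to the adversarial loss with weight λ before minimizing over the generator G.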
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811298475.8 | 2018-11-02 | ||
CN201811298475.8A CN109410146A (en) | 2018-11-02 | 2018-11-02 | A kind of image deblurring algorithm based on Bi-Skip-Net |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020087607A1 true WO2020087607A1 (en) | 2020-05-07 |
Family
ID=65471437
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2018/117634 WO2020087607A1 (en) | 2018-11-02 | 2018-11-27 | Bi-skip-net-based image deblurring method |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN109410146A (en) |
WO (1) | WO2020087607A1 (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111612711B (en) * | 2019-05-31 | 2023-06-09 | 北京理工大学 | Picture deblurring method based on generation of countermeasure network improvement |
CN110570375B (en) * | 2019-09-06 | 2022-12-09 | 腾讯科技(深圳)有限公司 | Image processing method, device, electronic device and storage medium |
CN112102184A (en) * | 2020-09-04 | 2020-12-18 | 西北工业大学 | Image deblurring method based on Scale-Encoder-Decoder-Net network |
CN113570516B (en) * | 2021-07-09 | 2022-07-22 | 湖南大学 | Image blind motion deblurring method based on CNN-Transformer hybrid self-encoder |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130170698A1 (en) * | 2011-12-30 | 2013-07-04 | Honeywell International Inc. | Image acquisition systems |
CN108460742A (en) * | 2018-03-14 | 2018-08-28 | 日照职业技术学院 | A kind of image recovery method based on BP neural network |
CN108711141A (en) * | 2018-05-17 | 2018-10-26 | 重庆大学 | The motion blur image blind restoration method of network is fought using improved production |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9779491B2 (en) * | 2014-08-15 | 2017-10-03 | Nikon Corporation | Algorithm and device for image processing |
CN106251303A (en) * | 2016-07-28 | 2016-12-21 | 同济大学 | A kind of image denoising method using the degree of depth full convolutional encoding decoding network |
CN107689034B (en) * | 2017-08-16 | 2020-12-01 | 清华-伯克利深圳学院筹备办公室 | Denoising method and denoising device |
CN108629743B (en) * | 2018-04-04 | 2022-03-25 | 腾讯科技(深圳)有限公司 | Image processing method and device, storage medium and electronic device |
2018
- 2018-11-02 CN CN201811298475.8A patent/CN109410146A/en active Pending
- 2018-11-27 WO PCT/CN2018/117634 patent/WO2020087607A1/en active Application Filing
Non-Patent Citations (3)
Title |
---|
MAO, YONG ET AL: "License Plate Motion Deblurring Based on Deep Learning", JOURNAL OF HANGZHOU DIANZI UNIVERSITY, vol. 38, no. 5, 1 September 2018 (2018-09-01), pages 29 - 33, XP055699211, ISSN: 1001-9146, DOI: 10.13954/j.cnki.hdu.2018.05.006 * |
NAH, S. ET AL.: "Deep Multi-scale Convolutional Neural Network for Dynamic Scene Deblurring", ARXIV.ORG, 7 December 2017 (2017-12-07), pages 1 - 21, XP080737454, DOI: 10.1109/CVPR.2017.35 * |
ZHOU, TONGTONG: "Realization of Blind Restoration of Blurred Images Based on Camera Shake", MASTER THESIS, no. 07, 15 July 2013 (2013-07-15), pages 1 - 60, XP009520638, ISSN: 1674-0246 * |
Cited By (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111986102A (en) * | 2020-07-15 | 2020-11-24 | 万达信息股份有限公司 | Digital pathological image deblurring method |
CN111986102B (en) * | 2020-07-15 | 2024-02-27 | 万达信息股份有限公司 | Digital pathological image deblurring method |
CN112070658B (en) * | 2020-08-25 | 2024-04-16 | 西安理工大学 | Deep learning-based Chinese character font style migration method |
CN112070658A (en) * | 2020-08-25 | 2020-12-11 | 西安理工大学 | Chinese character font style migration method based on deep learning |
CN112070693A (en) * | 2020-08-27 | 2020-12-11 | 西安理工大学 | Single sand-dust image recovery method based on gray world adaptive network |
CN112070693B (en) * | 2020-08-27 | 2024-03-26 | 西安理工大学 | Single dust image recovery method based on gray world adaptive network |
CN112184590B (en) * | 2020-09-30 | 2024-03-26 | 西安理工大学 | Single dust image recovery method based on gray world self-guiding network |
CN112184590A (en) * | 2020-09-30 | 2021-01-05 | 西安理工大学 | Single sand-dust image recovery method based on gray world self-guided network |
CN112330554A (en) * | 2020-10-30 | 2021-02-05 | 西安工业大学 | Structure learning method for astronomical image deconvolution |
CN112330554B (en) * | 2020-10-30 | 2024-01-19 | 西安工业大学 | Structure learning method for deconvolution of astronomical image |
CN112561819A (en) * | 2020-12-17 | 2021-03-26 | 温州大学 | Self-filtering image defogging algorithm based on self-supporting model |
CN113592736A (en) * | 2021-07-27 | 2021-11-02 | 温州大学 | Semi-supervised image deblurring method based on fusion attention mechanism |
CN113592736B (en) * | 2021-07-27 | 2024-01-12 | 温州大学 | Semi-supervised image deblurring method based on fused attention mechanism |
CN114723630A (en) * | 2022-03-31 | 2022-07-08 | 福州大学 | Image deblurring method and system based on cavity double-residual multi-scale depth network |
CN114913095B (en) * | 2022-06-08 | 2024-03-12 | 西北工业大学 | Depth deblurring method based on domain adaptation |
CN114841897B (en) * | 2022-06-08 | 2024-03-15 | 西北工业大学 | Depth deblurring method based on self-adaptive fuzzy kernel estimation |
CN114913095A (en) * | 2022-06-08 | 2022-08-16 | 西北工业大学 | Depth deblurring method based on domain adaptation |
CN114841897A (en) * | 2022-06-08 | 2022-08-02 | 西北工业大学 | Depth deblurring method based on self-adaptive fuzzy kernel estimation |
CN115760589A (en) * | 2022-09-30 | 2023-03-07 | 浙江大学 | Image optimization method and device for motion blurred image |
CN117058038A (en) * | 2023-08-28 | 2023-11-14 | 北京航空航天大学 | Diffraction blurred image restoration method based on even convolution deep learning |
CN117058038B (en) * | 2023-08-28 | 2024-04-30 | 北京航空航天大学 | Diffraction blurred image restoration method based on even convolution deep learning |
Also Published As
Publication number | Publication date |
---|---|
CN109410146A (en) | 2019-03-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2020087607A1 (en) | Bi-skip-net-based image deblurring method | |
WO2021208122A1 (en) | Blind video denoising method and device based on deep learning | |
Mao et al. | Non rigid geometric distortions correction-application to atmospheric turbulence stabilization | |
CN110782399A (en) | Image deblurring method based on multitask CNN | |
Anwar et al. | Image deblurring with a class-specific prior | |
CN111091503A (en) | Image out-of-focus blur removing method based on deep learning | |
CN111462019A (en) | Image deblurring method and system based on deep neural network parameter estimation | |
CN111553867B (en) | Image deblurring method and device, computer equipment and storage medium | |
CN110428382B (en) | Efficient video enhancement method and device for mobile terminal and storage medium | |
Liu et al. | A motion deblur method based on multi-scale high frequency residual image learning | |
Malik et al. | Llrnet: A multiscale subband learning approach for low light image restoration | |
CN110503608B (en) | Image denoising method based on multi-view convolutional neural network | |
CN114331902B (en) | Noise reduction method and device, electronic equipment and medium | |
Zhou et al. | Sparse representation with enhanced nonlocal self-similarity for image denoising | |
Jaiswal et al. | Physics-driven turbulence image restoration with stochastic refinement | |
Kollem et al. | A General Regression Neural Network based Blurred Image Restoration | |
Sharma et al. | Deep learning based frameworks for image super-resolution and noise-resilient super-resolution | |
KR102299360B1 (en) | Apparatus and method for recognizing gender using image reconsturction based on deep learning | |
Wei et al. | Image denoising with deep unfolding and normalizing flows | |
CN115103118B (en) | High dynamic range image generation method, device, equipment and readable storage medium | |
CN110717873A (en) | Traffic sign deblurring detection recognition algorithm based on multi-scale residual error | |
Kiani et al. | Solving robust regularization problems using iteratively re-weighted least squares | |
KR102358355B1 (en) | Method and apparatus for progressive deblurring of face image | |
US20240005464A1 (en) | Reflection removal from an image | |
Tao et al. | Blind image deconvolution using the Gaussian scale mixture fields of experts prior |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 18938423; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 18938423; Country of ref document: EP; Kind code of ref document: A1 |