Detailed Description
According to one or more embodiments, as shown in Fig. 1, a light field multi-view image super-resolution method based on multi-scale fusion features includes the following steps:
A1, constructing a training set of high-resolution and low-resolution image pairs from light field camera multi-view images or light field camera array images (multi-view images distributed in an N × N array);
A2, constructing a multi-layer feature extraction network mapping the N × N light field multi-view image array to N × N light field multi-view feature images;
A3, stacking the feature images and constructing a feature fusion and enhancement multi-layer convolutional network to obtain 4D light field structural features usable for reconstructing light field multi-view images;
A4, constructing an up-sampling module to obtain the nonlinear mapping from the 4D light field structural features to the high-resolution N × N light field multi-view images;
A5, constructing a loss function based on the multi-scale feature fusion network, training, and fine-tuning the network parameters;
A6, inputting the low-resolution N × N light field multi-view images into the trained network to obtain the high-resolution N × N light field multi-view images.
According to one or more embodiments, the specific process of constructing the training set of high-resolution and low-resolution image pairs from the light field camera multi-view images or the light field camera array images (multi-view images distributed in an N × N array) in step A1 is as follows:
Step A1.1: first, the multi-view images G_HR distributed in an N × N array are down-sampled by a factor of 2 using bicubic interpolation to obtain the low-resolution N × N light field multi-view images G_LR;
Step A1.2: then, the low-resolution light field multi-view images G_LR are cut into patches of M × M pixels with a stride of K pixels, and the high-resolution light field multi-view images G_HR are correspondingly cut into patches of 2M × 2M pixels;
Step A1.3: normalization and regularization are applied to both sets of light field multi-view images so that each pixel value lies in the range [0, 1], forming the input data and ground-truth data of the deep learning network model in this embodiment.
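Steps A1.1–A1.3 can be sketched as follows. This is a minimal NumPy illustration: the function names are hypothetical, and the 2×2 average pooling is only a stand-in for the bicubic interpolation used in the embodiment.

```python
import numpy as np

def downsample_2x(img):
    """Stand-in for bicubic 2x down-sampling (simple 2x2 average pooling)."""
    H, W = img.shape[:2]
    return img[:H // 2 * 2, :W // 2 * 2].reshape(H // 2, 2, W // 2, 2, -1).mean(axis=(1, 3))

def cut_patches(img, size, stride):
    """Cut an image into size x size patches with the given stride."""
    H, W = img.shape[:2]
    return [img[y:y + size, x:x + size]
            for y in range(0, H - size + 1, stride)
            for x in range(0, W - size + 1, stride)]

def make_pairs(g_hr, M=64, K=32):
    """Build normalized LR/HR patch pairs: LR patches M x M, HR patches 2M x 2M."""
    g_hr = g_hr.astype(np.float32) / 255.0        # normalize pixel values to [0, 1]
    g_lr = downsample_2x(g_hr)                    # 2x down-sampled counterpart
    lr_patches = cut_patches(g_lr, M, K)          # stride K on the LR image
    hr_patches = cut_patches(g_hr, 2 * M, 2 * K)  # corresponding 2M x 2M HR patches
    return list(zip(lr_patches, hr_patches))

pairs = make_pairs(np.random.randint(0, 256, (256, 256, 3)), M=64, K=32)
```

With a 256 × 256 view, M = 64 and K = 32, this yields 9 aligned LR/HR patch pairs per view.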
According to one or more embodiments, as shown in Fig. 2, the specific process of constructing the multi-layer feature extraction network from the N × N light field multi-view image array to the N × N light field multi-view feature images in step A2 is as follows:
Step A2.1: low-level features of the low-resolution light field multi-view images are extracted through 1 conventional convolution and 1 residual block (ResB);
Step A2.2: multi-scale feature extraction and feature fusion are performed on the extracted low-level features by alternating a residual block (ResB) and a residual atrous spatial pyramid pooling block (ResASPP) twice, obtaining the mid-level features of each light field multi-view image.
The ResASPP block consists of 3 ASPP blocks with identical structural parameters concatenated in series, with the upstream input added back in residual form. As shown in Fig. 3, an atrous spatial pyramid pooling (ASPP) block performs multi-scale feature extraction on the upstream input using mutually parallel atrous (dilated) convolutions with different dilation rates. In each ASPP block, 3 atrous convolutions with dilation rates d = 1, 4, 8 first extract features from the upstream input, and the resulting multi-scale features are then fused by a 1 × 1 convolution kernel.
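The parallel-atrous-branches-plus-fusion idea of the ASPP block can be sketched as follows. This is a single-channel NumPy simplification: `atrous_conv2d` and `aspp` are illustrative names, and the per-branch fusion weights stand in for the 1 × 1 convolution.

```python
import numpy as np

def atrous_conv2d(x, kernel, d):
    """Naive 'same'-padded 2D atrous (dilated) convolution with dilation rate d."""
    k = kernel.shape[0]
    pad = d * (k - 1) // 2                 # effective receptive field: 1 + (k - 1) * d
    xp = np.pad(x, pad)
    out = np.zeros_like(x, dtype=np.float64)
    for i in range(k):
        for j in range(k):
            out += kernel[i, j] * xp[i * d:i * d + x.shape[0], j * d:j * d + x.shape[1]]
    return out

def aspp(x, kernels, rates=(1, 4, 8), fuse=None):
    """ASPP sketch: parallel atrous convolutions with different dilation rates,
    fused by per-branch weights standing in for the 1x1 convolution."""
    branches = [atrous_conv2d(x, k, d) for k, d in zip(kernels, rates)]
    w = fuse if fuse is not None else np.ones(len(branches)) / len(branches)
    return sum(wi * b for wi, b in zip(w, branches))

x = np.random.rand(32, 32)
kernels = [np.full((3, 3), 1 / 9.0) for _ in range(3)]  # 3x3 averaging kernels
y = aspp(x, kernels)
```

With a 3 × 3 kernel, the dilation rates d = 1, 4, 8 give effective receptive fields of 3 × 3, 9 × 9, and 17 × 17, which is what lets the block gather multi-scale context.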
According to one or more embodiments, the specific process in step A3 of stacking the feature images and constructing a feature fusion and enhancement multi-layer convolutional network to obtain the 4D light field structural features usable for reconstructing the light field multi-view images is as follows:
Step A3.1: each view of the multi-scale feature map array Q_0 ∈ R^{NH×NW×C} is stacked along the channel dimension C in order from top-left to bottom-right, where H and W denote the height and width of each multi-view image, N denotes the number of multi-view images in a single direction (N × N in total), and C denotes the number of channels of the image. This yields the feature map Q ∈ R^{H×W×(N×N×C)}.
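The stacking in step A3.1 amounts to a rearrangement of the N × N mosaic into a single channel-stacked map, which can be sketched as follows (a NumPy illustration; `stack_views` is a hypothetical name):

```python
import numpy as np

def stack_views(q0, N):
    """Rearrange an N x N mosaic of view feature maps (NH, NW, C) into a single
    (H, W, N*N*C) map, stacking views on the channel axis from top-left to
    bottom-right."""
    NH, NW, C = q0.shape
    H, W = NH // N, NW // N
    views = [q0[i * H:(i + 1) * H, j * W:(j + 1) * W, :]  # view at mosaic row i, column j
             for i in range(N) for j in range(N)]
    return np.concatenate(views, axis=-1)

q0 = np.random.rand(5 * 16, 5 * 16, 4)   # N = 5, per-view H = W = 16, C = 4
q = stack_views(q0, N=5)
```

For N = 5 and C = 4 this produces a (16, 16, 100) map, i.e. Q ∈ R^{H×W×(N×N×C)} with the top-left view occupying the first C channels.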
Step A3.2: the stacked feature map Q ∈ R^{H×W×(N×N×C)} is fed as input to the global feature fusion module. Feature re-extraction is first performed on the stacked multi-scale features through 3 conventional convolutions; feature fusion is then carried out through 1 residual block;
Step A3.3: the result then enters the fusion block to achieve feature enhancement. By extracting the angular features in the 4D light field, the fusion block accumulates additional texture detail information onto the original features. The enhanced features are sent to 4 cascaded residual blocks for full feature fusion, finally generating the 4D light field structural features usable for super-resolution reconstruction of light field images.
The fusion block performs feature fusion and enhancement on the extracted multi-scale features, adopting the network structure shown in Fig. 4. The central view image can be transformed by a certain "warp" to generate the other surrounding view images, and vice versa. The process of generating a surrounding view from the central view can be described mathematically as:
G_{s',t'} = M_{st→s't'} · W_{st→s't'} · G_{s,t} + N_{st→s't'}
where G_{s,t} denotes the central view image, G_{s',t'} the other surrounding view images, W_{st→s't'} the "warp matrix", and N_{st→s't'} the error term between the view generated by the warp transformation and the original multi-view image G_{s',t'}; M_{st→s't'} is a "mask" matrix used to remove the effects of the occlusion problem described above.
As shown in Fig. 4, each surrounding view feature Q_{s',t'} in the N × N feature map array can generate a central view feature Q'_{s,t} through the warp transformation W_{s't'→st}, as indicated by the feature block labeled (1). Likewise, the central view feature Q_{s,t} can generate the surrounding view features Q'_{s',t'} through the warp transformation W_{st→s't'}, as shown in the feature block labeled (2) in Fig. 4. The foregoing process can be expressed as:
Q'_{s,t} = W_{s't'→st} ⊗ Q_{s',t'},  Q'_{s',t'} = W_{st→s't'} ⊗ Q_{s,t}
where ⊗ denotes batch matrix multiplication. The module then applies "mask" processing to feature blocks (1) and (2) respectively to deal with the occlusion between different views. The mask matrix is obtained as follows: take the absolute value of the error term between the generated view and the original view; the larger this absolute value, the more likely the region is an occluded region. Specifically:
M_{s't'→st} = 1 where |Q'_{s,t} − Q_{s,t}| < T, and 0 otherwise,
where T = 0.9 × max(‖Q'_{s,t} − Q_{s,t}‖_1) is an empirical threshold set in the algorithm; the "mask" matrix M_{st→s't'} is derived similarly to M_{s't'→st}. The occluded regions in feature blocks (1) and (2) are then filtered out:
Q̄'_{s,t} = M_{s't'→st} · Q'_{s,t},  Q̄'_{s',t'} = M_{st→s't'} · Q'_{s',t'}
where Q̄'_{s,t} and Q̄'_{s',t'} are the feature blocks obtained after the mask processing, respectively. Since the above process produces n = N × N − 1 central-view feature maps, they are normalized (averaged) to obtain the feature map labeled (3) in Fig. 4:
Q̄_{s,t} = (1/n) Σ_{k=1}^{n} Q̄'^{(k)}_{s,t}
where k is the index of the views other than the central view in the N × N feature map array, arranged from top-left to bottom-right, and Q̄'^{(k)}_{s,t} denotes the mask-processed central-view feature map generated from the k-th surrounding view. Replacing the feature map at the central position with the feature map (3) yields the globally fused feature block (4). Feature block (4) is added to the original input multi-scale features to realize feature enhancement, finally giving the fused and enhanced feature block (5).
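Assuming the mask zeroes positions whose generation error reaches the threshold T = 0.9 × the maximum error, the mask-and-average stage of the fusion block can be sketched as follows. This is a simplified NumPy illustration with a hypothetical identity warp; `occlusion_mask` and `fuse_center` are illustrative names, not the embodiment's implementation.

```python
import numpy as np

def occlusion_mask(q_gen, q_ref, ratio=0.9):
    """Binary mask: 1 where the generation error is below T = ratio * max error,
    0 in the (presumed occluded) high-error regions."""
    err = np.abs(q_gen - q_ref)
    T = ratio * err.max()
    return (err < T).astype(q_gen.dtype)

def fuse_center(center, generated):
    """Average the n = N*N - 1 mask-processed central-view estimates (feature (3))."""
    masked = [occlusion_mask(g, center) * g for g in generated]
    return sum(masked) / len(masked)

center = np.random.rand(8, 8, 4)
# n = 24 noisy central-view estimates, standing in for warped surrounding views (N = 5)
generated = [center + 0.01 * np.random.randn(8, 8, 4) for _ in range(24)]
fused = fuse_center(center, generated)
```

Because each estimate contributes only where its error is below the threshold, the averaged map stays close to the true central view while suppressing the worst-mismatched (occluded) positions.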
According to one or more embodiments, the specific process of constructing the up-sampling module in step A4 to obtain the nonlinear mapping from the 4D light field structural features to the high-resolution N × N light field multi-view images is as follows:
Step A4.1: using sub-pixel convolution, r² feature maps with C channels each are first generated from the input feature map, i.e. a feature map with r² × C channels;
Step A4.2: the resulting r² × C-channel feature map is then rearranged (pixel shuffle), generating a high-resolution feature map whose resolution is enlarged by a factor of r;
Step A4.3: the high-resolution feature map is sent to 1 conventional convolutional layer for feature fusion, finally generating the super-resolved light field multi-view image array.
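The rearrangement in step A4.2 is the standard pixel-shuffle operation, which can be sketched in NumPy as follows (`pixel_shuffle` here is an illustrative reimplementation, equivalent in spirit to PyTorch's `nn.PixelShuffle`):

```python
import numpy as np

def pixel_shuffle(x, r):
    """Rearrange an (H, W, r*r*C) feature map into (r*H, r*W, C): each group of
    r*r channels becomes an r x r sub-grid of high-resolution pixels."""
    H, W, RC = x.shape
    C = RC // (r * r)
    x = x.reshape(H, W, r, r, C)        # split channels into an r x r sub-grid
    x = x.transpose(0, 2, 1, 3, 4)      # interleave: (H, r, W, r, C)
    return x.reshape(H * r, W * r, C)

x = np.arange(2 * 2 * 4, dtype=np.float32).reshape(2, 2, 4)  # r = 2, C = 1
y = pixel_shuffle(x, r=2)
```

For r = 2 the four channels of each low-resolution position fill the corresponding 2 × 2 block of the high-resolution map, so a (2, 2, 4) input becomes a (4, 4, 1) output.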
According to one or more embodiments, in step A5 a loss function based on the multi-scale feature fusion network is constructed and trained; the specific process of fine-tuning the network parameters is as follows:
During training, the super-resolved light field multi-view images are compared one by one with the corresponding ground-truth high-resolution light field multi-view images, and the network adopts a leaky rectified linear unit (Leaky ReLU) with a leaky factor of 0.1 as the activation function, so as to avoid neurons that no longer transmit information during training. In the loss, u, v denote the horizontal and vertical positions of a multi-view image in the N × N array, and s, t denote the pixel position along the x-axis and y-axis of the image, respectively.
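The activation and the per-view comparison can be sketched as follows. The Leaky ReLU matches the factor 0.1 stated above; the patent does not reproduce the exact loss formula, so the L1 criterion in `l1_loss` is an assumption for illustration only.

```python
import numpy as np

def leaky_relu(x, alpha=0.1):
    """Leaky ReLU activation: x for x > 0, alpha * x otherwise (leaky factor 0.1)."""
    return np.where(x > 0, x, alpha * x)

def l1_loss(sr, hr):
    """Hypothetical L1 criterion averaged over views (u, v) and pixels (s, t);
    the exact norm used by the embodiment is not reproduced in the text."""
    return float(np.mean(np.abs(sr - hr)))

act = leaky_relu(np.array([-2.0, -0.5, 0.0, 0.5, 2.0]))
# sr and hr indexed as [u, v, s, t]: a 5 x 5 view array of 8 x 8 images
loss = l1_loss(np.ones((5, 5, 8, 8)), np.zeros((5, 5, 8, 8)))
```

Unlike a plain ReLU, the 0.1 slope keeps a nonzero gradient for negative inputs, which is the stated reason for choosing it.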
Step A6 specifically comprises inputting the low-resolution N × N light field multi-view images into the trained network to obtain the high-resolution N × N light field multi-view images.
The invention is discussed in terms of one or more embodiments implementing the method.
Training was performed using the University of Heidelberg (Germany) light field dataset and the Stanford Lytro Illum light field camera dataset, with 5 × 5 light field multi-view images; the training data were cut into 64 × 64-pixel low-resolution and 128 × 128-pixel high-resolution image patches with a stride of 32 pixels. Data augmentation was performed by randomly flipping the images horizontally and vertically. The network was trained in the PyTorch framework using the Adam optimization method, with the weights of each convolutional layer initialized by the Xavier method. The initial learning rate was set to 2 × 10⁻⁴ and decayed by a factor of 0.5 every 20 epochs; training was stopped after 80 epochs.
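The stated schedule (initial rate 2 × 10⁻⁴, halved every 20 epochs, 80 epochs total) corresponds to a simple step decay, sketched below (`learning_rate` is an illustrative helper, equivalent to PyTorch's `StepLR` with step_size=20, gamma=0.5):

```python
def learning_rate(epoch, base=2e-4, gamma=0.5, step=20):
    """Step-decay schedule: initial rate `base`, multiplied by `gamma`
    every `step` epochs."""
    return base * gamma ** (epoch // step)

# Rates at the start, just before/after the first decay, and near the end of training
rates = [learning_rate(e) for e in (0, 19, 20, 40, 79)]
```

Over the 80-epoch run this gives four plateaus: 2 × 10⁻⁴, 1 × 10⁻⁴, 5 × 10⁻⁵, and 2.5 × 10⁻⁵.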
The trained network was then comparatively analyzed on synthetic and real datasets, respectively.
Fig. 5 shows a comparison table of bicubic interpolation and the method of the present invention under the PSNR and SSIM evaluation indexes on three images from different synthetic datasets.
Fig. 6 shows a comparison table of bicubic interpolation and the method of the present invention under the PSNR and SSIM evaluation indexes on three images from different real datasets.
The higher the PSNR and SSIM values, the better the super-resolution effect. The results of the implementation examples show that the super-resolution effect of the method is significant.
It should be understood that, in the embodiments of the present invention, the term "and/or" merely describes an association between objects and indicates that three kinds of relations may exist. For example, "A and/or B" may represent: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein may be implemented in electronic hardware, computer software, or a combination of both; the components and steps of the examples have been described above in general functional terms to clearly illustrate the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends on the particular application and the design constraints of the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as a departure from the scope of the present invention.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one type of logical functional division, and other divisions may be realized in practice, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may also be an electrical, mechanical or other form of connection.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
While the invention has been described with reference to specific embodiments, the invention is not limited thereto, and various equivalent modifications and substitutions can be easily made by those skilled in the art within the technical scope of the invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.