Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below by referring to the drawings are exemplary only for the purpose of explaining the embodiments of the present application, and are not to be construed as limiting the embodiments of the present application.
Referring to fig. 1 to 4, an embodiment of the present application provides a method for generating an image sample set, where the method includes:
01: acquiring a first image P1, the first image P1 having a first resolution;
03: processing a preset first blur kernel H1 to obtain a second blur kernel H2, wherein the second blur kernel H2 differs from the first blur kernel H1 in type or degree of blur; and
05: convolving the first image P1 with the first blur kernel H1 or the second blur kernel H2 to obtain a second image P2 corresponding to the first image P1, the second image P2 having a second resolution, the first resolution being greater than the second resolution, the first image P1 and the second image P2 forming an image sample pair, performing the convolution operation cyclically to obtain a plurality of image sample pairs, and collecting the plurality of image sample pairs into an image sample set.
Referring to fig. 2, the present application further provides a generating apparatus 100 that includes one or more processors 10, and the generating method of the embodiment of the present application can be applied to the generating apparatus 100 of the embodiment of the present application. The one or more processors 10 are configured to execute the methods in 01, 03, and 05, that is, the one or more processors 10 are configured to: acquire a first image P1, the first image P1 having a first resolution; process a preset first blur kernel H1 to obtain a second blur kernel H2, wherein the second blur kernel H2 differs from the first blur kernel H1 in type or degree of blur; and perform a convolution operation on the first image P1 by using the first blur kernel H1 or the second blur kernel H2 to obtain a second image P2 corresponding to the first image P1, wherein the second image P2 has a second resolution, the first resolution is higher than the second resolution, the first image P1 and the second image P2 form an image sample pair, the convolution operation is performed in a loop to obtain a plurality of image sample pairs, and the plurality of image sample pairs are collected to form an image sample set.
The image sample set generated herein can be applied to deblurring processing in the image field or to other processing that requires pairs of sharp and blurred images. The generating apparatus 100 may be any device that needs to use the image sample set, such as a mobile phone, a watch, a notebook computer, a tablet computer, a desktop computer, smart glasses, a smart helmet, an unmanned aerial vehicle, and the like, without limitation. Referring to fig. 2, the present application only illustrates the case where the generating apparatus 100 is a mobile phone; the situation when the generating apparatus 100 is another type of device is similar and is not described in detail. The processor 10 may be integrated in the generating apparatus 100.
At present, a relatively mature method for producing a motion-deblurring data set is as follows: a video of a moving scene is captured in a high-frame-rate (240 fps) mode of a camera, n consecutive frames of the video are accumulated and averaged and then gamma-corrected to obtain a motion-blurred image, and the middle frame of the n frames is selected as the corresponding sharp frame, forming an image sample pair. A defocus-blurred (virtual-focus) image sample pair is currently produced in the following way: the subject is a static scene, the focusing state of the camera is adjusted manually, one shot is taken with the scene in focus and another with the scene out of focus, thereby obtaining a defocus-blurred image sample pair.
In the method for generating an image sample set according to the embodiment of the present application, a single first image P1 (a sharp image) is acquired, and a convolution operation is performed on the first image P1 by using a first blur kernel H1 or a second blur kernel H2 to generate a second image P2 (a blurred image) corresponding to the first image P1. The first image P1 and the second image P2 form an image sample pair, and the convolution operation is performed cyclically on the first image P1 to acquire a plurality of image sample pairs based on the first image P1, thereby increasing the size of the image sample set and reducing the acquisition cost of the image sample pairs.
Referring to fig. 3 and 4, in the embodiment of the present application, the preset first blur kernel H1 may be designed according to the actual application scenario. A blur kernel is in fact a matrix, and the blur kernel is converted into an image by visualizing that matrix, so the first blur kernel H1 can be obtained by designing a matrix and visualizing it. Specifically, different kinds of first blur kernels H1 may be generated by a blur kernel generator; for example, fig. 3 shows fifteen different kinds of first blur kernels H1, numbered 1 to 15. When the first image P1 is convolved with different kinds of first blur kernels H1, different kinds of second images P2 are obtained: the first blur kernel H1 numbered 1 in fig. 3 convolves the first image P1 to obtain the first kind of second image P2 (numbered 1), the first blur kernel H1 numbered 2 convolves the first image P1 to obtain the second kind of second image P2 (numbered 2), the first blur kernel H1 numbered 3 convolves the first image P1 to obtain the third kind of second image P2 (numbered 3), and so on, so that fifteen kinds of second images P2 numbered 1 to 15 can be obtained. The first image P1 forms an image sample pair with each of the different kinds of second images P2: the first image P1 and the second image P2 numbered 1 form a first image sample pair, the first image P1 and the second image P2 numbered 2 form a second image sample pair, the first image P1 and the second image P2 numbered 3 form a third image sample pair, and so on, so that fifteen image sample pairs can be obtained. Thus, a greater number of image sample pairs can be acquired from a single first image P1 to increase the size of the image sample set and reduce the acquisition cost of the image sample pairs. The number of first blur kernels H1 is not limited to the fifteen shown in fig. 3; it may be one or more and may be any preset number.
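The application does not specify how the blur kernel generator constructs the kernels shown in fig. 3. As a minimal sketch, assuming two common kernel families (a linear motion kernel and a disk-shaped defocus kernel), a generator might look as follows in Python; the function names, sizes, and parameters are illustrative assumptions rather than part of the disclosure:

```python
import numpy as np

def motion_kernel(size: int = 15, angle_deg: float = 0.0) -> np.ndarray:
    """Linear motion-blur kernel: a line through the center at the given angle."""
    k = np.zeros((size, size))
    c = size // 2
    t = np.deg2rad(angle_deg)
    for r in np.linspace(-c, c, 4 * size):  # densely sample points along the line
        x = int(round(c + r * np.cos(t)))
        y = int(round(c + r * np.sin(t)))
        if 0 <= x < size and 0 <= y < size:
            k[y, x] = 1.0
    return k / k.sum()  # points accumulate to 1 so image brightness is preserved

def defocus_kernel(size: int = 15, radius: float = 5.0) -> np.ndarray:
    """Disk-shaped kernel approximating defocus (virtual-focus) blur."""
    yy, xx = np.mgrid[:size, :size] - size // 2
    k = ((xx ** 2 + yy ** 2) <= radius ** 2).astype(float)
    return k / k.sum()
```

Both kernels are normalized so that their point values accumulate to 1, matching the brightness-preservation requirement discussed later for the second blur kernel H2.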
In some embodiments, when the convolution operation is performed on the first image P1 by using the first blur kernel H1 or the second blur kernel H2, the pixel values of the points of the first image P1 form a matrix, and since a blur kernel is itself a matrix, the matrix formed by the second image P2 is obtained by a mathematical operation on the two matrices. For example, let the matrix formed by the first image P1 be A, let the matrix formed by the first blur kernel H1 or the second blur kernel H2 be B, and let the matrix of the second image P2 obtained after the convolution operation be C. Matrix A is traversed from left to right and from top to bottom, row by row, taking windows of the same size as matrix B; each window is multiplied element by element with matrix B and the products are summed to obtain the corresponding element value in matrix C. For instance, C(1, 1) is the sum of the element-wise products of the first window of A with B.
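As a concrete illustration of this sliding-window multiply-and-sum, the following minimal Python sketch convolves a first image P1 with a blur kernel to produce the second image P2. The use of scipy.signal.convolve2d with "same" output size and symmetric boundary handling is an illustrative assumption; the application does not prescribe a library or boundary treatment:

```python
import numpy as np
from scipy.signal import convolve2d

def blur(p1: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Convolve the first image P1 with a blur kernel to obtain the second image P2."""
    if p1.ndim == 2:  # grayscale: a single matrix A convolved with B gives C
        return convolve2d(p1, kernel, mode="same", boundary="symm")
    # color image: convolve each channel independently with the same kernel
    channels = [convolve2d(p1[..., c], kernel, mode="same", boundary="symm")
                for c in range(p1.shape[-1])]
    return np.stack(channels, axis=-1)
```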
Referring to fig. 5, in some embodiments, method 03: processing the preset first blur kernel H1 to obtain a second blur kernel H2, includes:
031: performing one or more of translation, scaling, rotation, or affine transformation on the first blur kernel H1 to obtain an intermediate blur kernel; and
033: calculating the values of each point of the second blur kernel H2 according to the values of each point of the intermediate blur kernel and the values of each point of the first blur kernel H1, to obtain the second blur kernel H2.
Referring to fig. 2, the one or more processors 10 are also configured to execute the methods in 031 and 033, that is, the one or more processors 10 are configured to: perform one or more of translation, scaling, rotation, or affine transformation on the first blur kernel H1 to obtain an intermediate blur kernel; and calculate the values of each point of the second blur kernel H2 according to the values of each point of the intermediate blur kernel and the values of each point of the first blur kernel H1, to obtain the second blur kernel H2.
Specifically, the one or more processors 10 may apply any one or more of translation, scaling, rotation, or affine transformation to a preset first blur kernel H1 to obtain more kinds of intermediate blur kernels with different degrees of blur, and computing on these intermediate blur kernels yields second blur kernels H2 of different types or different degrees of blur. This increases the variety of blur kernels, makes it easier to enlarge the image sample set, and allows various types of blur (such as motion blur, defocus blur, and lens blur) to be simulated.
In some embodiments, the one or more processors 10 perform any one of translation, scaling, rotation, or affine transformation on a preset first blur kernel H1 to obtain an intermediate blur kernel, and obtain the values of each point of the second blur kernel H2 according to the values of each point of the intermediate blur kernel and the values of each point of the first blur kernel H1, so as to obtain a second blur kernel H2 corresponding to the first blur kernel H1.
In one embodiment, the one or more processors 10 perform a translation transformation on a preset first blur kernel H1 to obtain an intermediate blur kernel corresponding to the first blur kernel H1. Specifically, the first blur kernel H1 includes a plurality of pixels, and the translation transformation moves each pixel of the first blur kernel H1 from its initial position to an end position by a given offset; the gray value of each pixel of the intermediate blur kernel is then obtained through an interpolation algorithm. For example, if the spatial coordinate of a certain pixel in the first blur kernel H1 is (x0, y0), and the first blur kernel H1 is translated by xt in the horizontal direction and yt in the vertical direction, the coordinate of that pixel in the intermediate blur kernel obtained after the translation is (x0 + xt, y0 + yt). For pixel points of the first blur kernel H1 that would be moved out of the image display area, the size of the intermediate blur kernel can be increased (e.g., the image width of the intermediate blur kernel is increased by Tx and its height by Ty) to ensure that the intermediate blur kernel contains all the pixel points of the first blur kernel H1. Finally, gray-level interpolation is applied to the intermediate blur kernel so that the pixel points of the intermediate blur kernel obtained after translation correspond to the pixel points of the first blur kernel H1. The interpolation algorithm may be nearest-neighbor interpolation, bilinear interpolation, or bicubic interpolation. After the intermediate blur kernel is obtained, the value of each of its points is divided by the accumulated value of the points of the first blur kernel H1 to obtain the second blur kernel H2 corresponding to the intermediate blur kernel, so that the values of the points of the resulting second blur kernel H2 accumulate to 1 and the brightness of the second blur kernel H2 obtained by transforming the first blur kernel H1 is consistent with that of the first blur kernel H1.
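A minimal sketch of this translation step, assuming scipy.ndimage is available and bilinear interpolation is used; deriving the padding amounts Tx and Ty from the offset is an illustrative choice, not mandated by the application:

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def translate_kernel(h1: np.ndarray, xt: float, yt: float) -> np.ndarray:
    """Translate H1 by (xt, yt) on an enlarged canvas and renormalize to get H2."""
    # enlarge the canvas by Tx, Ty so no pixel of H1 leaves the display area
    tx, ty = int(np.ceil(abs(xt))), int(np.ceil(abs(yt)))
    canvas = np.pad(h1, ((ty, ty), (tx, tx)))
    # bilinear interpolation (order=1) fills the gray values after the shift;
    # nearest-neighbor or cubic interpolation could be used instead
    intermediate = nd_shift(canvas, shift=(yt, xt), order=1)
    # divide by the accumulated value of the points of H1 so H2 sums to 1
    return intermediate / h1.sum()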
Referring to fig. 6, in one example, the one or more processors 10 perform a scaling transformation on a preset first blur kernel H1 to obtain an intermediate blur kernel corresponding to the first blur kernel H1. Specifically, the first blur kernel H1 includes a plurality of pixels, and the scaling transformation is a magnification or a reduction, that is, the first blur kernel H1 is scaled up or down by a certain ratio; the gray value of each pixel of the intermediate blur kernel is then obtained through an interpolation algorithm. For example, if the spatial coordinate of a certain pixel in the first blur kernel H1 is (x0, y0), and the first blur kernel H1 is scaled about an arbitrary center point (x, y) by a factor sx in the horizontal direction and sy in the vertical direction, the scaled coordinate of that pixel is (x + sx(x0 − x), y + sy(y0 − y)). Similarly, for pixels of the first blur kernel H1 that would be moved out of the image display area, the size of the intermediate blur kernel can be increased (e.g., the image width of the intermediate blur kernel is increased by Tx and its height by Ty) to ensure that the intermediate blur kernel contains all the pixel points of the first blur kernel H1. Finally, gray-level interpolation is applied to the intermediate blur kernel so that all pixel points of the intermediate blur kernel obtained after scaling correspond to the pixel points of the first blur kernel H1. The interpolation algorithm may be nearest-neighbor interpolation, bilinear interpolation, or bicubic interpolation. After the intermediate blur kernel is obtained, the value of each of its points is divided by the accumulated value of the points of the first blur kernel H1 to obtain the second blur kernel H2 corresponding to the intermediate blur kernel, so that the values of the points of the resulting second blur kernel H2 accumulate to 1 and the brightness of the second blur kernel H2 obtained by transforming the first blur kernel H1 is consistent with that of the first blur kernel H1.
Referring to fig. 7, in one embodiment, the one or more processors 10 perform a rotation transformation on a preset first blur kernel H1 to obtain an intermediate blur kernel corresponding to the first blur kernel H1. Specifically, the first blur kernel H1 includes a plurality of pixels, and the rotation transformation rotates each pixel point of the first blur kernel H1 by a certain angle about a certain center point; the gray value of each pixel of the intermediate blur kernel is then obtained through an interpolation algorithm. For example, if the spatial coordinate of a certain pixel in the first blur kernel H1 is (x0, y0), the corresponding point after the first blur kernel H1 is rotated counterclockwise by an angle θ about a center point (a, b) is (x, y), where (x, y) can be calculated by the standard rotation expression: x = (x0 − a)cos θ − (y0 − b)sin θ + a, y = (x0 − a)sin θ + (y0 − b)cos θ + b.
Similarly, for pixels of the first blur kernel H1 that would be moved out of the image display area, the size of the intermediate blur kernel can be increased (e.g., the image width of the intermediate blur kernel is increased by Tx and its height by Ty) to ensure that the intermediate blur kernel contains all the pixel points of the first blur kernel H1. Finally, gray-level interpolation is applied to the intermediate blur kernel so that all pixel points of the intermediate blur kernel obtained after rotation correspond to the pixel points of the first blur kernel H1. The interpolation algorithm may be nearest-neighbor interpolation, bilinear interpolation, or bicubic interpolation. After the intermediate blur kernel is obtained, the value of each of its points is divided by the accumulated value of the points of the first blur kernel H1 to obtain the second blur kernel H2 corresponding to the intermediate blur kernel, ensuring that the values of the points of the resulting second blur kernel H2 accumulate to 1 and thus that the brightness of the second blur kernel H2 obtained by transforming the first blur kernel H1 is consistent with that of the first blur kernel H1.
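The rotation step with canvas enlargement and renormalization might look as follows; using scipy.ndimage.rotate with reshape=True is one way to keep all pixel points inside the intermediate blur kernel, and the clipping line is an illustrative safeguard against interpolation overshoot, not part of the application:

```python
import numpy as np
from scipy.ndimage import rotate as nd_rotate

def rotate_kernel(h1: np.ndarray, theta_deg: float) -> np.ndarray:
    """Rotate H1 by theta about its center on an enlarged canvas; renormalize to get H2."""
    # reshape=True grows the output so no pixel of H1 is rotated out of view;
    # order=1 selects bilinear gray-level interpolation
    intermediate = nd_rotate(h1, theta_deg, reshape=True, order=1)
    intermediate = np.clip(intermediate, 0.0, None)  # interpolation can dip slightly below 0
    return intermediate / h1.sum()  # points of H2 accumulate to 1 when H1 is normalized
```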
Referring to fig. 2, in one embodiment, the one or more processors 10 perform an affine transformation on a preset first blur kernel H1 to obtain an intermediate blur kernel corresponding to the first blur kernel H1. Specifically, the first blur kernel H1 includes a plurality of pixels, and the affine transformation subjects each pixel point of the first blur kernel H1 to one linear transformation (e.g., translation, scaling, rotation, shearing, etc.) plus a translation; the gray value of each pixel of the intermediate blur kernel is then obtained through an interpolation algorithm. For example, let the spatial coordinate of a certain pixel in the first blur kernel H1 be (x0, y0). An affine matrix is a 2-by-3 matrix, e.g., a matrix A = [a11 a12 a13; a21 a22 a23], wherein the elements of the third column play a translational role, the diagonal of the first two columns plays a scaling role, and the anti-diagonal plays a rotating or shearing role. A certain pixel (x0, y0) of the first blur kernel H1 is mapped by the affine matrix A to the corresponding point (x, y) of the intermediate blur kernel, where (x, y) can be calculated by the matrix expression [x; y] = A · [x0; y0; 1]. Similarly, for pixels of the first blur kernel H1 that would be moved out of the image display area, the size of the intermediate blur kernel can be increased (e.g., the image width of the intermediate blur kernel is increased by Tx and its height by Ty) to ensure that the intermediate blur kernel contains all the pixel points of the first blur kernel H1. Finally, gray-level interpolation is applied to the intermediate blur kernel so that all pixel points of the intermediate blur kernel obtained after the affine transformation correspond to the pixel points of the first blur kernel H1. The interpolation algorithm may be nearest-neighbor interpolation, bilinear interpolation, or bicubic interpolation. After the intermediate blur kernel is obtained, the value of each of its points is divided by the accumulated value of the points of the first blur kernel H1 to obtain the second blur kernel H2 corresponding to the intermediate blur kernel, so that the values of the points of the resulting second blur kernel H2 accumulate to 1 and the brightness of the second blur kernel H2 obtained by transforming the first blur kernel H1 is consistent with that of the first blur kernel H1.
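A sketch of the affine step under the same conventions as the previous sketches; note that scipy.ndimage.affine_transform maps output coordinates back to input coordinates, so the forward matrix A is inverted here. The fixed padding margin and the (row, column) coordinate order are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import affine_transform

def affine_kernel(h1: np.ndarray, a: np.ndarray, pad: int = 8) -> np.ndarray:
    """Apply a 2-by-3 affine matrix A to H1 on an enlarged canvas; renormalize to get H2."""
    canvas = np.pad(h1, pad)  # enlarge by a fixed margin so pixels stay in view
    linear, offset = a[:, :2], a[:, 2]  # first two columns scale/rotate/shear; third translates
    # affine_transform maps *output* coordinates back to *input* coordinates,
    # so the forward matrix must be inverted
    inv = np.linalg.inv(linear)
    intermediate = affine_transform(canvas, inv, offset=-inv @ offset, order=1)
    # divide by the accumulated value of the points of H1 so H2 sums to 1
    return intermediate / h1.sum()
```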
Referring to fig. 2, in some embodiments, the one or more processors 10 perform several of translation, scaling, rotation, or affine transformation on the preset first blur kernel H1 to obtain the intermediate blur kernel. For example, the one or more processors 10 sequentially perform translation and scaling transformations on a preset first blur kernel H1 to obtain an intermediate blur kernel; for another example, the one or more processors 10 sequentially perform translation and rotation transformations on the preset first blur kernel H1 to obtain an intermediate blur kernel. That is, the one or more processors 10 may perform any combination of translation, scaling, rotation, and affine transformation on the preset first blur kernel H1 to obtain intermediate blur kernels with more blur types or degrees of blur, so as to obtain more second blur kernels H2 corresponding to the first blur kernel H1. The principles of the various transformations are consistent with the description above and are not repeated here.
Referring to fig. 2 and 4, in the embodiment of the present application, the first blur kernel H1 comes in multiple types, and the one or more processors 10 may randomly select a preset first blur kernel H1 and apply one or more of translation, scaling, rotation, or affine transformation to obtain second blur kernels H2 whose type or degree of blur differs from that of the first blur kernel H1. When performing a convolution operation on the first image P1, the one or more processors 10 randomly select the first blur kernel H1 or the second blur kernel H2 to blur the first image P1, so that second images P2 with different blur types or degrees of blur can be obtained from a single first image P1, yielding multiple image sample pairs; performing the convolution operation cyclically on the first image P1 further increases the number of image sample pairs, making the image sample set richer. Meanwhile, the one or more processors 10 may acquire different first images P1 and convolve different first images P1 with different first blur kernels H1 or second blur kernels H2, so that the scenes in the image sample set are richer.
Referring to fig. 2, in some embodiments, the one or more processors 10 perform multiple transformations on each first blur kernel H1, and the transformations may be of the same or different kinds.
When the one or more processors 10 perform multiple transformations on each first blur kernel H1, the transformations may all be of the same kind. For example, the one or more processors 10 may apply translation, scaling, rotation, or affine transformation two, three, or more times to each first blur kernel H1; transforming the first blur kernel H1 multiple times to obtain intermediate blur kernels yields second blur kernels H2 with different blur types or degrees of blur, thereby increasing the variety of blur kernels available for the convolution of the first image P1 and hence the variety of image sample pairs.
When the one or more processors 10 perform multiple transformations on each first blur kernel H1, the transformations may also be of different kinds. For example, when the number of transformations is two, the one or more processors 10 may translate the first blur kernel H1, then apply one or more of scaling, rotation, and affine transformation to the transformed first blur kernel H1 to obtain an intermediate blur kernel, and finally compute on the intermediate blur kernel to obtain a second blur kernel H2 whose image brightness is consistent with the first blur kernel H1. For another example, when the number of transformations is three or more, the one or more processors 10 may translate the first blur kernel H1, apply any one of scaling, rotation, and affine transformation to the transformed first blur kernel H1, then apply one or more of translation, scaling, rotation, and affine transformation to the twice-transformed first blur kernel H1 to obtain an intermediate blur kernel, and finally compute on the intermediate blur kernel to obtain a second blur kernel H2 whose image brightness is consistent with the first blur kernel H1. Of course, when the one or more processors 10 perform multiple transformations on each first blur kernel H1, the transformations may also be partially different and partially the same; a sketch of chaining such transformations is given below.
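Reusing the translate_kernel, rotate_kernel, and affine_kernel helpers sketched above, one illustrative way to chain randomly chosen transformations into a single second blur kernel H2 is the following; the choice of transform set, offsets, and scale range are assumptions for demonstration:

```python
import random
import numpy as np

def random_second_kernel(h1: np.ndarray, n_transforms: int = 2) -> np.ndarray:
    """Chain several randomly chosen transformations of H1 to obtain one H2."""
    k = h1
    for _ in range(n_transforms):
        op = random.choice(["translate", "rotate", "affine"])
        if op == "translate":
            k = translate_kernel(k, random.uniform(-3, 3), random.uniform(-3, 3))
        elif op == "rotate":
            k = rotate_kernel(k, random.uniform(0.0, 360.0))
        else:  # mild scaling expressed through the affine diagonal
            s = random.uniform(0.7, 1.3)
            k = affine_kernel(k, np.array([[s, 0.0, 0.0], [0.0, s, 0.0]]))
        k = k / k.sum()  # keep brightness consistent with H1 after every step
    return k
```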
Referring to fig. 8, in some embodiments, the generating method may further include:
02: processing the first image P1 to obtain at least one first sub-image, the first sub-image including a subject to be blurred;
05: performing a convolution operation on the first image P1 by using the first blur kernel H1 or the second blur kernel H2 to obtain a second image P2 corresponding to the first image P1, including:
051: performing a convolution operation on the first sub-image by using the first blur kernel H1 or the second blur kernel H2 to obtain a second sub-image; and
053: the second sub-image and the first image P1 are fused to obtain a second image P2.
Referring to fig. 2, the one or more processors 10 are also configured to perform the methods in 02, 051, and 053, that is, the one or more processors 10 are further configured to: process the first image P1 to obtain at least one first sub-image, the first sub-image including a subject to be blurred; perform a convolution operation on the first sub-image by using the first blur kernel H1 or the second blur kernel H2 to obtain a second sub-image; and fuse the second sub-image with the first image P1 to obtain a second image P2.
Referring to fig. 4, in one embodiment, a non-uniform blur of the full image of the first image P1 may be simulated, i.e., different regions of the full image correspond to a plurality of different types of blur kernels. In one example, image segmentation is performed on the first image P1 to obtain at least one first sub-image, where the number of first sub-images may be one, two, three, or more; the specific number of first sub-images depends on the number of subjects to be blurred contained in the first image P1. For example, if the first image P1 includes three subjects to be blurred, then blurring the first image P1 requires convolution with three different types of blur kernels, so that the second image P2 is a blurred image containing three different types of blur. In this way the generating apparatus 100 (shown in fig. 2) can reproduce the various kinds of blur produced by various kinds of targets in real scenes, so that a deblurring model trained on the set through deep learning is less prone to overfitting and the blur types it learns have high practical value.
Specifically, the first image P1 may be segmented into at least one first sub-image by a threshold segmentation method. The threshold segmentation method uses the following transformation: g(i, j) = 1 when f(i, j) ≥ T, and g(i, j) = 0 when f(i, j) < T, where T is the threshold, g is the first sub-image (mask), and f is the first image P1; g(i, j) = 1 marks a pixel of the object and g(i, j) = 0 marks a pixel of the background. The threshold T may be determined by global thresholding, adaptive thresholding, or optimal thresholding. For example, when the background and foreground of the first image P1 contrast strongly, a global threshold may be used for segmentation. For another example, when the contrast between object and background varies across the first image P1, segmentation may be performed with different thresholds according to the local features of the first image P1. Preferably, the first image P1 may be segmented using adaptive thresholds, i.e., different thresholds are selected for different regions of the first image P1, or the threshold at each point is dynamically selected according to a certain neighborhood for image segmentation.
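A minimal sketch of the global and adaptive variants described above, assuming a grayscale image f as a NumPy array; the neighborhood size and the offset c of the adaptive variant are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def threshold_mask(f: np.ndarray, t: float) -> np.ndarray:
    """Global threshold: g(i, j) = 1 where f(i, j) >= T (object), else 0 (background)."""
    return (f >= t).astype(np.uint8)

def adaptive_threshold_mask(f: np.ndarray, block: int = 31, c: float = 2.0) -> np.ndarray:
    """Adaptive threshold: compare each pixel with the mean of its neighborhood."""
    local_mean = uniform_filter(f.astype(float), size=block)
    return (f >= local_mean - c).astype(np.uint8)
```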
After the first image P1 is segmented into a plurality of first sub-images, the one or more processors 10 (shown in fig. 2) convolve the first sub-images with first blur kernels H1 or second blur kernels H2 of different types to obtain second sub-images corresponding to the first sub-images, and finally fuse the second sub-images with the first image P1 to obtain a second image P2 corresponding to the first image P1. The specific convolution operation is consistent with the convolution principle described above and is not repeated here.
In some embodiments, the at least one first sub-image includes a plurality of different first sub-images, and different first sub-images are convolved with different first blur kernels H1 or second blur kernels H2; the subjects to be blurred in the first sub-images fall into a plurality of categories, and first sub-images whose subjects to be blurred belong to the same category are convolved with the same first blur kernel H1 or second blur kernel H2.
For example, the first image P1 is segmented into a plurality of different first sub-images, which may be convolved with different first blur kernels H1 or second blur kernels H2 so that the resulting second image P2 (blurred image) more closely resembles the blur of a real scene. For instance, if the scene from which the first image P1 (sharp image) was taken contains moving objects in a street, then, because different objects in the scene have different motion trajectories, the first image P1 needs to be segmented into a plurality of different first sub-images, and the first sub-images are convolved with different first blur kernels H1 or second blur kernels H2 to obtain a second image P2 (blurred image) that corresponds more closely to the first image P1 in that scene, so that the image sample pairs in the image sample set reflect motion-blurred sample pairs found in real scenes.
For another example, the first image P1 is segmented into a plurality of different first sub-images whose subjects to be blurred fall into a plurality of categories (e.g., people, animals, plants, etc.). If the subjects to be blurred in two of the segmented first sub-images are both stationary plants, those two first sub-images may be convolved with the same kind of first blur kernel H1 or second blur kernel H2, while a person or an animal may be convolved with another kind of first blur kernel H1 or second blur kernel H2, so that the second image P2 (blurred image) is closer to an actually captured blurred image of such a scene (containing people, animals, and plants) and the image sample pairs in the image sample set reflect motion-blurred sample pairs found in real scenes.
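Putting 051 and 053 together, a sketch of blurring each segmented region with its own kernel and fusing the results into P2 might look as follows, reusing the blur() helper sketched earlier; the mask-based fusion is one simple interpretation of the fusing step, not the application's prescribed method:

```python
import numpy as np

def nonuniform_blur(p1: np.ndarray, masks: list, kernels: list) -> np.ndarray:
    """Blur each segmented region of P1 with its own kernel and fuse into P2."""
    p2 = p1.astype(float).copy()
    for mask, kernel in zip(masks, kernels):
        # 051: convolve to get the second sub-image for this subject's region
        blurred = blur(p1.astype(float), kernel)
        m = mask.astype(bool)
        # 053: fuse the second sub-image back into the image at the masked pixels
        p2[m] = blurred[m]
    return p2
```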
Referring to fig. 9, the present application further provides a non-volatile computer-readable storage medium 200 containing a computer program 201. The computer program 201, when executed by the one or more processors 10, causes the one or more processors 10 to perform the generation methods in 01, 02, 03, 031, 033, 05, 051, and 053.
For example, the computer program 201, when executed by the one or more processors 10, causes the processors 10 to perform the generation method of:
01: acquiring a first image P1, the first image P1 having a first resolution;
03: processing a preset first blur kernel H1 to obtain a second blur kernel H2, wherein the second blur kernel H2 differs from the first blur kernel H1 in type or degree of blur; and
05: convolving the first image P1 with the first blur kernel H1 or the second blur kernel H2 to obtain a second image P2 corresponding to the first image P1, the second image P2 having a second resolution, the first resolution being greater than the second resolution, the first image P1 and the second image P2 forming an image sample pair, performing the convolution operation cyclically to obtain a plurality of image sample pairs, and collecting the plurality of image sample pairs into an image sample set.
As another example, the computer program 201, when executed by the one or more processors 10, causes the processors 10 to perform the generation method as follows:
01: acquiring a first image P1, the first image P1 having a first resolution;
031: performing one or more of translation, scaling, rotation, or affine transformation on the first blur kernel H1 to obtain an intermediate blur kernel;
033: calculating the values of each point of the second blur kernel H2 according to the values of each point of the intermediate blur kernel and the values of each point of the first blur kernel H1, to obtain the second blur kernel H2;
05: convolving the first image P1 with the first blur kernel H1 or the second blur kernel H2 to obtain a second image P2 corresponding to the first image P1, the second image P2 having a second resolution, the first resolution being greater than the second resolution, the first image P1 and the second image P2 forming an image sample pair, performing the convolution operation cyclically to obtain a plurality of image sample pairs, and collecting the plurality of image sample pairs into an image sample set.
As another example, the computer program 201, when executed by the one or more processors 10, causes the processors 10 to perform the generation method as follows:
01: acquiring a first image P1, the first image P1 having a first resolution;
02: processing the first image P1 to obtain at least one first sub-image, the first sub-image including a subject to be blurred;
03: processing a preset first blur kernel H1 to obtain a second blur kernel H2, wherein the second blur kernel H2 differs from the first blur kernel H1 in type or degree of blur;
051: performing a convolution operation on the first sub-image by using the first blur kernel H1 or the second blur kernel H2 to obtain a second sub-image; and
053: the second sub-image and the first image P1 are fused to obtain a second image P2.
Also for example, the computer program 201, when executed by the one or more processors 10, causes the processors 10 to perform the generation method of:
01: acquiring a first image P1, the first image P1 having a first resolution;
02: processing the first image P1 to obtain at least one first sub-image, the first sub-image including a subject to be blurred;
031: performing one or more of translation, scaling, rotation, or affine transformation on the first blur kernel H1 to obtain an intermediate blur kernel;
033: calculating the values of each point of the second blur kernel H2 according to the values of each point of the intermediate blur kernel and the values of each point of the first blur kernel H1, to obtain the second blur kernel H2;
051: performing a convolution operation on the first sub-image by using the first blur kernel H1 or the second blur kernel H2 to obtain a second sub-image; and
053: the second sub-image and the first image P1 are fused to obtain a second image P2.
In the description herein, references to the description of the terms "certain embodiments," "one example," "exemplary," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and the scope of the preferred embodiments of the present application includes other implementations in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present application.
Although embodiments of the present application have been shown and described above, it is to be understood that the above embodiments are exemplary and not to be construed as limiting the present application, and that changes, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.