WO2021008322A1 - Image processing method, apparatus, and device - Google Patents

Image processing method, apparatus, and device

Info

Publication number
WO2021008322A1
WO2021008322A1 (PCT/CN2020/098208; CN2020098208W)
Authority
WO
WIPO (PCT)
Prior art keywords
image
filter
texture
size
super
Prior art date
Application number
PCT/CN2020/098208
Other languages
English (en)
French (fr)
Inventor
林焕
季杰
吴江铮
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 filed Critical 华为技术有限公司
Priority to EP20841415.1A priority Critical patent/EP3992903A4/en
Publication of WO2021008322A1 publication Critical patent/WO2021008322A1/zh
Priority to US17/574,185 priority patent/US20220138906A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformations in the plane of the image
    • G06T 3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 3/4038 Image mosaicing, e.g. composing plane images from plane sub-images
    • G06T 3/4046 Scaling using neural networks
    • G06T 3/4053 Scaling based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • G06T 3/4069 Super-resolution by subpixel displacements
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20021 Dividing image into blocks, subimages or windows
    • G06T 2207/20024 Filtering details
    • G06T 2207/20081 Training; Learning
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G06T 2207/20212 Image combination
    • G06T 2207/20221 Image fusion; Image merging

Definitions

  • This application relates to the field of computer technology, in particular to an image processing method, device, and equipment.
  • Image super-resolution refers to the reconstruction of low-resolution images to obtain high-resolution images.
  • For example, low-resolution images can be super-resolved through neural networks to obtain high-resolution images.
  • SRCNN (Super-Resolution Convolutional Neural Network)
  • ESRGAN (Enhanced Super-Resolution Generative Adversarial Networks)
  • VDSR (Very Deep network for Super-Resolution)
  • However, the above-mentioned neural networks only super-resolve images of certain texture types well. For example, a network may perform good super-resolution on building-type images
  • but cannot perform good super-resolution on face images.
  • That is, existing image super-resolution methods have low reliability in image processing.
  • This application provides an image processing method, device, and equipment, which improve the reliability of image processing.
  • an embodiment of the present application provides an image processing method.
  • The image filter corresponding to the first image can be determined, and the first image can be super-resolved using this image filter to obtain a super-resolution image of the first image.
  • the image filter has filter parameters corresponding to each pixel in the first image, and pixels with different texture characteristics have different filter parameters.
  • the determined image filter has different filter parameters corresponding to pixels with different texture characteristics.
  • Since the determined image filter includes a filter parameter corresponding to each pixel in the first image, and pixels with different texture characteristics correspond to different filter parameters, the filter can apply different super-resolution processing to pixels with different texture characteristics in the first image. The super-resolution processing applied to each pixel is therefore related to that pixel's own texture characteristics, which improves the super-resolution effect on the image and the reliability of image processing.
  • The image filter corresponding to the first image may be determined as follows: obtain the texture image of the first image; determine C local texture images in the texture image, and a weight value for each local texture image, according to the texture characteristics of each pixel in the texture image; and determine the image filter from the C local texture images and their weight values.
  • The preset magnification can also be replaced by other factors, which can be determined according to the super-resolution requirements.
  • The (x, y)-th filter parameter in the i-th channel of the image filter is the product of the pixel value of the (x, y)-th pixel in the i-th local texture image and the weight value of the i-th local texture image, where i is a positive integer less than or equal to C, x is a positive integer less than or equal to M, y is a positive integer less than or equal to N, M is the number of pixels the first image includes in the horizontal direction, N is the number of pixels the first image includes in the vertical direction, and M and N are each integers greater than 1.
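The per-channel construction above can be sketched in a few lines. This is a minimal illustration assuming NumPy arrays; `build_image_filter` and the random inputs are hypothetical names, not from the application:

```python
import numpy as np

def build_image_filter(local_textures, weights):
    # Channel i of the filter is local texture image i scaled by its weight,
    # so the (x, y) parameter of channel i is the product described above.
    return np.stack([w * img for img, w in zip(local_textures, weights)])

M, N, C = 4, 3, 5
local_textures = [np.random.rand(M, N) for _ in range(C)]  # C local texture images
weights = np.random.rand(C)                                # one weight per image
filt = build_image_filter(local_textures, weights)
assert filt.shape == (C, M, N)
# Spot-check the defining identity: filt[i, x, y] == weight_i * texture_i[x, y]
assert np.allclose(filt[2], weights[2] * local_textures[2])
```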
  • In the above solution, the texture image of the first image is acquired first, multiple local texture images and a weight value for each are determined in the texture image, and the image filter is determined from the local texture images and their weight values. Because the textures within each local texture image share the same texture features, the resulting image filter has a filter parameter corresponding to each pixel in the first image, and pixels with different texture features correspond to different filter parameters. When the first image is super-resolved through this image filter, different pixels in the first image can undergo different super-resolution processing, which improves the reliability of image processing.
  • The texture image of the first image may be obtained as follows: compress the first image to a preset size, and determine the texture image from the compressed first image.
  • Since the compressed first image includes fewer pixels, the determined texture image also has fewer pixels. This reduces the amount of data in subsequent processing of the texture image and thus improves the efficiency of image processing.
  • The image filter corresponding to the first image may also be determined as follows: the first image is processed through a recognition model to obtain the image filter of the first image. The recognition model is learned from multiple groups of samples; each group includes a first sample image and a second sample image with the same image content, where the resolution of the first sample image is greater than that of the second sample image.
  • The recognition model can be learned as follows: multiple sets of sample images are input to the recognition model, which learns from them. Since the sets include sample images with many texture characteristics, the model can learn the sample image filters (i.e., filter parameters) that super-resolve a low-resolution sample image (the second sample image) of a given texture characteristic into the corresponding high-resolution sample image (the first sample image). After learning from the sets, the recognition model has the function of determining the image filter of an image. Note that only when the filter parameters corresponding to pixels with different texture characteristics differ does the image filter produce a high-resolution image with a good super-resolution effect.
  • The sample high-resolution images in the sets of sample images are high-resolution images with a good super-resolution effect. Therefore, after learning from the above sets, the recognition model can output an image filter with the following characteristics: the filter has a filter parameter corresponding to each pixel in the image, and pixels with different texture characteristics correspond to different filter parameters.
  • the recognition model is learned in advance, and the image filter of the first image can be obtained through the recognition model.
  • The recognition model can learn, for images with various texture characteristics, the filter parameters that map low-resolution images to high-resolution images. Therefore, in the image filter output by the recognition model, the filter parameters corresponding to pixels with different texture characteristics in the first image are different, and different pixels in the first image can undergo different super-resolution processing according to the image filter, which improves the reliability of image processing.
  • The recognition model can process the first image as follows: compress the first image to a preset size, and process the compressed first image through the recognition model.
  • Since the compressed first image includes fewer pixels, the recognition model needs to process fewer pixels to obtain the image filter of the first image, which improves the efficiency with which the recognition model determines the image filter.
  • The first image may be super-resolved as follows: obtain a gradient image of the first image, where the gradient image has the same size as the first image, namely M*N, and M and N are each integers greater than 1; process the gradient image through the image filter to obtain a second image of size (f*M)*(f*N); enlarge the first image f times to obtain a third image of size (f*M)*(f*N); and obtain the super-resolution image, of size (f*M)*(f*N), from the second image and the third image.
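The four steps above can be sketched as follows. This is only an illustration: the application does not fix the gradient operator, the enlargement method, or how the second and third images are combined, so a finite-difference gradient, nearest-neighbour enlargement (standing in for both the filtering step and the f-times enlargement), and simple addition for the fusion are all assumptions:

```python
import numpy as np

f, M, N = 2, 4, 4

def upscale(img, factor):
    # Nearest-neighbour enlargement; each pixel becomes a factor x factor block.
    return np.kron(img, np.ones((factor, factor)))

first = np.random.rand(M, N)                 # low-resolution first image, M x N
gradient = np.hypot(*np.gradient(first))     # assumed gradient image, M x N
second = upscale(gradient, f)                # placeholder for the filtering step
third = upscale(first, f)                    # first image enlarged f times
super_img = second + third                   # assumed fusion: detail plus base
assert super_img.shape == (f * M, f * N)
```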
  • In the above solution, the gradient image of the first image is first processed by the image filter to obtain the second image, and the super-resolution image is then obtained from the second image and the third image (the first image enlarged f times).
  • Since the data amount of the gradient image is smaller than that of the first image, the gradient image can be processed quickly through the image filter, and the super-resolution image can be determined quickly.
  • Processing the gradient image through the image filter to obtain the second image includes: processing the gradient image through the image filter to obtain f² sub-images, each of size (f*M)*(f*N), and determining the second image based on the f² sub-images.
  • different sub-images can represent the fine texture characteristics of different regions in the first image. Therefore, the determined second image can represent the fine texture characteristics of the first image.
  • Processing the gradient image through the image filter of the first image to obtain f² sub-images includes: processing the gradient image through the parameters in channels k*t+1 to (k+1)*t of the image filter to obtain the k-th sub-image, with k taking the values 0, 1, ..., f²-1 in turn.
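A sketch of the channel grouping described above, using 0-based slicing in place of the 1-based channel numbering; summing the t per-pixel parameters is only a stand-in for the actual (unspecified) filtering operation:

```python
import numpy as np

f, t, M, N = 2, 3, 4, 4
C = f * f * t                                   # filter has f^2 * t channels
filt = np.random.rand(C, M, N)                  # image filter, one M x N plane per channel
gradient = np.random.rand(M, N)                 # gradient image of the first image

# Channels k*t .. (k+1)*t - 1 (0-based) produce the k-th sub-image; the t
# per-pixel parameters are applied to the gradient image and summed here as a
# placeholder for the real per-pixel filtering.
subs = [(filt[k * t:(k + 1) * t] * gradient).sum(axis=0) for k in range(f * f)]
assert len(subs) == f * f
assert all(s.shape == (M, N) for s in subs)
```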
  • Since each sub-image is determined from the parameters in different channels of the image filter, and the parameters in different channels act on the texture characteristics of different regions of the first image, each sub-image can represent the fine texture features of a different region of the first image.
  • Determining the second image based on the f² sub-images includes: splicing the pixels at the same pixel position in the f² sub-images to obtain M*N image blocks, each of size f*f; and splicing the M*N image blocks, according to the pixel positions of the pixels of each image block in the sub-images, to obtain the second image.
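The two splicing steps amount to a depth-to-space rearrangement. The sketch below assumes sub-images of size M*N so that the f*f blocks tile into an (f*M)*(f*N) image; the function name is illustrative:

```python
import numpy as np

def merge_subimages(subs, f):
    # subs: array of shape (f*f, M, N). The pixels at the same (x, y) position
    # in the f^2 sub-images form one f x f image block; the M*N blocks are then
    # tiled in order to build the (f*M) x (f*N) second image.
    _, M, N = subs.shape
    blocks = subs.reshape(f, f, M, N)                 # one f x f block per pixel position
    return blocks.transpose(2, 0, 3, 1).reshape(f * M, f * N)

f, M, N = 2, 3, 3
subs = np.arange(f * f * M * N).reshape(f * f, M, N)  # f^2 sub-images
second = merge_subimages(subs, f)
assert second.shape == (f * M, f * N)
# The f x f block at pixel position (0, 0) holds the (0, 0) pixel of each sub-image.
assert second[0, 0] == subs[0, 0, 0] and second[1, 1] == subs[3, 0, 0]
```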
  • Since each sub-image can represent the fine texture characteristics of a different region in the first image, splicing the second image by the above method not only makes the second image larger but also lets it represent the fine texture features of the first image.
  • In a second aspect, an embodiment of the present application provides an image processing apparatus, including a determining module and a super-resolution module, wherein:
  • the determining module is configured to determine an image filter corresponding to the first image, the image filter having filter parameters corresponding to each pixel in the first image, and filter parameters corresponding to pixels with different texture characteristics different;
  • the super-resolution module is configured to perform super-resolution processing on the first image according to the image filter of the first image to obtain a super-resolution image of the first image.
  • the determining module is specifically configured to:
  • the image filter is determined according to the C local texture images and the weight value of each local texture image, and the number of channels of the image filter is C.
  • The (x, y)-th filter parameter in the i-th channel of the image filter is the product of the pixel value of the (x, y)-th pixel in the i-th local texture image and the weight value of the i-th local texture image;
  • where i is a positive integer less than or equal to C, x is a positive integer less than or equal to M, y is a positive integer less than or equal to N, M is the number of pixels the first image includes in the horizontal direction, N is the number of pixels the first image includes in the vertical direction, and M and N are integers greater than 1.
  • the determining module is specifically configured to:
  • the texture image is determined according to the compressed first image.
  • the determining module is specifically configured to:
  • the first image is processed by a recognition model to obtain an image filter of the first image.
  • the recognition model is obtained by learning from multiple sets of samples, and each set of samples includes a first sample image and a second sample image
  • the image content in the first sample image and the second sample image are the same, and the resolution of the first sample image is greater than the resolution of the second sample image.
  • the determining module is specifically configured to:
  • the compressed first image is processed by the recognition model.
  • the super-resolution module is specifically configured to:
  • the gradient image has the same size as the first image, namely M*N, where M and N are each integers greater than 1;
  • the gradient image is processed through the image filter to obtain a second image of size (f*M)*(f*N), where f is a preset magnification;
  • the super-resolution image is obtained according to the second image and the third image, and its size is (f*M)*(f*N).
  • the super-resolution module is specifically configured to:
  • the gradient image is processed through the image filter to obtain f² sub-images, each of size (f*M)*(f*N);
  • the second image is determined according to the f² sub-images.
  • the super-resolution module is specifically configured to:
  • the gradient image is processed through the parameters in channels k*t+1 to (k+1)*t of the image filter to obtain the k-th sub-image, with k taking the values 0, 1, ..., f²-1 in turn.
  • the super-resolution module is specifically configured to:
  • the M*N image blocks are stitched together to obtain the second image.
  • an embodiment of the present application provides a computer system, including: a memory, a processor, and a computer program.
  • the computer program is stored in the memory.
  • the processor runs the computer program and executes the following steps:
  • the processor is specifically configured to:
  • the image filter is determined according to the C local texture images and the weight value of each local texture image, and the number of channels of the image filter is C.
  • The (x, y)-th filter parameter in the i-th channel of the image filter is the product of the pixel value of the (x, y)-th pixel in the i-th local texture image and the weight value of the i-th local texture image;
  • where i is a positive integer less than or equal to C, x is a positive integer less than or equal to M, y is a positive integer less than or equal to N, M is the number of pixels the first image includes in the horizontal direction, N is the number of pixels the first image includes in the vertical direction, and M and N are integers greater than 1.
  • the processor is specifically configured to:
  • the texture image is determined according to the compressed first image.
  • the processor is specifically configured to:
  • the first image is processed by a recognition model to obtain an image filter of the first image.
  • the recognition model is obtained by learning from multiple sets of samples, and each set of samples includes a first sample image and a second sample image
  • the image content in the first sample image and the second sample image are the same, and the resolution of the first sample image is greater than the resolution of the second sample image.
  • the processor is specifically configured to:
  • the compressed first image is processed by the recognition model.
  • the processor is specifically configured to:
  • the gradient image has the same size as the first image, namely M*N, where M and N are each integers greater than 1;
  • the gradient image is processed through the image filter to obtain a second image of size (f*M)*(f*N), where f is a preset magnification;
  • the super-resolution image is obtained according to the second image and the third image, and its size is (f*M)*(f*N).
  • the processor is specifically configured to:
  • the gradient image is processed through the image filter to obtain f² sub-images, each of size (f*M)*(f*N);
  • the second image is determined according to the f² sub-images.
  • the processor is specifically configured to:
  • the gradient image is processed through the parameters in channels k*t+1 to (k+1)*t of the image filter to obtain the k-th sub-image, with k taking the values 0, 1, ..., f²-1 in turn.
  • the processor is specifically configured to:
  • the M*N image blocks are stitched together to obtain the second image.
  • an embodiment of the present application provides a computer-readable storage medium, the computer-readable storage medium includes a computer program, and the computer program is used to implement the image processing method according to any one of the first aspect.
  • an embodiment of the present application also provides a chip or integrated circuit, including: an interface circuit and a processor.
  • the processor is configured to call program instructions through the interface circuit to implement the image processing method according to any one of the first aspect.
  • the embodiments of the present application also provide a computer program or computer program product.
  • the computer program or computer program product includes computer-readable instructions; when the instructions are read and executed by one or more processors, the image processing method according to any one of the first aspect is implemented.
  • In the image processing method, device, and equipment provided by the embodiments of this application, the image filter of the first image can be determined first, and the first image is then super-resolved through the image filter to obtain the super-resolution image of the first image.
  • the image filter has filter parameters corresponding to each pixel in the first image, and pixels with different texture characteristics have different filter parameters.
  • FIG. 1 is a schematic diagram of a super-division image provided by an embodiment of the application
  • Figure 2 is a schematic diagram of a texture image provided by an embodiment of the application.
  • Figure 3 is a schematic diagram of a gradient image provided by an embodiment of the application.
  • FIG. 4 is a schematic diagram of an image filter provided by an embodiment of this application.
  • FIG. 5 is a schematic flowchart of a method for determining an image filter according to an embodiment of the application
  • FIG. 6 is a schematic diagram of a local texture image provided by an embodiment of the application.
  • FIG. 7 is a schematic diagram of another image filter provided by an embodiment of the application.
  • FIG. 8 is a schematic diagram of an image provided by an embodiment of the application.
  • FIG. 9 is a schematic flowchart of another method for determining an image filter according to an embodiment of the application.
  • FIG. 10 is a collection of sample images provided by an embodiment of the application.
  • FIG. 11 is a schematic flowchart of a method for acquiring a collection of sample images provided by an embodiment of the application.
  • FIG. 12 is a schematic flowchart of a method for super-division processing a first image by an image filter according to an embodiment of the application;
  • FIG. 13 is a schematic diagram of processing pixels through filter parameters according to an embodiment of the application.
  • FIG. 14 is another schematic diagram of processing pixels through filter parameters provided by an embodiment of the application.
  • FIG. 15 is a schematic diagram of a process of determining an image block provided by an embodiment of the application.
  • FIG. 16 is a schematic diagram of a splicing process of image blocks provided by an embodiment of the application.
  • FIG. 17 is a schematic flowchart of an image processing method provided by an embodiment of the application.
  • FIG. 18 is a schematic diagram of an image processing process provided by an embodiment of this application.
  • FIG. 19 is a schematic diagram of an application scenario provided by an embodiment of the application.
  • FIG. 20 is a schematic diagram of the process of generating and using a processing model provided by an embodiment of the application.
  • FIG. 21 is a schematic structural diagram of an image processing apparatus provided by an embodiment of the application.
  • FIG. 22 is a schematic diagram of the hardware structure of a computer system provided by an embodiment of the application.
  • FIG. 1 is a schematic diagram of a super-division image provided by an embodiment of the application. Please refer to Fig. 1, which includes the original image and the super-division image, wherein the super-division processing is performed on the original image to obtain the super-division image, and the original image and the super-division image have the same image content. For example, if the resolution of the original image is a*b and the magnification is 3, then the resolution of the super-division image is 3a*3b.
  • a refers to the number of pixels included in the image in the horizontal direction
  • b refers to the number of pixels included in the image in the vertical direction.
  • the resolution of the image can also be referred to as the size of the image.
  • Texture: refers to the patterns or lines on the surface of an object.
  • Texture image refers to an image that includes texture in the original image.
  • Figure 2 is a schematic diagram of a texture image provided by an embodiment of the application. See Figure 2, which includes the original image and the texture image.
  • the texture image includes the texture in the original image. It should be noted that FIG. 2 only schematically illustrates one type of texture image in the original image.
  • The types of texture images may include the local binary patterns (LBP) type, the Gaussian Markov random field (GMRF) type, the gray-level co-occurrence matrix (GLCM) type, etc.
  • Texture characteristics: the characteristics of a texture; different objects in an image have different texture characteristics. For example, refer to Figure 2.
  • the texture characteristics of the sky, the texture characteristics of the building, the texture characteristics of the hair, the texture characteristics of the face, the texture characteristics of the clothes, and the texture characteristics of the grass are all different.
  • FIG. 3 is a schematic diagram of a gradient image provided by an embodiment of the application. Please refer to Figure 3, including the original image and the gradient image.
  • The original image usually has three channels, while the gradient image has one. Therefore, processing the gradient image of the original image can reduce the amount of calculation; for example, the calculation amount for the gradient image is one third of that for the original image.
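A minimal sketch of producing a one-channel gradient image from a three-channel original; the application does not specify a gradient operator, so a finite-difference gradient magnitude is assumed here:

```python
import numpy as np

def gradient_image(rgb):
    # Collapse the three channels to one, then take a simple finite-difference
    # gradient magnitude (one of many possible gradient operators).
    gray = rgb.mean(axis=2)
    gy, gx = np.gradient(gray)
    return np.hypot(gx, gy)

img = np.random.rand(6, 8, 3)     # three-channel original image
grad = gradient_image(img)
assert grad.shape == (6, 8)       # one channel: one third the data to process
```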
  • Image filter: can process an image to improve its resolution.
  • the image filter involved in this application is a three-dimensional filter, and the three dimensions can be denoted as H, W, and C, respectively.
  • FIG. 4 is a schematic diagram of an image filter provided by an embodiment of the application. Referring to Figure 4, the image filter is three-dimensional: its horizontal size is W, its vertical size is H, and its number of channels is C. Each square in Figure 4 represents one filter parameter, so the image filter includes H*W*C filter parameters. Processing an image through an image filter is, in essence, processing the image through the filter parameters in the image filter.
  • Downsampling refers to compression processing, for example, downsampling an image refers to compression processing on the image.
  • Up-sampling refers to enlargement processing; for example, up-sampling an image refers to enlarging the image.
  • In the embodiments of this application, the image filter of the first image can be determined first, and the first image is then super-resolved through the image filter to obtain the super-resolution image of the first image.
  • the image filter includes filter parameters corresponding to each pixel in the first image, and pixels with different texture characteristics in the first image have different filter parameters.
  • FIGS. 5 to 8 illustrate one way of determining an image filter, and FIGS. 9 to 11 illustrate another.
  • FIG. 5 is a schematic flowchart of a method for determining an image filter according to an embodiment of the application. Referring to Figure 5, the method may include:
  • S501 Acquire a texture image of the first image.
  • the first image is an image to be subjected to super-division processing, and the first image is usually an image with a lower resolution.
  • the texture image of the first image can be obtained through a convolutional neural network.
  • the texture image of the first image can be obtained through any one of the LBP model, the GMRF model, or the GLCM model.
  • For example, the first image can first be compressed to a preset size (i.e., the first image is down-sampled to obtain an image of the preset size), and the texture image of the compressed first image can then be obtained.
  • the preset size can be 256*256, 512*512, etc.
  • the preset size can be set according to actual needs. It should be noted that the size of the image involved in this application refers to the resolution of the image.
  • For example, a first image of size M*N includes M pixels in the horizontal direction and N pixels in the vertical direction.
  • the size of the texture image of the compressed first image is the same as the size of the compressed first image. For example, assuming that the size of the first image is M*N and the size of the compressed first image is M1*N1, the size of the texture image of the compressed first image is M1*N1. Since the compressed first image includes fewer pixels, its texture image can be obtained quickly. Further, the texture image of the compressed first image also includes fewer pixels, which reduces the amount of data for subsequent processing of the texture image.
  • S502 Determine C local texture images and a weight value of each local texture image in the texture image according to the texture feature of each pixel in the texture image.
  • C = f²×t, where f is a preset magnification and t is the number of filter parameters corresponding to each pixel in the first image; f is greater than 1, and t is an integer greater than or equal to 1.
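The channel-count relation above (C equals f squared times t) can be checked with a short sketch; the function name is an illustrative assumption, not part of the embodiments:

```python
def filter_channels(f: int, t: int) -> int:
    """Number of channels C of the image filter: C = f^2 * t,
    where f is the preset magnification (f > 1) and t is the number
    of filter parameters corresponding to each pixel (t >= 1)."""
    if f <= 1 or t < 1:
        raise ValueError("requires f > 1 and t >= 1")
    return f * f * t

# With a magnification of 3 and 9 filter parameters per pixel
# (a 3x3 kernel), the filter needs 81 channels.
print(filter_channels(3, 9))  # -> 81
```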
  • each local texture image includes a part of textures in the texture image, and the texture features of the textures in one local texture image are the same.
  • a local texture image includes only human face textures, and the texture features of the human face textures are the same, or a local texture image includes only sky textures, and the texture features of the sky textures are the same.
  • the size of the local texture image may be equal to the size of the texture image. If the first image is compressed to the preset size in S501, the size of the texture image and the size of each local texture image are both the preset size; if the first image is not compressed in S501, the size of the texture image and the size of each local texture image are the same as the size of the first image.
  • FIG. 6 is a schematic diagram of a local texture image provided by an embodiment of the application. Please refer to Figure 6. Assuming that C is 5, 5 local texture images can be determined in the texture image, which are respectively denoted as: local texture image 1, local texture image 2, local texture image 3, local texture image 4, and local texture image 5.
  • the local texture image 1 is a local texture image corresponding to the building, and the local texture image 1 includes the texture corresponding to the building.
  • the local texture image 2 is a local texture image corresponding to the sky, and the local texture image 2 includes a texture corresponding to the sky.
  • the local texture image 3 is a local texture image corresponding to the grass, and the local texture image 3 includes the texture corresponding to the grass.
  • the local texture image 4 is a local texture image corresponding to the car, and the local texture image 4 includes the texture corresponding to the car.
  • the local texture image 5 is a local texture image corresponding to a cloud, and the local texture image 5 includes a texture corresponding to the cloud.
  • FIG. 6 only illustrates part of the local texture image determined in the texture image by way of example.
  • other local texture images can also be determined in the texture image, which is not specifically limited in the embodiment of the present application.
  • a local texture image can correspond to a weight value.
  • the weight value of the local texture image can be a number between 0-1.
  • the weight value of the local texture image may be determined according to the types of objects included in the local texture image.
  • the corresponding relationship between the object type and the weight value can be preset, and accordingly, the weight value of the local texture image can be determined according to the object type in the local texture image and the corresponding relationship.
  • the object type of the object in the local texture image can be identified, and the weight value of the local texture image can be determined according to the corresponding relationship between the object type and the weight value.
  • the weight value of the local texture image may be determined according to the texture feature of the texture in the local texture image.
  • the corresponding relationship between the texture feature and the weight value can be preset, and accordingly, the weight value of the local texture image can be determined according to the texture feature of the texture in the local texture image and the corresponding relationship.
  • the weight value of each local texture image can also be determined through a preset model, which is obtained by learning from multiple sets of sample images.
  • Each set of sample images may include sample low-resolution images and corresponding sample high-resolution images.
  • Multiple sets of sample images can be input to the preset model, and the preset model can learn, from the super-division of each sample low-resolution image to the corresponding sample high-resolution image, the weight value of the sample local texture image corresponding to each sample low-resolution image.
  • the preset model may have the function of determining the weight value of each local texture image. Therefore, a local texture image can be input to the preset model so that the preset model outputs the weight value of that local texture image.
  • S503 Determine an image filter according to the C local texture images and the weight value of each local texture image, the image filter is a three-dimensional image filter, and the number of channels of the image filter is C.
  • each local texture image can be multiplied by the corresponding weight value to obtain an updated local texture image.
  • Each updated local texture image corresponds to a channel of the image filter, and the pixel value of an updated local texture image is a filter parameter in a channel of the image filter.
  • the horizontal size (W) of the image filter is the same as the horizontal size of the local texture image (the number of pixels included in each row of the local texture image), and the vertical size (H) of the image filter is the same as the vertical size of the local texture image (the number of pixels included in each column of the local texture image).
  • the (x,y)th filter parameter in the i-th channel of the image filter is the product of the pixel value of the (x,y)th pixel in the i-th local texture image and the weight value of the i-th local texture image, where i is a positive integer less than or equal to C, x is a positive integer less than or equal to M, y is a positive integer less than or equal to N, M is the number of pixels in the first image in the horizontal direction, N is the number of pixels in the first image in the vertical direction, and M and N are each integers greater than 1.
  • the pixel values of the five updated local texture images can be arranged in the manner shown in FIG. 7, where the pixel values of each updated local texture image are the filter parameters of one channel in the image filter.
  • FIG. 7 only illustrates the relationship between the local texture image and the image filter, and does not represent the true form of the image filter.
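As a hedged sketch of S503 (the function name and array layout are assumptions for illustration, not the patented implementation), stacking the weighted local texture images channel-wise yields the three-dimensional filter:

```python
import numpy as np

def build_image_filter(local_textures, weights):
    """Each of the C local texture images, multiplied by its weight
    value, supplies the filter parameters of one channel of the
    (H, W, C) image filter.

    local_textures: list of C arrays of shape (H, W)
    weights: list of C weight values, each between 0 and 1
    """
    channels = [tex * w for tex, w in zip(local_textures, weights)]
    return np.stack(channels, axis=-1)

# The (x, y)-th parameter of channel i is the product of the (x, y)-th
# pixel of local texture image i and that image's weight value.
```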
  • the filter parameter of the determined image filter is related to the object type in the image, so that when the image is processed by the image filter, This enables a smooth transition between different areas in the processed image, and avoids obvious dividing lines between different areas in the processed image.
  • FIG. 8 is a schematic diagram of an image provided by an embodiment of the application.
  • the original image is subjected to super-division processing by the image filter 1 to obtain the super-division image 1
  • the original image is subjected to super-division processing by the image filter 2 to obtain the super-division image 2.
  • for the image filter 1, the weight values of the local texture images are not used, that is, the image filter 1 is determined directly from the local texture images.
  • for the image filter 2, the weight values of the local texture images are used, that is, the image filter 2 is determined according to the local texture images and their weight values.
  • the texture image of the first image is acquired first, C local texture images and the weight value of each local texture image are determined in the texture image, and the image filter is determined according to the local texture images and their weight values. Since the texture features of the textures in one local texture image are the same, the image filter obtained in this way has filter parameters corresponding to each pixel in the first image, and the filter parameters corresponding to pixels with different texture features are different. When super-division processing is performed on the first image through the image filter, different pixels in the first image can receive different super-division processing according to the image filter, thereby improving the reliability of image processing.
  • FIG. 9 is a schematic flowchart of another method for determining an image filter according to an embodiment of the application. Referring to Figure 9, the method may include:
  • S901 Perform compression processing on the first image according to a preset size, and the size of the first image after compression processing is the preset size.
  • when the size of the first image is greater than the preset size, the first image may be compressed; when the size of the first image is smaller than the preset size, the first image may not be compressed.
  • the preset size can be 256*256, 512*512, etc. In the actual application process, the preset size can be set according to actual needs.
  • S902 Process the first image through the recognition model to obtain an image filter of the first image.
  • S901 is an optional step.
  • When S901 is executed, the compressed first image is processed by the recognition model in S902. In this way, the data processing volume of the recognition model can be reduced, improving the efficiency with which the recognition model outputs the image filter.
  • When S901 is not executed, the original first image is processed in S902.
  • the recognition model is learned from multiple sets of samples.
  • Each set of samples includes a first sample image and a second sample image.
  • the first sample image and the second sample image have the same image content, and the resolution of the first sample image is greater than the resolution of the second sample image.
  • the first sample image may be a high-resolution image
  • the second sample image may be a low-resolution image.
  • the resolution of the first sample image is greater than or equal to the first threshold
  • the resolution of the second sample image is less than or equal to the second threshold.
  • the recognition model may be learned in advance. The following describes the process of learning to obtain the recognition model.
  • the sample image set may be as shown in FIG. 10.
  • Figure 10 is a collection of sample images provided by an embodiment of the application. Referring to FIG. 10, the sample image set includes multiple animal sample images, multiple human face sample images, and multiple sky sample images.
  • FIG. 11 is a schematic flowchart of a method for acquiring a collection of sample images provided by an embodiment of the application.
  • Each data set includes multiple initial sample images.
  • the initial sample images in each data set are input to the data set processor, and the data set processor processes the initial sample images in each data set to obtain a sample image set.
  • the initial sample images in each data set can be image segmented (patch extraction) to extract the required sample images from the initial sample images.
  • an initial sample image including human faces, sky, and buildings is segmented, and a sample image including only human faces, a sample image including only sky, and a sample image including only buildings are segmented.
  • the sample image obtained by image segmentation is rotated, stretched, and scaled by the data augmentation module to obtain a sample image set.
  • the sample image set may be denoted Dataset_AllData = Σ Texture_info, where Texture is a texture feature and AllData is the collection of texture features covered by the sample image set.
  • After the sample image set is obtained, multiple sets of sample images can be determined in the sample image set. For example, for a sample high-resolution image in the sample image set, the sample high-resolution image can be compressed to obtain the corresponding sample low-resolution image; or, for a sample low-resolution image in the sample image set, the sample low-resolution image can be processed to obtain the corresponding sample high-resolution image.
  • multiple sets of sample images can be input to the recognition model.
  • the recognition model can learn from multiple sets of sample images. Since the multiple sets of sample images include sample images with multiple texture characteristics, the recognition model can learn, for the super-division of sample low-resolution images (second sample images) with various texture characteristics to sample high-resolution images (first sample images), the sample image filters (or filter parameters) corresponding to the sample low-resolution images. After the recognition model has learned from the multiple sets of sample images, it has the function of determining the image filter of an image.
  • after the recognition model learns from the above multiple sets of sample images, it has the function of outputting the image filter of an image, and the output image filter has the following characteristics: the image filter has filter parameters corresponding to each pixel in the image, and pixels with different texture characteristics have different filter parameters.
  • the data representing the first image, the preset magnification and the number t of filter parameters corresponding to each pixel can be input to the recognition model, and the recognition model outputs the image filter of the first image according to the received data .
  • the data representing the first image may be the first image itself, or a grayscale image of the first image, or the like.
  • the recognition model can determine the number of channels C of the image filter according to the preset magnification f and the number t of filter parameters corresponding to each pixel, and determine the image filter according to the number of channels C and the data representing the first image .
  • the number of channels of the image filter output by the recognition model is C.
  • DTF(H,W,C) = Conv(resize(Input)(H,W)), where Input is the input original image, resize(Input)(H,W) refers to scaling the input original image to size H*W, Conv represents the recognition model (a convolutional neural network), DTF(H,W,C) is the image filter output by the recognition model, H is the vertical size of the image filter, W is the horizontal size of the image filter, and C is the number of channels of the image filter.
  • the recognition model is obtained by learning in advance, and the image filter of the first image can be obtained through the recognition model.
  • the recognition model can learn, for images with various texture characteristics, the filter parameters for super-dividing low-resolution images to high-resolution images. Therefore, in the image filter output by the recognition model, the filter parameters corresponding to pixels with different texture characteristics in the first image are different, and different pixels in the first image can receive different super-division processing according to the image filter, thereby improving the reliability of image processing.
  • the image filter may perform super-division processing on the first image to obtain the super-division image of the first image.
  • FIG. 12 is a schematic flowchart of a method for super-division processing a first image by an image filter according to an embodiment of the application. See Figure 12, which includes:
  • S1201 Acquire a gradient image of the first image, where the gradient image has the same size as the first image, the size of the gradient image is M*N, and M and N are each integers greater than 1.
  • the gradient image of the first image can be obtained through a convolutional neural network.
  • data representing the first image may be input to a convolutional neural network, and the output of the convolutional neural network is a gradient image of the first image.
  • the first image is usually an RGB image or a YUV image, and GGF(H,W,1) refers to the output guided gradient image.
  • S1202 Process the gradient image through the image filter to obtain f 2 sub-images.
  • the size of each sub-image is (f*M)*(f*N).
  • first determine whether the H*W of the image filter (W is the number of filter parameters a channel includes in the horizontal direction, and H is the number of filter parameters a channel includes in the vertical direction) is consistent with the size of the gradient image, that is, whether W matches the gradient image in the horizontal direction and H matches the gradient image in the vertical direction. If the H*W of the image filter does not match the size of the gradient image, first adjust the H*W of the image filter to the size of the gradient image, so that the W of the adjusted image filter equals the number of pixels the gradient image includes in the horizontal direction, and the H of the adjusted image filter equals the number of pixels the gradient image includes in the vertical direction.
  • the gradient image is processed by the parameters in the k*t+1 to (k+1)*t channels in the image filter to obtain the k-th sub-image, and k is sequentially selected as 0, 1, ..., f 2 -1.
  • When acquiring each sub-image, first determine the corresponding channels in the image filter, and process the gradient image (each pixel in the gradient image) according to the filter parameters in those channels to obtain one sub-image. For example, the gradient image is processed through the filter parameters in the 1st to t-th channels of the image filter to obtain the 0th sub-image; through the filter parameters in the (t+1)-th to 2t-th channels to obtain the 1st sub-image; through the filter parameters in the (2t+1)-th to 3t-th channels to obtain the 2nd sub-image; and so on, until f² sub-images are obtained.
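The channel grouping above can be sketched as a slice over the channel axis (the 0-based slice layout of the C = f²·t channels is an assumption for illustration):

```python
import numpy as np

def sub_image_channels(image_filter, k, t):
    """Select the filter parameters used to compute the k-th sub-image.

    The text's 1-based "channels k*t+1 to (k+1)*t" become the 0-based
    slice k*t : (k+1)*t of an (H, W, C) image filter.
    """
    return image_filter[:, :, k * t:(k + 1) * t]

# With f = 2 and t = 2 (so C = 8), k runs over 0..3 and the four
# channel groups together cover the whole filter exactly once.
```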
  • the filter parameters corresponding to the pixels in the gradient image can be determined by the following two feasible implementation methods:
  • the coordinates of the pixels in the gradient image correspond to the (h, w) coordinates of the corresponding filter parameters.
  • h is a positive integer between 0 and H-1, and w is a positive integer between 0 and W-1.
  • the filter parameter corresponding to the pixel (0, 0) in the gradient image has h as 0 and w as 0.
  • the filter parameter corresponding to the pixel (1,2) in the gradient image has h as 1, and w as 2.
  • H*W of the image filter is different from the size of the gradient image.
  • the correspondence between the coordinates of the pixels in the gradient image and the (h, w) of the filter parameters can be determined according to the ratio of the H*W of the image filter to the size of the gradient image.
  • FIG. 13 is a schematic diagram of processing pixels through filter parameters according to an embodiment of the application.
  • FIG. 14 is another schematic diagram of processing pixels through filter parameters provided by an embodiment of the application.
  • assuming the magnification factor f is 3 and the number of filter parameters corresponding to one pixel is 9, the number of channels C of the image filter is 81 (only some channels are shown in FIG. 13).
  • the gradient image is processed according to the filter parameters in channels 1-9 in the image filter.
  • the matrix formed by the 9 filter parameters corresponding to the pixel (0,0) is shown in FIG. 13. Since part of the neighborhood of pixel (0,0) has no pixels, pixels with value 0 can be filled around pixel (0,0); the center (0.2) of the matrix is placed directly opposite pixel (0,0), the elements at corresponding positions are multiplied and added up, and the average value is then taken to obtain the pixel value at pixel (0,0).
  • the pixels are processed in the manner shown in FIG. 13.
  • the matrix formed by the 9 filter parameters corresponding to the pixel (1,1) is shown in FIG. 14. The center (0.1) of the matrix is placed directly opposite pixel (1,1), the elements at corresponding positions are multiplied and added up, and the average value is then taken to obtain the pixel value at pixel (1,1).
  • the pixels are processed in the manner shown in FIG. 14.
  • FIGS. 13-14 are only schematic illustrations of a process of processing pixels through filter parameters, and do not limit the processing of pixels through filter parameters.
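The multiply, add, and average operation of FIGS. 13-14 can be sketched for the 3×3 case with zero padding (the function name, array layout, and per-pixel kernel tensor are illustrative assumptions, not the patented implementation):

```python
import numpy as np

def apply_pixel_kernels(gradient, kernels):
    """Each pixel (y, x) of the gradient image has its own 3x3 matrix
    of filter parameters, kernels[y, x]. The matrix center is aligned
    with the pixel, missing neighbors are zero-padded, corresponding
    elements are multiplied and summed, and the mean of the 9 products
    is taken as the output value at that pixel."""
    h, w = gradient.shape
    padded = np.pad(gradient, 1)  # fill pixel value 0 around the border
    out = np.empty((h, w), dtype=float)
    for y in range(h):
        for x in range(w):
            patch = padded[y:y + 3, x:x + 3]       # 3x3 neighborhood
            out[y, x] = np.mean(patch * kernels[y, x])
    return out
```

With an all-ones gradient image and all-ones kernels, an interior pixel averages nine products of 1, while a corner pixel sees five zero-padded neighbors and averages 4/9.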
  • S1203 Perform splicing processing on pixels at the same pixel position in f 2 sub-images respectively to obtain M*N image blocks.
  • each image block is f*f.
  • for example, the f² pixels at coordinates (0,0) in the f² sub-images are spliced to obtain the image block corresponding to coordinates (0,0).
  • M*N image blocks are obtained.
  • FIG. 15 is a schematic diagram of a process of determining an image block provided by an embodiment of the application.
  • 9 sub-images can be obtained, which are respectively denoted as sub-image 1, sub-image 2, ..., sub-image 9.
  • Splicing the 9 pixels of the coordinates (0,0) in the 9 sub-images to obtain the image block 1 corresponding to the coordinates (0,0), and the 9 pixels of the coordinates (0,1) in the 9 sub-images Perform stitching to get the image block 2 corresponding to the coordinates (0,1),..., stitch the 9 pixels of the coordinates (2,2) in the 9 sub-images to get the image block corresponding to the coordinates (2,2) 9.
  • the numbers in the sub-images and image blocks shown in FIG. 15 are the labels of the pixels.
  • “11” in the sub-image 1 is the label of the pixel (0,0) in the sub-image 1, and “12” in the sub-image 1 is the label of the pixel (0,1) in the sub-image 1.
  • “11” in the image block 1 is the label of the pixel (0,0) in the image block 1, and “21” in the image block 1 is the label of the pixel (0,1) in the image block 1.
  • S1204 Perform splicing processing on the M*N image blocks according to the pixel positions of the pixels in each image block in the sub-image to obtain a second image.
  • the size of the second image is (f*M)*(f*N).
  • the placement position of each image block can be determined according to the pixel position of the pixel in each image block in the sub-image, and M*N image blocks can be spliced according to the placement position of each image block deal with.
  • the placement position of each image block corresponds to the pixel position of the pixel in the image block in the sub-image.
  • FIG. 16 is a schematic diagram of a stitching process for image blocks provided by an embodiment of the application.
  • 9 image blocks are determined to be obtained, which are respectively denoted as image block 1, image block 2, ..., image block 9.
  • the pixel position of each pixel in the 9 image blocks is as shown in Figure 15, then according to the pixel position of each pixel in the image block in the sub-image, the 9 image blocks are stitched to obtain the second image shown in Figure 16 .
  • the number in the image block and the second image shown in FIG. 16 is the label of the pixel.
  • “11” in the image block 1 is the label of the pixel (0,0) in the image block 1, and “21” in the image block 1 is the label of the pixel (0,1) in the image block 1.
  • “11” in the second image is the label of the pixel (0,0) in the second image, and “21” in the second image is the label of the pixel (0,1) in the second image.
  • the second image can be determined according to the image filter and the gradient image.
  • the second image can also be determined through other feasible implementation manners, which is not specifically limited in the embodiment of the present application.
  • the first image is enlarged to obtain a third image, and the size of the third image is (f*M)*(f*N).
  • the first image can be magnified by f times by means of bicubic interpolation to obtain the third image.
  • the first image may also be enlarged through other feasible implementation manners, which is not specifically limited in the embodiment of the present application.
  • the size of the super-division image is (f*M)*(f*N).
  • the pixel values in the third image and the pixel values in the second image may be correspondingly added to obtain a super-division image.
  • for example, the pixel value at the pixel position (0,0) in the third image and the pixel value at the pixel position (0,0) in the second image can be added together to serve as the pixel value at the pixel position (0,0) in the super-division image.
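A minimal sketch of this composition, using nearest-neighbor enlargement via np.kron as a hedged stand-in for the bicubic interpolation named above (the function name is an assumption):

```python
import numpy as np

def compose_super_division(first, second, f):
    """Enlarge the first image by a factor of f to get the third image
    (nearest-neighbor here, as a stand-in for bicubic interpolation),
    then add it pixel-wise to the second image to obtain the
    (f*M) x (f*N) super-division image."""
    third = np.kron(first, np.ones((f, f), dtype=first.dtype))
    assert third.shape == second.shape
    return third + second
```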
  • the gradient image of the first image is processed by the image filter to obtain the second image, and the super-division image is then obtained according to the second image and the third image obtained by enlarging the first image.
  • the data amount of the gradient image is smaller than the data amount of the first image. Therefore, the gradient image can be quickly processed through the image filter, and the super-division image can be determined quickly.
  • FIG. 17 is a schematic flowchart of an image processing method provided by an embodiment of the application. Referring to Figure 17, the method can include:
  • the filter parameters of the image filters corresponding to pixels with different texture characteristics in the first image are different.
  • the image filter may be determined by the method shown in the embodiments of FIGS. 5-8, or the image filter may be determined by the method shown in FIGS. 9-11, which will not be repeated here.
  • S1702 According to the image filter of the first image, perform super-division processing on the first image to obtain a super-division image of the first image.
  • super-division processing can be performed on the first image by the method shown in the embodiments of FIGS. 12-16 to obtain the super-division image of the first image, which will not be repeated here.
  • when it is necessary to obtain the super-division image of the first image, the image filter of the first image can be determined first, and then super-division processing is performed on the first image through the image filter to obtain the super-division image of the first image.
  • the filter parameters in the image filter have a corresponding relationship with the pixels in the first image, and the filter parameters of the image filters corresponding to the pixels with different texture characteristics in the first image are different.
  • FIG. 18 is a schematic diagram of an image processing process provided by an embodiment of this application.
  • the first image is down-sampled first, and the image filter is determined by the method shown in the embodiments of Figs. 5-8, or, The image filter is determined by the method shown in the embodiment of FIG. 9-11.
  • the image filter is up-sampled so that the H*W size of the up-sampled image filter is the same as the size of the first image.
  • the gradient image of the first image is also obtained, and the image filter and the gradient image are processed by the method shown in the embodiments of FIGS. 12-16 to obtain the second image.
  • the first image is also up-sampled to obtain an enlarged image, and the second image and the enlarged image are processed by the method shown in the embodiments of FIGS. 12-16 to obtain a super-division image.
  • the first image includes objects such as sky, clouds, cars, buildings, and grass.
  • the filter coefficients corresponding to different objects are different.
  • different super-division processing can be performed on different objects in the first image, thereby avoiding the jagged edges, white fringing, blur and other problems that arise when the same super-division processing is performed on all objects in the image, making the super-division image more real and natural and improving the reliability of image processing.
  • the image processing method shown in the foregoing embodiment can be applied to a variety of application scenarios.
  • the image processing method shown in the foregoing embodiment can be applied to a terminal device to display an image, that is, before the terminal device displays an image, super-division processing is performed on the image by the image processing method shown in the foregoing embodiment to make the image displayed by the terminal device clearer and more natural.
  • the image processing method shown in the above embodiment can also be applied to a video call scenario of a terminal device, that is, during the video call process between the terminal device and other terminal devices, the terminal device can receive video frames from the other terminal devices.
  • the terminal device may perform super-division processing on each image in the video frame by the method shown in the above embodiment, so as to make the video picture displayed by the terminal device clearer and more natural.
  • the image processing method shown in the above embodiment can also be applied to a scene where the terminal device plays a video, that is, in the process of playing the video, the terminal device can perform super-division processing on each frame of the video by the method shown in the above embodiment to make the video picture displayed by the terminal device clearer and more natural.
  • the image processing method shown in the above embodiment can also be applied to a game scene, that is, before the terminal device displays the game screen, super-division processing can be performed on the game screen by the method shown in the above embodiment to make the game screen displayed by the terminal device clearer and more natural.
  • the terminal equipment involved in the embodiment of the present application may be a mobile phone, a computer, a television, a vehicle-mounted terminal (or an unmanned driving system), an augmented reality (AR) device, a virtual reality (VR) device, a mixed reality device, a wearable device, a smart home device, a drone terminal device, etc.
  • FIG. 19 is a schematic diagram of an application scenario provided by an embodiment of the application.
  • an application program is installed in the terminal device.
  • the application program may be an image display application, a video playback application, or a video call application.
  • the application program can obtain the media stream (for example, the video stream), the decoder in the terminal device decodes the media stream to obtain the original image, the terminal device performs super-division processing on the original image by the image processing method shown in the above embodiment to obtain the super-division image, and the terminal device displays the super-division image.
  • the image filter can be determined once every T frames, that is, the image filter determined from one frame of image can be applied to the T-1 frames after that frame; for example, T can be any number between 5 and 10.
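The per-T-frame reuse described above can be sketched as follows (the function names `determine_filter` and `super_divide` are hypothetical stand-ins for the steps of the embodiments):

```python
def process_stream(frames, determine_filter, super_divide, t_interval=5):
    """Determine an image filter once every t_interval frames and reuse
    it for the following t_interval - 1 frames."""
    image_filter = None
    for idx, frame in enumerate(frames):
        if idx % t_interval == 0:      # recompute on frame 0, T, 2T, ...
            image_filter = determine_filter(frame)
        yield super_divide(frame, image_filter)
```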
  • the image processing method shown in the foregoing embodiment can be implemented through a processing model.
  • sample data can be obtained, and the sample data can be trained to obtain a processing model, which can implement the aforementioned image processing method.
  • the processing model can be obtained by training on a personal computer (PC) based on sample data, the processing model can be converted into an offline model on the computer, and the offline model can be moved to any other terminal device (for example, mobile devices such as mobile phones and tablet computers), so that the terminal device can perform image processing through the processing model.
  • FIG. 20 is a schematic diagram of the process of generating and using a processing model provided by an embodiment of the application.
  • the processing model can be trained on the PC based on the sample data, converted into an offline model on the computer, and the offline model installed on the mobile terminal. For example, after an application in the mobile terminal obtains a media stream (for example, a video stream), the decoder in the terminal device decodes the media stream to obtain the original image, the terminal device performs super-resolution processing on the original image through the offline processing model to obtain the super-resolution image, and the terminal device displays the super-resolution image.
  • FIG. 21 is a schematic structural diagram of an image processing apparatus provided by an embodiment of the application.
  • the image processing apparatus 10 may include: a determining module 11 and a super-resolution module 12, where:
  • the determining module 11 is configured to determine the image filter corresponding to the first image, the image filter having a filter parameter corresponding to each pixel in the first image, with different filter parameters for pixels having different texture features;
  • the super-resolution module 12 is configured to perform super-resolution processing on the first image according to the image filter of the first image to obtain a super-resolution image of the first image.
  • the determining module 11 may execute S501-S503 in the embodiment in FIG. 5, S901-S902 in the embodiment in FIG. 9, and S1701 in the embodiment in FIG. 17.
  • the super-resolution module 12 can execute S1201-S1206 in the embodiment in FIG. 12 and S1702 in the embodiment in FIG. 17.
  • the determining module 11 is specifically configured to:
  • obtain the texture image of the first image; according to the texture features of the pixels in the texture image, determine C local texture images and a weight value of each local texture image, the textures in one local texture image having the same texture features, where C=f²×t, f is the preset magnification, t is the number of filter parameters corresponding to each pixel in the first image, f is greater than 1, and t is an integer greater than or equal to 1; and determine the image filter according to the C local texture images and the weight value of each local texture image, the number of channels of the image filter being C.
  • the determining module 11 is specifically configured to:
  • compress the first image according to a preset size, the compressed first image being of the preset size; and determine the texture image according to the compressed first image.
  • the (x, y)-th filter parameter in the i-th channel of the image filter is: the product of the pixel value of the (x, y)-th pixel in the i-th local texture image and the weight value of the i-th local texture image; where i is a positive integer less than or equal to C, x is a positive integer less than or equal to M, y is a positive integer less than or equal to N, M is the number of pixels included in the horizontal direction of the first image, N is the number of pixels included in the vertical direction of the first image, and M and N are each integers greater than 1.
  • the determining module 11 is specifically configured to:
  • the first image is processed through a recognition model to obtain the image filter of the first image.
  • the recognition model is learned from multiple groups of samples, and each group of samples includes a first sample image and a second sample image;
  • the image content in the first sample image and the second sample image is the same, and the resolution of the first sample image is greater than that of the second sample image.
  • the determining module 11 is specifically configured to:
  • compress the first image according to a preset size, the compressed first image being of the preset size; and process the compressed first image through the recognition model.
  • the super-resolution module 12 is specifically configured to:
  • obtain a gradient image of the first image, the gradient image having the same size as the first image, namely M*N, where M and N are each integers greater than 1;
  • process the gradient image through the image filter to obtain a second image, the size of the second image being (f*M)*(f*N), where f is a preset magnification;
  • enlarge the first image f times to obtain a third image, the size of the third image being (f*M)*(f*N);
  • obtain the super-resolution image according to the second image and the third image, the size of the super-resolution image being (f*M)*(f*N).
  • the super-resolution module 12 is specifically configured to:
  • process the gradient image through the image filter to obtain f² sub-images, each sub-image being of size (f*M)*(f*N);
  • determine the second image according to the f² sub-images.
  • the super-resolution module 12 is specifically configured to:
  • process the gradient image with the parameters in the (k*t+1)-th to ((k+1)*t)-th channels of the image filter to obtain the k-th sub-image, k taking the values 0, 1, …, f²−1 in turn.
  • the super-resolution module 12 is specifically configured to:
  • stitch the pixels at the same pixel position in the f² sub-images to obtain M*N image blocks, each of size f*f; and stitch the M*N image blocks according to the pixel positions, in the sub-images, of the pixels in each image block to obtain the second image.
  • FIG. 22 is a schematic diagram of the hardware structure of a computer system provided by an embodiment of the application.
  • the computer system 20 includes: a memory 21 and a processor 22, where the memory 21 and the processor 22 communicate, for example through a communication bus 23; the memory 21 is used to store a computer program;
  • the processor 22 executes the computer program to perform the following steps: determining the image filter corresponding to the first image, the image filter having a filter parameter corresponding to each pixel in the first image, with different filter parameters for pixels having different texture features; and performing super-resolution processing on the first image according to the image filter of the first image to obtain a super-resolution image of the first image.
  • the processor 22 may implement the functions of the determining module 11 and the super-resolution module 12 shown in the embodiment of FIG. 21.
  • the processor 22 is specifically configured to:
  • obtain the texture image of the first image; according to the texture features of the pixels in the texture image, determine C local texture images and a weight value of each local texture image, the textures in one local texture image having the same texture features, where C=f²×t, f is the preset magnification, t is the number of filter parameters corresponding to each pixel in the first image, f is greater than 1, and t is an integer greater than or equal to 1; and determine the image filter according to the C local texture images and the weight value of each local texture image, the number of channels of the image filter being C.
  • the processor 22 is specifically configured to:
  • compress the first image according to a preset size, the compressed first image being of the preset size; and determine the texture image according to the compressed first image.
  • the (x, y)-th filter parameter in the i-th channel of the image filter is: the product of the pixel value of the (x, y)-th pixel in the i-th local texture image and the weight value of the i-th local texture image; where i is a positive integer less than or equal to C, x is a positive integer less than or equal to M, y is a positive integer less than or equal to N, M is the number of pixels included in the horizontal direction of the first image, N is the number of pixels included in the vertical direction of the first image, and M and N are each integers greater than 1.
  • the processor 22 is specifically configured to:
  • the first image is processed through a recognition model to obtain the image filter of the first image.
  • the recognition model is learned from multiple groups of samples, and each group of samples includes a first sample image and a second sample image;
  • the image content in the first sample image and the second sample image is the same, and the resolution of the first sample image is greater than that of the second sample image.
  • the processor 22 is specifically configured to:
  • compress the first image according to a preset size, the compressed first image being of the preset size; and process the compressed first image through the recognition model.
  • the processor 22 is specifically configured to:
  • obtain a gradient image of the first image, the gradient image having the same size as the first image, namely M*N, where M and N are each integers greater than 1;
  • process the gradient image through the image filter to obtain a second image, the size of the second image being (f*M)*(f*N), where f is a preset magnification;
  • enlarge the first image f times to obtain a third image, the size of the third image being (f*M)*(f*N);
  • obtain the super-resolution image according to the second image and the third image, the size of the super-resolution image being (f*M)*(f*N).
  • the processor 22 is specifically configured to:
  • process the gradient image through the image filter to obtain f² sub-images, each sub-image being of size (f*M)*(f*N);
  • determine the second image according to the f² sub-images.
  • the processor 22 is specifically configured to:
  • process the gradient image with the parameters in the (k*t+1)-th to ((k+1)*t)-th channels of the image filter to obtain the k-th sub-image, k taking the values 0, 1, …, f²−1 in turn.
  • the processor 22 is specifically configured to:
  • stitch the pixels at the same pixel position in the f² sub-images to obtain M*N image blocks, each of size f*f; and stitch the M*N image blocks according to the pixel positions, in the sub-images, of the pixels in each image block to obtain the second image.
  • the foregoing processor may be a CPU, or other general-purpose processors, DSPs, ASICs, and so on.
  • the general-purpose processor may be a microprocessor or the processor may also be any conventional processor or the like.
  • the steps of the methods disclosed in the embodiments of this application may be directly embodied as being executed and completed by a hardware processor, or executed and completed by a combination of hardware and software modules in the processor.
  • This application provides a computer-readable storage medium for storing a computer program used to implement the image processing method described in the foregoing embodiments.
  • the embodiment of the present application also provides a chip or integrated circuit, including a memory and a processor;
  • the memory is used for storing program instructions and sometimes also used for storing intermediate data
  • the processor is configured to call the program instructions stored in the memory to implement the image processing method described above.
  • the memory can be independent or integrated with the processor.
  • the memory may also be located outside the chip or integrated circuit.
  • An embodiment of the present application further provides a program product, the program product includes a computer program, the computer program is stored in a storage medium, and the computer program is used to implement the above-mentioned image processing method.
  • All or part of the steps in the foregoing method embodiments can be implemented by a program instructing relevant hardware.
  • the aforementioned program can be stored in a readable memory.
  • when executed, the program performs the steps of the foregoing method embodiments; and the foregoing memory (storage medium) includes: read-only memory (ROM), RAM, flash memory, hard disks, solid state drives, magnetic tapes, floppy disks, optical discs, and any combination thereof.
  • These computer program instructions can be provided to the processing unit of a general-purpose computer, a special-purpose computer, an embedded processor, or other programmable data processing equipment to generate a machine, so that the instructions executed by the processing unit of the computer or other programmable data processing equipment generate a device for realizing the functions specified in one or more processes in the flowchart and/or one or more blocks in the block diagram.
  • These computer program instructions can also be stored in a computer-readable memory that can direct a computer or other programmable data processing equipment to work in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device.
  • the instruction device implements the functions specified in one or more processes in the flowchart and/or one or more blocks in the block diagram.
  • These computer program instructions can also be loaded onto a computer or other programmable data processing equipment, so that a series of operation steps are executed on the computer or other programmable equipment to produce computer-implemented processing;
  • the instructions executed on the computer or other programmable equipment thereby provide steps for implementing the functions specified in one or more flows in the flowchart and/or one or more blocks in the block diagram.
  • the term “including” and its variations may refer to non-limiting inclusion; the term “or” and its variations may refer to “and/or”.
  • the terms “first”, “second”, etc. in the present application are used to distinguish similar objects, and are not necessarily used to describe a specific order or sequence.
  • “plurality” means two or more.
  • "And/or" describes the association relationship of the associated objects, indicating that three relationships are possible; for example, A and/or B can mean: only A exists, both A and B exist, or only B exists.
  • the character “/” generally indicates that the associated objects are in an "or” relationship.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

This application provides an image processing method, apparatus, and device. The method generates a special image filter and super-resolves an image according to that filter, thereby improving the super-resolution result. The image filter contains a filter parameter corresponding to each pixel of the image to be super-resolved, and pixels with different texture features correspond to different filter parameters. The super-resolution method, apparatus, and device can be applied to various scenarios such as video, gaming, and photography to improve the image quality in these scenarios and thus the user experience.

Description

Image processing method, apparatus, and device

This application claims priority to Chinese Patent Application No. 2019106290316, filed with the China National Intellectual Property Administration on July 12, 2019 and entitled "Image processing method, apparatus, and device", which is incorporated herein by reference in its entirety.

Technical Field

This application relates to the field of computer technology, and in particular to an image processing method, apparatus, and device.

Background

Image super-resolution refers to reconstructing a low-resolution image to obtain a high-resolution image.

At present, a low-resolution image can be super-resolved by a neural network to obtain a high-resolution image. For example, a Super-Resolution Convolutional Neural Network (SRCNN), Enhanced Super-Resolution Generative Adversarial Networks (ESRGAN), or a Very Deep network for Super-Resolution (VDSR) can be used to super-resolve a low-resolution image. However, such neural networks only super-resolve images of certain texture types well; for example, a network may super-resolve images of buildings well but be unable to super-resolve face images well. That is, existing image super-resolution methods have low reliability.
Summary

This application provides an image processing method, apparatus, and device that improve the reliability of image processing.

In a first aspect, an embodiment of this application provides an image processing method. When the super-resolution image of a first image is needed, the image filter corresponding to the first image can be determined, and the first image can be super-resolved according to that image filter to obtain the super-resolution image of the first image. The image filter contains a filter parameter corresponding to each pixel of the first image, and pixels with different texture features correspond to different filter parameters. In other words, the determined image filter has different filter parameters corresponding to pixels with different texture features.

In the above process, because the determined image filter includes a filter parameter corresponding to each pixel of the first image, and pixels of the first image with different texture features correspond to different filter parameters, pixels with different texture features can undergo different super-resolution processing when the first image is super-resolved with the image filter. The processing applied to each pixel is thus related to that pixel's own texture characteristics, which yields a better super-resolution result and improves the reliability of image processing.

In a possible implementation, the image filter corresponding to the first image can be determined as follows: obtain the texture image of the first image; according to the texture features of the pixels in the texture image, determine C local texture images and a weight value of each local texture image in the texture image; and determine the image filter according to the C local texture images and the weight value of each local texture image. The textures within one local texture image have the same texture features. The number of channels of the image filter is C, where C=f²×t, f is the preset magnification, t is the number of filter parameters corresponding to each pixel of the first image, f is greater than 1, and t is an integer greater than or equal to 1. In other possible implementations, the preset magnification may be replaced by other factors, determined by the super-resolution requirements.

In a possible implementation, the (x, y)-th filter parameter in the i-th channel of the image filter is: the product of the pixel value of the (x, y)-th pixel of the i-th local texture image and the weight value of the i-th local texture image; where i is a positive integer less than or equal to C, x is a positive integer less than or equal to M, y is a positive integer less than or equal to N, M is the number of pixels of the first image in the horizontal direction, N is the number of pixels of the first image in the vertical direction, and M and N are each integers greater than 1.

In the above process, the texture image of the first image is obtained first; multiple local texture images and a weight value for each local texture image are determined in the texture image; and the image filter is determined from the local texture images and their weight values. Because the textures within one local texture image have the same texture features, the image filter obtained in this way has a filter parameter corresponding to each pixel of the first image, with different filter parameters for pixels with different texture features. When the first image is super-resolved with this image filter, different pixels of the first image can undergo different super-resolution processing according to the filter, improving the reliability of image processing.
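The per-channel construction just described — each channel of the filter is one local texture image scaled by its weight — can be sketched as below. The `(C, M, N)` array layout and the function name are illustrative assumptions, not the patent's implementation.

```python
import numpy as np

def build_image_filter(local_textures, weights):
    """Build a C-channel image filter from C local texture images.

    local_textures: array of shape (C, M, N), one M*N local texture
    image per channel (hypothetical layout).
    weights: array of shape (C,), one weight value per local texture image.
    The (x, y)-th parameter of channel i is the (x, y)-th pixel of the
    i-th local texture image times that image's weight value.
    """
    local_textures = np.asarray(local_textures, dtype=float)
    weights = np.asarray(weights, dtype=float)
    return local_textures * weights[:, None, None]
```

With f = 2 and t = 2 (so C = 8) the filter simply has eight weighted texture maps stacked along the channel axis.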
In a possible implementation, the texture image of the first image can be obtained as follows: compress the first image according to a preset size, the compressed first image being of the preset size; and determine the texture image according to the compressed first image.

In the above process, after the first image is compressed, the compressed first image contains fewer pixels, so the resulting texture image also contains fewer pixels. This reduces the amount of data in subsequent processing of the texture image and improves image processing efficiency.

In a possible implementation, the image filter corresponding to the first image can be determined as follows: process the first image through a recognition model to obtain the image filter of the first image. The recognition model is learned from multiple groups of samples; each group includes a first sample image and a second sample image with the same image content, where the resolution of the first sample image is greater than that of the second sample image.

The recognition model can be learned as follows: the groups of sample images are input to the recognition model, which learns from them. Because the groups include sample images with many kinds of texture characteristics, the model can learn the sample image filters (or filter parameters) corresponding to the sample low-resolution images (the second sample images) when they are super-resolved to the sample high-resolution images (the first sample images). After the model has learned from the groups of sample images, it is able to determine the image filter of an image. A high-resolution image obtained through an image filter is a good super-resolution result only when the filter parameters corresponding to pixels with different texture features differ; because the sample high-resolution image in each group is a good super-resolution result, after learning the model can output image filters with the following property: the filter contains a filter parameter corresponding to each pixel of the image, and pixels with different texture features correspond to different filter parameters.

In the above process, the recognition model is learned in advance, and the image filter of the first image can be obtained through it. Training the recognition model lets it learn, for images with various texture features, the filter parameters corresponding to the mapping from low-resolution to high-resolution images. Hence, in the image filter output by the model, pixels of the first image with different texture features correspond to different filter parameters, so different pixels of the first image can undergo different super-resolution processing, improving the reliability of image processing.

In a possible implementation, the first image can be processed by the recognition model as follows: compress the first image according to a preset size, the compressed first image being of the preset size; and process the compressed first image through the recognition model.

In the above process, after the first image is compressed, it contains fewer pixels, so the recognition model processes fewer pixels to obtain the image filter of the first image, improving the efficiency with which the model determines the filter.

In a possible implementation, the first image can be super-resolved as follows: obtain the gradient image of the first image, the gradient image having the same size as the first image, namely M*N, where M and N are each integers greater than 1; process the gradient image through the image filter to obtain a second image of size (f*M)*(f*N); enlarge the first image f times to obtain a third image of size (f*M)*(f*N); and obtain the super-resolution image, of size (f*M)*(f*N), according to the second image and the third image.

In the above process, the gradient image of the first image is first processed with the image filter to obtain the second image, and the super-resolution image is then obtained from the second image and the third image (the enlarged first image). The amount of data in the gradient image is smaller than in the first image, so the image filter can process the gradient image quickly, and the super-resolution image can be determined quickly.
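The flow of this implementation (gradient image → filtered second image; enlarged third image; pixel-wise sum) can be sketched as below. The finite-difference gradient and the two callables stand in for components the patent assigns to networks, so they are illustrative assumptions only.

```python
import numpy as np

def super_resolve(first_image, image_filter, f, filter_apply, upscale):
    """Hedged sketch of the super-resolution flow described above.

    first_image:  (M, N) single-channel image.
    image_filter: the C-channel filter determined for this image.
    filter_apply: callable producing the (f*M, f*N) second image from
                  the gradient image and the filter (assumed given).
    upscale:      callable enlarging the image f times, e.g. bicubic.
    """
    # Gradient image: same size as the first image. A finite-difference
    # magnitude stands in for the patent's network-produced gradient.
    gy, gx = np.gradient(first_image.astype(float))
    gradient_image = np.hypot(gx, gy)

    second_image = filter_apply(gradient_image, image_filter)  # (f*M, f*N)
    third_image = upscale(first_image, f)                      # (f*M, f*N)
    return second_image + third_image                          # pixel-wise sum
```

A trivial zero filter and a nearest-neighbour upscale are enough to exercise the shapes end to end.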
In a possible implementation, processing the gradient image with the image filter to obtain the second image includes: processing the gradient image with the image filter to obtain f² sub-images, each of size (f*M)*(f*N); and determining the second image according to the f² sub-images.

In the above process, different sub-images can represent the fine texture characteristics of different regions of the first image, so the resulting second image can represent the fine texture features of the first image.

In a possible implementation, processing the gradient image with the image filter of the first image to obtain the f² sub-images includes: processing the gradient image with the parameters in the (k*t+1)-th to ((k+1)*t)-th channels of the image filter to obtain the k-th sub-image, k taking the values 0, 1, …, f²−1 in turn.

In the above process, because each sub-image is determined from the parameters of different channels of the image filter, and the parameters of different channels correspond to the texture characteristics of different regions of the first image, each sub-image can represent the fine texture features of a different region of the first image.

In a possible implementation, determining the second image according to the f² sub-images includes: stitching the pixels at the same pixel position across the f² sub-images to obtain M*N image blocks, each of size f*f; and stitching the M*N image blocks according to the pixel positions, in the sub-images, of the pixels in each block, to obtain the second image.

In the above process, because each sub-image can represent the fine texture features of a different region of the first image, stitching the second image together in this way not only enlarges the second image but also lets it represent the fine texture features of the first image.
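The stitching step above is the "pixel shuffle" arrangement, and it can be sketched in a few lines. Note one assumption: the claims give each sub-image the size (f*M)*(f*N), but the block construction (M*N blocks of size f*f) is only consistent with sub-images of size (M, N), which is what this sketch assumes.

```python
import numpy as np

def splice_subimages(subs):
    """Splice f*f sub-images of size (M, N) into one (f*M, f*N) image.

    Pixels at the same (x, y) position across the f*f sub-images form
    one f*f block (sub-image index laid out row-major), and the blocks
    are tiled according to (x, y).
    """
    subs = np.asarray(subs)                        # (f*f, M, N)
    f2, M, N = subs.shape
    f = int(round(f2 ** 0.5))
    assert f * f == f2, "need a square number of sub-images"
    # (f, f, M, N) -> (M, f, N, f) -> (f*M, f*N):
    # output[x*f + a, y*f + b] = sub_(a*f+b)[x, y]
    return subs.reshape(f, f, M, N).transpose(2, 0, 3, 1).reshape(f * M, f * N)
```

With f = 2 and constant sub-images 0..3, every f*f block of the output reads [[0, 1], [2, 3]], matching the block layout of the description.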
In a second aspect, an embodiment of this application provides an image processing apparatus, including a determining module and a super-resolution module, where:

the determining module is configured to determine the image filter corresponding to a first image, the image filter containing a filter parameter corresponding to each pixel of the first image, with different filter parameters for pixels having different texture features;

the super-resolution module is configured to perform super-resolution processing on the first image according to the image filter of the first image to obtain the super-resolution image of the first image.

In a possible implementation, the determining module is specifically configured to:

obtain the texture image of the first image;

according to the texture features of the pixels in the texture image, determine C local texture images and a weight value of each local texture image in the texture image, the textures in one local texture image having the same texture features, where C=f²×t, f is the preset magnification, t is the number of filter parameters corresponding to each pixel of the first image, f is greater than 1, and t is an integer greater than or equal to 1;

determine the image filter according to the C local texture images and the weight value of each local texture image, the number of channels of the image filter being C.

In a possible implementation, the (x, y)-th filter parameter in the i-th channel of the image filter is: the product of the pixel value of the (x, y)-th pixel of the i-th local texture image and the weight value of the i-th local texture image;

where i is a positive integer less than or equal to C, x is a positive integer less than or equal to M, y is a positive integer less than or equal to N, M is the number of pixels of the first image in the horizontal direction, N is the number of pixels of the first image in the vertical direction, and M and N are each integers greater than 1.

In a possible implementation, the determining module is specifically configured to:

compress the first image according to a preset size, the compressed first image being of the preset size;

determine the texture image according to the compressed first image.

In a possible implementation, the determining module is specifically configured to:

process the first image through a recognition model to obtain the image filter of the first image, the recognition model being learned from multiple groups of samples, each group including a first sample image and a second sample image with the same image content, the resolution of the first sample image being greater than that of the second sample image.

In a possible implementation, the determining module is specifically configured to:

compress the first image according to a preset size, the compressed first image being of the preset size;

process the compressed first image through the recognition model.

In a possible implementation, the super-resolution module is specifically configured to:

obtain the gradient image of the first image, the gradient image having the same size as the first image, namely M*N, where M and N are each integers greater than 1;

process the gradient image through the image filter to obtain a second image of size (f*M)*(f*N), where f is the preset magnification;

enlarge the first image f times to obtain a third image of size (f*M)*(f*N);

obtain the super-resolution image, of size (f*M)*(f*N), according to the second image and the third image.

In a possible implementation, the super-resolution module is specifically configured to:

process the gradient image through the image filter to obtain f² sub-images, each of size (f*M)*(f*N);

determine the second image according to the f² sub-images.

In a possible implementation, the super-resolution module is specifically configured to:

process the gradient image with the parameters in the (k*t+1)-th to ((k+1)*t)-th channels of the image filter to obtain the k-th sub-image, k taking the values 0, 1, …, f²−1 in turn.

In a possible implementation, the super-resolution module is specifically configured to:

stitch the pixels at the same pixel position in the f² sub-images to obtain M*N image blocks, each of size f*f;

stitch the M*N image blocks according to the pixel positions, in the sub-images, of the pixels in each image block, to obtain the second image.
In a third aspect, an embodiment of this application provides a computer system, including: a memory, a processor, and a computer program, the computer program being stored in the memory, and the processor running the computer program and performing the following steps:

determining the image filter corresponding to a first image, the image filter containing a filter parameter corresponding to each pixel of the first image, with different filter parameters for pixels having different texture features;

performing super-resolution processing on the first image according to the image filter of the first image to obtain the super-resolution image of the first image.

In a possible implementation, the processor is specifically configured to:

obtain the texture image of the first image;

according to the texture features of the pixels in the texture image, determine C local texture images and a weight value of each local texture image in the texture image, the textures in one local texture image having the same texture features, where C=f²×t, f is the preset magnification, t is the number of filter parameters corresponding to each pixel of the first image, f is greater than 1, and t is an integer greater than or equal to 1;

determine the image filter according to the C local texture images and the weight value of each local texture image, the number of channels of the image filter being C.

In a possible implementation, the (x, y)-th filter parameter in the i-th channel of the image filter is: the product of the pixel value of the (x, y)-th pixel of the i-th local texture image and the weight value of the i-th local texture image;

where i is a positive integer less than or equal to C, x is a positive integer less than or equal to M, y is a positive integer less than or equal to N, M is the number of pixels of the first image in the horizontal direction, N is the number of pixels of the first image in the vertical direction, and M and N are each integers greater than 1.

In a possible implementation, the processor is specifically configured to:

compress the first image according to a preset size, the compressed first image being of the preset size;

determine the texture image according to the compressed first image.

In a possible implementation, the processor is specifically configured to:

process the first image through a recognition model to obtain the image filter of the first image, the recognition model being learned from multiple groups of samples, each group including a first sample image and a second sample image with the same image content, the resolution of the first sample image being greater than that of the second sample image.

In a possible implementation, the processor is specifically configured to:

compress the first image according to a preset size, the compressed first image being of the preset size;

process the compressed first image through the recognition model.

In a possible implementation, the processor is specifically configured to:

obtain the gradient image of the first image, the gradient image having the same size as the first image, namely M*N, where M and N are each integers greater than 1;

process the gradient image through the image filter to obtain a second image of size (f*M)*(f*N), where f is the preset magnification;

enlarge the first image f times to obtain a third image of size (f*M)*(f*N);

obtain the super-resolution image, of size (f*M)*(f*N), according to the second image and the third image.

In a possible implementation, the processor is specifically configured to:

process the gradient image through the image filter to obtain f² sub-images, each of size (f*M)*(f*N);

determine the second image according to the f² sub-images.

In a possible implementation, the processor is specifically configured to:

process the gradient image with the parameters in the (k*t+1)-th to ((k+1)*t)-th channels of the image filter to obtain the k-th sub-image, k taking the values 0, 1, …, f²−1 in turn.

In a possible implementation, the processor is specifically configured to:

stitch the pixels at the same pixel position in the f² sub-images to obtain M*N image blocks, each of size f*f;

stitch the M*N image blocks according to the pixel positions, in the sub-images, of the pixels in each image block, to obtain the second image.
In a fourth aspect, an embodiment of this application provides a computer-readable storage medium including a computer program used to implement the image processing method of any one of the implementations of the first aspect.

In a fifth aspect, an embodiment of this application further provides a chip or integrated circuit, including an interface circuit and a processor, the processor being configured to call program instructions through the interface circuit to implement the image processing method of any one of the implementations of the first aspect.

In a sixth aspect, an embodiment of this application further provides a computer program or computer program product including computer-readable instructions that, when read by one or more processors, implement the image processing method of any one of the implementations of the first aspect.

With the image processing method, apparatus, and device provided by the embodiments of this application, when the super-resolution image of a first image is needed, the image filter of the first image can be determined first, and the first image is then super-resolved with the image filter to obtain its super-resolution image. The image filter contains a filter parameter corresponding to each pixel of the first image, with different filter parameters for pixels having different texture features; thus, when the first image is super-resolved with the image filter, pixels with different texture features undergo different super-resolution processing, the processing applied to each pixel is related to that pixel's own texture characteristics, the super-resolution result is better, and the reliability of image processing is improved.
Brief Description of the Drawings

FIG. 1 is a schematic diagram of a super-resolution image provided by an embodiment of this application;

FIG. 2 is a schematic diagram of a texture image provided by an embodiment of this application;

FIG. 3 is a schematic diagram of a gradient image provided by an embodiment of this application;

FIG. 4 is a schematic diagram of an image filter provided by an embodiment of this application;

FIG. 5 is a schematic flowchart of a method for determining an image filter provided by an embodiment of this application;

FIG. 6 is a schematic diagram of local texture images provided by an embodiment of this application;

FIG. 7 is a schematic diagram of another image filter provided by an embodiment of this application;

FIG. 8 is a schematic diagram of images provided by an embodiment of this application;

FIG. 9 is a schematic flowchart of another method for determining an image filter provided by an embodiment of this application;

FIG. 10 is a collection of sample images provided by an embodiment of this application;

FIG. 11 is a schematic flowchart of a method for obtaining a sample image collection provided by an embodiment of this application;

FIG. 12 is a schematic flowchart of a method for super-resolving the first image with the image filter provided by an embodiment of this application;

FIG. 13 is a schematic diagram of processing a pixel with filter parameters provided by an embodiment of this application;

FIG. 14 is a schematic diagram of another way of processing a pixel with filter parameters provided by an embodiment of this application;

FIG. 15 is a schematic diagram of the process of determining image blocks provided by an embodiment of this application;

FIG. 16 is a schematic diagram of the process of stitching image blocks provided by an embodiment of this application;

FIG. 17 is a schematic flowchart of the image processing method provided by an embodiment of this application;

FIG. 18 is a schematic diagram of the image processing procedure provided by an embodiment of this application;

FIG. 19 is a schematic diagram of an application scenario provided by an embodiment of this application;

FIG. 20 is a schematic diagram of the generation and use of a processing model provided by an embodiment of this application;

FIG. 21 is a schematic structural diagram of an image processing apparatus provided by an embodiment of this application;

FIG. 22 is a schematic diagram of the hardware structure of a computer system provided by an embodiment of this application.
Detailed Description

For ease of understanding, the concepts involved in this application are explained first.

Image super-resolution: refers to the process of reconstructing a low-resolution image to obtain a high-resolution image. Image super-resolution is explained below with reference to FIG. 1, a schematic diagram of a super-resolution image provided by an embodiment of this application. Referring to FIG. 1, it includes an original image and a super-resolution image, where the super-resolution image is obtained by super-resolving the original image and the two have the same image content. For example, if the resolution of the original image is a*b and the magnification is 3, the resolution of the super-resolution image is 3a*3b. In the resolution a*b of an image, a is the number of pixels the image includes in the horizontal direction and b the number in the vertical direction. The resolution of an image may also be called the size of the image.

Texture: also called grain; refers to the pattern or grain presented by the surface of an object.

Texture image: refers to an image containing the textures of the original image. The texture image is explained below with reference to FIG. 2, a schematic diagram of a texture image provided by an embodiment of this application. Referring to FIG. 2, it includes an original image and a texture image, where the texture image contains the textures of the original image. Note that FIG. 2 only schematically illustrates one type of texture image of the original image. For example, the types of texture images can include the local binary patterns (LBP) type, the Gaussian Markov random field (GMRF) type, the gray-level co-occurrence matrix (GLCM) type, and so on.

Texture feature: refers to the characteristics that a texture has; different objects in an image have different texture features. For example, referring to FIG. 2, the texture features of the sky, the buildings, the hair, the faces, the clothes, and the grass are all different.

Gradient image: if the original image is regarded as a two-dimensional discrete function, the gradient image is the image obtained by differentiating that function. The gradient image is explained below with reference to FIG. 3, a schematic diagram of a gradient image provided by an embodiment of this application. Referring to FIG. 3, it includes an original image and a gradient image; the original image usually has three channels while the gradient image usually has one, so processing the gradient image of the original image reduces the amount of computation. For example, the amount of computation on the gradient image is one third of that on the original image.
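The gradient-image concept just defined can be illustrated with finite differences. The patent obtains its guided gradient image with a convolutional network; the stand-in below is only one common, assumed way of forming a single-channel gradient image.

```python
import numpy as np

def gradient_image(img):
    """Illustrative gradient image: treat the (luminance of the) image
    as a 2-D discrete function and take the magnitude of its first
    derivative. A 3-channel input is collapsed to 1 channel first,
    matching the 3-channel-in / 1-channel-out shape described above.
    """
    img = np.asarray(img, dtype=float)
    if img.ndim == 3:                 # collapse 3 channels to 1
        img = img.mean(axis=2)
    gy, gx = np.gradient(img)         # finite-difference derivatives
    return np.hypot(gx, gy)           # same (M, N) size as the input
```

A constant image yields an all-zero gradient image; a linear ramp yields a constant one.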
Image filter: can process an image to increase its resolution. The image filter involved in this application is a three-dimensional filter; the three dimensions are denoted H, W, and C. For ease of understanding, the image filter is explained below with reference to FIG. 4, a schematic diagram of an image filter provided by an embodiment of this application. Referring to FIG. 4, the image filter is three-dimensional, with horizontal size W, vertical size H, and channel count C. Each square in FIG. 4 represents one filter parameter, and the image filter contains H*W*C filter parameters. Processing an image with the image filter essentially means processing the image with the filter parameters in the image filter.

Down-sampling: refers to compression; for example, down-sampling an image means compressing the image.

Up-sampling: refers to enlargement; for example, up-sampling an image means enlarging the image.

In this application, when the super-resolution image of a first image is needed, the image filter of the first image can be determined first, and the first image is then super-resolved with the image filter to obtain its super-resolution image. The image filter includes a filter parameter corresponding to each pixel of the first image, and pixels of the first image with different texture features correspond to different filter parameters. Thus, when the first image is super-resolved with the image filter, pixels with different texture features can undergo different super-resolution processing; the processing applied to each pixel is related to that pixel's own texture characteristics, the super-resolution result is better, and the reliability of image processing is improved.
The technical solutions of this application are described in detail below through specific embodiments. Note that the following embodiments can stand alone or be combined with one another; identical or similar content is not repeated in different embodiments.

For ease of understanding, two ways of determining the image filter corresponding to the first image are introduced first. The embodiment shown in FIGS. 5-8 is one way of determining the image filter; the embodiment shown in FIGS. 9-11 is another.

FIG. 5 is a schematic flowchart of a method for determining an image filter provided by an embodiment of this application. Referring to FIG. 5, the method may include:

S501: Obtain the texture image of the first image.

Here, the first image is the image to be super-resolved and is usually an image of relatively low resolution.

Optionally, the texture image of the first image can be obtained through a convolutional neural network, for example through any of an LBP model, a GMRF model, or a GLCM model.

To improve the efficiency of obtaining the texture image of the first image, the first image can first be compressed to a preset size (down-sampled to an image of the preset size), and the texture image of the compressed first image obtained. Optionally, when the size of the first image is smaller than the preset size, the first image need not be compressed. The preset size can be, for example, 256*256 or 512*512, and in practice can be set as actually needed. Note that the size of an image in this application refers to its resolution; for example, a first image of size M*N includes M pixels in the horizontal direction and N pixels in the vertical direction.
The texture image of the compressed first image has the same size as the compressed first image. For example, if the size of the first image is M*N and the size of the compressed first image is M1*N1, the size of the texture image of the compressed first image is M1*N1. Because the compressed first image contains fewer pixels, its texture image can be obtained quickly. Further, the texture image of the compressed first image also contains fewer pixels, reducing the amount of data in the subsequent processing of the texture image.

S502: According to the texture features of the pixels in the texture image, determine C local texture images and a weight value of each local texture image in the texture image.

Here, C=f²×t, f is the preset magnification, and t is the number of filter parameters corresponding to each pixel of the first image; f is greater than 1, and t is an integer greater than or equal to 1.

Optionally, t can be preset, t=a², where a is usually an odd number greater than or equal to 1, for example 1, 3, 5, or 7.

For example, suppose the first image is to be enlarged f=3 times, and each pixel corresponds to t=3²=9 filter parameters; then C=81.
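The channel-count arithmetic in this example can be checked directly; `filter_channels` is merely a hypothetical helper name for illustration.

```python
def filter_channels(f, a):
    """Number of channels of the image filter: C = f^2 * t, where f is
    the preset magnification and t = a^2 is the number of filter
    parameters per pixel for an odd filter-window side a.
    """
    t = a * a
    return f * f * t
```

With f = 3 and a 3x3 window (a = 3, t = 9), this gives C = 81, as in the example above.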
Optionally, each local texture image contains part of the textures of the texture image, and the textures within one local texture image have the same texture features. For example, one local texture image contains only face textures, whose texture features are the same; or one local texture image contains only sky textures, whose texture features are the same.

The size of a local texture image can equal the size of the texture image. If the first image was compressed to the preset size in S501, then the texture image and the local texture images are all of the preset size; if the first image was not compressed in S501, then the texture image and the local texture images are all the same size as the first image.

The process of determining the local texture images is explained below with reference to FIG. 6.

FIG. 6 is a schematic diagram of local texture images provided by an embodiment of this application. Referring to FIG. 6, suppose C is 5; then five local texture images can be determined in the texture image, denoted local texture image 1, local texture image 2, local texture image 3, local texture image 4, and local texture image 5.

Referring to FIG. 6, local texture image 1 corresponds to the buildings and contains the textures of the buildings. Local texture image 2 corresponds to the sky and contains the textures of the sky. Local texture image 3 corresponds to the grass and contains the textures of the grass. Local texture image 4 corresponds to the car and contains the textures of the car. Local texture image 5 corresponds to the clouds and contains the textures of the clouds.

Note that FIG. 6 only illustrates, by way of example, some of the local texture images determined in the texture image; other local texture images can of course also be determined in the texture image, and this embodiment of this application does not specifically limit this.

One local texture image can correspond to one weight value; for example, the weight value of a local texture image can be a number between 0 and 1.

Optionally, the weight value of a local texture image can be determined according to the object type included in it. For example, a correspondence between object types and weight values can be preset; accordingly, the object type of the objects in the local texture image can be identified, and the weight value determined from the object type and the correspondence.

Optionally, the weight value of a local texture image can be determined according to the texture features of the textures in it. For example, a correspondence between texture features and weight values can be preset; accordingly, the weight value can be determined from the texture features of the textures in the local texture image and the correspondence.

Optionally, the weight value of each local texture image can also be determined through a preset model learned from multiple groups of sample images, each group including a sample low-resolution image and the corresponding sample high-resolution image. The groups of sample images can be input to the preset model, which learns from them the weight value of the sample local texture images corresponding to each sample low-resolution image when it is super-resolved to the sample high-resolution image. After the preset model has learned from the groups of sample images, it can determine the weight value of each local texture image. A local texture image can therefore be input to the preset model so that the model outputs its weight value.

S503: Determine the image filter according to the C local texture images and the weight value of each local texture image; the image filter is a three-dimensional image filter, and its number of channels is C.

Optionally, each local texture image can be multiplied by its corresponding weight value to obtain an updated local texture image. Each updated local texture image corresponds to one channel of the image filter, and the pixel values of one updated local texture image are the filter parameters of one channel of the image filter. The horizontal size (W) of the image filter equals the horizontal size of the local texture images (the number of pixels per row), and the vertical size (H) equals their vertical size (the number of pixels per column).

The (x, y)-th filter parameter in the i-th channel of the image filter is: the product of the pixel value of the (x, y)-th pixel of the i-th local texture image and the weight value of the i-th local texture image, where i is a positive integer less than or equal to C, x is a positive integer less than or equal to M, y is a positive integer less than or equal to N, M is the number of pixels of the first image in the horizontal direction, N is the number of pixels of the first image in the vertical direction, and M and N are each integers greater than 1.
The image filter is explained below with reference to FIG. 7.

FIG. 7 is a schematic diagram of another image filter provided by an embodiment of this application. Referring to FIG. 7, suppose C=5; the five local texture images are denoted local texture image 1 through local texture image 5. After the pixel values of the five local texture images are multiplied by the corresponding weight values, the updated local texture images are obtained; their pixel values can be arranged as shown in FIG. 7, the pixel values of each updated local texture image forming the filter parameters of one channel of the image filter.

Note that FIG. 7 only illustrates the relationship between the local texture images and the image filter and does not represent the true form of the image filter.

In the above process, through the weight values set for each local texture image, the filter parameters of the determined image filter become related to the object types in the image. Thus, when an image is processed with the image filter, different regions of the processed image can transition smoothly, avoiding obvious dividing lines between different regions of the processed image.

The image processing effect of using the weight values of the local texture images is explained below with reference to FIG. 8.

FIG. 8 is a schematic diagram of images provided by an embodiment of this application. Referring to FIG. 8, super-resolution image 1 is obtained by super-resolving the original image with image filter 1, and super-resolution image 2 is obtained by super-resolving the original image with image filter 2. Image filter 1 was determined without using the weight values of the local texture images, i.e., directly from the local texture images. Image filter 2 was determined using the weight values, i.e., from the local texture images and their weight values.

Referring to FIG. 8, in super-resolution image 1 there is an obvious dividing line between the sky in the upper-left corner and the rest of the image, i.e., the transition between them is not smooth.

Referring to FIG. 8, in super-resolution image 2 there is no obvious dividing line between the sky in the upper-left corner and the rest of the image, i.e., the transition between them is smooth.

In the embodiment of FIG. 5, the texture image of the first image is obtained first; multiple local texture images and a weight value of each are determined in the texture image; and the image filter is determined from the local texture images and their weight values. Because the textures within one local texture image have the same texture features, the image filter obtained in this way has a filter parameter corresponding to each pixel of the first image, with different filter parameters for pixels having different texture features. When the first image is super-resolved with this image filter, different pixels of the first image can undergo different super-resolution processing according to the filter, improving the reliability of image processing.
FIG. 9 is a schematic flowchart of another method for determining an image filter provided by an embodiment of this application. Referring to FIG. 9, the method may include:

S901: Compress the first image according to a preset size; the compressed first image is of the preset size.

Optionally, when the size of the first image is larger than the preset size, the first image can be compressed; when it is smaller than the preset size, the first image need not be compressed.

For example, the preset size can be 256*256, 512*512, etc.; in practice it can be set as actually needed.

S902: Process the first image through the recognition model to obtain the image filter of the first image.

Note that S901 can be an optional step. When S901 is performed, the compressed first image is processed through the recognition model in S902, which reduces the amount of data the model must process and improves the efficiency with which the model outputs the image filter. When S901 is not performed, the original first image is processed in S902.

Here, the recognition model is learned from multiple groups of samples; each group includes a first sample image and a second sample image with the same image content, the resolution of the first sample image being greater than that of the second sample image.

Optionally, the first sample image can be a high-resolution image and the second sample image a low-resolution image. For example, the resolution of the first sample image is greater than or equal to a first threshold, and the resolution of the second sample image is less than or equal to a second threshold.

Before the embodiment of FIG. 9 is executed, the recognition model can be learned in advance. The process of learning the recognition model is explained below.

Sample images with different texture features are first collected into a sample image set; for example, the sample image set can be as shown in FIG. 10. FIG. 10 is a collection of sample images provided by an embodiment of this application. Referring to FIG. 10, the sample image set includes multiple animal sample images, multiple face sample images, and multiple sky sample images.

One way of obtaining the sample image set is introduced below with reference to FIG. 11. FIG. 11 is a schematic flowchart of a method for obtaining a sample image collection provided by an embodiment of this application. Referring to FIG. 11, super-resolution datasets, datasets crawled through search engines, and self-collected datasets can first be obtained, each containing multiple initial sample images. The initial sample images of each dataset are input to a dataset processor, which processes them to obtain the sample image set.
Optionally, the initial sample images of each dataset can be segmented (patch extraction) to extract the required sample images from them. For example, an initial sample image containing faces, sky, and buildings can be segmented into a sample image containing only faces, one containing only sky, and one containing only buildings. A data augmentation module then rotates, stretches, scales, etc. the segmented sample images to obtain the sample image set. Through this processing, the sample image set can contain rich sample images.

Optionally, the sample image set is Dataset = {Texture_info | Texture ∈ AllData},

where Texture is a texture feature, AllData is the set of texture features, and Texture_info is the set of images with texture feature Texture; for example, Texture_info = {sky images | face images | animal images ...}.

After the sample image set is obtained, multiple groups of sample images can be determined in it. For example, for a sample high-resolution image in the set, it is compressed to obtain the corresponding sample low-resolution image; or, for a sample low-resolution image in the set, it is processed to obtain the corresponding sample high-resolution image.

After the groups of sample images are determined, they can be input to the recognition model, which learns from them. Because the groups include sample images with many kinds of texture characteristics, the model can learn the sample image filters (or filter parameters) corresponding to the sample low-resolution images (the second sample images) when they are super-resolved to the sample high-resolution images (the first sample images). After the model has learned from the groups of sample images, it can determine the image filter of an image. A high-resolution image obtained through an image filter is a good super-resolution result only when the filter parameters corresponding to pixels with different texture features differ; because the sample high-resolution image in each group is a good super-resolution result, after learning from the groups of sample images the model can output image filters with the following property: the filter contains a filter parameter corresponding to each pixel of the image, and pixels with different texture features correspond to different filter parameters.

Optionally, the data representing the first image, the preset magnification, and the number t of filter parameters per pixel can be input to the recognition model, which outputs the image filter of the first image according to the received data. The data representing the first image can be the first image itself, its grayscale image, etc. For example, the recognition model can determine the channel count C of the image filter from the preset magnification f and the per-pixel parameter count t, and determine the image filter from C and the data representing the first image. The image filter output by the recognition model has C channels.

Optionally, the process of the embodiment of FIG. 9 can be described as: DTF_(H,W,C) = Conv(resize(Input)_(H,W)), where Input is the input original image, resize(Input)_(H,W) means scaling the input original image to size H*W, Conv denotes the recognition model (a convolutional neural network), DTF_(H,W,C) is the image filter output by the recognition model, H is the vertical size of the image filter, W is its horizontal size, and C is its channel count.

In the embodiment of FIG. 9, the recognition model is learned in advance, and the image filter of the first image can be obtained through it. Training the recognition model lets it learn, for images with various texture features, the filter parameters corresponding to the mapping from low-resolution to high-resolution images. Hence, in the image filter output by the recognition model, pixels of the first image with different texture features correspond to different filter parameters; different pixels of the first image can thus undergo different super-resolution processing according to the image filter, improving the reliability of image processing.
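The shape contract DTF_(H,W,C) = Conv(resize(Input)_(H,W)) can be sketched with a toy stand-in: a nearest-neighbour resize followed by a bank of C 1x1 "convolutions". A real recognition model would be a trained multi-layer CNN; the kernel bank and the function name below are hypothetical placeholders only.

```python
import numpy as np

def predict_filter(image, kernels, out_size):
    """Toy stand-in for DTF = Conv(resize(Input)): resize the input to
    H*W (nearest-neighbour here), then apply C per-channel 1x1 scalings
    so the output has shape (H, W, C), like the model's image filter.
    """
    H, W = out_size
    img = np.asarray(image, dtype=float)
    # nearest-neighbour resize to (H, W)
    rows = np.arange(H) * img.shape[0] // H
    cols = np.arange(W) * img.shape[1] // W
    resized = img[rows][:, cols]                                      # (H, W)
    # 1x1 "conv": each output channel is a scalar multiple of the input
    return resized[:, :, None] * np.asarray(kernels)[None, None, :]   # (H, W, C)
```

The point of the sketch is only the shape discipline: whatever the input size, the output is (H, W, C).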
After the image filter of the first image is obtained in either of the above ways, the first image can be super-resolved with the image filter to obtain its super-resolution image.

A process of super-resolving the first image with the image filter is introduced below through the embodiment shown in FIGS. 12-16.

FIG. 12 is a schematic flowchart of a method for super-resolving the first image with the image filter provided by an embodiment of this application. Referring to FIG. 12, the method includes:

S1201: Obtain the gradient image of the first image.

Here, the gradient image has the same size as the first image, namely M*N, where M and N are each integers greater than 1.

Optionally, the gradient image of the first image can be obtained through a convolutional neural network. For example, the data representing the first image can be input to the convolutional neural network, whose output is the gradient image of the first image.

Optionally, this step can be described as: GGF_(H,W,1) = Conv(Input_(H,W,3)), where Input_(H,W,3) is the input first image, with 3 channels (the first image is usually an RGB or YUV image), and GGF_(H,W,1) is the output guided gradient image.

S1202: Process the gradient image through the image filter to obtain f² sub-images.

Here, each sub-image is of size (f*M)*(f*N).

Optionally, it can first be judged whether the image filter's H*W (W being the number of filter parameters per channel in the horizontal direction, H in the vertical direction) matches the size of the gradient image (i.e., whether W equals the number of pixels the gradient image includes horizontally and H the number it includes vertically). If not, the image filter's H*W is first adjusted to the size of the gradient image, so that the adjusted filter's W equals the gradient image's horizontal pixel count and its H the vertical pixel count.

The gradient image is processed with the parameters in the (k*t+1)-th to ((k+1)*t)-th channels of the image filter to obtain the k-th sub-image, k taking the values 0, 1, …, f²−1 in turn.

When obtaining each sub-image, the corresponding channels are first determined in the image filter, and the gradient image is processed (each of its pixels is processed) with the filter parameters of those channels to obtain one sub-image. For example, the gradient image is processed with the filter parameters of channels 1 through t to obtain sub-image 0; with the parameters of channels t+1 through 2t to obtain sub-image 1; with the parameters of channels 2t+1 through 3t to obtain sub-image 2; and so on, until f² sub-images are obtained.

For any sub-image, in the process of obtaining it, the filter parameters corresponding to each pixel of the gradient image are determined first, and the pixel is processed with those parameters to obtain the value of the corresponding pixel of the sub-image. The filter parameters corresponding to a pixel of the gradient image can be determined in the following two feasible ways:

One feasible way: the image filter's H*W is the same as the size of the gradient image.

In this feasible way, the coordinates of a pixel in the gradient image correspond directly to the (h, w) coordinates of the corresponding filter parameters, where h is an integer between 0 and H−1 and w an integer between 0 and W−1.

For example, pixel (0, 0) of the gradient image corresponds to filter parameters with h=0 and w=0; pixel (1, 2) corresponds to h=1 and w=2.

Another feasible way: the image filter's H*W differs from the size of the gradient image.

In this feasible way, the correspondence between pixel coordinates in the gradient image and the (h, w) of the filter parameters can be determined according to the ratio of the image filter's H*W to the size of the gradient image.

For example, if W is half the horizontal size of the gradient image and H half its vertical size, then pixels (0, 0), (0, 1), (1, 0), and (1, 1) of the gradient image all correspond to filter parameters with h=0 and w=0.

Each pixel of the gradient image corresponds to t filter parameters. There are multiple ways to process one pixel with its t filter parameters; one process of processing the pixels of the gradient image with the filter parameters is introduced below with reference to FIGS. 13-14.
FIG. 13 is a schematic diagram of processing a pixel with filter parameters provided by an embodiment of this application. FIG. 14 is a schematic diagram of another way of processing a pixel with filter parameters provided by an embodiment of this application.

Referring to FIGS. 13-14, suppose the magnification f=3 and the number of filter parameters corresponding to one pixel is 9; then the image filter has C=81 channels (FIG. 13 only shows some of the channels). When obtaining sub-image 0, the gradient image is processed with the filter parameters in channels 1-9 of the image filter.

Referring to FIG. 13, for pixel (0, 0) of the gradient image, the corresponding filter parameters are the 9 parameters at h=0, w=0, C=1 to 9 in the image filter. Suppose the pixel value of (0, 0) in the gradient image is 0.5, and the 9 filter parameters corresponding to the (0, 0) pixel form a 3×3 matrix (shown as a figure in the original document; its centre value is 0.2).

Because part of the surroundings of pixel (0, 0) has no pixels, pixels with value 0 can be padded around pixel (0, 0); the centre of the matrix (0.2) is aligned with pixel (0, 0), the elements at corresponding positions are multiplied and then summed, and the average is taken, giving the pixel value at (0, 0).

Pixels at the edge of the gradient image are all processed in the way shown in FIG. 13.

Referring to FIG. 14, for pixel (1, 1) of the gradient image, the corresponding filter parameters are the 9 parameters at h=1, w=1, C=1 to 9 in the image filter. Suppose the pixel value of (1, 1) in the gradient image is 0.5, and the 9 filter parameters corresponding to the (1, 1) pixel form a 3×3 matrix (shown as a figure in the original document; its centre value is 0.1).

The centre of the matrix (0.1) is aligned with pixel (1, 1), the elements at corresponding positions are multiplied and then summed, and the average is taken, giving the pixel value at (1, 1).

Non-edge pixels of the gradient image are all processed in the way shown in FIG. 14.

Note that FIGS. 13-14 only schematically illustrate one process of processing pixels with filter parameters and do not limit how pixels are processed with filter parameters.
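The per-pixel operation of FIGS. 13-14 (centre the 3x3 parameter matrix on the pixel, zero-pad at the edges, multiply element-wise, sum, take the average) can be sketched as below. The division by t = 9 is one reading of "take the average", stated here as an assumption.

```python
import numpy as np

def filter_pixel(gradient, params, x, y):
    """Apply one pixel's t = 3*3 filter parameters: centre the 3x3
    parameter matrix on pixel (x, y) of the gradient image, zero-pad
    where the neighbourhood falls outside the image, multiply
    element-wise, sum, and divide by t (the assumed averaging).
    """
    g = np.asarray(gradient, dtype=float)
    p = np.asarray(params, dtype=float).reshape(3, 3)
    padded = np.pad(g, 1)                       # zero padding for edge pixels
    window = padded[x:x + 3, y:y + 3]           # 3x3 neighbourhood of (x, y)
    return (window * p).sum() / p.size
```

For the FIG. 13 case (pixel value 0.5 at the image corner, centre parameter 0.2, all neighbours zero-padded), only the centre product survives, so the result is 0.5 x 0.2 / 9.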
S1203: Stitch the pixels at the same pixel position in the f² sub-images to obtain M*N image blocks.

Here, each image block is of size f*f.

Optionally, the f² pixels at coordinate (0, 0) across the f² sub-images are stitched to obtain the image block corresponding to coordinate (0, 0); the f² pixels at coordinate (0, 1) are stitched to obtain the block corresponding to (0, 1); and so on, until M*N image blocks are obtained.

The process of determining the image blocks is explained below with reference to FIG. 15.

FIG. 15 is a schematic diagram of the process of determining image blocks provided by an embodiment of this application. Referring to FIG. 15, suppose f=3; then 9 sub-images are determined, denoted sub-image 1, sub-image 2, …, sub-image 9. The 9 pixels at coordinate (0, 0) across the 9 sub-images are stitched to obtain image block 1 corresponding to (0, 0); the 9 pixels at coordinate (0, 1) are stitched to obtain image block 2 corresponding to (0, 1); …; the 9 pixels at coordinate (2, 2) are stitched to obtain image block 9 corresponding to (2, 2).

Note that the numbers in the sub-images and image blocks of FIG. 15 are pixel labels. For example, "11" in sub-image 1 labels pixel (0, 0) of sub-image 1, and "12" labels pixel (0, 1) of sub-image 1. "11" in image block 1 labels pixel (0, 0) of image block 1, and "21" in image block 1 labels pixel (0, 1) of image block 1.

S1204: Stitch the M*N image blocks according to the pixel positions, in the sub-images, of the pixels in each image block, to obtain the second image.

Here, the second image is of size (f*M)*(f*N).

Optionally, the placement of each image block can be determined according to the positions, in the sub-images, of the pixels it contains, and the M*N blocks are stitched according to those placements. The placement of each image block corresponds to the position of its pixels in the sub-images.

For example, if the pixels of image block 1 are at position (0, 0) in the sub-images and the pixels of image block 2 are at (0, 1), then image block 2 is placed after image block 1.

The stitching of the image blocks is explained below with reference to FIG. 16.

FIG. 16 is a schematic diagram of the process of stitching image blocks provided by an embodiment of this application. Referring to FIG. 16, suppose 9 image blocks are determined, denoted image block 1, image block 2, …, image block 9, and their pixel positions are as shown in FIG. 15; then, according to the positions of the pixels of each block in the sub-images, the 9 blocks are stitched to obtain the second image shown in FIG. 16.

Note that the numbers in the image blocks and second image of FIG. 16 are pixel labels. For example, "11" in image block 1 labels pixel (0, 0) of image block 1, and "21" in image block 1 labels pixel (0, 1) of image block 1. "11" in the second image labels pixel (0, 0) of the second image, and "21" labels pixel (0, 1) of the second image.
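S1203-S1204 can be mirrored literally in two steps — first build each f*f block from the pixels at one (x, y) position across the sub-images, then tile the blocks by (x, y). The (M, N) sub-image size is assumed, consistent with the M*N blocks produced by S1203.

```python
import numpy as np

def splice_via_blocks(subs, f):
    """Mirror S1203-S1204: stitch the pixels at each position (x, y)
    across the f*f sub-images into an f*f block, then lay the M*N
    blocks out by (x, y) to obtain the second image.
    """
    subs = np.asarray(subs)                      # (f*f, M, N)
    _, M, N = subs.shape
    out = np.zeros((f * M, f * N))
    for x in range(M):
        for y in range(N):
            block = subs[:, x, y].reshape(f, f)  # S1203: one f*f block
            out[f*x:f*x+f, f*y:f*y+f] = block    # S1204: place by (x, y)
    return out
```

The explicit double loop trades speed for a one-to-one correspondence with the description; a vectorized reshape/transpose produces the same layout.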
可选的,S1202-S1204可以描述为:DTFS (H,W,C)=Conv(GGF (H,W,1),DTF (H,W,C))),其中,GGF (H,W,1)表示梯度图像,DTF (H,W,C))表示图像滤波器;DTFS (H,W,C)表示将梯度图像与图像滤波器进行融合之后的纹理精细调整的特征图。
通过S1202-S1204所示的步骤,可以根据图像滤波器和梯度图像确定得到第二图像。当然,还可以通过其它可行的实现方式确定得到第二图像,本申请实施例对此不作具体限定。
S1205、将第一图像放大f倍,得到第三图像。
其中,第三图像的尺寸为(f*M)*(f*N)。
可选的,可以通过双立方插值(bicubic)放大的方式将第一图像放大f倍,得到第三图像。当然,还可以通过其它可行的实现方式对第一图像进行放大,本申请实施例对此不作具体限定。
S1206、根据第二图像和第三图像,得到超分图像。
其中,超分图像的尺寸为(f*M)*(f*N)。
可选的,可以将第三图像中的像素值和第二图像中的像素值对应相加,得到超分图像。
例如,可以将第三图像中像素位置(0,0)上的像素值和第二图像中像素位置(0,0)上的像素值相加,作为超分图像像素位置(0,0)上的像素值。
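示例性的,S1205-S1206所示的放大与相加过程可以用如下Python代码示意。为保持示例自包含,这里用最近邻方式(np.kron)代替双立方插值对第一图像进行放大,实际实现中可以替换为任意可行的放大方式。

```python
import numpy as np

def super_resolve(first_image, second_image, f):
    """S1205:将第一图像放大f倍得到第三图像(此处以最近邻放大示意);
    S1206:将第三图像与第二图像逐像素相加,得到超分图像。"""
    third_image = np.kron(first_image, np.ones((f, f)))  # 尺寸变为(f*M)*(f*N)
    assert third_image.shape == second_image.shape
    return third_image + second_image
```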
在图12所示的实施例中,在确定得到图像滤波器之后,先通过图像滤波器对第一图像的梯度图像进行处理,得到第二图像,再根据第二图像和对第一图像放大后的第三图像,得到超分图像。梯度图像的数据量小于第一图像的数据量,因此,通过图像滤波器可以对梯度图像进行快速的处理,进而可以快速确定得到超分图像。
下面,结合图17,介绍一种图像处理方法。
图17为本申请实施例提供的图像处理方法的流程示意图。请参见图17,该方法可以包括:
S1701、确定第一图像对应的图像滤波器。
其中,第一图像中具有不同纹理特征的像素对应的图像滤波器的滤波器参数不同。
需要说明的是,可以通过图5-图8实施例所示的方法确定图像滤波器,或者,通过图9-图11所示的方法确定图像滤波器,此处不再进行赘述。
S1702、根据第一图像的图像滤波器,对第一图像进行超分处理,得到第一图像的超分图像。
需要说明的是,可以通过图12-图16实施例所示的方法对第一图像进行超分处理,得到第一图像的超分图像,此处不再进行赘述。
可选的,该步骤可以表示为:HR (H,W)=Conv(DTFS (H,W,C),LR (H/f,W/f)),其中,HR (H,W)为超分图像,DTFS (H,W,C)为第二图像,LR (H/f,W/f)为第一图像,H为超分图像的横向尺寸,W为超分图像的纵向尺寸,第一图像的尺寸为(H/f)*(W/f),f是放大倍数。
本申请实施例提供的图像处理方法,当需要获取第一图像的超分图像时,可以先确定第一图像的图像滤波器,再通过图像滤波器对第一图像进行超分处理,以获取第一图像的超分图像。其中,图像滤波器中的滤波器参数与第一图像中的像素具有对应关系,第一图像中具有不同纹理特征的像素对应的图像滤波器的滤波器参数不同,这样,在通过图像滤波器对第一图像进行超分处理时,可以对第一图像中具有不同纹理特征的像素进行不同的超分处理,使得对图像中各像素进行的超分处理与像素本身的纹理特性相关,使得对图像进行超分处理的效果更好,提高了图像处理的可靠性。
下面,结合图18,对图像处理过程进行说明。
图18为本申请实施例提供的图像处理的过程示意图。请参见图18,当需要对原图像(第一图像)进行超分处理时,先对第一图像进行下采样,并通过图5-图8实施例所示的方法确定图像滤波器,或者,通过图9-图11实施例所示的方法确定图像滤波器。在确定得到图像滤波器之后,对图像滤波器进行上采样,使得上采样后的图像滤波器的H*W尺寸与第一图像的尺寸相同。
还获取第一图像的梯度图像,通过图12-图16实施例所示的方法对图像滤波器和梯度图像进行处理,得到第二图像。还对第一图像进行上采样,得到放大后的图像,通过图12-图16实施例所示的方法对第二图像和放大后的图像进行处理,得到超分图像。
在图18所示的实施例中,第一图像中包括天空、云朵、汽车、建筑、草地等对象。在对第一图像进行超分处理的过程中,不同对象对应的滤波器系数不同,这样,可以对第一图像中的不同对象进行不同的超分处理,进而可以避免由于对图像中所有的对象进行相同的超分处理而导致部分对象出现锯齿、白边、模糊等问题,使得超分图像更加真实、自然,提高了图像处理的可靠性。
上述实施例所示的图像处理方法可以应用于多种应用场景,例如,上述实施例所示的图像处理方法可以应用于终端设备对图像进行显示,即,在终端设备显示一张图像之前,通过上述实施例所示的图像处理方法对图像进行超分处理,以使终端设备显示的图像更加清晰、自然。例如,上述实施例所示的图像处理方法还可以应用于终端设备的视频通话场景,即,在终端设备与其它终端设备进行视频通话的过程中,终端设备可以接收来自其它终端设备的视频帧,终端设备可以通过上述实施例所示的方法对视频帧中的每一帧图像进行超分处理,以使终端设备显示的视频画面更加清晰、自然。例如,上述实施例所示的图像处理方法还可以应用于终端设备对视频进行播放的场景,即,在终端设备播放视频的过程中,终端设备可以通过上述实施例所示的方法对视频帧中的每一帧图像进行超分处理,以使终端设备显示的视频画面更加清晰、自然。例如,上述实施例所示的图像处理方法还可以应用于游戏场景,即,在终端设备显示游戏画面之前,可以通过上述实施例所示的方法对游戏画面进行超分处理,以使终端设备显示的游戏画面更加清晰、自然。
可选的,本申请实施例所涉及的终端设备可以为手机、电脑、电视、车载终端(或无人驾驶系统)、增强现实(augmented reality,AR)设备、虚拟现实(virtual reality,VR)设备、混合现实设备、可穿戴式设备、智能家庭设备、无人机终端设备等等。
当然,上述实施例所示的图像处理方法还可以应用于其他应用场景,本申请实施例对此不作具体限定。
下面,结合图19,介绍一种图像处理方法的应用场景。
图19为本申请实施例提供的一种应用场景示意图。请参见图19,终端设备中安装有应用程序,例如,应用程序可以为图像显示应用程序、视频播放应用程序、视频通话应用程序等。应用程序可以获取媒体流(例如,视频流),终端设备中的解码器对媒体流进行解码可以得到原图像,终端设备通过上述实施例所示的图像处理方法对原图像进行超分处理,得到超分图像,终端设备显示超分图像。
可选的,当媒体流为视频流时,在通过本申请实施例所示的图像处理方法对视频帧进行处理时,由于视频中的内容(或场景)在短时间内通常不会发生变化,因此,为了提高处理效率,可以每T帧确定一次图像滤波器,即,根据一帧图像确定的图像滤波器可以应用于该帧之后的T-1帧图像,例如,T可以为5至10之间的任意数。
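示例性的,上述每T帧确定一次图像滤波器的处理时序可以用如下Python代码示意。其中compute_filter与super_resolve为假设的外部函数,仅用于说明根据某一帧确定的滤波器在其后T-1帧上的复用方式,并非对实现方式的限定。

```python
def process_video(frames, T, compute_filter, super_resolve):
    """每T帧重新计算一次图像滤波器,
    根据某一帧确定的滤波器复用于该帧之后的T-1帧。"""
    results, current_filter = [], None
    for idx, frame in enumerate(frames):
        if idx % T == 0:  # 每T帧确定一次滤波器
            current_filter = compute_filter(frame)
        results.append(super_resolve(frame, current_filter))
    return results
```

例如T=3时,第0帧计算的滤波器用于第0、1、2帧,第3帧重新计算滤波器并用于第3、4、5帧,依次类推。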
可选的,可以通过处理模型实现上述实施例所示的图像处理方法。例如,可以获取样本数据,并对样本数据进行训练,以得到处理模型,该处理模型可以实现上述图像处理方法。可选的,在实际应用过程中,可以在个人计算机(personal computer,PC)上根据样本数据训练得到上述处理模型,并在计算机上将上述处理模型转换为离线模型,可以将该离线模型移动至其它任何终端设备(例如,手机、平板电脑等移动设备),以使终端设备可以通过该处理模型进行图像处理。
需要说明的是,对处理模型的训练过程可以参见上述实施例对识别模型的训练过程,此处不再进行赘述。
下面,结合图20对处理模型的生成及使用过程进行说明。
图20为本申请实施例提供的处理模型的生成及使用过程示意图。请参见图20,可以在PC上根据样本数据训练得到处理模型,并在计算机上将上述处理模型转换为离线模型。将该离线模型安装至移动终端。例如,在移动终端中的应用程序获取到媒体流(例如,视频流)之后,终端设备中的解码器对媒体流进行解码可以得到原图像,终端设备通过该离线的处理模型对原图像进行超分处理,得到超分图像,终端设备显示超分图像。
图21为本申请实施例提供的图像处理装置的结构示意图。请参见图21,该图像处理装置10可以包括:确定模块11和超分模块12,其中,
所述确定模块11用于,确定第一图像对应的图像滤波器,所述图像滤波器中具有所述第一图像中每个像素对应的滤波器参数,具有不同纹理特征的像素对应的滤波器参数不同;
所述超分模块12用于,根据所述第一图像的图像滤波器,对所述第一图像进行超分处理,得到所述第一图像的超分图像。
可选的,确定模块11可以执行上述图5实施例中的S501-S503,图9实施例中的S901-S902,以及图17实施例中的S1701。
可选的,超分模块12可以执行上述图12实施例中的S1201-S1206,以及图17实施例中的S1702。
需要说明的是,本申请实施例所示的图像处理装置可以执行上述方法实施例所示的技术方案,其实现原理以及有益效果类似,此处不再进行赘述。
在一种可能的实施方式中,所述确定模块11具体用于:
获取所述第一图像的纹理图像;
根据所述纹理图像中各像素的纹理特征,在所述纹理图像中确定C个局部纹理图像和每个局部纹理图像的权重值,所述局部纹理图像中纹理的纹理特征相同;C=f 2×t,所述f为所述预设放大倍数,所述t为所述第一图像中每个像素对应的滤波器参数的个数,所述f大于1,所述t为大于或等于1的整数;
根据所述C个局部纹理图像和每个局部纹理图像的权重值,确定所述图像滤波器,所述图像滤波器的通道数量为所述C。
在一种可能的实施方式中,所述确定模块11具体用于:
根据预设尺寸对所述第一图像进行压缩处理,压缩处理后的所述第一图像的尺寸为所述预设尺寸;
根据压缩处理后的所述第一图像,确定所述纹理图像。
在一种可能的实施方式中,所述图像滤波器的第i个通道中的第(x,y)个滤波器参数为:第i个局部纹理图像中第(x,y)个像素的像素值与所述第i个局部纹理图像的权重值的乘积;
其中,所述i为小于或等于所述C的正整数,所述x为小于或等于M的正整数,所述y为小于或等于N的正整数,所述M为所述第一图像在横向包括的像素个数,所述N为所述第一图像在纵向包括的像素个数,所述M和所述N分别为大于1的整数。
在一种可能的实施方式中,所述确定模块11具体用于:
通过识别模型对所述第一图像进行处理,得到所述第一图像的图像滤波器,所述识别模型为对多组样本学习得到的,每组样本包括第一样本图像和第二样本图像,所述第一样本图像和所述第二样本图像中的图像内容相同,所述第一样本图像的分辨率大于所述第二样本图像的分辨率。
在一种可能的实施方式中,所述确定模块11具体用于:
根据预设尺寸对所述第一图像进行压缩处理,压缩处理后的所述第一图像的尺寸为预设尺寸;
通过识别模型对压缩处理后的所述第一图像进行处理。
在一种可能的实施方式中,所述超分模块12具体用于:
获取所述第一图像的梯度图像,所述梯度图像与所述第一图像的尺寸相同,所述梯度图像的尺寸为M*N,所述M和所述N分别为大于1的整数;
通过所述图像滤波器对所述梯度图像进行处理,得到第二图像,所述第二图像的尺寸为(f*M)*(f*N),其中,所述f为预设放大倍数;
将所述第一图像放大f倍,得到第三图像,所述第三图像的尺寸为(f*M)*(f*N);
根据所述第二图像和所述第三图像,得到所述超分图像,所述超分图像的尺寸为(f*M)*(f*N)。
在一种可能的实施方式中,所述超分模块12具体用于:
通过所述图像滤波器对所述梯度图像进行处理,得到f 2张子图像,每张子图像的尺寸为(f*M)*(f*N);
根据所述f 2张子图像确定所述第二图像。
在一种可能的实施方式中,所述超分模块12具体用于:
通过所述图像滤波器中第k*t+1至(k+1)*t个通道中的参数对所述梯度图像进行处理,得到第k张子图像,所述k依次取0,1,……,f 2-1。
在一种可能的实施方式中,所述超分模块12具体用于:
分别对所述f 2张子图像中相同像素位置的像素进行拼接处理,得到M*N个图像块,每个图像块的尺寸为f*f;
根据每个图像块中的像素在所述子图像中的像素位置,对所述M*N个图像块进行拼接处理,得到所述第二图像。
需要说明的是,本申请实施例所示的图像处理装置可以执行上述方法实施例所示的技术方案,其实现原理以及有益效果类似,此处不再进行赘述。
图22为本申请实施例提供的计算机系统的硬件结构示意图。请参见图22,该计算机系统20包括:存储器21和处理器22,其中,存储器21和处理器22通信;示例性的,存储器21和处理器22可以通过通信总线23通信,所述存储器21用于存储计算机程序,所述处理器22执行所述计算机程序,以执行如下步骤:
确定第一图像对应的图像滤波器,所述图像滤波器中具有所述第一图像中每个像素对应的滤波器参数,具有不同纹理特征的像素对应的滤波器参数不同;
根据所述第一图像的图像滤波器,对所述第一图像进行超分处理,得到所述第一图像的超分图像。
可选的,处理器22可以实现图21实施例所示的确定模块11和超分模块12的功能。
需要说明的是,本申请实施例所示的图像处理装置可以执行上述方法实施例所示的技术方案,其实现原理以及有益效果类似,此处不再进行赘述。
在一种可能的实施方式中,所述处理器22具体用于:
获取所述第一图像的纹理图像;
根据所述纹理图像中各像素的纹理特征,在所述纹理图像中确定C个局部纹理图像和每个局部纹理图像的权重值,所述局部纹理图像中纹理的纹理特征相同;C=f 2×t,所述f为所述预设放大倍数,所述t为所述第一图像中每个像素对应的滤波器参数的个数,所述f大于1,所述t为大于或等于1的整数;
根据所述C个局部纹理图像和每个局部纹理图像的权重值,确定所述图像滤波器,所述图像滤波器的通道数量为所述C。
在一种可能的实施方式中,所述处理器22具体用于:
根据预设尺寸对所述第一图像进行压缩处理,压缩处理后的所述第一图像的尺寸为所述预设尺寸;
根据压缩处理后的所述第一图像,确定所述纹理图像。
在一种可能的实施方式中,所述图像滤波器的第i个通道中的第(x,y)个滤波器参数为:第i个局部纹理图像中第(x,y)个像素的像素值与所述第i个局部纹理图像的权重值的乘积;
其中,所述i为小于或等于所述C的正整数,所述x为小于或等于M的正整数,所述y为小于或等于N的正整数,所述M为所述第一图像在横向包括的像素个数,所述N为所述第一图像在纵向包括的像素个数,所述M和所述N分别为大于1的整数。
在一种可能的实施方式中,所述处理器22具体用于:
通过识别模型对所述第一图像进行处理,得到所述第一图像的图像滤波器,所述识别模型为对多组样本学习得到的,每组样本包括第一样本图像和第二样本图像,所述第一样本图像和所述第二样本图像中的图像内容相同,所述第一样本图像的分辨率大于所述第二样本图像的分辨率。
在一种可能的实施方式中,所述处理器22具体用于:
根据预设尺寸对所述第一图像进行压缩处理,压缩处理后的所述第一图像的尺寸为预设尺寸;
通过识别模型对压缩处理后的所述第一图像进行处理。
在一种可能的实施方式中,所述处理器22具体用于:
获取所述第一图像的梯度图像,所述梯度图像与所述第一图像的尺寸相同,所述梯度图像的尺寸为M*N,所述M和所述N分别为大于1的整数;
通过所述图像滤波器对所述梯度图像进行处理,得到第二图像,所述第二图像的尺寸为(f*M)*(f*N),其中,所述f为预设放大倍数;
将所述第一图像放大f倍,得到第三图像,所述第三图像的尺寸为(f*M)*(f*N);
根据所述第二图像和所述第三图像,得到所述超分图像,所述超分图像的尺寸为(f*M)*(f*N)。
在一种可能的实施方式中,所述处理器22具体用于:
通过所述图像滤波器对所述梯度图像进行处理,得到f 2张子图像,每张子图像的尺寸为(f*M)*(f*N);
根据所述f 2张子图像确定所述第二图像。
在一种可能的实施方式中,所述处理器22具体用于:
通过所述图像滤波器中第k*t+1至(k+1)*t个通道中的参数对所述梯度图像进行处理,得到第k张子图像,所述k依次取0,1,……,f 2-1。
在一种可能的实施方式中,所述处理器22具体用于:
分别对所述f 2张子图像中相同像素位置的像素进行拼接处理,得到M*N个图像块,每个图像块的尺寸为f*f;
根据每个图像块中的像素在所述子图像中的像素位置,对所述M*N个图像块进行拼接处理,得到所述第二图像。
需要说明的是,本申请实施例所示的图像处理装置可以执行上述方法实施例所示的技术方案,其实现原理以及有益效果类似,此处不再进行赘述。
可选的,上述处理器可以是CPU,还可以是其他通用处理器、DSP、ASIC等。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。结合本申请所公开的认证方法实施例中的步骤可以直接体现为硬件处理器执行完成,或者用处理器中的硬件及软件模块组合执行完成。
本申请提供一种计算机可读存储介质,所述计算机可读存储介质用于存储计算机程序,所述计算机程序用于实现上述实施例所述的图像处理方法。
本申请实施例还提供一种芯片或者集成电路,包括:存储器和处理器;
所述存储器,用于存储程序指令,有时还用于存储中间数据;
所述处理器,用于调用所述存储器中存储的所述程序指令以实现如上所述的图像处理方法。
可选的,存储器可以是独立的,也可以跟处理器集成在一起。在有些实施方式中,存储器还可以位于所述芯片或者集成电路之外。
本申请实施例还提供一种程序产品,所述程序产品包括计算机程序,所述计算机程序存储在存储介质中,所述计算机程序用于实现上述的图像处理方法。
实现上述各方法实施例的全部或部分步骤可以通过程序指令相关的硬件来完成。前述的程序可以存储于一可读取存储器中。该程序在执行时,执行包括上述各方法实施例的步骤;而前述的存储器(存储介质)包括:只读存储器(英文:read-only memory,缩写:ROM)、RAM、快闪存储器、硬盘、固态硬盘、磁带(英文:magnetic tape)、软盘(英文:floppy disk)、光盘(英文:optical disc)及其任意组合。
本申请实施例是参照根据本申请实施例的方法、设备(系统)、和计算机程序产品的流程图和/或方框图来描述的。应理解可由计算机程序指令实现流程图和/或方框图中的每一流程和/或方框、以及流程图和/或方框图中的流程和/或方框的结合。可提供这些计算机程序指令到通用计算机、专用计算机、嵌入式处理机或其他可编程数据处理设备的处理单元以产生一个机器,使得通过计算机或其他可编程数据处理设备的处理单元执行的指令产生用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的装置。
这些计算机程序指令也可存储在能引导计算机或其他可编程数据处理设备以特定方式工作的计算机可读存储器中,使得存储在该计算机可读存储器中的指令产生包括指令装置的制造品,该指令装置实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能。
这些计算机程序指令也可装载到计算机或其他可编程数据处理设备上,使得在计算机或其他可编程设备上执行一系列操作步骤以产生计算机实现的处理,从而在计算机或其他可编程设备上执行的指令提供用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的步骤。
显然,本领域的技术人员可以对本申请实施例进行各种改动和变型而不脱离本申请的精神和范围。这样,倘若本申请实施例的这些修改和变型属于本申请权利要求及其等同技术的范围之内,则本申请也意图包含这些改动和变型在内。
在本申请中,术语“包括”及其变形可以指非限制性的包括;术语“或”及其变形可以指“和/或”。本申请中术语“第一”、“第二”等是用于区别类似的对象,而不必用于描述特定的顺序或先后次序。本申请中,“多个”是指两个或两个以上。“和/或”,描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。字符“/”一般表示前后关联对象是一种“或”的关系。

Claims (22)

  1. 一种图像处理方法,其特征在于,包括:
    确定第一图像对应的图像滤波器,所述图像滤波器中具有所述第一图像中每个像素对应的滤波器参数,具有不同纹理特征的像素对应的滤波器参数不同;
    根据所述第一图像的图像滤波器,对所述第一图像进行超分处理,得到所述第一图像的超分图像。
  2. 根据权利要求1所述的方法,其特征在于,所述确定第一图像对应的图像滤波器,包括:
    获取所述第一图像的纹理图像;
    根据所述纹理图像中各像素的纹理特征,在所述纹理图像中确定C个局部纹理图像和每个局部纹理图像的权重值,所述局部纹理图像中纹理的纹理特征相同;C=f 2×t,所述f为预设放大倍数,所述t为所述第一图像中每个像素对应的滤波器参数的个数,所述f大于1,所述t为大于或等于1的整数;
    根据所述C个局部纹理图像和每个局部纹理图像的权重值,确定所述图像滤波器,所述图像滤波器的通道数量为所述C。
  3. 根据权利要求2所述的方法,其特征在于,所述图像滤波器的第i个通道中的第(x,y)个滤波器参数为:第i个局部纹理图像中第(x,y)个像素的像素值与所述第i个局部纹理图像的权重值的乘积;
    其中,所述i为小于或等于所述C的正整数,所述x为小于或等于M的正整数,所述y为小于或等于N的正整数,所述M为所述第一图像在横向包括的像素个数,所述N为所述第一图像在纵向包括的像素个数,所述M和所述N分别为大于1的整数。
  4. 根据权利要求2或3所述的方法,其特征在于,所述获取所述第一图像的纹理图像,包括:
    根据预设尺寸对所述第一图像进行压缩处理,压缩处理后的所述第一图像的尺寸为所述预设尺寸;
    根据压缩处理后的所述第一图像,确定所述纹理图像。
  5. 根据权利要求1所述的方法,其特征在于,所述确定第一图像对应的图像滤波器,包括:
    通过识别模型对所述第一图像进行处理,得到所述第一图像的图像滤波器,所述识别模型为对多组样本学习得到的,每组样本包括第一样本图像和第二样本图像,所述第一样本图像和所述第二样本图像中的图像内容相同,所述第一样本图像的分辨率大于所述第二样本图像的分辨率。
  6. 根据权利要求5所述的方法,其特征在于,所述通过识别模型对所述第一图像进行处理,包括:
    根据预设尺寸对所述第一图像进行压缩处理,压缩处理后的所述第一图像的尺寸为预设尺寸;
    通过识别模型对压缩处理后的所述第一图像进行处理。
  7. 根据权利要求1-6任一项所述的方法,其特征在于,所述根据所述第一图像的图像滤波器,对所述第一图像进行超分处理,得到所述第一图像的超分图像,包括:
    获取所述第一图像的梯度图像,所述梯度图像与所述第一图像的尺寸相同,所述梯度图像的尺寸为M*N,所述M和所述N分别为大于1的整数;
    通过所述图像滤波器对所述梯度图像进行处理,得到第二图像,所述第二图像的尺寸为(f*M)*(f*N),其中,所述f为预设放大倍数;
    将所述第一图像放大f倍,得到第三图像,所述第三图像的尺寸为(f*M)*(f*N);
    根据所述第二图像和所述第三图像,得到所述超分图像,所述超分图像的尺寸为(f*M)*(f*N)。
  8. 根据权利要求7所述的方法,其特征在于,所述通过所述图像滤波器对所述梯度图像进行处理,得到第二图像,包括:
    通过所述图像滤波器对所述梯度图像进行处理,得到f 2张子图像,每张子图像的尺寸为(f*M)*(f*N);
    根据所述f 2张子图像确定所述第二图像。
  9. 根据权利要求8所述的方法,其特征在于,所述通过所述第一图像的图像滤波器对所述梯度图像进行处理,得到f 2张子图像,包括:
    通过所述图像滤波器中第k*t+1至(k+1)*t个通道中的参数对所述梯度图像进行处理,得到第k张子图像,所述k依次取0,1,……,f 2-1。
  10. 根据权利要求8或9所述的方法,其特征在于,所述根据所述f 2张子图像确定所述第二图像,包括:
    分别对所述f 2张子图像中相同像素位置的像素进行拼接处理,得到M*N个图像块,每个图像块的尺寸为f*f;
    根据每个图像块中的像素在所述子图像中的像素位置,对所述M*N个图像块进行拼接处理,得到所述第二图像。
  11. 一种图像处理装置,其特征在于,包括:确定模块和超分模块,其中,
    所述确定模块用于,确定第一图像对应的图像滤波器,所述图像滤波器中具有所述第一图像中每个像素对应的滤波器参数,具有不同纹理特征的像素对应的滤波器参数不同;
    所述超分模块用于,根据所述第一图像的图像滤波器,对所述第一图像进行超分处理,得到所述第一图像的超分图像。
  12. 根据权利要求11所述的装置,其特征在于,所述确定模块具体用于:
    获取所述第一图像的纹理图像;
    根据所述纹理图像中各像素的纹理特征,在所述纹理图像中确定C个局部纹理图像和每个局部纹理图像的权重值,所述局部纹理图像中纹理的纹理特征相同;C=f 2×t,所述f为预设放大倍数,所述t为所述第一图像中每个像素对应的滤波器参数的个数,所述f大于1,所述t为大于或等于1的整数;
    根据所述C个局部纹理图像和每个局部纹理图像的权重值,确定所述图像滤波器,所述图像滤波器的通道数量为所述C。
  13. 根据权利要求12所述的装置,其特征在于,所述图像滤波器的第i个通道中的第(x,y)个滤波器参数为:第i个局部纹理图像中第(x,y)个像素的像素值与所述第i个局部纹理图像的权重值的乘积;
    其中,所述i为小于或等于所述C的正整数,所述x为小于或等于M的正整数,所述y为小于或等于N的正整数,所述M为所述第一图像在横向包括的像素个数,所述N为所述第一图像在纵向包括的像素个数,所述M和所述N分别为大于1的整数。
  14. 根据权利要求12或13所述的装置,其特征在于,所述确定模块具体用于:
    根据预设尺寸对所述第一图像进行压缩处理,压缩处理后的所述第一图像的尺寸为所述预设尺寸;
    根据压缩处理后的所述第一图像,确定所述纹理图像。
  15. 根据权利要求11所述的装置,其特征在于,所述确定模块具体用于:
    通过识别模型对所述第一图像进行处理,得到所述第一图像的图像滤波器,所述识别模型为对多组样本学习得到的,每组样本包括第一样本图像和第二样本图像,所述第一样本图像和所述第二样本图像中的图像内容相同,所述第一样本图像的分辨率大于所述第二样本图像的分辨率。
  16. 根据权利要求15所述的装置,其特征在于,所述确定模块具体用于:
    根据预设尺寸对所述第一图像进行压缩处理,压缩处理后的所述第一图像的尺寸为预设尺寸;
    通过识别模型对压缩处理后的所述第一图像进行处理。
  17. 根据权利要求11-16任一项所述的装置,其特征在于,所述超分模块具体用于:
    获取所述第一图像的梯度图像,所述梯度图像与所述第一图像的尺寸相同,所述梯度图像的尺寸为M*N,所述M和所述N分别为大于1的整数;
    通过所述图像滤波器对所述梯度图像进行处理,得到第二图像,所述第二图像的尺寸为(f*M)*(f*N),其中,所述f为预设放大倍数;
    将所述第一图像放大f倍,得到第三图像,所述第三图像的尺寸为(f*M)*(f*N);
    根据所述第二图像和所述第三图像,得到所述超分图像,所述超分图像的尺寸为(f*M)*(f*N)。
  18. 根据权利要求17所述的装置,其特征在于,所述超分模块具体用于:
    通过所述图像滤波器对所述梯度图像进行处理,得到f 2张子图像,每张子图像的尺寸为(f*M)*(f*N);
    根据所述f 2张子图像确定所述第二图像。
  19. 根据权利要求18所述的装置,其特征在于,所述超分模块具体用于:
    通过所述图像滤波器中第k*t+1至(k+1)*t个通道中的参数对所述梯度图像进行处理,得到第k张子图像,所述k依次取0,1,……,f 2-1。
  20. 根据权利要求18或19所述的装置,其特征在于,所述超分模块具体用于:
    分别对所述f 2张子图像中相同像素位置的像素进行拼接处理,得到M*N个图像块,每个图像块的尺寸为f*f;
    根据每个图像块中的像素在所述子图像中的像素位置,对所述M*N个图像块进行拼接处理,得到所述第二图像。
  21. 一种计算机系统,其特征在于,包括:存储器和处理器,所述存储器存储有计算机程序,所述处理器运行所述计算机程序执行如权利要求1-10任一项所述的图像处理方法。
  22. 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质包括计算机程序,所述计算机程序在被一个或多个处理器执行时实现权利要求1-10任一项所述的图像处理方法。
PCT/CN2020/098208 2019-07-12 2020-06-24 图像处理方法、装置及设备 WO2021008322A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP20841415.1A EP3992903A4 (en) 2019-07-12 2020-06-24 IMAGE PROCESSING METHOD, APPARATUS AND DEVICE
US17/574,185 US20220138906A1 (en) 2019-07-12 2022-01-12 Image Processing Method, Apparatus, and Device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910629031.6A CN112215761A (zh) 2019-07-12 2019-07-12 图像处理方法、装置及设备
CN201910629031.6 2019-07-12

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/574,185 Continuation US20220138906A1 (en) 2019-07-12 2022-01-12 Image Processing Method, Apparatus, and Device

Publications (1)

Publication Number Publication Date
WO2021008322A1 true WO2021008322A1 (zh) 2021-01-21

Family

ID=74047827

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/098208 WO2021008322A1 (zh) 2019-07-12 2020-06-24 图像处理方法、装置及设备

Country Status (4)

Country Link
US (1) US20220138906A1 (zh)
EP (1) EP3992903A4 (zh)
CN (1) CN112215761A (zh)
WO (1) WO2021008322A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114387652A (zh) * 2022-01-12 2022-04-22 北京百度网讯科技有限公司 图像识别方法、识别模型的训练方法、装置和电子设备
CN117392685A (zh) * 2023-11-09 2024-01-12 深圳市深档数码技术有限公司 扫描文档质量提升方法、装置、存储介质及设备

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050152005A1 (en) * 2004-01-09 2005-07-14 Niranjan Damera-Venkata Dither matrix design using sub-pixel addressability
CN101079949A (zh) * 2006-02-07 2007-11-28 索尼株式会社 图像处理设备和方法、记录介质以及程序
CN101976435A (zh) * 2010-10-07 2011-02-16 西安电子科技大学 基于对偶约束的联合学习超分辨方法
CN102986220A (zh) * 2010-07-20 2013-03-20 西门子公司 用高分辨率的参考帧的视频编码
CN107197135A (zh) * 2016-03-21 2017-09-22 成都理想境界科技有限公司 一种视频生成方法、播放方法及视频生成装置、播放装置
CN109919838A (zh) * 2019-01-17 2019-06-21 华南理工大学 基于注意力机制提升轮廓清晰度的超声图像超分辨率重建方法

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8233748B2 (en) * 2007-07-20 2012-07-31 Samsung Electronics Co., Ltd. Image-resolution-improvement apparatus and method
US8571355B2 (en) * 2009-08-13 2013-10-29 Samsung Electronics Co., Ltd. Method and apparatus for reconstructing a high-resolution image by using multi-layer low-resolution images
CN103679631B (zh) * 2012-09-18 2018-01-23 华为技术有限公司 一种放大图像的方法
CN103475876B (zh) * 2013-08-27 2016-06-22 北京工业大学 一种基于学习的低比特率压缩图像超分辨率重建方法
CN103514580B (zh) * 2013-09-26 2016-06-08 香港应用科技研究院有限公司 用于获得视觉体验优化的超分辨率图像的方法和系统
US20170193635A1 (en) * 2014-05-28 2017-07-06 Peking University Shenzhen Graduate School Method and apparatus for rapidly reconstructing super-resolution image
CN105847968B (zh) * 2016-03-21 2018-12-21 京东方科技集团股份有限公司 基于深度学习的解像方法和系统
WO2018053340A1 (en) * 2016-09-15 2018-03-22 Twitter, Inc. Super resolution using a generative adversarial network
CN108122262B (zh) * 2016-11-28 2021-05-07 南京理工大学 基于主结构分离的稀疏表示单帧图像超分辨率重建算法
US10776904B2 (en) * 2017-05-03 2020-09-15 Samsung Electronics Co., Ltd. Method and apparatus for processing image
CN107341765B (zh) * 2017-05-05 2020-04-28 西安邮电大学 一种基于卡通纹理分解的图像超分辨率重建方法
CN107527321B (zh) * 2017-08-22 2020-04-17 维沃移动通信有限公司 一种图像重建方法、终端及计算机可读存储介质
CN108765343B (zh) * 2018-05-29 2021-07-20 Oppo(重庆)智能科技有限公司 图像处理的方法、装置、终端及计算机可读存储介质
US10885608B2 (en) * 2018-06-06 2021-01-05 Adobe Inc. Super-resolution with reference images
CN111986069A (zh) * 2019-05-22 2020-11-24 三星电子株式会社 图像处理装置及其图像处理方法
CN110264404B (zh) * 2019-06-17 2020-12-08 北京邮电大学 一种超分辨图像纹理优化的方法和装置


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3992903A4

Also Published As

Publication number Publication date
EP3992903A1 (en) 2022-05-04
US20220138906A1 (en) 2022-05-05
EP3992903A4 (en) 2022-09-07
CN112215761A (zh) 2021-01-12

Similar Documents

Publication Publication Date Title
US10853916B2 (en) Convolution deconvolution neural network method and system
CN110650368B (zh) 视频处理方法、装置和电子设备
US11017586B2 (en) 3D motion effect from a 2D image
WO2020108083A1 (zh) 视频处理方法、装置、电子设备及计算机可读介质
US11928753B2 (en) High fidelity interactive segmentation for video data with deep convolutional tessellations and context aware skip connections
US9639956B2 (en) Image adjustment using texture mask
US8692830B2 (en) Automatic avatar creation
JP2023539691A (ja) 人物画像修復方法、装置、電子機器、記憶媒体及びプログラム製品
KR102616700B1 (ko) 영상 처리 장치 및 그 영상 처리 방법
CN110881109B (zh) 用于增强现实应用的视频中的实时叠加放置
US11823322B2 (en) Utilizing voxel feature transformations for view synthesis
WO2022156626A1 (zh) 一种图像的视线矫正方法、装置、电子设备、计算机可读存储介质及计算机程序产品
EP3681144A1 (en) Video processing method and apparatus based on augmented reality, and electronic device
KR20200135102A (ko) 영상 처리 장치 및 그 영상 처리 방법
CN110163866A (zh) 一种图像处理方法、电子设备及计算机可读存储介质
KR101028628B1 (ko) 영상 텍스쳐 필터링 방법, 이를 수행하는 프로그램을 기록한 기록매체 및 이를 수행하는 장치
WO2021008322A1 (zh) 图像处理方法、装置及设备
CN113313631B (zh) 图像渲染方法和装置
WO2024001095A1 (zh) 面部表情识别方法、终端设备及存储介质
CN103440674A (zh) 一种数字图像蜡笔特效的快速生成方法
WO2023197780A1 (zh) 图像处理方法、装置、电子设备及存储介质
WO2024041235A1 (zh) 图像处理方法、装置、设备、存储介质及程序产品
WO2022218042A1 (zh) 视频处理方法、装置、视频播放器、电子设备及可读介质
WO2024032331A9 (zh) 图像处理方法及装置、电子设备、存储介质
AU2018271418A1 (en) Creating selective virtual long-exposure images

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20841415

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2020841415

Country of ref document: EP

Effective date: 20220126