CN110276720B - Image generation method and device

Info

Publication number
CN110276720B
CN110276720B (application CN201810222294.0A)
Authority
CN
China
Prior art keywords
resolution
feature map
orientation
super
low
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810222294.0A
Other languages
Chinese (zh)
Other versions
CN110276720A (en)
Inventor
谭文伟 (Tan Wenwei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN201810222294.0A (CN110276720B)
Priority to PCT/CN2019/077352 (WO2019174522A1)
Publication of CN110276720A
Application granted
Publication of CN110276720B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4053 Scaling based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • G06T3/4076 Scaling based on super-resolution, using the original low-resolution images to iteratively correct the high-resolution images
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20021 Dividing image into blocks, subimages or windows
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The embodiments of this application provide an image generation method and apparatus in the field of image processing, which can solve the problem that an image obtained by empirical risk minimization (ERM) processing is over-smooth and lacks detail information. The method includes: determining at least one detail orientation low-resolution feature map and at least one complementary orientation low-resolution feature map corresponding to a low-resolution image; determining a detail orientation super-resolution image corresponding to each detail orientation low-resolution feature map; determining a complementary orientation super-resolution image corresponding to each complementary orientation low-resolution feature map; and acquiring a super-resolution image corresponding to the low-resolution image from the detail orientation super-resolution images and the complementary orientation super-resolution images. The embodiments of this application apply to the image super-resolution process.

Description

Image generation method and device
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image generation method and apparatus.
Background
Super-resolution refers to recovering a high-resolution image from a low-resolution image or sequence of images. Most current methods for super-resolution processing of a single image use the empirical risk minimization (ERM) principle. As shown in fig. 1, the X column shows low-resolution images and the Y column shows images processed using the ERM principle.
However, in an image obtained with ERM processing, the transitions between pixels are often over-smoothed, which blurs the image: the result is over-smooth and lacks detail information, so its overall appearance differs noticeably from the original.
Disclosure of Invention
The embodiments of this application provide an image generation method and apparatus, which can solve the problem that an image obtained by ERM processing is over-smooth and lacks detail information.
In a first aspect, an embodiment of the present application provides an image generation method, including: determining at least one detail orientation low-resolution feature map and at least one complementary orientation low-resolution feature map corresponding to the low-resolution image; determining a detail orientation super-resolution image corresponding to each detail orientation low-resolution feature map in the at least one detail orientation low-resolution feature map; determining a complementary orientation super-resolution image corresponding to each of the at least one complementary orientation low-resolution feature map; and acquiring a super-resolution image corresponding to the low-resolution image according to the detail orientation super-resolution image and the complementary orientation super-resolution image.
In this way, the real detail content of the low-resolution image and its other feature information are recorded separately, in the at least one detail orientation low-resolution feature map and the at least one complementary orientation low-resolution feature map determined from the low-resolution image. The detail orientation super-resolution image and the complementary orientation super-resolution image therefore carry more detail information and more of the other information such as local texture; in other words, the weight of the detail content is increased. As a result, the output super-resolution image has richer detail content and structure, and the jagged (aliasing) effect produced when some images are processed can be suppressed. As shown in fig. 5 (a), the super-resolution image generated by the image generation method provided in the embodiments of this application has richer detail content and structure and looks more natural. As shown in fig. 5 (b), the method solves the problem that an image processed using the ERM principle is over-smooth and lacks detail information.
In one possible implementation, determining the at least one detail orientation low-resolution feature map and the at least one complementary orientation low-resolution feature map corresponding to the low-resolution image includes: determining at least one candidate feature map of the low-resolution image; for each candidate feature map, converting the candidate feature map into a gray-scale image; dividing the gray-scale image into N image blocks and determining the number D of image blocks whose gradient-histogram median is greater than or equal to a first preset threshold, where N is an integer greater than or equal to 1 and D is an integer greater than or equal to 0; if R = D/N is greater than or equal to a second preset threshold, determining the candidate feature map to be a detail orientation low-resolution feature map; and subtracting the pixel values of the detail orientation low-resolution feature map from the pixel values of the gray-scale image of the low-resolution image to obtain a complementary orientation low-resolution feature map.
Just as the detail orientation separation module can determine the detail orientation low-resolution feature map and the complementary orientation low-resolution feature map from a low-resolution image, it can determine a detail orientation high-resolution feature map and a complementary orientation high-resolution feature map from a high-resolution image.
In one possible implementation, determining the detail orientation super-resolution image corresponding to each of the at least one detail orientation low-resolution feature map includes: for each detail orientation low-resolution feature map, determining the detail orientation super-resolution image from the detail orientation low-resolution feature map and its corresponding detail orientation high-resolution feature map.
Compared with the detail orientation low-resolution feature map, the detail orientation super-resolution image determined from the detail orientation low-resolution feature map and the corresponding detail orientation high-resolution feature map has more detail content and structure; that is, the weight of the detail content of the low-resolution image is increased, so the subsequently generated super-resolution image looks more natural and vivid.
In one possible implementation, determining the complementary orientation super-resolution image corresponding to each of the at least one complementary orientation low-resolution feature map includes: for each complementary orientation low-resolution feature map, determining the complementary orientation super-resolution image from the complementary orientation low-resolution feature map and its corresponding complementary orientation high-resolution feature map.
Compared with the complementary orientation low-resolution feature map, the complementary orientation super-resolution image determined from the complementary orientation low-resolution feature map and the corresponding complementary orientation high-resolution feature map has more of the content other than detail content, so the subsequently generated super-resolution image looks more natural and vivid.
In one possible implementation, acquiring the super-resolution image corresponding to the low-resolution image from the detail orientation super-resolution image and the complementary orientation super-resolution image includes: for each detail orientation super-resolution feature map in the at least one detail orientation super-resolution feature map and each complementary orientation super-resolution feature map in the at least one complementary orientation super-resolution feature map, adding the pixel values of the detail orientation super-resolution image corresponding to the detail orientation super-resolution feature map to the pixel values of the complementary orientation super-resolution image corresponding to the complementary orientation super-resolution feature map. Here the detail orientation super-resolution feature map corresponds to the complementary orientation super-resolution feature map; that is, the complementary orientation low-resolution feature map corresponding to the complementary orientation super-resolution feature map was obtained by subtracting the pixel values of the detail orientation low-resolution feature map corresponding to the detail orientation super-resolution feature map from the pixel values of the corresponding gray-scale image.
In a second aspect, an embodiment of the present application provides an image generating apparatus, including: the detail orientation separation module is used for determining at least one detail orientation low-resolution feature map and at least one complementary orientation low-resolution feature map corresponding to the low-resolution image; the detail orientation super-resolution module is used for determining a detail orientation super-resolution image corresponding to each detail orientation low-resolution feature map in at least one detail orientation low-resolution feature map; the complementary orientation super-resolution module is used for determining a complementary orientation super-resolution image corresponding to each complementary orientation low-resolution feature map in at least one complementary orientation low-resolution feature map; and the super-resolution image fusion module is used for acquiring a super-resolution image corresponding to the low-resolution image according to the detail orientation super-resolution image and the complementary orientation super-resolution image.
In this way, the detail orientation separation module records the real detail content of the low-resolution image and its other feature information separately, in the at least one detail orientation low-resolution feature map and the at least one complementary orientation low-resolution feature map determined from the low-resolution image; the detail orientation super-resolution image determined by the detail orientation super-resolution module and the complementary orientation super-resolution image determined by the complementary orientation super-resolution module increase the weight of the detail content. The output super-resolution image therefore has richer detail content and structure, and the jagged effect produced when some images are processed can be suppressed. As shown in fig. 5 (a), the super-resolution image generated by the image generation method provided in the embodiments of this application has richer detail content and structure and looks more natural. As shown in fig. 5 (b), the method solves the problem that an image processed using the ERM principle is over-smooth and lacks detail information.
In one possible implementation, the detail orientation separation module is configured to: determine at least one candidate feature map of the low-resolution image; for each candidate feature map, convert the candidate feature map into a gray-scale image; divide the gray-scale image into N image blocks and determine the number D of image blocks whose gradient-histogram median is greater than or equal to a first preset threshold, where N is an integer greater than or equal to 1 and D is an integer greater than or equal to 0; if R = D/N is greater than or equal to a second preset threshold, determine the candidate feature map to be a detail orientation low-resolution feature map; and subtract the pixel values of the detail orientation low-resolution feature map from the pixel values of the gray-scale image of the low-resolution image to obtain a complementary orientation low-resolution feature map.
In one possible implementation, the detail orientation super-resolution module is configured to: for each detail orientation low-resolution feature map, determine the detail orientation super-resolution image from the detail orientation low-resolution feature map and its corresponding detail orientation high-resolution feature map.
In one possible implementation, the complementary orientation super-resolution module is configured to: for each complementary orientation low-resolution feature map, determine the complementary orientation super-resolution image from the complementary orientation low-resolution feature map and its corresponding complementary orientation high-resolution feature map.
In one possible implementation, the super-resolution image fusion module is configured to: for each detail orientation super-resolution feature map in the at least one detail orientation super-resolution feature map and each complementary orientation super-resolution feature map in the at least one complementary orientation super-resolution feature map, add the pixel values of the detail orientation super-resolution image corresponding to the detail orientation super-resolution feature map to the pixel values of the complementary orientation super-resolution image corresponding to the complementary orientation super-resolution feature map, where the detail orientation super-resolution feature map corresponds to the complementary orientation super-resolution feature map.
In a third aspect, the present application provides an apparatus that exists in the form of a chip product. The apparatus includes a processor and a memory; the memory is coupled to the processor and stores the program instructions and data necessary for the apparatus, and the processor executes the program instructions stored in the memory so that the apparatus performs the functions of the image generating apparatus in the above method.
In a fourth aspect, an embodiment of the present application provides an image generating apparatus, where the image generating apparatus may implement the functions performed by the image generating apparatus in the foregoing method embodiments, and the functions may be implemented by hardware, or may be implemented by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the above functions.
In one possible design, the image generating apparatus includes a processor and a communication interface. The processor supports the image generating apparatus in performing the corresponding functions of the above method, and the communication interface supports communication between the image generating apparatus and other network elements. The image generating apparatus may also include a memory, coupled to the processor, which stores the program instructions and data necessary for the image generating apparatus.
In a fifth aspect, embodiments of the present application provide a computer-readable storage medium, which includes instructions that, when executed on a computer, cause the computer to perform any one of the methods provided in the first aspect.
In a sixth aspect, embodiments of the present application provide a computer program product containing instructions, which when run on a computer, cause the computer to perform any one of the methods provided in the first aspect.
Drawings
Fig. 1 is a schematic diagram illustrating a comparison between a low-resolution image and an image processed by an ERM principle according to an embodiment of the present disclosure;
fig. 2 is a schematic structural diagram of an image generating apparatus according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of an end-to-end system framework provided by an embodiment of the present application;
fig. 4 is a schematic flowchart of an image generation method according to an embodiment of the present application;
fig. 5 is a schematic diagram illustrating a comparison between a super-resolution image and an image processed according to the ERM principle provided in an embodiment of the present application;
fig. 6 is a schematic structural diagram of an image generating apparatus according to an embodiment of the present application.
Detailed Description
The embodiment of the application provides an image generation method and an image generation device, which can be applied to a super-resolution process of an image, for example, a process of upgrading a standard definition image into a high definition image.
Fig. 2 is a schematic diagram of the internal structure of an image generating apparatus in an embodiment of this application. The image generating apparatus may include a processing module 201, a communication module 202, and a storage module 203. The processing module 201 controls the hardware and application software of each part of the image generating apparatus; it may be a processor or controller, such as a central processing unit (CPU), a graphics processing unit (GPU), a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof, and may implement or execute the various illustrative logical blocks, modules, and circuits described in connection with this disclosure. The processor may also be a combination of computing devices, for example one or more microprocessors, or a DSP combined with a microprocessor. The communication module 202 receives instructions sent by other devices over communication links such as Long Term Evolution (LTE) or Wireless Fidelity (WiFi), and may also send data of the image generating apparatus to other devices; it may be a transceiver, a transceiver circuit, a communication interface, or the like. The storage module 203 stores the software programs and data of the image generating apparatus; it may be a read-only memory (ROM) or another static storage device that can store static information and instructions, a random access memory (RAM) or another dynamic storage device that can store information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage (including compact discs, laser discs, digital versatile discs, and Blu-ray discs), a magnetic disk storage medium or other magnetic storage device, or any other medium that can carry or store the desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto.
The image generating apparatus may be a desktop computer, a portable computer, a web server, a personal digital assistant (PDA), a mobile phone, a tablet computer, a wireless terminal device, a communication device, an embedded device, or another device supporting the image super-resolution technology with a structure similar to that in fig. 2.
In this embodiment, the processor of the image generating apparatus may further implement the image generation method by running an end-to-end system framework; the software modules of this framework may be stored in a storage medium such as a memory. Fig. 3 is a schematic diagram of the logical relationships of the end-to-end system framework provided in the embodiment of this application. The framework adopts a semantic network model and includes a detail orientation separation module, a detail orientation super-resolution module, a complementary orientation super-resolution module, and a super-resolution image fusion module; its input is a low-resolution image and its output is a super-resolution image.
The detail orientation separation module determines at least one detail orientation low-resolution feature map and at least one complementary orientation low-resolution feature map corresponding to the low-resolution image. The detail orientation super-resolution module determines a detail orientation super-resolution image corresponding to each detail orientation low-resolution feature map; compared with the detail orientation low-resolution feature map, this image has more edge structure, edge strength, detail content, and target feature information. The complementary orientation super-resolution module determines a complementary orientation super-resolution image corresponding to each complementary orientation low-resolution feature map; compared with the complementary orientation low-resolution feature map, this image has more of the other feature information, such as local texture. The super-resolution image fusion module obtains the super-resolution image corresponding to the low-resolution image from the detail orientation super-resolution image and the complementary orientation super-resolution image.
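To make the data flow concrete, the following is a minimal Python sketch of the four-module pipeline, assuming each module is available as a callable; the function and parameter names are illustrative, not taken from the patent:

```python
# Hypothetical skeleton of the end-to-end framework described above.
# Each module is a callable; their internals correspond to steps
# 401-404 of the method below.

def generate_super_resolution(low_res_image, separator, detail_sr,
                              complementary_sr, fuser):
    """Separate the input, super-resolve both orientations, then fuse."""
    detail_maps, complementary_maps = separator(low_res_image)       # step 401
    detail_sr_images = [detail_sr(m) for m in detail_maps]           # step 402
    complementary_sr_images = [complementary_sr(m)
                               for m in complementary_maps]          # step 403
    return fuser(detail_sr_images, complementary_sr_images)          # step 404
```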
For clarity and conciseness of the following description of the various embodiments, a brief introduction to related concepts or technologies is first presented:
RGB three-channel image: an image composed of three channels, R, G, and B, where R represents red, G represents green, and B represents blue.
VGG-Net: a convolutional neural network (CNN); VGG-Net variants typically have 16 to 19 layers.
Mean square error (MSE): the sum of the squared errors over a set of data points divided by the number of data points, i.e. the mean of the squared differences (a minimal sketch follows this concept list).
Fast Super-Resolution Convolutional Neural Network (FSRCNN): a convolutional neural network, typically with four convolutional layers, that convolves a low-resolution image to produce a high-resolution image.
Efficient Sub-Pixel Convolutional Neural Network (ESPCN): like FSRCNN, a convolutional neural network that convolves a low-resolution image to produce a high-resolution image.
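As a quick illustration of the MSE definition given above, here is a minimal numpy sketch (the function name and arguments are illustrative):

```python
import numpy as np

def mse(predicted, target):
    """Mean square error: the sum of squared errors divided by the
    number of data points, i.e. the mean of the squared differences."""
    predicted = np.asarray(predicted, dtype=np.float64)
    target = np.asarray(target, dtype=np.float64)
    return float(np.mean((predicted - target) ** 2))
```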
An embodiment of the present application provides an image generation method, as shown in fig. 4, including:
401. Determine at least one detail orientation low-resolution feature map and at least one complementary orientation low-resolution feature map corresponding to the low-resolution image.
The detail orientation separation module may be configured to receive an input low-resolution image, and determine at least one detail orientation low-resolution feature map and at least one complementary orientation low-resolution feature map corresponding to the low-resolution image.
First, the detail orientation separation module determines at least one candidate feature map of the low-resolution image. For example, when the low-resolution image is an RGB three-channel image, VGG-Net may be used to perform two successive convolutional layer operations on it, where each convolutional layer operation may include N k × k convolution kernels; illustratively, N may be an integer in [20, 100] and k may be 3 or 5 (this N is unrelated to the block count N used below). The detail orientation separation module may then take the at least one feature map obtained after the second convolutional layer operation as the at least one candidate feature map.
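The two-convolution-layer candidate extraction described above can be sketched as follows, assuming PyTorch; the choices N = 64, k = 3, and the ReLU activations are assumptions within the stated ranges, not values prescribed by the patent:

```python
import torch.nn as nn

class CandidateFeatureExtractor(nn.Module):
    """Two successive VGG-style convolution layers: each applies N k x k
    kernels to an RGB input, and the feature maps after the second layer
    serve as the candidate feature maps."""

    def __init__(self, num_kernels=64, kernel_size=3):
        super().__init__()
        padding = kernel_size // 2  # keep the spatial size unchanged
        self.conv1 = nn.Conv2d(3, num_kernels, kernel_size, padding=padding)
        self.conv2 = nn.Conv2d(num_kernels, num_kernels, kernel_size, padding=padding)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, rgb_image):
        # rgb_image: (batch, 3, H, W); returns (batch, N, H, W) candidate maps
        return self.relu(self.conv2(self.relu(self.conv1(rgb_image))))
```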
For each candidate feature map, the detail orientation separation module converts the candidate feature map into a gray-scale image, divides the gray-scale image into N image blocks, and determines the number D of image blocks whose gradient-histogram median is greater than or equal to a first preset threshold. Here N is an integer greater than or equal to 1 (for example 9, 25, or 49) and D is an integer greater than or equal to 0. If R = D/N is greater than or equal to a second preset threshold, the detail orientation separation module determines the candidate feature map to be a detail orientation low-resolution feature map. The detail orientation low-resolution feature map contains edge information and detail content in different directions, and may also contain feature information of different image types. Note that R measures the richness of the detail content: the higher R is, the richer the detail content, and vice versa. For example, the second preset threshold may be 0.3.
For example, assume N = 9, i.e. the gray-scale image converted from the candidate feature map is divided into 9 image blocks, and the medians of the edge-gradient histograms of the 9 blocks are 0, 10, 20, 30, 40, 50, 60, 70, and 80. With a first preset threshold of 30, the number of blocks whose gradient-histogram median is greater than or equal to the threshold is D = 6. With a second preset threshold of 0.3, R = D/N = 6/9 ≈ 0.67 > 0.3, so the detail orientation separation module may determine that this candidate feature map is a detail orientation low-resolution feature map.
Then, following the information-complementarity principle, the detail orientation separation module subtracts the pixel values of the detail orientation low-resolution feature map from the pixel values of the gray-scale image of the low-resolution image to obtain the complementary orientation low-resolution feature map.
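The separation test just described can be sketched with numpy as follows. This is a sketch under assumptions: the median of the per-block gradient magnitudes stands in for the patent's "median of the gradient histogram", and the feature maps are assumed to be single-channel (already gray-scale) arrays:

```python
import numpy as np

def block_gradient_median(block):
    """Median gradient magnitude of one image block."""
    gy, gx = np.gradient(block.astype(np.float64))
    return float(np.median(np.hypot(gx, gy)))

def is_detail_oriented(feature_map, blocks_per_side=3, t1=30.0, t2=0.3):
    """Split the map into N = blocks_per_side**2 blocks, count the D blocks
    whose gradient median is >= t1, and accept when R = D / N >= t2."""
    rows = np.array_split(feature_map, blocks_per_side, axis=0)
    blocks = [b for row in rows for b in np.array_split(row, blocks_per_side, axis=1)]
    d = sum(block_gradient_median(b) >= t1 for b in blocks)
    return d / len(blocks) >= t2

def separate(candidate_maps, low_res_gray):
    """Keep detail orientation maps; derive each complementary map by
    subtracting the detail map from the gray-scale low-resolution image."""
    detail_maps = [m for m in candidate_maps if is_detail_oriented(m)]
    complementary_maps = [low_res_gray - m for m in detail_maps]
    return detail_maps, complementary_maps
```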
It can be understood that the behavior of the detail orientation separation module is determined by its model parameters. For example, a low-resolution image may be processed with a neural network such as FSRCNN or ESPCN to obtain a processing result; the weights are then updated by back-propagation according to an MSE error function, and the model parameters of the detail orientation separation module are trained in this way.
It can be understood that the detail orientation separation module may determine the detail orientation low-resolution feature map and the complementary orientation low-resolution feature map from a low-resolution image; similarly, it may determine a detail orientation high-resolution feature map and a complementary orientation high-resolution feature map from a high-resolution image. In one possible design, the detail orientation separation module may determine, from a low-resolution image, the high-resolution image corresponding to it.
402. Determine a detail orientation super-resolution image corresponding to each detail orientation low-resolution feature map of the at least one detail orientation low-resolution feature map.
The detail orientation super-resolution module is used for acquiring at least one detail orientation low-resolution feature map from the detail orientation separation module and determining a detail orientation super-resolution image corresponding to each detail orientation low-resolution feature map in the at least one detail orientation low-resolution feature map.
For each detail orientation low-resolution feature map of the at least one detail orientation low-resolution feature map, the detail orientation super-resolution module may determine the detail orientation super-resolution image from the detail orientation low-resolution feature map and its corresponding detail orientation high-resolution feature map.
Compared with the detail orientation low-resolution feature map, the detail orientation super-resolution image determined from the detail orientation low-resolution feature map and the corresponding detail orientation high-resolution feature map has more detail content and structure; that is, the weight of the detail content of the low-resolution image is increased, so the subsequently generated super-resolution image looks more natural and vivid.
It can be understood that the behavior of the detail orientation super-resolution module is determined by its model parameters. For example, a neural network such as FSRCNN or ESPCN may process the detail orientation low-resolution feature map and its corresponding detail orientation high-resolution feature map to obtain a processing result; the weights are then updated by error back-propagation according to an MSE error function, and the model parameters of the detail orientation super-resolution module are trained in this way.
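As one concrete possibility for such a module, the following PyTorch sketch shows a minimal ESPCN-style network with sub-pixel (pixel-shuffle) upsampling; the layer widths and activations are illustrative assumptions, not the patent's architecture:

```python
import torch.nn as nn

class EspcnLikeSR(nn.Module):
    """Minimal ESPCN-style super-resolution network: a few convolutions
    followed by sub-pixel upsampling via PixelShuffle."""

    def __init__(self, scale=2, channels=1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, 64, 5, padding=2), nn.Tanh(),
            nn.Conv2d(64, 32, 3, padding=1), nn.Tanh(),
            nn.Conv2d(32, channels * scale * scale, 3, padding=1),
        )
        self.shuffle = nn.PixelShuffle(scale)

    def forward(self, low_res_feature_map):
        return self.shuffle(self.body(low_res_feature_map))

# Training as described above: minimize the MSE between the network output
# and the corresponding high-resolution feature map, updating the weights
# by error back-propagation, e.g.
#   loss = nn.MSELoss()(model(lr_map), hr_map); loss.backward()
```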
403. Determine a complementary orientation super-resolution image corresponding to each of the at least one complementary orientation low-resolution feature map.
The complementary orientation super-resolution module is used for acquiring at least one complementary orientation low-resolution feature map from the detail orientation separation module and determining a complementary orientation super-resolution image corresponding to each complementary orientation low-resolution feature map in the at least one complementary orientation low-resolution feature map.
For each complementary orientation low-resolution feature map of the at least one complementary orientation low-resolution feature map, the complementary orientation super-resolution module determines the complementary orientation super-resolution image from the complementary orientation low-resolution feature map and its corresponding complementary orientation high-resolution feature map.
Compared with the complementary orientation low-resolution feature map, the complementary orientation super-resolution image determined from the complementary orientation low-resolution feature map and the corresponding complementary orientation high-resolution feature map has more of the content other than detail content, so the subsequently generated super-resolution image looks more natural and vivid.
It can be understood that the behavior of the complementary orientation super-resolution module is determined by its model parameters. For example, a neural network such as FSRCNN or ESPCN may process the complementary orientation low-resolution feature map and its corresponding complementary orientation high-resolution feature map to obtain a processing result; the weights are then updated by error back-propagation according to an MSE error function, and the model parameters of the complementary orientation super-resolution module are trained in this way.
404. Acquire a super-resolution image corresponding to the low-resolution image from the detail orientation super-resolution image and the complementary orientation super-resolution image.
The super-resolution image fusion module can be used for acquiring at least one detail orientation super-resolution image from the detail orientation super-resolution module, acquiring at least one complementary orientation super-resolution image from the complementary orientation super-resolution module, and acquiring a super-resolution image corresponding to the low-resolution image according to the at least one detail orientation super-resolution image and the at least one complementary orientation super-resolution image.
For each detail orientation super-resolution feature map in the at least one detail orientation super-resolution feature map and each complementary orientation super-resolution feature map in the at least one complementary orientation super-resolution feature map, the super-resolution image fusion module adds the pixel values of the detail orientation super-resolution image corresponding to the detail orientation super-resolution feature map to the pixel values of the complementary orientation super-resolution image corresponding to the complementary orientation super-resolution feature map. Here the detail orientation super-resolution feature map corresponds to the complementary orientation super-resolution feature map; that is, the complementary orientation low-resolution feature map corresponding to the complementary orientation super-resolution feature map was obtained by subtracting the pixel values of the detail orientation low-resolution feature map corresponding to the detail orientation super-resolution feature map from the pixel values of the corresponding gray-scale image.
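A minimal numpy sketch of this fusion step follows, assuming the paired images are same-sized arrays; the patent specifies only the pairwise pixel addition, so merging the per-pair sums by averaging is an assumption made here for illustration:

```python
import numpy as np

def fuse(detail_sr_images, complementary_sr_images):
    """Add each detail orientation super-resolution image to its paired
    complementary orientation super-resolution image, then merge the
    per-pair sums into one output (here by averaging, an assumption)."""
    pairs = [d + c for d, c in zip(detail_sr_images, complementary_sr_images)]
    return np.mean(pairs, axis=0)
```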
In this way, the detail orientation separation module records the real detail content of the low-resolution image and its other feature information separately, in the at least one detail orientation low-resolution feature map and the at least one complementary orientation low-resolution feature map determined from the low-resolution image; the detail orientation super-resolution image determined by the detail orientation super-resolution module and the complementary orientation super-resolution image determined by the complementary orientation super-resolution module increase the weight of the detail content. The output super-resolution image therefore has richer detail content and structure, and the jagged effect produced when some images are processed can be suppressed. As shown in fig. 5 (a), the super-resolution image generated by the image generation method provided in the embodiments of this application has richer detail content and structure and looks more natural. As shown in fig. 5 (b), the method solves the problem that an image processed using the ERM principle is over-smooth and lacks detail information.
The above description has introduced the solution of the embodiments mainly from the perspective of the image generating apparatus. It can be understood that, to implement the above functions, the image generating apparatus includes corresponding hardware structures and/or software modules for each function. Those skilled in the art will readily appreciate that the algorithm steps described in connection with the embodiments disclosed herein may be implemented in hardware or in a combination of hardware and software. Whether a function is performed by hardware or by software-driven hardware depends on the particular application and the design constraints of the solution. Skilled artisans may implement the described functionality differently for each particular application, but such implementation decisions should not be regarded as departing from the scope of this application.
In the embodiments of this application, the image generating apparatus may be divided into functional modules according to the above method examples; for example, each function may be assigned its own module, or two or more functions may be integrated into one processing module. An integrated module may be implemented in hardware or as a software functional module. It should be noted that the division into modules in the embodiments of this application is schematic and is merely a logical functional division; other divisions are possible in actual implementations.
When the functional modules are divided according to their functions, fig. 6 shows a possible schematic structure of the image generating apparatus 6 of the above embodiments. The image generating apparatus includes a detail orientation separation module 601, a detail orientation super-resolution module 602, a complementary orientation super-resolution module 603, and a super-resolution image fusion module 604. The detail orientation separation module 601 supports the image generating apparatus in executing process 401 in fig. 4, the detail orientation super-resolution module 602 supports process 402, the complementary orientation super-resolution module 603 supports process 403, and the super-resolution image fusion module 604 supports process 404. For the details of each step of the above method embodiment, refer to the functional description of the corresponding functional module; they are not repeated here.
The steps of a method or algorithm described in connection with the disclosure herein may be embodied in hardware or in software instructions executed by a processor. The software instructions may consist of corresponding software modules, which may be stored in RAM, flash memory, ROM, EPROM, EEPROM, registers, a hard disk, a removable hard disk, a compact disc, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium; the storage medium may also be integral to the processor. The processor and the storage medium may reside in an ASIC, and the ASIC may reside in a core network interface device. Alternatively, the processor and the storage medium may reside as discrete components in a core network interface device.
Those skilled in the art will recognize that in one or more of the examples described above, the functions described herein may be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on an image generation apparatus readable medium. The image-generating device readable medium includes image-generating device storage media and communication media, where communication media includes any medium that facilitates transfer of an image-generating device program from one place to another. A storage medium may be any available medium that can be accessed by a general purpose or special purpose image generating device.
The above embodiments further describe in detail the objectives, technical solutions, and advantages of this application. It should be understood that these embodiments are merely examples and are not intended to limit the protection scope of this application; any modification, equivalent replacement, or improvement made on the basis of the technical solutions of this application shall fall within its protection scope.

Claims (8)

1. An image generation method, comprising:
determining at least one detail orientation low-resolution feature map and at least one complementary orientation low-resolution feature map corresponding to the low-resolution image;
determining a detail orientation super-resolution image corresponding to each detail orientation low resolution feature map of the at least one detail orientation low resolution feature map;
determining a complementary orientation super-resolution image corresponding to each of the at least one complementary orientation low-resolution feature map;
acquiring a super-resolution image corresponding to the low-resolution image according to the detail orientation super-resolution image and the complementary orientation super-resolution image;
the determining at least one detail orientation low resolution feature map and at least one complementary orientation low resolution feature map corresponding to the low resolution image comprises:
determining at least one candidate feature map of the low resolution image;
for each candidate feature map in the at least one candidate feature map, converting the candidate feature map into a gray-scale image; dividing the gray level image into N image blocks, and determining the number D of the image blocks, wherein the median of a gradient histogram corresponding to the N image blocks is greater than or equal to a first preset threshold; if R is larger than or equal to a second preset threshold value, determining the candidate feature map as the detail orientation low-resolution feature map; wherein R = D/N, N is an integer greater than or equal to 1, and D is an integer greater than or equal to 0;
and subtracting the pixel value of the detail orientation low-resolution feature map from the pixel value of the gray scale image of the low-resolution image to obtain the complementary orientation low-resolution feature map.
2. The method of claim 1, wherein the determining a detail orientation super-resolution image corresponding to each detail orientation low resolution feature map of the at least one detail orientation low resolution feature map comprises:
for each detail orientation low resolution feature map of the at least one detail orientation low resolution feature map, determining the detail orientation super-resolution image from the detail orientation low resolution feature map and a detail orientation high resolution feature map to which the detail orientation low resolution feature map corresponds.
3. The method of claim 1 or 2, wherein said determining a complementary orientation super-resolution image corresponding to each of said at least one complementary orientation low-resolution feature map comprises:
for each of the at least one complementary orientation low-resolution feature map, determining the complementary orientation super-resolution image from the complementary orientation low-resolution feature map and the complementary orientation high-resolution feature map to which the complementary orientation low-resolution feature map corresponds.
4. The method of claim 1 or 2, wherein the obtaining the super-resolution image corresponding to the low-resolution image from the detail orientation super-resolution image and the complementary orientation super-resolution image comprises:
for each detail orientation super-resolution feature map in the at least one detail orientation super-resolution feature map and each complementary orientation super-resolution feature map in the at least one complementary orientation super-resolution feature map, adding a pixel value of the detail orientation super-resolution image corresponding to the detail orientation super-resolution feature map and a pixel value of the complementary orientation super-resolution image corresponding to the complementary orientation super-resolution feature map;
and the detail orientation super-resolution feature map corresponds to the complementary orientation super-resolution feature map.
5. An image generation apparatus, comprising:
the detail orientation separation module is used for determining at least one detail orientation low-resolution feature map and at least one complementary orientation low-resolution feature map corresponding to the low-resolution image;
a detail orientation super-resolution module for determining a detail orientation super-resolution image corresponding to each of the at least one detail orientation low-resolution feature map;
a complementary orientation super-resolution module for determining a complementary orientation super-resolution image corresponding to each of the at least one complementary orientation low-resolution feature map;
the super-resolution image fusion module is used for acquiring a super-resolution image corresponding to the low-resolution image according to the detail orientation super-resolution image and the complementary orientation super-resolution image;
the detail orientation separation module is configured to:
determining at least one candidate feature map of the low resolution image;
for each candidate feature map in the at least one candidate feature map, converting the candidate feature map into a gray-scale image; dividing the gray level image into N image blocks, and determining the number D of the image blocks, wherein the median of a gradient histogram corresponding to the N image blocks is greater than or equal to a first preset threshold; if R is larger than or equal to a second preset threshold value, determining the candidate feature map as the detail orientation low-resolution feature map; wherein R = D/N, N is an integer greater than or equal to 1, and D is an integer greater than or equal to 0;
and subtracting the pixel value of the detail orientation low-resolution feature map from the pixel value of the gray scale image of the low-resolution image to obtain the complementary orientation low-resolution feature map.
6. The image generation apparatus of claim 5, wherein the detail orientation super-resolution module is configured to:
for each detail orientation low resolution feature map of the at least one detail orientation low resolution feature map, determining the detail orientation super-resolution image from the detail orientation low resolution feature map and a detail orientation high resolution feature map to which the detail orientation low resolution feature map corresponds.
7. The image generation apparatus of claim 5 or 6, wherein the complementary orientation super-resolution module is configured to:
for each of the at least one complementary orientation low-resolution feature map, determine the complementary orientation super-resolution image from the complementary orientation low-resolution feature map and the complementary orientation high-resolution feature map to which the complementary orientation low-resolution feature map corresponds.
8. The image generation apparatus of claim 5 or 6, wherein the super-resolution image fusion module is configured to:
for each detail orientation super-resolution feature map in the at least one detail orientation super-resolution feature map and each complementary orientation super-resolution feature map in the at least one complementary orientation super-resolution feature map, add a pixel value of the detail orientation super-resolution image corresponding to the detail orientation super-resolution feature map and a pixel value of the complementary orientation super-resolution image corresponding to the complementary orientation super-resolution feature map;
and the detail orientation super-resolution feature map corresponds to the complementary orientation super-resolution feature map.
CN201810222294.0A (priority and filing date 2018-03-16), Image generation method and device, Active, granted as CN110276720B (en)

Priority Applications (2)

Application Number | Priority Date | Filing Date | Title
CN201810222294.0A (CN110276720B) | 2018-03-16 | 2018-03-16 | Image generation method and device
PCT/CN2019/077352 (WO2019174522A1) | 2018-03-16 | 2019-03-07 | Image generating method and device

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN201810222294.0A (CN110276720B) | 2018-03-16 | 2018-03-16 | Image generation method and device

Publications (2)

Publication Number | Publication Date
CN110276720A (en) | 2019-09-24
CN110276720B (en) | 2021-02-12

Family

ID=67907389

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN201810222294.0A (CN110276720B, Active) | Image generation method and device | 2018-03-16 | 2018-03-16

Country Status (2)

Country Link
CN (1) CN110276720B (en)
WO (1) WO2019174522A1 (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN101216889A * | 2008-01-14 | 2008-07-09 | Zhejiang University | A face image super-resolution method with the amalgamation of global characteristics and local details information
CN101615290A * | 2009-07-29 | 2009-12-30 | Xi'an Jiaotong University | A face image super-resolution reconstruction method based on canonical correlation analysis

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US8958484B2 * | 2009-08-11 | 2015-02-17 | Google Inc. | Enhanced image and video super-resolution processing
US8878950B2 * | 2010-12-14 | 2014-11-04 | Pelican Imaging Corporation | Systems and methods for synthesizing high resolution images using super-resolution processes
WO2018053340A1 * | 2016-09-15 | 2018-03-22 | Twitter, Inc. | Super resolution using a generative adversarial network
CN106548449A * | 2016-09-18 | 2017-03-29 | Beijing SenseTime Technology Development Co., Ltd. | Method, apparatus and system for generating a super-resolution depth map
CN107240066A * | 2017-04-28 | 2017-10-10 | Tianjin University | Image super-resolution reconstruction algorithm based on shallow and deep convolutional neural networks
CN107492070B * | 2017-07-10 | 2019-12-03 | North China Electric Power University | A single-image super-resolution computation method using a dual-channel convolutional neural network

Also Published As

Publication number | Publication date
WO2019174522A1 (en) | 2019-09-19
CN110276720A (en) | 2019-09-24

Similar Documents

Publication Publication Date Title
US10650495B2 (en) High resolution style transfer
CN108921225B (en) Image processing method and device, computer equipment and storage medium
US20190347767A1 (en) Image processing method and device
CN109064399B (en) Image super-resolution reconstruction method and system, computer device and storage medium thereof
WO2019196542A1 (en) Image processing method and apparatus
WO2022127912A1 (en) Image segmentation method, network training method, electronic equipment and storage medium
CN111353956B (en) Image restoration method and device, computer equipment and storage medium
JP2014527210A (en) Content adaptive system, method and apparatus for determining optical flow
CN111160242A (en) Image target detection method, system, electronic terminal and storage medium
US20220351413A1 (en) Target detection method, computer device and non-transitory readable storage medium
CN111754405A (en) Image resolution reduction and restoration method, equipment and readable storage medium
CN112967381A (en) Three-dimensional reconstruction method, apparatus, and medium
WO2023065604A1 (en) Image processing method and apparatus
CN110717405B (en) Face feature point positioning method, device, medium and electronic equipment
CN111935484B (en) Video frame compression coding method and device
CN110276720B (en) Image generation method and device
CN110136061B (en) Resolution improving method and system based on depth convolution prediction and interpolation
CN115619678A (en) Image deformation correction method and device, computer equipment and storage medium
US10984173B2 (en) Vector-based glyph style transfer
CN113947521A (en) Image resolution conversion method and device based on deep neural network and terminal equipment
CN113361535A (en) Image segmentation model training method, image segmentation method and related device
CN112419146A (en) Image processing method and device and terminal equipment
CN113034358B (en) Super-resolution image processing method and related device
US9064347B2 (en) Method, medium, and system rendering 3 dimensional graphics data considering fog effect
US20210383515A1 (en) Evenly Spaced Curve Sampling Technique for Digital Visual Content Transformation

Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant