CN109064399B - Image super-resolution reconstruction method and system, computer device and storage medium thereof - Google Patents

Image super-resolution reconstruction method and system, computer device and storage medium thereof

Info

Publication number
CN109064399B
CN109064399B CN201810803433.9A
Authority
CN
China
Prior art keywords
image
image data
resolution
super
segmented
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810803433.9A
Other languages
Chinese (zh)
Other versions
CN109064399A (en)
Inventor
贺永刚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Shiyuan Electronics Thecnology Co Ltd
Original Assignee
Guangzhou Shiyuan Electronics Thecnology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Shiyuan Electronics Thecnology Co Ltd filed Critical Guangzhou Shiyuan Electronics Thecnology Co Ltd
Priority to CN201810803433.9A
Publication of CN109064399A
Application granted
Publication of CN109064399B
Legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformation in the plane of the image
    • G06T3/40Scaling the whole image or part thereof
    • G06T3/4053Super resolution, i.e. output image resolution higher than sensor resolution
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image

Abstract

The invention relates to an image super-resolution reconstruction method and system, a computer device and a storage medium thereof, belonging to the technical field of image processing. The image super-resolution reconstruction method comprises the following steps: segmenting a low-resolution image to be reconstructed according to scene content, and respectively setting label values for the image data of each segmented scene area; splicing the image data of each segmented scene area with its corresponding label value to obtain the input data of a deep learning network; and inputting the input data into the deep learning network for parameter learning, and reconstructing a super-resolution image according to the network parameters obtained by learning. The technical scheme solves the problem in the prior art that the details corresponding to different scene contents in an image are difficult to restore accurately: those details can be recovered accurately, and the quality of the reconstructed image is improved.

Description

Image super-resolution reconstruction method and system, computer device and storage medium thereof
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a method and a system for reconstructing super-resolution images, a computer device, and a storage medium thereof.
Background
Image super-resolution reconstruction refers to recovering a high-resolution image from a low-resolution image or video sequence. Traditional technical schemes mainly adopt sparse coding to realize super-resolution reconstruction: two sparse dictionary sets and the mapping relation between them are learned from a low-resolution image set and a high-resolution image set respectively; a low-resolution image is mapped onto the low-resolution dictionary, the corresponding high-resolution dictionary coefficients are obtained through the dictionary mapping relation, and the high-resolution image is then reconstructed. In addition, with the emergence of deep learning, deep convolution kernels are used as dictionaries, the mapping from low-resolution images to high-resolution images is learned through a deep convolutional network, and the high-resolution image is then reconstructed, achieving better results than sparse coding methods.
However, in the process of implementing the present invention, the inventors found that the above techniques have at least the following problem: when super-resolution reconstruction is performed on an image, different scene contents require different details to be restored. For example, the details to be reproduced in super-resolution reconstruction are not the same for a smooth wall surface as for cluttered grass, and it is difficult in the prior art to accurately recover the details corresponding to different scene contents in an image, which affects the quality of the reconstructed image.
Disclosure of Invention
Based on this, it is necessary to provide an image super-resolution reconstruction method and system for solving the problem that it is difficult to accurately recover the corresponding details of different scene contents in an image.
An image super-resolution reconstruction method comprises the following steps:
segmenting a low-resolution image to be reconstructed according to scene content, and respectively setting label values of image data of each segmented scene area;
splicing the image data of each segmented scene area and the corresponding label value thereof to obtain input data of a deep learning network;
and inputting the input data into a deep learning network for parameter learning, and reconstructing a super-resolution image according to the network parameters obtained by learning.
According to the image super-resolution reconstruction method, the low-resolution image to be reconstructed is first segmented according to scene content and label values are set; the scene information in the image is then added to the super-resolution deep learning network as a prior, and the super-resolution image is reconstructed. The learned parameters of this technical scheme can adapt to different scene contents, so super-resolution reconstruction can be performed better on the image, the details corresponding to different scene contents can be accurately recovered, and the quality of the reconstructed image is improved.
In one embodiment, the step of respectively setting the tag values of the image data of the respective segmented scene areas comprises:
setting label values corresponding to various types aiming at various types of scene contents;
acquiring corresponding label values according to the types of the image data of each segmented scene area;
and writing the obtained label value into the image data of the segmented scene area.
In one embodiment, the step of obtaining input data of the deep learning network by stitching the image data of each segmented scene area and the corresponding label value thereof includes:
acquiring RGB three-channel image data of each segmented scene area;
acquiring a label value L of a segmented scene area where the RGB three-channel image data are located;
and combining the RGB three-channel image data and the label value L into four-channel image data RGBL as input data of the deep learning network.
In one embodiment, the deep learning network is a convolutional neural network.
In one embodiment, the step of inputting the input data into a deep learning network for parameter learning and reconstructing a super-resolution image according to the learned network parameters includes:
inputting the image data RGBL of the four channels into a convolutional neural network for parameter learning to obtain a convolutional kernel, reconstructing each segmented scene area of the low-resolution image to be reconstructed according to the convolutional kernel, and outputting the super-resolution image.
An image super-resolution reconstruction system, comprising:
the segmentation module is used for segmenting the low-resolution image to be reconstructed according to the scene content and respectively setting the label value of the image data of each segmented scene area;
the splicing module is used for splicing the image data of each segmented scene area and the corresponding label value thereof to obtain input data of the deep learning network;
and the reconstruction module is used for inputting the input data into a deep learning network for parameter learning and reconstructing a super-resolution image according to the network parameters obtained by learning.
According to the image super-resolution reconstruction system, the segmentation module segments the low-resolution image to be reconstructed according to scene content and sets the label values; the splicing module adds the scene information in the image to the super-resolution deep learning network as a prior; and the reconstruction module then reconstructs the super-resolution image. The learned parameters of this technical scheme can adapt to different scene contents, so super-resolution reconstruction can be performed better on the image, the details corresponding to different scene contents can be accurately recovered, and the quality of the reconstructed image is improved.
In one embodiment, the segmentation module is further configured to set tag values corresponding to various types of scene contents; acquiring corresponding label values according to the types of the image data of each segmented scene area; and writing the obtained label value into the image data of the segmented scene area.
In one embodiment, the splicing module is further configured to acquire RGB three-channel image data of each segmented scene area; acquiring a label value L of a segmented scene area where the RGB three-channel image data are located; and combining the RGB three-channel image data and the label value L into four-channel image data RGBL as input data of the deep learning network.
In addition, there is a need to provide a computer device and a storage medium thereof for solving the problem that it is difficult to accurately recover the corresponding details of different scene contents in an image.
A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the image super-resolution reconstruction method as described above when executing the computer program.
According to the computer equipment, through the computer program running on the processor, the corresponding details of different scene contents in the image can be accurately recovered, and the quality of the reconstructed image is improved.
A computer storage medium having stored thereon a computer program which, when executed by a processor, implements the image super-resolution reconstruction method as described above.
The computer storage medium can accurately recover the corresponding details of different scene contents in the image through the stored computer program, and improves the quality of the reconstructed image.
Drawings
FIG. 1 is a flowchart of a super-resolution image reconstruction method according to an embodiment;
FIG. 2 is a schematic structural diagram of an image super-resolution reconstruction system according to an embodiment;
FIG. 3 is a block diagram of a computer system capable of implementing embodiments of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Referring to fig. 1, fig. 1 is a flowchart of an image super-resolution reconstruction method according to an embodiment, including the following steps:
and S10, segmenting the low-resolution image to be reconstructed according to the scene content, and respectively setting the label value of the image data of each segmented scene area.
In the above step, the image is first segmented according to the scene content of the low-resolution image to be reconstructed. The segmentation granularity can be chosen according to actual requirements: the finer the segmentation, the finer the details that can be recovered in the subsequent image reconstruction. The label values set after segmentation are expressed in numeric form for ease of image processing.
For example, for a low-resolution image to be reconstructed that mainly contains two kinds of scene content, sky and grassland, a suitable segmentation algorithm may be used to divide the image into two segmented scene areas, sky and grassland. Label values are then assigned according to a preset class scheme, for example 0 for sky and 1 for grassland: the image data of the grassland area is written with label 1, and the image data of the sky area is written with label 0. If a third, unknown class is present, it is assigned its own label value, for example 2, and so on, as sketched below.
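The following minimal sketch (not part of the patent text) illustrates this label assignment, assuming NumPy arrays and boolean segmentation masks produced by some scene-segmentation algorithm; the helper name build_label_map is hypothetical and introduced only for illustration.

    import numpy as np

    def build_label_map(sky_mask: np.ndarray, grass_mask: np.ndarray) -> np.ndarray:
        """Return an HxW label map: 0 = sky, 1 = grassland, 2 = unknown."""
        label_map = np.full(sky_mask.shape, 2, dtype=np.uint8)  # default to the unknown class
        label_map[sky_mask] = 0    # write label 0 into the sky area
        label_map[grass_mask] = 1  # write label 1 into the grassland area
        return label_map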
In one embodiment, the process of setting the label value of the image data of each divided scene area in step S10 may include the following steps:
Firstly, label values corresponding to the various types of scene content are set; specifically, each type of scene content is assigned one corresponding label value.
Then, the corresponding label values are acquired according to the types of the image data of each segmented scene area; specifically, when the low-resolution image to be reconstructed is segmented, the label value of each image data type is looked up according to the specific segmentation scheme.
Finally, the obtained label value is written into the image data of the segmented scene area; specifically, the label value found for each type is written into the image data of the corresponding segmented scene area.
According to the scheme of this embodiment, the label values corresponding to the various types of scene content are set first and, in use, are looked up and written into the image data in combination with the segmentation scheme; this facilitates image segmentation processing and improves the efficiency of image data processing.
S20, splicing the image data of each segmented scene area and the corresponding label value to obtain input data of the deep learning network.
In the above step, based on the deep learning network to be applied, the input data is generated from the image data of each segmented scene area and its corresponding label value, and is used in the deep learning process.
According to the technical scheme, the label value is newly added to the input data, so that the scene area information in the image is added to the super-resolution deep learning network as a prior; the input data of the deep learning network thus gains one extra dimension, and the learned parameters can adapt to different scene contents.
In one embodiment, the splicing process of step S20 may include the following steps:
s201, acquiring RGB three-channel image data of each segmented scene area;
s202, acquiring a label value L of a segmented scene area where the RGB three-channel image data are located;
s203, combining the RGB three-channel image data and the label value L into four-channel image data RGBL which is used as input data of the deep learning network.
The scheme of this embodiment is a processing scheme for RGB-format images: the RGB three-channel image data and the label value L of each segmented scene area are combined into four-channel RGBL data and input into the deep learning network for learning, so that the learning process can adapt to different scene contents.
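As a minimal sketch (not part of the patent text), the channel stitching of step S203 could look as follows, assuming an H x W x 3 NumPy representation of the RGB data; the helper name to_rgbl is hypothetical and introduced only for illustration.

    import numpy as np

    def to_rgbl(rgb: np.ndarray, label_map: np.ndarray) -> np.ndarray:
        """Stack HxWx3 RGB image data with an HxW label map into HxWx4 RGBL data."""
        l_channel = label_map.astype(rgb.dtype)[..., np.newaxis]  # HxWx1 label channel L
        return np.concatenate([rgb, l_channel], axis=-1)          # HxWx4 RGBL input data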
Of course, besides the above embodiment, the present invention may also be applied to other multi-channel image formats such as YUV and YCbCr.
As an example, the deep learning network may adopt a Convolutional Neural Network (CNN); the CNN may adopt the fully convolutional structure commonly used in super-resolution algorithms and outputs a high-resolution image.
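A minimal sketch of such a fully convolutional network with a four-channel RGBL input is given below; it is not part of the patent, and PyTorch, the layer widths, kernel sizes, and the PixelShuffle upsampling are all assumptions made for illustration.

    import torch.nn as nn

    class RGBLSuperResolutionNet(nn.Module):
        """SRCNN-style fully convolutional network taking 4-channel RGBL input."""
        def __init__(self, upscale: int = 2):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(4, 64, kernel_size=9, padding=4),   # 4 input channels: R, G, B, L
                nn.ReLU(inplace=True),
                nn.Conv2d(64, 32, kernel_size=5, padding=2),
                nn.ReLU(inplace=True),
                nn.Conv2d(32, 3 * upscale ** 2, kernel_size=5, padding=2),
                nn.PixelShuffle(upscale),  # rearrange into a 3-channel high-resolution image
            )

        def forward(self, x):
            return self.body(x)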
S30, inputting the input data into a deep learning network for parameter learning, and reconstructing a super-resolution image according to the network parameters obtained by learning.
In the above step, the scene area information in the image is added to the super-resolution deep learning network as a prior, so that the learned parameters can reflect the differences between scene contents and the content of the reconstructed image is more detailed.
Compared with traditional schemes, in which the quality of the reconstructed image is relatively undifferentiated across scene contents, the technical scheme of the embodiment of the invention adapts to the scene. Taking the low-resolution image to be reconstructed that contains sky and grassland as an example, the image data of the sky area is smoother in the recovered image, while the image data of the grassland area shows finer detail.
In one embodiment, the process of reconstructing the super-resolution image in step S30 may include the following steps:
inputting the image data RGBL of the four channels into a convolutional neural network for parameter learning to obtain a convolutional kernel, reconstructing each segmented scene area of the low-resolution image to be reconstructed according to the convolutional kernel, and outputting the super-resolution image.
In the scheme of this embodiment, for RGB-format image reconstruction, the convolution kernels are obtained by learning on the RGBL image data of each segmented scene area, and the super-resolution image is then reconstructed. The details corresponding to different scene contents in the image can thus be accurately recovered, and the quality of the reconstructed image is improved.
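The following sketch outlines this parameter learning and reconstruction step; it is not part of the patent, and PyTorch, the Adam optimizer, the L1 loss, and the paired low-/high-resolution training data are assumptions introduced only for illustration.

    import torch
    import torch.nn as nn

    def train_and_reconstruct(model, rgbl_lr_batches, hr_batches, rgbl_lr_image, epochs=100):
        """Learn convolution kernels from RGBL inputs, then reconstruct one image."""
        optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
        loss_fn = nn.L1Loss()
        for _ in range(epochs):
            for rgbl_lr, hr in zip(rgbl_lr_batches, hr_batches):
                optimizer.zero_grad()
                loss = loss_fn(model(rgbl_lr), hr)  # learn kernels mapping RGBL -> high resolution
                loss.backward()
                optimizer.step()
        with torch.no_grad():
            return model(rgbl_lr_image)  # reconstruct and output the super-resolution image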
In addition, as an embodiment, the whole low-resolution image to be reconstructed can be given a single label in order to highlight details of a certain class. For example, assuming that the low-resolution image to be reconstructed contains both people and grass, the label value of the entire image can be set to the label value of the person class in order to highlight the super-resolution effect on people; the data are then sent to the deep learning network without setting different label values for each segmented scene area.
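As a small illustrative sketch (not from the patent), the whole-image label of this embodiment could be built as follows; the concrete label value chosen for the person class is a hypothetical assumption.

    import numpy as np

    def uniform_label_map(height: int, width: int, person_label: int = 3) -> np.ndarray:
        """Fill the whole label channel with one class label to highlight that class."""
        return np.full((height, width), person_label, dtype=np.uint8)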
In summary, the scheme of the embodiment of the invention is a deep-learning-based image super-resolution method that fully considers the scene content of the image and adds the low-resolution image segmentation information to the deep learning network for parameter learning. Because the image segmentation label value is added, the deep network can learn network parameters according to the image content, so the generated super-resolution image achieves a better effect.
Referring to fig. 2, fig. 2 is a schematic structural diagram of an image super-resolution reconstruction system according to an embodiment, including:
the segmentation module 10 is configured to segment a low-resolution image to be reconstructed according to scene content, and set tag values of image data of each segmented scene area respectively;
the splicing module 20 is configured to splice the image data of each segmented scene area and the corresponding label value thereof to obtain input data of the deep learning network;
and the reconstruction module 30 is configured to input the input data into a deep learning network for parameter learning, and to reconstruct a super-resolution image according to the learned network parameters.
In one embodiment, the segmentation module is further configured to set tag values corresponding to various types of scene contents; acquiring corresponding label values according to the types of the image data of each segmented scene area; and writing the obtained label value into the image data of the segmented scene area.
In one embodiment, the splicing module is further configured to acquire RGB three-channel image data of each segmented scene region; acquiring a label value L of a segmented scene area where the RGB three-channel image data are located; and combining the RGB three-channel image data and the label value L into four-channel image data RGBL as input data of the deep learning network.
In one embodiment, the deep learning network may be a convolutional neural network.
The reconstruction module 30 is further configured to input the image data RGBL of the four channels into a convolutional neural network for parameter learning to obtain a convolutional kernel, reconstruct each segmented scene region of the low-resolution image to be reconstructed according to the convolutional kernel, and output the super-resolution image.
The image super-resolution reconstruction system and the image super-resolution reconstruction method of the invention correspond to each other one to one; the technical features and beneficial effects explained in the embodiments of the image super-resolution reconstruction method all apply to the embodiments of the image super-resolution reconstruction system and are therefore not repeated here.
Based on the examples described above, there is also provided in one embodiment a computer device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor when executing the program implements any one of the image super-resolution reconstruction methods in the embodiments described above.
According to the computer equipment, the corresponding details of different scene contents in the image can be accurately restored through the computer program running on the processor, and the quality of the reconstructed image is improved.
It will be understood by those skilled in the art that all or part of the processes in the methods of the above embodiments may be implemented by a computer program stored in a non-volatile computer readable storage medium. In the embodiments of the present invention, the program may be stored in the storage medium of a computer system and executed by at least one processor in the computer system to implement the processes of the embodiments of the image super-resolution reconstruction methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
Accordingly, in an embodiment, there is also provided a storage medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the image super-resolution reconstruction method of any of the above embodiments.
The computer storage medium can accurately recover the corresponding details of different scene contents in the image through the stored computer program, and improves the quality of the reconstructed image.
FIG. 3 is a block diagram of a computer system capable of implementing embodiments of the present invention. The computer system is only one example of a computing environment suitable for the invention and is not intended to suggest any limitation as to the scope of use of the invention. Nor should the computer system be interpreted as having any dependency on, or requiring any combination of, the components illustrated in the exemplary computer system.
The computer system shown in FIG. 3 is one example of a computer system suitable for use with the present invention. Other architectures with different subsystem configurations may also be used. Well-known devices such as desktop computers, laptops, personal digital assistants, smart phones, tablets, portable media players, and set-top boxes may be suitable for some embodiments of the present invention, but the invention is not limited to the devices listed above.
As shown in FIG. 3, the computer system includes a processor 310, a memory 320, and a system bus 322. Various system components, including the memory 320 and the processor 310, are connected to the system bus 322. The processor 310 is the hardware that executes computer program instructions through basic arithmetic and logical operations in the computer system. The memory 320 is a physical device used for the temporary or permanent storage of computer programs or data (e.g., program state information). The system bus 322 may be any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, and a local bus. The processor 310 and the memory 320 may be in data communication via the system bus 322. The memory 320 includes Read-Only Memory (ROM) or flash memory (neither shown) and Random Access Memory (RAM), which generally refers to the main memory loaded with the operating system and application programs.
The computer system also includes a display interface 330 (e.g., a graphics processing unit), a display device 340 (e.g., a liquid crystal display), an audio interface 350 (e.g., a sound card), and an audio device 360 (e.g., a speaker). The display device 340 and the audio device 360 are media devices for experiencing multimedia content.
The computer system generally includes a storage device 370. The storage device 370 may be selected from a variety of computer readable media, which refers to any available media that can be accessed by the computer system 300, including both removable and non-removable media. For example, computer-readable media include, but are not limited to, flash memory (e.g., microSD cards), CD-ROM, Digital Versatile Disks (DVDs) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by the computer system.
The computer system also includes an input device 380 and an input interface 390 (e.g., an IO controller). A user may enter commands and information into the computer system through input devices 380, such as a keyboard, a mouse, and a touch-panel device on the display device 340. Input device 380 is typically connected to system bus 322 via an input interface 390, but may be connected by other interface and bus structures, such as a Universal Serial Bus (USB).
The computer system may be logically connected in a network environment to one or more network devices. The network device may be a personal computer, a server, a router, a smartphone, a tablet, or another common network node. The computer system 300 is connected to the network device through a Local Area Network (LAN) interface 400 or a mobile communication unit 410. A Local Area Network (LAN) is a computer network formed by interconnection within a limited area, such as a home, a school, a computer lab, or an office building, using a network medium. Wi-Fi and twisted-pair Ethernet are the two most commonly used technologies for building local area networks. Wi-Fi is a technology that enables computer systems to exchange data or connect to a wireless network via radio waves. The mobile communication unit 410 is capable of answering and placing calls over a radio communication link while moving throughout a wide geographic area. In addition to calls, the mobile communication unit 410 also supports Internet access in 2G, 3G, or 4G cellular communication systems that provide mobile data services.
It should be noted that other computer systems, including more or fewer subsystems than the computer system described above, can also be suitable for use with the invention.
As described in detail above, a computer system suitable for the present invention can perform the flow of the image super-resolution reconstruction method. The computer system performs these operations by processor 310 executing software instructions in a computer-readable medium. These software instructions may be read into memory 320 from storage device 370 or from another device via local network interface 400. The software instructions stored in the memory 320 cause the processor 310 to perform the image super-resolution reconstruction method described above. Furthermore, the present invention can be implemented by hardware circuits or by a combination of hardware circuits and software instructions. Thus, implementations of the invention are not limited to any specific combination of hardware circuitry and software.
The technical features of the embodiments described above may be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the embodiments described above are not described, but should be considered as being within the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments only express several embodiments of the present invention, and the description thereof is more specific and detailed, but not construed as limiting the scope of the invention. It should be noted that various changes and modifications can be made by those skilled in the art without departing from the spirit of the invention, and these changes and modifications are all within the scope of the invention. Therefore, the protection scope of the present patent shall be subject to the appended claims.

Claims (10)

1. The image super-resolution reconstruction method is characterized by comprising the following steps of:
segmenting a low-resolution image to be reconstructed according to scene content, and respectively setting label values of image data of each segmented scene area;
splicing the image data of each segmented scene area and the corresponding label value thereof to obtain input data of a deep learning network;
inputting the input data into a deep learning network for parameter learning, and reconstructing a super-resolution image according to network parameters obtained by learning;
the step of obtaining the input data of the deep learning network by splicing the image data of each segmented scene area and the corresponding label value thereof comprises the following steps:
acquiring RGB three-channel image data of each segmented scene area;
acquiring a label value L of a segmented scene area where the RGB three-channel image data are located;
and combining the RGB three-channel image data and the label value L into four-channel image data RGBL as input data of the deep learning network.
2. The image super-resolution reconstruction method according to claim 1, wherein the step of setting the label values of the image data of the respective segmented scene areas respectively comprises:
setting label values corresponding to various types aiming at various types of scene contents;
acquiring corresponding label values according to the types of the image data of each segmented scene area;
and writing the obtained label value into the image data of the segmented scene area.
3. The image super-resolution reconstruction method according to claim 1, wherein the deep learning network is a convolutional neural network.
4. The image super-resolution reconstruction method according to claim 3, wherein the step of inputting the input data into a deep learning network for parameter learning and reconstructing the super-resolution image according to the learned network parameters comprises:
inputting the image data RGBL of the four channels into a convolutional neural network for parameter learning to obtain a convolutional kernel, reconstructing each segmented scene area of the low-resolution image to be reconstructed according to the convolutional kernel, and outputting the super-resolution image.
5. An image super-resolution reconstruction system, comprising:
the segmentation module is used for segmenting the low-resolution image to be reconstructed according to the scene content and respectively setting the label value of the image data of each segmented scene area;
the splicing module is used for splicing the image data of each segmented scene area and the corresponding label value thereof to obtain input data of the deep learning network;
the reconstruction module is used for inputting the input data into a deep learning network for parameter learning and reconstructing a super-resolution image according to network parameters obtained by learning;
the splicing module is further used for acquiring RGB three-channel image data of each segmented scene area; acquiring a label value L of a segmented scene area where the RGB three-channel image data are located; and combining the RGB three-channel image data and the label value L into four-channel image data RGBL as input data of the deep learning network.
6. The image super-resolution reconstruction system according to claim 5, wherein the segmentation module is further configured to set tag values corresponding to various types for various types of scene contents; acquiring corresponding label values according to the types of the image data of each segmented scene area; and writing the obtained label value into the image data of the segmented scene area.
7. The system of claim 5, wherein the deep learning network is a convolutional neural network.
8. The system of claim 5, wherein the reconstruction module is further configured to input the image data RGBL of the four channels into a convolutional neural network for parameter learning to obtain a convolutional kernel, reconstruct each segmented scene region of the to-be-reconstructed low-resolution image according to the convolutional kernel, and output the super-resolution image.
9. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the image super-resolution reconstruction method according to any one of claims 1 to 4 when executing the computer program.
10. A computer storage medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the image super-resolution reconstruction method according to any one of claims 1 to 4.
CN201810803433.9A 2018-07-20 2018-07-20 Image super-resolution reconstruction method and system, computer device and storage medium thereof Active CN109064399B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810803433.9A CN109064399B (en) 2018-07-20 2018-07-20 Image super-resolution reconstruction method and system, computer device and storage medium thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810803433.9A CN109064399B (en) 2018-07-20 2018-07-20 Image super-resolution reconstruction method and system, computer device and storage medium thereof

Publications (2)

Publication Number Publication Date
CN109064399A CN109064399A (en) 2018-12-21
CN109064399B (en) 2023-01-24

Family

ID=64817698

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810803433.9A Active CN109064399B (en) 2018-07-20 2018-07-20 Image super-resolution reconstruction method and system, computer device and storage medium thereof

Country Status (1)

Country Link
CN (1) CN109064399B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111800630A (en) * 2019-04-09 2020-10-20 Tcl集团股份有限公司 Method and system for reconstructing video super-resolution and electronic equipment
US11010872B2 (en) * 2019-04-29 2021-05-18 Intel Corporation Method and apparatus for person super resolution from low resolution image
CN110136062B (en) * 2019-05-10 2020-11-03 武汉大学 Super-resolution reconstruction method combining semantic segmentation
CN110288530A (en) * 2019-06-28 2019-09-27 北京金山云网络技术有限公司 A kind of pair of image carries out the processing method and processing device of super-resolution rebuilding
CN110298790A (en) * 2019-06-28 2019-10-01 北京金山云网络技术有限公司 A kind of pair of image carries out the processing method and processing device of super-resolution rebuilding
CN110619603B (en) * 2019-08-29 2023-11-10 浙江师范大学 Single image super-resolution method for optimizing sparse coefficient
CN110796600B (en) * 2019-10-29 2023-08-11 Oppo广东移动通信有限公司 Image super-resolution reconstruction method, image super-resolution reconstruction device and electronic equipment
CN110958469A (en) * 2019-12-13 2020-04-03 联想(北京)有限公司 Video processing method and device, electronic equipment and storage medium
CN111598776B (en) * 2020-04-29 2023-06-30 Oppo广东移动通信有限公司 Image processing method, image processing device, storage medium and electronic apparatus
CN111640067B (en) * 2020-06-10 2022-08-09 华侨大学 Single image super-resolution reconstruction method based on three-channel convolutional neural network
CN112734647A (en) * 2021-01-20 2021-04-30 支付宝(杭州)信息技术有限公司 Image processing method and device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013026659A (en) * 2011-07-15 2013-02-04 Univ Of Tsukuba Super-resolution image processing device and dictionary creating device for super-resolution image processing
CN103489173A (en) * 2013-09-23 2014-01-01 百年金海科技有限公司 Video image super-resolution reconstruction method
CN107578377A (en) * 2017-08-31 2018-01-12 北京飞搜科技有限公司 A kind of super-resolution image reconstruction method and system based on deep learning
CN107977929A (en) * 2016-10-21 2018-05-01 中国电信股份有限公司 Image Super Resolution Processing method and apparatus

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8971612B2 (en) * 2011-12-15 2015-03-03 Microsoft Corporation Learning image processing tasks from scene reconstructions
US10147167B2 (en) * 2015-11-25 2018-12-04 Heptagon Micro Optics Pte. Ltd. Super-resolution image reconstruction using high-frequency band extraction

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013026659A (en) * 2011-07-15 2013-02-04 Univ Of Tsukuba Super-resolution image processing device and dictionary creating device for super-resolution image processing
CN103489173A (en) * 2013-09-23 2014-01-01 百年金海科技有限公司 Video image super-resolution reconstruction method
CN107977929A (en) * 2016-10-21 2018-05-01 中国电信股份有限公司 Image Super Resolution Processing method and apparatus
CN107578377A (en) * 2017-08-31 2018-01-12 北京飞搜科技有限公司 A kind of super-resolution image reconstruction method and system based on deep learning

Also Published As

Publication number Publication date
CN109064399A (en) 2018-12-21

Similar Documents

Publication Publication Date Title
CN109064399B (en) Image super-resolution reconstruction method and system, computer device and storage medium thereof
US10861133B1 (en) Super-resolution video reconstruction method, device, apparatus and computer-readable storage medium
US11373275B2 (en) Method for generating high-resolution picture, computer device, and storage medium
US10699388B2 (en) Digital image fill
CN106056530B (en) Method and device for displaying picture content in application
CN110070496B (en) Method and device for generating image special effect and hardware device
US10681367B2 (en) Intra-prediction video coding method and device
WO2017128632A1 (en) Method, apparatus and system for image compression and image reconstruction
WO2023160617A9 (en) Video frame interpolation processing method, video frame interpolation processing device, and readable storage medium
US20150242988A1 (en) Methods of eliminating redundant rendering of frames
CN108353110B (en) Method, apparatus and computer readable medium for selecting a process to be applied to video data from a set of candidate processes
CN110807300A (en) Image processing method and device, electronic equipment and medium
CN111339367B (en) Video processing method and device, electronic equipment and computer readable storage medium
CN112269957A (en) Picture processing method, device, equipment and storage medium
CN107277650B (en) Video file cutting method and device
WO2023174416A1 (en) Video super-resolution method and apparatus
CN115456858B (en) Image processing method, device, computer equipment and computer readable storage medium
WO2023125467A1 (en) Image processing method and apparatus, electronic device and readable storage medium
WO2023174355A1 (en) Video super-resolution method and device
WO2024007135A1 (en) Image processing method and apparatus, terminal device, electronic device, and storage medium
WO2023174040A1 (en) Picture processing method and related device
CN112215774B (en) Model training and image defogging methods, apparatus, devices and computer readable media
US20240046628A1 (en) Hierarchical audio-visual feature fusing method for audio-visual question answering and product
CN110276720B (en) Image generation method and device
WO2020243973A1 (en) Model-based signal inference method and apparatus

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant