CN109102463B - Super-resolution image reconstruction method and device - Google Patents
- Publication number
- CN109102463B CN109102463B CN201810918370.1A CN201810918370A CN109102463B CN 109102463 B CN109102463 B CN 109102463B CN 201810918370 A CN201810918370 A CN 201810918370A CN 109102463 B CN109102463 B CN 109102463B
- Authority
- CN
- China
- Prior art keywords
- size
- neural network
- convolutional neural
- layer
- pixel matrix
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4053—Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
Abstract
The invention provides a super-resolution image reconstruction method and device, comprising the following steps: setting the size of the image to be detected after graying processing as a first size, and acquiring the image to be detected at the first size; inputting the image to be detected at the first size into a first-layer deep learning model, and outputting a three-pixel matrix of the first size; inputting the three-pixel matrix into a second-layer deep learning model, and outputting a three-pixel matrix of a second size, wherein the second size is larger than the first size; and integrating the three pixel matrices of the second size to obtain the super-resolution image. The invention converts low-resolution images into high-resolution images quickly, is unaffected by object motion, achieves a high degree of reconstruction, and has a simple design.
Description
Technical Field
The invention relates to the technical field of image processing and machine learning, in particular to a super-resolution image reconstruction method and device.
Background
As the demand for higher picture resolution grows, high-pixel-count cameras are increasingly preferred; yet even these cameras have limited resolution, so it is often necessary to convert low-resolution pictures into high-resolution ones.
At present, super-resolution image reconstruction is based mainly on frequency-domain or spatial-domain methods. Frequency-domain methods, however, apply only when the image contains no local motion (the object moves only as a whole) and the spatial noise is unchanged, while spatial-domain methods involve complex designs, many calculation steps, and low speed.
Therefore, current super-resolution image reconstruction techniques either handle only global object motion, failing under local motion, or suffer from complex design, many calculation steps, and low speed.
Disclosure of Invention
To solve the problems that existing super-resolution image reconstruction techniques either handle only global object motion, failing under local motion, or suffer from complex design, many calculation steps, and low speed, in one aspect the invention provides a super-resolution image reconstruction method comprising the following steps:
setting the size of the image to be detected after the graying treatment as a first size, and acquiring the image to be detected with the first size;
inputting an image to be detected with a first size to a first layer deep learning model, and outputting a three-pixel matrix with the first size;
inputting the three-pixel matrix into a second layer deep learning model, and outputting a three-pixel matrix with a second size, wherein the second size is larger than the first size;
and integrating the three pixel matrixes with the second size to obtain the super-resolution image.
Preferably, setting the size of the image to be detected after graying processing as the first size and acquiring the image to be detected at the first size includes:
performing graying processing on the image to be detected so that its RGB three-channel representation is separated into an R channel, a G channel, and a B channel, and acquiring the grayed image to be detected;
and expanding or reducing the grayed image to be detected to the first size by bilinear interpolation.
Preferably, the three-pixel matrix includes an R-channel pixel matrix, a G-channel pixel matrix, and a B-channel pixel matrix.
Preferably, the first layer of deep learning model comprises a first convolutional neural network, a second convolutional neural network and a third convolutional neural network which are connected in parallel, and the first convolutional neural network, the second convolutional neural network and the third convolutional neural network are respectively 3 layers of convolutional neural networks; the first convolutional neural network outputs an R-channel pixel matrix of a first size, the second convolutional neural network outputs a G-channel pixel matrix of the first size, and the third convolutional neural network outputs a B-channel pixel matrix of the first size.
Preferably, the 3-layer convolutional neural network comprises a first convolutional layer, a second convolutional layer, and a third convolutional layer arranged in sequence; the convolution kernels of the first convolutional layer are 5 × 5, those of the second convolutional layer are 4 × 4, those of the third convolutional layer are 3 × 3, and each of the three convolutional layers has 64 convolution kernels.
Preferably, the second layer deep learning network comprises a fourth convolutional neural network, a fifth convolutional neural network and a sixth convolutional neural network which are parallel; the fourth convolutional neural network, the fifth convolutional neural network and the sixth convolutional neural network are respectively 18 layers of convolutional neural networks; the fourth convolutional neural network outputs an R-channel pixel matrix of a second size, the fifth convolutional neural network outputs a G-channel pixel matrix of the second size, and the sixth convolutional neural network outputs a B-channel pixel matrix of the second size.
Preferably, each convolutional layer of the 18-layer convolutional neural network has 3 × 3 convolution kernels, 32 per layer; the 18-layer convolutional neural network has a residual structure whose residual (skip) connection runs from the input of the 18-layer network to its output.
In another aspect, the present invention further provides a super-resolution image reconstruction apparatus, including:
the acquisition module is used for acquiring an image to be detected;
the first processing module is used for performing graying processing on the image to be detected, setting it to the first size, inputting the image to be detected at the first size into the first-layer deep learning model, and outputting a three-pixel matrix of the first size;
the second processing module inputs the three-pixel matrix into the second layer deep learning model and outputs a three-pixel matrix with a second size, wherein the second size is larger than the first size;
and the integration module integrates the three-pixel matrix with the second size to acquire the super-resolution image.
In yet another aspect, an electronic device for super-resolution image reconstruction includes:
the processor and the memory communicate with each other through a bus; the memory stores program instructions executable by the processor, and the processor invokes the program instructions to perform the method described above.
In a further aspect, the invention provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the method described above.
The invention provides a super-resolution image reconstruction method and device, in which an image to be detected, set to a first size after graying processing, is processed by a first-layer deep learning model into a three-pixel matrix of the first size; a second-layer deep learning model then processes this into a clearer three-pixel matrix of a second size, and the three-pixel matrix of the second size is finally integrated to obtain the super-resolution image. The invention converts low-resolution images into high-resolution images quickly, is unaffected by object motion, achieves a high degree of reconstruction, and has a simple design.
Drawings
FIG. 1 is a flow chart of a super-resolution image reconstruction method according to a preferred embodiment of the present invention;
fig. 2 is a schematic structural diagram of a super-resolution image reconstruction apparatus according to a preferred embodiment of the present invention;
fig. 3 is a schematic structural diagram of an electronic device for super-resolution image reconstruction according to a preferred embodiment of the present invention.
Detailed Description
The following detailed description of the present invention is provided in connection with the accompanying drawings and examples. The following examples are intended to illustrate the invention but are not intended to limit the scope of the invention.
Fig. 1 is a flowchart illustrating a super-resolution image reconstruction method according to a preferred embodiment of the present invention, and as shown in fig. 1, an embodiment of the present invention provides a super-resolution image reconstruction method, including:
Step S101, setting the size of an image to be detected after graying processing as a first size, and acquiring the image to be detected at the first size;
Step S102, inputting the image to be detected at the first size into a first-layer deep learning model, and outputting a three-pixel matrix whose size remains the first size;
Step S103, inputting the three-pixel matrix into a second-layer deep learning model, and outputting a three-pixel matrix of a second size, wherein the second size is larger than the first size;
Step S104, integrating the three-pixel matrix of the second size to obtain a super-resolution image.
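Steps S101 to S104 can be sketched end to end as follows — a minimal NumPy mock-up in which stub functions stand in for the two deep learning models (the stubs, the ×3 upscale factor, and all function names are illustrative assumptions, not the patent's own code):

```python
import numpy as np

def layer1_stub(channel):
    # Stand-in for the 3-layer CNN: first size in, first size out.
    return channel

def layer2_stub(channel, scale=3):
    # Stand-in for the 18-layer residual CNN: enlarges first size to second size.
    # np.kron with a ones block is a crude nearest-neighbour upscale placeholder.
    return np.kron(channel, np.ones((scale, scale)))

def reconstruct(img_rgb):
    # S101: split the grayed image into its R, G and B channel pixel matrices.
    channels = [img_rgb[..., i].astype(float) for i in range(3)]
    # S102: first-layer model -> three pixel matrices, still the first size.
    channels = [layer1_stub(c) for c in channels]
    # S103: second-layer model -> three pixel matrices of the second size.
    channels = [layer2_stub(c) for c in channels]
    # S104: integrate the three pixel matrices into the super-resolution image.
    return np.stack(channels, axis=-1)
```

For a 4 × 6 RGB input and the assumed ×3 stub, `reconstruct` returns a 12 × 18 × 3 array, mirroring the first-size-in, second-size-out flow of the method.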
Specifically, an image to be detected must first be acquired. Such an image is typically a low-resolution picture taken by a camera or video camera; when its details need to be seen clearly, super-resolution image reconstruction must be performed on it.
Further, to make the image to be detected easier to process, it is subjected to graying processing, which turns its three channels into single channels; the grayed images are uniformly set to the first size, and the image to be detected at the first size is acquired. This image is then input into the first-layer deep learning model, which outputs three separate pixel matrices — the three-pixel matrix — still of the first size. The three-pixel matrix is next input into the second-layer deep learning model, which enlarges it from the first size to the much larger second size. Finally, the three-pixel matrix of the second size is integrated to obtain the super-resolution image.
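The bilinear interpolation used to bring each grayed channel to the first size can be sketched as below — a minimal NumPy implementation for a single 2-D pixel matrix, written here for illustration (the patent does not supply code, and real systems would typically call a library routine):

```python
import numpy as np

def bilinear_resize(img, out_h, out_w):
    """Resize a 2-D pixel matrix to (out_h, out_w) by bilinear interpolation."""
    in_h, in_w = img.shape
    # Map each output row/column index back to fractional input coordinates.
    ys = np.linspace(0, in_h - 1, out_h)
    xs = np.linspace(0, in_w - 1, out_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, in_h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, in_w - 1)
    wy = (ys - y0)[:, None]   # vertical interpolation weights, shape (out_h, 1)
    wx = (xs - x0)[None, :]   # horizontal interpolation weights, shape (1, out_w)
    # Blend the four neighbouring pixels of each output location.
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy
```

The same routine serves both the "expanding" and "reducing" cases mentioned above, since the coordinate mapping works for any target size.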
The first size is smaller than 1920 × 1080, and the second size is not smaller than 1920 × 1080; for example, the first size may be 640 × 480 and the second size 1920 × 1080.
It should be noted that the three-pixel matrix includes an R-channel pixel matrix, a G-channel pixel matrix, and a B-channel pixel matrix.
Based on the embodiment, the first layer of deep learning model comprises a first convolutional neural network, a second convolutional neural network and a third convolutional neural network which are connected in parallel, wherein the first convolutional neural network, the second convolutional neural network and the third convolutional neural network are respectively a 3-layer convolutional neural network; the first convolutional neural network outputs an R-channel pixel matrix of a first size, the second convolutional neural network outputs a G-channel pixel matrix of the first size, and the third convolutional neural network outputs a B-channel pixel matrix of the first size.
Further, the 3-layer convolutional neural network comprises a first convolutional layer, a second convolutional layer, and a third convolutional layer arranged in sequence; the convolution kernels of the first convolutional layer are 5 × 5, those of the second convolutional layer are 4 × 4, those of the third convolutional layer are 3 × 3, and each of the three convolutional layers has 64 convolution kernels.
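For the first-layer network to output a pixel matrix that keeps the first size, each convolution must use "same" padding. A quick check of the output-size arithmetic, assuming stride 1 and total zero-padding of k − 1 per layer (an assumption — the patent does not state the padding scheme):

```python
def conv_out(n, k, pad_total, stride=1):
    """Output length along one dimension of a convolution with kernel size k."""
    return (n - k + pad_total) // stride + 1

# First size e.g. 640 x 480; kernels 5x5, 4x4, 3x3 with total padding k - 1.
# The even 4x4 kernel needs asymmetric padding (e.g. 1 on one side, 2 on the other).
n = 480
for k in (5, 4, 3):
    n = conv_out(n, k, pad_total=k - 1)
print(n)  # -> 480: the spatial size is preserved through all three layers
```

Without padding the three layers would shrink each dimension by 4 + 3 + 2 = 9 pixels, contradicting the stated first-size output, which is why "same" padding is assumed here.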
Based on the above embodiment, the second layer deep learning network includes a fourth convolutional neural network, a fifth convolutional neural network, and a sixth convolutional neural network that are parallel; the fourth convolutional neural network, the fifth convolutional neural network and the sixth convolutional neural network are respectively 18 layers of convolutional neural networks; the fourth convolutional neural network outputs an R-channel pixel matrix of a second size, the fifth convolutional neural network outputs a G-channel pixel matrix of the second size, and the sixth convolutional neural network outputs a B-channel pixel matrix of the second size.
Further, the input of the fourth convolutional neural network is an R-channel pixel matrix of the first size, the input of the fifth convolutional neural network is a G-channel pixel matrix of the first size, and the input of the sixth convolutional neural network is a B-channel pixel matrix of the first size.
Further, each convolutional layer of the 18-layer convolutional neural network has 3 × 3 convolution kernels, 32 per layer; the 18-layer convolutional neural network has a residual structure whose residual (skip) connection runs from the input of the 18-layer network to its output.
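The input-to-output skip connection described above can be sketched as follows — a minimal mock-up in which a placeholder transform stands in for each real 3 × 3, 32-kernel convolutional layer (the placeholder and its 0.5 scaling are illustrative assumptions, not the patent's weights):

```python
import numpy as np

def conv_layer_stub(x):
    # Placeholder for one 3x3, 32-kernel convolutional layer (hypothetical transform).
    return 0.5 * x

def residual_cnn(x, depth=18):
    y = x
    for _ in range(depth):
        y = conv_layer_stub(y)
    # The single residual (skip) connection runs from the network input to its output.
    return x + y
```

With the skip connection, the stacked layers only need to learn a correction added to the input, which is what makes training an 18-layer network feasible; in this toy version the output differs from the input by just the small residual term 0.5¹⁸ · x.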
Fig. 2 is a schematic structural diagram of a super-resolution image reconstruction apparatus according to a preferred embodiment of the present invention, and as shown in fig. 2, an embodiment of the present invention provides a super-resolution image reconstruction apparatus including an acquisition module 201, a first processing module 202, a second processing module 203, and an integration module 204, wherein:
an obtaining module 201, configured to obtain an image to be detected;
the first processing module 202 is configured to perform graying processing on the image to be detected, set it to the first size, input the image to be detected at the first size into the first-layer deep learning model, and output a three-pixel matrix whose size remains the first size;
the second processing module 203 inputs the three-pixel matrix into the second layer deep learning model and outputs a three-pixel matrix with a second size, wherein the second size is larger than the first size;
and the integration module 204 integrates the three pixel matrixes with the second size to acquire a super-resolution image.
Fig. 3 is a schematic structural diagram of an electronic device for super-resolution image reconstruction according to a preferred embodiment of the present invention, and as shown in fig. 3, the present invention provides an electronic device for super-resolution image reconstruction, which includes a processor 301, a memory 302 and a bus 303;
the processor 301 and the memory 302 communicate with each other through the bus 303; the memory 302 stores program instructions executable by the processor 301, and the processor 301 invokes the program instructions to perform the following method:
setting the size of the image to be detected after the graying treatment as a first size, and acquiring the image to be detected with the first size;
inputting an image to be detected with a first size to a first layer deep learning model, and outputting a three-pixel matrix with the first size;
inputting the three-pixel matrix into a second layer deep learning model, and outputting a three-pixel matrix with a second size, wherein the second size is larger than the first size;
and integrating the three pixel matrixes with the second size to obtain the super-resolution image.
Embodiments of the present invention disclose a computer program product comprising a computer program stored on a non-transitory computer-readable storage medium, the computer program comprising program instructions which, when executed by a computer, enable the computer to perform the methods provided by the above-mentioned method embodiments, for example, including:
setting the size of the image to be detected after the graying treatment as a first size, and acquiring the image to be detected with the first size;
inputting an image to be detected with a first size to a first layer deep learning model, and outputting a three-pixel matrix, wherein the size of the three-pixel matrix keeps the first size;
inputting the three-pixel matrix into a second layer deep learning model, and outputting a three-pixel matrix with a second size, wherein the second size is larger than the first size;
and integrating the three pixel matrixes with the second size to obtain the super-resolution image.
Embodiments of the present invention provide a non-transitory computer-readable storage medium, which stores computer instructions, where the computer instructions cause the computer to perform the methods provided by the above method embodiments, for example, the methods include:
setting the size of the image to be detected after the graying treatment as a first size, and acquiring the image to be detected with the first size;
inputting an image to be detected with a first size to a first layer of deep learning model, and outputting a three-pixel matrix, wherein the size of the three-pixel matrix keeps the first size;
inputting the three-pixel matrix into a second layer deep learning model, and outputting a three-pixel matrix with a second size, wherein the second size is larger than the first size;
and integrating the three pixel matrixes with the second size to obtain the super-resolution image.
Those of ordinary skill in the art will understand that all or part of the steps of the method embodiments may be implemented by hardware driven by program instructions; the program may be stored in a computer-readable storage medium and, when executed, performs the steps of the method embodiments. The aforementioned storage media include ROM, RAM, magnetic disks, optical disks, and other media that can store program code.
The above-described embodiments of the apparatuses and devices are merely illustrative, where the units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the embodiments of the present invention. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment may be implemented by software plus a necessary general hardware platform, or by hardware alone. Based on this understanding, the above technical solutions, in essence or in the part contributing to the prior art, may be embodied as a software product stored in a computer-readable storage medium (such as ROM/RAM, a magnetic disk, or an optical disk) and including several instructions that cause a computer device (a personal computer, a server, a network device, or the like) to execute the method of the various embodiments or parts thereof.
The invention provides a super-resolution image reconstruction method and device, in which an image to be detected, set to a first size after graying processing, is processed by a first-layer deep learning model into a three-pixel matrix of the first size; a second-layer deep learning model then processes this into a clearer three-pixel matrix of a second size, and the three-pixel matrix of the second size is finally integrated to obtain the super-resolution image. The invention converts low-resolution images into high-resolution images quickly, is unaffected by object motion, achieves a high degree of reconstruction, and has a simple design.
Finally, the above is only a preferred embodiment of the present invention and is not intended to limit its scope. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall fall within its protection scope.
Claims (5)
1. A super-resolution image reconstruction method is characterized by comprising the following steps:
setting the size of the image to be detected after the graying treatment as a first size, and acquiring the image to be detected with the first size;
inputting the image to be detected with the first size into a first layer deep learning model, and outputting a three-pixel matrix with the first size;
inputting the three-pixel matrix into a second layer deep learning model, and outputting a three-pixel matrix with a second size, wherein the second size is larger than the first size;
integrating the three pixel matrixes with the second size to obtain a super-resolution image;
the method for acquiring the image to be detected with the first size by setting the size of the image to be detected after the graying processing as the first size comprises the following steps:
carrying out graying processing on the image to be detected to change the RGB three channels of the image to be detected into an R channel, a G channel and a B channel, and acquiring the image to be detected after the graying processing;
expanding or reducing the size of the image to be detected after the graying treatment to the first size by using a bilinear interpolation method;
the first layer of deep learning model comprises a first convolutional neural network, a second convolutional neural network and a third convolutional neural network which are connected in parallel, and the first convolutional neural network, the second convolutional neural network and the third convolutional neural network are respectively 3 layers of convolutional neural networks; the first convolutional neural network outputs an R-channel pixel matrix of a first size, the second convolutional neural network outputs a G-channel pixel matrix of the first size, and the third convolutional neural network outputs a B-channel pixel matrix of the first size;
the 3-layer convolutional neural network comprises a first convolutional layer, a second convolutional layer, and a third convolutional layer arranged in sequence; the convolution kernels of the first convolutional layer are 5 × 5 in size, those of the second convolutional layer are 4 × 4, those of the third convolutional layer are 3 × 3, and each of the three convolutional layers has 64 convolution kernels;
the second layer of deep learning network comprises a fourth convolutional neural network, a fifth convolutional neural network and a sixth convolutional neural network which are parallel; the fourth convolutional neural network, the fifth convolutional neural network and the sixth convolutional neural network are respectively 18 layers of convolutional neural networks; the fourth convolutional neural network outputs an R-channel pixel matrix of a second size, the fifth convolutional neural network outputs a G-channel pixel matrix of the second size, and the sixth convolutional neural network outputs a B-channel pixel matrix of the second size;
each convolutional layer of the 18-layer convolutional neural network has 3 × 3 convolution kernels, 32 per layer; the 18-layer convolutional neural network has a residual structure whose residual (skip) connection runs from the input of the 18-layer network to its output.
2. The super-resolution image reconstruction method according to claim 1, wherein the three-pixel matrix comprises an R-channel pixel matrix, a G-channel pixel matrix, and a B-channel pixel matrix.
3. A super-resolution image reconstruction apparatus, comprising:
the acquisition module is used for acquiring an image to be detected;
the first processing module is used for performing graying processing on the image to be detected, setting it to the first size, inputting the image to be detected at the first size into the first-layer deep learning model, and outputting a three-pixel matrix of the first size;
the second processing module inputs the three-pixel matrix into a second layer deep learning model and outputs a three-pixel matrix with a second size, wherein the second size is larger than the first size;
the integration module integrates the three-pixel matrix with the second size to obtain a super-resolution image;
the obtaining module is specifically configured to:
carrying out graying processing on the image to be detected to change the RGB three channels of the image to be detected into an R channel, a G channel and a B channel, and acquiring the image to be detected after the graying processing;
expanding or reducing the size of the image to be detected after the graying treatment to the first size by using a bilinear interpolation method;
the first layer of deep learning model comprises a first convolutional neural network, a second convolutional neural network and a third convolutional neural network which are connected in parallel, and the first convolutional neural network, the second convolutional neural network and the third convolutional neural network are respectively 3 layers of convolutional neural networks; the first convolutional neural network outputs an R-channel pixel matrix of a first size, the second convolutional neural network outputs a G-channel pixel matrix of a first size, and the third convolutional neural network outputs a B-channel pixel matrix of a first size;
the 3-layer convolutional neural network comprises a first convolutional layer, a second convolutional layer, and a third convolutional layer arranged in sequence; the convolution kernels of the first convolutional layer are 5 × 5 in size, those of the second convolutional layer are 4 × 4, those of the third convolutional layer are 3 × 3, and each of the three convolutional layers has 64 convolution kernels;
the second layer of deep learning network comprises a fourth convolutional neural network, a fifth convolutional neural network and a sixth convolutional neural network which are parallel; the fourth convolutional neural network, the fifth convolutional neural network and the sixth convolutional neural network are respectively 18 layers of convolutional neural networks; the fourth convolutional neural network outputs an R-channel pixel matrix of a second size, the fifth convolutional neural network outputs a G-channel pixel matrix of the second size, and the sixth convolutional neural network outputs a B-channel pixel matrix of the second size;
the size of convolution kernels of each convolution layer of the 18 layers of convolution neural networks is 3 x 3, and the number of the convolution kernels is 32; the 18-layer convolutional neural network is a convolutional neural network with a residual structure, and the residual of the convolutional neural network with the residual structure is connected from the input to the output of the 18-layer convolutional neural network.
4. An electronic device for super-resolution image reconstruction, comprising:
the processor and the memory are communicated with each other through a bus; the memory stores program instructions executable by the processor, the processor invoking the program instructions to be capable of performing the method of claim 1 or 2.
5. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to claim 1 or 2.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810918370.1A CN109102463B (en) | 2018-08-13 | 2018-08-13 | Super-resolution image reconstruction method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109102463A CN109102463A (en) | 2018-12-28 |
CN109102463B true CN109102463B (en) | 2023-01-24 |
Family
ID=64849380
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810918370.1A Active CN109102463B (en) | 2018-08-13 | 2018-08-13 | Super-resolution image reconstruction method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109102463B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110659119B (en) | 2019-09-12 | 2022-08-02 | 浪潮电子信息产业股份有限公司 | Picture processing method, device and system |
CN114066722B (en) * | 2021-11-03 | 2024-03-19 | 抖音视界有限公司 | Method and device for acquiring image and electronic equipment |
CN114549328B (en) * | 2022-04-24 | 2022-07-22 | 西南财经大学 | JPG image super-resolution restoration method, computer readable storage medium and terminal |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2014033415A (en) * | 2012-08-06 | 2014-02-20 | Canon Inc | Image processor, image processing method and imaging apparatus |
CN105046672A (en) * | 2015-06-30 | 2015-11-11 | 北京工业大学 | Method for image super-resolution reconstruction |
CN107358575A (en) * | 2017-06-08 | 2017-11-17 | 清华大学 | Single-image super-resolution reconstruction method based on a deep residual network |
CN108090870A (en) * | 2017-12-13 | 2018-05-29 | 苏州长风航空电子有限公司 | Infrared-image super-resolution reconstruction method based on transform self-similarity |
CN108259997A (en) * | 2018-04-02 | 2018-07-06 | 腾讯科技(深圳)有限公司 | Image correlation process method and device, intelligent terminal, server, storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110136066B (en) | Video-oriented super-resolution method, device, equipment and storage medium | |
CN108022212B (en) | High-resolution picture generation method, generation device and storage medium | |
US10311547B2 (en) | Image upscaling system, training method thereof, and image upscaling method | |
CN111080541B (en) | Color image denoising method based on bit layering and attention fusion mechanism | |
CN110163237B (en) | Model training and image processing method, device, medium and electronic equipment | |
KR100380199B1 (en) | A dwt-based up-sampling algorithm suitable for image display in an lcd panel | |
CN111402139B (en) | Image processing method, apparatus, electronic device, and computer-readable storage medium | |
CN109102463B (en) | Super-resolution image reconstruction method and device | |
CN111784570A (en) | Video image super-resolution reconstruction method and device | |
RU2697928C1 (en) | Superresolution of an image imitating high detail based on an optical system, performed on a mobile device having limited resources, and a mobile device which implements | |
CN110428382B (en) | Efficient video enhancement method and device for mobile terminal and storage medium | |
CN110062282A (en) | A kind of super-resolution video method for reconstructing, device and electronic equipment | |
US9697584B1 (en) | Multi-stage image super-resolution with reference merging using personalized dictionaries | |
CN113781320A (en) | Image processing method and device, terminal equipment and storage medium | |
WO2023010750A1 (en) | Image color mapping method and apparatus, electronic device, and storage medium | |
CN113939845A (en) | Method, system and computer readable medium for improving image color quality | |
Karch et al. | Adaptive Wiener filter super-resolution of color filter array images | |
Zhang et al. | Temporal compressive imaging reconstruction based on a 3D-CNN network | |
US20230060988A1 (en) | Image processing device and method | |
CN113658050A (en) | Image denoising method, denoising device, mobile terminal and storage medium | |
CN113658046B (en) | Super-resolution image generation method, device, equipment and medium based on feature separation | |
CN111383171B (en) | Picture processing method, system and terminal equipment | |
CN114119377A (en) | Image processing method and device | |
JP2016082452A (en) | Image processing apparatus and image processing method | |
CN114998138B (en) | High dynamic range image artifact removal method based on attention mechanism |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
TA01 | Transfer of patent application right |
Effective date of registration: 2023-01-11
Applicant after: SUZHOU FEISOU TECHNOLOGY Co.,Ltd., unit 2-b702, creative industry park, 328 Xinghu street, Suzhou Industrial Park, Jiangsu Province, 215000
Applicant before: BEIJING FEISOU TECHNOLOGY CO.,LTD., Room 1216, 12/F, Beijing Beiyou science and technology and cultural exchange center, 10 Xitucheng Road, Haidian District, Beijing, 100876