WO2021048863A1 - Methods and systems for super resolution for infra-red imagery - Google Patents

Methods and systems for super resolution for infra-red imagery

Info

Publication number
WO2021048863A1
Authority
WO
WIPO (PCT)
Prior art keywords
layer
output
input
convolution
image
Prior art date
Application number
PCT/IL2020/051004
Other languages
French (fr)
Inventor
Navot OZ
Iftach Klapp
Nir SOCHEN
Original Assignee
The State Of Israel, Ministry Of Agriculture & Rural Development, Agricultural Research Organization (Aro) (Volcani Center)
Ramot At Tel-Aviv University Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by The State Of Israel, Ministry Of Agriculture & Rural Development, Agricultural Research Organization (Aro) (Volcani Center), Ramot At Tel-Aviv University Ltd. filed Critical The State Of Israel, Ministry Of Agriculture & Rural Development, Agricultural Research Organization (Aro) (Volcani Center)
Priority to US17/641,861 priority Critical patent/US20220335571A1/en
Priority to CN202080077962.0A priority patent/CN114641790A/en
Priority to EP20862956.8A priority patent/EP4028984A4/en
Publication of WO2021048863A1 publication Critical patent/WO2021048863A1/en
Priority to IL291157A priority patent/IL291157A/en


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4053Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/048Activation functions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/06Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/063Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4046Scaling of whole images or parts thereof, e.g. expanding or contracting using neural networks

Definitions

  • the present invention generally relates to image processing, and in particular, it concerns generating high-resolution (HR) images from low-resolution (LR) images.
  • Infra-Red imagery is a result of sensing electromagnetic radiation emitted or reflected from a given target surface in the infrared bandwidth of the electromagnetic spectrum (approximately 0.72 to 12 microns). Images produced via current IR uncooled technology suffer from low-resolution, thus reducing the usefulness of these LR images.
  • Super-resolution imaging (SR) is a class of techniques that enhance (increase) the resolution of an imaging system, for example, recovering or generating a high-resolution image from one or more low-resolution input images.
  • Color digital images are composed of pixels; a color pixel is typically composed of a cluster of four sub-pixels (red, green type 1, green type 2, and blue), such that pixels are made of combinations of primary colors represented by a series of codes (numerical values). Each color is referred to as a channel.
  • a grayscale image has just one channel.
  • YUV images are an affine transformation of the RGB color space, which originated in broadcasting. The Y channel correlates approximately with perceived intensity, while the U and V channels provide color information.
  • a method for generating high-resolution images from low-resolution images using a deep neural network approach for low-power devices can be implemented in general with an artificial neural network (ANN) and more specifically with a convolutional neural network (CNN).
  • Embodiments include generating super-resolution (SR) images using low-power devices to enhance the ability for early detection, for example, in agriculture for phenotype identification, irrigation monitoring and early detection of disease in plants.
  • Resolution can depend on the application, for example, LR may be less than 160 x 120 pixels (19,600 pixels) and high (HR) and super (SR) may be 640 x 480 (307,200 pixels) or more.
  • Some methods are based on deep learning, where many of the calculations are done in the low-resolution (LR) domain. The results of each layer are aggregated together to allow better flow of information through the network.
  • Embodiments achieve results using depthwise-separable convolution with roughly 200K multiply-add computations (MACs), while contemporary convolutional neural network (CNN) based SR algorithms require around 1500K MACs.
  • embodiments improve the functioning of computational devices, for example, by increasing power efficiency (decreasing power usage, cost) and increasing speed of computation (decreasing run-time).
  • Embodiments also, for example, improve metrics of estimation (e.g. peak signal-to-noise ratio PSNR, structural similarity index measure SSIM).
  • Embodiments combine both increased quality and lower complexity, as compared to conventional implementations, so embodiments can be implemented on low-power devices. As a result, a new deep learning SR scheme for images is presented.
  • the method is operable, for example, embodiments have been successfully used with real agricultural images.
  • Embodiments provide methods to perform SR using only a single IR image, while balancing the metric quality of a super-resolution image, designated ISR, against the low-power requirements posed by the hardware of the IR cameras.
  • the computational complexity of the present invention is considerably lower than similar networks.
  • a network uses a bottleneck layer from Kim et al. (2016)[12] combined with dense skip connections of Tong et al. (2017)[19] to preserve the high-quality performance of a deep network, with only a small portion of the required computation power.
  • Calculations of the invention can be performed on the LR space to save computational costs, and the upscale to HR can be done, for example, using techniques from Shi et al. (2016)[17]. Results show that only a handful of skip-connections suffice.
  • To further lower computational complexity, depthwise-separable convolution can be used, for example from Chollet (2017) [6].
  • a system for image processing including: a processing system containing one or more processors, and an artificial neural network including: an input layer including a memory location for storing an input image, one or more convolution layers, wherein the input layer is connected to a first convolution layer of the convolution layers, and an output layer connected to a last convolution layer of the convolution layers and including a memory location for storing an output image, wherein the layers include instructions for execution on the processing system, the input image is input to the input layer and to at least one of the convolution layers, an initial output of the input layer is input to at least one of the convolution layers, and a layer output of at least one of the convolution layers is input to at least one subsequent convolution layer.
  • the processors are configured to execute instructions programmed using a predefined set of machine codes and the layers include computational instructions implemented in the machine codes of the processor.
  • the input image is a low-resolution image and the output image is a super-resolution image.
  • each of at least one of the convolution layers includes: a respective convolution module accepting data input to the respective convolution layer, a respective activation function processing output data from the respective convolution module, and a respective bottleneck layer processing output data from the respective activation function.
  • the input image and the initial output are input to the bottleneck layer, and the bottleneck layer generates the layer output.
  • the input image is input to each of the convolution layers.
  • the initial output is input to each of the convolution layers.
  • the layer output is input to each subsequent convolution layer.
  • the output layer includes: a shuffleblock receiving the layer output of the last convolution layer and the input image and generating a shuffle-block output that is a higher resolution than the input image and the layer output, an interpolation module receiving the input image and generating an interpolated image that is higher resolution than the input image, and a final convolution receiving the shuffle-block output and the interpolated image and generating the output image.
  • the network is trained with a training set based on high-resolution images and corresponding low-resolution images.
  • a method of training the network of claim 1 including the steps of: receiving one or more sets of high-resolution images, applying one or more transformations to at least a subset of the sets of high-resolution images to generate at least one associated set of low-resolution images, creating a training set including the one or more sets of high-resolution images and the at least one associated set of low-resolution images, and training the network using the training set.
  • a method for image processing including the steps of: configuring an artificial neural network based on a training set of high-resolution images and corresponding low-resolution images, and inputting an input image to an input layer and to at least one convolution layer, generating an initial output from the input layer based on the input image and sending the initial output to at least a first convolutional layer of the convolution layers, and generating a current layer output of at least one of the convolution layers based on the input image, the initial output and any previous layer outputs, and sending the current layer output to at least one subsequent convolution layer, and generating an output image by an output layer based on the layer output of a last convolutional layer of the convolutional layers and the input image.
  • a computer usable non-transitory storage medium having a computer program embodied thereon for causing a suitably programmed system to process images, by performing the steps of claim 12 when such program is executed on the system.
  • FIG. 1 a sketch of a convolution neural network that can be used to implement embodiments of the current invention.
  • FIG. 2 a sketch of a shuffle block.
  • FIG. 3A to 3D photographs illustrating the final layer output process.
  • FIG. 4 and FIG. 5 images of SR results.
  • FIG. 6 zoomed-in examples.
  • FIG. 7 a high-level partial block diagram of an exemplary system configured to implement the network.
  • FIG. 8A to FIG. 8D tables of experimental results of different datasets.
  • FIG. 9 results of the modulation transfer function (MTF) of the embodiments.
  • Bottleneck Layer containing fewer nodes compared to the previous layers. Can be used to obtain a representation with reduced dimensionality. Used as a learning layer giving a significant coefficient for the processed data. Can be used to represent data in a different subspace.
  • Ch The number of channels for each layer of the network. Also known as features.
  • f_l Output of the l-th convolution module. The number of filters in both input and output is Ch for all l.
  • I Image.
  • ILR The low-resolution input image. Dimensions are H x W.
  • IHR The high-resolution label image. Used to teach the network how to create ISR. Dimensions are aH x aW.
  • ISR A super-resolved version of ILR. Its dimensions are aH x aW.
  • L The overall number of layers in the network.
  • l, l-n, L-n A layer in the network; the n-th layer in the network.
  • PReLU Parametric rectified linear activation unit implements a rectified linear activation function, a piecewise linear function that will output the input directly if it is positive, otherwise, it will output a value corresponding to a learned parameter. Used as an exemplary, typical implementation of an activation function.
  • The number of filters in the input is l·Ch.
  • The number of filters in the output is Ch for all l.
  • Each filter has 1 x 1 spatial dimensions.
  • Each filter has 3 x 3 spatial dimensions.
  • a present embodiment is a system and method for generating high-resolution images from low-resolution images.
  • the low-resolution images (for example the input low-resolution image ILR) can be IR images, or other images, such as those listed below.
  • Other embodiments are contemplated as well. For example, work has been done in the 7.5-14 micron range.
  • Embodiments have already been demonstrated, and can solve real-world problems, for example improving detection of diseases and irrigation deficits in crops using low-power IR cameras. Embodiments can be used in real-time, with low-power devices, in field-conditions suitable for agriculture and environmental uses.
  • An artificial neural network for processing low-resolution images to generate super resolution images includes feed-forward connections between layers.
  • the network includes an input layer, one or more convolution layers, wherein the input layer is connected to a first convolution layer of the convolution layers, and an output layer connected to a last convolution layer of the convolution layers.
  • An input image is input to the input layer and to at least one of the convolution layers, an initial output of the input layer is input to at least one of the convolution layers, and a layer output of at least one of the convolution layers is input to at least one subsequent convolution layer.
  • the training was done on the DIV2K dataset Agustsson et al. [1] and Flickr2K disclosed in Timofte, et al. [18].
  • the images in these datasets have a resolution of 2K, so each image contains fine details.
  • the training set is processed and preferably each image is transformed into a lower resolution image, for example each image is down-sampled using bi-cubic interpolation.
  • the training is done on the Y channel because of the proportionality between temperature and pixel intensity shown below.
  • the training results are evaluated on Set5 Bevilacqua et al. [3], Set14 Zeyde et al. [21] and Urban100 from Huang et al. [11].
  • the metrics used are peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM). Both metrics were calculated between generated super-resolution images ISR and high-resolution images IHR using compare_psnr() and compare_ssim() from the skimage library in Python. The borders of the images were cropped by 10 pixels on each side to neglect border effects.
  • test sets of different plants were gathered using Therm-App TH Infra-Red camera [23] at mid-day. See below in reference to FIG. 8A to FIG. 8D, for information and results on a cucumber test set.
  • Thermal images tend to be noisy.
  • the characteristic noise in the IR images was analyzed and found to be Gaussian distributed with varying means and variances. To provide better super resolution estimations, the training was done in two stages: first using down sampled images versus their high-resolution source, and second by injecting the characteristic noise to the down sampled images versus their high-resolution source.
  • a feature of embodiments is training the network to ignore noise in the input images.
  • the filters are adjusted to notice (only) significant features in the images.
  • FIG. 1 a sketch of a convolution neural network 100 that can be used to implement embodiments of the current invention.
  • a low-resolution IR image denoted ILR is propagated through L layers in the network 100 and the resulting output of the network 100 is a super resolution (SR) image ISR, an approximation of a high-resolution image IHR.
  • the convolution neural network 100 decomposes ILR into Ch filters. Each layer of the network 100 has Ch channels (also known as features).
  • the super resolution scale is denoted a. In the current description the network 100 is trained to achieve an upscale factor for the super resolution a ∈ {2, 4}.
  • the network has one initial convolution layer L-IN for the input, L convolution layers that are concatenated together, and one more final convolution layer L-OUT for the output; all in all, (2+L) convolutions and L bottleneck layers. While the intermediate, or hidden, layers are referred to as "convolutional layers" (being L in number), convolutions are not limited to being implemented only in the intermediate layers, and convolutions can also be done in other locations, for example, in the input L-IN and the output L-OUT layers.
  • the initial convolution layer L-IN is used to cast the low-resolution input image ILR into an initial feature space.
  • the output of each convolution LCON module of layer l is fed to a non-linear activation function, applied elementwise to the result.
  • a non-limiting implementation of the activation function uses PReLU.
  • the result from the activation function PReLU is aggregated via concatenation with the outputs of the previous layers and with the input image ILR.
  • the concatenated matrix goes through a bottleneck layer LB which outputs Ch filters.
  • LB-n where “n” is an integer denoting the layer number
  • the bottleneck layer LB is different from a pooling layer, giving significant features based on data intrinsic to the image itself. In part, this feature of the bottleneck layer LB saves energy in the system (network) as output of the bottleneck layer LB will only have the most significant features of the respective layer (processing of the layer, which may include inputs from previous layers).
  • the bottleneck layer LB is typically a learning layer, trained to give only the most significant coefficients in regards to a feature space.
  • the bottleneck layer LB can process input information and generate a representation in a different subspace. In part, the bottleneck layer LB helps keep (number of) features low, by choosing which features are most significant.
  • the network is composed of L convolution modules, each in a corresponding convolution layer, that can be described as follows: f_l = Φ(Θ_l * S_(l-1)), l ∈ {2, ..., L} Equation (2)
  • Depthwise-separable convolution modules as proposed by Chollet (2017) [6] can be used to lower computational cost.
  • An exemplary usage of depthwise-separable convolution is described below.
  • the shuffle block L-SB includes a convolution layer 202 and a pixel shuffler 204.
  • the upscale from ILR to ISR is performed in the Shuffle Block L-SB, producing a shuffle-block output 114 that is a higher resolution than the input image ILR and the layer output (S_l). This method is described in Shi, et al. (2016) [17].
  • the final layer L-OUT of the network 100 includes a final convolution L-FIN with Ch + 1 filters as input.
  • the extra channel is a high-resolution image generated from the low-resolution input image ILR.
  • One exemplary implementation for generating the extra channel is to use a bi-cubic interpolation 112 of the input low-resolution image ILR to generate an extra channel high-resolution interpolation 116.
  • the bi-cubic interpolation 112 inputs low-resolution data (the low-resolution image ILR) and spreads the low-resolution information across the spatial domain to generate high-resolution data (a high-resolution image, interpolation 116).
  • This high-resolution interpolation 116 contains only low-resolution information.
  • the high-resolution interpolation 116 (high-resolution image) is concatenated to the shuffle-block output 114 before going through the final convolution L-FIN.
  • the output of the shuffle-block 114 contains the high-resolution information. This concatenation and convolution enables the network 100 to learn only the high-resolution difference between ILR and IHR.
  • the final layer L-OUT outputs a single channel 118 of a super-resolution image ISR, without an activation function.
  • the network 100 learns high frequency, significant features, and then combines this learning with processing of low-resolution images.
  • Each layer can be trained to find different aspects in an image.
  • the first layer, layer-1 (L-1), may be trained (weights of the convolution matrix weighted) to find edges
  • the second layer, layer-2 (L-2), may be trained to find circles in the LR images.
  • FIG. 3A to 3D photographs illustrating the final layer output process.
  • FIG. 3A is a low-resolution image.
  • FIG. 3B shows the Bi-cubic interpolation 116 of the low-resolution image ILR.
  • FIG. 3C shows the "high-resolution" information 114 learned by the main network pipeline (L). These interpolation data 116 and the "high-resolution” data 114 are summed up in the final layer convolution L-FIN. In the training process, the result of this summation is minimized to resemble the high-resolution IHR ground truth image FIG. 3D.
  • the input low-resolution image ILR has dimensions H x W with 1 channel.
  • the channel represents the object's temperature in the low-resolution image ILR.
  • the ILR can be standardized to the range (0, 1), for example by min-max scaling, I ← (I − min I) / (max I − min I).
  • Network training 120 can be done by minimizing the error between a ground truth HR (high-resolution) image IHR and a network output (SR image) ISR.
  • an absolute mean error known as the L1 norm, which is robust to outliers, is applied between ISR and IHR in the pixel domain:
  • L(Θ) = (1 / (aH · aW)) Σ_(x,y) |IHR(x,y) − ISR(x,y)| Equation (4), where H, W are height and width respectively.
  • Θ are the learned weights of the network. A list of parameters is provided above in the section "ABBREVIATIONS AND DEFINITIONS".
  • Bottleneck layers LB are a 1x1 convolution where the number of output filters is Ch. This process was described in Bishop (2006)[4] and used by Shelhamer et al. (2017)[16].
  • the bottleneck layer LB has several effects. For example, the bottleneck layer LB helps mitigate vanishing gradients. In another example, the most important features are chosen using the computationally efficient and parameter-conservative bottleneck layer, so operations in other convolution layers are always applied only to Ch channels.
  • the Stefan-Boltzmann equation formulates the relation between temperature of a surface to irradiance of the surface.
  • a typical outdoor temperature, e.g. 280-320 K
  • the target and the ambient temperature are similar, such that the change in radiation power in this range can be approximated as linearly dependent on the change of the body temperature relative to the ambient temperature; to first order in the Stefan-Boltzmann law, ΔP ≈ 4σT_amb³ · ΔT. Equation (5)
  • the IR radiation associated with the object temperature is concentrated by the camera's lens on the camera's detector. By heating the pixels, the concentrated IR radiation changes the microbolometers' resistance, which in turn linearly changes the pixels' readings.
  • the resulting grey scale presentation of the scene is thus assumed to be linearly related to the object temperature.
  • A multiply-accumulate operation (MAC) is defined as a single multiplication and a single addition operation. Equation (6) is a dot-product, a · b = Σ_(i=1..n) a_i b_i, which takes n MAC operations. Note that in terms of floating-point operations (FLOPs), there are 2n−1 operations for a dot-product.
  • the first and last layers are typically standard convolution layers, but the other layers can be depthwise-separable convolutions.
  • For brevity, denote the number of input and output channels as C_in = C_out = Ch.
  • #ShuffleBlock = a² × H × W × K² × Ch², where a is the upscale factor of the output.
  • Comparing the number of MACs for L convolution layers with bottlenecks against the number of MACs for L depthwise-separable convolution layers with bottlenecks, the factor between the number of MACs performed by the depthwise-separable implementation and by the standard convolution implementation is approximately x ≈ 1/Ch + 1/K². Equation (7), with x as a reduction factor.
  • Bias terms and PReLU are neglected for brevity, as each adds only on the order of C_out MACs per output pixel, which is negligible.
  • the training module 120 includes a variety of processing and functions for inputting training data, processing and preparing the training data, configuring the initial system, running the training, etc.
  • Low-resolution (LR) images, and images with noise, present significant problems for determining high-resolution features in an imaged location. While LR images may be readily available from a variety of sources, or low-cost to acquire (via low-cost equipment, compared to equipment for collecting high-resolution images), or be required to be captured using low-power devices (often inherently LR), a problem is how to extract significant features from these LR images.
  • conventional processing of LR images to generate high-resolution (HR) images is typically high cost, long time, and high power, compared to processing lower resolution images.
  • various elements of embodiments are trained to process and extract significant features from LR images, thus improving the operation of the computational systems on which embodiments run, reducing cost, time, and power consumption, compared to conventional techniques for processing HR images and existing techniques for processing LR images.
  • An exemplary network 100 was implemented using Paszke et al. [15]. The mini-batch size was set to 16. Each image was cropped randomly to 192 x 192 to create high-resolution images IHR, and then the high-resolution images IHR were down-scaled with a bi-cubic kernel by x2 or x4 to create low-resolution images ILR for training the network 100. The training dataset was augmented with horizontal flips and 90 degree rotations. All image processing was done using the Python PIL image library.
  • All network trainable weights are initialized via the method proposed by He, et al. (2015)[9], with a scaling factor of 0.1 as proposed by Wang, et al. (2016) [20].
  • the learning rate was halved at 10^4 and 10^5 iterations.
  • the training ran for 3 × 10^5 iterations.
  • the training was done using an NVIDIA 2080ti GPU. Each permutation of the network was trained for 300k iterations (a minimal sketch of this training procedure appears after this list).
  • a method of the invention was evaluated on a database composed of 9630 outdoor IR images of four groups: cucumbers and banana leaves, in the wild and in a greenhouse. Performance was compared in terms of temperature restoration, PSNR, and MACs against other previously suggested state-of-the-art SR networks.
  • the average results for the four groups, A) cucumber in greenhouse, B) cucumber in wild, C) banana leaves in greenhouse, and D) banana leaves in wild, are presented.
  • each sub-table is composed of seven rows.
  • Rows 1-3 present different implementations of the network.
  • Rows 4-7 present the performances of three previously suggested SR networks (SRCNN Freeman et al. [8], SRDenseNet Wang et al. [20], and VDSR Kingma et al. [13]) and Bi-Cubic interpolation.
  • the order of the rows is repeated through the sub-tables. Observing the results, the network outperforms SRCNN Freeman et al. [8], SRDenseNet Wang et al. [20], and Bi-Cubic interpolation both in restoration quality and with lower MACs, while VDSR Kingma et al. [13] achieves slightly better restoration only at a much higher computational cost (see below).
  • zoomed-in examples of 4x SR: a) is the low-resolution image
  • b) is the Bi-Cubic interpolation results
  • c) is the SR results of VDSR
  • d) is the SR results of the method x4.
  • embodiments have solved, and can solve, real-world problems, for example improving detection of diseases in crops using low-power IR cameras.
  • Embodiments can be used in real-time, with low-power devices, in field-conditions suitable for agriculture and environmental uses.
  • the restoration metrics are on-par with state-of-the-art methods in terms of PSNR, SSIM and temperature estimation, while requiring 4-30 times fewer MACs.
  • VDSR by Kim et al. (2016)[12] achieved the best estimation results, which were only roughly 1 dB (3% relative improvement) and 0.022 °C better than the method of the current embodiment, but with x28 the computational complexity.
  • FIG. 5 shows an enlarged comparison between ILR, Bi-Cubic interpolation, ISR and VDSR Kingma et al. [13].
  • Results of the current method are sharper and look better than other results including VDSR. A reason may include the propagation of features from all layers throughout the network using bottleneck layers.
  • VDSR is trained using the minimization of L2 norm, which improves the PSNR but tends to produce blurry results.
  • the method of the current embodiment provides a suitable solution in both quality and complexity.
  • FIG. 9 a graph of results of the embodiment on the modulation transfer function (MTF) of an IR camera.
  • the MTF gives a notion of the resolution of a given imaging system.
  • the embodiment offers an improvement of x4 in the cutoff frequency of the imaging system: the sampling resolution of the LR image is 0.4 mm, and the embodiment gives a true x4 improvement to 0.1 mm (as seen in the current figure).
  • the embodiment offers significant improvement over the diffraction limited MTF of a circular aperture (as seen in the current figure).
  • the processing that the network 100 performs on a given data set is typically not pre-programmed and may vary depending on dynamic factors, such as a time at which the input data set is processed and which other input data sets were previously processed.
  • the current network 100 is a carefully designed framework that, in part, uses algorithms. That is, some algorithms may be used as building blocks for the network 100 framework, within which the system will itself learn its own operation parameters.
  • FIG. 7 is a high-level partial block diagram of an exemplary system 600 configured to implement the network 100 of the present invention.
  • System (processing system) 600 includes a processor 602 (one or more) and four exemplary memory devices: a random access memory (RAM) 604, a boot read only memory (ROM) 606, a mass storage device (hard disk) 608, and a flash memory 610, all communicating via a common bus 612.
  • processing and memory can include any computer readable medium storing software and/or firmware and/or any hardware element(s) including but not limited to field programmable logic array (FPLA) element(s), hard-wired logic element(s), field programmable gate array (FPGA) element(s), graphics processing unit (GPU), and application- specific integrated circuit (ASIC) element(s).
  • the processor 602 is formed of one or more processors, for example, hardware processors, including microprocessors, for performing functions and operations detailed herein.
  • the processors are, for example, conventional processors, such as those used in servers, computers, and other computerized devices.
  • processors may include x86 processors from AMD and Intel, Xeon® and Pentium® processors from Intel, as well as any combinations thereof.
  • Any instruction set architecture may be used in processor 602 including but not limited to reduced instruction set computer (RISC) architecture and/or complex instruction set computer (CISC) architecture.
  • a module (processing module, neural network node or layer) 614 is shown on mass storage 608, but as will be obvious to one skilled in the art, could be located on any of the memory devices.
  • Mass storage device 608 is a non-limiting example of a non-transitory computer-readable storage medium bearing computer-readable code for implementing the image processing methodology described herein.
  • Other examples of such non-transitory computer-readable storage media include read-only memories such as CDs bearing such code.
  • System 600 may have an operating system stored on the memory devices, the ROM may include boot code for the system, and the processor may be configured for executing the boot code to load the operating system to RAM 604, executing the operating system to copy computer-readable code to RAM 604 and execute the code.
  • Network connection 620 provides communications to and from system 600.
  • a single network connection provides one or more links, including virtual connections, to other devices on local and/or remote networks.
  • system 600 can include more than one network connection (not shown), each network connection providing one or more links to other devices and/or networks.
  • System 600 can be implemented as a server or client respectively connected through a network to a client or server.
  • Modules are preferably implemented in software, but can also be implemented in hardware and firmware, on a single processor or distributed processors, at one or more locations.
  • the above-described module functions can be combined and implemented as fewer modules or separated into sub-functions and implemented as a larger number of modules. Based on the above description, one skilled in the art will be able to design an implementation for a specific application.
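The training procedure outlined in the bullets above (minimizing the L1 error of Equation (4), He initialization scaled by 0.1, learning rate halved at 10^4 and 10^5 iterations, 3 × 10^5 iterations in total) can be sketched as follows. This is a minimal PyTorch-style sketch under stated assumptions: the model, data loader, initial learning rate, and device names are illustrative, not taken from the source.

```python
import torch
import torch.nn as nn

def init_weights(module, scale=0.1):
    # He initialization (He et al. [9]) scaled by 0.1 (Wang et al. [20]), as described above.
    if isinstance(module, nn.Conv2d):
        nn.init.kaiming_normal_(module.weight, nonlinearity='relu')
        module.weight.data.mul_(scale)
        if module.bias is not None:
            nn.init.zeros_(module.bias)

def train(model, loader, iterations=300_000, lr=1e-4, device='cuda'):
    model = model.to(device)
    model.apply(init_weights)
    criterion = nn.L1Loss()  # absolute mean error between I_SR and I_HR (Equation (4))
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    # Halve the learning rate at 1e4 and 1e5 iterations, as stated above.
    scheduler = torch.optim.lr_scheduler.MultiStepLR(
        optimizer, milestones=[10_000, 100_000], gamma=0.5)
    step = 0
    while step < iterations:
        for lr_img, hr_img in loader:  # (I_LR, I_HR) pairs; the second stage adds noise to I_LR
            sr_img = model(lr_img.to(device))
            loss = criterion(sr_img, hr_img.to(device))
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            scheduler.step()
            step += 1
            if step >= iterations:
                break
    return model
```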

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Neurology (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

An artificial neural network for processing low-resolution images to generate super-resolution images includes feed-forward connections between layers. The network includes an input layer, one or more convolution layers, wherein the input layer is connected to a first convolution layer of the convolution layers, and an output layer connected to a last convolution layer of the convolution layers. An input image is input to the input layer and to at least one of the convolution layers, an initial output of the input layer is input to at least one of the convolution layers, and a layer output of at least one of the convolution layers is input to at least one subsequent convolution layer.

Description

Methods and Systems for Super Resolution for Infra-Red Imagery
FIELD OF THE INVENTION
The present invention generally relates to image processing, and in particular, it concerns generating high-resolution (HR) images from low-resolution (LR) images.
BACKGROUND OF THE INVENTION
Infra-Red (IR) imagery is a result of sensing electromagnetic radiation emitted or reflected from a given target surface in the infrared bandwidth of the electromagnetic spectrum (approximately 0.72 to 12 microns). Images produced via current IR uncooled technology suffer from low-resolution, thus reducing the usefulness of these LR images.
Super-resolution imaging (SR) is a class of techniques that enhance (increase) the resolution of an imaging system, for example, recovering or generating a high-resolution image from one or more low-resolution input images.
Color digital images are composed of pixels; a color pixel is typically composed of a cluster of four sub-pixels (red, green type 1, green type 2, and blue), such that pixels are made of combinations of primary colors represented by a series of codes (numerical values). Each color is referred to as a channel. For example, an image from a standard digital camera will have red, green and blue channels (RGB). A grayscale image has just one channel. YUV images are an affine transformation of the RGB color space, which originated in broadcasting. The Y channel correlates approximately with perceived intensity, while the U and V channels provide color information.
SUMMARY
According to the teachings of the present embodiment there is provided a method for generating high-resolution images from low-resolution images using a deep neural network approach for low-power devices. The embodiment can be implemented in general with an artificial neural network (ANN) and more specifically with a convolutional neural network (CNN). Embodiments include generating super-resolution (SR) images using low-power devices to enhance the ability for early detection, for example, in agriculture for phenotype identification, irrigation monitoring and early detection of disease in plants.
Resolution can depend on the application, for example, LR may be less than 160 x 120 pixels (19,600 pixels) and high (HR) and super (SR) may be 640 x 480 (307,200 pixels) or more.
Some methods are based on deep learning, where many of the calculations are done in the low-resolution (LR) domain. The results of each layer are aggregated together to allow better flow of information through the network.
Embodiments achieve results using depthwise-separable convolution with roughly 200K multiply-add computations (MACs), while contemporary convolutional neural network (CNN) based SR algorithms require around 1500K MACs. Thus, embodiments improve the functioning of computational devices, for example, by increasing power efficiency (decreasing power usage, cost) and increasing speed of computation (decreasing run-time). Embodiments also, for example, improve metrics of estimation (e.g. peak signal-to-noise ratio PSNR, structural similarity index measure SSIM). Embodiments combine both increased quality and lower complexity, as compared to conventional implementations, so embodiments can be implemented on low-power devices. As a result, a new deep learning SR scheme for images is presented.
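For intuition on the MAC comparison above, the following back-of-the-envelope Python sketch counts per-layer MACs for a standard convolution and for a depthwise-separable convolution. The layer shape used in the example is an illustrative assumption, not a figure from the source.

```python
def conv_macs(h, w, c_in, c_out, k):
    """MACs for one standard k x k convolution layer over an h x w feature map."""
    return h * w * c_in * c_out * k * k

def depthwise_separable_macs(h, w, c_in, c_out, k):
    """MACs for a depthwise k x k convolution plus a 1 x 1 pointwise convolution."""
    return h * w * c_in * k * k + h * w * c_in * c_out

if __name__ == "__main__":
    h, w, ch, k = 32, 32, 16, 3                 # illustrative layer shape
    full = conv_macs(h, w, ch, ch, k)
    sep = depthwise_separable_macs(h, w, ch, ch, k)
    print(full, sep, sep / full)                # ratio is roughly 1/ch + 1/k**2 ~ 0.17
```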
The method is operable, for example, embodiments have been successfully used with real agricultural images. For clarity in the current description, the non-limiting example of processing infra-red (IR) images is used.
Embodiments provide methods to perform SR using only a single IR image, while balancing the metric quality of a super-resolution image, designated ISR, against the low-power requirements posed by the hardware of the IR cameras. The computational complexity of the present invention is considerably lower than that of similar networks.
In some embodiments, a network (neural network) uses a bottleneck layer from Kim et al. (2016)[12] combined with dense skip connections of Tong et al. (2017)[19] to preserve the high-quality performance of a deep network with only a small portion of the required computation power. Calculations of the invention can be performed in the LR space to save computational costs, and the upscale to HR can be done, for example, using techniques from Shi et al. (2016)[17]. Results show that only a handful of skip-connections suffice. To further lower computational complexity, depthwise-separable convolution can be used, for example from Chollet (2017) [6].
According to the teachings of the present embodiment there is provided a system for image processing, the system including: a processing system containing one or more processors, and an artificial neural network including: an input layer including a memory location for storing an input image, one or more convolution layers, wherein the input layer is connected to a first convolution layer of the convolution layers, and an output layer connected to a last convolution layer of the convolution layers and including a memory location for storing an output image, wherein the layers include instructions for execution on the processing system, the input image is input to the input layer and to at least one of the convolution layers, an initial output of the input layer is input to at least one of the convolution layers, and a layer output of at least one of the convolution layers is input to at least one subsequent convolution layer.
In an optional embodiment, the processors are configured to execute instructions programmed using a predefined set of machine codes and the layers include computational instructions implemented in the machine codes of the processor.
In another optional embodiment, the input image is a low-resolution image and the output image is a super-resolution image.
In another optional embodiment, each of at least one of the convolution layers includes: a respective convolution module accepting data input to the respective convolution layer, a respective activation function processing output data from the respective convolution module, and a respective bottleneck layer processing output data from the respective activation function.
In another optional embodiment, the input image and the initial output are input to the bottleneck layer, and the bottleneck layer generates the layer output.
In another optional embodiment, the input image is input to each of the convolution layers. In another optional embodiment, the initial output is input to each of the convolution layers. In another optional embodiment, the layer output is input to each subsequent convolution layer.
In another optional embodiment, the output layer includes: a shuffleblock receiving the layer output of the last convolution layer and the input image and generating a shuffle-block output that is a higher resolution than the input image and the layer output, an interpolation module receiving the input image and generating an interpolated image that is higher resolution than the input image, and a final convolution receiving the shuffle-block output and the interpolated image and generating the output image.
In another optional embodiment, the network is trained with a training set based on high-resolution images and corresponding low-resolution images.
According to the teachings of the present embodiment there is provided a method of training the network of claim 1, the method including the steps of: receiving one or more sets of high-resolution images, applying one or more transformations to at least a subset of the sets of high-resolution images to generate at least one associated set of low-resolution images, creating a training set including the one or more sets of high-resolution images and the at least one associated set of low-resolution images, and training the network using the training set.
According to the teachings of the present embodiment there is provided a method for image processing, the method including the steps of: configuring an artificial neural network based on a training set of high-resolution images and corresponding low-resolution images, and inputting an input image to an input layer and to at least one convolution layer, generating an initial output from the input layer based on the input image and sending the initial output to at least a first convolutional layer of the convolution layers, and generating a current layer output of at least one of the convolution layers based on the input image, the initial output and any previous layer outputs, and sending the current layer output to at least one subsequent convolution layer, and generating an output image by an output layer based on the layer output of a last convolutional layer of the convolutional layers and the input image.
According to the teachings of the present embodiment there is provided a computer usable non-transitory storage medium having a computer program embodied thereon for causing a suitably programmed system to process images, by performing the steps of claim 12 when such program is executed on the system.
BRIEF DESCRIPTION OF FIGURES
Some embodiments of the present invention are herein described, by way of example only, with reference to the accompanying drawings. With specific reference to the drawings in detail, it is stressed that the particulars shown are by way of example and for purposes of illustrative discussion of embodiments of the invention. In this regard, the description taken with the drawings makes apparent to those skilled in the art how embodiments of the invention may be practiced.
Attention is now directed to the drawings, where like reference numerals or characters indicate corresponding or like components. In the drawings:
FIG. 1, a sketch of a convolution neural network that can be used to implement embodiments of the current invention.
FIG. 2, a sketch of a shuffle block.
FIG. 3A to 3D, photographs illustrating the final layer output process.
FIG. 4 and FIG. 5, images of SR results.
FIG. 6, zoomed-in examples.
FIG. 7, a high-level partial block diagram of an exemplary system configured to implement the network.
FIG. 8A to FIG. 8D, tables of experimental results of different datasets.
FIG. 9, results of the modulation transfer function (MTF) of the embodiments.
ABBREVIATIONS AND DEFINITIONS
For convenience of reference, this section contains a brief list of abbreviations, acronyms, and short definitions used in this document. This section should not be considered limiting. Fuller descriptions can be found below, and in the applicable Standards.
a The upscale factor for the super resolution.
Bottleneck Layer containing fewer nodes compared to the previous layers. Can be used to obtain a representation with reduced dimensionality. Used as a learning layer giving a significant coefficient for the processed data. Can be used to represent data in a different subspace.
Ch The number of channels for each layer of the network. Also known as features.
f_l Output of the l-th convolution module. The number of filters in both input and output is Ch for all l.
I Image.
IR Infra-red.
ILR The low-resolution input image. Dimensions are H x W.
IHR The high-resolution label image. Used to teach the network how to create ISR. Dimensions are aH x aW.
ISR A super-resolved version of ILR. Its dimensions are aH x aW.
HR High-resolution.
L The overall number of layers in the network.
l, l-n, L-n A layer in the network; the n-th layer in the network.
LCON Convolutional module.
LCON-l The l-th convolutional module.
LR Low-resolution.
MAC Multiply-accumulate operation.
PReLU Parametric rectified linear activation unit implements a rectified linear activation function, a piecewise linear function that will output the input directly if it is positive, otherwise, it will output a value corresponding to a learned parameter. Used as an exemplary, typical implementation of an activation function.
S Output of a bottleneck layer. The “layer output”.
S_l The output of the l-th bottleneck layer.
The number of filters in the input is l·Ch.
The number of filters in the output is Ch for all l.
SR Super-resolution.
θ Learned weights for the bottleneck layers. Each filter has 1 x 1 spatial dimensions.
Θ Learned weights for the convolution layers (modules). Each filter has 3 x 3 spatial dimensions.
Φ Non-linear activation function.
DETAILED DESCRIPTION - FIRST EMBODIMENT - FIGS. 1 to 9
The principles and operation of the system and method according to a present embodiment may be better understood with reference to the drawings and the accompanying description. A present embodiment is a system and method for generating high-resolution images from low-resolution images.
The following paragraphs describe different embodiments of the present invention. The following embodiments are exemplary only, generally using IR images. The invention should not be limited to the particular embodiments described herein. For example, the low-resolution images (for example the input low-resolution image ILR) can be IR images, or other images, such as those listed below. Other embodiments are contemplated as well. For example, work has been done in the 7.5-14 micron range. It is foreseen that based on the current description other ranges of the electromagnetic spectrum can be processed, for example, including but not limited to visual light, IR, terahertz (THz), and X-Ray spectrums, as well as other imagery systems, for example, electron-beam imagery, MRI, ultra-sound, satellite imagery, microscopy, mobile phone applications, and radar.
Embodiments have already been demonstrated, and can solve real-world problems, for example improving detection of diseases and irrigation deficits in crops using low-power IR cameras. Embodiments can be used in real-time, with low-power devices, in field-conditions suitable for agriculture and environmental uses.
Unless otherwise defined, all technical and/or scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which the invention pertains. Although methods and materials similar or equivalent to those described herein can be used in the practice or testing of embodiments of the invention, exemplary methods and/or materials are described below. In case of conflict, the patent specification, including definitions, will control. In addition, the materials, methods, and examples are illustrative only and are not intended to be necessarily limiting.
Before explaining at least one embodiment of the invention in detail, it is to be understood that the invention is not necessarily limited in its application to the details of construction and the arrangement of the components and/or methods set forth in the following description and/or illustrated in the drawings and/or the Examples. The invention is capable of other embodiments or of being practiced or carried out in various ways.
An artificial neural network for processing low-resolution images to generate super resolution images includes feed-forward connections between layers. The network includes an input layer, one or more convolution layers, wherein the input layer is connected to a first convolution layer of the convolution layers, and an output layer connected to a last convolution layer of the convolution layers. An input image is input to the input layer and to at least one of the convolution layers, an initial output of the input layer is input to at least one of the convolution layers, and a layer output of at least one of the convolution layers is input to at least one subsequent convolution layer.
Materials and Method
Data
The training was done on the DIV2K dataset Agustsson et al. [1] and Flickr2K disclosed in Timofte, et al. [18]. The images in these datasets have a resolution of 2K, so each image contains fine details. To obtain low-resolution images, the training set is processed and preferably each image is transformed into a lower resolution image, for example each image is down-sampled using bi-cubic interpolation. The training is done on the Y channel because of the proportionality between temperature and pixel intensity shown below.
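A short sketch of this LR/HR pair preparation, using the Python PIL library mentioned later in this document. The crop size and scale follow the training settings quoted below; the fixed crop and function names are illustrative (training uses random crops), and the image is assumed to be at least 192 x 192.

```python
from PIL import Image

def make_training_pair(path, crop=192, scale=4):
    """Create an (I_LR, I_HR) pair by bi-cubic down-sampling, as described above."""
    hr = Image.open(path).convert('YCbCr').split()[0]   # train on the Y channel only
    hr = hr.crop((0, 0, crop, crop))                    # fixed crop here; training uses random crops
    lr = hr.resize((crop // scale, crop // scale), Image.BICUBIC)
    return lr, hr
```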
The training results are evaluated on Set5 Bevilacqua et al. [3], Set14 Zeyde et al. [21] and Urban100 from Huang et al. [11]. The metrics used are peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM). Both metrics were calculated between generated super-resolution images ISR and high-resolution images IHR using compare_psnr() and compare_ssim() from the skimage library in Python. The borders of the images were cropped by 10 pixels on each side to neglect border effects.
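A minimal sketch of this evaluation. Note that compare_psnr() and compare_ssim() are the older skimage.measure API cited above; in recent skimage versions the equivalent functions live in skimage.metrics under new names, which is what this sketch uses. The function name and data-range choice are assumptions.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(sr, hr, border=10):
    """PSNR/SSIM between I_SR and I_HR with a 10-pixel border crop, as described above."""
    sr = np.asarray(sr, dtype=np.float64)[border:-border, border:-border]
    hr = np.asarray(hr, dtype=np.float64)[border:-border, border:-border]
    data_range = hr.max() - hr.min()
    psnr = peak_signal_noise_ratio(hr, sr, data_range=data_range)
    ssim = structural_similarity(hr, sr, data_range=data_range)
    return psnr, ssim
```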
Aside from these training and testing sets, several test sets of different plants were gathered using a Therm-App TH Infra-Red camera [23] at mid-day. See below in reference to FIG. 8A to FIG. 8D, for information and results on a cucumber test set.
Thermal images tend to be noisy. The characteristic noise in the IR images was analyzed and found to be Gaussian distributed with varying means and variances. To provide better super resolution estimations, the training was done in two stages: first using down-sampled images versus their high-resolution source, and second by injecting the characteristic noise into the down-sampled images versus their high-resolution source.
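A sketch of the second-stage noise injection. The mean and standard-deviation ranges below are illustrative placeholders, not values from the source; only the structure (Gaussian noise with varying means and variances added to the standardized LR image) follows the description above.

```python
import numpy as np

def inject_ir_noise(lr_img, mean_range=(-0.02, 0.02), std_range=(0.0, 0.05), rng=None):
    """Add Gaussian noise with a randomly drawn mean and standard deviation,
    emulating the characteristic IR sensor noise described above (ranges are
    illustrative placeholders)."""
    if rng is None:
        rng = np.random.default_rng()
    mean = rng.uniform(*mean_range)
    std = rng.uniform(*std_range)
    noisy = lr_img + rng.normal(mean, std, size=lr_img.shape)
    return np.clip(noisy, 0.0, 1.0)  # inputs are standardized to the range (0, 1)
```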
A feature of embodiments is training the network to ignore noise in the input images. During the training process, the filters are adjusted to notice (only) significant features in the images.
The Network
Refer to FIG. 1, a sketch of a convolution neural network 100 that can be used to implement embodiments of the current invention. A low-resolution IR image denoted ILR is propagated through L layers in the network 100 and the resulting output of the network 100 is a super resolution (SR) image ISR, an approximation of a high-resolution image IHR. The convolution neural network 100 decomposes ILR into Ch filters. Each layer of the network 100 has Ch channels (also known as features). The super resolution scale is denoted a. In the current description the network 100 is trained to achieve an upscale factor for the super resolution a ∈ {2, 4}.
The network has one initial convolution layer L-IN for the input, L convolution layers that are concatenated together, and one more final convolution layer L-OUT for the output; all in all, (2+L) convolutions and L bottleneck layers. While the intermediate, or hidden, layers are referred to as "convolutional layers" (being L in number), convolutions are not limited to being implemented only in the intermediate layers, and convolutions can also be done in other locations, for example, in the input L-IN and the output L-OUT layers.
The initial convolution layer L-IN is used to cast the low-resolution input image ILR into an initial feature space.
The output of each convolution LCON module of layer l is fed to a non-linear activation function, applied elementwise to the result. In the current description, a non-limiting implementation of the activation function uses PReLU. The result from the activation function PReLU is aggregated via concatenation with the outputs of the previous layers and with the input image ILR. The concatenated matrix goes through a bottleneck layer LB which outputs Ch filters. For each bottleneck layer or "bottleneck block" LB-n (where "n" is an integer denoting the layer number), all preceding layers of the network are concatenated together and are convolved with the bottleneck layer LB. Denoting the convolution between two matrices A and B as A * B and the concatenation of these matrices as {A, B}, the mathematical formulation of the bottleneck layer is as follows: S_l = Φ(θ_l * {I_LR, f_1, ..., f_l}) Equation (1), where S_l is the output of the l-th bottleneck layer, θ_l denotes the learned weights of the bottleneck layer, with l·Ch filters as input and Ch filters as output, Φ the non-linear activation function, and f_l the output from the l-th convolution module. The bias term is omitted for brevity.
The bottleneck layer LB is different from a pooling layer: it gives significant features based on data intrinsic to the image itself. In part, this feature of the bottleneck layer LB saves energy in the system (network), as the output of the bottleneck layer LB will only have the most significant features of the respective layer (processing of the layer, which may include inputs from previous layers). The bottleneck layer LB is typically a learning layer, trained to give only the most significant coefficients with regard to a feature space. The bottleneck layer LB can process input information and generate a representation in a different subspace. In part, the bottleneck layer LB helps keep the number of features low, by choosing which features are most significant.
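The following PyTorch sketch shows one stage of this densely connected design implementing Equation (1): a 3 × 3 convolution module followed by a 1 × 1 bottleneck applied to the concatenation of all preceding outputs. The class name, the PReLU after the bottleneck, and the handling of the skip list are assumptions for illustration, not the authors' exact implementation.

    import torch
    import torch.nn as nn

    class BottleneckStage(nn.Module):
        """One stage: 3x3 convolution module, then a 1x1 bottleneck over
        the concatenation of all preceding outputs (Equation 1).
        `n_inputs` counts the tensors being concatenated."""
        def __init__(self, ch: int, n_inputs: int):
            super().__init__()
            self.conv = nn.Conv2d(ch, ch, kernel_size=3, padding=1)
            self.act = nn.PReLU(ch)
            # 1x1 bottleneck: n_inputs * ch channels in, ch channels out.
            self.bottleneck = nn.Conv2d(n_inputs * ch, ch, kernel_size=1)
            self.bottleneck_act = nn.PReLU(ch)

        def forward(self, s_prev: torch.Tensor, skips: list) -> torch.Tensor:
            i_l = self.act(self.conv(s_prev))      # convolution module output Il
            cat = torch.cat(skips + [i_l], dim=1)  # {input features, I1, ..., Il}
            return self.bottleneck_act(self.bottleneck(cat))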
The network is composed of L convolution modules, each in a corresponding convolution layer, that can be described as follows:

Il = φ(θl * Sl-1), l ∈ {2, ..., L} Equation (2)
where θl are learned weights with 3 × 3 spatial dimensions and Ch filters. While a variety of non-linear activation functions can be used, for simplicity in this description, PReLU, proposed by He et al. (2015) [9], will be used as a non-limiting example of the non-linear activation function.
Depthwise-separable convolution modules, as proposed by Chollet (2017) [6], can be used to lower computational cost. An exemplary usage of depthwise-separable convolution is described below.
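As a sketch of the technique, a depthwise-separable module replaces one 3 × 3 convolution with a per-channel (depthwise) 3 × 3 convolution followed by a 1 × 1 pointwise convolution; the MAC savings are quantified in the Computational Cost section below. The class and channel choices here are illustrative.

    import torch.nn as nn

    class DepthwiseSeparableConv(nn.Module):
        """Depthwise-separable alternative to a 3x3 convolution module."""
        def __init__(self, ch: int):
            super().__init__()
            # groups=ch applies one 3x3 filter per channel (depthwise).
            self.depthwise = nn.Conv2d(ch, ch, kernel_size=3, padding=1, groups=ch)
            # 1x1 pointwise convolution mixes the channels.
            self.pointwise = nn.Conv2d(ch, ch, kernel_size=1)

        def forward(self, x):
            return self.pointwise(self.depthwise(x))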
Refer now also to FIG. 2, a sketch of a shuffle block L-SB. The shuffle block L-SB includes a convolution layer 202 and a pixel shuffler 204. The upscale from ILR to ISR is performed in the shuffle block L-SB, producing a shuffle-block output 114 that is of higher resolution than the input image ILR and the layer output (Sl). This method is described in Shi et al. (2016) [17].
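A minimal sketch of such a shuffle block, in the spirit of the sub-pixel convolution of Shi et al. (2016) [17], follows: a convolution expands Ch channels to Ch·a² channels, and PixelShuffle rearranges them into an a-times larger spatial grid. Names and kernel size are assumptions.

    import torch.nn as nn

    class ShuffleBlock(nn.Module):
        """Sub-pixel upscaling: conv to ch * a^2 channels, then rearrange
        into an a-times larger spatial grid."""
        def __init__(self, ch: int, scale: int):
            super().__init__()
            self.conv = nn.Conv2d(ch, ch * scale ** 2, kernel_size=3, padding=1)
            self.shuffle = nn.PixelShuffle(scale)

        def forward(self, x):
            return self.shuffle(self.conv(x))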
In the current embodiment, the final layer L-OUT of the network 100 includes a final convolution L-FIN with Ch + 1 filters as input. The extra channel is a high-resolution image generated from the low-resolution input image ILR. One exemplary implementation for generating the extra channel is to use a bi-cubic interpolation 112 of the input low-resolution image ILR to generate an extra-channel high-resolution interpolation 116. As is known in the art, the bi-cubic interpolation 112 inputs low-resolution data (the low-resolution image ILR) and spreads the low-resolution information across the spatial domain to generate high-resolution data (a high-resolution image, interpolation 116). This high-resolution interpolation 116 contains only low-resolution information. The high-resolution interpolation 116 (high-resolution image) is concatenated to the shuffle-block output 114 before going through the final convolution L-FIN. The output 114 of the shuffle block contains the high-resolution information. This concatenation and convolution enables the network 100 to learn only the high-resolution difference between ILR and IHR. The final layer L-OUT outputs a single channel 118 of a super-resolution image ISR, without an activation function.
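A sketch of this final layer is shown below: the bicubic interpolation of ILR is concatenated to the shuffle-block output as an extra channel, and the last convolution maps the Ch + 1 channels to the single-channel ISR with no activation. The use of F.interpolate for the bicubic branch and the kernel size are assumptions.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class FinalLayer(nn.Module):
        """Concatenate a bicubic upscale of I_LR to the shuffle-block
        output and map Ch + 1 channels to a single-channel I_SR."""
        def __init__(self, ch: int, scale: int):
            super().__init__()
            self.final_conv = nn.Conv2d(ch + 1, 1, kernel_size=3, padding=1)
            self.scale = scale

        def forward(self, shuffle_out: torch.Tensor, i_lr: torch.Tensor):
            bicubic = F.interpolate(i_lr, scale_factor=self.scale,
                                    mode='bicubic', align_corners=False)
            # No activation: the network learns only the HR residual detail.
            return self.final_conv(torch.cat([shuffle_out, bicubic], dim=1))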
The network 100 learns high-frequency, significant features, and then combines this learning with processing of low-resolution images. Each layer can be trained to find different aspects in an image. For example, the first layer, layer-1 (L-1), may be trained (weights of the convolution matrix adjusted) to find edges, and the second layer, layer-2 (L-2), may be trained to find circles in the LR images.
Refer also to FIG. 3A to FIG. 3D, photographs illustrating the final-layer output process. FIG. 3A is a low-resolution image. FIG. 3B shows the bi-cubic interpolation 116 of the low-resolution image ILR. FIG. 3C shows the "high-resolution" information 114 learned by the main network pipeline (L). The interpolation data 116 and the "high-resolution" data 114 are summed in the final-layer convolution L-FIN. In the training process, the result of this summation is minimized to resemble the high-resolution ground-truth image IHR, FIG. 3D.
The input low-resolution image ILR has dimensions H × W with 1 channel. The channel represents the object temperature in the low-resolution image ILR. Before entering the network, ILR can be standardized to the range (0, 1) such that

ĨLR = (ILR − min[ILR]) / max(ILR − min[ILR]) Equation (3)
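A one-line implementation of Equation (3), with an illustrative function name:

    import numpy as np

    def standardize(i_lr: np.ndarray) -> np.ndarray:
        """Equation (3): map the raw temperature image to the range (0, 1)."""
        shifted = i_lr - i_lr.min()
        return shifted / shifted.max()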
Network training 120 can be done by minimizing the error between a ground-truth HR (high-resolution) image IHR and a network output (SR image) ISR. As a cost function, the mean absolute error, known as the L1 norm, which is robust to outliers, is applied between ISR and IHR in the pixel domain. Formally:

L1(θ) = (1 / (H·W)) Σx,y |ISR(x, y) − IHR(x, y)| Equation (4)

where H, W are height and width respectively and θ are the learned weights of the network. A list of parameters is provided above in the section "ABBREVIATIONS AND DEFINITIONS".
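In practice, Equation (4) corresponds directly to PyTorch's built-in L1 loss, which averages the absolute pixel-wise error:

    import torch.nn as nn

    criterion = nn.L1Loss()        # mean absolute error, Equation (4)
    # loss = criterion(i_sr, i_hr) # i_sr: network output, i_hr: ground truth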
Bottleneck layers
Bottleneck layers LB are a 1 × 1 convolution where the number of output filters is Ch. This process was described in Bishop (2006) [4] and used by Shelhamer et al. (2017) [16]. The bottleneck layer LB has several effects. For example, the bottleneck layer LB helps mitigate vanishing gradients. In another example, the most important features are chosen using the computationally efficient and parameter-conservative bottleneck layer, so operations in other convolution layers are always applied to only Ch channels.
The relation between temperature and pixel intensity
The Stefan-Boltzmann equation formulates the relation between the temperature of a surface and the irradiance of the surface. At typical outdoor temperatures (e.g. 280-320 K) the target and the ambient temperature are similar, such that the change in radiation power in this range can be approximated as linearly dependent on the change of the body temperature relative to the ambient temperature:

P ≈ P0 + a·σ·T0³·ΔT Equation (5)

where P is the radiant power, T0, P0 are the reference ambient temperature and associated radiance respectively, σ is the Stefan-Boltzmann coefficient, and a is a proportion factor. Equation (5) presents the Taylor expansion around the ambient temperature. Indeed, in a narrow temperature range, the change in radiation is linearly dependent upon the change in object temperature ΔT relative to the ambient temperature T0.
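The linearization can be checked numerically. The sketch below assumes an ideal black body (so the proportion factor a reduces to 4) and compares σT⁴ against the Taylor expansion of Equation (5) over the stated outdoor range:

    import numpy as np

    SIGMA = 5.670374419e-8  # Stefan-Boltzmann coefficient [W m^-2 K^-4]
    T0 = 300.0              # reference ambient temperature [K]

    T = np.linspace(280.0, 320.0, 5)
    exact = SIGMA * T ** 4                                      # black-body power
    linear = SIGMA * T0 ** 4 + 4 * SIGMA * T0 ** 3 * (T - T0)   # Equation (5)
    # Relative error stays within a few percent at the range edges,
    # supporting the linear approximation in a narrow temperature range.
    print(np.max(np.abs(exact - linear) / exact))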
The IR radiation associated with the object temperature is concentrated by the camera's lens on the camera's detector. By heating the pixels, the concentrated IR radiation changes the micro-bolometers' resistance, which in turn linearly changes the pixels' readings. Here, the object temperature of the scene is assumed to be linearly related to the image grey scale.
This relation allows training the model on regular visible images and still achieving satisfactory results, even without fine-tuning on IR images. Fine-tuning can further enhance performance due to differences in statistics between IR and visible images.

Computational Cost
The operations done in each layer of the network 100 are mainly dot products:

y = w0·x0 + ... + wn·xn Equation (6)

where w and x are vectors and y is a scalar. A multiply-accumulate operation (MAC) is defined as a single multiplication and a single addition operation. In Equation (6) there are n MAC operations. Note that in terms of floating-point operations (FLOPs), there are 2n−1 operations for a dot product.
Let fl be the feature map of the l'th layer with size Ch × H × W, where H × W are the spatial dimensions of the feature map and Ch is the number of channels. For a series of convolution layers with K, Cin, Cout as the kernel size and the number of input and output channels respectively, for each pixel in the feature map a dot product is taken over a K × K window across all Cin channels, and the process is repeated for Cout channels:

H × W × K² × Cin × Cout

Meaning that a bottleneck layer, where K = 1, has:

H × W × Cin × Cout

For depthwise-separable convolution, the calculations for each pixel are done separately for each channel, so only Cin times. The resulting number of MACs is a factor of Cout less than for a convolution layer:

H × W × K² × Cin
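The three per-layer MAC counts above translate directly into small helper functions:

    def conv_macs(h, w, k, c_in, c_out):
        """MACs for a standard convolution layer."""
        return h * w * k ** 2 * c_in * c_out

    def bottleneck_macs(h, w, c_in, c_out):
        """MACs for a 1x1 bottleneck layer (K = 1)."""
        return h * w * c_in * c_out

    def depthwise_macs(h, w, k, c_in):
        """MACs for a depthwise convolution (one filter per channel)."""
        return h * w * k ** 2 * c_in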
In the network 100, the first (L-1) and last (L-L) layers are typically convolution layers, but the other layers can be depthwise-separable convolutions. Henceforth Cin ≡ Cout = Ch for brevity. The MACs in the initial convolution L-IN, final convolution L-FIN and shuffle block L-SB are, respectively:

#ConvIn = H × W × K² × 1 × Ch

#ConvOut = a² × H × W × K² × Ch × 1

#ShuffleBlock = a² × H × W × K² × Ch²

where a is the upscale factor of the output. The number of MACs for L convolution layers with bottlenecks:

H × W × Ch² × L × (K² + (L + 1)/2)
The number of MACs for L depthwise-separable convolution layers with bottlenecks:

H × W × Ch² × L × (K²/Ch + (L + 1)/2)

meaning that the factor between the number of MACs performed by the depthwise-separable convolution implementation and the convolution implementation is:

χ = (K²/Ch + (L + 1)/2) / (K² + (L + 1)/2) Equation (7)

with χ as the reduction factor. A comparison between different networks can be seen in FIG. 8A to FIG. 8D. Bias terms and PReLU are neglected for brevity, as each adds Cout MACs, which is negligible.
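As a worked example of Equation (7), the values K = 3, Ch = 64, L = 8 below are placeholders chosen for illustration, not parameters stated in the text:

    K, CH, L = 3, 64, 8
    chi = (K ** 2 / CH + (L + 1) / 2) / (K ** 2 + (L + 1) / 2)
    # chi is about 0.34 here: the depthwise-separable implementation
    # performs roughly a third of the MACs of standard convolutions.
    print(f"reduction factor chi = {chi:.2f}")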
Training
Refer again to FIG. 1. The training module 120 includes a variety of processing and functions for inputting training data, processing and preparing the training data, configuring the initial system, running the training, etc. Low-resolution (LR) images, and images with noise, present significant problems for determining high-resolution features in an imaged location. While LR images may be readily available from a variety of sources, low-cost to acquire (via low-cost equipment, compared to equipment for collecting high-resolution images), or required to be captured using low-power devices (often inherently LR), a problem is how to extract significant features from these LR images. In addition, conventional processing of LR images to generate high-resolution (HR) images typically has high cost, long processing time, and high power consumption, compared to processing lower-resolution images. As described throughout this description, various elements of embodiments are trained to process and extract significant features from LR images, thus improving the operation of the computational systems on which embodiments run, and reducing cost, time, and power consumption, compared to conventional techniques for processing HR images and existing techniques for processing LR images.
An exemplary network 100 was implemented using PyTorch, Paszke et al. [15]. The mini-batch size was set to 16. Each image was cropped randomly to 192 × 192 to create high-resolution images IHR, and then the high-resolution images IHR were down-scaled with a bi-cubic kernel by ×2 or ×4 to create low-resolution images ILR for training the network 100. The training dataset was augmented with horizontal flips and 90-degree rotations. All image processing was done using the Python PIL image library.
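A sketch of this training-pair preparation with PIL is shown below; the function name, the single-channel conversion, and the augmentation probabilities are illustrative assumptions, while the 192 × 192 crop, bicubic downscale, flips, and 90-degree rotations follow the text.

    import random
    from PIL import Image

    def make_training_pair(path: str, crop: int = 192, scale: int = 4):
        """Random HR crop, bicubic downscale, flip/rotation augmentation."""
        img = Image.open(path).convert('L')      # assume one channel
        x = random.randint(0, img.width - crop)
        y = random.randint(0, img.height - crop)
        hr = img.crop((x, y, x + crop, y + crop))
        if random.random() < 0.5:                # horizontal flip
            hr = hr.transpose(Image.FLIP_LEFT_RIGHT)
        hr = hr.rotate(90 * random.randint(0, 3))  # 90-degree rotations
        lr = hr.resize((crop // scale, crop // scale), Image.BICUBIC)
        return lr, hr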
All network trainable weights are initialized via the method proposed by He et al. (2015) [9], with a scaling factor of 0.1 as proposed by Wang et al. (2018) [20]. The network is optimized using Adam, Kingma et al. [13], with β1 = 0.9, β2 = 0.999 and the initial learning rate set to 5·10⁻⁴. The learning rate was halved at 10⁴ and 10⁵ iterations. The training ran for 3·10⁵ iterations.
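This optimization setup maps onto standard PyTorch components as sketched below; `network` and `train_loader` are assumed to exist (e.g., a model built from the modules sketched earlier and an iterator over training pairs):

    import torch

    optimizer = torch.optim.Adam(network.parameters(),
                                 lr=5e-4, betas=(0.9, 0.999))
    # Halve the learning rate at 10^4 and 10^5 iterations.
    scheduler = torch.optim.lr_scheduler.MultiStepLR(
        optimizer, milestones=[10 ** 4, 10 ** 5], gamma=0.5)

    for step in range(3 * 10 ** 5):
        i_lr, i_hr = next(train_loader)  # assumed iterator of training pairs
        optimizer.zero_grad()
        loss = torch.nn.functional.l1_loss(network(i_lr), i_hr)
        loss.backward()
        optimizer.step()
        scheduler.step()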
The training was done using an NVIDIA 2080ti GPU. Each permutation of the network was trained for 300k iterations.
Examples
Refer to FIG. 8A to FIG. 8D, tables of experimental results on different datasets for an upscale factor of a = 4. A method of the invention was evaluated on a database composed of 9630 outdoor IR images of four crop settings: cucumbers and banana leaves, in the wild and in a greenhouse. Performance was compared in terms of temperature restoration, PSNR, and MACs against other previously suggested state-of-the-art SR networks. In the current figures, the average results for the four groups, A) cucumber in greenhouse, B) cucumber in wild, C) banana leaves in greenhouse, and D) banana leaves in wild, are presented.
For convenience, the tables of the current figures are separated into four sub-tables. Each sub-table is composed of seven rows. Rows 1-3 present different implementations of the network. Rows 4-7 present the performances of three previously suggested SR networks (SRCNN, Dong et al. [7]; SRDenseNet, Tong et al. [19]; VDSR, Kim et al. [12]) and Bi-Cubic interpolation. For convenience, the order of the rows is repeated through the sub-tables. Observing the results, the network out-performs SRCNN [7], SRDenseNet [19], and Bi-Cubic interpolation both in restoration quality and with lower MACs. While VDSR [12] achieves the best restoration results, about 1 dB better in PSNR terms (only 3% better in absolute performance) and 0.022 °C better in mean temperature error terms, it performs ×28 or more additional MACs. Comparing the relative improvement to the computation costs, the method suggests a cost-effective implementation.
Refer to FIG. 4 and FIG. 5, images of SR results. Typical examples for five different datasets are presented one below the other. From left to right, the columns contain: the low-resolution input, bi-cubic interpolation results, VDSR restoration results, and the results of the method. The current figures show SR results for ×2 and ×4 SR respectively. These show a comparison between ILR, Bi-Cubic, ISR and VDSR, proposed by Kim et al. (2016) [12]. Observing the figures, the method appears at the same level as VDSR, achieved with significantly lower computational effort. Both methods perform better than Bi-Cubic interpolation in both appearance and metrics. FIG. 6 presents a zoomed-in replica of FIG. 5e (cucumber in greenhouse). Observing the results, the method appears much better than VDSR [12], and is discussed further below.
All results were obtained while running on a desktop computer equipped with an i7 processor.
Refer now to FIG. 6, zoomed-in examples of ×4 SR: a) the low-resolution image, b) the Bi-Cubic interpolation results, c) the SR results of VDSR, and d) the ×4 SR results of the method.
As noted above, embodiments can solve real-world problems, for example improving detection of diseases in crops using low-power IR cameras. Embodiments can be used in real time, with low-power devices, in field conditions suitable for agricultural and environmental uses.
As seen in the tables of FIG. 8A to FIG. 8D, the restoration metrics are on par with state-of-the-art methods in terms of PSNR, SSIM and temperature estimation, while requiring 4-30 times fewer MACs. VDSR by Kim et al. (2016) [12] achieved the best estimation results, which were only roughly 1 dB (3% relative improvement) and 0.022 °C better than the method of the current embodiment, but with ×28 the computational complexity.
As for the appearance of the restoration, as seen in FIG. 4 and FIG. 5, the model produces visually pleasing results. In fact, FIG. 6 shows an enlarged comparison between ILR, Bi-Cubic interpolation, ISR and VDSR [12]. Results of the current method are sharper and look better than the other results, including VDSR. A reason may be the propagation of features from all layers throughout the network using bottleneck layers. Moreover, VDSR is trained by minimization of the L2 norm, which improves PSNR but tends to produce blurry results.
Thus, the method of the current embodiment provides a suitable solution in both quality and complexity.
Refer to FIG. 9, a graph of results of the embodiment on the modulation transfer function (MTF) of an IR camera. The MTF gives a notion of the resolution of a given imaging system. As seen in the current figure, the embodiment offers an improvement of ×4 in the cutoff frequency of the imaging system: the sampling resolution of the LR image is 0.4 mm, and the embodiment gives a true ×4 improvement to 0.1 mm (as seen in the current figure). Moreover, the embodiment offers significant improvement over the diffraction-limited MTF of a circular aperture (as seen in the current figure).
The following references are listed by number in brackets [ ] in the text above, and are all incorporated by reference in their entirety herein.
[1] E. Agustsson and R. Timofte. NTIRE 2017 challenge on single image super-resolution: Dataset and study. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, July 2017.
[2] B. Berger, B. Parent, and M. Tester. High-throughput shoot imaging to study drought responses. Journal of Experimental Botany, 61(13):3519-3528, 2010. ISSN 0022-0957. doi: 10.1093/jxb/erq201. URL https://doi.org/10.1093/jxb/erq201.
[3] M. Bevilacqua, A. Roumy, C. Guillemot, and M.-L. Alberi-Morel. Low-complexity single-image super-resolution based on nonnegative neighbor embedding.
[4] C. M. Bishop. Pattern Recognition and Machine Learning (Information Science and Statistics). Springer-Verlag, Berlin, Heidelberg, 2006. ISBN 0387310738.
[5] D. Bulanon, T. Burks, and V. Alchanatis. Image fusion of visible and thermal images for fruit detection. Biosystems Engineering, 103:12-22, 2009. doi: 10.1016/j.biosystemseng.2009.02.009.
[6] F. Chollet. Xception: Deep learning with depthwise separable convolutions. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1800-1807, 2017.
[7] C. Dong, C. C. Loy, K. He, and X. Tang. Image super-resolution using deep convolutional networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(2):295-307, Feb 2016. ISSN 0162-8828. doi: 10.1109/TPAMI.2015.2439281.
[8] W. T. Freeman, T. R. Jones, and E. C. Pasztor. Example-based super-resolution. IEEE Computer Graphics and Applications, 22(2):56-65, March 2002. ISSN 0272-1716. doi: 10.1109/38.988747.
[9] K. He, X. Zhang, S. Ren, and J. Sun. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. In IEEE International Conference on Computer Vision (ICCV), 2015. doi: 10.1109/ICCV.2015.123.
[10] Z. He, S. Tang, J. Yang, Y. Cao, M. Y. Yang, and Y. Cao. Cascaded deep networks with multiple receptive fields for infrared image super-resolution. IEEE Transactions on Circuits and Systems for Video Technology, 2018. doi: 10.1109/TCSVT.2018.2864777.
[11] J. Huang, A. Singh, and N. Ahuja. Single image super-resolution from transformed self-exemplars. In 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 5197-5206, June 2015. doi: 10.1109/CVPR.2015.7299156.
[12] J. Kim, J. K. Lee, and K. M. Lee. Accurate image super-resolution using very deep convolutional networks. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016.
[13] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. Published as a conference paper at the 3rd International Conference on Learning Representations (ICLR), San Diego, 2015. URL http://arxiv.org/abs/1412.6980.
[14] M. Möller, V. Alchanatis, Y. Cohen, M. Meron, J. Tsipris, A. Naor, V. Ostrovsky, M. Sprintsin, and S. Cohen. Use of thermal and visible imagery for estimating crop water status of irrigated grapevine. Journal of Experimental Botany, 58(4):827-838, 2006. ISSN 0022-0957. doi: 10.1093/jxb/erl115. URL https://doi.org/10.1093/jxb/erl115.
[15] A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. DeVito, Z. Lin, A. Desmaison, L. Antiga, and A. Lerer. Automatic differentiation in PyTorch. In NIPS Autodiff Workshop, 2017.
[16] E. Shelhamer, J. Long, and T. Darrell. Fully convolutional networks for semantic segmentation. IEEE Trans. Pattern Anal. Mach. Intell., 39(4):640-651, Apr. 2017. ISSN 0162-8828. doi: 10.1109/TPAMI.2016.2572683.
[17] W. Shi, J. Caballero, F. Huszár, J. Totz, A. P. Aitken, R. Bishop, D. Rueckert, and Z. Wang. Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1874-1883, 2016.
[18] R. Timofte, E. Agustsson, L. Van Gool, M.-H. Yang, L. Zhang, B. Lim, et al. NTIRE 2017 challenge on single image super-resolution: Methods and results. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, July 2017.
[19] T. Tong, G. Li, X. Liu, and Q. Gao. Image super-resolution using dense skip connections. In 2017 IEEE International Conference on Computer Vision (ICCV), pages 4809-4817, Oct 2017. doi: 10.1109/ICCV.2017.514.
[20] X. Wang, K. Yu, S. Wu, J. Gu, Y. Liu, C. Dong, Y. Qiao, and C. C. Loy. ESRGAN: Enhanced super-resolution generative adversarial networks. In The European Conference on Computer Vision Workshops (ECCVW), September 2018.
[21] R. Zeyde, M. Elad, and M. Protter. On single image scale-up using sparse representations. In J.-D. Boissonnat, P. Chenin, A. Cohen, C. Gout, T. Lyche, M.-L. Mazure, and L. Schumaker, editors, Curves and Surfaces, pages 711-730, Berlin, Heidelberg, 2012. Springer Berlin Heidelberg. ISBN 978-3-642-27413-8.
[22] A. Zomet and S. Peleg. Multi-sensor super-resolution. In Proceedings of the Sixth IEEE Workshop on Applications of Computer Vision, WACV '02, pages 27-, Washington, DC, USA, 2002. IEEE Computer Society. ISBN 0-7695-1858-3. URL http://dl.acm.org/citation.cfm?id=832302.836830.
[23] https://therm-app.com/therm-app-thermography/
While the invention has been described with respect to a limited number of embodiments, it will be appreciated that many variations, modifications and other applications of the invention may be made. Therefore, the claimed invention as recited in the claims that follow is not limited to the embodiments described herein.
It is well known in the field that it is frequently impossible for humans to perform the calculations of artificial intelligence (AI) and machine learning (ML) systems, such as the current embodiment. For example, the processing that the network 100 performs on a given data set is typically not pre-programmed and may vary depending on dynamic factors, such as a time at which the input data set is processed and which other input data sets were previously processed.
The current network 100 is a carefully designed framework that, in part, uses algorithms. That is, some algorithms may be used as building blocks for the network 100 framework, within which the system will itself learn its own operation parameters.
FIG. 7 is a high-level partial block diagram of an exemplary system 600 configured to implement the network 100 of the present invention. System (processing system) 600 includes a processor 602 (one or more) and four exemplary memory devices: a random access memory (RAM) 604, a boot read only memory (ROM) 606, a mass storage device (hard disk) 608, and a flash memory 610, all communicating via a common bus 612. As is known in the art, processing and memory can include any computer readable medium storing software and/or firmware and/or any hardware element(s) including but not limited to field programmable logic array (FPLA) element(s), hard-wired logic element(s), field programmable gate array (FPGA) element(s), graphics processing unit (GPU), and application-specific integrated circuit (ASIC) element(s). The processor 602 is formed of one or more processors, for example, hardware processors, including microprocessors, for performing functions and operations detailed herein. The processors are, for example, conventional processors, such as those used in servers, computers, and other computerized devices. For example, the processors may include x86 processors from AMD and Intel, Xeon® and Pentium® processors from Intel, as well as any combinations thereof. Any instruction set architecture may be used in processor 602 including but not limited to reduced instruction set computer (RISC) architecture and/or complex instruction set computer (CISC) architecture. A module (processing module, neural network node or layer) 614 is shown on mass storage 608, but as will be obvious to one skilled in the art, could be located on any of the memory devices.
Mass storage device 608 is a non-limiting example of a non-transitory computer-readable storage medium bearing computer-readable code for implementing the image processing methodology described herein. Other examples of such non-transitory computer-readable storage media include read-only memories such as CDs bearing such code.
System 600 may have an operating system stored on the memory devices, the ROM may include boot code for the system, and the processor may be configured for executing the boot code to load the operating system to RAM 604, executing the operating system to copy computer-readable code to RAM 604 and execute the code.
Network connection 620 provides communications to and from system 600. Typically, a single network connection provides one or more links, including virtual connections, to other devices on local and/or remote networks. Alternatively, system 600 can include more than one network connection (not shown), each network connection providing one or more links to other devices and/or networks.
System 600 can be implemented as a server or client respectively connected through a network to a client or server.
Note that a variety of implementations for modules, processing, and layers are possible, depending on the application. Modules are preferably implemented in software, but can also be implemented in hardware and firmware, on a single processor or distributed processors, at one or more locations. The above-described module functions can be combined and implemented as fewer modules or separated into sub-functions and implemented as a larger number of modules. Based on the above description, one skilled in the art will be able to design an implementation for a specific application.
Note that the above-described examples, numbers used, and exemplary calculations are to assist in the description of this embodiment. Inadvertent typographical errors, mathematical errors, and/or the use of simplified calculations do not detract from the utility and basic advantages of the invention.
To the extent that the appended claims have been drafted without multiple dependencies, this has been done only to accommodate formal requirements in jurisdictions that do not allow such multiple dependencies. Note that all possible combinations of features that would be implied by rendering the claims multiply dependent are explicitly envisaged and should be considered part of the invention.
The descriptions of the various embodiments of the present disclosure have been presented for purposes of illustration, but are not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application or technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.
As used herein, the singular form “a”, “an” and “the” include plural references unless the context clearly dictates otherwise.
The word “exemplary” is used herein to mean “serving as an example, instance or illustration”. Any embodiment described as “exemplary” is not necessarily to be construed as preferred or advantageous over other embodiments and/or to exclude the incorporation of features from other embodiments.
It is appreciated that certain features of the invention, which are, for clarity, described in the context of separate embodiments, may also be provided in combination in a single embodiment. Conversely, various features of the invention, which are, for brevity, described in the context of a single embodiment, may also be provided separately or in any suitable subcombination or as suitable in any other described embodiment of the invention. Certain features described in the context of various embodiments are not to be considered essential features of those embodiments, unless the embodiment is inoperative without those elements.

Claims

WHAT IS CLAIMED IS:
1. A system for image processing, the system comprising:
(a) a processing system containing one or more processors, and
(b) an artificial neural network (100) including:
(i) an input layer (L-IN) including a memory location for storing an input image
(ILR),
(ii) one or more (L) convolution layers (L-n), wherein said input layer (L-IN) is connected to a first convolution layer (L-l) of said convolution layers (L-n), and
(iii) an output layer (L-OUT) connected to a last convolution layer (L-l) of said convolution layers (L-n) and including a memory location for storing an output image (ISR),
(c) wherein
(A) said layers include instructions for execution on said processing system,
(B) the input image (ILR) is input to said input layer (L-IN) and to at least one of said convolution layers (L-n),
(C) an initial output (110) of said input layer (L-IN) is input to at least one of said convolution layers (L-n), and
(D) a layer output (S) of at least one of said convolution layers (L-n) is input to at least one subsequent convolution layer (L-n).
2. The system of claim 1 wherein the processors are configured to execute instructions programmed using a predefined set of machine codes and said layers include computational instructions implemented in the machine codes of the processor.
3. The system of claim 1 wherein said input image is a low-resolution image and said output image is a super-resolution image.
4. The system of claim 1 wherein each of at least one of said convolution layers (L-n) includes:
(a) a respective convolution module (LCON-n) accepting data to respective said convolution layer (L-n),
(b) a respective activation function (PReLU) processing output data from said respective convolution module (LCON-n), and
(c) a respective bottleneck layer (LB) processing output data from said respective activation function (PReLU).
5. The system of claim 4 wherein the input image (ILR) and said initial output (110) are input to said bottleneck layer (LB), and said bottleneck layer (LB) generates said layer output (S).
6. The system of claim 1 wherein the input image (ILR) is input to each of said convolution layers (L-n).
7. The system of claim 1 wherein said initial output (110) is input to each of said convolution layers (L-n).
8. The system of claim 1 wherein said layer output (Sl) is input to each subsequent convolution layer (L-n).
9. The system of claim 1 wherein said output layer (L-OUT) includes:
(a) a shuffle block (L-SB) receiving said layer output (Sl) of said last convolution layer (L-l) and the input image (ILR) and generating a shuffle-block output (114) that is of higher resolution than the input image (ILR) and said layer output (Sl),
(b) an interpolation module (112) receiving the input image (ILR) and generating an interpolated image (116) that is higher resolution than the input image (ILR), and
(c) a final convolution (L-FIN) receiving said shuffle-block output (114) and said interpolated image (116) and generating said output image (ISR).
10. The system of claim 1 wherein said network (100) is trained with a training set based on high-resolution images and corresponding low-resolution images.
11. A method of training the network (100) of claim 1, the method comprising the steps of:
(a) receiving one or more sets of high-resolution (IHR) images,
(b) applying one or more transformations to at least a subset of said sets of high-resolution images to generate at least one associated set of low-resolution images (ILR),
(c) creating a training set including said one or more sets of high-resolution images and said at least one associated set of low-resolution images, and
(d) training (120) the network (100) using said training set.
12. A method for image processing, the method comprising the steps of:
(a) configuring an artificial neural network (100) based on a training set of high-resolution images and corresponding low-resolution images, and
(b) inputting an input image (ILR) to an input layer (L-IN) and to at least one convolution layer (L-n),
(c) generating an initial output (110) from said input layer (L-IN) based on said input image (ILR) and sending said initial output (110) to at least a first convolutional layer (L-l) of said convolution layers (L-n), and
(d) generating a current layer output (S) of at least one of said convolution layers (L-n) based on said input image (ILR), said initial output (110) and any previous layer outputs (S), and sending said current layer output (S) to at least one subsequent convolution layer (L-n), and
(e) generating an output image (ISR) by an output layer (L-OUT) based on said layer output (S) of a last convolutional layer (L-l) of said convolutional layers (L-n) and said input image (ILR).
13. The method of claim 12 wherein said network (100) is configured according to any of claims 2 to 9.
14. A computer usable non-transitory storage medium having a computer program embodied thereon for causing a suitably programmed system to process images, by performing the steps of claim 12 when such program is executed on the system.
PCT/IL2020/051004 2019-09-11 2020-09-13 Methods and systems for super resolution for infra-red imagery WO2021048863A1 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
US17/641,861 US20220335571A1 (en) 2019-09-11 2020-09-13 Methods and systems for super resolution for infra-red imagery
CN202080077962.0A CN114641790A (en) 2019-09-11 2020-09-13 Super-resolution processing method and system for infrared image
EP20862956.8A EP4028984A4 (en) 2019-09-11 2020-09-13 Methods and systems for super resolution for infra-red imagery
IL291157A IL291157A (en) 2019-09-11 2022-03-07 Methods and systems for super resolution for infra-red imagery

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201962898827P 2019-09-11 2019-09-11
US62/898,827 2019-09-11

Publications (1)

Publication Number Publication Date
WO2021048863A1 true WO2021048863A1 (en) 2021-03-18

Family

ID=74866271

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IL2020/051004 WO2021048863A1 (en) 2019-09-11 2020-09-13 Methods and systems for super resolution for infra-red imagery

Country Status (5)

Country Link
US (1) US20220335571A1 (en)
EP (1) EP4028984A4 (en)
CN (1) CN114641790A (en)
IL (1) IL291157A (en)
WO (1) WO2021048863A1 (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113706383A (en) * 2021-08-30 2021-11-26 上海亨临光电科技有限公司 Super-resolution method, system and device for terahertz image
CN114022355A (en) * 2021-09-26 2022-02-08 陕西师范大学 Image super-resolution method based on recursive attention mechanism
CN115082318A (en) * 2022-07-13 2022-09-20 东北电力大学 Electrical equipment infrared image super-resolution reconstruction method
CN115272083A (en) * 2022-09-27 2022-11-01 中国人民解放军国防科技大学 Image super-resolution method, device, equipment and medium
CN116071239A (en) * 2023-03-06 2023-05-05 之江实验室 CT image super-resolution method and device based on mixed attention model

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170347061A1 (en) * 2015-02-19 2017-11-30 Magic Pony Technology Limited Machine Learning for Visual Processing
CN108259997A (en) * 2018-04-02 2018-07-06 腾讯科技(深圳)有限公司 Image correlation process method and device, intelligent terminal, server, storage medium
US20180293707A1 (en) * 2017-04-10 2018-10-11 Samsung Electronics Co., Ltd. System and method for deep learning image super resolution
WO2019153671A1 (en) * 2018-02-11 2019-08-15 深圳创维-Rgb电子有限公司 Image super-resolution method and apparatus, and computer readable storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106056562B (en) * 2016-05-19 2019-05-28 京东方科技集团股份有限公司 A kind of face image processing process, device and electronic equipment
CN106934397B (en) * 2017-03-13 2020-09-01 北京市商汤科技开发有限公司 Image processing method and device and electronic equipment
US10223611B1 (en) * 2018-03-08 2019-03-05 Capital One Services, Llc Object detection using image classification models
JP2020036773A (en) * 2018-09-05 2020-03-12 コニカミノルタ株式会社 Image processing apparatus, image processing method, and program
US11386144B2 (en) * 2019-09-09 2022-07-12 Adobe Inc. Identifying digital attributes from multiple attribute groups within target digital images utilizing a deep cognitive attribution neural network

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170347061A1 (en) * 2015-02-19 2017-11-30 Magic Pony Technology Limited Machine Learning for Visual Processing
US20180293707A1 (en) * 2017-04-10 2018-10-11 Samsung Electronics Co., Ltd. System and method for deep learning image super resolution
WO2019153671A1 (en) * 2018-02-11 2019-08-15 深圳创维-Rgb电子有限公司 Image super-resolution method and apparatus, and computer readable storage medium
CN108259997A (en) * 2018-04-02 2018-07-06 腾讯科技(深圳)有限公司 Image correlation process method and device, intelligent terminal, server, storage medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
HE ZEWEI ET AL.: "Cascaded Deep Networks With Multiple Receptive Fields for Infrared Image Super-Resolution", IEEEE TRANS. ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, vol. 29, no. 8, August 2019 (2019-08-01), pages 2310 - 2322, XP011738148, DOI: 10.1109/TCSVT.2018.2864777
See also references of EP4028984A4
WANG LINGFENG ET AL.: "Reconstructed DenseNets for Image Super-Resolution", 2018 25TH IEEE INTERNATIONAL CONF. ON IMAGE PROCESSING (ICIP, 7 October 2018 (2018-10-07), pages 3558 - 3562, XP033454602, DOI: 10.1109/ICIP.2018.8451027

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113706383A (en) * 2021-08-30 2021-11-26 上海亨临光电科技有限公司 Super-resolution method, system and device for terahertz image
CN114022355A (en) * 2021-09-26 2022-02-08 陕西师范大学 Image super-resolution method based on recursive attention mechanism
CN114022355B (en) * 2021-09-26 2024-02-20 陕西师范大学 Image super-resolution method based on recursive attention mechanism
CN115082318A (en) * 2022-07-13 2022-09-20 东北电力大学 Electrical equipment infrared image super-resolution reconstruction method
CN115272083A (en) * 2022-09-27 2022-11-01 中国人民解放军国防科技大学 Image super-resolution method, device, equipment and medium
CN115272083B (en) * 2022-09-27 2022-12-02 中国人民解放军国防科技大学 Image super-resolution method, device, equipment and medium
CN116071239A (en) * 2023-03-06 2023-05-05 之江实验室 CT image super-resolution method and device based on mixed attention model

Also Published As

Publication number Publication date
IL291157A (en) 2022-05-01
EP4028984A1 (en) 2022-07-20
US20220335571A1 (en) 2022-10-20
CN114641790A (en) 2022-06-17
EP4028984A4 (en) 2023-01-11

Similar Documents

Publication Publication Date Title
Li et al. Hyperspectral image super-resolution using deep convolutional neural network
US20220335571A1 (en) Methods and systems for super resolution for infra-red imagery
Ha et al. Deep learning based single image super-resolution: A survey
Wang et al. Hyperreconnet: Joint coded aperture optimization and image reconstruction for compressive hyperspectral imaging
Kappeler et al. Video super-resolution with convolutional neural networks
Bhat et al. Deep reparametrization of multi-frame super-resolution and denoising
Li et al. Hyperspectral image super-resolution by spectral mixture analysis and spatial–spectral group sparsity
Zhang et al. Color demosaicking by local directional interpolation and nonlocal adaptive thresholding
Zhang et al. CCR: Clustering and collaborative representation for fast single image super-resolution
Fan et al. Scale-wise convolution for image restoration
Liu et al. Switchable temporal propagation network
Roa'a et al. Generation of high dynamic range for enhancing the panorama environment
Chudasama et al. Therisurnet-a computationally efficient thermal image super-resolution network
CN103150713A (en) Image super-resolution method of utilizing image block classification sparse representation and self-adaptive aggregation
CN111126385A (en) Deep learning intelligent identification method for deformable living body small target
Zhu et al. Stacked U-shape networks with channel-wise attention for image super-resolution
Deshpande et al. SURVEY OF SUPER RESOLUTION TECHNIQUES.
Meng et al. Gia-net: Global information aware network for low-light imaging
Noor et al. Gradient image super-resolution for low-resolution image recognition
Yuan et al. Gradient residual attention network for infrared image super-resolution
Zeng et al. U-net-based multispectral image generation from an rgb image
Patel et al. ThermISRnet: an efficient thermal image super-resolution network
Amiri et al. A fast video super resolution for facial image
Zhang et al. Learnable reconstruction methods from RGB images to hyperspectral imaging: a survey
KM et al. QSRNet: towards quaternion-based single image super-resolution

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20862956

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2020862956

Country of ref document: EP

Effective date: 20220411