CA2822150A1 - Method and system for fusing multiple images - Google Patents


Info

Publication number
CA2822150A1
Authority
CA
Canada
Prior art keywords
local
image
matrix
source
source images
Prior art date
Legal status
Granted
Application number
CA 2822150
Other languages
French (fr)
Other versions
CA2822150C (en)
Inventor
Rui Shen
Current Assignee
Individual
Original Assignee
Individual
Priority date
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to CA2822150A priority Critical patent/CA2822150C/en
Publication of CA2822150A1 publication Critical patent/CA2822150A1/en
Application granted granted Critical
Publication of CA2822150C publication Critical patent/CA2822150C/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70Circuitry for compensating brightness variation in the scene
    • H04N23/741Circuitry for compensating brightness variation in the scene by increasing the dynamic range of the image compared to the dynamic range of the electronic image sensors
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10141Special mode during image acquisition
    • G06T2207/10144Varying exposure
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20004Adaptive image processing
    • G06T2207/20012Locally adaptive
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20172Image enhancement details
    • G06T2207/20208High dynamic range [HDR] image processing

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

A method and system are provided for combining information from a plurality of source images to form a fused image. The fused image is generated by combining the source images based on both local features and global features computed from the source images. Local features are computed for local regions in each source image. For each source image, the computed local features are further processed to form a local weight matrix. Global features are computed for the source images. For each source image, the computed global features are further processed to form a global weight vector. For each source image, its corresponding local weight matrix and its corresponding global weight vector are combined to form a final weight matrix. The source images are then weighted by the final weight matrices to generate the fused image.

Description

METHOD AND SYSTEM FOR FUSING MULTIPLE IMAGES
FIELD OF THE INVENTION
[0001] The present invention relates to the field of image processing, and in particular to a method and system for combining information from a plurality of images to form a single image.
BACKGROUND OF THE INVENTION
[0002] Image fusion has been found useful in many applications. A single recorded image from an image sensor may contain insufficient details of a scene due to the incompatibility between the image sensor's capture range and the characteristics of the scene. For example, because a natural scene can have a high dynamic range (HDR) that exceeds the dynamic range of an image sensor, a single recorded image is likely to exhibit under- or over-exposure in some regions, which leads to detail loss in those regions. Image fusion can solve such problems by combining local details from a plurality of images recorded by an image sensor under different settings of an imaging device, such as under different exposure settings, or from a plurality of images recorded by different image sensors, each of which captures some but not all characteristics of the scene.
[0003] One type of image fusion method known in the art is based on multi-scale decomposition (MSD). Two types of commonly used MSD schemes include pyramid transform, such as Laplacian pyramid transform, and wavelet transform, such as discrete wavelet transform. Images are decomposed into multi-scale representations (MSRs), each of which contains an approximation scale generated by low-pass filtering and one or more detail scales generated by high-pass or band-pass filtering.
The fused image is reconstructed by inverse MSD from a combined MSR.
[0004] Another type of image fusion method known in the art computes local features at the original image scale and then, by solving an optimization problem, generates the fused image or the fusion weights, which are used as weighting factors when the images are linearly combined. Another type of image fusion method divides images into blocks and generates a fused image by optimizing one or more criteria within each block.
[0005] Another type of method that can achieve a similar effect to image fusion when fusing images taken under different exposure settings is the two-phase procedure of HDR reconstruction and tone mapping. An HDR image is reconstructed from the input images, and then the dynamic range of this HDR image is compressed in the tone-mapping phase. However, the above types of methods may impose high spatial and/or temporal computational cost, or may introduce artifacts into a fused image due to non-linear transformations of pixel values or due to operations performed only in small local regions.
[0006] Accordingly, what is needed is a method and system that effectively and efficiently combines useful information from images, especially in the case of fusing images taken under different exposure settings.
SUMMARY OF THE INVENTION
[0007] A method of producing a fused image is provided, including the steps of: providing a plurality of source images; determining a local feature matrix for each of the source images, for a feature of each of the source images; determining a local weight matrix for each of the source images, using the local feature matrix associated with the source image; determining a global weight vector for each of the source images; determining a final weight matrix for each of the source images, using the local weight matrix and global weight vector associated with the source image; and using the final weight matrices for the source images to combine the source images into a fused image.
[0008] A plurality of scales may be included when determining the local feature matrix for each of the source images. A local contrast value may be determined for each of the source images, the local contrast value modulated by a sigmoid-shaped function. Each local contrast value has a magnitude, and the magnitude may be used to determine the local feature matrix.
[0009] Color saturation may be used with the local contrast values. A plurality of scales may be included when determining the local weight matrix for each of the source images. A local similarity matrix may be constructed to store similarity values between adjacent image regions. A local similarity pyramid may be constructed for each local similarity matrix. A binary function or set function may be used to compute the local similarity pyramids.
[0010] One or more global features may be determined for at least one source image; the global feature may be transformed using a pre-defined function; and a global weight vector may be constructed for each source image using the transformed global feature.
[0011] The local weight matrix and the global weight vector for each source image may be combined using a predetermined combination function. The global feature may be an average luminance of all of the source images or an average luminance of each source image.
[0012] A system for fusing images is provided, including: a computing device having a processor and a memory; the memory configured to store a plurality of source images;
means for fusing an image, having: means for determining a local feature matrix for each of the source images, for a feature of each of the source images; means for determining a local weight matrix for each of the source images, using the local feature matrix associated with the source image; means for determining a global weight vector for each of the source images; means for determining a final weight matrix for each of the source images, using the local weight matrix and global weight vector associated with the source image; and means for using the final weight matrices for the source images to combine the source images into a fused image.
[0013] An image source may provide the source images. The image source may be an image sensor. The means for fusing an image may be stored within processor executable instructions of the computing device. The means for fusing an image may be within a software module or a hardware module in communication with the processor.
BRIEF DESCRIPTION OF THE DRAWINGS
[0014] FIG. 1 is a flow chart illustrating the procedure of combining a plurality of images into a single fused image according to an embodiment of the invention.
[0015] FIG. 2 is a flow chart illustrating the procedure of computing local feature matrices according to an embodiment of the invention.
[0016] FIG. 3 is a flow chart illustrating the procedure of computing local weight matrices according to an embodiment of the invention.
[0017] FIG. 4 is a flow chart illustrating the procedure of computing global weight vectors according to an embodiment of the invention.
[0018] FIG. 5 is a block diagram illustrating an image fusion system according to an embodiment of the invention.
DETAILED DESCRIPTION OF THE INVENTION
[0019] FIG. 1 illustrates the procedure 100 of combining a plurality of source images into a single fused image. An "image" herein refers to a matrix of image elements. A matrix can be a one-dimensional (1D) matrix or a multi-dimensional matrix. Examples of an image element include but are not limited to a pixel in the 1D case, a pixel in the two-dimensional (2D) case, a voxel in the three-dimensional (3D) case, and a doxel, or dynamic voxel, in the four-dimensional (4D) case. A "source image" herein refers to an image to be inputted to an image fusion procedure. A "fused image" herein refers to an image that is the output of an image fusion procedure.
[0020] For ease of exposition, the description hereafter is generally directed to the case wherein the source images are taken under different exposure settings and are 2D images. However, with no or minimal modifications, the method and system according to the invention can be applied in cases in which source images are taken under other settings or by different imaging devices, such as images taken under different focus settings, images taken by different medical imaging devices, and images taken by one or more multispectral imaging devices; or in cases in which source images are in dimensions other than 2D.
[0021] In step 105, a plurality of source images is obtained from one or more image sensors or from one or more image storage devices. The source images are of the same size; if not, they can be scaled to the same size. In step 110, for each source image, a local feature matrix is computed. A local feature herein represents a certain characteristic of an image region, such as the brightness in an image region or the color variation in an image region. An "image region" herein refers to a single image element or a group of image elements. Each element of a local feature matrix is a numerical value that represents a local feature. In step 115, for each source image, a local weight matrix is computed using the local feature matrices from step 110. A "weight matrix" herein refers to a matrix in which each element is a numerical value that corresponds to an image region in a source image and determines the amount of contribution from that image region to a fused image. In step 120, for each source image, a global weight vector is computed. In step 125, for each source image, its local weight matrix and its global weight vector are combined to form a final weight matrix.
[0022] In step 130, the fused image is generated by combining the source images based on the final weight matrices. Such combination can be performed as a weighted average of the source images using the final weight matrices as weighting factors. Let K denote the number of source images, where K ≥ 2. Let I_k denote the k-th source image, where 0 ≤ k ≤ K − 1. Let X_k denote the k-th final weight matrix, which is of the same size as the source images. Let $\hat{I}$ denote the fused image. Let ⊙ denote the operator of element-wise multiplication. Then, the weighted average for forming the fused image can be expressed using the following equation:

$$\hat{I} = \sum_{k=0}^{K-1} X_k \odot I_k \qquad (1)$$

If the value of an image element in the fused image exceeds a pre-defined dynamic range, it can be either scaled or truncated to meet that range.
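As a minimal illustration, the weighted average of Equation 1 can be sketched in NumPy as follows; the function name fuse_images and the [0, 1] clipping range are illustrative assumptions rather than requirements of the method:

```python
import numpy as np

def fuse_images(sources, weights):
    """Weighted average of K source images per Equation 1."""
    fused = np.zeros_like(sources[0], dtype=np.float64)
    for img, w in zip(sources, weights):   # I_k and X_k, k = 0..K-1
        if img.ndim == 3:                  # broadcast a 2D weight over color channels
            w = w[..., np.newaxis]
        fused += w * img                   # element-wise multiplication, summed over k
    return np.clip(fused, 0.0, 1.0)        # truncate to a pre-defined dynamic range
```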
[0023] Although in FIG. 1 step 120 is depicted as being performed after steps 110 and 115, it can be performed before step 115 or before step 110, or in parallel with step 110, step 115, or both.
[0024] Computing Local Feature Matrices
[0025] FIG. 2 further illustrates step 110 of computing local feature matrices. One or more local features can be considered, depending on the characteristics of the source images and on the individual application scenarios. Examples of a local feature include but are not limited to local contrast in an image region, color saturation in an image region, hue in an image region, brightness/luminance in an image region, color contrast in an image region, average local contrast in an image region, average color saturation in an image region, average hue in an image region, average luminance in an image region, average color contrast in an image region, variation of local contrast in an image region, variation of color saturation in an image region, hue variation in an image region, luminance variation in an image region, variation of color contrast in an image region, and color variation in an image region. The local feature computation (step 110) is performed in a multi-scale fashion, which captures local features at different scales. In addition, because the total number of image elements in source images can be very large, local feature matrices computed in a multi-scale fashion also help to reduce computational cost in subsequent processing. An alternative is to perform the local feature computation only at a single scale, but this excludes the benefit of using local features from different scales and may incur higher computational cost in subsequent processing.
[0026] The computation of local feature matrices is first performed for each source image at its original scale, with reference to step 205. When a single local feature is considered, there is only one local feature matrix computed for each source image in step 205. When multiple local features are considered, a local feature matrix is initially computed for each local feature in each source image in step 205. In step 210, a coarser-scale image is computed for each source image. In step 215, a single local feature matrix (in the case that one local feature is considered) or multiple local feature matrices (in the case that multiple local features are considered) are computed at the current coarser scale. In step 220, the current coarser-scale local feature matrix or matrices are updated using information from those at the previous finer scale. Steps 210, 215, and 220 are repeated until a pre-defined or pre-computed number of scales are reached in step 225. Step 230 checks whether multiple local features are considered. If multiple local features are considered, then, for each source image, its multiple local feature matrices at the current coarsest scale are combined into a single local feature matrix, as depicted in step 235. Finally, the local feature matrices at the current coarsest scale are normalized in step 240.
[0027] For example, in an embodiment of the invention, step 110, the step of computing local feature matrices, can be performed as follows, considering a single local feature.
[0028] Local contrast is used as the single local feature, which is applicable to both color and grayscale source images. Local contrast represents the local luminance variation with respect to the surrounding luminance, and local details are normally associated with local variations. Therefore, taking local contrast in the luminance channel as a local feature helps to preserve local details. Alternatively, local luminance variation can be considered alone, without taking into account the surrounding luminance. However, in this way, image regions with the same amount of local luminance variation are treated equally regardless of their surrounding luminance. Therefore, some image regions with high local contrast may not be effectively differentiated from other image regions with low local contrast, which may impair the quality of the fused image. Local contrast is normally defined as the ratio between the band-pass or high-pass filtered image and the low-pass filtered image. However, under such a definition, under-exposed image regions, which are normally noisy, may produce stronger responses than well-exposed image regions. This makes under-exposed image regions contribute more to the fused image and reduces the overall brightness. Thus, if the response from the low-pass filter in an image region is below a threshold θ, the response from the band-pass or high-pass filter in that image region, instead of the ratio, is taken as the local contrast value in order to suppress noise. This computation of local contrast values is performed for each source image at its original scale, with reference to step 205.
[0029] For a grayscale image, its intensities or luminance values are normalized to the range [0, 1], and then the computation of local contrast values is directly performed on the normalized luminance values. For a color image, a grayscale or luminance image can be extracted, and then the computation of local contrast values is performed on this extracted grayscale image. There are various known methods to extract the luminance image from a color image, such as converting the color image to the LHS (luminance, hue, saturation) color space and then taking the "L" component.
[0030] Let C_k denote the matrix of local contrast values computed for the k-th source image I_k at its original scale, i.e., the 0th scale. Let [M]_ind denote an element of a matrix M, where the location of this element in the matrix M is indicated by an index vector ind. For example, in the 2D case, ind can be expressed as ind = (i, j), and then [M]_(i,j) represents the element in the i-th row and j-th column of the matrix M. Let Ψ denote a band-pass or high-pass filter, such as a Laplacian filter. Let Φ denote a low-pass filter, such as a Gaussian filter. Let ⊗ denote the operator of convolution. Then, the computation of local contrast values in step 205 can be expressed as the following equation:

$$[C_k]_{ind} = \begin{cases}[\Psi \otimes I_k]_{ind}, & \text{if } [\Phi \otimes I_k]_{ind} < \theta;\\ [\Psi \otimes I_k]_{ind}\,/\,[\Phi \otimes I_k]_{ind}, & \text{otherwise.}\end{cases} \qquad (2)$$

[0031] The response of a band-pass filter can be approximated by the difference between the original image and the response of a low-pass filter. Therefore, the computation of local contrast values in step 205 can also be expressed as:

$$[C_k]_{ind} = \begin{cases}[I_k]_{ind} - [\Phi \otimes I_k]_{ind}, & \text{if } [\Phi \otimes I_k]_{ind} < \theta;\\ \left([I_k]_{ind} - [\Phi \otimes I_k]_{ind}\right)/\,[\Phi \otimes I_k]_{ind}, & \text{otherwise.}\end{cases} \qquad (3)$$

The matrix of the magnitudes of the local contrast values, denoted by $\tilde{C}_k$, can be taken as one local feature matrix at the original scale.
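A minimal sketch of Equation 3, assuming a Gaussian low-pass filter and illustrative values for the filter width sigma and the threshold theta:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def local_contrast(lum, sigma=2.0, theta=0.02):
    """Contrast magnitudes per Equation 3 on a luminance image in [0, 1]."""
    low = gaussian_filter(lum, sigma)        # low-pass response (Phi convolved with I_k)
    band = lum - low                         # approximate band-pass response
    ratio = band / np.maximum(low, 1e-12)    # guard against division by zero
    contrast = np.where(low < theta, band, ratio)
    return np.abs(contrast)                  # magnitude matrix, one local feature matrix
```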
[0032] In step 210, a coarser-scale image of each source image is computed by downsampling the image by a pre-defined factor f in each dimension. Normally, f is taken to be two (2) or a power of two (2). For example, when f = 2, the downsampling can be performed by filtering the image with a low-pass filter and then selecting every other element in each dimension, or by directly selecting every other element in each dimension. Let N_c denote the pre-defined number of scales used in computing local feature matrices, where N_c ≥ 2. Let G_k^n denote the n-th scale of the k-th source image, where 0 ≤ n ≤ N_c − 1. Let $[M]_{\downarrow f}$ denote the operator of downsampling a matrix M by a factor f in each dimension. Then, computing a coarser scale of each source image can be expressed by the following equation:

$$G_k^n = \begin{cases} I_k, & n = 0;\\ [G_k^{n-1}]_{\downarrow f}, & 1 \le n \le N_c - 1.\end{cases} \qquad (4)$$
[0033] In step 215, the same local feature computation performed in step 205 is performed, except that now the computation is performed at a coarser scale of each source image. Let C_k^n denote the matrix of local contrast values computed for the k-th source image I_k at its n-th scale. Then, C_k^n can be computed using the following equation:

$$[C_k^n]_{ind} = \begin{cases}[\Psi \otimes G_k^n]_{ind}, & \text{if } [\Phi \otimes G_k^n]_{ind} < \theta;\\ [\Psi \otimes G_k^n]_{ind}\,/\,[\Phi \otimes G_k^n]_{ind}, & \text{otherwise.}\end{cases} \qquad (5)$$

[0034] The response of a band-pass filter can be approximated by the difference between the original image and the response of a low-pass filter. Therefore, the computation of local contrast values in step 215 can also be expressed as:

$$[C_k^n]_{ind} = \begin{cases}[G_k^n]_{ind} - [\Phi \otimes G_k^n]_{ind}, & \text{if } [\Phi \otimes G_k^n]_{ind} < \theta;\\ \left([G_k^n]_{ind} - [\Phi \otimes G_k^n]_{ind}\right)/\,[\Phi \otimes G_k^n]_{ind}, & \text{otherwise.}\end{cases} \qquad (6)$$

The matrix of the magnitudes of the local contrast values at the n-th scale, denoted by $\tilde{C}_k^n$, can be taken as one local feature matrix at the n-th scale, where 0 ≤ n ≤ N_c − 1.
[0035] In steps 205 and 215, an alternative to directly taking the matrix $\tilde{C}_k^n$ of the magnitudes of the local contrast values as a local feature matrix at the n-th scale is taking a modulated version of $\tilde{C}_k^n$, in order to further suppress noise in image regions with high luminance variations. This modulation can be performed by applying a sigmoid-shaped function to $\tilde{C}_k^n$. One such sigmoid-shaped function is the logistic psychometric function proposed in Garcia-Perez and Alcala-Quintana 2007 (The transducer model for contrast detection and discrimination: formal relations, implications, and an empirical test, Spatial Vision, vol. 20, nos. 1-2, pp. 5-43, 2007), in which case $\tilde{C}_k^n$ is modulated by the following equation, where exp(·) is an exponential function and log(·) is a logarithmic function:

$$[\tilde{C}_k^n]_{ind} \leftarrow 0.5 + \frac{0.5}{1 + \exp\left(-\left(\log\left([\tilde{C}_k^n]_{ind}\right) + 1.5\right)/0.11\right)} \qquad (7)$$
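A one-line sketch of Equation 7; the small constant guarding log(0) is an added assumption:

```python
import numpy as np

def modulate(contrast, eps=1e-12):
    """Logistic modulation of contrast magnitudes per Equation 7."""
    return 0.5 + 0.5 / (1.0 + np.exp(-(np.log(contrast + eps) + 1.5) / 0.11))
```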

[0036] In order to obtain the best representative information from finer scales, $\tilde{C}_k^n$ is updated using the information from the (n−1)-th scale for any n ≥ 1, as depicted in step 220. This information update can be performed using a set function. A "set function" herein refers to a function the input of which is a set of variables. When the input set to the set function has two elements, a binary function can be used. A "binary function" herein refers to a set function the input of which is an ordered pair of variables. Let $\hat{C}_k^n$ denote the updated matrix of the magnitudes of the local contrast values at the n-th scale. $\hat{C}_k^n$ can be taken as one local feature matrix at the n-th scale, where 0 ≤ n ≤ N_c − 1. Let func(·,·) denote a set function. Then, $\hat{C}_k^n$ can be expressed using the following equation:

$$[\hat{C}_k^n]_{ind} = \begin{cases}[\tilde{C}_k^n]_{ind}, & n = 0;\\ \mathrm{func}\left([\tilde{C}_k^n]_{ind}, \left[[\hat{C}_k^{n-1}]_{\downarrow f}\right]_{ind}\right), & 1 \le n \le N_c - 1.\end{cases} \qquad (8)$$

For example, func(·,·) can be a maximum function, which chooses the maximum of two input values. Then, $\hat{C}_k^n$ can be expressed using the following equation:

$$[\hat{C}_k^n]_{ind} = \begin{cases}[\tilde{C}_k^n]_{ind}, & n = 0;\\ \max\left([\tilde{C}_k^n]_{ind}, \left[[\hat{C}_k^{n-1}]_{\downarrow f}\right]_{ind}\right), & 1 \le n \le N_c - 1.\end{cases} \qquad (9)$$
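A sketch of the multi-scale loop, reusing local_contrast from the earlier sketch and assuming f = 2 with Gaussian pre-filtering (Equation 4) and the maximum update of Equation 9:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def downsample(m, sigma=1.0):
    """Low-pass filter, then keep every other element in each dimension (Equation 4, f = 2)."""
    return gaussian_filter(m, sigma)[::2, ::2]

def coarsest_feature(lum, n_scales=3):
    """Propagate contrast magnitudes to the coarsest scale (Equations 4, 8, and 9)."""
    g = lum                              # G_k^0
    feat = local_contrast(g)             # C-tilde_k^0
    for _ in range(1, n_scales):
        g = downsample(g)                                        # G_k^n
        feat = np.maximum(local_contrast(g), downsample(feat))   # Equation 9
    return feat                          # C-hat_k^{N_c - 1}
```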
[0037] Steps 210, 215, and 220 are repeated until N_c, the pre-defined number of scales, is reached (step 225). Because a single local feature is considered, after step 230, step 240 is performed. The normalization is an element-wise normalization across all local feature matrices at the current coarsest scale. Let Y_k^n denote the normalized local feature matrix associated with the n-th scale of the k-th source image. Then, Y_k^{N_c−1} denotes the output from step 240 for the k-th source image. The computation of Y_k^{N_c−1} can be expressed using the following equation:

$$[Y_k^{N_c-1}]_{ind} = \frac{[\hat{C}_k^{N_c-1}]_{ind}}{\sum_{m=0}^{K-1}[\hat{C}_m^{N_c-1}]_{ind}} \qquad (10)$$

[0038] As another example, in an alternative embodiment, step 110, the step of computing local feature matrices, can be performed as follows, considering two local features.
[0039] For color source images, two local features can be considered: local contrast and color saturation. Since local contrast only works in the luminance channel, using local contrast alone may not produce satisfactory results for color images in some cases, for example where high local contrast is achieved at the cost of low colorfulness or color saturation. Objects captured at proper exposures normally exhibit more saturated colors. Therefore, color saturation can be used as another local feature complementary to local contrast for color source images.
[0040] The computation of local contrast values in steps 205, 210, 215, 220, and 225 is the same as that described in the previous embodiment, where a single local feature is considered in step 110 of computing local feature matrices. The computation of color saturation values in steps 205, 210, 215, 220, and 225 and the computation in steps 230, 235, and 240 are described below.
[0041] Let S_k^n denote the matrix of color saturation values computed for the k-th source image I_k at the n-th scale. Let Rd_k^n, Gr_k^n, and Bl_k^n denote the red channel, the green channel, and the blue channel of the k-th source image I_k at the n-th scale, respectively. Then, in steps 205 and 215, S_k^n can be computed for each source image following the color saturation definition in the LHS color space, as expressed in the following equation:

$$[S_k^n]_{ind} = 1 - \frac{\min\left([Rd_k^n]_{ind}, [Gr_k^n]_{ind}, [Bl_k^n]_{ind}\right)}{\left([Rd_k^n]_{ind} + [Gr_k^n]_{ind} + [Bl_k^n]_{ind}\right)/3} \qquad (11)$$

Other alternatives include but are not limited to the definition of color saturation in the HSV (hue, saturation, value) color space and the definition of color saturation in the CIELAB (International Commission on Illumination 1976 L*a*b*) color space.
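A sketch of Equation 11 on an H x W x 3 RGB array in [0, 1]; the epsilon guard is an added assumption:

```python
import numpy as np

def color_saturation(rgb, eps=1e-12):
    """Color saturation per Equation 11: 1 - min(R, G, B) / mean(R, G, B)."""
    return 1.0 - rgb.min(axis=2) / np.maximum(rgb.mean(axis=2), eps)
```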
[0042] In step 210, a coarser-scale image of each source image is computed in the same way as described in the previous embodiment. Steps 210, 215, and 220 are repeated until N_c, the pre-defined number of scales, is reached in step 225. Because two local features are considered, after step 230, step 235 is performed. The two local feature matrices for each source image at the coarsest scale are combined using a binary function comb(·,·). For example, comb(·,·) can be the multiplication function. Let Cf_k^{N_c−1} denote the combined local feature matrix for the k-th source image at the (N_c−1)-th scale. Then, Cf_k^{N_c−1} can be computed using the following equation:

$$[Cf_k^{N_c-1}]_{ind} = \mathrm{comb}\left([\hat{C}_k^{N_c-1}]_{ind}, [S_k^{N_c-1}]_{ind}\right) \qquad (12)$$

[0043] In step 240, element-wise normalization is performed on the combined local feature matrices. The normalized local feature matrix Y_k^{N_c−1} associated with the (N_c−1)-th scale of the k-th source image can be computed using the following equation:

$$[Y_k^{N_c-1}]_{ind} = \frac{[Cf_k^{N_c-1}]_{ind}}{\sum_{m=0}^{K-1}[Cf_m^{N_c-1}]_{ind}} \qquad (13)$$

[0044] Computing Local Weight Matrices
[0045] FIG. 3 further illustrates step 115 of computing local weight matrices.
The computation is performed in a hierarchical manner in order to achieve higher computational speed with lower memory usage. An alternative is to perform the computation only at a single scale without the hierarchy, but this may incur higher computational cost. In step 305, one or more local similarity matrices are computed at the original scale (i.e., the 0th scale). Each element in a local similarity matrix is a similarity value, which represents the degree of similarity between adjacent image regions in the source images.
[0046] In step 310, for each local similarity matrix computed from step 305, a local similarity pyramid is constructed. The local similarity matrix at a coarser scale of a local similarity pyramid is computed by reducing its previous finer scale.
Although standard downsampling schemes can be used, a reduction scheme that respects boundaries between dissimilar image regions in the source images is preferred, as will be described below. The local similarity pyramids have the same height (i.e., number of scales), which can be pre-defined or pre-computed. The height of the local similarity pyramids, denoted by N_s, is larger than N_c, the pre-defined number of scales used in step 110 of computing local feature matrices. A local similarity matrix at the n-th scale of a local similarity pyramid corresponds to the local feature matrices at the n-th scale, where 0 ≤ n ≤ N_c − 1.
[0047] In step 315, the local feature matrices at scale N_c − 1 from step 110 are further reduced to a coarser scale, taking into account the local similarity matrix or matrices. In step 315, a local feature matrix at the current scale is first updated based on the local similarity matrix or matrices at the same scale, and is then reduced in spatial resolution. Although standard downsampling schemes can be used, the purpose of using the local similarity matrix or matrices in step 315 is to respect boundaries between dissimilar image regions in the source images. An element in a coarser-scale local feature matrix can be computed as a weighted average of its corresponding matrix elements in the finer-scale local feature matrix. The weighting factors used in such a weighted average can be determined based on the local similarity matrix or matrices at the finer scale. Step 315 is repeated until scale N_s − 1, the coarsest scale of the local similarity pyramids, is reached (step 320).

[0048] In step 325, the local feature matrices at the current scale are smoothed. Smoothed local feature matrices help to remove unnatural seams in the fused image. For example, the smoothing can be performed by applying one or more of the following schemes: a low-pass filter, such as a Gaussian filter; a relaxation scheme, such as Gauss-Seidel relaxation; or an edge-preserving filter, such as a bilateral filter. A smoothing scheme that uses the local similarity matrix or matrices may be used, for the same reason as mentioned above for step 315, which is to respect boundaries between dissimilar image regions in the source images. At each scale, a smoothing scheme can be applied zero, one, or more times. Step 330 checks whether the finest scale of the local similarity pyramids (i.e., the 0th scale) is reached. If the 0th scale is not reached, step 335 is performed; otherwise, step 340 is performed.
[0049] In step 335, the local feature matrices at the current scale are expanded to a finer scale, and this finer scale becomes the current scale for subsequent processing. A standard upsampling scheme can be used for this expansion operation, which is to perform interpolation between adjacent matrix elements. In addition, a scheme that employs the local similarity matrix or matrices can be used, in order to respect boundaries between dissimilar image regions in the source images. An element in a finer-scale local feature matrix can be computed as a weighted average of its corresponding matrix elements in the coarser-scale local feature matrix. The weighting factors used in such a weighted average can be determined based on the local similarity matrix or matrices at the finer scale. Steps 325 and 335 are repeated until the 0th scale is reached (step 330). In step 340, the finest-scale local feature matrices are normalized to form the local weight matrices.
[0050] For example, in an embodiment of the invention, step 115, the step of computing local weight matrices, can be performed as follows.
[0051] In step 305, each local similarity matrix captures local similarities along one direction. One or more similarity matrices can be used. For d-dimensional images, any direction in the d-dimensional space can be considered, such as the horizontal direction, the vertical direction, and the diagonal directions. Let W_d^n denote the similarity matrix for the direction d at the n-th scale. Let $[M]_{ind_d}$ denote the matrix element that is closest to the matrix element [M]_ind in the direction d. Then, W_d^n can be computed using the following equation:

$$[W_d^n]_{ind} = \exp\left(-\frac{\sum_{k=0}^{K-1}\mathrm{dist}\left([I_k]_{ind}, [I_k]_{ind_d}\right)}{\sigma}\right) \qquad (14)$$

where dist(·,·) denotes a binary function that computes the distance between two input entities, and σ is a free parameter. For example, dist(·,·) can be the function computing the Euclidean distance, the Manhattan distance, or the Mahalanobis distance. W_d^n can be computed based on all source images, as expressed in Equation 14, or it can be computed based on some of the source images.
[0052] For the 2D case, two local similarity matrices that capture local similarities along two directions can be considered: a horizontal similarity matrix that stores the similarity values between adjacent image regions along the horizontal direction, and a vertical similarity matrix that stores the similarity values between adjacent image regions along the vertical direction. Let W_h^n denote the horizontal similarity matrix at the n-th scale. Then, W_h^n can be computed using the following equation:

$$[W_h^n]_{i,j} = \exp\left(-\frac{\sum_{k=0}^{K-1}\mathrm{dist}\left([I_k]_{i,j}, [I_k]_{i,j+1}\right)}{\sigma}\right) \qquad (15)$$

Let W_v^n denote the vertical similarity matrix at the n-th scale. Then, W_v^n can be computed using the following equation:

$$[W_v^n]_{i,j} = \exp\left(-\frac{\sum_{k=0}^{K-1}\mathrm{dist}\left([I_k]_{i,j}, [I_k]_{i+1,j}\right)}{\sigma}\right) \qquad (16)$$

The horizontal and vertical similarity matrices can be computed based on all source images, as expressed in Equation 15 and Equation 16, or they can be computed based on some of the source images.
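A sketch of Equations 15 and 16 at the 0th scale, assuming the Manhattan distance and an illustrative σ; note that the horizontal matrix has one fewer column, and the vertical matrix one fewer row, than the images:

```python
import numpy as np

def similarity_matrices(sources, sigma=0.2):
    """Horizontal and vertical similarity matrices per Equations 15 and 16."""
    stack = np.stack(sources).astype(np.float64)                 # K x H x W luminance images
    dh = np.abs(stack[:, :, :-1] - stack[:, :, 1:]).sum(axis=0)  # distance to right neighbor
    dv = np.abs(stack[:, :-1, :] - stack[:, 1:, :]).sum(axis=0)  # distance to lower neighbor
    return np.exp(-dh / sigma), np.exp(-dv / sigma)              # W_h, W_v
```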
[0053] In step 310, for each local similarity matrix computed from step 305, a local similarity pyramid is constructed. A coarser-scale similarity matrix can be computed using a set function. When the input set to the set function has two elements, a binary function can be used. One such binary function is the minimum function, in which case the reduction scheme for reducing local similarity matrices at the n-th scale, where 0 ≤ n ≤ N_s − 2, can be expressed as the following equation:

$$[W_d^{n+1}]_{ind} = \min\left([W_d^n]_{2*ind}, [W_d^n]_{(2*ind)_d}\right) \qquad (17)$$

where * denotes the operator of multiplication, so that 2*ind scales each component of the index vector by two.
[0054] For the 2D case, the horizontal similarity matrix at a coarser scale of the horizontal similarity pyramid and the vertical similarity matrix at a coarser scale of the vertical similarity pyramid can be computed using a binary function. One such binary function is the minimum function, in which case the reduction scheme for reducing local similarity matrices at the n-th scale, where 0 ≤ n ≤ N_s − 2, can be expressed as the following equations:

$$[W_h^{n+1}]_{i,j} = \min\left([W_h^n]_{2i,2j}, [W_h^n]_{2i,2j+1}\right) \qquad (18)$$

$$[W_v^{n+1}]_{i,j} = \min\left([W_v^n]_{2i,2j}, [W_v^n]_{2i+1,2j}\right) \qquad (19)$$
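A sketch of one reduction step per Equations 18 and 19; trimming to even sizes stands in for the padding mentioned below:

```python
import numpy as np

def reduce_similarity(w_h, w_v):
    """One similarity-pyramid reduction step (Equations 18 and 19, minimum function)."""
    h, w = (w_h.shape[0] // 2) * 2, (w_h.shape[1] // 2) * 2
    wh = w_h[:h, :w]
    h2, w2 = (w_v.shape[0] // 2) * 2, (w_v.shape[1] // 2) * 2
    wv = w_v[:h2, :w2]
    w_h_coarse = np.minimum(wh[::2, ::2], wh[::2, 1::2])   # Equation 18
    w_v_coarse = np.minimum(wv[::2, ::2], wv[1::2, ::2])   # Equation 19
    return w_h_coarse, w_v_coarse
```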
[0055] In step 315, the local feature matrices at scale N_c − 1 from step 110 are further reduced to a coarser scale, taking into account the local similarity matrices. Rather than the previously described reduction schemes, a modified version of the restriction operator of the multigrid linear system solver in Grady et al. 2005 (A geometric multigrid approach to solving the 2D inhomogeneous Laplace equation with internal Dirichlet boundary conditions, IEEE International Conference on Image Processing, vol. 2, pp. 642-645, 2005) can be used as the reduction scheme for the local feature matrices. This reduction scheme for reducing a local feature matrix Y_k^n at the n-th scale, where n ≤ N_s − 2, can be expressed in the following three equations for the 2D case, where $\tilde{Y}_k^n$ denotes an intermediate matrix of similarity-weighted averages:

$$[\tilde{Y}_k^n]_{2i,2j+1} = \frac{[W_h^n]_{2i,2j}\,[Y_k^n]_{2i,2j} + [W_h^n]_{2i,2j+1}\,[Y_k^n]_{2i,2j+2}}{[W_h^n]_{2i,2j} + [W_h^n]_{2i,2j+1}} \qquad (20)$$

$$[\tilde{Y}_k^n]_{2i+1,2j} = \frac{[W_v^n]_{2i,2j}\,[Y_k^n]_{2i,2j} + [W_v^n]_{2i+1,2j}\,[Y_k^n]_{2i+2,2j}}{[W_v^n]_{2i,2j} + [W_v^n]_{2i+1,2j}} \qquad (21)$$

$$[Y_k^{n+1}]_{i,j} = \frac{[Y_k^n]_{2i+1,2j+1} + [W_h^n]_{2i+1,2j}\,[\tilde{Y}_k^n]_{2i+1,2j} + [W_h^n]_{2i+1,2j+1}\,[\tilde{Y}_k^n]_{2i+1,2j+2} + [W_v^n]_{2i,2j+1}\,[\tilde{Y}_k^n]_{2i,2j+1} + [W_v^n]_{2i+1,2j+1}\,[\tilde{Y}_k^n]_{2i+2,2j+1}}{1 + [W_h^n]_{2i+1,2j} + [W_h^n]_{2i+1,2j+1} + [W_v^n]_{2i,2j+1} + [W_v^n]_{2i+1,2j+1}} \qquad (22)$$

Padding to W_h^n, W_v^n, and Y_k^n can be added when a matrix index is out of range. Step 315 is repeated until scale N_s − 1 is reached (step 320).
[0056] In step 325, the local feature matrices at the n-th scale can be smoothed based on the local similarity matrices at that scale, using the following equation for the 2D case, where γ is a free parameter:

$$[Y_k^n]_{i,j} \leftarrow \frac{\gamma\,[Y_k^n]_{i,j} + [W_h^n]_{i,j-1}\,[Y_k^n]_{i,j-1} + [W_h^n]_{i,j}\,[Y_k^n]_{i,j+1} + [W_v^n]_{i-1,j}\,[Y_k^n]_{i-1,j} + [W_v^n]_{i,j}\,[Y_k^n]_{i+1,j}}{\gamma + [W_h^n]_{i,j-1} + [W_h^n]_{i,j} + [W_v^n]_{i-1,j} + [W_v^n]_{i,j}} \qquad (23)$$
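A sketch of one smoothing pass consistent with Equation 23 as written above; the zero-padding at the borders and the value of gamma are illustrative assumptions:

```python
import numpy as np

def smooth_once(y, w_h, w_v, gamma=1.0):
    """One similarity-weighted smoothing pass over a local feature matrix.
    w_h[i, j] weights the pair ((i, j), (i, j+1)); w_v[i, j] weights ((i, j), (i+1, j))."""
    num = gamma * y.astype(np.float64)
    den = np.full(y.shape, gamma, dtype=np.float64)
    num[:, 1:] += w_h * y[:, :-1]; den[:, 1:] += w_h    # left neighbor, weight W_h[i, j-1]
    num[:, :-1] += w_h * y[:, 1:]; den[:, :-1] += w_h   # right neighbor, weight W_h[i, j]
    num[1:, :] += w_v * y[:-1, :]; den[1:, :] += w_v    # upper neighbor, weight W_v[i-1, j]
    num[:-1, :] += w_v * y[1:, :]; den[:-1, :] += w_v   # lower neighbor, weight W_v[i, j]
    return num / den
```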
[0057] Step 330 checks whether the finest scale of the local similarity pyramids (i.e., the 0th scale) is reached. If the 0th scale is not reached, step 335 is performed; otherwise, step 340 is performed. In step 335, the local feature matrices at the n-th scale are expanded to a finer scale. Rather than the previously described expansion schemes, a modified version of the prolongation operator of the multigrid linear system solver in Grady et al. 2005 (A geometric multigrid approach to solving the 2D inhomogeneous Laplace equation with internal Dirichlet boundary conditions, IEEE International Conference on Image Processing, vol. 2, pp. 642-645, 2005) can be used as an expansion scheme for the local feature matrices. This expansion scheme for expanding a local feature matrix Y_k^n at the n-th scale, where 1 ≤ n ≤ N_s − 1, can be expressed in the following four equations for the 2D case:

$$[Y_k^{n-1}]_{2i+1,2j+1} = [Y_k^n]_{i,j} \qquad (24)$$

$$[Y_k^{n-1}]_{2i+1,2j} = \frac{[W_h^{n-1}]_{2i+1,2j-1}\,[Y_k^{n-1}]_{2i+1,2j-1} + [W_h^{n-1}]_{2i+1,2j}\,[Y_k^{n-1}]_{2i+1,2j+1}}{[W_h^{n-1}]_{2i+1,2j-1} + [W_h^{n-1}]_{2i+1,2j}} \qquad (25)$$

$$[Y_k^{n-1}]_{2i,2j+1} = \frac{[W_v^{n-1}]_{2i-1,2j+1}\,[Y_k^{n-1}]_{2i-1,2j+1} + [W_v^{n-1}]_{2i,2j+1}\,[Y_k^{n-1}]_{2i+1,2j+1}}{[W_v^{n-1}]_{2i-1,2j+1} + [W_v^{n-1}]_{2i,2j+1}} \qquad (26)$$

$$[Y_k^{n-1}]_{2i,2j} = \frac{[W_h^{n-1}]_{2i,2j-1}\,[Y_k^{n-1}]_{2i,2j-1} + [W_h^{n-1}]_{2i,2j}\,[Y_k^{n-1}]_{2i,2j+1} + [W_v^{n-1}]_{2i-1,2j}\,[Y_k^{n-1}]_{2i-1,2j} + [W_v^{n-1}]_{2i,2j}\,[Y_k^{n-1}]_{2i+1,2j}}{[W_h^{n-1}]_{2i,2j-1} + [W_h^{n-1}]_{2i,2j} + [W_v^{n-1}]_{2i-1,2j} + [W_v^{n-1}]_{2i,2j}} \qquad (27)$$

Padding to W_h^{n-1}, W_v^{n-1}, and Y_k^{n-1} can be added when a matrix index is out of range.
[0058] Steps 325 and 335 are repeated until the finest scale of the local similarity pyramids (i.e., the 0th scale) is reached (step 330). In step 340, element-wise normalization is performed on the finest-scale local feature matrices to form the local weight matrices. The local weight matrix Uk associated with the 0th scale of the k th source image can be computed using the following equation:

$$[U_k]_{ind} = \frac{[Y_k^0]_{ind}}{\sum_{m=0}^{K-1}[Y_m^0]_{ind}} \qquad (28)$$

[0059] Computing Global Weight Vectors and Computing Final Weight Matrices
[0060] FIG. 4 further illustrates step 120 of computing global weight vectors.
In step 405, one or more global features are computed for individual source images, for some source images, and/or for all source images. A global feature herein represents a certain characteristic of one or more source images. Examples of a global feature include but are not limited to the average luminance of one, some, or all source images; the average color saturation of one, some, or all source images; the difference between the average luminance of one non-empty set of source images and that of another non-empty set of source images; and the difference between the average color saturation of one non-empty set of source images and that of another non-empty set of source images. In step 410, the global features are transformed using pre-defined function(s), which results in global weights. In step 415, a global weight vector for each source image can be constructed by concatenating the global weights associated with that source image. The elements of a global weight vector can be stored as a vector or in a matrix. Unlike the local weight matrices, which only affect local characteristics of the fused image, the global weight vectors affect global characteristics of the fused image.
[0061] In order for both local features and global features to contribute to the fused image, the local weight matrices and the global weight vectors are combined using a pre-defined function to form a final weight matrix for each source image, as depicted in step 125 in FIG. 1.
[0062] For example, in an embodiment of the invention, steps 120 and 125 can be performed as follows, where the global weight vectors contribute to enhanced global contrast of the fused image.

[0063] In step 405, the average luminance of all source images is computed. Let L denote the average luminance of all source images. In step 410, L can be transformed by two non-linear functions, which results in two global weights, denoted by V_0 and V_1. The two non-linear functions can take the following forms:

$$V_0 = \alpha_0 + \beta_0 \exp(-L) \qquad (29)$$

$$V_1 = \alpha_1 + \beta_1 \exp(-L) \qquad (30)$$

where α_0, α_1, β_0, and β_1 are free parameters. Let V_k denote the global weight vector for the k-th source image. In step 415, V_k can be constructed by concatenating V_0 and V_1, i.e., V_k = (V_0, V_1).
[0064] In step 125, the global weight vectors (the V_k's) and the local weight matrices (the U_k's) are combined to generate the final weight matrices (the X_k's). Let [T]_i denote the i-th element in a vector T. The combination function can take the following form:

$$X_k = [V_k]_0\, U_k - [V_k]_1 \qquad (31)$$

[0065] In this embodiment, the contribution of the global feature L to the fused image is enhanced global contrast. If L is low, it indicates that the amount of under-exposed image regions in the source images may be large. Hence, in step 410, Equation 29 results in a larger V_0 and Equation 30 results in a larger V_1, when β_0 and β_1 are positive. When the global weight vectors are combined with the local weight matrices in step 125 using Equation 31, [V_k]_0 helps to increase the global contrast of a fused image, i.e., to extend the dynamic range of a fused image; and [V_k]_1 helps to select the middle portion of the extended dynamic range, avoiding over- or under-exposure in a fused image.
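A sketch of Equations 29 through 31; the default parameter values are arbitrary placeholders for the free parameters α_0, α_1, β_0, and β_1:

```python
import numpy as np

def final_weights(local_weights, sources, alpha0=1.0, beta0=1.0, alpha1=0.0, beta1=0.1):
    """Combine local weight matrices with the two global weights (Equations 29-31)."""
    l_bar = np.mean([img.mean() for img in sources])  # average luminance of all sources
    v0 = alpha0 + beta0 * np.exp(-l_bar)              # Equation 29
    v1 = alpha1 + beta1 * np.exp(-l_bar)              # Equation 30
    return [v0 * u - v1 for u in local_weights]       # Equation 31
```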

[0066] As another example, in an alternative embodiment, steps 120 and 125 can be performed as follows, where global weight vectors contribute to both enhanced global contrast and enhanced brightness in the fused image.
[0067] In step 405, the average luminance of each source image and the average luminance of all source images are computed. Let L_k denote the average luminance of the k-th source image, and let L still denote the average luminance of all source images. In step 410, L is transformed in the same way as previously described, which results in two global weights, V_0 and V_1. Let V_{2,k} denote the global weight computed from L_k. V_{2,k} can be computed by transforming L_k using the following function:

$$V_{2,k} = \begin{cases}\delta L_k, & L_k > \eta;\\ 0, & \text{otherwise;}\end{cases} \qquad (32)$$

where δ and η are free parameters. In step 415, V_k can be constructed by concatenating V_0, V_1, and V_{2,k}, i.e., V_k = (V_0, V_1, V_{2,k}).
[0068] In step 125, a final weight matrix X_k can be computed by combining the global weight vectors (the V_k's) and the local weight matrices (the U_k's) in the following way:

$$X_k = [V_k]_0\, U_k - [V_k]_1 + [V_k]_2 \qquad (33)$$

[0069] In this embodiment, the contribution of the global feature L to the fused image is the same as that in the previous embodiment, i.e., enhanced global contrast. The contribution of the global feature L_k to the fused image is enhanced brightness. The V_{2,k}'s computed from Equation 32 in step 410 favor those image regions from source images with higher average luminance values, i.e., from brighter source images, when δ is positive. When the global weight vectors are combined with the local weight matrices in step 125 using Equation 33, image regions with more local features in those brighter source images can receive higher weights from the final weight matrices, so that those image regions look brighter in a fused image.
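A sketch of Equations 32 and 33, extending the previous sketch with the per-image brightness weight; the δ and η defaults are placeholders:

```python
import numpy as np

def final_weights_brightness(local_weights, sources, v0, v1, delta=0.5, eta=0.3):
    """Final weight matrices with the brightness term (Equations 32 and 33)."""
    out = []
    for u, img in zip(local_weights, sources):
        l_k = img.mean()                          # average luminance of the k-th source
        v2k = delta * l_k if l_k > eta else 0.0   # Equation 32
        out.append(v0 * u - v1 + v2k)             # Equation 33
    return out
```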
[0070] A System
[0071] FIG. 5 depicts an exemplary image fusion system 500. The image fusion system 500 includes image source(s) 505, a computing device 510, and display(s) 545.
The image source(s) 505 can be one or more image sensors or one or more image storage devices. Examples of the computing device 510 include but are not limited to a personal computer, server, computer cluster, and a smart phone. The computing device 510 includes three interconnected components: processor(s) 515, an image fusion module 520, and memory 525. Processor(s) 515 can be a single processor or multiple processors. Examples of a processor include but are not limited to a central processing unit (CPU) and a graphics processing unit (GPU). Memory 525 stores source images 530, fused image(s) 535, and intermediate data 540. Intermediate data 540 are used and/or generated by the processor(s) 515 and the image fusion module 520. The image fusion module 520 contains processor executable instructions, which can be executed by the processor(s) 515 one or more times to compute fused image(s) 535 from the source images 530 following the image fusion procedure 100. Image fusion module 520 may be incorporated as hardware, software or both, and may be within computing device 510 or in communication with computing device 510.
[0072] The system 500 functions in the following manner. The image source(s) 505 send source images 530 to the computing device 510. These source images 530 are stored in the memory 525. The image fusion module 520 loads instructions onto the processor(s) 515. The processor(s) 515 execute the loaded instructions and generate fused image(s) 535. The fused image(s) 535 are then sent to display(s) 545.
[0073] The above-described embodiments have been provided as examples, for clarity in understanding the invention. A person with skill in the art will recognize that alterations, modifications and variations may be effected to the embodiments described above while remaining within the scope of the invention as defined by claims appended hereto.


Claims (31)

1. A method of producing a fused image, comprising the steps of:
a) providing a plurality of source images;
b) determining a local feature matrix for each of the source images, for a feature of each of the source images;
c) determining a local weight matrix for each of the source images, using the local feature matrix associated with the source image;
d) determining a global weight vector for each of the source images;
e) determining a final weight matrix for each of the source images, using the local weight matrix and global weight vector associated with the source image; and
f) using the final weight matrices for the source images to combine the source images into a fused image.
2. The method of claim 1 wherein in step b) a plurality of scales are included when determining the local feature matrix for each of the source images.
3. The method of claim 2 wherein in step b) a local contrast value is determined for each of the source images.
4. The method of claim 3, wherein the local contrast value is modulated by a sigmoid-shaped function.
5. The method of claim 3 wherein each local contrast value has a magnitude, and the magnitude is used to determine the local feature matrix.
6. The method of claim 5 wherein in step c), $\hat{C}_k^n$ denotes a local feature matrix for the k-th source image at the n-th scale, and is expressed as:

$$[\hat{C}_k^n]_{ind} = \begin{cases}[\tilde{C}_k^n]_{ind}, & n = 0;\\ \mathrm{func}\left([\tilde{C}_k^n]_{ind}, \left[[\hat{C}_k^{n-1}]_{\downarrow f}\right]_{ind}\right), & 1 \le n \le N_c - 1;\end{cases}$$

wherein N_c is a pre-defined number of scales used in determining the local feature matrices; $\tilde{C}_k^n$ is a matrix of the magnitudes of the local contrast values; [M]_ind denotes an element of a matrix M, where the location of the element in the matrix M is indicated by an index vector ind; $[M]_{\downarrow f}$ denotes an operator of downsampling a matrix M by a factor f in each dimension; and func(·,·) is a set function.
7. The method of claim 6 wherein func(·,·) is a maximum function for selection of the maximum of first and second input values, so that $\hat{C}_k^n$ is expressed as:

$$[\hat{C}_k^n]_{ind} = \begin{cases}[\tilde{C}_k^n]_{ind}, & n = 0;\\ \max\left([\tilde{C}_k^n]_{ind}, \left[[\hat{C}_k^{n-1}]_{\downarrow f}\right]_{ind}\right), & 1 \le n \le N_c - 1.\end{cases}$$
8. The method of claim 3 wherein in step b) the local contrast values are expressed as:

$$[C_k^n]_{ind} = \begin{cases}[\Psi \otimes G_k^n]_{ind}, & \text{if } [\Phi \otimes G_k^n]_{ind} < \theta;\\ [\Psi \otimes G_k^n]_{ind}\,/\,[\Phi \otimes G_k^n]_{ind}, & \text{otherwise;}\end{cases}$$

wherein C_k^n denotes the matrix of local contrast values determined for a k-th source image I_k at an n-th scale; [M]_ind denotes an element of a matrix M, where the location of the element in the matrix M is indicated by an index vector ind; Ψ denotes a filter; Φ denotes a low-pass filter; ⊗ denotes an operator of convolution; G_k^n denotes the n-th scale of the k-th source image; and θ is a free parameter.
9. The method of claim 3 wherein color saturation is used with the local contrast values.
10. The method of claim 1, wherein in step c) a plurality of scales are included when determining the local weight matrix for each of the source images.
11. The method of claim 10 wherein a local similarity matrix is constructed to store similarity values between adjacent image regions.
12. The method of claim 11 wherein a local similarity pyramid is constructed for each local similarity matrix.
13. The method of claim 12 wherein a set function is used to compute the local similarity pyramids.
14. The method of claim 12 wherein a binary function is used to compute the local similarity pyramids.
15. The method of claim 14 wherein the binary function is:

$$[W_d^{n+1}]_{ind} = \min\left([W_d^n]_{2*ind}, [W_d^n]_{(2*ind)_d}\right)$$

wherein W_d^n denotes the similarity matrix for the direction d at the n-th scale; [M]_ind denotes an element of a matrix M, where a location of this element in the matrix M is indicated by an index vector ind; $[M]_{ind_d}$ denotes the matrix element that is closest to the matrix element [M]_ind in the direction d; and * denotes the operator of multiplication.
16. The method of claim 1 wherein in step d):
d.1) at least a global feature is determined for at least one source image;
d.2) the global feature is transformed using a pre-defined function; and d.3) a global weight vector is constructed for each source image using the transformed global feature.
17. The method of claim 1 wherein in step e) the local weight matrix and global weight vector for each source image are combined using a predetermined combination function.
18. The method of claim 16 wherein the global feature is an average luminance of all of the source images.
19. The method of claim 18 wherein the average luminance, L, is transformed to obtain two global weights, denoted by V_0 and V_1, by two non-linear functions:

$$V_0 = \alpha_0 + \beta_0 \exp(-L)$$

$$V_1 = \alpha_1 + \beta_1 \exp(-L)$$

wherein α_0, α_1, β_0, and β_1 are free parameters.
20. The method of claim 19 wherein the global weight vector, V_k, for the k-th source image is constructed by concatenating V_0 and V_1, so that V_k = (V_0, V_1).
21. The method of claim 19 wherein a final weight matrix X_k is computed by combining the global weight vectors (the V_k's) and the local weight matrices (the U_k's) using:

$$X_k = [V_k]_0\, U_k - [V_k]_1$$

wherein [T]_i denotes the i-th element in a vector T.
22. The method of claim 16 wherein the global feature is an average luminance of each source image.
23. The method of claim 22 wherein a global weight, V_{2,k}, is calculated by:

$$V_{2,k} = \begin{cases}\delta L_k, & L_k > \eta;\\ 0, & \text{otherwise;}\end{cases}$$

wherein L_k denotes the average luminance of the k-th source image, and δ and η are free parameters.
24. The method of claim 23 wherein the global weight vector, V_k, for the k-th source image is constructed by concatenating V_0, V_1, and V_{2,k}, so that V_k = (V_0, V_1, V_{2,k}), wherein V_0 and V_1 are global weights determined by two non-linear functions:

$$V_0 = \alpha_0 + \beta_0 \exp(-L)$$

$$V_1 = \alpha_1 + \beta_1 \exp(-L)$$

and α_0, α_1, β_0, and β_1 are free parameters.
25. The method of claim 24 wherein a final weight matrix X_k is computed by combining the global weight vectors (the V_k's) and the local weight matrices (the U_k's) using:

$$X_k = [V_k]_0\, U_k - [V_k]_1 + [V_k]_2$$

wherein [T]_i denotes the i-th element in a vector T.
26. A system for fusing images, comprising:

a) a computing device having a processor and a memory; the memory configured to store a plurality of source images;
b) means for fusing an image, comprising:
i. means for determining a local feature matrix for each of the source images, for a feature of each of the source images;
ii. means for determining a local weight matrix for each of the source images, using the local feature matrix associated with the source image;
iii. means for determining a global weight vector for each of the source images;
iv. means for determining a final weight matrix for each of the source images, using the local weight matrix and global weight vector associated with the source image; and
v. means for using the final weight matrices for the source images to combine the source images into a fused image.
27. The system of claim 26 further comprising an image source for providing the source images.
28. The system of claim 27 wherein the image source is an image sensor.
29. The system of claim 28 wherein the means for fusing an image is stored within processor executable instructions of the computing device.
30. The system of claim 28 wherein the means for fusing an image are within a software module.
31. The system of claim 28 wherein the means for fusing an image are within a hardware module in communication with the processor.
CA2822150A 2013-07-26 2013-07-26 Method and system for fusing multiple images Active CA2822150C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CA2822150A CA2822150C (en) 2013-07-26 2013-07-26 Method and system for fusing multiple images

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CA2822150A CA2822150C (en) 2013-07-26 2013-07-26 Method and system for fusing multiple images

Publications (2)

Publication Number Publication Date
CA2822150A1 (en) 2015-01-26
CA2822150C CA2822150C (en) 2016-04-12

Family

ID=52471844

Family Applications (1)

Application Number Title Priority Date Filing Date
CA2822150A Active CA2822150C (en) 2013-07-26 2013-07-26 Method and system for fusing multiple images

Country Status (1)

Country Link
CA (1) CA2822150C (en)


Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110223264A (en) * 2019-04-26 2019-09-10 中北大学 Image difference characteristic attribute fusion availability distributed structure and synthetic method based on intuition possibility collection
CN110223264B (en) * 2019-04-26 2022-03-25 中北大学 Image difference characteristic attribute fusion validity distribution structure based on intuition possibility set and synthesis method
CN110334779A (en) * 2019-07-16 2019-10-15 大连海事大学 A kind of multi-focus image fusing method based on PSPNet detail extraction
CN110334779B (en) * 2019-07-16 2022-09-30 大连海事大学 Multi-focus image fusion method based on PSPNet detail extraction
CN111163570A (en) * 2019-12-30 2020-05-15 南京东晖光电有限公司 NB-IOT (NB-IOT) -based indoor lamp combination regulation and control system and method
CN111163570B (en) * 2019-12-30 2023-09-08 南京东晖光电有限公司 NB-IOT-based indoor lamp combination regulation and control system and method
CN113687421A (en) * 2021-08-23 2021-11-23 中国石油大学(北京) Data processing method and device for seismic signals, electronic equipment and storage medium
CN114550236A (en) * 2022-01-24 2022-05-27 北京百度网讯科技有限公司 Image recognition and training method, device, equipment and storage medium of model thereof
CN114550236B (en) * 2022-01-24 2023-08-15 北京百度网讯科技有限公司 Training method, device, equipment and storage medium for image recognition and model thereof

Also Published As

Publication number Publication date
CA2822150C (en) 2016-04-12

Similar Documents

Publication Publication Date Title
US9053558B2 (en) Method and system for fusing multiple images
US12008797B2 (en) Image segmentation method and image processing apparatus
KR102574141B1 (en) Image display method and device
US10339643B2 (en) Algorithm and device for image processing
Vanmali et al. Visible and NIR image fusion using weight-map-guided Laplacian–Gaussian pyramid for improving scene visibility
CN111402146B (en) Image processing method and image processing apparatus
US20220188999A1 (en) Image enhancement method and apparatus
CA2822150C (en) Method and system for fusing multiple images
US20220164926A1 (en) Method and device for joint denoising and demosaicing using neural network
CN111028170B (en) Image processing method, image processing apparatus, electronic device, and readable storage medium
US20240062530A1 (en) Deep perceptual image enhancement
Mondal et al. Image dehazing by joint estimation of transmittance and airlight using bi-directional consistency loss minimized FCN
KR20090096684A (en) A method, an apparatus and a computer-readable medium for processing a night vision image dataset
CN111914997A (en) Method for training neural network, image processing method and device
US11145032B2 (en) Image processing apparatus, method and storage medium for reducing color noise and false color
WO2023030139A1 (en) Image fusion method, electronic device, and storage medium
CN114581318B (en) Low-illumination image enhancement method and system
US10217193B2 (en) Image processing apparatus, image capturing apparatus, and storage medium that stores image processing program
WO2020107308A1 (en) Low-light-level image rapid enhancement method and apparatus based on retinex
Yu et al. Continuous digital zooming of asymmetric dual camera images using registration and variational image restoration
US11074677B2 (en) Dynamic range extension of partially clipped pixels in captured images
WO2023028866A1 (en) Image processing method and apparatus, and vehicle
Kondo et al. Edge preserving super-resolution with details based on similar texture synthesis
CN117455785A (en) Image processing method and device and terminal equipment
CN117641135A (en) Image processing method and device, equipment and storage medium

Legal Events

Date Code Title Description
EEER Examination request

Effective date: 20150618