CN113313661A - Image fusion method and device, electronic equipment and computer readable storage medium - Google Patents

Image fusion method and device, electronic equipment and computer readable storage medium

Info

Publication number
CN113313661A
CN113313661A (application CN202110580639.1A)
Authority
CN
China
Prior art keywords
image
processed
pixel point
fusion
fusion weight
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110580639.1A
Other languages
Chinese (zh)
Inventor
Li Hongwei (李宏伟)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202110580639.1A priority Critical patent/CN113313661A/en
Publication of CN113313661A publication Critical patent/CN113313661A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/25 Fusion techniques
    • G06F 18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/20 Image enhancement or restoration by the use of local operators
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/20 Image enhancement or restoration by the use of local operators
    • G06T 5/30 Erosion or dilatation, e.g. thinning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/40 Analysis of texture
    • G06T 7/41 Analysis of texture based on statistical description of texture
    • G06T 7/44 Analysis of texture based on statistical description of texture using image operators, e.g. filters, edge density metrics or local histograms
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/40 Analysis of texture
    • G06T 7/41 Analysis of texture based on statistical description of texture
    • G06T 7/45 Analysis of texture based on statistical description of texture using co-occurrence matrix computation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/56 Extraction of image or video features relating to colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/60 Extraction of image or video features relating to illumination properties, e.g. using a reflectance or lighting model
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20212 Image combination
    • G06T 2207/20221 Image fusion; Image merging
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V 10/467 Encoded features or binary features, e.g. local binary patterns [LBP]

Abstract

The embodiment of the application discloses an image fusion method and apparatus, an electronic device, and a computer-readable storage medium. The method includes: calculating a first fusion weight corresponding to each pixel point in an image to be processed from image features of the image to be processed; acquiring external information corresponding to the image to be processed; calculating a second fusion weight corresponding to the image to be processed according to the external information; determining a third fusion weight corresponding to each pixel point in the image to be processed according to the first fusion weight and the second fusion weight; and fusing the image to be processed with a reference image based on the third fusion weight of each pixel point to obtain a fused image. The image fusion method and apparatus, electronic device, and computer-readable storage medium can improve the fusion effect of image fusion and effectively alleviate the ghosting problem caused by fusing multi-frame images.

Description

Image fusion method and device, electronic equipment and computer readable storage medium
Technical Field
The present application relates to the field of image technologies, and in particular, to an image fusion method and apparatus, an electronic device, and a computer-readable storage medium.
Background
Image fusion processes multiple frames of images by means of image processing, computing, and related techniques, extracts the useful information in each image, and synthesizes an image of higher quality. In current imaging technology, many image problems can be addressed by image fusion, for example, synthesizing a High Dynamic Range (HDR) image through image fusion, denoising an image through image fusion, or performing super-resolution reconstruction through image fusion. At present, when multiple frames of images are fused, the fusion effect is often poor.
Disclosure of Invention
The embodiment of the application discloses an image fusion method, an image fusion device, electronic equipment and a computer readable storage medium, which can improve the fusion effect of image fusion and effectively solve the ghost problem caused by fusing multi-frame images.
The embodiment of the application discloses an image fusion method, which comprises the following steps:
extracting image features from an image to be processed, and calculating first fusion weights corresponding to all pixel points in the image to be processed according to the image features;
acquiring external information corresponding to the image to be processed, wherein the external information refers to information which is acquired through external equipment and can influence the imaging effect of the image to be processed;
calculating a second fusion weight corresponding to the image to be processed according to the external information;
determining a third fusion weight corresponding to each pixel point in the image to be processed according to the first fusion weight and the second fusion weight;
and fusing the image to be processed and the reference image based on the third fusion weight corresponding to each pixel point in the image to be processed to obtain a fused image.
The embodiment of the application discloses an image fusion apparatus, which includes:
the first calculation module is used for extracting image features from an image to be processed and calculating first fusion weights corresponding to all pixel points in the image to be processed according to the image features;
the external information acquisition module is used for acquiring external information corresponding to the image to be processed, wherein the external information refers to information which is acquired through external equipment and can influence the imaging effect of the image to be processed;
the second calculation module is used for calculating a second fusion weight corresponding to the image to be processed according to the external information;
the determining module is used for determining a third fusion weight corresponding to each pixel point in the image to be processed according to the first fusion weight and the second fusion weight;
and the fusion module is used for fusing the image to be processed and the reference image based on the third fusion weight corresponding to each pixel point in the image to be processed to obtain a fusion image.
The embodiment of the application discloses an electronic device, which comprises a memory and a processor, wherein a computer program is stored in the memory, and when executed by the processor, the computer program causes the processor to implement the method described above.
An embodiment of the application discloses a computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, implements the method as described above.
In the image fusion method and apparatus, electronic device, and computer-readable storage medium disclosed in the embodiments of the application, a first fusion weight corresponding to each pixel point in the image to be processed is calculated from image features of the image to be processed, and a second fusion weight corresponding to the image to be processed is calculated from external information corresponding to the image, where the external information refers to information collected by an external device that can affect the imaging effect of the image to be processed. A third fusion weight corresponding to each pixel point is determined from the first fusion weight and the second fusion weight, and the image to be processed is fused with a reference image based on the third fusion weight of each pixel point to obtain a fused image. By combining the image features of the image to be processed with the external information that affects its imaging, the fusion weight of each pixel point can be determined jointly, so that a fused image of higher quality is obtained, the fusion effect of image fusion is improved, and the ghosting problem caused by fusing multi-frame images can be effectively alleviated.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings required by the embodiments are briefly described below. Obviously, the drawings in the following description show only some embodiments of the present application, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a block diagram of image processing circuitry in one embodiment;
FIG. 2 is a flow diagram of a method of image fusion in one embodiment;
FIG. 3 is a flow chart of a method of image fusion in another embodiment;
FIG. 4 is a diagram illustrating the fusion of N intermediate difference images in one embodiment;
FIG. 5 is a diagram illustrating fusion of difference images corresponding to reference images of frames according to an embodiment;
FIG. 6 is a block diagram of an image fusion apparatus in one embodiment;
fig. 7 is a block diagram of an electronic device in one embodiment.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It is to be noted that the terms "comprises" and "comprising" and any variations thereof in the embodiments and figures of the present application are intended to cover non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to the listed steps or elements, but may optionally include other steps or elements not listed or inherent to such process, method, article, or apparatus.
It will be understood that, as used herein, the terms "first," "second," and the like may be used herein to describe various elements, but these elements are not limited by these terms. These terms are only used to distinguish one element from another. For example, a first fusion weight may be referred to as a second fusion weight, and similarly, a second fusion weight may be referred to as a first fusion weight, without departing from the scope of the present application. The first and second fusion weights are both fusion weights, but they are not the same fusion weight. In addition, the term "plurality" used in the embodiments of the present application means two or more.
The embodiment of the application provides electronic equipment. The electronic device includes therein an Image Processing circuit, which may be implemented using hardware and/or software components, and may include various Processing units defining an ISP (Image Signal Processing) pipeline. FIG. 1 is a block diagram of an image processing circuit in one embodiment. For ease of illustration, FIG. 1 illustrates only aspects of image processing techniques related to embodiments of the present application.
As shown in fig. 1, the image processing circuit includes an ISP processor 140 and control logic 150. The image data captured by the imaging device 110 is first processed by the ISP processor 140, and the ISP processor 140 analyzes the image data to capture image statistics that may be used to determine one or more control parameters of the imaging device 110. The imaging device 110 may include one or more lenses 112 and an image sensor 114. Image sensor 114 may include an array of color filters (e.g., Bayer filters), and image sensor 114 may acquire light intensity and wavelength information captured by each imaging pixel and provide a set of raw image data that may be processed by ISP processor 140. The attitude sensor 120 (e.g., a three-axis gyroscope, hall sensor, accelerometer, etc.) may provide parameters of the acquired image processing (e.g., anti-shake parameters) to the ISP processor 140 based on the type of interface of the attitude sensor 120. The attitude sensor 120 interface may employ an SMIA (Standard Mobile Imaging Architecture) interface, other serial or parallel camera interfaces, or a combination thereof.
It should be noted that, although only one imaging device 110 is shown in fig. 1, in the embodiment of the present application, at least two imaging devices 110 may be included, each imaging device 110 may respectively correspond to one image sensor 114, or a plurality of imaging devices 110 may correspond to one image sensor 114, which is not limited herein. The operation of each image forming apparatus 110 can refer to the above description.
In addition, the image sensor 114 may also transmit raw image data to the attitude sensor 120, the attitude sensor 120 may provide the raw image data to the ISP processor 140 based on the type of interface of the attitude sensor 120, or the attitude sensor 120 may store the raw image data in the image memory 130.
The ISP processor 140 processes the raw image data pixel by pixel in a variety of formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and the ISP processor 140 may perform one or more image processing operations on the raw image data, gathering statistical information about the image data. Wherein the image processing operations may be performed with the same or different bit depth precision.
The ISP processor 140 may also receive image data from the image memory 130. For example, the attitude sensor 120 interface sends raw image data to the image memory 130, and the raw image data in the image memory 130 is then provided to the ISP processor 140 for processing. The image Memory 130 may be a portion of a Memory device, a storage device, or a separate dedicated Memory within an electronic device, and may include a DMA (Direct Memory Access) feature.
Upon receiving raw image data from the image sensor 114 interface or from the attitude sensor 120 interface or from the image memory 130, the ISP processor 140 may perform one or more image processing operations, such as temporal filtering. The processed image data may be sent to image memory 130 for additional processing before being displayed. ISP processor 140 receives the processed data from image memory 130 and performs image data processing on the processed data in the raw domain and in the RGB and YCbCr color spaces. The image data processed by ISP processor 140 may be output to display 160 for viewing by a user and/or further processed by a Graphics Processing Unit (GPU). Further, the output of the ISP processor 140 may also be sent to the image memory 130, and the display 160 may read image data from the image memory 130. In one embodiment, image memory 130 may be configured to implement one or more frame buffers.
The statistics determined by the ISP processor 140 may be sent to the control logic 150. For example, the statistical data may include image sensor 114 statistics such as gyroscope vibration frequency, auto-exposure, auto-white balance, auto-focus, flicker detection, black level compensation, lens 112 shading correction, and the like. The control logic 150 may include a processor and/or microcontroller that executes one or more routines (e.g., firmware) that may determine control parameters of the imaging device 110 and control parameters of the ISP processor 140 based on the received statistical data. For example, the control parameters of the imaging device 110 may include attitude sensor 120 control parameters (e.g., gain, integration time of exposure control, anti-shake parameters, etc.), camera flash control parameters, camera anti-shake displacement parameters, lens 112 control parameters (e.g., focal length for focusing or zooming), or a combination of these parameters. The ISP control parameters may include gain levels and color correction matrices for automatic white balance and color adjustment (e.g., during RGB processing), as well as lens 112 shading correction parameters.
The image fusion method provided by the embodiment of the present application is exemplarily described with reference to the image processing circuit of fig. 1. The ISP processor 140 may obtain an image to be processed from the imaging device 110, where the image to be processed may be an image newly acquired by the imaging device 110, extract image features from the image to be processed, and calculate a first fusion weight corresponding to each pixel point in the image to be processed according to the image features. The ISP processor 140 may obtain external information corresponding to the image to be processed, where the external information refers to information that can affect the imaging effect of the image to be processed and is collected by an external device, for example, the external information may include pose change information collected by the pose sensor 120, ambient brightness information collected by a light sensor, and the like, but is not limited thereto. The ISP processor 140 may calculate a second fusion weight corresponding to the image to be processed according to the acquired external information, determine a third fusion weight corresponding to each pixel point in the image to be processed according to the first fusion weight and the second fusion weight, and then, the ISP processor 140 may fuse the image to be processed and the reference image based on the third fusion weight corresponding to each pixel point in the image to be processed to obtain a fusion image. Alternatively, the reference image may be an image of a previous frame acquired by the imaging device 110, or a fused image obtained by fusing the previous frame, and the like.
As shown in fig. 2, in one embodiment, an image fusion method is provided, which can be applied to electronic devices, which can include, but are not limited to, mobile phones, tablet computers, smart wearable devices, vehicle-mounted terminals, PCs (Personal computers), digital cameras, and the like. The method may comprise the steps of:
step 210, extracting image features from the image to be processed, and calculating a first fusion weight corresponding to each pixel point in the image to be processed according to the image features.
The image to be processed refers to an image that needs to be subjected to image fusion, and the image to be processed may be an image that is newly acquired by the electronic device through an imaging device (e.g., a camera, etc.) (i.e., a current frame image), or an image that is acquired by the electronic device from a memory, for example, a photograph stored in the memory, or an image of any frame in video data stored in the memory, and the like.
In some embodiments, the image feature may include at least one of a boundary feature, a brightness feature, a saturation feature, a texture feature, and the like, where the boundary feature may also be called an edge feature and is used to describe an edge contour in the image to be processed, the brightness feature may be used to describe image brightness of the image to be processed, the saturation feature may be used to describe image saturation of the image to be processed, the texture feature may be used to describe image texture of the image to be processed, and the like.
The electronic device may extract image features from the image to be processed in a number of different ways. For the boundary feature, the electronic device may use Canny edge detection, Sobel edge detection, or the like. For the brightness feature, the image to be processed may be converted into a grayscale image and the mean and/or variance of the grayscale image calculated, or the image may be converted from the RGB (Red, Green, Blue) color space into the HSV (Hue, Saturation, Value) or HSL (Hue, Saturation, Lightness) color space, so that the brightness feature is extracted from the pixel values of the V channel of the HSV color space or the L channel of the HSL color space. For the saturation feature, the image to be processed can likewise be converted from RGB into the HSV or HSL color space, and the saturation feature extracted from the pixel values of the S channel. For the texture feature, an LBP (Local Binary Pattern) method, a gray-level co-occurrence matrix method, a wavelet-based method, or the like may be used.
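As a non-authoritative illustration of the feature-extraction options listed above, the following sketch (assuming OpenCV and NumPy are available; the function name and the choice of operators are hypothetical, not prescribed by this application) shows one way the boundary, brightness, saturation, and texture feature maps might be computed:

```python
import cv2
import numpy as np

def extract_feature_maps(image_bgr):
    """Return per-pixel boundary, brightness, saturation and texture feature maps.

    A minimal sketch of the options described above (Sobel edges, HSV V/S
    channels, a high-frequency texture measure); other operators could be used.
    """
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)

    # Boundary feature: gradient magnitude from Sobel operators.
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    boundary = cv2.magnitude(gx, gy)

    # Brightness and saturation features: V and S channels of the HSV color space.
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    brightness = hsv[:, :, 2].astype(np.float32)
    saturation = hsv[:, :, 1].astype(np.float32)

    # Texture feature: high-frequency component (image minus its Gaussian blur).
    texture = np.abs(gray.astype(np.float32)
                     - cv2.GaussianBlur(gray, (5, 5), 0).astype(np.float32))

    return boundary, brightness, saturation, texture
```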
Optionally, the electronic device may also extract image features from the image to be processed through a neural network. The neural network may include, but is not limited to, a convolutional neural network, a BP (Back Propagation) neural network, a Hopfield neural network, and the like. The BP neural network is a multi-layer feedforward neural network trained with an error back-propagation algorithm; the Hopfield neural network is a recurrent neural network whose weights are calculated according to a predefined rule rather than obtained through training.
It should be noted that, the extraction of the image features from the image to be processed is not limited to the above-described modes, and may be other modes, and the embodiment of the present application does not limit the specific extraction mode of the image features.
The electronic equipment extracts image features from the image to be processed, the image features can comprise feature values corresponding to all pixel points in the image to be processed, and first fusion weights corresponding to all the pixel points can be obtained through calculation according to the feature values corresponding to all the pixel points. The first fusion weight may be a pixel-level fusion weight, that is, the first fusion weight corresponds to each pixel point in the image to be processed one by one, and the corresponding first fusion weights may be the same or different for different pixel points in the same image to be processed.
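The excerpt above does not fix a particular mapping from feature values to the first fusion weight. As one hedged possibility (an assumption for illustration, not the application's prescribed formula), the per-pixel feature value can be normalized and inverted so that pixels with large feature responses, such as large inter-frame differences, receive a small fusion weight, which tends to suppress ghosting:

```python
import numpy as np

def first_fusion_weight(feature_map, eps=1e-6):
    """Map a per-pixel feature map to a pixel-level first fusion weight in [0, 1].

    Hypothetical mapping: larger feature values yield smaller weights for the
    reference image at that pixel.
    """
    f = feature_map.astype(np.float32)
    f_norm = (f - f.min()) / (f.max() - f.min() + eps)  # normalize to [0, 1]
    return 1.0 - f_norm                                 # large difference -> small weight
```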
Step 220, obtaining external information corresponding to the image to be processed.
The external information refers to information, acquired by an external device, that can affect the imaging effect of the image to be processed. The external device is a device outside the imaging apparatus that captures the image to be processed and may include one or more sensors such as a pose detection sensor, a light sensor, and a temperature sensor; the pose detection sensor may include, but is not limited to, a gravity sensor, an acceleration sensor, a gyroscope, and the like.
In one embodiment, if the image to be processed is an image newly acquired by an imaging device in the electronic device, the electronic device may acquire currently acquired external information from an external device of the imaging device. In another embodiment, if the image to be processed is an image stored in the memory, and the memory can simultaneously store the external information corresponding to the image to be processed, the electronic device may obtain the external information corresponding to the image to be processed from the memory.
And step 230, calculating a second fusion weight corresponding to the image to be processed according to the external information.
The second fusion weight may be a frame-level fusion weight. When there are multiple frames of images to be processed, each frame corresponds to one second fusion weight: all pixel points of the same image to be processed share the same second fusion weight, while different frames may correspond to the same or different second fusion weights.
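To make the frame-level notion concrete, the sketch below assumes the external information is a gyroscope angular-velocity magnitude and an ambient-brightness reading; the thresholds and the linear mapping are illustrative assumptions only and are not taken from this application:

```python
import numpy as np

def second_fusion_weight(gyro_magnitude, ambient_lux,
                         gyro_max=2.0, lux_dark=10.0):
    """Compute one frame-level second fusion weight from external sensor data.

    Hypothetical rule: strong motion (large gyro magnitude) lowers the weight of
    the reference frames to avoid ghosting, while dark scenes raise it so that
    more temporal denoising is applied.
    """
    motion_term = 1.0 - np.clip(gyro_magnitude / gyro_max, 0.0, 1.0)
    darkness_term = np.clip(lux_dark / max(ambient_lux, 1e-3), 0.0, 1.0)
    return float(np.clip(0.5 * motion_term + 0.5 * darkness_term, 0.0, 1.0))
```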
And 240, determining a third fusion weight corresponding to each pixel point in the image to be processed according to the first fusion weight and the second fusion weight.
The electronic device may fuse the second fusion weight of the image to be processed with the first fusion weight corresponding to each pixel point to obtain a third fusion weight corresponding to each pixel point, where the third fusion weight may be a pixel-level fusion weight, that is, the third fusion weight corresponds to each pixel point in the image to be processed one to one.
Optionally, fusing the second fusion weight of the image to be processed with the first fusion weight of each pixel point may include, but is not limited to, averaging the first fusion weight and the second fusion weight for each pixel point, or computing a weighted average of the two, to obtain the third fusion weight corresponding to that pixel point.
And 250, fusing the image to be processed and the reference image based on the third fusion weight corresponding to each pixel point in the image to be processed to obtain a fused image.
The reference image may be one or more frames. The reference image may be the first M frames preceding the image to be processed, where M is a positive integer; for example, the reference image may be the previous frame captured by the imaging device, the frame preceding the image to be processed in the video data, the two frames preceding the image to be processed, and so on. As another embodiment, the reference image may also be M previously obtained historical fused images, where a historical fused image refers to a fused image obtained by fusing images preceding the image to be processed.
As an implementation manner, the third fusion weight corresponding to each pixel point in the image to be processed may be used to describe the fusion proportion of the pixel points at the same position in the reference image. And fusing each pixel point of the image to be processed with the pixel point at the same position in the reference image based on the third fusion weight corresponding to each pixel point in the image to be processed, and calculating to obtain the fused pixel value of each pixel point, thereby obtaining the fused image.
Taking the two pixel points at a first pixel coordinate in the image to be processed and the reference image as an example, the pixel value of the reference image at the first pixel coordinate may be multiplied by the third fusion weight corresponding to the first pixel coordinate and then added to the pixel value of the image to be processed at the first pixel coordinate to obtain a total pixel value for the first pixel coordinate; the fused pixel value corresponding to the first pixel coordinate may then be obtained from the total pixel value and the third fusion weight. Specifically, the total pixel value of the first pixel coordinate may be divided by the sum of the third fusion weight and 1 to obtain the fused pixel value corresponding to the first pixel coordinate.
Optionally, for each pixel point in the image to be processed, the fused pixel value corresponding to each pixel may be calculated by referring to the following formula (1):
P_{i,j} = (p1_{i,j} + w_{i,j} × p2_{i,j}) / (1 + w_{i,j})    formula (1);

where p1_{i,j} represents the pixel value of the pixel point at pixel coordinate (i, j) in the image to be processed, p2_{i,j} represents the pixel value of the pixel point at pixel coordinate (i, j) in the reference image, P_{i,j} represents the fused pixel value corresponding to pixel coordinate (i, j), and w_{i,j} represents the third fusion weight corresponding to pixel coordinate (i, j).
In some embodiments, if there are multiple frames of reference images, the pixel values of the pixels of the first pixel coordinate in the multiple frames of reference images may be multiplied by the third fusion weight corresponding to the first pixel coordinate, and then averaged, and then the averaged pixel values may be added to the pixel values of the pixels of the first pixel coordinate in the image to be processed, so as to obtain the total pixel value of the first pixel coordinate. The manner of obtaining the fused pixel value corresponding to the first pixel coordinate according to the total pixel value of the first pixel coordinate and the third fusion weight corresponding to the first pixel coordinate may be the same as described above.
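A minimal sketch of the per-pixel fusion described by formula (1), extended to the multi-reference case of the preceding paragraph (the weighted reference pixels are averaged before being added to the image to be processed); the function and array names are illustrative:

```python
import numpy as np

def fuse_with_references(to_process, references, third_weight):
    """Fuse the image to be processed with one or more reference frames.

    to_process:   HxW (or HxWxC) array, the current frame.
    references:   list of arrays with the same shape, the reference frame(s).
    third_weight: HxW float array, the third fusion weight per pixel.

    Implements P = (p1 + w * mean(p2_k)) / (1 + w), a direct reading of
    formula (1) generalized to multiple reference frames.
    """
    p1 = to_process.astype(np.float32)
    w = third_weight.astype(np.float32)
    if p1.ndim == 3:                      # broadcast the weight over color channels
        w = w[..., None]
    weighted_refs = [w * ref.astype(np.float32) for ref in references]
    total = p1 + np.mean(weighted_refs, axis=0)
    return total / (1.0 + w)
```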
It should be noted that other ways may also be used to fuse the image to be processed and the reference image, and the method is not limited to the above-described ways.
In the embodiment of the application, the first fusion weight corresponding to each pixel point in the image to be processed is calculated from image features of the image to be processed, and the second fusion weight corresponding to the image to be processed is calculated from external information corresponding to the image, where the external information refers to information collected by an external device that can affect the imaging effect of the image to be processed. The third fusion weight corresponding to each pixel point is determined from the first fusion weight and the second fusion weight, and the image to be processed is fused with the reference image to obtain a fused image. Because the image features of the image to be processed and the external information that affects its imaging jointly determine the fusion weight of each pixel point, a fused image of higher quality can be obtained, the fusion effect of image fusion is improved, and the ghosting problem caused by fusing multi-frame images can be effectively alleviated.
In an embodiment, the determining, according to the first fusion weight and the second fusion weight, a third fusion weight corresponding to each pixel point in the image to be processed may include: and fusing the first fusion weight and the second fusion weight corresponding to each pixel point according to the first weight coefficient and the second weight coefficient corresponding to each pixel point to obtain a third fusion weight corresponding to each pixel point.
The first weight coefficient and the second weight coefficient may be coefficients set according to actual requirements, or may be set by adjusting the coefficients multiple times and verifying the image fusion effect multiple times. Alternatively, the first weight coefficient and the second weight coefficient may be related to characteristic parameters of an imaging device that acquires an image to be processed, where the characteristic parameters may include, but are not limited to, a device type of the imaging device, a focal range, a frame rate of the acquired image, a resolution, and the like, the device type may include, but is not limited to, an infrared camera, a color camera, a laser camera, a black and white camera, and the like, and may also include device types divided by a focal range, a viewing range, and the like, such as a telephoto camera, a wide angle camera, a fixed focus camera, a zoom camera, and the like. The characteristic parameters of the imaging device can be used to determine a noise model corresponding to the imaging device, which affects the noise contained in the acquired image. The first weight coefficient and the second weight coefficient of each pixel point are set based on the characteristic parameters of the imaging device, so that the set first weight coefficient and the set second weight coefficient are more fit with the actual imaging effect, and the accuracy of the third fusion weight can be improved.
The first weight coefficient and the second weight coefficient may be pixel-level coefficients, the first weight coefficient and the second weight coefficient are in one-to-one correspondence with each pixel point, and the pixel points at different pixel positions may respectively correspond to the same or different first weight coefficient and second weight coefficient.
As a specific implementation, taking a first pixel point in the image to be processed as an example (the first pixel point may be any pixel point in the image), the electronic device may calculate the product of the first fusion weight of the first pixel point and its first weight coefficient to obtain a first calculation result, and calculate the product of the second weight coefficient of the first pixel point and the second fusion weight to obtain a second calculation result. The sum of the first calculation result and the second calculation result may be taken as the third fusion weight corresponding to the first pixel point.
The third fusion weight corresponding to each pixel point in the image to be processed can be determined according to the formula (2):
w_{i,j} = r_{i,j} × pix_f_{i,j} + t_{i,j} × ext_f    formula (2);

where r_{i,j} represents the first weight coefficient corresponding to the pixel point at pixel coordinate (i, j) in the image to be processed, pix_f_{i,j} represents the first fusion weight corresponding to the pixel point at pixel coordinate (i, j), t_{i,j} represents the second weight coefficient corresponding to the pixel point at pixel coordinate (i, j), and ext_f represents the second fusion weight corresponding to the image to be processed.
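The following sketch computes the third fusion weight of formula (2) with per-pixel coefficient maps. How the coefficient maps r and t are chosen (for example, from the characteristic parameters of the imaging device) is left open here, so the uniform 0.5 initialization is only a placeholder assumption:

```python
import numpy as np

def third_fusion_weight(pix_f, ext_f, r=None, t=None):
    """Combine the pixel-level first fusion weight pix_f and the frame-level
    second fusion weight ext_f into the third fusion weight per formula (2):
        w[i, j] = r[i, j] * pix_f[i, j] + t[i, j] * ext_f
    r and t default to 0.5 everywhere, which reduces to a plain average.
    """
    pix_f = pix_f.astype(np.float32)
    if r is None:
        r = np.full_like(pix_f, 0.5)
    if t is None:
        t = np.full_like(pix_f, 0.5)
    return r * pix_f + t * float(ext_f)
```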
In the embodiment of the application, the third fusion weight corresponding to each pixel point can be obtained according to the set first weight coefficient and second weight coefficient corresponding to each pixel point in the image to be processed, and the first fusion weight and the second fusion weight corresponding to each pixel point, so that the third fusion weight corresponding to each pixel point is more accurate, the fusion effect of image fusion can be further improved, and the image quality of the fused image is improved.
As shown in fig. 3, in an embodiment, another image fusion method is provided, which can be applied to the electronic device described above, and the image fusion method can include the following steps:
step 302, extracting image features from the image to be processed, where the image features include at least one of a boundary feature value, a brightness feature value, a saturation feature value, and a texture feature value corresponding to each pixel point.
In some embodiments, the electronic device may generate a feature map having the same resolution as the image to be processed according to the image to be processed, where the feature map may include image feature values corresponding to each pixel point in the image to be processed. Optionally, the feature map may include at least one of a boundary feature map, a luminance feature map, a saturation feature map, and a texture feature map, where the boundary feature map may include boundary feature values corresponding to respective pixel points in the image to be processed, the luminance feature map may include luminance feature values corresponding to respective pixel points in the image to be processed, the saturation feature map may include saturation feature values corresponding to respective pixel points in the image to be processed, and the texture feature map may include texture feature values corresponding to respective pixel points in the image to be processed.
In one embodiment, the electronic device may extract boundary features from the image to be processed based on the reference image to determine boundary feature values corresponding to the respective pixel points. Specifically, the electronic device may perform difference processing on pixel points at the same positions in the image to be processed and the reference image to obtain a difference image between the image to be processed and the reference image, and determine a boundary characteristic value of each pixel point in the image to be processed according to the difference image.
The method can traverse each pixel point contained in the image to be processed, and calculate the pixel difference value between the pixel point of each pixel position in the image to be processed and the pixel point of the same pixel position in the reference image to obtain the difference value corresponding to each pixel position, thereby obtaining the difference image between the image to be processed and the reference image. The difference value may include a difference value of a pixel point at the same pixel position in the image to be processed and the reference image in each channel, or may be a difference value of any number of channels (e.g., one channel or two channels), or an average value, a weighted average value, and the like of the difference values of all the channels, or a maximum value among the difference values of all the channels, and the like, which is not limited herein.
In some embodiments, to improve the accuracy of the determined boundary feature values, the image to be processed and the reference image may be pixel-wise aligned prior to determining the difference image between the image to be processed and the reference image. The image to be processed and the reference image can be matched, specifically, a plurality of characteristic pixel points are respectively extracted from the image to be processed and the reference image, and the characteristic pixel points in the image to be processed and the characteristic pixel points in the reference image are matched to obtain a plurality of pairs of matched characteristic pixel point pairs. The image to be processed and the reference image can be aligned in pixel positions based on a plurality of pairs of matched characteristic pixel point pairs, so that the matched characteristic pixel point pairs are located at the same image position (namely located at the same pixel coordinate), and then the difference image between the aligned image to be processed and the reference image is determined, and the accuracy of the difference image can be improved.
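One hedged way to realize the pixel alignment and difference image described above is feature matching followed by a homography warp; ORB features, brute-force matching, and a simple absolute difference are used here as stand-in choices, since the application does not prescribe a specific matching or difference measure:

```python
import cv2
import numpy as np

def difference_image(to_process_gray, reference_gray):
    """Align the reference image to the image to be processed, then return
    their per-pixel absolute difference (a simple difference image)."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(to_process_gray, None)
    kp2, des2 = orb.detectAndCompute(reference_gray, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:200]

    # Matched feature pixel pairs: reference points are warped onto the image to be processed.
    src = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)

    h, w = to_process_gray.shape[:2]
    aligned_ref = cv2.warpPerspective(reference_gray, H, (w, h))
    return cv2.absdiff(to_process_gray, aligned_ref)
```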
As an optional implementation manner, the electronic device may use the difference image as a boundary feature value of each pixel point in the image to be processed, that is, may directly use the difference image as a boundary feature map of the image to be processed, where the pixel value of each pixel point in the difference image is the boundary feature value.
As another optional implementation, the electronic device may also process the difference image to obtain a boundary feature value of each pixel point in the image to be processed. Optionally, the processing of the difference image to obtain the boundary characteristic value of each pixel point in the image to be processed may include, but is not limited to, the following processing modes:
and in the first mode, filtering the difference image to obtain the boundary characteristic value of each pixel point in the image to be processed. The electronic device may perform filtering processing on the difference image, where the filtering processing may include, but is not limited to, smoothing filtering processing, sharpening filtering processing, and the like, where the smoothing filtering processing may be used to suppress noise in the difference image, and may be performed by using an average filter, a gaussian filter, a median filter, and the like; the sharpening filter can be used for increasing edge information in a difference image, and sharpening filtering processing can be performed by adopting a first-order gradient operator (such as a Robert crossover operator, a Sobel operator and the like), a second-order differential operator (such as a laplacian of the like. The embodiment of the present application does not limit the specific filtering processing manner of the difference image.
After filtering the difference image, the filtered difference image can be used as a boundary characteristic map of the image to be processed to obtain a boundary characteristic value of each pixel point in the image to be processed.
In the second mode, erosion processing and/or dilation processing is performed on the difference image to obtain the boundary feature value of each pixel point in the image to be processed. Erosion is a local minimum operation: the difference image may be processed with a first template image block (for example, a 3 × 3 image block), and the minimum over the pixel points covered by the first template image block is retained, thereby eroding the difference image. Dilation is a local maximum operation: a second template image block (for example, a 3 × 3 image block) may be convolved with the difference image to take the maximum over the pixel points covered by the second template image block, thereby dilating the difference image. The first template image block and the second template image block may be the same image block in the difference image or different image blocks, which is not limited herein.
As a specific implementation, the difference image may be divided into a plurality of image regions, and a confidence corresponding to each image region is determined, where the confidence describes how reliable the difference values contained in that image region are. The confidence of each image region may be determined according to the corresponding image content in the image to be processed; for example, the richer the corresponding image content, the lower the confidence may be, and the simpler the corresponding image content, the higher the confidence may be, but this is not limited thereto.
Optionally, an image region whose confidence is higher than or equal to a confidence threshold may be dilated, and an image region whose confidence is lower than the confidence threshold may be eroded, so that the accuracy of the difference image may be improved.
After the erosion processing and/or the expansion processing is performed on the difference image, the processed difference image can be used as a boundary characteristic diagram of the image to be processed, so as to obtain a boundary characteristic value of each pixel point in the image to be processed.
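A small sketch of the erosion/dilation refinement just described, assuming the difference image is a single-channel array and the confidence map and threshold are given inputs (neither is specified further by this application); the 3 × 3 structuring element matches the template image block size used as an example above:

```python
import cv2
import numpy as np

def refine_difference(diff_image, confidence, conf_threshold=0.5):
    """Dilate high-confidence regions and erode low-confidence regions of a
    difference image, as in processing mode two above."""
    kernel = np.ones((3, 3), np.uint8)          # 3 x 3 template image block
    eroded = cv2.erode(diff_image, kernel)
    dilated = cv2.dilate(diff_image, kernel)
    # Keep the dilated value where confidence is high, the eroded value elsewhere.
    return np.where(confidence >= conf_threshold, dilated, eroded)
```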
And thirdly, respectively carrying out region division on the difference image according to N target sizes to obtain N frames of divided difference images, respectively calculating difference values corresponding to all image regions contained in the divided difference images to obtain N frames of intermediate difference images, and fusing the N frames of intermediate difference images to obtain boundary characteristic values of all pixel points in the image to be processed. Wherein N may be an integer greater than or equal to 2.
N different target sizes may be preset, for example, 3 × 3, 5 × 5, 9 × 9, 8 × 8, etc., but not limited thereto. The difference image can be divided into regions according to the N target sizes respectively to obtain N frames of divided difference images corresponding to the N target sizes one by one, and each image region contained in each frame of divided difference image is the same target size. For example, the difference image is divided according to the size of 3 × 3, and divided difference images including a plurality of image areas with the size of 3 × 3 are obtained; dividing the difference image according to 5 × 5 to obtain a divided difference image including a plurality of image areas with the size of 5 × 5; dividing the difference image according to the size of 9 x 9 to obtain a divided difference image comprising a plurality of image areas with the size of 9 x 9; the difference image is divided into 8 × 8 sizes to obtain a divided difference image including a plurality of 8 × 8 sized image regions.
For each frame of divided difference image, the difference value corresponding to each image region is calculated, and the pixel values in each image region are replaced with the corresponding difference value, so that the pixel points belonging to the same image region have the same value. The difference value of an image region may be determined in a variety of ways; for example, the pixel values in the region may be averaged to obtain the difference value, or the median pixel value of the region may be used as the difference value, and so on, but this is not limited thereto.
Taking a target size of 3 × 3 for region division, the difference image may be divided according to the size of 3 × 3 to obtain a divided difference image including a plurality of image regions of 3 × 3 sizes, and a difference value corresponding to each image region of 3 × 3 sizes included in the divided difference image is calculated, so that pixel values of pixel points in each image region of 3 × 3 sizes are the same to obtain a corresponding intermediate difference image.
Further, the difference images can be sequentially divided according to the sequence of the target size from small to large, so that the pyramid-shaped N-frame divided difference images are obtained. The pyramid shape indicates that the size of the image area included in the difference image is pyramid, the size of the image area included in the difference image divided by the first frame may be the smallest, and the size of the image area included in the difference image divided by the nth frame may be the largest. And respectively calculating difference values corresponding to all image areas contained in the divided difference images to obtain the pyramid-shaped N-frame intermediate difference image.
The N frames of intermediate difference images can be fused to obtain a target difference image, and the target difference image can be used as a boundary characteristic diagram of the image to be processed to obtain a boundary characteristic value of each pixel point in the image to be processed. Optionally, fusing the N frames of intermediate difference images may include: the pixel values of the pixel points at the same pixel position in the N frames of intermediate difference images are averaged, or the largest pixel value may be selected from the pixel values of the pixel points at the same pixel position in the N frames of intermediate difference images, or the weight value of each frame of intermediate difference image may be distributed according to the target size corresponding to each frame of intermediate difference image (for example, the larger the target size is, the smaller the distributed weight value is, etc.), and the pixel values of the pixel points at the same pixel position in the N frames of intermediate difference images are weighted and averaged according to the weight value corresponding to each frame of intermediate difference image. The method of fusing the N-frame intermediate difference images is not limited to the above-described methods, and other fusing methods may be used, and are not limited herein.
Fig. 4 is a diagram illustrating the fusion of N intermediate difference images according to an embodiment. As shown in fig. 4, the N frames of difference images may be sequentially divided into regions according to the order of the sizes of the N targets from small to large, so as to obtain N frames of divided difference images in a pyramid shape. Wherein, the image 411 (i.e. Dif _1) is a difference image after the first frame division, the image 412 (i.e. Dif _2) is a difference image after the second frame division, … …, and the image 413 (i.e. Dif _3) is a difference image after the Nth frame division, and as can be seen from the visualization in FIG. 4, the difference image after the N frame division is in a pyramid shape.
And calculating difference values corresponding to all image areas in the N divided difference images respectively to obtain the pyramid-shaped N-frame intermediate difference image. Among them, image 421 (i.e., Dif _ Tmep1) is the first frame intermediate difference image corresponding to image 411, image 422 (i.e., Dif _ Tmep2) is the second frame intermediate difference image corresponding to image 412, … …, and image 423 (i.e., Dif _ TmepN) is the nth frame intermediate difference image corresponding to image 413, and as can be seen from fig. 4, the difference image after N-frame division is in a pyramid shape. The black blocks in fig. 4 indicate that the pixel values of the pixel points in the same image region are the same, i.e., correspond to the same difference value. The N frames of intermediate difference images in the shape of a pyramid may be fused to obtain the target difference image 430. By adopting the pyramid type image processing mode, the image processing efficiency can be improved, and the processing result is more accurate.
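The pyramid-style mode three illustrated in FIG. 4 can be sketched as follows: each target size produces one intermediate difference image in which every block is replaced by its mean difference value, and the N intermediate images are then fused by taking the per-pixel maximum (one of the fusion options listed above; the block mean and the maximum rule, as well as the default target sizes, are illustrative assumptions):

```python
import numpy as np

def block_mean_difference(diff_image, block_size):
    """Replace every block_size x block_size region of the difference image with
    the mean difference value of that region (one intermediate difference image)."""
    h, w = diff_image.shape
    out = diff_image.astype(np.float32).copy()
    for y in range(0, h, block_size):
        for x in range(0, w, block_size):
            region = diff_image[y:y + block_size, x:x + block_size]
            out[y:y + block_size, x:x + block_size] = float(region.mean())
    return out

def fuse_intermediate_differences(diff_image, target_sizes=(3, 5, 9)):
    """Build N intermediate difference images (pyramid of block sizes, smallest
    first) and fuse them by taking the per-pixel maximum."""
    intermediates = [block_mean_difference(diff_image, s) for s in sorted(target_sizes)]
    return np.maximum.reduce(intermediates)
```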
And if at least two frames of reference images exist, fusing the difference images between the image to be processed and each frame of reference image to obtain the boundary characteristic value of each pixel point in the image to be processed. If there are at least two frames of reference images, the difference images between the image to be processed and each frame of reference image can be calculated respectively, and the manner of calculating the difference images can be the same as that described in the above embodiments, and is not described herein again. And fusing the frame difference images obtained by calculation to obtain a target difference image, and taking the target difference image as a boundary characteristic image of the image to be processed to obtain a boundary characteristic value of each pixel point in the image to be processed. The fusion mode of the difference images of each frame obtained by calculation can be the same as the fusion mode of the intermediate difference images of the N frames, and is not described herein again.
Fig. 5 is a schematic diagram illustrating fusion of difference images corresponding to reference images of each frame in an embodiment. As shown in fig. 5, for the M-frame reference image, a difference image between the image to be processed 510 and each frame of reference image may be calculated first, resulting in a first frame difference image 531 between the image to be processed 510 and the first frame reference image 521, second frame difference images 532 and … … between the image to be processed 510 and the second frame reference image 522, and an M-th frame difference image 533 between the image to be processed 510 and the M-th frame reference image 523. The resulting M frame difference images may be fused to obtain a target difference image 540.
In some embodiments, the manner in which the electronic device processes the difference image may be a combination of any of the above processing manners, for example, the difference image may be first filtered, and then the filtered difference image is subjected to erosion processing and/or dilation processing, or the target difference image may be obtained first according to the above three manners, and then the target difference image is subjected to filtering processing, and the manner in which the difference image is processed is not limited in the embodiments of the present application. The difference image is processed, and then the boundary characteristic value of each pixel point in the image to be processed is obtained based on the processed difference image, so that the accuracy of the boundary characteristic value is improved, and the subsequent image fusion effect is further improved.
In one embodiment, extracting the luminance features from the image to be processed may include: the brightness information of each pixel point in the image to be processed can be determined, and the brightness characteristic value of each pixel point is determined according to the brightness information of each pixel point. The brightness information may include a brightness value or a brightness level, etc. The brightness information of each pixel point in the image to be processed can be determined in various different manners, for example, the image to be processed can be converted from an RGB color space to an HSV color space or an HSL color space, and the value of each pixel point in a V channel of the HSV color space or an L channel of the HSL color space is the corresponding brightness value; the image to be processed may also be converted into a gray image, and the brightness value is determined according to the gray level of each pixel point, where the larger the gray level is, the larger the corresponding brightness value may be, and other manners may also be adopted, which are not limited herein.
As an optional implementation manner, the luminance information of each pixel point may be directly used as the corresponding luminance characteristic value, or the luminance information of each pixel point may be subjected to mapping processing to obtain the luminance characteristic value corresponding to each pixel point. The manner of the mapping process may include, but is not limited to: the luminance information is mapped to a fixed value interval (such as 0-1), the luminance information of each pixel point is multiplied by a fixed coefficient, or the square of the luminance information of each pixel point is calculated.
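For illustration, one possible realization of the brightness feature extraction and mapping described above uses the V channel of the HSV color space normalized to the interval 0-1; the function name and the normalization constant are illustrative assumptions.

```python
import cv2
import numpy as np

def brightness_feature_map(image_bgr):
    """Illustrative brightness feature extraction: V channel of HSV, mapped to [0, 1]."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    v = hsv[:, :, 2].astype(np.float32)   # brightness (value) channel, 0..255
    return v / 255.0                      # map brightness information to a fixed interval
```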
In some embodiments, before determining the brightness characteristic value corresponding to each pixel point, brightness stretching processing may be performed on the image to be processed based on the reference image, so that the image brightness of the image to be processed after the brightness stretching is aligned with the image brightness of the reference image, and the brightness characteristic value corresponding to each pixel point is determined according to the image to be processed after the brightness stretching. The image brightness of the image to be processed can be adjusted based on the reference image, so that the pixel distribution of the adjusted image to be processed in each brightness domain is consistent or approximately the same as the pixel distribution of the reference image in each brightness domain, and the accuracy of the determined brightness characteristic value of each pixel point can be improved.
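One plausible realization of the brightness stretching described above is histogram matching against the reference image, for example via scikit-image; the patent text does not prescribe a specific stretching method, so the following is only a sketch under that assumption.

```python
from skimage.exposure import match_histograms

def stretch_brightness_to_reference(to_process, reference):
    """Align the pixel distribution of the image to be processed with the reference.

    Histogram matching is only one possible realization of the brightness
    stretching described above; channel_axis=-1 matches each color channel
    (available in recent scikit-image versions).
    """
    return match_histograms(to_process, reference, channel_axis=-1)
```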
In one embodiment, extracting saturation features from the image to be processed may include: the saturation value of each pixel point in the image to be processed can be determined, and the saturation characteristic value of each pixel point is determined according to the saturation value of each pixel point. The saturation value of each pixel point in the image to be processed can be determined in a variety of different manners, for example, the image to be processed can be converted from an RGB color space to a color space such as HSV, HSL, or HSI (Hue, Saturation, Intensity), and the value of the S channel of each pixel point in the HSV, HSL, or HSI color space can be used as the corresponding saturation value; other conversion functions can also be used to calculate the saturation value of each pixel point, which is not limited herein.
As an optional implementation manner, the saturation value of each pixel may be directly used as the corresponding saturation characteristic value, or the saturation value of each pixel may be mapped to obtain the saturation characteristic value corresponding to each pixel. The manner of the mapping process may include, but is not limited to: mapping the saturation value to a fixed numerical range, multiplying the saturation value of each pixel point by a fixed coefficient, or calculating the square of the saturation value of each pixel point.
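Similarly, a sketch of saturation feature extraction using the S channel of the HSV color space, mapped to the interval 0-1 (illustrative only):

```python
import cv2
import numpy as np

def saturation_feature_map(image_bgr):
    """Illustrative saturation feature extraction: S channel of HSV, mapped to [0, 1]."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    s = hsv[:, :, 1].astype(np.float32)   # saturation channel, 0..255
    return s / 255.0
```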
In one embodiment, extracting texture features from the image to be processed may include: calculating the texture feature value of each pixel point on at least one channel of the image to be processed. For example, if the image to be processed is an image in the RGB color space, which includes R, G, and B channels, the texture feature value of each pixel point on at least one of the channels (e.g., the R channel, or the R and G channels) can be calculated.
The texture feature value of each channel may be calculated in manners including, but not limited to, performing high-pass filtering, unsharp mask filtering, Laplacian filtering, or other filtering processing on the value of each pixel point in the channel to obtain the texture feature value of each pixel point in the channel, so that the high-frequency component of the image to be processed in the channel is used as the texture feature value to enhance the edges and details of the image to be processed.
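A sketch of the texture feature extraction described above, using Laplacian filtering as the high-pass filter on one or more selected channels; the channel choice and the averaging of responses are illustrative assumptions.

```python
import cv2
import numpy as np

def texture_feature_map(image_bgr, channels=(2,)):
    """Illustrative texture feature extraction on selected channels (default: R in BGR order).

    A Laplacian filter keeps the high-frequency component of each chosen channel;
    the absolute responses are averaged into one texture feature value per pixel.
    """
    responses = []
    for c in channels:
        chan = image_bgr[:, :, c].astype(np.float32)
        responses.append(np.abs(cv2.Laplacian(chan, cv2.CV_32F, ksize=3)))  # high-pass response
    return np.mean(responses, axis=0)
```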
Step 304, calculating a first fusion weight corresponding to each pixel point according to at least one of a boundary fusion weight, a brightness fusion weight, a saturation fusion weight and a texture fusion weight corresponding to each pixel point in the image to be processed.
The boundary fusion weight corresponding to a pixel point is determined according to the boundary characteristic value of the pixel point, the brightness fusion weight corresponding to the pixel point is determined according to the brightness characteristic value of the pixel point, the saturation fusion weight corresponding to the pixel point is determined according to the saturation characteristic value of the pixel point, and the texture fusion weight corresponding to the pixel point is determined according to the texture characteristic value of the pixel point.
In some embodiments, the boundary fusion weight corresponding to the pixel point may be in a negative correlation with the boundary characteristic value of the pixel point, and the larger the boundary characteristic value of the pixel point is, the smaller the corresponding boundary fusion weight may be, and further, the negative correlation may be in an inverse proportion. As another embodiment, the boundary fusion weight corresponding to the pixel point may also be determined according to a third eigenvalue interval to which the boundary eigenvalue of the pixel point belongs, and the correspondence between different third eigenvalue intervals and the boundary fusion weight may be different, and the correspondence may be a fixed boundary fusion weight or a fitting relational expression.
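A minimal sketch of such a negative-correlation (inverse-proportion style) mapping from the boundary characteristic value to the boundary fusion weight; the constant eps is an illustrative placeholder.

```python
import numpy as np

def boundary_fusion_weight(boundary_feature, eps=1.0):
    """One possible negative-correlation mapping: larger boundary feature -> smaller weight."""
    boundary_feature = np.asarray(boundary_feature, dtype=np.float32)
    return eps / (eps + boundary_feature)  # inverse-proportion style mapping into (0, 1]
```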
In some embodiments, the luminance fusion weight corresponding to the pixel point may be determined according to a first feature value interval to which the luminance feature value of the pixel point belongs. A plurality of first eigenvalue intervals may be preset, and a correspondence between each first eigenvalue interval and the luminance fusion weight may be determined, where the correspondence may be a fixed luminance fusion weight or a relational expression.
For example, 3 first eigenvalue intervals may be preset, including an interval a [ a, B ], an interval B (B, MAX), and an interval C [0, a), where the interval a may correspond to a fixed luminance fusion weight X; the corresponding relation between the B interval and the brightness fusion weight can be a negative correlation relation, and the brightness fusion weight is reduced along with the increase of the brightness characteristic value in the B interval; the correspondence between the C interval and the luminance fusion weight may be a positive correlation, and the luminance fusion weight increases with an increase in the luminance characteristic value in the C interval. The luminance fusion weight X may be the largest luminance fusion weight.
For another example, 3 first eigenvalue intervals may be preset, including an interval a [ a, B ], an interval B (B, MAX), and an interval C [0, a), where the interval a may correspond to a fixed luminance fusion weight Y; the corresponding relation between the B interval and the brightness fusion weight can be a positive correlation relation, and the brightness fusion weight in the B interval is increased along with the increase of the brightness characteristic value; the correspondence between the C interval and the luminance fusion weight may be a negative correlation, and the luminance fusion weight decreases as the luminance characteristic value increases in the C interval. The luminance fusion weight Y may be the maximum luminance fusion weight.
For another example, 2 first eigenvalue intervals may be preset, including an A interval [ a, MAX ] and a B interval [0, a), where the A interval may correspond to a fixed luminance fusion weight X1, and the B interval may correspond to a fixed luminance fusion weight X2.
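A sketch of the first example above (a fixed weight in the A interval, a decreasing weight in the B interval, an increasing weight in the C interval), assuming brightness feature values normalized to 0-1; all breakpoints and weights are placeholders.

```python
import numpy as np

def luminance_fusion_weight(lum, a=0.3, b=0.7, x_max=1.0):
    """Piecewise mapping following the first interval example (all numbers are placeholders).

    [a, b]   -> fixed maximum weight x_max
    (b, 1.0] -> weight decreases as the brightness feature value increases
    [0, a)   -> weight increases as the brightness feature value increases
    Assumes brightness feature values normalized to [0, 1].
    """
    lum = np.asarray(lum, dtype=np.float32)
    w = np.full_like(lum, x_max)
    high = lum > b
    low = lum < a
    w[high] = x_max * (1.0 - (lum[high] - b) / (1.0 - b))  # negative correlation in the B interval
    w[low] = x_max * (lum[low] / a)                        # positive correlation in the C interval
    return w
```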
Optionally, the correspondence between the luminance characteristic value and the luminance fusion weight may be related to a characteristic parameter of the imaging device corresponding to the image to be processed, and the correspondence between the luminance characteristic value and the luminance fusion weight for the imaging device may be calibrated in advance. For example, the noise, texture characteristics, and the like contained in images with different luminance characteristics acquired by the imaging device may be detected, and the correspondence between the luminance characteristic value and the luminance fusion weight may be calibrated based on the detected noise, texture characteristics, and the like; the smaller the contained noise and the more obvious and detailed the texture characteristics, the smaller the corresponding luminance fusion weight may be, but this is not limited thereto.
In some embodiments, the saturation fusion weight corresponding to the pixel point may be determined according to a second eigenvalue interval to which the saturation eigenvalue of the pixel point belongs. A plurality of second eigenvalue intervals may be preset, and a correspondence between each second eigenvalue interval and the saturation fusion weight may be determined, where the correspondence may be a fixed saturation fusion weight or a relational expression.
For example, 3 second eigenvalue intervals including a D interval [ m, n ], an E interval (n, MAX), and an F interval [0, m) may be preset, where the D interval may correspond to a fixed saturation fusion weight H; the corresponding relation between the interval E and the saturation fusion weight can be a negative correlation relation, and the saturation fusion weight is reduced along with the increase of the saturation characteristic value in the interval E; the correspondence between the F-interval and the saturation fusion weight may be a positive correlation, and the saturation fusion weight increases with an increase in the saturation eigenvalue in the F-interval. The saturation fusion weight H may be the maximum saturation fusion weight.
For another example, 3 second eigenvalue intervals may be preset, including a D interval [ m, n ], an E interval (n, MAX), and an F interval [0, m), where the D interval may correspond to a fixed saturation fusion weight I; the correspondence between the E interval and the saturation fusion weight may be a positive correlation, and the saturation fusion weight increases as the saturation characteristic value increases in the E interval; the correspondence between the F interval and the saturation fusion weight may be a negative correlation, and the saturation fusion weight decreases as the saturation characteristic value increases in the F interval. The saturation fusion weight I may be the smallest saturation fusion weight.
For another example, 4 second eigenvalue intervals may be preset, including [0, m1), [m2, m3), [m3, n], and (n, MAX), where the interval [0, m1) may correspond to a fixed saturation fusion weight H1, the interval [m2, m3) may correspond to a fixed saturation fusion weight H2, the interval [m3, n] may correspond to a fixed saturation fusion weight H3, and the interval (n, MAX) may correspond to a fixed saturation fusion weight H4.
Optionally, the correspondence between the saturation feature value and the saturation fusion weight may be related to a characteristic parameter of the imaging device corresponding to the image to be processed, and the correspondence between the saturation feature value and the saturation fusion weight for the imaging device may be calibrated in advance. For example, the noise, texture features, and the like contained in images with different saturation features acquired by the imaging device may be detected, and the correspondence between the saturation feature value and the saturation fusion weight may be calibrated based on the detected noise, texture features, and the like; the smaller the contained noise and the more obvious and detailed the texture features, the smaller the corresponding saturation fusion weight may be, but this is not limited thereto.
In some embodiments, the texture fusion weight corresponding to the pixel point may be in a negative correlation with the texture feature value of the pixel point, and the larger the texture feature value of the pixel point is, the smaller the corresponding texture fusion weight may be, and further, the negative correlation may be in an inverse proportion.
The electronic device can calculate a first fusion weight corresponding to each pixel point according to at least one of a boundary fusion weight, a brightness fusion weight, a saturation fusion weight and a texture fusion weight corresponding to each pixel point in the image to be processed. Specifically, if the first fusion weight corresponding to each pixel point is calculated according to the at least two image feature fusion weights corresponding to each pixel point, a corresponding proportion can be allocated to each image feature fusion weight for each pixel point, and each image feature fusion weight corresponding to each pixel point is multiplied by the corresponding proportion and then summed to obtain the first fusion weight corresponding to each pixel point.
Taking a first pixel point in the image to be processed as an example, the first fusion weight corresponding to the first pixel point may be calculated according to four image feature fusion weights, namely the boundary fusion weight, the brightness fusion weight, the saturation fusion weight, and the texture fusion weight. The boundary fusion weight corresponding to the first pixel point may be multiplied by the corresponding first proportion to obtain a first product, the luminance fusion weight corresponding to the first pixel point may be multiplied by the corresponding second proportion to obtain a second product, the saturation fusion weight corresponding to the first pixel point may be multiplied by the corresponding third proportion to obtain a third product, and the texture fusion weight corresponding to the first pixel point may be multiplied by the corresponding fourth proportion to obtain a fourth product. Then, the sum of the first product, the second product, the third product and the fourth product is calculated to obtain the first fusion weight corresponding to the first pixel point.
The first proportion, the second proportion, the third proportion and the fourth proportion can be in one-to-one correspondence with each pixel point, namely, different pixel points can respectively correspond to the same or different first proportion, second proportion, third proportion and fourth proportion. Alternatively, the first fusion weight corresponding to each pixel point may be calculated according to equation (3):
pix_f(i,j) = a(i,j) × bdr_w(i,j) + b(i,j) × lum_w(i,j) + c(i,j) × sat_w(i,j) + d(i,j) × tex_w(i,j)    equation (3);
wherein pix_f(i,j) represents the first fusion weight corresponding to the pixel point at pixel coordinate (i, j) in the image to be processed, a(i,j) represents the first proportion corresponding to the pixel point at pixel coordinate (i, j), bdr_w(i,j) represents the boundary fusion weight corresponding to the pixel point at pixel coordinate (i, j), b(i,j) represents the second proportion corresponding to the pixel point at pixel coordinate (i, j), lum_w(i,j) represents the luminance fusion weight corresponding to the pixel point at pixel coordinate (i, j), c(i,j) represents the third proportion corresponding to the pixel point at pixel coordinate (i, j), sat_w(i,j) represents the saturation fusion weight corresponding to the pixel point at pixel coordinate (i, j), d(i,j) represents the fourth proportion corresponding to the pixel point at pixel coordinate (i, j), and tex_w(i,j) represents the texture fusion weight corresponding to the pixel point at pixel coordinate (i, j).
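A direct sketch of equation (3) as a per-pixel weighted sum over the four image feature fusion weight maps; the array shapes and names are illustrative.

```python
import numpy as np

def first_fusion_weight(bdr_w, lum_w, sat_w, tex_w, a, b, c, d):
    """Per-pixel weighted sum matching equation (3); all inputs are HxW arrays.

    a, b, c, d are the per-pixel proportions assigned to the boundary, brightness,
    saturation and texture fusion weights respectively (e.g. uniform 0.25 arrays
    when no prior is available).
    """
    return a * bdr_w + b * lum_w + c * sat_w + d * tex_w
```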
In the embodiment of the application, the first fusion weight can be calculated based on the image feature fusion weights of each pixel point in multiple different dimensions. By considering multiple aspects such as the boundary feature, the brightness feature, the saturation feature and the texture feature, the calculated first fusion weight better fits the image content of the image to be processed, and different proportion values can be allocated to different image feature fusion weights, so that the accuracy of the first fusion weight of each pixel point is improved, the fusion effect of subsequent image fusion can be further improved, and the image quality of the fused image is improved.
And step 306, acquiring external information corresponding to the image to be processed, wherein the external information comprises at least one of pose change information and environment brightness information.
The pose change information may include position change information and/or attitude change information, the position change information may be calculated from acceleration signals acquired by an acceleration sensor, and the attitude change information may be calculated from attitude signals acquired by an attitude sensor such as a gyroscope. The ambient brightness information may be an ambient brightness value collected from a light sensor, or the like.
In some embodiments, the acquisition time corresponding to the image to be processed may be acquired, and the external information acquired at the same acquisition time may be acquired from the external device, so as to ensure synchronization between the image to be processed and the external information, and further improve the accuracy of the subsequent calculation of the second fusion weight.
And step 308, calculating a second fusion weight corresponding to the image to be processed according to at least one of the jitter weight and the ambient brightness weight.
The jitter weight of the image to be processed can be determined according to the pose change information corresponding to the image to be processed; further, the jitter weight and the pose change information can be in a negative correlation relationship, and the larger the pose change information is, the smaller the corresponding jitter weight can be. The ambient brightness weight of the image to be processed may be determined according to the ambient brightness information corresponding to the image to be processed; further, the ambient brightness weight and the ambient brightness information may be in a negative correlation relationship, and the greater the ambient brightness information is, the smaller the corresponding ambient brightness weight may be. The ambient brightness weight may also be determined according to an ambient brightness interval to which the ambient brightness information belongs; a plurality of different ambient brightness intervals may be preset, and a correspondence between each ambient brightness interval and the ambient brightness weight may be determined, where the correspondence may be a fixed ambient brightness weight or a relational expression.
The electronic device may calculate the second fusion weight corresponding to the image to be processed according to only one of the jitter weight and the ambient brightness weight, or according to both the jitter weight and the ambient brightness weight. If the second fusion weight corresponding to the image to be processed is calculated according to both the jitter weight and the ambient brightness weight, corresponding proportionality coefficients can be assigned to the jitter weight and the ambient brightness weight, and the second fusion weight is calculated according to the assigned proportionality coefficients.
Specifically, the jitter weight corresponding to the image to be processed may be multiplied by the corresponding fifth proportion to obtain a fifth product, and the ambient brightness weight corresponding to the image to be processed may be multiplied by the corresponding sixth proportion to obtain a sixth product. Then, the sum of the fifth product and the sixth product is calculated to obtain the second fusion weight corresponding to the image to be processed.
Optionally, the second fusion weight corresponding to the image to be processed may be calculated according to formula (4):
ext_f = x × shk_w + y × lux_w    equation (4);
the ext _ f represents a second fusion weight corresponding to the image to be processed, x represents a fifth proportion corresponding to the image to be processed, shk _ w represents a dithering weight corresponding to the image to be processed, y represents a sixth proportion corresponding to the image to be processed, and lux _ w represents an ambient brightness weight corresponding to the image to be processed. The fifth proportion and the sixth proportion may be proportion values of a frame level, the fifth proportion and the sixth proportion may be in a one-to-one correspondence relationship with the to-be-processed image, and different to-be-processed images may respectively correspond to the same or different fifth proportion or sixth proportion.
In the embodiment of the application, the second fusion weight corresponding to the image to be processed can be calculated based on dimensions such as the pose change information and the ambient brightness information, which are collected by external equipment and externally affect the imaging effect, and corresponding proportion values can be allocated to the pose change information and the ambient brightness information, so that the accuracy of the second fusion weight of the image to be processed is improved, the fusion effect of subsequent image fusion can be further improved, and the image quality of the fused image is improved.
And step 310, fusing the first fusion weight and the second fusion weight corresponding to each pixel point according to the first weight coefficient and the second weight coefficient corresponding to each pixel point to obtain a third fusion weight corresponding to each pixel point.
And step 312, fusing the image to be processed and the reference image based on the third fusion weight corresponding to each pixel point in the image to be processed to obtain a fused image.
The descriptions of steps 310 to 312 can refer to the related descriptions in the above embodiments, and are not repeated herein.
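A sketch of steps 310 to 312 under stated assumptions: the third fusion weight is the per-pixel combination of the first fusion weight map with the frame-level second fusion weight, and the final fusion is assumed to be a per-pixel convex combination of the image to be processed and the reference image (the exact blending formula is not restated in this section).

```python
import numpy as np

def fuse_images(to_process, reference, pix_f, ext_f, w1=0.5, w2=0.5):
    """Sketch of steps 310-312 under stated assumptions.

    pix_f : HxW first fusion weights (per pixel), ext_f : scalar second fusion weight,
    w1/w2 : first/second weight coefficients (here shared by all pixels).
    The final blend is assumed to be a per-pixel convex combination of the two frames.
    """
    w3 = w1 * pix_f + w2 * ext_f                       # third fusion weight per pixel
    w3 = np.clip(w3, 0.0, 1.0)[..., None]              # broadcast over color channels
    fused = w3 * to_process + (1.0 - w3) * reference   # assumed weighted-average fusion
    return fused
```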
In the embodiment of the application, the fusion weight of each pixel point in the image to be processed can be determined by combining the image characteristics such as the boundary characteristic, the brightness characteristic, the saturation characteristic and the texture characteristic of the image to be processed and the external information such as pose change information and environmental brightness which affect the imaging effect externally, so that a fusion image with higher image quality can be obtained, the fusion effect of image fusion is improved, and the ghost problem caused by fusing multi-frame images can be effectively solved.
As shown in FIG. 6, in one embodiment, an image fusion device 600 is provided, and the image fusion device 600 may comprise the following modules.
The first calculating module 610 is configured to extract image features from the image to be processed, and calculate a first fusion weight corresponding to each pixel point in the image to be processed according to the image features.
The external information obtaining module 620 is configured to obtain external information corresponding to the image to be processed, where the external information refers to information that is collected by an external device and can affect an imaging effect of the image to be processed.
The second calculating module 630 is configured to calculate a second fusion weight corresponding to the image to be processed according to the external information.
The determining module 640 is configured to determine a third fusion weight corresponding to each pixel point in the image to be processed according to the first fusion weight and the second fusion weight.
And the fusion module 650 is configured to fuse the image to be processed and the reference image based on the third fusion weight corresponding to each pixel point in the image to be processed, so as to obtain a fused image.
In the embodiment of the application, a first fusion weight corresponding to each pixel point in an image to be processed is calculated from image features in the image to be processed, and a second fusion weight corresponding to the image to be processed is calculated from external information corresponding to the image to be processed, where the external information refers to information which is collected by external equipment and can affect the imaging effect of the image to be processed. A third fusion weight corresponding to each pixel point in the image to be processed is determined according to the first fusion weight and the second fusion weight, and the image to be processed and a reference image are fused to obtain a fused image. The fusion weight of each pixel point in the image to be processed can thus be determined jointly by combining the image features of the image to be processed and the external information which externally affects the imaging effect, so that a fused image with higher image quality can be obtained, the fusion effect of image fusion is improved, and the ghost problem caused by fusing multi-frame images can be effectively solved.
In an embodiment, the determining module 640 is further configured to fuse the first fusion weight and the second fusion weight corresponding to each pixel point according to the first weight coefficient and the second weight coefficient corresponding to each pixel point, so as to obtain a third fusion weight corresponding to each pixel point.
In an embodiment, the determining module 640 is further configured to calculate a product of a first fusion weight corresponding to the first pixel point and a corresponding first weight coefficient, so as to obtain a first calculation result, where the first pixel point is any pixel point in the image to be processed; calculating the product of a second weight coefficient corresponding to the first pixel point and a second fusion weight to obtain a second calculation result; and determining the sum of the first calculation result and the second calculation result as a third fusion weight corresponding to the first pixel point.
In the embodiment of the application, the third fusion weight corresponding to each pixel point can be obtained according to the set first weight coefficient and second weight coefficient corresponding to each pixel point in the image to be processed, and the first fusion weight and the second fusion weight corresponding to each pixel point, so that the third fusion weight corresponding to each pixel point is more accurate, the fusion effect of image fusion can be further improved, and the image quality of the fused image is improved.
In one embodiment, the image feature includes at least one of a boundary feature value, a brightness feature value, a saturation feature value, and a texture feature value corresponding to each pixel point.
The first calculating module 610 is further configured to calculate a first fusion weight corresponding to each pixel point in the image to be processed according to at least one of a boundary fusion weight, a brightness fusion weight, a saturation fusion weight, and a texture fusion weight corresponding to each pixel point, where the boundary fusion weight is determined according to a boundary feature value, the brightness fusion weight is determined according to a brightness feature value, the saturation fusion weight is determined according to a saturation feature value, and the texture fusion weight is determined according to a texture feature value.
In one embodiment, the first calculation module 610 includes a feature extraction unit and a weight calculation unit.
The image features comprise boundary feature values corresponding to all pixel points, and the feature extraction unit is used for performing difference processing on pixel points at the same positions in the image to be processed and the reference image to obtain a difference image between the image to be processed and the reference image, and determining the boundary feature values of all the pixel points in the image to be processed according to the difference image.
In one embodiment, the feature extraction unit is further configured to use the difference image as a boundary feature value of each pixel point in the image to be processed; or processing the difference image to obtain the boundary characteristic value of each pixel point in the image to be processed.
In an embodiment, the feature extraction unit is configured to process the difference image to obtain a boundary feature value of each pixel point in the image to be processed, and specifically includes at least one of the following:
filtering the difference image to obtain a boundary characteristic value of each pixel point in the image to be processed;
carrying out corrosion treatment and/or expansion treatment on the difference image to obtain a boundary characteristic value of each pixel point in the image to be processed;
respectively dividing the regions of the difference image according to N target sizes to obtain N frames of divided difference images, respectively calculating difference values corresponding to all image regions contained in the divided difference images to obtain N frames of intermediate difference images, and fusing the N frames of intermediate difference images to obtain boundary characteristic values of all pixel points in the image to be processed, wherein N is an integer greater than or equal to 2;
and if at least two frames of reference images exist, fusing the difference images between the image to be processed and each frame of reference image to obtain the boundary characteristic value of each pixel point in the image to be processed.
In an embodiment, the image feature includes a luminance feature value corresponding to each pixel point, and the feature extraction unit is further configured to perform luminance stretching processing on the image to be processed based on the reference image, so that the image luminance of the image to be processed after the luminance stretching is aligned with the image luminance of the reference image, and determine the luminance feature value corresponding to each pixel point according to the image to be processed after the luminance stretching.
In one embodiment, the boundary fusion weight is inversely related to the boundary feature value; the texture fusion weight and the texture characteristic value are in a negative correlation relationship; the brightness fusion weight is determined according to a first characteristic value interval to which the brightness characteristic value belongs; and the saturation fusion weight is determined according to a second characteristic value interval to which the saturation characteristic value belongs.
In one embodiment, the weight calculation unit is configured to multiply a boundary fusion weight corresponding to a first pixel point by a corresponding first proportion to obtain a first product, where the first pixel point is any pixel point in the image to be processed; multiplying the brightness fusion weight corresponding to the first pixel point by the corresponding second proportion to obtain a second product; multiplying the saturation fusion weight corresponding to the first pixel point by the corresponding third proportion to obtain a third product; multiplying the texture fusion weight corresponding to the first pixel point by the corresponding fourth proportion to obtain a fourth product; and solving the sum of the first product, the second product, the third product and the fourth product to obtain a first fusion weight corresponding to the first pixel point.
In one embodiment, the external information includes at least one of pose change information and ambient brightness information. The second calculating module 630 is further configured to calculate a second fusion weight corresponding to the image to be processed according to at least one of the shake weight and the ambient brightness weight, where the shake weight is determined according to the pose change information, and the ambient brightness weight is determined according to the ambient brightness information.
In one embodiment, the jitter weight is inversely related to the pose change information; the ambient brightness weight and the ambient brightness information are in a negative correlation relationship, or the ambient brightness weight is determined according to an ambient brightness interval to which the ambient brightness information belongs.
In an embodiment, the second calculating module 630 is further configured to multiply the shaking weight corresponding to the image to be processed by the corresponding fifth ratio to obtain a fifth product; multiplying the corresponding ambient brightness weight of the image to be processed by the corresponding sixth proportion to obtain a sixth product; and solving the sum of the fifth product and the sixth product to obtain a second fusion weight corresponding to the image to be processed.
In the embodiment of the application, the fusion weight of each pixel point in the image to be processed can be determined by combining the image characteristics such as the boundary characteristic, the brightness characteristic, the saturation characteristic and the texture characteristic of the image to be processed and the external information such as pose change information and environmental brightness which affect the imaging effect externally, so that a fusion image with higher image quality can be obtained, the fusion effect of image fusion is improved, and the ghost problem caused by fusing multi-frame images can be effectively solved.
Fig. 7 is a block diagram of an electronic device in one embodiment. As shown in fig. 7, electronic device 700 may include one or more of the following components: a processor 710, a memory 720 coupled to the processor 710, wherein the memory 720 may store one or more computer programs that may be configured to be executed by the one or more processors 710 to implement the methods as described in the various embodiments above.
Processor 710 may include one or more processing cores. The processor 710 interfaces with various components throughout the electronic device 700 using various interfaces and circuitry to perform various functions of the electronic device 700 and process data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 720 and invoking data stored in the memory 720. Alternatively, the processor 710 may be implemented in hardware using at least one of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), and Programmable Logic Array (PLA). The processor 710 may integrate one or more of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. The CPU mainly handles the operating system, the user interface, application programs, and the like; the GPU is used for rendering and drawing display content; the modem is used to handle wireless communications. It is understood that the modem may not be integrated into the processor 710, but may instead be implemented by a communication chip.
The Memory 720 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). The memory 720 may be used to store instructions, programs, code sets, or instruction sets. The memory 720 may include a program storage area and a data storage area, wherein the program storage area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a touch function, a sound playing function, an image playing function, etc.), instructions for implementing the various method embodiments described above, and the like. The storage data area may also store data created during use by the electronic device 700, and the like.
It is understood that the electronic device 700 may include more or less structural elements than those shown in the above structural block diagrams, for example, a power module, a physical button, a WiFi (Wireless Fidelity) module, a speaker, a bluetooth module, a sensor, etc., and is not limited thereto.
The embodiment of the application discloses a computer readable storage medium, which stores a computer program, wherein the computer program realizes the method described in the above embodiment when being executed by a processor.
Embodiments of the present application disclose a computer program product comprising a non-transitory computer readable storage medium storing a computer program, and the computer program, when executed by a processor, implements the method as described in the embodiments above.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a non-volatile computer-readable storage medium, and can include the processes of the embodiments of the methods described above when the program is executed. The storage medium may be a magnetic disk, an optical disk, a ROM, etc.
Any reference to memory, storage, database, or other medium as used herein may include non-volatile and/or volatile memory. Suitable non-volatile memory can include ROM, Programmable ROM (PROM), Erasable PROM (EPROM), Electrically Erasable PROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM), which acts as external cache memory. By way of illustration and not limitation, RAM can take many forms, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus Direct RAM (RDRAM), and Direct Rambus DRAM (DRDRAM).
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present application. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. Those skilled in the art should also appreciate that the embodiments described in this specification are all alternative embodiments and that the acts and modules involved are not necessarily required for this application.
In various embodiments of the present application, it should be understood that the size of the serial number of each process described above does not mean that the execution sequence is necessarily sequential, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The image fusion method, the image fusion device, the electronic device, and the computer-readable storage medium disclosed in the embodiments of the present application are described in detail above, and specific examples are applied herein to illustrate the principles and implementations of the present application, and the description of the embodiments above is only used to help understand the method and the core ideas of the present application. Meanwhile, for a person skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (16)

1. An image fusion method, comprising:
extracting image features from an image to be processed, and calculating first fusion weights corresponding to all pixel points in the image to be processed according to the image features;
acquiring external information corresponding to the image to be processed, wherein the external information refers to information which is acquired through external equipment and can influence the imaging effect of the image to be processed;
calculating a second fusion weight corresponding to the image to be processed according to the external information;
determining a third fusion weight corresponding to each pixel point in the image to be processed according to the first fusion weight and the second fusion weight;
and fusing the image to be processed and the reference image based on the third fusion weight corresponding to each pixel point in the image to be processed to obtain a fused image.
2. The method according to claim 1, wherein the determining a third fusion weight corresponding to each pixel point in the image to be processed according to the first fusion weight and the second fusion weight comprises:
and fusing the first fusion weight and the second fusion weight corresponding to each pixel point according to the first weight coefficient and the second weight coefficient corresponding to each pixel point to obtain a third fusion weight corresponding to each pixel point.
3. The method according to claim 2, wherein the fusing the first fusion weight and the second fusion weight corresponding to each pixel point according to the first weight coefficient and the second weight coefficient corresponding to each pixel point to obtain the third fusion weight corresponding to each pixel point comprises:
calculating the product of a first fusion weight corresponding to a first pixel point and a corresponding first weight coefficient to obtain a first calculation result, wherein the first pixel point is any pixel point in the image to be processed;
calculating the product of a second weight coefficient corresponding to the first pixel point and the second fusion weight to obtain a second calculation result;
and determining the sum of the first calculation result and the second calculation result as a third fusion weight corresponding to the first pixel point.
4. The method according to claim 1, wherein the image feature comprises at least one of a boundary feature value, a brightness feature value, a saturation feature value and a texture feature value corresponding to each pixel point; the calculating the first fusion weight corresponding to each pixel point in the image to be processed according to the image characteristics comprises:
and calculating a first fusion weight corresponding to each pixel point according to at least one of a boundary fusion weight, a brightness fusion weight, a saturation fusion weight and a texture fusion weight corresponding to each pixel point in the image to be processed, wherein the boundary fusion weight is determined according to the boundary characteristic value, the brightness fusion weight is determined according to the brightness characteristic value, the saturation fusion weight is determined according to the saturation characteristic value, and the texture fusion weight is determined according to the texture characteristic value.
5. The method according to claim 4, wherein the image feature includes a boundary feature value corresponding to each pixel point, and the extracting the image feature from the image to be processed includes:
performing difference processing on pixel points at the same positions in the image to be processed and the reference image to obtain a difference image between the image to be processed and the reference image;
and determining the boundary characteristic value of each pixel point in the image to be processed according to the difference image.
6. The method according to claim 5, wherein the determining the boundary feature value of each pixel point in the image to be processed according to the difference image comprises:
taking the difference image as a boundary characteristic value of each pixel point in the image to be processed; or
processing the difference image to obtain the boundary characteristic value of each pixel point in the image to be processed.
7. The method according to claim 6, wherein the processing the difference image to obtain the boundary feature value of each pixel point in the image to be processed includes at least one of:
filtering the difference image to obtain a boundary characteristic value of each pixel point in the image to be processed;
carrying out corrosion treatment and/or expansion treatment on the difference image to obtain a boundary characteristic value of each pixel point in the image to be processed;
respectively dividing the region of the difference image according to N target sizes to obtain N frames of divided difference images, respectively calculating difference values corresponding to image regions contained in the divided difference images to obtain N frames of intermediate difference images, and fusing the N frames of intermediate difference images to obtain boundary characteristic values of pixel points in the image to be processed, wherein N is an integer greater than or equal to 2;
and if at least two frames of reference images exist, fusing the difference images between the image to be processed and the reference images of all frames to obtain the boundary characteristic value of each pixel point in the image to be processed.
8. The method according to claim 4, wherein the image feature comprises a luminance feature value corresponding to each pixel point, and the extracting the image feature from the image to be processed comprises:
performing brightness stretching processing on the image to be processed based on a reference image so as to align the image brightness of the image to be processed after brightness stretching with the image brightness of the reference image;
and determining the brightness characteristic value corresponding to each pixel point according to the image to be processed after brightness stretching.
9. The method according to any one of claims 4 to 8, wherein the boundary fusion weight is in a negative correlation with the boundary feature value;
the texture fusion weight and the texture characteristic value are in a negative correlation relationship;
the brightness fusion weight is determined according to a first characteristic value interval to which the brightness characteristic value belongs;
the saturation fusion weight is determined according to a second characteristic value interval to which the saturation characteristic value belongs.
10. The method according to any one of claims 4 to 8, wherein if the first fusion weight is calculated according to the boundary fusion weight, the luminance fusion weight, the saturation fusion weight, and the texture fusion weight, the step of calculating the first fusion weight corresponding to the first pixel point comprises:
multiplying a boundary fusion weight corresponding to a first pixel point by a corresponding first proportion to obtain a first product, wherein the first pixel point is any pixel point in the image to be processed;
multiplying the brightness fusion weight corresponding to the first pixel point by the corresponding second proportion to obtain a second product;
multiplying the saturation fusion weight corresponding to the first pixel point by the corresponding third proportion to obtain a third product;
multiplying the texture fusion weight corresponding to the first pixel point by the corresponding fourth proportion to obtain a fourth product;
and solving the sum of the first product, the second product, the third product and the fourth product to obtain a first fusion weight corresponding to the first pixel point.
11. The method according to any one of claims 1 to 8, wherein the external information includes at least one of pose change information and ambient brightness information, and the calculating the second fusion weight corresponding to the image to be processed according to the external information includes:
and calculating a second fusion weight corresponding to the image to be processed according to at least one of a jitter weight and an ambient brightness weight, wherein the jitter weight is determined according to the pose change information, and the ambient brightness weight is determined according to the ambient brightness information.
12. The method according to claim 11, wherein the shake weight is inversely related to the pose change information;
the environment brightness weight and the environment brightness information are in a negative correlation relationship, or the environment brightness weight is determined according to an environment brightness interval to which the environment brightness information belongs.
13. The method according to claim 11, wherein if the second fusion weight is calculated based on both the dithering weight and the ambient brightness weight, the step of calculating the second fusion weight comprises:
multiplying the jitter weight corresponding to the image to be processed by the corresponding fifth proportion to obtain a fifth product;
multiplying the corresponding ambient brightness weight of the image to be processed by the corresponding sixth proportion to obtain a sixth product;
and solving the sum of the fifth product and the sixth product to obtain a second fusion weight corresponding to the image to be processed.
14. An image fusion apparatus, comprising:
the first calculation module is used for extracting image features from an image to be processed and calculating first fusion weights corresponding to all pixel points in the image to be processed according to the image features;
the external information acquisition module is used for acquiring external information corresponding to the image to be processed, wherein the external information refers to information which is acquired through external equipment and can influence the imaging effect of the image to be processed;
the second calculation module is used for calculating a second fusion weight corresponding to the image to be processed according to the external information;
the determining module is used for determining a third fusion weight corresponding to each pixel point in the image to be processed according to the first fusion weight and the second fusion weight;
and the fusion module is used for fusing the image to be processed and the reference image based on the third fusion weight corresponding to each pixel point in the image to be processed to obtain a fusion image.
15. An electronic device comprising a memory and a processor, the memory having stored thereon a computer program that, when executed by the processor, causes the processor to carry out the method of any one of claims 1 to 13.
16. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the method according to any one of claims 1 to 13.
CN202110580639.1A 2021-05-26 2021-05-26 Image fusion method and device, electronic equipment and computer readable storage medium Pending CN113313661A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110580639.1A CN113313661A (en) 2021-05-26 2021-05-26 Image fusion method and device, electronic equipment and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN113313661A true CN113313661A (en) 2021-08-27

Family

ID=77375356


Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106846241A (en) * 2015-12-03 2017-06-13 阿里巴巴集团控股有限公司 A kind of method of image co-registration, device and equipment
WO2018176925A1 (en) * 2017-03-31 2018-10-04 华为技术有限公司 Hdr image generation method and apparatus
CN107507160A (en) * 2017-08-22 2017-12-22 努比亚技术有限公司 A kind of image interfusion method, terminal and computer-readable recording medium
CN109712102A (en) * 2017-10-25 2019-05-03 杭州海康威视数字技术股份有限公司 A kind of image interfusion method, device and image capture device
CN108012080A (en) * 2017-12-04 2018-05-08 广东欧珀移动通信有限公司 Image processing method, device, electronic equipment and computer-readable recording medium
CN108259774A (en) * 2018-01-31 2018-07-06 珠海市杰理科技股份有限公司 Image combining method, system and equipment
CN110136071A (en) * 2018-02-02 2019-08-16 杭州海康威视数字技术股份有限公司 A kind of image processing method, device, electronic equipment and storage medium
CN108805898A (en) * 2018-05-31 2018-11-13 北京字节跳动网络技术有限公司 Method of video image processing and device
CN110728648A (en) * 2019-10-25 2020-01-24 北京迈格威科技有限公司 Image fusion method and device, electronic equipment and readable storage medium
CN111028190A (en) * 2019-12-09 2020-04-17 Oppo广东移动通信有限公司 Image processing method, image processing device, storage medium and electronic equipment
CN111292281A (en) * 2020-01-20 2020-06-16 安徽文香信息技术有限公司 Image processing method, device, equipment and storage medium
CN111311532A (en) * 2020-03-26 2020-06-19 深圳市商汤科技有限公司 Image processing method and device, electronic device and storage medium

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023028866A1 (en) * 2021-08-31 2023-03-09 华为技术有限公司 Image processing method and apparatus, and vehicle
CN114862735A (en) * 2022-05-23 2022-08-05 Oppo广东移动通信有限公司 Image processing method, image processing device, electronic equipment and computer readable storage medium
CN115641487A (en) * 2022-08-26 2023-01-24 青岛元动芯能源科技有限公司 Neutron and X-ray based multi-stage judgment fusion method and system
CN115641487B (en) * 2022-08-26 2023-06-27 中子时代(青岛)创新科技有限公司 Multi-stage judgment fusion method and system based on neutrons and X-rays
CN116051403A (en) * 2022-12-26 2023-05-02 新奥特(南京)视频技术有限公司 Video image processing method and device and video processing equipment
CN116188343A (en) * 2023-02-27 2023-05-30 上海玄戒技术有限公司 Image fusion method and device, electronic equipment, chip and medium
CN116228586A (en) * 2023-03-14 2023-06-06 朱桂湘 Sharpening algorithm selection system based on traversal processing
CN116228586B (en) * 2023-03-14 2023-09-08 深圳市合西科技有限公司 Sharpening algorithm selection system based on traversal processing
CN116775796A (en) * 2023-08-16 2023-09-19 交通运输部水运科学研究所 Multi-layer superimposed harbor district information display method and system
CN116775796B (en) * 2023-08-16 2023-10-31 交通运输部水运科学研究所 Multi-layer superimposed harbor district information display method and system

Similar Documents

Publication Title
CN111402135B (en) Image processing method, device, electronic equipment and computer readable storage medium
JP7003238B2 Image processing method, apparatus, and device
CN113313661A (en) Image fusion method and device, electronic equipment and computer readable storage medium
US11882357B2 (en) Image display method and device
CN110602467B (en) Image noise reduction method and device, storage medium and electronic equipment
KR101699919B1 High dynamic range image creation apparatus for removing ghost blur by using multi-exposure fusion, and method of the same
CN110473159B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN111028190A (en) Image processing method, image processing device, storage medium and electronic equipment
CN112396562B (en) Disparity map enhancement method based on fusion of RGB and DVS images in high dynamic range scene
CN107925751A System and method for multi-view noise reduction and high dynamic range
CN110349163B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN110660090B (en) Subject detection method and apparatus, electronic device, and computer-readable storage medium
CN110930301B (en) Image processing method, device, storage medium and electronic equipment
CN108616700B (en) Image processing method and device, electronic equipment and computer readable storage medium
WO2021093534A1 (en) Subject detection method and apparatus, electronic device, and computer-readable storage medium
CN108989699B (en) Image synthesis method, image synthesis device, imaging apparatus, electronic apparatus, and computer-readable storage medium
CN110266954A (en) Image processing method, device, storage medium and electronic equipment
WO2022261828A1 (en) Image processing method and apparatus, electronic device, and computer-readable storage medium
CN110796041A (en) Subject recognition method and device, electronic equipment and computer-readable storage medium
CN113643214A (en) Image exposure correction method and system based on artificial intelligence
CN113673474B (en) Image processing method, device, electronic equipment and computer readable storage medium
CN113379609A (en) Image processing method, storage medium and terminal equipment
CN111031256B (en) Image processing method, image processing device, storage medium and electronic equipment
CN110689007B (en) Subject recognition method and device, electronic equipment and computer-readable storage medium
CN116437222B (en) Image processing method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination