CN110189285B - Multi-frame image fusion method and device - Google Patents


Info

Publication number: CN110189285B (grant of application CN201910452826.4A)
Authority: CN (China)
Prior art keywords: image, reference frame, region, long exposure, registered
Legal status: Active (granted; the status listed is an assumption, not a legal conclusion)
Inventor: 王涛 (Wang Tao)
Original and current assignee: Beijing Megvii Technology Co Ltd
Other versions: CN110189285A (application publication, 2019-08-30)
Other languages: Chinese (zh)
Priority and filing date: 2019-05-28
Grant publication date: 2021-07-09

Classifications

    All entries fall under G (Physics) › G06 (Computing; Calculating or Counting) › G06T (Image data processing or generation, in general):
    • G06T 5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 5/70: Denoising; Smoothing
    • G06T 7/11: Region-based segmentation
    • G06T 7/13: Edge detection
    • G06T 7/38: Registration of image sequences
    • G06T 7/41: Analysis of texture based on statistical description of texture
    • G06T 2207/10004: Still image; Photographic image
    • G06T 2207/20221: Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)

Abstract

The invention relates to the technical field of image processing and provides a multi-frame image fusion method and device. The multi-frame image fusion method comprises the following steps: a continuous image acquisition step of acquiring a plurality of continuously shot images; a reference frame selection step of selecting one of the continuously shot images as a reference frame image and taking the rest as auxiliary images; a long-exposure image acquisition step of acquiring a long-exposure image by long-exposure shooting; a registration step of registering the long-exposure image and the auxiliary images, respectively, with the reference frame image; a region segmentation step of segmenting the registered long-exposure image into a texture region and a flat region; and an image fusion step of fusing the registered auxiliary images with the reference frame image according to the texture region and the flat region to obtain a result image. By segmenting and separately fusing the image regions where smear may and may not occur, the method removes noise and smear sensibly and yields a result image of high sharpness.

Description

Multi-frame image fusion method and device
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a method and an apparatus for fusing multiple frames of images.
Background
Multi-frame image fusion algorithms are now widely used in industry because, compared with single-frame processing, they denoise better and preserve detail texture. To limit memory use and algorithm runtime, the common scheme is: the camera continuously shoots 5-10 images, one of them is selected as a reference frame, the other images are registered to it, and the registered images are fused to obtain a denoised image.
However, in an actual multi-frame shooting scene, motion blur of varying severity appears when the images are fused. For example, continuously shooting fast-moving subjects, such as playing children or running pets, produces serious smear during multi-frame fusion and severely degrades denoising and imaging quality. In high-noise scenes, much of the noise is falsely detected as motion, which also harms denoising.
Disclosure of Invention
In order to solve the above problems in the prior art, the present invention provides a method and an apparatus for fusing multiple frames of images.
In a first aspect, an embodiment of the present invention provides a multi-frame image fusion method, the method comprising: a continuous image acquisition step of acquiring a plurality of continuously shot images by continuous shooting; a reference frame selection step of selecting one reference frame image from the plurality of continuously shot images and taking the rest as auxiliary images; a long-exposure image acquisition step of acquiring a long-exposure image, the long-exposure image being shot by long exposure at the same time as the continuous shooting, its exposure time being longer than that of a single continuously shot image; a registration step of registering the long-exposure image and the auxiliary images, respectively, with the reference frame image; a region segmentation step of segmenting the registered long-exposure image into a texture region and a flat region; and an image fusion step of fusing the registered auxiliary images with the reference frame image region by region according to the texture region and the flat region to obtain a result image.
In one embodiment, the image fusion step comprises: obtaining a texture region and a flat region of the registered reference frame image according to the texture region and the flat region of the registered long-exposure image; obtaining a texture region and a flat region of the registered auxiliary image according to the texture region and the flat region of the registered reference frame image; fusing the texture region of the registered reference frame image with the texture region of the registered auxiliary image; and fusing the flat region of the registered reference frame image with the flat region of the registered auxiliary image.
In one embodiment, fusing the texture region of the registered reference frame image with the texture region of the registered auxiliary image comprises: directly taking the texture region of the registered reference frame image as the fusion result for the texture region.
In one embodiment, fusing the texture region of the registered reference frame image with the texture region of the registered auxiliary image comprises: obtaining the pixel difference between each pixel point in the texture region of the registered auxiliary image and the corresponding pixel point of the reference frame image in the texture region; fusing the corresponding pixel points when the pixel difference is smaller than a preset threshold; and excluding the corresponding pixel points of the auxiliary image from the fusion when the pixel difference is greater than or equal to the preset threshold.
In one embodiment, the reference frame selection step comprises: performing edge detection on the plurality of continuously shot images, obtaining the sharpness of each continuously shot image through the edge detection, and selecting the sharpest continuously shot image as the reference frame image.
In one embodiment, the registration step includes registering the long-exposure image and the reference frame image by an MTB algorithm.
In one embodiment, the long exposure image and the continuously shot images are shot simultaneously, of the same scene, through different cameras.
In one embodiment, the image fusion step performs the fusion by direct averaging or by bilateral weighting.
In a second aspect, an embodiment of the present invention provides a multi-frame image fusion apparatus, comprising: a continuous image acquisition module for acquiring a plurality of continuously shot images through continuous shooting; a reference frame selection module for selecting one reference frame image from the plurality of continuously shot images and taking the rest as auxiliary images; a long-exposure image acquisition module for acquiring a long-exposure image, the long-exposure image being shot by long exposure at the same time as the continuous shooting, its exposure time being longer than that of a single continuously shot image; a registration module for registering the long-exposure image and the auxiliary images, respectively, with the reference frame image; a region segmentation module for segmenting the registered long-exposure image into a texture region and a flat region; and an image fusion module for fusing the registered auxiliary images with the reference frame image region by region according to the texture region and the flat region to obtain a result image.
In a third aspect, an embodiment of the present invention provides an electronic device, where the electronic device includes: a memory to store instructions; and the processor is used for calling the instruction stored in the memory to execute the multi-frame image fusion method.
In a fourth aspect, embodiments of the present invention provide a computer-readable storage medium, where the computer-readable storage medium stores computer-executable instructions that, when executed by a processor, perform a multi-frame image fusion method.
The multi-frame image fusion method and device suit multi-camera equipment: the cameras capture images simultaneously, so no extra capture time is consumed. Segmenting the flat region and the texture region after registering the long-exposure image with the reference frame image distinguishes the regions of the continuously shot images where smear may or may not occur, and keeps the segmentation free of noise interference. Because the long-exposure image need not have high resolution, the device that takes it may be a low-end one. Fusing the registered continuous images region by region according to the flat and texture regions enables reasonable denoising and smear elimination; with this method and device, an image that is properly denoised and largely free of smear can be obtained.
Drawings
The above and other objects, features and advantages of embodiments of the present invention will become readily apparent from the following detailed description read in conjunction with the accompanying drawings. Several embodiments of the invention are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which:
Fig. 1 is a schematic diagram illustrating a multi-frame image fusion method according to an embodiment of the present invention;
Fig. 2 is a schematic diagram illustrating a multi-frame image fusion apparatus according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of an electronic device provided by an embodiment of the invention.
In the drawings, the same or corresponding reference numerals indicate the same or corresponding parts.
Detailed Description
The principles and spirit of the present invention will be described with reference to a number of exemplary embodiments. It is understood that these embodiments are given solely for the purpose of enabling those skilled in the art to better understand and to practice the invention, and are not intended to limit the scope of the invention in any way.
It should be noted that although the expressions "first", "second", etc. are used herein to describe different modules, steps, data, etc. of the embodiments of the present invention, the expressions "first", "second", etc. are merely used to distinguish between different modules, steps, data, etc. and do not indicate a particular order or degree of importance. Indeed, the terms "first," "second," and the like are fully interchangeable.
Fig. 1 is a flow chart of an embodiment of a multi-frame image fusion method 10. As shown in Fig. 1, this example method comprises: a continuous image acquisition step 110, a reference frame selection step 120, a long-exposure image acquisition step 130, a registration step 140, a region segmentation step 150, and an image fusion step 160. The respective steps in Fig. 1 are explained in detail below.
A continuous image acquisition step 110 of acquiring a plurality of continuously shot images by continuous shooting.
In this embodiment, a plurality of continuously shot images, for example, 5 to 10 continuously shot images, are obtained by shooting through a lens of a shooting device such as a camera or a mobile phone, and are used for multi-frame fusion to remove noise.
A reference frame selecting step 120, selecting one reference frame image from the plurality of continuously shot images, and using the rest as auxiliary images.
In this embodiment, the acquired continuously shot images are divided into a reference frame image and auxiliary images: one of them is selected as the reference frame image and serves as the reference in the subsequent registration and fusion, and the remaining continuously shot images are the auxiliary images. This arrangement facilitates reasonable denoising during fusion and helps avoid the influence of noise and smear.
A long exposure image obtaining step 130, obtaining a long exposure image, wherein the long exposure image is shot by long exposure while continuously shooting, and the exposure time of the long exposure image is longer than that of shooting a single continuously shot image.
In the present embodiment, while the images are being continuously shot, the same scene as the continuous images is shot by long exposure through another lens, thereby obtaining a long-exposure image. The longer the exposure time, the more clearly the motion of an object is captured. On the premise that the outline of the scene remains clearly visible, the exposure time of the long-exposure image is best chosen to be about 10 times that of a single continuous frame; that is, 10 continuously shot images can be captured while one long-exposure image is taken, at which point the motion trails are obvious and the image is clear. In one example, the long-exposure image is acquired simultaneously with the continuously shot images; in another example, long-exposure images are acquired in batches together with the continuously shot images. The long-exposure image helps distinguish the motion of objects, keeps the region segmentation free of noise interference, and helps avoid the influence of smear when the continuously shot images are fused.
A registration step 140, registering the long exposure image and the auxiliary image with the reference frame image, respectively.
In this embodiment, taking the reference frame image as the standard, features are extracted from the continuously shot images and the long-exposure image; feature point pairs matching the auxiliary images and the long-exposure image to the reference frame image are found through similarity measurement; image space coordinate transform parameters are obtained from the matched feature point pairs; and image registration is performed according to these parameters, unifying the coordinate systems of all the continuously shot images and the long-exposure image. This facilitates subsequent image segmentation and fusion and helps with denoising and smear removal. In one example, the long-exposure image is registered at the same time as the continuously shot images. In another example, they are handled in batches: the auxiliary images are first registered with the reference frame image, then the long-exposure image corresponding to the continuous shooting is acquired and registered against the reference frame image.
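As a concrete illustration of this step, the following is a minimal sketch of feature-based registration, assuming OpenCV and NumPy are available; the function name and parameters are placeholders, and ORB stands in for whichever detector is chosen (SIFT and FAST are mentioned as alternatives further below):

import cv2
import numpy as np

def register_to_reference(image, reference, max_features=2000, keep_ratio=0.8):
    # Detect and describe features on grayscale copies of both frames.
    orb = cv2.ORB_create(max_features)
    kp1, des1 = orb.detectAndCompute(cv2.cvtColor(image, cv2.COLOR_BGR2GRAY), None)
    kp2, des2 = orb.detectAndCompute(cv2.cvtColor(reference, cv2.COLOR_BGR2GRAY), None)

    # Similarity measure: Hamming distance between descriptors; keep the best pairs.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    matches = matches[: int(len(matches) * keep_ratio)]

    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # Image space coordinate transform parameters from the matched point pairs.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    h, w = reference.shape[:2]
    return cv2.warpPerspective(image, H, (w, h))

Each auxiliary image and the long-exposure image would be passed through such a function against the reference frame, after which all frames share the reference frame's coordinate system.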
And a region segmentation step 150 of performing region segmentation on the registered long exposure image to segment the long exposure image into a texture region and a flat region.
In this embodiment, the registered long-exposure image is processed by an edge extraction algorithm, such as the Sobel edge operator, LoG (Laplacian of Gaussian), or the Canny edge extractor, and region segmentation is performed according to the density of the extracted edges: regions with dense edges are texture regions, and regions with sparse edges are flat regions. The texture region is where smear may occur; the flat region is where smear cannot occur. Since a moving object necessarily has texture, every position in the image where smear may appear falls into a texture region. Compared with a normally shot picture, using the long-exposure image for region segmentation accurately separates the regions that may contain a moving object or smear while avoiding noise interference.
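A minimal sketch of such edge-density segmentation, assuming OpenCV and NumPy; the window size and density threshold are illustrative values, not ones prescribed by the text:

import cv2
import numpy as np

def segment_texture_flat(long_exposure_bgr, density_win=31, density_thresh=0.05):
    gray = cv2.cvtColor(long_exposure_bgr, cv2.COLOR_BGR2GRAY)
    # Any of the edge extractors named above would serve; Canny is used here.
    edges = cv2.Canny(gray, 50, 150)
    # Local edge density: fraction of edge pixels in a window around each pixel.
    density = cv2.boxFilter((edges > 0).astype(np.float32), -1,
                            (density_win, density_win))
    texture_mask = density >= density_thresh   # dense edges: smear may occur
    flat_mask = ~texture_mask                  # sparse edges: smear cannot occur
    return texture_mask, flat_mask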
And an image fusion step 160, fusing the registered auxiliary image and the reference frame image in regions according to the texture region and the flat region to obtain a result image.
In this embodiment, the registered auxiliary images and the reference frame image are fused separately by region, which helps denoise reasonably and eliminate the influence of smear false detection, thereby improving the sharpness of the result image.
In one example, the image fusion step 160 includes: obtaining a texture region and a flat region of the registered reference frame image according to the texture region and the flat region of the registered long exposure image; acquiring a texture region and a flat region of the auxiliary image according to the texture region and the flat region of the registered reference frame image; fusing the texture region of the registered reference frame image with the texture region of the auxiliary image; and fusing the flat area of the registered reference frame image with the flat area of the auxiliary image.
Through registration, every pixel point in the auxiliary images and the long-exposure image has a matching pixel point in the reference frame image. Because the long-exposure image is registered with the reference frame image, the pixel points of the texture and flat regions segmented on the long-exposure image can be found at the same positions in the reference frame image, so the reference frame image inherits the corresponding texture and flat regions. Likewise, the registered auxiliary images are segmented against the segmented reference frame image, giving each auxiliary image texture and flat regions that correspond to those of the reference frame. In the end, the long-exposure image, the reference frame image, and the auxiliary images share mutually corresponding texture and flat regions. Since no smear can occur in the flat region, the flat-region pixels of the reference frame image and the continuously shot images are fused directly by multi-image fusion, which reduces image noise and is convenient and fast, as sketched below. Registering the continuously shot images with the long-exposure image separates the regions where smear may occur from those where it cannot; fusing by region helps avoid the noise interference, smear, and noise amplification that direct fusion of several continuously shot images can cause.
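Because registration puts every frame in the reference frame's coordinate system, the masks computed on the long-exposure image index the reference and auxiliary frames directly, and the flat region can be fused by plain multi-image averaging. A minimal sketch under that assumption (NumPy; all names are placeholders):

import numpy as np

def fuse_flat_region(reference, auxiliaries, flat_mask):
    # Average all registered frames inside the flat region; leave the
    # texture region as the reference pixels for the later texture fusion.
    stack = np.stack([reference] + list(auxiliaries)).astype(np.float32)
    mean = stack.mean(axis=0)
    result = reference.astype(np.float32).copy()
    result[flat_mask] = mean[flat_mask]   # same coordinates in every frame
    return np.clip(result, 0, 255).astype(np.uint8)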
In one embodiment, fusing the texture region of the registered reference frame image with the texture region of the registered auxiliary image comprises: directly taking the texture region of the registered reference frame image as the fusion result for the texture region. To avoid producing smear during multi-frame fusion, when the auxiliary images and the reference frame image are fused in the texture region, the pixels of the reference frame image in the texture region are selected directly as the fusion result. This saves the time of fusing a large number of images and raises the speed of multi-frame image fusion.
In one embodiment, fusing the texture region of the registered reference frame image with the texture region of the registered auxiliary image includes: obtaining pixel differences between each pixel point in the texture region of the registered auxiliary image and each corresponding pixel point of the reference frame image in the texture region, and fusing the corresponding pixel points when the pixel differences are smaller than a preset threshold value; and when the pixel difference is greater than or equal to the preset threshold value, the pixel points of the corresponding auxiliary images do not participate in fusion.
In the texture region, a pixel-difference threshold between corresponding pixel points of the reference frame image and the auxiliary images is preset; the pixel difference is then computed point by point, corresponding pixel points whose difference from the reference frame is smaller than the threshold are fused, and pixel points whose difference is greater than or equal to the threshold do not participate. For example, let the preset threshold be 20 and consider point A over five continuously shot images with values 159, 169, 186, 145, and 149, where the second image is the reference frame and the other four are auxiliary images. The differences from the reference at point A are 10 for the first image and 17 for the third, both below 20; 24 for the fourth, above 20; and 20 for the fifth, equal to the threshold. Therefore the values of the first, second, and third images at point A are fused, and that fused value is the multi-frame result at point A; the fourth and fifth images do not take part. In another example, with the same pixel values but a preset threshold of 10, every auxiliary difference at point A is greater than or equal to 10, so the corresponding pixel point of the registered reference frame image in the texture region is taken directly as the fusion result. Presetting the pixel-difference threshold improves the accuracy of fusion in the texture region, and the resulting image is clearer and more faithful.
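A minimal per-pixel sketch of this thresholded texture fusion, reproducing the worked example above; averaging the surviving pixels is one plausible fusion rule (the text also allows bilateral weighting), and all names are placeholders:

def fuse_texture_pixel(ref_value, aux_values, threshold):
    # Only auxiliary values whose difference from the reference is strictly
    # below the threshold participate; the reference always participates.
    kept = [v for v in aux_values if abs(v - ref_value) < threshold]
    return (ref_value + sum(kept)) / (1 + len(kept))

# Point A over five frames; frame 2 (value 169) is the reference frame.
aux = [159, 186, 145, 149]
print(fuse_texture_pixel(169, aux, threshold=20))  # frames 1-3 fuse: ~171.3
print(fuse_texture_pixel(169, aux, threshold=10))  # none pass: 169.0, reference only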
In one embodiment, the long-exposure image and the continuously shot images are shot simultaneously, of the same scene, through different cameras. Using different cameras at the same moment adds no extra capture time, and since the shots cover the same scene, the motion of objects stays consistent between the continuously shot images and the long-exposure image during segmentation. The differing exposures also make it possible to avoid noise interference sensibly and to preserve the detail texture of the image. Because the long-exposure image only has to be classified into flat and texture regions, neither pixel-level registration nor particularly high resolution is required, so the device capturing it may be low-resolution hardware or a terminal device, for example a mobile phone with two cameras, a main camera and an auxiliary camera, where the main camera collects the continuously shot images and the auxiliary camera collects the long-exposure image.
In an embodiment, the reference frame selection step 120 comprises: performing edge detection on the plurality of continuously shot images, obtaining the sharpness of each image through the edge detection, and selecting the sharpest continuously shot image as the reference frame image. Edge detection is performed with an edge extraction algorithm such as an edge-detection operator or LoG (Laplacian of Gaussian), and the frame with the highest sharpness among the continuously shot images is selected as the reference frame image. Choosing a sharp image improves the accuracy and reliability of the fused image and reduces the interference of noise.
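A minimal sketch of such sharpness-based selection, assuming OpenCV and NumPy; scoring each frame by the variance of its Laplacian response is one common edge-based sharpness measure, not the only one the text would admit:

import cv2
import numpy as np

def pick_reference(frames):
    def sharpness(img):
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        # Edge response via the Laplacian (the LoG family named above);
        # a sharper frame has stronger, higher-variance edges.
        return cv2.Laplacian(gray, cv2.CV_64F).var()
    ref_idx = int(np.argmax([sharpness(f) for f in frames]))
    auxiliaries = [f for i, f in enumerate(frames) if i != ref_idx]
    return frames[ref_idx], auxiliaries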
In one embodiment, the registration step 140 comprises registering the long-exposure image and the reference frame image by the MTB (median threshold bitmap) algorithm. MTB solves the registration of images with different exposures by binarizing each image at a pixel percentile (the median), building a binarized pyramid per image, and then searching for offsets in the horizontal and vertical directions. A long-exposure shot may drift by translation but will hardly rotate, and even some rotation would not draw an observer's attention, so registering the long-exposure image and the reference frame image with MTB saves time and raises the registration speed. In another embodiment, an algorithm such as SIFT (scale-invariant feature transform) or the FAST feature point detector may also be used for image registration.
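OpenCV ships an implementation of this median-threshold-bitmap method as AlignMTB; a minimal sketch of using it for the translation-only registration described here, assuming 8-bit BGR inputs:

import cv2

def align_long_exposure(long_exposure_bgr, reference_bgr):
    mtb = cv2.createAlignMTB()
    g_ref = cv2.cvtColor(reference_bgr, cv2.COLOR_BGR2GRAY)
    g_long = cv2.cvtColor(long_exposure_bgr, cv2.COLOR_BGR2GRAY)
    # Search the binarized pyramids for the horizontal/vertical offset that
    # shifts the second image (long exposure) to match the first (reference).
    shift = mtb.calculateShift(g_ref, g_long)
    # Apply the translation to the full-color long-exposure frame.
    return mtb.shiftMat(long_exposure_bgr, shift)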
In one embodiment, the image fusion step 160 performs the fusion by direct averaging or by bilateral weighting. Because the continuously shot images capture the same scene, the pixel difference between corresponding areas of the frames is small, so the average of the pixels of the multiple images over the same area is often taken as the fused pixel for that area. Alternatively, the continuously shot images are decomposed at multiple scales with a variable-parameter cross bilateral filter, different decomposition layers are given different weights, and the images are fused according to the weight ratios.
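A minimal sketch of one simplified reading of the bilateral-weighted variant, using only the range (photometric) kernel rather than the full multi-scale cross bilateral decomposition described above; sigma is an illustrative parameter, and direct averaging is the special case of equal weights:

import numpy as np

def fuse_weighted(reference, auxiliaries, sigma=15.0):
    ref = reference.astype(np.float32)
    acc = ref.copy()            # the reference frame gets weight 1
    wsum = np.ones_like(ref)
    for aux in auxiliaries:
        a = aux.astype(np.float32)
        # Pixels that deviate strongly from the reference (noise, residual
        # motion) receive small weights and contribute little.
        w = np.exp(-((a - ref) ** 2) / (2.0 * sigma ** 2))
        acc += w * a
        wsum += w
    return np.clip(acc / wsum, 0, 255).astype(np.uint8)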
Fig. 2 shows an exemplary configuration diagram of the multi-frame image fusion apparatus 20. As shown in Fig. 2, the multi-frame image fusion apparatus includes: a continuous image acquisition module 210, configured to acquire a plurality of continuously shot images through continuous shooting; a reference frame selection module 220, configured to select one reference frame image from the multiple continuously shot images and to use the rest as auxiliary images; a long-exposure image acquisition module 230, configured to acquire a long-exposure image, the long-exposure image being shot by long exposure at the same time as the continuous shooting, with an exposure time longer than that of a single continuously shot image; a registration module 240, configured to register the long-exposure image and the auxiliary images, respectively, with the reference frame image; a region segmentation module 250, configured to segment the registered long-exposure image into a texture region and a flat region; and an image fusion module 260, configured to fuse the registered auxiliary images with the reference frame image region by region according to the texture region and the flat region to obtain a result image.
The functions implemented by the modules in the apparatus correspond to the steps in the method described above, and for concrete implementation of the technical effects, please refer to the description of the method steps above, which is not described herein again.
As shown in Fig. 3, one embodiment of the present invention provides an electronic device 30. The electronic device 30 includes a memory 310, a processor 320, and an Input/Output (I/O) interface 330. The memory 310 is used for storing instructions, and the processor 320 is used for calling the instructions stored in the memory 310 to execute the multi-frame image fusion method of the embodiment of the invention. The processor 320 is connected to the memory 310 and the I/O interface 330, for example, via a bus system and/or another connection mechanism (not shown). The memory 310 may be used to store programs and data, including the multi-frame image fusion program of the embodiment of the present invention; the processor 320 executes the functional applications and data processing of the electronic device 30 by running the programs stored in the memory 310.
In an embodiment of the present invention, the processor 320 may be implemented in hardware as at least one of a Digital Signal Processor (DSP), a Field-Programmable Gate Array (FPGA), or a Programmable Logic Array (PLA); the processor 320 may also be one or a combination of several Central Processing Units (CPUs) or other processing units with data-processing capability and/or instruction-execution capability.
Memory 310 in embodiments of the present invention may comprise one or more computer program products that may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile Memory may include, for example, a Random Access Memory (RAM), a cache Memory (cache), and/or the like. The nonvolatile Memory may include, for example, a Read-Only Memory (ROM), a Flash Memory (Flash Memory), a Hard Disk Drive (HDD), a Solid-State Drive (SSD), or the like.
In the embodiment of the present invention, the I/O interface 330 may be used to receive input instructions (e.g., numeric or character information, or key signals related to user settings and function control of the electronic device 30) and to output various information (e.g., images or sounds) to the outside. The I/O interface 330 may comprise one or more of a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a mouse, a joystick, a trackball, a microphone, a speaker, and a touch panel.
In some embodiments, the invention provides a computer-readable storage medium having stored thereon computer-executable instructions that, when executed by a processor, perform any of the methods described above.
Although operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in serial order, or that all illustrated operations be performed, to achieve desirable results. In certain environments, multitasking and parallel processing may be advantageous.
The methods and apparatus of the present invention can be implemented with standard programming techniques, using rule-based logic or other logic to accomplish the various method steps. It should also be noted that the words "means" and "module," as used herein and in the claims, are intended to encompass implementations using one or more lines of software code, and/or hardware implementations, and/or equipment for receiving inputs.
Any of the steps, operations, or procedures described herein may be performed or implemented using one or more hardware or software modules, alone or in combination with other devices. In one embodiment, the software modules are implemented using a computer program product comprising a computer readable medium containing computer program code, which is executable by a computer processor for performing any or all of the described steps, operations, or procedures.
The foregoing description of the implementation of the invention has been presented for purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise form disclosed, and modifications and variations are possible in light of the above teachings or may be acquired from practice of the invention. The embodiments were chosen and described in order to explain the principles of the invention and its practical application to enable one skilled in the art to utilize the invention in various embodiments and with various modifications as are suited to the particular use contemplated.

Claims (10)

1. A multi-frame image fusion method comprises the following steps:
a continuous image acquisition step of acquiring a plurality of continuously shot images by continuous shooting;
a reference frame selecting step of selecting one reference frame image from the plurality of continuously shot images, and taking the rest as auxiliary images;
a long exposure image obtaining step, obtaining a long exposure image, wherein the long exposure image is shot by long exposure at the same time as the continuous shooting, and the exposure time of the long exposure image is longer than that of a single continuously shot image;
a registration step, in which the long exposure image and the auxiliary image are respectively registered with the reference frame image;
a region segmentation step, namely performing region segmentation on the registered long exposure image to segment the long exposure image into a texture region and a flat region;
an image fusion step, namely fusing the registered auxiliary image and the reference frame image region by region according to the texture region and the flat region to obtain a result image; wherein the image fusion step comprises:
obtaining the texture region and the flat region of the reference frame image after registration according to the texture region and the flat region of the long exposure image after registration;
obtaining the texture region and the flat region of the auxiliary image after registration according to the texture region and the flat region of the reference frame image after registration;
fusing the texture region of the registered reference frame image with the texture region of the registered auxiliary image, and fusing the flat region of the registered reference frame image with the flat region of the registered auxiliary image.
2. The method of claim 1, wherein fusing the texture region of the registered reference frame image with the texture region of the registered auxiliary image comprises: directly taking the texture region of the registered reference frame image as the fusion result for the texture region.
3. The method of claim 1, wherein fusing the texture region of the registered reference frame image with the texture region of the registered auxiliary image comprises: obtaining pixel differences between each pixel point in the texture region of the registered auxiliary image and each pixel point corresponding to the registered reference frame image in the texture region, and fusing the corresponding pixel points when the pixel differences are smaller than a preset threshold value; and when the pixel difference is greater than or equal to the preset threshold value, the corresponding pixel points of the auxiliary image do not participate in fusion.
4. The method of claim 1, wherein the reference frame selecting step comprises: performing edge detection on the plurality of continuously shot images, obtaining the sharpness of each continuously shot image through the edge detection, and selecting the continuously shot image with the highest sharpness as the reference frame image.
5. The method of claim 1, wherein the registering step comprises: registering the long exposure image and the reference frame image through a median threshold bitmap (MTB) alignment algorithm.
6. The method of claim 1, wherein the long exposure image and the continuously captured image are captured simultaneously for the same scene by different cameras, respectively.
7. The method according to any one of claims 1-6, wherein the image fusion step comprises: the fusion is performed by means of direct averaging or bilateral weighting.
8. A multi-frame image fusion apparatus, comprising:
the continuous image acquisition module is used for acquiring a plurality of continuous shooting images through continuous shooting;
a reference frame selecting module, configured to select one reference frame image from the multiple continuously shot images, and use the rest as auxiliary images;
the long exposure image acquisition module is used for acquiring a long exposure image, the long exposure image being shot by long exposure at the same time as the continuous shooting, and the exposure time of the long exposure image being longer than that of a single continuously shot image;
a registration module, configured to register the long-exposure image and the auxiliary image with the reference frame image respectively;
the region segmentation module is used for carrying out region segmentation on the registered long exposure image to segment the long exposure image into a texture region and a flat region;
the image fusion module is used for fusing the registered auxiliary image and the reference frame image region by region according to the texture region and the flat region to obtain a result image; wherein the image fusion module is further used for: obtaining the texture region and the flat region of the reference frame image after registration according to the texture region and the flat region of the long exposure image after registration;
obtaining the texture region and the flat region of the auxiliary image after registration according to the texture region and the flat region of the reference frame image after registration;
fusing the texture region of the registered reference frame image with the texture region of the registered auxiliary image, and fusing the flat region of the registered reference frame image with the flat region of the registered auxiliary image.
9. An electronic device, wherein the electronic device comprises:
a memory to store instructions; and
a processor for invoking the memory-stored instructions to perform the multi-frame image fusion method of any one of claims 1-7.
10. A computer-readable storage medium having stored thereon computer-executable instructions which, when executed by a processor, perform the multi-frame image fusion method of any one of claims 1-7.
CN201910452826.4A 2019-05-28 2019-05-28 Multi-frame image fusion method and device Active CN110189285B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910452826.4A CN110189285B (en) 2019-05-28 2019-05-28 Multi-frame image fusion method and device


Publications (2)

Publication Number Publication Date
CN110189285A (en) 2019-08-30
CN110189285B (en) 2021-07-09 (grant, this document)

Family

ID=67718326

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910452826.4A Active CN110189285B (en) 2019-05-28 2019-05-28 Multi-frame image fusion method and device

Country Status (1)

Country Link
CN (1) CN110189285B (en)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110689502B (en) * 2019-10-09 2022-06-14 深圳看到科技有限公司 Image processing method and related device
CN110717878B (en) * 2019-10-12 2022-04-15 北京迈格威科技有限公司 Image fusion method and device, computer equipment and storage medium
CN111091506A (en) * 2019-12-02 2020-05-01 RealMe重庆移动通信有限公司 Image processing method and device, storage medium and electronic equipment
CN111028189B (en) * 2019-12-09 2023-06-27 Oppo广东移动通信有限公司 Image processing method, device, storage medium and electronic equipment
CN112016418B (en) * 2020-08-18 2023-12-22 中国农业大学 Secant recognition method and device, electronic equipment and storage medium
CN112767295A (en) * 2021-01-14 2021-05-07 Oppo广东移动通信有限公司 Image processing method, image processing apparatus, storage medium, and electronic device
CN112887515B (en) * 2021-01-26 2023-09-19 维沃移动通信有限公司 Video generation method and device
CN112887623B (en) * 2021-01-28 2022-11-29 维沃移动通信有限公司 Image generation method and device and electronic equipment
CN113409209A (en) * 2021-06-17 2021-09-17 Oppo广东移动通信有限公司 Image deblurring method and device, electronic equipment and storage medium
CN113905185B (en) * 2021-10-27 2023-10-31 锐芯微电子股份有限公司 Image processing method and device
CN113706421B (en) * 2021-10-27 2022-02-22 深圳市慧鲤科技有限公司 Image processing method and device, electronic equipment and storage medium
CN114387248B (en) * 2022-01-12 2022-11-25 苏州天准科技股份有限公司 Silicon material melting degree monitoring method, storage medium, terminal and crystal pulling equipment
CN114821030B (en) * 2022-04-11 2023-04-04 苏州振旺光电有限公司 Planet image processing method, system and device
CN115439386A (en) * 2022-09-06 2022-12-06 维沃移动通信有限公司 Image fusion method and device, electronic equipment and storage medium
CN116740099B (en) * 2023-08-15 2023-11-14 南京博视医疗科技有限公司 OCT image segmentation method and device

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103973989A (en) * 2014-04-15 2014-08-06 北京理工大学 Method and system for obtaining high-dynamic images
US8885976B1 (en) * 2013-06-20 2014-11-11 Cyberlink Corp. Systems and methods for performing image fusion
CN104144298A (en) * 2014-07-16 2014-11-12 浙江宇视科技有限公司 Wide dynamic image synthesis method
CN105430263A (en) * 2015-11-24 2016-03-23 努比亚技术有限公司 Long-exposure panoramic image photographing device and method
CN105657244A (en) * 2015-11-06 2016-06-08 乐视移动智能信息技术(北京)有限公司 Anti-shake photographing method and apparatus, and mobile terminal
CN108419023A (en) * 2018-03-26 2018-08-17 华为技术有限公司 A kind of method and relevant device generating high dynamic range images
CN109167931A (en) * 2018-10-23 2019-01-08 Oppo广东移动通信有限公司 Image processing method, device, storage medium and mobile terminal
CN109410130A (en) * 2018-09-28 2019-03-01 华为技术有限公司 Image processing method and image processing apparatus


Also Published As

Publication number Publication date
CN110189285A (en) 2019-08-30

Similar Documents

Publication Publication Date Title
CN110189285B (en) Multi-frame image fusion method and device
WO2019233264A1 (en) Image processing method, computer readable storage medium, and electronic device
GB2501810B (en) Method for determining the extent of a foreground object in an image
CN111091590B (en) Image processing method, device, storage medium and electronic equipment
KR101524548B1 (en) Apparatus and method for alignment of images
US11538175B2 (en) Method and apparatus for detecting subject, electronic device, and computer readable storage medium
CN112514373B (en) Image processing apparatus and method for feature extraction
WO2013145589A1 (en) Image-processing device, image-capturing device, and image-processing method
US20230214981A1 (en) Method for detecting appearance defects of a product and electronic device
CN108875504B (en) Image detection method and image detection device based on neural network
CN107909554B (en) Image noise reduction method and device, terminal equipment and medium
CN110796041B (en) Principal identification method and apparatus, electronic device, and computer-readable storage medium
CN109447022B (en) Lens type identification method and device
US9466095B2 (en) Image stabilizing method and apparatus
CN113066088A (en) Detection method, detection device and storage medium in industrial detection
CN112418243A (en) Feature extraction method and device and electronic equipment
CN111161299B (en) Image segmentation method, storage medium and electronic device
CN112330618B (en) Image offset detection method, device and storage medium
Patro Design and implementation of novel image segmentation and BLOB detection algorithm for real-time video surveillance using DaVinci processor
US11373277B2 (en) Motion detection method and image processing device for motion detection
CN111080683B (en) Image processing method, device, storage medium and electronic equipment
CN113052019A (en) Target tracking method and device, intelligent equipment and computer storage medium
CN105049706A (en) Image processing method and terminal
WO2024016632A1 (en) Bright spot location method, bright spot location apparatus, electronic device and storage medium
Xu et al. Features based spatial and temporal blotch detection for archive video restoration

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant