CN114049288A - Image generation method and device, electronic equipment and computer-readable storage medium - Google Patents

Image generation method and device, electronic equipment and computer-readable storage medium Download PDF

Info

Publication number
CN114049288A
CN114049288A
Authority
CN
China
Prior art keywords
frame
image
reference frame
block
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111308827.5A
Other languages
Chinese (zh)
Inventor
廖玹颖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202111308827.5A priority Critical patent/CN114049288A/en
Publication of CN114049288A publication Critical patent/CN114049288A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 - Image enhancement or restoration
    • G06T 5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/20 - Analysis of motion
    • G06T 7/223 - Analysis of motion using block-matching
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10004 - Still image; Photographic image
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20092 - Interactive image processing based on input by user
    • G06T 2207/20104 - Interactive definition of region of interest [ROI]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20212 - Image combination
    • G06T 2207/20221 - Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Studio Devices (AREA)

Abstract

The application relates to an image generation method and apparatus, an electronic device, and a storage medium. The method comprises the following steps: determining a reference frame from a plurality of image frames obtained by shooting the same scene; performing block matching between the reference frame and each fused frame other than the reference frame to determine a first block in each fused frame corresponding to a reference block in the reference frame; performing noise estimation on the reference frame and each fused frame to obtain a noise estimation result, and adjusting the first block in each fused frame based on the noise estimation result to obtain a corresponding second block; and merging the reference block in the reference frame with the corresponding second block in each fused frame to generate a target image. By adopting the method, ghosts in the image can be removed.

Description

Image generation method and device, electronic equipment and computer-readable storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to an image generation method and apparatus, an electronic device, and a computer-readable storage medium.
Background
With the development of computer technology, image signal processing technology has emerged. The electronic device inputs an image into an image signal processing (ISP) pipeline and processes the image to obtain the image required by the user.
However, conventional image generation methods fuse a plurality of processed images, and the fused images suffer from ghosting.
Disclosure of Invention
The embodiment of the application provides an image generation method, an image generation device, electronic equipment and a computer readable storage medium, which can remove ghosts in images.
An image generation method, comprising:
determining a reference frame from a plurality of image frames obtained by shooting the same scene;
respectively carrying out block matching on the reference frame and each fused frame except the reference frame to determine a first block in each fused frame corresponding to a reference block in the reference frame;
noise estimation is carried out on the reference frame and each fusion frame to obtain a noise estimation result, and the first block in each fusion frame is adjusted based on the noise estimation result to obtain a corresponding second block;
and combining the reference block in the reference frame and the corresponding second block in each fusion frame to generate a target image.
An image generation apparatus comprising:
the reference frame determining module is used for determining a reference frame from a plurality of image frames obtained by shooting the same scene;
the block matching module is used for respectively performing block matching on the reference frame and each fused frame except the reference frame to determine a first block in each fused frame corresponding to a reference block in the reference frame;
the noise estimation module is used for carrying out noise estimation on the reference frame and each fusion frame to obtain a noise estimation result, and adjusting the first block in each fusion frame based on the noise estimation result to obtain a corresponding second block;
and the block merging module is used for merging the reference block in the reference frame and the corresponding second block in each fused frame to generate a target image.
An electronic device comprising a memory and a processor, the memory having stored therein a computer program that, when executed by the processor, causes the processor to perform the steps of the image generation method as described above.
A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method as described above.
According to the image generation method and apparatus, the electronic device, and the computer-readable storage medium, a reference frame is determined from a plurality of image frames obtained by shooting the same scene; block matching is performed between the reference frame and each fused frame other than the reference frame to determine a first block in each fused frame corresponding to the reference block in the reference frame; noise estimation is then performed on the reference frame and each fused frame to obtain a noise estimation result, and the first block in each fused frame is adjusted based on the noise estimation result to obtain a more accurate second block. The reference block in the reference frame and the corresponding second block in each fused frame can therefore be merged more accurately, eliminating ghosts in the image and generating a target image with higher definition.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. It is apparent that the drawings in the following description show only some embodiments of the present application, and that those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flow diagram of an image generation method in one embodiment;
FIG. 2 is an image frame including texture in one embodiment;
FIG. 3 is a diagram illustrating the calculation of sharpness for an image frame in one embodiment;
FIG. 4 is a flow diagram that illustrates steps in one embodiment for determining a reference frame from a plurality of image frames captured of the same scene;
FIG. 5 is a schematic diagram of an image pyramid in one embodiment;
FIG. 6 is a diagram of block matching in one embodiment;
FIG. 7 is a flow diagram illustrating noise estimation in one embodiment;
FIG. 8 is a diagram of block merging in one embodiment;
FIG. 9 is a schematic flow chart of image generation in one embodiment;
FIG. 10 is a schematic flow chart of image generation in one embodiment;
FIG. 11 is a block diagram showing the configuration of an image generating apparatus according to an embodiment;
fig. 12 is a schematic diagram of an internal structure of an electronic device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
It will be understood that, as used herein, the terms "first," "second," and the like may be used herein to describe various elements, but these elements are not limited by these terms. These terms are only used to distinguish one element from another. For example, a first client may be referred to as a second client, and similarly, a second client may be referred to as a first client, without departing from the scope of the present application. Both the first client and the second client are clients, but they are not the same client.
FIG. 1 is a flow diagram of an image generation method in one embodiment. The image generation method in this embodiment is described by taking its execution on an electronic device as an example. The electronic device can be, but is not limited to, a personal computer, notebook computer, smartphone, tablet computer, Internet-of-Things device, or portable wearable device; Internet-of-Things devices include smart speakers, smart televisions, smart air conditioners, smart vehicle-mounted devices, and the like, and portable wearable devices include smart watches, smart bracelets, head-mounted devices, and the like.
As shown in fig. 1, the image generation method includes steps 102 to 108.
Step 102, a reference frame is determined from a plurality of image frames obtained by shooting the same scene.
The reference frame refers to the image frame used as the calibration. The image format of the reference frame is not limited, and may be an RGB (Red, Green, Blue) image frame, a YUV image frame, or an HSV (Hue, Saturation, Value) image frame. In a YUV image frame, the Y component represents luminance (Luma) and the U and V components represent chrominance (Chroma).
Specifically, the electronic device calls a camera, controls the camera to shoot the same scene to obtain a plurality of image frames, and determines a reference frame from the plurality of image frames. Each image frame contains texture information; FIG. 2 shows an image frame containing texture. The reference frame may be determined based on the texture in each image frame; for example, the image frame with the clearest texture information is used as the reference frame.
Furthermore, the electronic equipment calls the camera, acquires preset shooting parameters, and controls the camera to shoot the same scene according to the preset shooting parameters to obtain a plurality of image frames. The shooting parameters include exposure, aperture factor, shutter speed, and the like.
In one embodiment, the electronic device may randomly select one image frame from a plurality of image frames as a reference frame. In another embodiment, the electronic device may select an image frame with the highest definition from among a plurality of image frames as a reference frame. In other embodiments, the electronic device may also select the image frame acquired first as the reference frame. The manner of determining the reference frame from the plurality of image frames is not limited, and may be set according to the user's needs.
Further, the electronic device calculates the sharpness and brightness of each image frame and determines the reference frame based on the sharpness and brightness of each image frame. The electronic device obtains the brightness of an image frame by averaging gray values sampled with a stride of 8 × 8. The electronic device calculates the x- and y-direction gradients in the image frame to determine its sharpness.
FIG. 3 is a diagram illustrating the calculation of sharpness for an image frame in one embodiment. The electronic device computes sharpness from the gradient change over the 3 pixels above, below, to the left of, and to the right of each sample point in the image frame, where a sample point is taken every 8 pixels in each direction.
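As a concrete illustration of this sampling scheme, the following is a minimal sketch, not code from the patent itself. The 8-pixel stride and 3-pixel gradient offsets follow the description above; the function names and the exact gradient formula are assumptions.

```python
import numpy as np

def frame_brightness(gray: np.ndarray) -> float:
    # Average gray values sampled with a stride of 8 in both directions.
    return float(gray[::8, ::8].mean())

def frame_sharpness(gray: np.ndarray) -> float:
    # At every 8th pixel, take the gradient change over the 3 pixels above,
    # below, left, and right of the sample point, and average the result.
    g = gray.astype(np.float32)
    h, w = g.shape
    total, count = 0.0, 0
    for y in range(3, h - 3, 8):
        for x in range(3, w - 3, 8):
            gx = abs(g[y, x + 3] - g[y, x - 3])  # horizontal change
            gy = abs(g[y + 3, x] - g[y - 3, x])  # vertical change
            total += gx + gy
            count += 1
    return total / max(count, 1)
```

The reference frame can then be chosen, for example, as the frame maximizing frame_sharpness.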
And 104, respectively performing block matching on the reference frame and each fused frame except the reference frame to determine a first block in each fused frame corresponding to the reference block in the reference frame.
The fusion frame is an image frame other than the reference frame among the plurality of image frames. The reference block is a region in the reference frame. The first block is a region in the fused frame corresponding to the reference block.
Wherein the reference block may be a region of interest (ROI) in the reference frame. Specifically, the electronic device performs region-of-interest detection on the reference frame, and determines a region-of-interest as a reference block; the reference frame and each fused frame except the reference frame are respectively subjected to Block matching (Block Motion), so that an interested area in each fused frame corresponding to the reference Block in the reference frame can be determined as a first Block.
And 106, performing noise estimation on the reference frame and each fusion frame to obtain a noise estimation result, and adjusting the first block in each fusion frame based on the noise estimation result to obtain a corresponding second block.
The noise estimation result may specifically be a noise curve corresponding to the reference frame. The second block is a block adjusted from the first block in the fused frame.
Specifically, the electronic device calls a preset noise estimation function, inputs the reference frame and each fusion frame into the preset noise estimation function, performs noise estimation on the reference frame and each fusion frame to obtain a noise estimation result, and adjusts the first block in each fusion frame based on the noise estimation result to obtain a corresponding second block.
Further, before performing noise estimation on the reference frame and each fused frame, the electronic device may perform preprocessing: it acquires the sensitivity (ISO) and sets the maximum and minimum noise values corresponding to that sensitivity. The values of the noise estimation result lying between the maximum and the minimum are kept as a new noise estimation result, and the first block in each fused frame is adjusted based on the new noise estimation result to obtain the corresponding second block.
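A minimal sketch of this preprocessing step follows. The per-ISO bounds in NOISE_BOUNDS are hypothetical placeholders; the patent only states that a maximum and a minimum noise value are set for the current sensitivity.

```python
import numpy as np

# Hypothetical per-ISO (min, max) noise bounds; real values would be tuned.
NOISE_BOUNDS = {100: (0.5, 4.0), 400: (1.0, 8.0), 1600: (2.0, 16.0)}

def clamp_noise_estimate(noise_curve: np.ndarray, iso: int) -> np.ndarray:
    lo, hi = NOISE_BOUNDS.get(iso, (0.5, 16.0))
    # Keep only the part of the estimate between the minimum and maximum
    # allowed for this sensitivity, yielding the new noise estimation result.
    return np.clip(noise_curve, lo, hi)
```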
And 108, combining the reference block in the reference frame and the corresponding second block in each fusion frame to generate a target image.
The target image is the image generated by the merging. The image format of the target image is not limited, and may be, for example, JPEG or PNG.
Specifically, the electronic device invokes an HDR (High dynamic range) synthesis algorithm to combine the reference block in the reference frame and the corresponding second block in each of the fusion frames, and generate a High dynamic range image. The high dynamic range image is a target image, and the electronic device can display the high dynamic range image.
The image generation method determines a reference frame from a plurality of image frames obtained by shooting the same scene, and performs block matching between the reference frame and each fused frame other than the reference frame to determine a first block in each fused frame corresponding to the reference block in the reference frame. Noise estimation is then performed on the reference frame and each fused frame to obtain a noise estimation result, and the first block in each fused frame is adjusted based on the noise estimation result to obtain a more accurate second block, so that the reference block in the reference frame and the corresponding second block in each fused frame can be merged more accurately, eliminating ghosts in the image and generating a target image with higher definition.
In another embodiment, merging the reference block in the reference frame with the corresponding second block in each fused frame comprises frequency-domain fusion and windowed-interpolation fusion: the reference frame and each fused frame are first fused in the frequency domain, and the fused image is then interpolated to obtain a target image at the original size.
In another embodiment, the electronic device merges the reference block in the reference frame and the corresponding second block in each fused frame to obtain an intermediate image, and performs Laplacian blending (Laplace Blending) on the intermediate image to obtain the target image.
In one embodiment, as shown in fig. 4, determining a reference frame from a plurality of image frames captured of the same scene includes:
step 402, shooting the same scene with a first exposure duration to obtain a first type of image frame, and shooting the same scene with a second exposure duration to obtain a second type of image frame; the first exposure duration is greater than the second exposure duration.
The first exposure duration and the second exposure duration can be set according to needs, and the first exposure duration is longer than the second exposure duration. For example, the first exposure period is 10 milliseconds (ms), and the second exposure period is 5 milliseconds (ms).
The first type of image frame is the class of image frames shot with the first, longer exposure duration; the second type of image frame is the class shot with the second, shorter exposure duration. It can be understood that, since the first exposure duration is greater than the second, the image frames of the first type are bright frames and the image frames of the second type are dark frames.
In step 404, a first target image frame is determined from the first type of image frame, and a second target image frame is determined from the second type of image frame.
The first target image frame is an image frame determined from the first type of image frame for determining the reference frame. The second target image frame is an image frame determined from the second type image frame to determine the reference frame.
Specifically, the electronic equipment determines the definition of each image frame in the first type of image frames, and determines the image frame with the highest definition in the first type of image frames as a first target image frame; and determining the definition of each image frame in the second type of image frames, and determining the image frame with the highest definition in the second type of image frames as a second target image frame.
In another embodiment, the electronic device may further randomly determine a first target image frame from the first type of image frame and a second target image frame from the second type of image frame. In another embodiment, the electronic device may further use the image frame acquired first in the first type of image frame as the first target image frame and use the image frame acquired first in the second type of image frame as the second target image frame. The specific manner of determining the first target image frame and the second target image frame is not limited, and may be set as needed.
Step 406, a reference frame is determined from the first target image frame and the second target image frame.
In one embodiment, the electronic device may randomly determine a reference frame from the first target image frame and the second target image frame.
In another embodiment, the electronic device may directly designate one of the first target image frame and the second target image frame as the reference frame. For example, the electronic device may directly take the second target image frame as the reference frame. It can be understood that a dark frame usually retains relatively complete image information and does not suffer from the loss of detail caused by highlight overexposure, so using the second target image frame as the reference frame allows the target image to be generated more accurately.
In another embodiment, the electronic device may determine, as the reference frame, one image frame with higher definition from among the first target image frame and the second target image frame.
The specific manner of determining the reference frame may be set according to needs, and is not limited herein.
In this embodiment, the same scene is shot with a first exposure duration and a second exposure duration to obtain a first type of image frame and a second type of image frame, respectively, the first exposure duration being longer than the second. A first target image frame and a second target image frame are then determined from the first and second types of image frames, so that the reference frame can be determined more accurately from the two target image frames.
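The selection logic of steps 402 to 406 can be sketched as follows. This is only an illustration, assuming sharpness-based selection and the dark-frame-as-reference embodiment described above; sharpness_fn is any per-frame sharpness measure, such as the one sketched earlier.

```python
def select_reference(bright_frames, dark_frames, sharpness_fn):
    # Step 404: pick the sharpest frame of each exposure class.
    first_target = max(bright_frames, key=sharpness_fn)
    second_target = max(dark_frames, key=sharpness_fn)
    # Step 406: one embodiment takes the dark target frame as the
    # reference, since dark frames keep highlight detail.
    reference = second_target
    return first_target, second_target, reference
```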
In one embodiment, capturing the same scene for a first exposure duration to obtain a first type of image frame, and capturing the same scene for a second exposure duration to obtain a second type of image frame, comprises: shooting the same scene with the first exposure duration to obtain a plurality of first candidate image frames, and classifying the first candidate image frames whose image brightness is greater than a first brightness threshold into the first type of image frames; and shooting the same scene with the second exposure duration to obtain a plurality of second candidate image frames, and classifying the second candidate image frames whose image brightness is less than a second brightness threshold into the second type of image frames; the first brightness threshold is greater than or equal to the second brightness threshold.
The first candidate image frame is an image frame that is obtained by shooting for a first exposure time period and is used as a candidate. The second candidate image frame is an image frame that is obtained by shooting for a second exposure time period and is used as a candidate.
The first brightness threshold and the second brightness threshold can be set according to requirements, and the first brightness threshold is larger than or equal to the second brightness threshold. Therefore, the first candidate image frame with the image brightness larger than the first brightness threshold is a bright frame and belongs to the first class of image frame, and the second candidate image frame with the image brightness smaller than the second brightness threshold is a dark frame and belongs to the second class of image frame.
For each image frame, brightness matching is firstly carried out, and an initial first type image frame and an initial second type image frame are determined.
In this embodiment, first candidate image frames whose image brightness is greater than the first brightness threshold are classified into the first type of image frames, and second candidate image frames whose image brightness is less than the second brightness threshold are classified into the second type; first candidates whose brightness is less than or equal to the first threshold and second candidates whose brightness is greater than or equal to the second threshold are removed, so that the first and second types of image frames are obtained more accurately, as sketched below.
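A minimal sketch of this classification, assuming a scalar brightness measure per frame (for example, the brightness function sketched earlier); the threshold values are placeholders:

```python
def classify_candidates(first_candidates, second_candidates,
                        brightness_fn, bright_threshold, dark_threshold):
    # The first brightness threshold must be >= the second, per the text.
    assert bright_threshold >= dark_threshold
    first_type = [f for f in first_candidates
                  if brightness_fn(f) > bright_threshold]   # bright frames
    second_type = [f for f in second_candidates
                   if brightness_fn(f) < dark_threshold]    # dark frames
    return first_type, second_type
```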
In one embodiment, after determining the first target image frame from the first type of image frames and the second target image frame from the second type of image frames, the method further comprises: determining a sharpness ratio between the first target image frame and the second target image frame; determining, based on the sharpness ratio and the sharpness of each image frame other than the first and second target image frames, the image frames that do not meet a preset sharpness condition; and removing the image frames that do not meet the preset sharpness condition, then performing the step of block matching between the reference frame and each fused frame other than the reference frame based on the remaining image frames.
The sharpness ratio refers to a ratio between the sharpness of a first target image frame and the sharpness of a second target image frame. In one embodiment, the electronic device may divide the sharpness of the first target image frame by the sharpness of the second target image frame to obtain a sharpness ratio. In another embodiment, the electronic device may divide the sharpness of the second target image frame by the sharpness of the first target image frame to obtain a sharpness ratio.
The sharpness ratio can be calculated according to the following equation:

ratio = sharpness_i / sharpness_j

where ratio is the sharpness ratio, and sharpness_i and sharpness_j are the sharpness of the first target image frame and the sharpness of the second target image frame, respectively.
The preset sharpness condition may be set as needed. For example, the preset sharpness condition may be that the ratio between the sharpness and the sharpness ratio of the image frames is smaller than a preset ratio threshold, or that the difference between the sharpness and the sharpness ratio of the image frames is not within a threshold range, without being limited thereto.
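As an illustration of this filtering step, here is a minimal sketch assuming the first variant of the preset condition (a frame is dropped when its sharpness divided by the ratio falls below a preset threshold); the threshold value and function names are placeholders:

```python
def filter_by_sharpness_ratio(frames, first_target, second_target,
                              sharpness_fn, ratio_threshold=0.5):
    # Sharpness ratio between the two target frames (equation above).
    ratio = sharpness_fn(first_target) / max(sharpness_fn(second_target), 1e-6)
    # Keep a frame only if its sharpness relative to the ratio reaches the
    # threshold; frames failing the preset condition are removed.
    return [f for f in frames
            if sharpness_fn(f) / ratio >= ratio_threshold]
```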
Furthermore, the electronic device may also sort the remaining image frames by sharpness, either from largest to smallest or from smallest to largest.
It can be understood that, unlike equal-exposure fusion, in which only exposure frames of the same brightness need to be considered, in this embodiment the exposure-time ratio (EV) between the long and short exposures is not fixed and differs between scenes, so the fusibility of the long and short exposures must be considered. Image frames that do not satisfy the preset sharpness condition are therefore identified, and unsuitable frames are removed, so that the target image can be generated more accurately.
In one embodiment, after the reference frame is determined from a plurality of image frames captured of the same scene, the method further includes: mapping the brightness of each fused frame except the reference frame to the reference frame, and determining a local motion vector between each fused frame and the reference frame; and performing block matching on the reference frame and each of the fused frames except the reference frame based on the local motion vector between each of the fused frames relative to the reference frame.
A local motion vector describes the displacement of a block in a fused frame relative to its position in the reference frame.
The electronic equipment maps the brightness of each fused frame except the reference frame to the reference frame, and a local motion vector between each fused frame and the reference frame can be determined; and respectively performing block matching on the reference frame and each fused frame except the reference frame based on the local motion vector between each fused frame and the reference frame, so that the first block in each fused frame corresponding to the reference block in the reference frame can be more accurately determined.
Further, the electronic device shoots the same scene with a first exposure duration to obtain the first type of image frames and with a second exposure duration to obtain the second type of image frames, the first exposure duration being longer than the second; that is, the first type of image frames are bright frames and the second type are dark frames. When the reference frame is the second target image frame from the second type of image frames, the fused frames are the first type of image frames together with the second type of image frames other than the reference frame. The brightness of the first type of image frames and of the second type of image frames is mapped to the reference frame, yielding a local motion vector between each fused frame and the reference frame, so that the first block corresponding to the reference block in the reference frame can be found more accurately based on each fused frame's local motion vector.
Further, in order to obtain a bright frame fused image and a dark frame fused image, the electronic device may further map the brightness of the first type of image frame into the second type of image frame, and map the brightness of the second type of image frame into the first type of image frame.
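A minimal sketch of luminance mapping followed by a per-block local motion search. The global-gain mapping and the exhaustive SSD search are assumptions; the patent does not specify either.

```python
import numpy as np

def map_luminance(fused: np.ndarray, reference: np.ndarray) -> np.ndarray:
    # Map the fused frame's brightness to the reference with a global gain.
    gain = reference.mean() / max(fused.mean(), 1e-6)
    return np.clip(fused.astype(np.float32) * gain, 0, 255)

def local_motion_vector(ref_block, mapped_frame, top_left, radius=8):
    # Exhaustively search for the offset minimizing the SSD between the
    # reference block and a candidate block in the luminance-mapped frame.
    y0, x0 = top_left
    bh, bw = ref_block.shape
    ref = ref_block.astype(np.float32)
    best, best_mv = np.inf, (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = y0 + dy, x0 + dx
            if y < 0 or x < 0 or y + bh > mapped_frame.shape[0] \
                    or x + bw > mapped_frame.shape[1]:
                continue
            cand = mapped_frame[y:y + bh, x:x + bw].astype(np.float32)
            ssd = float(((cand - ref) ** 2).sum())
            if ssd < best:
                best, best_mv = ssd, (dy, dx)
    return best_mv
```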
In one embodiment, the block matching of the reference frame with each fused frame other than the reference frame to determine the first block in each fused frame corresponding to the reference block in the reference frame includes: down-sampling the reference frame to obtain an image pyramid of at least two layers; and traversing from the lowest layer to the highest layer of the image pyramid, performing block matching between each layer's image and each fused frame other than the reference frame, and obtaining, based on block alignment, the first block in each fused frame corresponding to the reference block in the highest-layer image.
Down-sampling produces a new sequence by taking samples of the original sequence several samples apart; the new sequence is thus a down-sampled version of the original. The electronic device down-samples the reference frame to obtain an image pyramid of at least two layers. In one embodiment, as shown in FIG. 5, the electronic device down-samples the reference frame to obtain a four-layer image pyramid. In other embodiments, the electronic device may also decompose the image pyramid to obtain a new pyramid, such as a steerable pyramid.
Specifically, the electronic device traverses from the lowest layer to the highest layer of the image pyramid, performs block matching between each layer's image and each fused frame other than the reference frame using multiple threads, and obtains, based on block alignment, the first block in each fused frame corresponding to the reference block in the highest-layer image. The traversal step of the bottom-layer image is 1 pixel, the search range is 24 × 24 pixels, and the reference block size is 8 × 8 pixels. When the image pyramid includes four layers of images, the search radii of the layers are 8 and 2, and the search ranges 24 and 12, respectively, in order from the lowest layer to the highest layer.
In this embodiment, the reference frame is down-sampled to obtain an image pyramid of at least two layers; traversing from the lowest layer to the highest layer of the pyramid, block matching is performed between each layer's image and each fused frame other than the reference frame, and the first block in each fused frame corresponding to the reference block in the highest-layer image is obtained based on block alignment. Block matching can thus be performed more accurately, so that the first block corresponding to the reference block is determined more accurately.
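The pyramid itself can be sketched as follows. Down-sampling by 2 × 2 averaging is an assumption; the patent only states that the reference frame is down-sampled into a pyramid of at least two layers.

```python
import numpy as np

def build_pyramid(frame: np.ndarray, levels: int = 4):
    # Each level halves the resolution by averaging 2x2 neighborhoods.
    pyramid = [frame.astype(np.float32)]
    for _ in range(levels - 1):
        prev = pyramid[-1]
        h, w = prev.shape[0] // 2 * 2, prev.shape[1] // 2 * 2
        down = prev[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        pyramid.append(down)
    # pyramid[-1] is the coarsest layer, where the search starts; matching
    # is then refined layer by layer up to the full-resolution image.
    return pyramid
```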
FIG. 6 is a diagram of block matching in one embodiment. The electronic device starts from the pixel coordinate of the upper-left corner and traverses to the pixel coordinate of the lower-right corner. When selecting the coordinates of the optimal block, the sum of squared pixel differences of the block is computed on the one hand, and the distance from the center point on the other, continuing up to the highest-layer image; the optimal block in the highest-layer image of the fused frame is selected as the first block in that fused frame.
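A minimal sketch of this block selection follows. The raster-order scan, SSD term, and center-distance term follow the description above; the relative weighting of the two terms is an assumption.

```python
import numpy as np

def best_block(ref_block, search_image, center, half_range=12,
               distance_weight=0.1):
    # Scan the search window from the upper-left to the lower-right corner.
    cy, cx = center
    bh, bw = ref_block.shape
    ref = ref_block.astype(np.float32)
    best_score, best_pos = np.inf, center
    for y in range(cy - half_range, cy + half_range + 1):
        for x in range(cx - half_range, cx + half_range + 1):
            if y < 0 or x < 0 or y + bh > search_image.shape[0] \
                    or x + bw > search_image.shape[1]:
                continue
            cand = search_image[y:y + bh, x:x + bw].astype(np.float32)
            ssd = float(((cand - ref) ** 2).sum())   # pixel-difference term
            dist = abs(y - cy) + abs(x - cx)         # distance-to-center term
            score = ssd + distance_weight * dist
            if score < best_score:
                best_score, best_pos = score, (y, x)
    return best_pos  # top-left coordinate of the optimal (first) block
```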
In one embodiment, the noise estimation of the reference frame and each fused frame to obtain a noise estimation result includes: determining a target root mean square and a target difference absolute value corresponding to each pixel interval of the reference frame based on the reference frame and each fusion frame; and constructing a noise curve corresponding to the reference frame based on the target root mean square and the target difference absolute value corresponding to each pixel interval.
The pixel section is a section including a plurality of adjacent pixels. For example, if the pixel value of the reference frame is between 0 and 255 (the image is 8 bits), the pixel value may be equally divided into 16 parts to obtain 16 pixel sections.
In statistical data analysis, the root mean square is obtained by summing the squares of all values, taking the mean, and then taking the square root. The target root mean square is the root mean square determined for each pixel interval of the reference frame to construct the noise curve. The target difference absolute value is the difference absolute value determined for each pixel interval of the reference frame to construct the noise curve.
In this embodiment, the electronic device obtains a target root mean square and a target difference absolute value corresponding to each pixel interval, and may accurately construct a noise curve corresponding to the reference frame.
FIG. 7 is a flow diagram illustrating noise estimation in one embodiment. The electronic device performs preprocessing: it acquires the sensitivity (ISO) and sets the maximum and minimum noise values corresponding to that sensitivity. It then calculates, for each pixel interval of the reference frame, quantities such as the target root mean square (RMS), the target difference absolute value DIFF (the sum of absolute differences, SAD, between the reference frame and the fused frame), and the intensity. The electronic device then calculates the intermediate variables, finds the median Diff of each interval, and rejects intervals that do not meet the conditions, where the conditions can be set as desired. The start point (Search start) is the start of the search range, the end point (Search end) is the end of the search range, and the block size (Search window) refers to the size of the blocks compared during the search.
For each reference block in the reference frame, the block most similar to the reference block is found in each fused frame and used as the first block for fusion.
In one embodiment, determining a target root mean square and a target difference absolute value corresponding to each pixel interval of the reference frame based on the reference frame and each fused frame comprises: traversing each pixel of the reference frame, determining an initial root mean square corresponding to each pixel, and determining a pixel interval corresponding to the initial root mean square of each pixel; traversing each pixel of the reference frame, calculating an initial difference absolute value between each pixel in the reference frame and a corresponding pixel of each fusion frame, and determining a pixel interval corresponding to an initial root-mean-square of the same pixel in the reference frame as a pixel interval corresponding to the initial difference absolute value; and determining a target root mean square and a target difference absolute value of the corresponding pixel interval based on the initial root mean square and the initial difference absolute value corresponding to each pixel interval.
The initial root mean square is the root mean square corresponding to each pixel in the pixel interval. The initial absolute difference value is the absolute difference value corresponding to each pixel in the pixel interval.
Specifically, the electronic device traverses each pixel of the reference frame, calculates for each pixel an initial root mean square with the corresponding pixel in each fused frame, and takes the pixel interval in which the value of the initial root mean square lies as the pixel interval corresponding to that pixel's initial root mean square; it also calculates the initial difference absolute value between the pixel and the corresponding pixel in each fused frame, and assigns it to the pixel interval corresponding to the initial root mean square of the same pixel in the reference frame.
For example, the electronic device equally divides the pixel range of the reference frame into 16 parts to obtain 16 pixel intervals, namely 0-15, 16-31, and so on; it calculates the initial root mean square of each pixel in the reference frame with the corresponding pixel in each fused frame, and the initial difference absolute value against each fused frame. If there are N fused frames, N initial root-mean-square values and N initial difference absolute values are obtained for the pixel.
For example, if the initial root mean square of a pixel is 23, the pixel interval in which the value 23 lies is 16-31, so the pixel interval corresponding to that pixel's initial root mean square is 16-31. Meanwhile, the initial difference absolute value calculated between the same pixel of the reference frame and the same fused frame is assigned to the same pixel interval, 16-31, consistent with the interval of the initial root mean square 23.
Each pixel interval has a certain number of initial root-mean-square and initial absolute value differences, and further screening can be performed to remove the initial absolute value differences larger than a preset threshold.
In one embodiment, for each pixel interval, the electronic device may obtain a mean value of initial root-mean-square values corresponding to the pixel interval as a target root-mean-square value, and obtain a mean value of initial absolute value difference values corresponding to the pixel interval as a target absolute value difference value.
In another embodiment, for each pixel interval, the electronic device may determine a median of each initial root mean square corresponding to the pixel interval as a target root mean square, and determine a median of each initial absolute value difference corresponding to the pixel interval as a target absolute value difference.
It should be noted that, the manner of determining the target root mean square from each initial root mean square corresponding to the pixel interval and the manner of determining the target difference absolute value from each initial difference absolute value corresponding to the pixel interval may be set as needed, and are not limited herein.
Furthermore, if there is a pixel section in the reference frame that has no corresponding initial root mean square and initial absolute value difference, the pixel section is ignored, and the speed of image processing can be increased.
In one embodiment, determining the target root mean square and the target difference absolute value of each pixel interval based on the initial root mean square and the initial difference absolute value corresponding to each pixel interval includes: for each pixel interval, averaging all initial root-mean-square values corresponding to the pixel interval, and taking the average value as a target root-mean-square value; and determining a median value from each initial difference absolute value corresponding to the pixel interval, and taking the median value as a target difference absolute value.
For each pixel interval there are corresponding initial root-mean-square values and initial difference absolute values: the initial root-mean-square values of the interval are averaged and the average taken as the target root mean square, while a median is taken over the initial difference absolute values of the interval as the target difference absolute value. A target root mean square and a target difference absolute value are thus obtained for each pixel interval, so that the noise curve corresponding to the reference frame can be constructed accurately.
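A minimal sketch of this per-interval statistic, using the mean RMS and the median absolute difference as described. The exact quantity over which the per-pixel RMS is taken is not fully specified in the text; here it is taken over the pixel's values across the fused frames, which is an assumption.

```python
import numpy as np

def noise_curve_points(reference, fused_frames, n_intervals=16):
    ref = reference.astype(np.float32)
    stack = np.stack([f.astype(np.float32) for f in fused_frames])
    rms = np.sqrt((stack ** 2).mean(axis=0))  # per-pixel initial RMS (assumed)
    diffs = np.abs(stack - ref)               # per-pixel initial |difference|
    width = 256.0 / n_intervals               # 8-bit range split into intervals
    idx = np.clip((rms // width).astype(int), 0, n_intervals - 1)
    points = []
    for k in range(n_intervals):
        mask = idx == k
        if not mask.any():
            continue  # intervals with no samples are ignored, as noted above
        target_rms = float(rms[mask].mean())            # mean -> target RMS
        target_diff = float(np.median(diffs[:, mask]))  # median -> target |diff|
        points.append((target_rms, target_diff))
    return points  # points from which the noise curve is constructed
```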
In one embodiment, adjusting the first block in each fused frame based on the noise estimation result to obtain the corresponding second block comprises: acquiring the sum of absolute values of difference values corresponding to each pixel interval in a reference frame; determining an intensity weight of each fusion frame based on the noise estimation result and the sum of the absolute values of the differences; and adjusting the first block in the corresponding fusion frame based on each intensity weight to obtain a second block in each fusion frame.
The electronic equipment obtains the target difference absolute values corresponding to each pixel interval in the reference frame, and adds the target difference absolute values to obtain the sum of the difference absolute values.
If the noise estimation result is a noise curve, the electronic device evaluates the noise curve for the fused frame to obtain the fused frame's noise value, and determines the intensity weight of the fused frame based on the noise value and the sum of the difference absolute values. The intensity weight is the weight by which the first block in the fused frame is adjusted. The specific manner in which the electronic device determines the weight from the noise value and the sum of the difference absolute values may be set as required and is not limited here.
In this embodiment, the sum of absolute difference values corresponding to each pixel interval in the reference frame is obtained, and the intensity weight of each fusion frame is determined based on the noise estimation result and the sum of absolute difference values, so that the first block in the corresponding fusion frame is adjusted based on each intensity weight, and the second block in each fusion frame can be obtained more accurately.
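A minimal sketch of the adjustment. The exact weight formula is left open by the text, so the one below (more weight to the fused frame when the estimated noise dominates the observed differences) is an assumption; noise_scaling is a hypothetical tuning parameter.

```python
import numpy as np

def adjust_first_block(first_block, ref_block, noise_value, sad_sum,
                       noise_scaling=1.0):
    n = noise_scaling * noise_value
    weight = n / (n + sad_sum + 1e-6)  # intensity weight in [0, 1)
    fb = first_block.astype(np.float32)
    rb = ref_block.astype(np.float32)
    # Blend the first block toward the reference block to get the second block.
    return weight * fb + (1.0 - weight) * rb
```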
FIG. 8 is a diagram illustrating block merging in one embodiment. The electronic device sets parameters and creates variables, obtaining the transform (ACTransform) transfer parameter and the noiseCapling parameter, which directly controls the fusion proportion (the larger it is, the more is fused). The electronic device calculates the Y-channel RMS (root mean square) for the reference frame and performs a DCT (Discrete Cosine Transform). The electronic device performs bilinear interpolation on the fused frame to obtain pixel values at fractional coordinates and likewise performs a DCT. The electronic device then fuses the reference frame with each fused frame; specifically, it takes a weighted average of the reference block in the reference frame and the corresponding second block in each fused frame to obtain the target image, where the intensity weight is determined by the noise estimate and the interpolation in the frequency domain. The reference block and the second block are each small blocks of 8 × 8 resolution. The electronic device performs an inverse DCT on the target image to generate a large 8192 × 6144 image in which ghosts are eliminated.
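The frequency-domain merge can be sketched as follows, using SciPy's DCT. Applying one intensity weight uniformly to all frequencies of a block is a simplification; the text states only that the weighted average is taken in the DCT domain with weights from the noise estimate and interpolation.

```python
import numpy as np
from scipy.fft import dctn, idctn

def merge_blocks_dct(ref_block, second_blocks, weights):
    # Weighted average of 8x8 blocks in the DCT (frequency) domain,
    # followed by the inverse transform back to pixels.
    acc = dctn(ref_block.astype(np.float32), norm='ortho')
    total = 1.0  # the reference block itself carries unit weight (assumed)
    for blk, w in zip(second_blocks, weights):
        acc += w * dctn(blk.astype(np.float32), norm='ortho')
        total += w
    return idctn(acc / total, norm='ortho')
```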
FIG. 9 is a flow diagram illustrating image generation in one embodiment. The electronic device acquires the first type of image frames and the second type of image frames and searches each type for the best frame. The first type of image frames are obtained by shooting the same scene with a first exposure duration and the second type with a second exposure duration, the first exposure duration being greater than the second; that is, the first type of image frames are bright frames and the second type dark frames. The electronic device may take the image frame with the highest definition in the first type as the best frame of that type, take the image frame with the highest definition in the second type as the best frame of that type, and use the best frame of the first type as the reference frame.
The electronic equipment maps the brightness of the reference frame and each fused frame except the reference frame, namely, the brightness of each fused frame except the reference frame is mapped to the reference frame, the local motion vector between each fused frame and the reference frame is determined, and the reference frame and each fused frame are respectively subjected to block matching based on the local motion vector between each fused frame and the reference frame.
The electronic equipment carries out noise estimation on the reference frame and each fusion frame to obtain a noise estimation result, and adjusts the first block in each fusion frame based on the noise estimation result to obtain a corresponding second block; carrying out block combination on the reference block in the reference frame and the corresponding second block in each fusion frame; and performing Laplace mixing on the image after the blocks are combined to generate a high dynamic range image.
FIG. 10 is a diagram illustrating image generation in one embodiment. The electronic device acquires 5 image frames in a backlit scene, namely image frames #1 through #5, where image frames #1 and #3 are bright frames (EV0) and image frames #2, #4, and #5 are dark frames (EV-). A high dynamic range image is produced with the image generation method above, using a dark frame as the reference frame. It can be seen that ghosts are well removed in the hand-waving region of the high dynamic range image, and the whole image has better clarity.
It should be understood that, although the steps in the flowcharts of FIGS. 1, 4, and 7 to 9 are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise herein, there is no strict ordering restriction on these steps, and they may be performed in other orders. Moreover, at least some of the steps in FIGS. 1, 4, and 7 to 9 may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be performed at different times, and whose order of execution is not necessarily sequential; they may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
Fig. 11 is a block diagram showing the configuration of an image generating apparatus according to an embodiment. As shown in fig. 11, there is provided an image generating apparatus including: a reference frame determination module 1102, a block matching module 1104, a noise estimation module 1106, and a block merging module 1108, wherein:
a reference frame determining module 1102, configured to determine a reference frame from a plurality of image frames captured of the same scene.
And the block matching module 1104 is configured to perform block matching on the reference frame and each of the fusion frames except the reference frame, and determine a first block in each of the fusion frames corresponding to the reference block in the reference frame.
A noise estimation module 1106, configured to perform noise estimation on the reference frame and each of the fusion frames to obtain a noise estimation result, and adjust the first block in each of the fusion frames based on the noise estimation result to obtain a corresponding second block.
A block merging module 1108, configured to merge the reference block in the reference frame and the corresponding second block in each fused frame, so as to generate a target image.
The image generating device determines a reference frame from a plurality of image frames obtained by shooting the same scene; respectively carrying out block matching on the reference frame and each fused frame except the reference frame to determine a first block in each fused frame corresponding to the reference block in the reference frame; then, the noise estimation is performed on the reference frame and each fusion frame to obtain a noise estimation result, and then the first block in each fusion frame can be adjusted based on the noise estimation result to obtain a more accurate second block, so that the reference block in the reference frame and the corresponding second block in each fusion frame can be more accurately combined, ghosts in the image are eliminated, and a target image with higher definition is generated.
In an embodiment, the reference frame determining module 1102 is further configured to capture the same scene with a first exposure duration to obtain a first type of image frame, and capture the same scene with a second exposure duration to obtain a second type of image frame; the first exposure duration is longer than the second exposure duration; determining a first target image frame from the first type of image frame and a second target image frame from the second type of image frame; a reference frame is determined from the first target image frame and the second target image frame.
In an embodiment, the reference frame determining module 1102 is further configured to capture the same scene for a first exposure duration to obtain a plurality of first candidate image frames, and classify the first candidate image frames with image brightness greater than a first brightness threshold into a first class image frame; shooting the same scene with a second exposure duration to obtain a plurality of second alternative image frames, and attributing the second alternative image frames with the image brightness smaller than a second brightness threshold value to second type image frames; the first brightness threshold is greater than or equal to the second brightness threshold.
In one embodiment, the reference frame determining module 1102 is further configured to determine a sharpness ratio between the first target image frame and the second target image frame; determining an image frame which does not meet a preset sharpness condition based on the sharpness of each image frame except the first target image frame and the second target image frame and the sharpness ratio; the image frames that do not satisfy the predetermined sharpness condition are removed, and the block matching module 1104 is further configured to perform block matching on the reference frame and each of the fusion frames other than the reference frame, respectively, based on each of the image frames after the removal.
In one embodiment, the apparatus further includes a luminance mapping module, configured to map the luminance of each of the fused frames except the reference frame to the reference frame, and determine a local motion vector between each of the fused frames relative to the reference frame; the block matching module 1104 is further configured to perform block matching on the reference frame and each of the fused frames except the reference frame based on the local motion vector between each of the fused frames and the reference frame.
In one embodiment, the block matching module 1104 is further configured to perform downsampling on the reference frame to obtain at least two layers of image pyramids; traversing from the lowest layer to the highest layer of the image pyramid, respectively performing block matching on each layer of image and each fusion frame except the reference frame, and acquiring a first block in each fusion frame corresponding to the reference block in the highest layer of image based on block alignment.
In one embodiment, the noise estimation module 1106 is further configured to determine a target root mean square and a target absolute difference value corresponding to each pixel interval of the reference frame based on the reference frame and each fused frame; and constructing a noise curve corresponding to the reference frame based on the target root mean square and the target difference absolute value corresponding to each pixel interval.
In one embodiment, the noise estimation module 1106 is further configured to traverse each pixel of the reference frame, determine an initial root mean square corresponding to each pixel, and determine a pixel interval corresponding to the initial root mean square of each pixel; traversing each pixel of the reference frame, calculating an initial difference absolute value between each pixel in the reference frame and a corresponding pixel of each fusion frame, and determining a pixel interval corresponding to an initial root-mean-square of the same pixel in the reference frame as a pixel interval corresponding to the initial difference absolute value; and determining a target root mean square and a target difference absolute value of the corresponding pixel interval based on the initial root mean square and the initial difference absolute value corresponding to each pixel interval.
In an embodiment, the noise estimation module 1106 is further configured to, for each pixel interval, perform average calculation on each initial root-mean-square corresponding to the pixel interval, and take the average as a target root-mean-square; and determining a median value from each initial difference absolute value corresponding to the pixel interval, and taking the median value as a target difference absolute value.
In an embodiment, the noise estimation module 1106 is further configured to obtain a sum of absolute values of differences corresponding to each pixel interval in the reference frame; determining an intensity weight of each fusion frame based on the noise estimation result and the sum of the absolute values of the differences; and adjusting the first block in the corresponding fusion frame based on each intensity weight to obtain a second block in each fusion frame.
The division of the modules in the image generating apparatus is merely for illustration, and in other embodiments, the image generating apparatus may be divided into different modules as needed to complete all or part of the functions of the image generating apparatus.
For specific limitations of the image generation apparatus, reference may be made to the limitations of the image generation method above, which are not repeated here. Each module in the image generation apparatus may be implemented wholly or partially by software, by hardware, or by a combination of the two. The modules may be embedded in, or independent of, a processor in the computer device in hardware form, or stored in a memory of the computer device in software form, so that the processor can invoke them and perform the operations corresponding to each module.
Fig. 12 is a schematic diagram of the internal structure of an electronic device in one embodiment. The electronic device may be any terminal device such as a mobile phone, a tablet computer, a notebook computer, a desktop computer, a PDA (Personal Digital Assistant), a POS (Point of Sale) terminal, a vehicle-mounted computer, or a wearable device. The electronic device includes a processor and a memory connected by a system bus. The processor may include one or more processing units and may be, for example, a CPU (Central Processing Unit) or a DSP (Digital Signal Processor). The memory may include a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The computer program can be executed by the processor to implement the image generation method provided in the embodiments of the present application. The internal memory provides a cached execution environment for the operating system and the computer program in the non-volatile storage medium.
Each module in the image generation apparatus provided in the embodiments of the present application may be implemented in the form of a computer program. The computer program may run on a terminal or a server. The program modules constituted by the computer program may be stored in the memory of the electronic device. When the computer program is executed by a processor, the steps of the methods described in the embodiments of the present application are performed.
The embodiments of the present application also provide a computer-readable storage medium: one or more non-transitory computer-readable storage media containing computer-executable instructions that, when executed by one or more processors, cause the one or more processors to perform the steps of the image generation method.
Embodiments of the present application also provide a computer program product containing instructions which, when run on a computer, cause the computer to perform the image generation method.
Any reference to memory, storage, a database, or another medium used herein may include non-volatile and/or volatile memory. The non-volatile memory may include ROM (Read-Only Memory), PROM (Programmable Read-Only Memory), EPROM (Erasable Programmable Read-Only Memory), EEPROM (Electrically Erasable Programmable Read-Only Memory), or flash memory. The volatile memory may include RAM (Random Access Memory), which acts as an external cache. By way of illustration and not limitation, RAM is available in many forms, such as SRAM (Static Random Access Memory), DRAM (Dynamic Random Access Memory), SDRAM (Synchronous Dynamic Random Access Memory), DDR SDRAM (Double Data Rate Synchronous Dynamic Random Access Memory), ESDRAM (Enhanced Synchronous Dynamic Random Access Memory), SLDRAM (SyncLink Dynamic Random Access Memory), RDRAM (Rambus Dynamic Random Access Memory), and DRDRAM (Direct Rambus Dynamic Random Access Memory).
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the present application. It should be noted that a person skilled in the art can make several variations and improvements without departing from the concept of the present application, and these variations and improvements fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (13)

1. An image generation method, comprising:
determining a reference frame from a plurality of image frames obtained by shooting the same scene;
respectively carrying out block matching on the reference frame and each fused frame except the reference frame to determine a first block in each fused frame corresponding to a reference block in the reference frame;
carrying out noise estimation on the reference frame and each fused frame to obtain a noise estimation result, and adjusting the first block in each fused frame based on the noise estimation result to obtain a corresponding second block;
and combining the reference block in the reference frame and the corresponding second block in each fused frame to generate a target image.
2. The method of claim 1, wherein determining the reference frame from a plurality of image frames captured of the same scene comprises:
shooting the same scene with a first exposure duration to obtain a first type of image frame, and shooting the same scene with a second exposure duration to obtain a second type of image frame; the first exposure duration is greater than the second exposure duration;
determining a first target image frame from the first type of image frame and a second target image frame from the second type of image frame;
determining a reference frame from the first target image frame and the second target image frame.
3. The method of claim 2, wherein the shooting the same scene with a first exposure duration to obtain a first type of image frame and shooting the same scene with a second exposure duration to obtain a second type of image frame comprises:
shooting the same scene with a first exposure duration to obtain a plurality of first alternative image frames, and classifying the first alternative image frames with the image brightness larger than a first brightness threshold value into a first type of image frames;
shooting the same scene with a second exposure duration to obtain a plurality of second alternative image frames, and classifying the second alternative image frames with the image brightness smaller than a second brightness threshold value into a second type of image frames; the first brightness threshold is greater than or equal to the second brightness threshold (an illustrative sketch of this classification appears after the claims).
4. The method according to claim 2, further comprising, after determining a first target image frame from the first type of image frame and a second target image frame from the second type of image frame:
determining a sharpness ratio between the first target image frame and the second target image frame;
determining the image frames which do not satisfy a preset sharpness condition based on the sharpness ratio and the sharpness of each image frame except the first target image frame and the second target image frame;
and eliminating the image frames which do not satisfy the preset sharpness condition, and performing the step of respectively performing block matching on the reference frame and each fused frame except the reference frame on the basis of the image frames remaining after elimination.
5. The method according to claim 1, further comprising, after determining the reference frame from the plurality of image frames obtained by shooting the same scene:
mapping the brightness of each fused frame except the reference frame to the reference frame, and determining a local motion vector between each fused frame and the reference frame;
and performing the step of respectively performing block matching on the reference frame and each fused frame except the reference frame based on the local motion vector between each fused frame and the reference frame.
6. The method according to claim 1, wherein the performing block matching on the reference frame and each of the fused frames except the reference frame to determine a first block in each of the fused frames corresponding to a reference block in the reference frame comprises:
down-sampling the reference frame to obtain an image pyramid with at least two layers;
and traversing from the lowest layer to the highest layer of the image pyramid, respectively performing block matching on each layer image and each fused frame except the reference frame, and acquiring, based on block alignment, the first block in each fused frame corresponding to the reference block in the highest-layer image.
7. The method according to claim 1, wherein said performing noise estimation on the reference frame and each of the fused frames to obtain a noise estimation result comprises:
determining a target root mean square and a target absolute difference corresponding to each pixel interval of the reference frame based on the reference frame and each of the fused frames;
and constructing a noise curve corresponding to the reference frame based on the target root mean square and the target absolute difference corresponding to each pixel interval.
8. The method according to claim 7, wherein the determining a target root mean square and a target absolute difference for each pixel interval of the reference frame based on the reference frame and each of the fused frames comprises:
traversing each pixel of the reference frame, determining an initial root mean square corresponding to each pixel, and determining the pixel interval corresponding to the initial root mean square of each pixel;
traversing each pixel of the reference frame, calculating an initial absolute difference between each pixel in the reference frame and the corresponding pixel of each fused frame, and assigning the initial absolute difference to the pixel interval determined for the initial root mean square of the same pixel in the reference frame;
and determining a target root mean square and a target absolute difference for each pixel interval based on the initial root mean squares and the initial absolute differences assigned to that pixel interval.
9. The method of claim 8, wherein determining the target root mean square and the target absolute difference for each pixel interval based on the initial root mean squares and the initial absolute differences of the respective pixel interval comprises:
for each pixel interval, averaging the initial root mean squares assigned to the pixel interval, and taking the mean as the target root mean square;
and determining a median from the initial absolute differences assigned to the pixel interval, and taking the median as the target absolute difference.
10. The method according to claim 1, wherein said adjusting the first block in each of the fused frames based on the noise estimation result to obtain the corresponding second block comprises:
acquiring the sum of absolute differences corresponding to each pixel interval in the reference frame;
determining an intensity weight of each fused frame based on the noise estimation result and the sums of absolute differences;
and adjusting the first block in the corresponding fusion frame based on each intensity weight to obtain a second block in each fusion frame.
11. An image generation apparatus, comprising:
the reference frame determining module is used for determining a reference frame from a plurality of image frames obtained by shooting the same scene;
the block matching module is used for respectively performing block matching on the reference frame and each fused frame except the reference frame to determine a first block in each fused frame corresponding to a reference block in the reference frame;
the noise estimation module is used for carrying out noise estimation on the reference frame and each fused frame to obtain a noise estimation result, and adjusting the first block in each fused frame based on the noise estimation result to obtain a corresponding second block;
and the block merging module is used for merging the reference block in the reference frame and the corresponding second block in each fused frame to generate a target image.
12. An electronic device comprising a memory and a processor, the memory having stored thereon a computer program, wherein the computer program, when executed by the processor, causes the processor to perform the steps of the image generation method according to any of claims 1 to 10.
13. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 10.
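As a concrete reading of claims 2 and 3, the exposure bracketing and brightness screening could be sketched as follows. The mean-intensity brightness measure and the numeric thresholds are hypothetical; only the constraint that the first brightness threshold is greater than or equal to the second is taken from claim 3.

```python
import numpy as np

def classify_brackets(first_candidates, second_candidates,
                      first_threshold=120.0, second_threshold=100.0):
    # first_candidates: frames shot with the longer first exposure duration;
    # second_candidates: frames shot with the shorter second exposure duration.
    # The threshold values are hypothetical; claim 3 only requires
    # first_threshold >= second_threshold.
    assert first_threshold >= second_threshold
    first_type = [f for f in first_candidates
                  if float(np.mean(f)) > first_threshold]
    second_type = [f for f in second_candidates
                   if float(np.mean(f)) < second_threshold]
    return first_type, second_type
```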
CN202111308827.5A 2021-11-05 2021-11-05 Image generation method and device, electronic equipment and computer-readable storage medium Pending CN114049288A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111308827.5A CN114049288A (en) 2021-11-05 2021-11-05 Image generation method and device, electronic equipment and computer-readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111308827.5A CN114049288A (en) 2021-11-05 2021-11-05 Image generation method and device, electronic equipment and computer-readable storage medium

Publications (1)

Publication Number Publication Date
CN114049288A (en) 2022-02-15

Family

ID=80207469

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111308827.5A Pending CN114049288A (en) 2021-11-05 2021-11-05 Image generation method and device, electronic equipment and computer-readable storage medium

Country Status (1)

Country Link
CN (1) CN114049288A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115079073A (en) * 2022-03-10 2022-09-20 杭州永川科技有限公司 Frequency difference quasi-static magnetic induction imaging method, system, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination