CN113689362B - Image processing method and device, electronic equipment and storage medium - Google Patents


Info

Publication number: CN113689362B
Authority: CN (China)
Prior art keywords: fusion, value, frame, pixel, parameter
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: CN202111251407.8A
Other languages: Chinese (zh)
Other versions: CN113689362A (en)
Inventor: Wang Dong (王东)
Current assignee: Shenzhen TetrasAI Technology Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original assignee: Shenzhen TetrasAI Technology Co Ltd
Application filed by Shenzhen TetrasAI Technology Co Ltd; application granted; publication of application CN113689362A, then of granted patent CN113689362B

Classifications

    • G06T 5/70
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00: Image enhancement or restoration
    • G06T 5/50: Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20212: Image combination
    • G06T 2207/20221: Image fusion; Image merging

Abstract

The present disclosure relates to an image processing method and apparatus, an electronic device, and a storage medium. N image frames to be fused are determined in an image sequence to be processed, and a reference frame and the remaining (N-1) target frames are determined among them. For the 1st target frame, the target frame and the reference frame are fused to obtain the 1st fused frame. For the ith target frame, the ith target frame and the (i-1)th fused frame are fused to obtain the ith fused frame, where N is a positive integer greater than or equal to 3 and i takes values in [2, N]. Each image fusion pass is performed based on a fusion parameter set dynamically computed for that pass. Because the result of every image fusion is fused again with the next frame, better image fusion parameters can be obtained and the image fusion result is optimized. Meanwhile, since a fusion parameter set is dynamically determined at every image fusion, the parameters can be adjusted in time when noise is high, avoiding the introduction of noise.

Description

Image processing method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of image data processing, and in particular, to an image processing method and apparatus, an electronic device, and a storage medium.
Background
Multi-frame image noise reduction is a commonly used technique in image processing and is generally applied to improve the quality of an original image. Because the multiple frames or multiple shots are acquired at different times, the camera may be in relative motion, and so on, it is difficult to guarantee that all input images are completely aligned. For multi-frame images that are not aligned or whose content varies, the image fusion process produces blurred regions or motion-ghost noise in the fusion result, which severely affects the final result.
Disclosure of Invention
The disclosure provides an image processing method and apparatus, an electronic device, and a storage medium, aiming to improve the image fusion result, reduce noise in noisy regions caused by image motion and the like, and optimize the image processing result.
According to a first aspect of the present disclosure, there is provided an image processing method including: determining N image frames to be fused in an image sequence to be processed; determining a reference frame and the remaining (N-1) target frames from the N image frames to be fused; for the 1st target frame, fusing the target frame and the reference frame to obtain the 1st fused frame; and for the ith target frame, fusing the ith target frame and the (i-1)th fused frame to obtain the ith fused frame, where N is a positive integer greater than or equal to 3 and i takes values in [2, N]. The N image frames to be fused have the same size, and each image fusion pass is performed based on a dynamically computed fusion parameter set.
In the embodiment of the disclosure, the result of each image fusion is fused again with the next frame, so better image fusion parameters can be obtained and the image fusion result is optimized. Meanwhile, a fusion parameter set is dynamically determined at each image fusion, so the parameters can be adjusted in time when noise is high, avoiding the introduction of noise.
In a possible implementation manner, the fusing the target frame and the reference frame for the 1 st target frame to obtain the 1 st fused frame includes: determining a fusion parameter set of the 1 st target frame and the reference frame at the 1 st fusion; and for the 1 st target frame, fusing the 1 st target frame and the reference frame by using a fusion parameter set fused for the 1 st time to obtain a 1 st fusion frame.
During the 1 st image fusion, the embodiment of the disclosure dynamically determines a fusion parameter set during the 1 st image fusion, performs image fusion in a targeted manner, and improves an image fusion effect.
In a possible implementation manner, the determining a fusion parameter set of the 1 st target frame and the reference frame at the 1 st fusion includes: determining at least one pixel position included in the reference frame and the 1 st target frame at the time of the 1 st fusion; and respectively determining a fusion parameter set of each pixel position.
In the image fusion of the 1 st time, in the embodiment of the disclosure, when two image frames are fused, a fusion parameter set is dynamically determined for each pixel position respectively, so as to timely adjust parameters for pixel positions with higher noise to reduce noise, thereby improving the image fusion effect.
In a possible implementation manner, the fusing the 1 st target frame and the reference frame by using the fused parameter set of the 1 st fusion for the 1 st target frame, and obtaining the 1 st fused frame includes: respectively fusing the reference pixel value of the reference frame at the pixel position and the target pixel value of the 1 st target frame at the pixel position according to the fusion parameter set of each pixel position during the 1 st fusion to obtain a fusion pixel value; determining a 1 st fused frame based on the fused pixel value of at least one of the pixel locations.
In the image fusion of the 1 st time, the embodiment of the disclosure performs frame fusion pertinently according to the fusion parameter set corresponding to each pixel position, determines the obtained fusion frame, performs image processing point by point, and optimizes the fusion effect.
In one possible implementation, the determining the fusion parameter set for each pixel position respectively includes: determining a target pixel position in at least one of the pixel positions; determining a pixel region where the target pixel position is located; and determining a fusion parameter set of the target pixel position according to the reference pixel value and the target pixel value of each pixel position in the pixel region.
When performing the 1st image fusion, the embodiment of the disclosure refers to the nearby surrounding pixel positions when determining the fusion parameter set of each pixel position, so that the characteristics of the position are reflected more uniformly, and a poor fusion parameter result caused by single-point noise is prevented from affecting the final fusion result.
In a possible implementation manner, the fusing the ith target frame and the (i-1) th fused frame for the ith target frame to obtain the ith fused frame includes: determining a fusion parameter set of the ith target frame and the (i-1) th fusion frame during the ith fusion; and aiming at the ith target frame, fusing the ith target frame and the (i-1) th fusion frame by using the fusion parameter set fused for the ith time to obtain the ith fusion frame.
During the ith image fusion, the embodiment of the disclosure dynamically determines a fusion parameter set during each image fusion, performs image fusion in a targeted manner, and improves the image fusion effect.
In a possible implementation manner, the determining the fusion parameter set of the ith target frame and the (i-1)th fused frame at the ith fusion includes: determining, at the ith fusion, at least one pixel position included in the (i-1)th fused frame and the ith target frame; and respectively determining a fusion parameter set for each pixel position.
During the ith image fusion, in the embodiment of the disclosure, when two image frames are fused, a fusion parameter set is dynamically determined for each pixel position respectively, so as to timely adjust parameters for pixel positions with higher noise to reduce noise, thereby improving the image fusion effect.
In a possible implementation manner, the fusing, for the ith target frame, the ith target frame and the (i-1)th fused frame using the fusion parameter set of the ith fusion to obtain the ith fused frame includes: respectively fusing, according to the fusion parameter set of each pixel position at the ith fusion, the reference pixel value of the (i-1)th fused frame at the pixel position and the target pixel value of the ith target frame at the pixel position to obtain a fused pixel value; and determining the ith fused frame according to the fused pixel value of the at least one pixel position.
During the ith image fusion, the embodiment of the disclosure performs frame fusion pertinently according to the fusion parameter set corresponding to each pixel position, determines the obtained fusion frame, performs image processing point by point, and optimizes the fusion effect.
In one possible implementation, the determining the fusion parameter set for each pixel position respectively includes: determining a target pixel location in at least one of the pixel locations; determining a pixel area where the target pixel position is located; and determining a fusion parameter set of the target pixel position according to the reference pixel value and the target pixel value of each pixel position in the pixel region.
When performing the ith image fusion, the embodiment of the disclosure refers to the nearby surrounding pixel positions when determining the fusion parameter set of each pixel position, so that the characteristics of the position are reflected more uniformly, and a poor fusion parameter result caused by single-point noise is prevented from affecting the final fusion result.
In a possible implementation manner, the fusion parameter set includes a first parameter, a second parameter, and a third parameter, where the first parameter is used to represent a weight of a value of the pixel position in the reference frame or the fusion frame, the second parameter is used to represent a weight of a value of the pixel position in the target frame, and the third parameter is used to represent a modification parameter of the pixel position.
In the embodiment of the disclosure, when image fusion is performed, besides the first parameter and the second parameter that serve as weights, a third parameter for correcting the result is introduced; local noise reduction is performed by adjusting the third parameter during image fusion, and the image fusion effect can be improved with a small amount of computation.
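As an illustration of how such a parameter set could be applied at a single pixel, consider the following minimal sketch; the function name and the 8-bit value range are assumptions, since the patent only states that the fused value combines the weighted reference value, the weighted target value, and the correction parameter.

```python
# Minimal sketch: applying one pixel's fusion parameter set (a, b, c).
# a weighs the reference/fused-frame value, b weighs the target-frame value,
# and c is the correction parameter described above.
def fuse_pixel(ref_val: float, tgt_val: float, a: float, b: float, c: float) -> float:
    fused = a * ref_val + b * tgt_val + c
    return min(max(fused, 0.0), 255.0)  # clamp to an assumed 8-bit range
```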
In a possible implementation manner, the determining a fusion parameter set of the target pixel position according to the reference pixel value and the target pixel value of each pixel position in the pixel region includes: determining a similarity function according to a reference pixel value, a target pixel value and the current fusion frequency of each pixel position in the pixel region, wherein the similarity function represents the similarity between an expected fusion pixel value and the reference pixel value of the target pixel position, and comprises a first independent variable, a second independent variable and a third independent variable; determining the value of the first argument, the value of the second argument, and the value of the third argument when the similarity function is at a minimum position; determining a first parameter, a second parameter and a third parameter according to the value of the first independent variable, the value of the second independent variable and the value of the third independent variable; and determining a fusion parameter set of the target pixel position according to the first parameter, the second parameter and the third parameter.
According to the embodiment of the disclosure, a similarity function is dynamically established for each pixel position, the multiple parameters required in the fusion process are calculated from the similarity function to form the fusion parameter set, and the parameters are adjusted in time when noise is found, which improves the accuracy and flexibility of parameter determination.
In a possible implementation manner, the determining a similarity function according to the reference pixel value, the target pixel value, and the current fusion time of each pixel position in the pixel region includes: for each pixel position in the pixel region, determining a pixel difference term and an iteration adjusting term according to a reference pixel value, a target pixel value and the current fusion frequency of each pixel position, wherein the pixel difference term represents the difference between an expected fusion pixel value and the reference pixel value, and the iteration adjusting term is used for adjusting the values of the first independent variable, the second independent variable and the third independent variable according to the current fusion frequency; and determining a similarity function according to the sum of the pixel difference item and the iteration adjusting item of each pixel position in the pixel region.
According to the embodiment of the disclosure, the similarity function is determined through the pixel difference term and the iteration adjustment term, two differences are introduced in the determination process of the similarity function, and the accuracy of determining parameters by the similarity function is improved.
In one possible implementation, the pixel difference term is determined from a difference of the expected fused pixel value and the reference pixel value, and the expected fused pixel value is determined from the reference pixel value, the first argument, the target pixel value, the second argument, and the third argument.
According to the embodiment of the disclosure, the expected fused pixel value is determined from the three arguments, and the parameters required for fusion can then be accurately determined from the pixel difference between the expected fused pixel value and the reference pixel value.
In a possible implementation manner, the iterative adjustment term is determined according to a first adjustment parameter, a second adjustment parameter, a third adjustment parameter, and preset first, second, and third coefficients, where the first adjustment parameter is determined according to the first argument, the second adjustment parameter is determined according to the first argument, the current fusion number, and the second argument, and the third adjustment parameter is determined according to the first argument and the second argument.
The embodiment of the disclosure timely adjusts and fuses the parameters by setting three adjusting parameters under the condition of noise.
In one possible implementation, the determining a first parameter, a second parameter, and a third parameter according to the value of the first argument, the value of the second argument, and the value of the third argument includes: judging whether the target pixel position is a motion position or not according to a first independent variable corresponding to the minimum value of the similarity function; under the condition that the target pixel position is a motion position, denoising a pixel value corresponding to the motion position to obtain a denoising result; and determining a first parameter, a second parameter and a third parameter according to the value of the first independent variable, the value of the second independent variable and the value of the third independent variable corresponding to the noise reduction result.
The embodiment of the disclosure judges whether the current pixel position is a motion position or not through the value of the first independent variable, and performs noise reduction when the current pixel position is the motion position, and adjusts the first parameter, the second parameter and the third parameter required by fusion in time to optimize the fusion effect.
In a possible implementation manner, in the case that the target pixel position is a motion position, performing noise reduction on a pixel value corresponding to the motion position, and obtaining a noise reduction result includes: and under the condition that the target pixel position is a motion position, denoising the pixel value corresponding to the motion position by adjusting the value of the first independent variable, the value of the second independent variable and the value of the third independent variable, and obtaining a denoising result.
The embodiment of the disclosure can reduce noise by adjusting the value of the independent variable, simplify the noise reduction process and realize accurate and timely noise reduction.
In one possible implementation, the adjusting process of the values of the first argument, the second argument, and the third argument comprises: decreasing the value of the first argument; and inputting the value of the second independent variable and the reduced value of the first independent variable into the similarity function to obtain the increased value of the third independent variable.
The embodiment of the disclosure adjusts the argument values based on the similarity function; by adjusting the value of only one argument, the values of the other arguments are updated at the same time, which simplifies the adjustment process.
In one possible implementation, the reducing the value of the first argument comprises: determining a variance from a reference pixel value for each of the pixel locations within the pixel region; reducing the value of the first argument in accordance with the variance, wherein the magnitude of the variance is inversely related to the magnitude of the reduction of the first argument.
According to the embodiment of the disclosure, the corresponding adjustment amplitude is determined from the variance of the pixel region, which avoids the negative effect of a wrongly chosen adjustment amplitude in the noise reduction process.
In a possible implementation manner, the target pixel position is a middle position of the located pixel region.
According to the embodiment of the disclosure, the target pixel position is used as the middle position of the pixel region, so that the pixel region can be rapidly determined, and the accuracy of the obtained fusion parameter set is improved.
According to a second aspect of the present disclosure, there is provided an image processing apparatus including: the image determining module is used for determining N image frames to be fused in the image sequence to be processed; a frame determination module for determining a reference frame and the remaining (N-1) target frames from the N image frames to be fused; a first fusion module, configured to fuse the target frame and the reference frame for a 1 st target frame, to obtain a 1 st fusion frame; a second fusion module, configured to fuse, for an ith target frame, the ith target frame and an (i-1) th fusion frame to obtain an ith fusion frame, where a value of N is a positive integer greater than or equal to 3, and a value of i is [2, N ]; the N image frames to be fused have the same size, and each image fusion process is realized based on the fusion parameter set obtained by the dynamic calculation.
In one possible implementation manner, the first fusion module includes: a first parameter determining submodule, configured to determine a fusion parameter set of a 1 st target frame and the reference frame in a 1 st fusion; and the first fusion submodule is used for fusing the 1 st target frame and the reference frame by using a fusion parameter set fused for the 1 st time aiming at the 1 st target frame to obtain a 1 st fusion frame.
In one possible implementation manner, the first parameter determining sub-module includes: a first position determination unit, configured to determine at least one pixel position included in the reference frame and the 1 st target frame at the time of the 1 st fusion; and the first parameter determining unit is used for respectively determining a fusion parameter set of each pixel position.
In one possible implementation, the first fusion submodule includes: the first pixel fusion unit is used for fusing the reference pixel value of the reference frame at the pixel position and the target pixel value of the 1 st target frame at the pixel position according to the fusion parameter set of each pixel position during the 1 st fusion to obtain a fusion pixel value; a first fused frame determining unit for determining a 1 st fused frame based on fused pixel values of at least one of said pixel locations.
In one possible implementation manner, the first parameter determining unit includes: a first position determining subunit for determining a target pixel position among at least one of the pixel positions; a first area determining subunit, configured to determine a pixel area where the target pixel position is located; and the parameter determining subunit is used for determining a fusion parameter set of the target pixel position according to the reference pixel value and the target pixel value of each pixel position in the pixel region.
In one possible implementation, the second fusion module includes: a second parameter determining submodule, configured to determine a fusion parameter set of an ith target frame and an (i-1) th fusion frame during an ith fusion; and the second fusion submodule is used for fusing the ith target frame and the (i-1) th fusion frame by using the fusion parameter set fused for the ith target frame to obtain the ith fusion frame.
In one possible implementation manner, the second parameter determining sub-module includes: a second position determination unit configured to determine at least one pixel position included in an (i-1) th fused frame and an ith target frame at an ith fusion time; and the second parameter determining unit is used for respectively determining the fusion parameter set of each pixel position.
In one possible implementation, the second fusion submodule includes: a second pixel fusion unit, configured to fuse, according to a fusion parameter set of each pixel position during the ith fusion, a reference pixel value of the (i-1) th fused frame at the pixel position and a target pixel value of the ith target frame at the pixel position to obtain a fused pixel value; and the second fused frame determining unit is used for determining the ith fused frame according to the fused pixel value of the at least one pixel position.
In a possible implementation manner, the second parameter determining unit includes: a second position determining subunit for determining a target pixel position among at least one of the pixel positions; a second area determining subunit, configured to determine a pixel area where the target pixel position is located; and the parameter determining subunit is used for determining a fusion parameter set of the target pixel position according to the reference pixel value and the target pixel value of each pixel position in the pixel region.
In a possible implementation manner, the fusion parameter set includes a first parameter, a second parameter, and a third parameter, where the first parameter is used to represent a weight of a value of the pixel position in the reference frame or the fusion frame, the second parameter is used to represent a weight of a value of the pixel position in the target frame, and the third parameter is used to represent a modification parameter of the pixel position.
In one possible implementation, the parameter determining subunit includes: a function determining subunit, configured to determine a similarity function according to a reference pixel value, a target pixel value, and a current fusion frequency of each pixel position in the pixel region, where the similarity function represents a similarity between an expected fusion pixel value and the reference pixel value of the target pixel position, and the similarity function includes a first argument, a second argument, and a third argument; an argument determination subunit, configured to determine, when the similarity function is at a minimum value position, a value of the first argument, a value of the second argument, and a value of the third argument; a parameter calculating subunit, configured to determine a first parameter, a second parameter, and a third parameter according to the value of the first argument, the value of the second argument, and the value of the third argument; and the parameter set determining subunit is used for determining a fusion parameter set of the target pixel position according to the first parameter, the second parameter and the third parameter.
In one possible implementation, the function determining subunit includes: a function term determining subunit, configured to determine, for each pixel position in the pixel region, a pixel difference term and an iteration adjusting term according to a reference pixel value, a target pixel value, and a current fusion number of the pixel position, where the pixel difference term represents a difference between an expected fusion pixel value and the reference pixel value, and the iteration adjusting term is used to adjust values of the first argument, the second argument, and the third argument according to the current fusion number; and the function calculation subunit is used for determining a similarity function according to the sum of the pixel difference term and the iteration adjusting term of each pixel position in the pixel region.
In one possible implementation, the pixel difference term is determined from a difference of the expected fused pixel value and the reference pixel value, and the expected fused pixel value is determined from the reference pixel value, the first argument, the target pixel value, the second argument, and the third argument.
In a possible implementation manner, the iterative adjustment term is determined according to a first adjustment parameter, a second adjustment parameter, a third adjustment parameter, and preset first, second, and third coefficients, where the first adjustment parameter is determined according to the first argument, the second adjustment parameter is determined according to the first argument, the current fusion number, and the second argument, and the third adjustment parameter is determined according to the first argument and the second argument.
In one possible implementation, the parameter calculating subunit includes: the position judgment subunit is used for judging whether the target pixel position is a motion position according to a first independent variable corresponding to the minimum value of the similarity function; the noise reduction subunit is configured to perform noise reduction on a pixel value corresponding to the motion position to obtain a noise reduction result when the target pixel position is the motion position; and the noise reduction parameter determining subunit is configured to determine a first parameter, a second parameter, and a third parameter according to the value of the first argument, the value of the second argument, and the value of the third argument, which correspond to the noise reduction result.
In one possible implementation, the noise reduction subunit includes: and the independent variable adjusting subunit is configured to, when the target pixel position is a motion position, perform noise reduction on a pixel value corresponding to the motion position by adjusting the values of the first independent variable, the second independent variable, and the third independent variable, so as to obtain a noise reduction result.
In one possible implementation, the independent variable adjusting subunit includes: a decrease subunit operable to decrease the value of the first argument; and the increasing subunit is configured to input the value of the second argument and the reduced value of the first argument into the similarity function, so as to obtain an increased value of the third argument.
In one possible implementation, the reduction subunit includes: a variance calculating subunit, configured to determine a variance according to a reference pixel value of each of the pixel positions in the pixel region; and the independent variable adjusting subunit is used for reducing the value of the first independent variable according to the variance, wherein the magnitude of the variance is inversely related to the reduction amplitude of the first independent variable.
In a possible implementation manner, the target pixel position is a middle position of the located pixel region.
According to a third aspect of the present disclosure, there is provided an electronic device comprising: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to invoke the instructions stored by the memory to perform the image processing method described above.
According to a fourth aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described image processing method.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure. Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure.
Fig. 1 shows a flowchart of an image processing method of an embodiment of the present disclosure.
Fig. 2 shows a schematic diagram of an image processing procedure of an embodiment of the present disclosure.
Fig. 3 shows a schematic diagram of a pixel region of an embodiment of the disclosure.
Fig. 4 shows a schematic diagram of a process of determining a fusion parameter set according to an embodiment of the disclosure.
FIG. 5 illustrates a schematic diagram of a variance and adjustment magnitude relationship of an embodiment of the present disclosure.
Fig. 6 shows a schematic diagram of an image processing apparatus according to an embodiment of the present disclosure.
Fig. 7 shows a schematic diagram of an electronic device of an embodiment of the disclosure.
Fig. 8 shows a schematic diagram of an electronic device of an embodiment of the disclosure.
Detailed Description
Various exemplary embodiments, features and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers can indicate functionally identical or similar elements. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used exclusively herein to mean "serving as an example, embodiment, or illustration. Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
The term "and/or" herein is merely an association describing an associated object, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality, for example, including at least one of A, B, C, and may mean including any one or more elements selected from the group consisting of A, B and C.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present disclosure.
Fig. 1 shows a flowchart of an image processing method of an embodiment of the present disclosure. In a possible implementation manner, the image processing method of the embodiment of the disclosure may be executed by an electronic device such as a terminal device or a server. The terminal device may be a User Equipment (UE), a mobile device, a user terminal, a cellular phone, a cordless phone, a Personal Digital Assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, or the like, and the terminal device may implement the image processing method of the embodiment of the present disclosure by using a processor to call computer-readable instructions stored in a memory. Alternatively, the image processing method of the embodiment of the present disclosure may be executed by a server, which may be a single server or a server cluster composed of a plurality of servers.
The embodiment of the disclosure can be applied to any application scene needing to align the image sequence. For example, a sequence of images captured by a camera fixedly arranged on a road in a preset time interval is aligned, or frames in a video captured by a handheld camera are aligned. The electronic device of the embodiment of the disclosure can execute the image processing method to perform image processing on each frame in the image sequence to be processed after receiving the image sequence to be processed acquired by the image acquisition device, so as to obtain an image processing result.
As shown in fig. 1, the image processing method of the embodiment of the present disclosure may include the following steps S10-S40.
And step S10, determining N image frames to be fused in the image sequence to be processed.
In one possible implementation manner, the electronic device executing the image processing method of the embodiment of the present disclosure determines an image sequence to be processed, where the sequence includes N image frames that need to be fused. Optionally, the image sequence to be processed may be received from another electronic device or an image acquisition device after acquisition, or may be acquired through a built-in image acquisition device. Further, the image sequence to be processed may be a plurality of images continuously acquired within a preset time period, or may be composed of a plurality of images acquired by a plurality of cameras. Optionally, the image sequence to be processed may also be formed by acquiring a plurality of image frames in any of the above manners and then extracting the image frames to be fused from them.
For example, when a plurality of images acquired by the road camera within a preset time interval need to be processed, a plurality of images continuously acquired by the road camera within the preset time interval may be acquired as an image sequence to be processed, or an image frame needing to be fused is extracted from the plurality of continuously acquired images as the image sequence to be processed. And transmitting the sequence of images to be processed to a connected electronic device to execute the image processing method of the embodiment of the present disclosure by the electronic device.
Step S20, determining a reference frame from the N image frames to be fused, and the remaining (N-1) target frames.
In a possible implementation manner, after the image sequence to be processed is determined, the image processing is performed on N image frames in the image sequence to be processed in an iterative manner. Optionally, the image processing process is image fusion. Optionally, before image fusion, any one of the N image frames is determined as a reference frame, and the other image frames are determined as target frames. During image fusion, firstly, fusion is carried out according to a reference frame and a first target frame to obtain a fusion frame, then, image fusion is further carried out on the fusion frame obtained each time and other target frames, and the image processing process is finished after the image fusion of the whole image sequence to be processed is finished. The sizes of the N image frames to be fused are the same, and each image fusion process is realized based on the fusion parameter set obtained by the dynamic calculation.
Fig. 2 shows a schematic diagram of an image processing procedure of an embodiment of the present disclosure. As shown in fig. 2, when performing image processing on an image sequence to be processed, image frame 1 in the sequence is first used as the reference frame and the adjacent next image frame 2 as the target frame, and image fusion is performed to obtain fused frame 1. The obtained fused frame 1 is then fused with the next image frame 3 adjacent to image frame 2 to obtain fused frame 2. The fused frame obtained each time is further fused with the image frame at the next position in turn, until the last image frame in the image sequence to be processed has been fused and the image processing procedure ends.
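The chain in fig. 2 can be expressed as a simple driver loop. The following is a minimal sketch under assumed names (fuse_pair, the per-pixel pass sketched further below, is not named in the patent):

```python
import numpy as np

def fuse_sequence(frames: list[np.ndarray]) -> np.ndarray:
    """Iteratively fuse N same-size frames: frame 1 is the reference;
    each later frame is fused with the previous fusion result."""
    assert len(frames) >= 3 and all(f.shape == frames[0].shape for f in frames)
    fused = frames[0].astype(np.float64)              # reference frame
    for k, target in enumerate(frames[1:], start=1):  # k = current fusion count
        # a fresh fusion parameter set is computed dynamically on every pass
        fused = fuse_pair(fused, target.astype(np.float64), k)
    return fused
```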
Step S30, aiming at the 1 st target frame, the target frame and the reference frame are fused to obtain the 1 st fusion frame.
In a possible implementation manner, after the reference frame and the target frame are determined, the reference frame and the target frame are subjected to image fusion in a first iteration process to obtain a 1 st fusion frame. Optionally, the reference frame and the target frame are images with the same size, and the 1 st image fusion process is implemented based on the fusion parameter set.
Optionally, the 1st image fusion process is to determine the fusion parameter set of the 1st target frame and the reference frame at the 1st fusion, and then, for the 1st target frame, fuse the 1st target frame and the reference frame using the fusion parameter set of the 1st fusion to obtain the 1st fused frame. Further, in the process of image fusion, each pixel in the target frame and the reference frame needs to be fused separately, and a fusion parameter set needs to be determined separately for each pixel being fused. That is, at the 1st fusion, at least one pixel position included in the reference frame and the 1st target frame is determined, and a fusion parameter set is determined for each pixel position respectively. After the fusion parameter set of each pixel position is obtained, the reference pixel value of the reference frame at the pixel position and the target pixel value of the 1st target frame at the pixel position are fused according to the fusion parameter set of that pixel position at the 1st fusion to obtain a fused pixel value. The 1st fused frame is determined based on the fused pixel values of the at least one pixel position.
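A per-pixel pass of this kind could look like the sketch below, continuing the driver above; params_for_pixel (sketched later, after the derivative equations) stands for the dynamic per-position parameter computation, and its name and signature are assumptions:

```python
def fuse_pair(ref: np.ndarray, tgt: np.ndarray, k: int) -> np.ndarray:
    """Fuse two same-size single-channel frames pixel by pixel; a fresh
    fusion parameter set (a, b, c) is computed for every pixel position."""
    h, w = ref.shape
    out = np.empty_like(ref)
    for y in range(h):
        for x in range(w):
            a, b, c = params_for_pixel(ref, tgt, y, x, k)
            out[y, x] = a * ref[y, x] + b * tgt[y, x] + c
    return out
```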
In a possible implementation manner, the fusion parameter set may include at least one fusion parameter used for pixel fusion, for example, a first parameter, a second parameter, and a third parameter, where the first parameter represents the weight of the value of the pixel position in the reference frame or the fused frame, the second parameter represents the weight of the value of the pixel position in the 1st or ith target frame, and the third parameter represents the correction parameter of the pixel position. The process of determining the fusion parameter set for each pixel position may be to determine a target pixel position among the at least one pixel position, determine the pixel region where the target pixel position is located, and determine the fusion parameter set of the target pixel position according to the reference pixel value and the target pixel value of each pixel position in the pixel region. Optionally, the target pixel position may be any one of the plurality of pixel positions included in the target frame and the reference frame; after the target pixel position is determined, the electronic device determines the fusion parameter set according to the reference pixel value and the target pixel value of each pixel position in the region where the target pixel position is located. Further, after the fusion parameter set of the target pixel position is determined, another target pixel position is determined, until the fusion parameter sets of all pixel positions in the target frame and the reference frame have been determined. Further, the pixel region where the target pixel position is located may be a region of a predetermined size centered on the target pixel position, that is, the target pixel position is the middle position of the pixel region where it is located. Alternatively, the pixel region where the target pixel position is located may be determined according to a predetermined size. For example, when the predetermined size is 3 × 3, the pixel region is a 3 × 3 pixel region with the target pixel position as the middle position.
Fig. 3 shows a schematic diagram of a pixel region of an embodiment of the disclosure. As shown in fig. 3, the target frame and the reference frame having the same size may have the same 12 × 12=144 pixel positions. One pixel position may be determined therein as a target pixel position, and the pixel region 30 in which the target pixel position is located may be determined with the target pixel position as the center when the predetermined size is 5 × 5. Alternatively, when the complete pixel region cannot be determined due to the target pixel position being at the edge of the target frame and the reference frame, the target frame and the reference frame may be expanded by copying the image edge to determine the pixel region.
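Continuing the sketch, the region extraction with edge replication described here could be implemented as follows (the helper name is an assumption):

```python
def pixel_region(img: np.ndarray, y: int, x: int, size: int = 5) -> np.ndarray:
    """Return the size x size region centered on (y, x). Edge pixels are
    replicated outward when the window extends past the frame border, as in
    the expansion-by-copying described for fig. 3."""
    r = size // 2
    padded = np.pad(img, r, mode="edge")  # copy the image edge outward
    return padded[y:y + size, x:x + size]
```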
In one possible implementation manner, the determining a fusion parameter set according to the reference pixel value and the target pixel value of each pixel position in the pixel region according to the embodiment of the disclosure includes: and determining a similarity function according to the reference pixel value, the target pixel value and the current fusion frequency of each pixel position in the pixel region, wherein the similarity function represents the similarity between the expected fusion pixel value and the reference pixel value of the target pixel position, and the similarity function comprises the value of a first independent variable, the value of a second independent variable and the value of a third independent variable when the similarity function is determined to be at the minimum value position by the first independent variable, the second independent variable and the third independent variable. And determining the first parameter, the second parameter and the third parameter according to the value of the first independent variable, the value of the second independent variable and the value of the third independent variable. And determining a fusion parameter set of the target pixel position according to the first parameter, the second parameter and the third parameter.
Optionally, the determining of the similarity function of the target pixel position may include: and for each pixel position in the pixel region, determining a pixel difference item and an iteration adjusting item according to the reference pixel value, the target pixel value and the current fusion frequency of each pixel position, wherein the pixel difference item represents the difference between the expected fusion pixel value and the reference pixel value, and the iteration adjusting item is used for adjusting the values of the first independent variable, the second independent variable and the third independent variable according to the current fusion frequency. A similarity function is determined based on the sum of the pixel difference term and the iterative adjustment term for each pixel location within the pixel region. Wherein the pixel difference term is determined from a difference of the expected fused pixel value and the reference pixel value, and the expected fused pixel value is determined from the reference pixel value, the first argument, the target pixel value, the second argument, and the third argument. The iteration adjusting item is determined according to a first adjusting parameter, a second adjusting parameter, a third adjusting parameter, a preset first coefficient, a preset second coefficient and a preset third coefficient, the first adjusting parameter is determined according to a first independent variable, the second adjusting parameter is determined according to the first independent variable, the current fusion frequency and the second independent variable, and the third adjusting parameter is determined according to the first independent variable and the second independent variable.
Alternatively, the pixel difference term may be determined by, for each pixel position within the pixel region, taking the square of the difference between the expected fused pixel value and the reference pixel value as the pixel difference term, where the expected fused pixel value may be the sum of the product of the reference pixel value and the first argument, the product of the target pixel value and the second argument, and the third argument. The iterative adjustment term may be determined by determining the preset first, second, and third coefficients and taking the sum of the product of the first coefficient and the first adjustment parameter, the product of the second coefficient and the second adjustment parameter, and the product of the third coefficient and the third adjustment parameter, where the first adjustment parameter is the square of the first argument, the second adjustment parameter is the square of the first argument minus the product of the current fusion count and the second argument, and the third adjustment parameter is the square of the sum of the first argument and the second argument minus a constant term.
In one possible implementation, the similarity function of the target pixel position may be as follows:

$$E = \sum_{p \in \Omega} \Bigl[ \bigl( a\,r_p + b\,t_p + c - r_p \bigr)^2 + \lambda_1 a^2 + \lambda_2 (a - k b)^2 + \lambda_3 (a + b - 1)^2 \Bigr]$$

where E is the similarity between the expected fused pixel value and the reference pixel value, $r_p$ is the reference pixel value and $t_p$ is the target pixel value at pixel position $p$ in the pixel region $\Omega$, and a, b, and c are the first argument, the second argument, and the third argument in that order. k is the current fusion count, and $\lambda_1$, $\lambda_2$, and $\lambda_3$ are the preset first, second, and third coefficients in that order. Further, $a r_p + b t_p + c$ is the expected fused pixel value, $(a r_p + b t_p + c - r_p)^2$ is the pixel difference term, $a^2$ is the first adjustment parameter, $(a - k b)^2$ is the second adjustment parameter, and $(a + b - 1)^2$ is the third adjustment parameter, where the constant term is 1. $\lambda_1 a^2 + \lambda_2 (a - k b)^2 + \lambda_3 (a + b - 1)^2$ is the iteration adjustment term.
Optionally, the pixel difference term is used to characterize the similarity between the expected fused pixel value and the reference pixel value, and the iteration adjustment term is used to adjust the first argument, the second argument, and the third argument according to the current fusion count. The first, second, and third coefficients can be determined according to the actual image fusion requirements: the first coefficient adjusts the size of the first argument, the second coefficient adjusts the proportion between the first argument and the second argument, and the third coefficient adjusts the third argument through the first argument and the second argument.
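Written out directly, E can be evaluated numerically; the sketch below follows the reconstructed formula above (the function and coefficient names are assumptions), with the adjustment terms added once per pixel of the region:

```python
def similarity_E(a, b, c, R, T, k, lam1, lam2, lam3):
    """Evaluate E(a, b, c) over one pixel region. R and T are arrays holding
    the reference and target pixel values of the region; the pixel difference
    term and the weighted adjustment terms are summed over every position."""
    diff = a * R + b * T + c - R  # expected fused value minus reference value
    adjust = lam1 * a**2 + lam2 * (a - k * b)**2 + lam3 * (a + b - 1.0)**2
    return float(np.sum(diff**2 + adjust))
```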
In a possible implementation manner, after determining the similarity function of the target pixel position, the electronic device calculates the minimum of the similarity function, determines the first parameter, the second parameter, and the third parameter from the values of the first argument, the second argument, and the third argument at that minimum, and then determines the fusion parameter set of the target pixel position from the first parameter, the second parameter, and the third parameter. Optionally, the minimum of the similarity function may be calculated by differentiating the similarity function with respect to the first argument, the second argument, and the third argument to obtain a first, a second, and a third derivative function, setting the value of each derivative function to 0, and solving the resulting system of equations; the solution gives the values of the three arguments at the minimum of the similarity function. Further, the results of differentiating the similarity function with respect to the first argument, the second argument, and the third argument are, in order:

$$\frac{\partial E}{\partial a} = \sum_{p \in \Omega} \Bigl[ 2 r_p \bigl( a r_p + b t_p + c - r_p \bigr) + 2 \lambda_1 a + 2 \lambda_2 (a - k b) + 2 \lambda_3 (a + b - 1) \Bigr] = 0$$

$$\frac{\partial E}{\partial b} = \sum_{p \in \Omega} \Bigl[ 2 t_p \bigl( a r_p + b t_p + c - r_p \bigr) - 2 k \lambda_2 (a - k b) + 2 \lambda_3 (a + b - 1) \Bigr] = 0$$

$$\frac{\partial E}{\partial c} = \sum_{p \in \Omega} 2 \bigl( a r_p + b t_p + c - r_p \bigr) = 0$$

Solving these three derivative equations as a system of equations yields the value of the first argument, the value of the second argument, and the value of the third argument corresponding to the minimum of the similarity function.
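Because these stationarity conditions are linear in (a, b, c), the minimizer reduces to a 3 × 3 linear solve. The sketch below continues the earlier code under the reconstructed E above; the helper name and the coefficient defaults are placeholder assumptions, not values from the patent:

```python
def params_for_pixel(ref, tgt, y, x, k, lam1=0.05, lam2=0.05, lam3=1.0, size=5):
    """Solve dE/da = dE/db = dE/dc = 0 for one target pixel position and
    return the fusion parameter set (a, b, c) after the motion check."""
    R = pixel_region(ref, y, x, size).ravel()
    T = pixel_region(tgt, y, x, size).ravel()
    n = R.size
    Srr, Stt, Srt = R @ R, T @ T, R @ T
    Sr, St = R.sum(), T.sum()
    M = np.array([
        [Srr + n * (lam1 + lam2 + lam3), Srt + n * (lam3 - k * lam2),     Sr],
        [Srt + n * (lam3 - k * lam2),    Stt + n * (k * k * lam2 + lam3), St],
        [Sr,                             St,                              float(n)],
    ])
    rhs = np.array([Srr + n * lam3, Srt + n * lam3, Sr])
    a, b, c = np.linalg.solve(M, rhs)
    # motion-position check and noise reduction, sketched below; it re-derives
    # c, which coincides with the solved c when a is left unchanged
    return adjust_for_motion(a, b, R, T)
```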
In a possible implementation manner, after determining the value of the first argument, the value of the second argument, and the value of the third argument corresponding to the minimum value of the similarity function, the first parameter, the second parameter, and the third parameter may be determined according to the values of the respective variables. The first parameter is determined according to the value of the first independent variable, the second parameter is determined according to the value of the second independent variable, and the third parameter is determined according to the value of the third independent variable. Optionally, when determining the first parameter, the second parameter, and the third parameter, it is necessary to determine whether the current target pixel position is a motion position, that is, whether the difference between the target frame and the reference frame at the position is large. And, the first parameter, the second parameter, and the third parameter are determined based on the different determination results. For example, whether the current target pixel position is a motion position may be determined according to the first argument corresponding to the minimum value of the similarity function, and when the target pixel position is the motion position, the pixel value corresponding to the motion position may be denoised to obtain a denoising result. And determining a first parameter, a second parameter and a third parameter according to the value of the first independent variable, the value of the second independent variable and the value of the third independent variable corresponding to the noise reduction result. When the current target pixel position is not the motion position, the value of the first argument corresponding to the minimum value of the similarity function may be directly determined as the first parameter, the value of the second argument as the second parameter, and the value of the third argument as the third parameter.
Further, the expected fused pixel value is determined as the sum of the product of the reference pixel value and the first argument, the product of the target pixel value and the second argument, and the third argument. When the value of the first argument is large and the value of the second argument is small, the expected fused pixel value is too close to the reference pixel value, so the expected fused pixel value remains noisy. Therefore, when the target pixel position is a motion position, the pixel value corresponding to the motion position may be denoised by adjusting the values of the first argument, the second argument, and the third argument to obtain a noise reduction result.
In a possible implementation manner, whether the target pixel position is a motion position may also be determined according to the value of the first argument corresponding to the minimum. For example, the target pixel position may be determined to be a motion position in response to the value of the first argument being greater than the argument threshold. Further, the process of denoising by adjusting the values of the first argument, the second argument, and the third argument may include decreasing the value of the first argument, and inputting the value of the second argument and the decreased value of the first argument into the similarity function to obtain an increased value of the third argument.
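Under the similarity function reconstructed above (a consequence of that reconstruction, not an equation stated in the patent), holding b fixed and setting $\partial E / \partial c = 0$ gives

$$c = \frac{1}{n} \sum_{p \in \Omega} \bigl( (1 - a)\, r_p - b\, t_p \bigr),$$

so decreasing a increases $(1 - a)$ and therefore increases c for non-negative pixel values, which matches the increased third argument described here.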
Alternatively, the magnitude of the decrease in the value of the first argument may be determined from the variance of the reference pixel values of the pixel positions within the pixel region. For example, in response to the value of the first argument being greater than the argument threshold, the variance of the reference pixel values within the pixel region may be determined and the value of the first argument reduced according to the variance, where the magnitude of the variance is inversely related to the adjustment magnitude of the first argument. That is, the larger the variance of the reference pixel values of the pixel positions in the pixel region, the smaller the reduction applied to the corresponding first argument.
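This motion check and variance-guided reduction could be sketched as follows, continuing the earlier code; the threshold value is an assumption (the patent does not specify one), and reduction_magnitude is sketched after fig. 5 below:

```python
A_THRESHOLD = 0.8  # illustrative assumption; the patent gives no numeric value

def adjust_for_motion(a, b, R, T):
    """If the first argument exceeds the threshold, treat the position as a
    motion position: reduce a by a variance-dependent amount, then recompute
    c from the dE/dc = 0 condition with a and b held fixed."""
    if a > A_THRESHOLD:
        a -= reduction_magnitude(np.var(R))    # larger variance -> smaller step
    c = float(np.mean((1.0 - a) * R - b * T))  # optimal c for the given a, b
    return a, b, c
```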
Fig. 4 shows a schematic diagram of a process of determining a fusion parameter set according to an embodiment of the disclosure. As shown in fig. 4, once the minimum value of the similarity function has been computed, the corresponding values of the first argument, the second argument, and the third argument are used to determine the fusion parameter set.
Optionally, the fusion parameter set is determined as follows: first determine 40 the value of the first argument, then compare it with a preset argument threshold to judge 41 whether the value of the first argument is greater than that threshold. When the value of the first argument is not greater than the preset argument threshold, directly determine 43 the values of the second argument and the third argument, and determine 44 the first parameter, the second parameter, and the third parameter from the values of the first, second, and third arguments, that is, take the value of the first argument as the first parameter, the value of the second argument as the second parameter, and the value of the third argument as the third parameter. When the value of the first argument is greater than the preset argument threshold, adjust 42 the value of the first argument, then determine 43 the values of the second argument and the third argument from the adjusted value of the first argument, and determine 44 the first, second, and third parameters from the adjusted values in the same way.
FIG. 5 illustrates a schematic diagram of the relationship between the variance and the adjustment magnitude according to an embodiment of the present disclosure. As shown in fig. 5, when adjusting the value of the first argument, the first argument is reduced according to the variance of the reference pixel values at the pixel positions within the pixel region, and the reduced value is taken as the first parameter. The variance is inversely related to the adjustment magnitude of the first argument; that is, the correlation function 50 between variance and adjustment magnitude is monotonically decreasing, and its graph may be a straight line or a curve. Alternatively, the correspondence between variance and adjustment magnitude may be a preset mapping table: after the variance of the reference pixel values within the pixel region is determined, the corresponding adjustment magnitude is obtained directly by lookup.
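Where a preset mapping table is used instead of an analytic function, the lookup can be a simple piecewise-linear interpolation, as in the hedged sketch below; the breakpoints and amplitudes are invented for illustration, and only their monotonically decreasing trend reflects Fig. 5.

```python
import numpy as np

# Assumed mapping table: variance breakpoints -> adjustment amplitude.
# Only the decreasing trend is prescribed by the text; the numbers are illustrative.
VARIANCE_POINTS = np.array([0.0, 1.0, 4.0, 16.0, 64.0])
AMPLITUDE_POINTS = np.array([0.30, 0.20, 0.10, 0.05, 0.01])

def lookup_adjustment_amplitude(variance):
    """Piecewise-linear lookup of the adjustment amplitude for a given variance."""
    return float(np.interp(variance, VARIANCE_POINTS, AMPLITUDE_POINTS))
```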
For each target pixel position, after determining the first parameter, the second parameter, and the third parameter, a fusion parameter set including the first parameter, the second parameter, and the third parameter is determined.
The embodiment of the disclosure adjusts the parameters required for image fusion by adjusting the first argument, thereby achieving local noise reduction of the fusion result, suppressing local noise caused by spatially varying fusion strength, and obtaining a multi-frame fusion result with more uniform noise, while adding far less computation than conventional local noise reduction.
In a possible implementation manner, for each pixel position, frame fusion is performed according to the parameters in the fusion parameter set, the reference pixel value, and the target pixel value to obtain a fused pixel value. The frame fusion process computes the sum of the product of the first parameter and the reference pixel value of the reference frame at the pixel position, the product of the second parameter and the target pixel value of the target frame at the pixel position, and the third parameter, yielding the fused pixel value.
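By way of illustration (not part of the original disclosure), the per-pixel fusion is a single affine combination. The sketch below applies it to whole frames with numpy, assuming one (first, second, third) parameter triple per pixel stored in three same-size parameter maps; the function and array names are illustrative.

```python
import numpy as np

def fuse_frames(ref_frame, tgt_frame, a_map, b_map, c_map):
    """Per-pixel frame fusion.

    fused(x, y) = a(x, y) * ref(x, y) + b(x, y) * tgt(x, y) + c(x, y),
    i.e. first parameter * reference pixel value
       + second parameter * target pixel value
       + third (correction) parameter.
    """
    return a_map * ref_frame + b_map * tgt_frame + c_map
```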
In one possible implementation, the fused frame is determined as the image fusion result according to the fused pixel value of each pixel position.
In step S40, for the ith target frame, the ith target frame and the (i-1)th fusion frame are fused to obtain the ith fusion frame.
In a possible implementation manner, after the first image fusion process finishes and the 1st fusion frame is obtained, the 1st fusion frame is carried into the next fusion process, where it is fused with the next target frame, and so on until all frames of the image sequence to be processed have been fused. Optionally, the number N of image frames in the image sequence to be processed is a positive integer greater than or equal to 3, and the fusion index i takes values in [2, N].
Alternatively, the image fusion of the ith target frame with the (i-1)th fusion frame may be implemented by the image fusion process of step S30. That is, the fusion parameter set of the ith target frame and the (i-1)th fusion frame for the ith fusion is determined, and the ith target frame is then fused with the (i-1)th fusion frame using that parameter set to obtain the ith fusion frame. Determining this fusion parameter set may include determining at least one pixel position shared by the (i-1)th fusion frame and the ith target frame, and determining a fusion parameter set for each such pixel position.
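The overall iteration can then be sketched as below (an illustrative sketch, not the original disclosure), assuming a list of N same-size frames whose first element is the reference frame, and a callable fuse_pair that recomputes the fusion parameter sets for each fusion; both names are assumptions.

```python
def fuse_sequence(frames, fuse_pair):
    """Iteratively fuse N (>= 3) frames.

    frames[0] is the reference frame, frames[1:] are the target frames.
    fuse_pair(base, target, i) performs the ith fusion, dynamically
    recomputing the per-pixel fusion parameter sets each time.
    """
    fused = fuse_pair(frames[0], frames[1], 1)  # 1st fusion: target 1 with reference
    for i in range(2, len(frames)):             # ith fusion: target i with fused (i-1)
        fused = fuse_pair(fused, frames[i], i)
    return fused
```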
In a possible implementation manner, once the fusion parameter set of each pixel position required for the ith fusion is obtained, the reference pixel value of the (i-1)th fusion frame and the target pixel value of the ith target frame at each pixel position are fused according to that position's fusion parameter set to obtain a fused pixel value, and the ith fusion frame is determined from the fused pixel values of the at least one pixel position. Optionally, in the ith image fusion process, the fusion parameter set of each pixel position includes a first parameter, a second parameter, and a third parameter, where the first parameter characterizes the weight of the value of the pixel position in the (i-1)th fusion frame, the second parameter characterizes the weight of the value of the pixel position in the target frame, and the third parameter characterizes the correction parameter of the pixel position. The fusion parameter set may be determined by selecting a target pixel position among the at least one pixel position, determining the pixel region in which the target pixel position lies, and determining the fusion parameter set of the target pixel position from the reference pixel value and the target pixel value of each pixel position in that region.
Further, in the ith image fusion process, the fusion parameter set of each pixel position may likewise be determined based on a similarity function. That is, a similarity function is determined from the reference pixel value, the target pixel value, and the current fusion count of each pixel position in the pixel region, where the similarity function represents the similarity between the expected fusion pixel value and the reference pixel value of the target pixel position and includes a first argument, a second argument, and a third argument. The values of the first argument, the second argument, and the third argument at the minimum of the similarity function are determined; the first parameter, the second parameter, and the third parameter are determined from those values; and the fusion parameter set of the target pixel position is determined from the three parameters. The construction of the similarity function and the determination of the first, second, and third parameters are the same as in the first fusion and are not repeated here.
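As a worked example (illustrative only, not the original disclosure), the following sketch builds a quadratic similarity cost of the form described above, a pixel difference term comparing the expected fused value a*r + b*t + c with the reference value r over the pixel region, plus an iteration adjustment term depending on the current fusion count, and minimizes it numerically. The text does not give the concrete form of the adjustment term, so the quadratic penalties below, which pull (a, b) toward the running-average weights k/(k+1) and 1/(k+1), are assumptions, as are the coefficient values.

```python
import numpy as np
from scipy.optimize import minimize

def similarity_cost(params, ref_patch, tgt_patch, k, c1=0.1, c2=0.1, c3=0.1):
    """Similarity function for one target pixel position.

    params    : (a, b, c) = first, second and third argument
    ref_patch : reference (or previous fused) pixel values in the region
    tgt_patch : target-frame pixel values in the region
    k         : current fusion count; c1..c3 stand in for the preset coefficients
    """
    a, b, c = params
    expected = a * ref_patch + b * tgt_patch + c      # expected fused pixel values
    pixel_diff = np.sum((expected - ref_patch) ** 2)  # pixel difference term
    # Assumed iteration adjustment term (the text only states which
    # arguments each of its three components depends on):
    iter_adj = (c1 * (a - k / (k + 1.0)) ** 2
                + c2 * (b - 1.0 / (k + 1.0)) ** 2
                + c3 * (a + b - 1.0) ** 2)
    return pixel_diff + iter_adj

def solve_fusion_arguments(ref_patch, tgt_patch, k):
    """Values of the three arguments at the minimum of the similarity function."""
    res = minimize(similarity_cost, x0=np.array([0.5, 0.5, 0.0]),
                   args=(ref_patch, tgt_patch, k), method="Nelder-Mead")
    return res.x  # candidate (first, second, third) values before the motion check
```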
In the above image processing procedure, each image fusion result can be fused again with the next frame, so that better image fusion parameters are obtained and the fusion result is optimized. Meanwhile, the embodiment of the disclosure dynamically determines the fusion parameter set of each pixel position at each image fusion through a mathematical model, so that the parameters can be adjusted in real time when a pixel position is found to be a noisy motion position, improving the reliability of the adjusted parameters and avoiding the introduction of noise. In addition, beyond the first and second parameters that act as weights, a third parameter for correcting the result is introduced; adjusting this third parameter during image fusion performs local noise reduction and improves the fusion effect at little computational cost.
It is understood that the above-mentioned method embodiments of the present disclosure can be combined with each other to form combined embodiments without departing from the principle and logic; for brevity, such combinations are not described in detail in the present disclosure. Those skilled in the art will appreciate that, in the above methods of the specific embodiments, the specific order of execution of the steps should be determined by their functions and possible inherent logic.
In addition, the present disclosure also provides an image processing apparatus, an electronic device, a computer-readable storage medium, and a program, each of which can be used to implement any of the image processing methods provided by the present disclosure; for the corresponding technical solutions and descriptions, refer to the method section, which is not repeated here for brevity.
Fig. 6 shows a schematic diagram of an image processing apparatus 60 according to an embodiment of the present disclosure, and as shown in fig. 6, the image processing apparatus 60 according to an embodiment of the present disclosure may include: the image determining module 61 is configured to determine N image frames to be fused in the image sequence to be processed; a frame determination module 62, configured to determine a reference frame and the remaining (N-1) target frames from the N image frames to be fused; a first fusion module 63, configured to fuse the target frame and the reference frame for a 1 st target frame to obtain a 1 st fusion frame; a second fusion module 64, configured to fuse, for an ith target frame, the ith target frame and an (i-1) th fusion frame to obtain an ith fusion frame, where a value of N is a positive integer greater than or equal to 3, and a value of i is [2, N ]; the N image frames to be fused have the same size, and each image fusion process is realized based on the fusion parameter set obtained by the dynamic calculation.
In one possible implementation, the first fusion module 63 includes: a first parameter determining submodule, configured to determine a fusion parameter set of a 1 st target frame and the reference frame in a 1 st fusion; and the first fusion submodule is used for fusing the 1 st target frame and the reference frame by using a fusion parameter set fused for the 1 st time aiming at the 1 st target frame to obtain a 1 st fusion frame.
In one possible implementation manner, the first parameter determining sub-module includes: a first position determination unit, configured to determine at least one pixel position included in the reference frame and the 1 st target frame at the time of the 1 st fusion; and the first parameter determining unit is used for respectively determining a fusion parameter set of each pixel position.
In one possible implementation, the first fusion submodule includes: the first pixel fusion unit is used for fusing the reference pixel value of the reference frame at the pixel position and the target pixel value of the 1 st target frame at the pixel position according to the fusion parameter set of each pixel position during the 1 st fusion to obtain a fusion pixel value; a first fused frame determining unit for determining a 1 st fused frame based on fused pixel values of at least one of said pixel locations.
In one possible implementation manner, the first parameter determining unit includes: a first position determining subunit for determining a target pixel position among at least one of the pixel positions; a first area determining subunit, configured to determine a pixel area where the target pixel position is located; and the parameter determining subunit is used for determining a fusion parameter set of the target pixel position according to the reference pixel value and the target pixel value of each pixel position in the pixel region.
In one possible implementation, the second fusion module 64 includes: a second parameter determining submodule, configured to determine a fusion parameter set of an ith target frame and an (i-1)th fusion frame during an ith fusion; and a second fusion submodule, configured to fuse, for the ith target frame, the ith target frame and the (i-1)th fusion frame using the fusion parameter set of the ith fusion, to obtain the ith fusion frame.
In one possible implementation manner, the second parameter determining sub-module includes: a second position determination unit configured to determine at least one pixel position included in an (i-1) th fused frame and an ith target frame at an ith fusion time; and the second parameter determining unit is used for respectively determining the fusion parameter set of each pixel position.
In one possible implementation, the second fusion submodule includes: a second pixel fusion unit, configured to fuse, according to a fusion parameter set of each pixel position during the ith fusion, a reference pixel value of the (i-1) th fused frame at the pixel position and a target pixel value of the ith target frame at the pixel position to obtain a fused pixel value; and the second fused frame determining unit is used for determining the ith fused frame according to the fused pixel value of the at least one pixel position.
In a possible implementation manner, the second parameter determining unit includes: a second position determining subunit for determining a target pixel position among at least one of the pixel positions; a second area determining subunit, configured to determine a pixel area where the target pixel position is located; and the parameter determining subunit is used for determining a fusion parameter set of the target pixel position according to the reference pixel value and the target pixel value of each pixel position in the pixel region.
In a possible implementation manner, the fusion parameter set includes a first parameter, a second parameter, and a third parameter, where the first parameter is used to represent the weight of the value of the pixel position in the reference frame or the fusion frame, the second parameter is used to represent the weight of the value of the pixel position in the target frame, and the third parameter is used to represent the correction parameter of the pixel position.
In one possible implementation, the parameter determining subunit includes: a function determining subunit, configured to determine a similarity function according to a reference pixel value, a target pixel value, and a current fusion frequency of each pixel position in the pixel region, where the similarity function represents a similarity between an expected fusion pixel value and the reference pixel value of the target pixel position, and the similarity function includes a first argument, a second argument, and a third argument; an argument determination subunit, configured to determine, when the similarity function is at a minimum value position, a value of the first argument, a value of the second argument, and a value of the third argument; a parameter calculating subunit, configured to determine a first parameter, a second parameter, and a third parameter according to the value of the first argument, the value of the second argument, and the value of the third argument; and the parameter set determining subunit is used for determining a fusion parameter set of the target pixel position according to the first parameter, the second parameter and the third parameter.
In one possible implementation, the function determining subunit includes: a function term determining subunit, configured to determine, for each pixel position in the pixel region, a pixel difference term and an iteration adjusting term according to a reference pixel value, a target pixel value, and a current fusion number of the pixel position, where the pixel difference term represents a difference between an expected fusion pixel value and the reference pixel value, and the iteration adjusting term is used to adjust values of the first argument, the second argument, and the third argument according to the current fusion number; and the function calculation subunit is used for determining a similarity function according to the sum of the pixel difference term and the iteration adjusting term of each pixel position in the pixel region.
In one possible implementation, the pixel difference term is determined from a difference of the expected fused pixel value and the reference pixel value, and the expected fused pixel value is determined from the reference pixel value, the first argument, the target pixel value, the second argument, and the third argument.
In a possible implementation manner, the iterative adjustment term is determined according to a first adjustment parameter, a second adjustment parameter, a third adjustment parameter, and preset first, second, and third coefficients, where the first adjustment parameter is determined according to the first argument, the second adjustment parameter is determined according to the first argument, the current fusion number, and the second argument, and the third adjustment parameter is determined according to the first argument and the second argument.
In one possible implementation, the parameter calculating subunit includes: a position judgment subunit, configured to judge whether the target pixel position is a motion position according to the value of the first argument corresponding to the minimum value of the similarity function; a noise reduction subunit, configured to perform noise reduction on a pixel value corresponding to the motion position to obtain a noise reduction result when the target pixel position is the motion position; and a noise reduction parameter determining subunit, configured to determine a first parameter, a second parameter, and a third parameter according to the value of the first argument, the value of the second argument, and the value of the third argument corresponding to the noise reduction result.
In one possible implementation, the noise reduction subunit includes: an argument adjusting subunit, configured to, when the target pixel position is a motion position, perform noise reduction on the pixel value corresponding to the motion position by adjusting the values of the first argument, the second argument, and the third argument, so as to obtain a noise reduction result.
In one possible implementation, the argument adjusting subunit includes: a decreasing subunit, configured to decrease the value of the first argument; and an increasing subunit, configured to input the value of the second argument and the decreased value of the first argument into the similarity function, so as to obtain an increased value of the third argument.
In one possible implementation, the decreasing subunit includes: a variance calculating subunit, configured to determine a variance according to the reference pixel value of each pixel position in the pixel region; and an argument reducing subunit, configured to reduce the value of the first argument according to the variance, wherein the magnitude of the variance is inversely related to the reduction amplitude of the first argument.
In a possible implementation manner, the target pixel position is the middle position of the pixel region in which it is located.
In some embodiments, functions of or modules included in the apparatus provided in the embodiments of the present disclosure may be used to execute the method described in the above method embodiments, and specific implementation thereof may refer to the description of the above method embodiments, and for brevity, will not be described again here.
Embodiments of the present disclosure also provide a computer-readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the above-mentioned method. The computer readable storage medium may be a volatile or non-volatile computer readable storage medium.
An embodiment of the present disclosure further provides an electronic device, including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to invoke the memory-stored instructions to perform the above-described method.
The disclosed embodiments also provide a computer program product comprising computer-readable code, or a non-transitory computer-readable storage medium carrying computer-readable code, which, when run on a processor of an electronic device, causes the processor to perform the above method.
The electronic device may be provided as a terminal, server, or other form of device.
Fig. 7 shows a schematic diagram of an electronic device 800 of an embodiment of the disclosure. For example, the electronic device 800 may be a User Equipment (UE), a mobile device, a User terminal, a cellular phone, a cordless phone, a Personal Digital Assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, or other terminal device.
Referring to fig. 7, electronic device 800 may include one or more of the following components: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
The processing component 802 generally controls overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the electronic device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 806 provides power to the various components of the electronic device 800. The power components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 800.
The multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the electronic device 800 is in an operation mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing various aspects of state assessment for the electronic device 800. For example, the sensor assembly 814 may detect the open/closed state of the electronic device 800 and the relative positioning of components, such as the display and keypad of the electronic device 800. The sensor assembly 814 may also detect a change in the position of the electronic device 800 or of one of its components, the presence or absence of user contact with the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and a change in its temperature. The sensor assembly 814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact, and may also include a light sensor, such as a complementary metal-oxide-semiconductor (CMOS) or charge-coupled device (CCD) image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices. The electronic device 800 may access a wireless network based on a communication standard, such as a wireless network (WiFi), a second generation mobile communication technology (2G) or a third generation mobile communication technology (3G), or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium, such as the memory 804, is also provided that includes computer program instructions executable by the processor 820 of the electronic device 800 to perform the above-described methods.
Fig. 8 shows a schematic diagram of an electronic device 1900 of an embodiment of the disclosure. For example, the electronic device 1900 may be provided as a server. Referring to fig. 8, electronic device 1900 includes a processing component 1922 further including one or more processors and memory resources, represented by memory 1932, for storing instructions, e.g., applications, executable by processing component 1922. The application programs stored in memory 1932 may include one or more modules that each correspond to a set of instructions. Further, the processing component 1922 is configured to execute instructions to perform the above-described method.
Electronic device 1900 may also include a power supply component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output (I/O) interface 1958. The electronic device 1900 may operate based on an operating system stored in the memory 1932, such as the Microsoft server operating system (Windows Server™), the Apple graphical-user-interface operating system (Mac OS X™), the multi-user, multi-process operating system (Unix™), the free and open-source Unix-like operating system (Linux™), the open-source Unix-like operating system (FreeBSD™), or the like.
In an exemplary embodiment, a non-transitory computer readable storage medium, such as the memory 1932, is also provided that includes computer program instructions executable by the processing component 1922 of the electronic device 1900 to perform the above-described methods.
The present disclosure may be systems, methods, and/or computer program products. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement various aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present disclosure may be assembler instructions, instruction set architecture (ISA) instructions, machine instructions, machine-related instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk or C++ and conventional procedural programming languages such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. Where a remote computer is involved, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry such as a programmable logic circuit, a field programmable gate array (FPGA), or a programmable logic array (PLA) may be personalized with state information of the computer-readable program instructions and may execute the computer-readable program instructions, thereby implementing aspects of the present disclosure.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The computer program product may be embodied in hardware, software or a combination thereof. In an alternative embodiment, the computer program product is embodied in a computer storage medium, and in another alternative embodiment, the computer program product is embodied in a Software product, such as a Software Development Kit (SDK), or the like.
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen in order to best explain the principles of the embodiments, the practical application, or improvements made to the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (18)

1. An image processing method, characterized in that the image processing method comprises:
determining N image frames to be fused in an image sequence to be processed;
determining a reference frame and the remaining N-1 target frames from the N image frames to be fused;
for a 1 st target frame, fusing the target frame and the reference frame to obtain a 1 st fused frame;
aiming at the ith target frame, fusing the ith target frame and the (i-1) th fused frame to obtain the ith fused frame, wherein the value of N is a positive integer which is greater than or equal to 3, and the value of i is greater than or equal to 2 and less than or equal to N;
the size of the N image frames to be fused is the same, each image fusion process is realized by respectively fusing each pixel position based on a fusion parameter set corresponding to each pixel position obtained by the dynamic calculation, the fusion parameter set comprises a first parameter, a second parameter and a third parameter, the first parameter is used for representing the weight of the value of the pixel position in the reference frame or the fusion frame, the second parameter is used for representing the weight of the value of the pixel position in the target frame, and the third parameter is used for representing the correction parameter of the pixel position;
the process of determining the fusion parameter set for each pixel position comprises:
determining a target pixel position in at least one pixel position and a pixel area where the target pixel position is located;
determining a similarity function according to a reference pixel value, a target pixel value and a current fusion count of each pixel position of the pixel region, wherein the similarity function represents the similarity of an expected fusion pixel value and a reference pixel value of the target pixel position, the similarity function comprises a first argument, a second argument and a third argument, the reference pixel value is a pixel value in the reference frame or the fusion frame, and the target pixel value is a pixel value in the target frame;
determining the value of the first argument, the value of the second argument, and the value of the third argument when the similarity function is at a minimum position;
determining a first parameter, a second parameter and a third parameter according to the value of the first argument, the value of the second argument and the value of the third argument;
and determining a fusion parameter set of the target pixel position according to the first parameter, the second parameter and the third parameter.
2. The image processing method according to claim 1, wherein the fusing the target frame and the reference frame for the 1 st target frame, and obtaining the 1 st fused frame comprises:
determining a fusion parameter set of the 1 st target frame and the reference frame at the 1 st fusion;
and for the 1 st target frame, fusing the 1 st target frame and the reference frame by using a fusion parameter set fused for the 1 st time to obtain a 1 st fusion frame.
3. The method according to claim 2, wherein said determining the fusion parameter set of the 1 st target frame and the reference frame at the 1 st fusion comprises:
determining at least one pixel position included in the reference frame and the 1 st target frame at the time of the 1 st fusion;
and respectively determining a fusion parameter set of each pixel position.
4. The image processing method according to claim 3, wherein the fusing the 1 st target frame and the reference frame using the fused parameter set for the 1 st time for the 1 st target frame, and obtaining the 1 st fused frame comprises:
respectively fusing the reference pixel value of the reference frame at the pixel position and the target pixel value of the 1 st target frame at the pixel position according to the fusion parameter set of each pixel position during the 1 st fusion to obtain a fusion pixel value;
determining a 1 st fused frame based on the fused pixel value of at least one of the pixel locations.
5. The image processing method according to claim 1, wherein the fusing the ith target frame and the (i-1) th fused frame for the ith target frame to obtain the ith fused frame comprises:
determining a fusion parameter set of the ith target frame and the (i-1) th fusion frame during the ith fusion;
and aiming at the ith target frame, fusing the ith target frame and the (i-1) th fusion frame by using the fusion parameter set fused for the ith time to obtain the ith fusion frame.
6. The image processing method according to claim 5, wherein the determining the fusion parameter set of the ith target frame and the (i-1)th fused frame in the ith fusion comprises:
determining at least one pixel position included in the (i-1)th fused frame and the ith target frame during the ith fusion;
and respectively determining a fusion parameter set of each pixel position.
7. The image processing method according to claim 6, wherein the fusing, for the ith target frame, the ith target frame and the (i-1)th fused frame using the fusion parameter set of the ith fusion to obtain the ith fused frame comprises:
respectively fusing a reference pixel value of the (i-1)th fused frame at the pixel position and a target pixel value of the ith target frame at the pixel position according to the fusion parameter set of each pixel position during the ith fusion to obtain a fused pixel value;
and determining the ith fusion frame according to the fusion pixel value of the at least one pixel position.
8. The method according to claim 1, wherein determining the similarity function according to the reference pixel value, the target pixel value and the current fusion count of each of the pixel positions of the pixel region comprises:
for each pixel position in the pixel region, determining a pixel difference term and an iteration adjusting term according to the reference pixel value, the target pixel value and the current fusion count of the pixel position, wherein the pixel difference term represents the difference between an expected fusion pixel value and the reference pixel value, and the iteration adjusting term is used for adjusting the values of the first argument, the second argument and the third argument according to the current fusion count;
and determining a similarity function according to the sum of the pixel difference item and the iteration adjusting item of each pixel position in the pixel region.
9. The image processing method according to claim 8, wherein the pixel difference term is determined from a difference of the expected fused pixel value and the reference pixel value, the expected fused pixel value being determined from the reference pixel value, the first argument, the target pixel value, the second argument, and the third argument.
10. The image processing method according to claim 8, wherein the iterative adjustment term is determined based on a first adjustment parameter, a second adjustment parameter, a third adjustment parameter, and a preset first coefficient, second coefficient, and third coefficient, the first adjustment parameter being determined based on the first argument, the second adjustment parameter being determined based on the first argument, the current fusion count, and the second argument, and the third adjustment parameter being determined based on the first argument and the second argument.
11. The image processing method according to claim 1, wherein the determining a first parameter, a second parameter, and a third parameter from the value of the first argument, the value of the second argument, and the value of the third argument comprises:
judging whether the target pixel position is a motion position or not according to the value of the first argument corresponding to the minimum value of the similarity function;
under the condition that the target pixel position is a motion position, denoising a pixel value corresponding to the motion position to obtain a denoising result;
and determining a first parameter, a second parameter and a third parameter according to the value of the first argument, the value of the second argument and the value of the third argument corresponding to the denoising result.
12. The image processing method according to claim 11, wherein in a case that the target pixel position is a motion position, performing noise reduction on a pixel value corresponding to the motion position, and obtaining a noise reduction result comprises:
and under the condition that the target pixel position is a motion position, denoising the pixel value corresponding to the motion position by adjusting the value of the first argument, the value of the second argument and the value of the third argument, and obtaining a denoising result.
13. The image processing method according to claim 12, wherein the adjustment process of the values of the first argument, the second argument, and the third argument comprises:
decreasing the value of the first argument;
and inputting the value of the second argument and the reduced value of the first argument into the similarity function to obtain the increased value of the third argument.
14. The image processing method according to claim 13, wherein the reducing the value of the first argument comprises:
determining a variance from a reference pixel value for each of the pixel locations within the pixel region;
reducing the value of the first argument in accordance with the variance, wherein the magnitude of the variance is inversely related to the magnitude of the reduction of the first argument.
15. The image processing method according to claim 1, wherein the target pixel position is the middle position of the pixel region in which it is located.
16. An image processing apparatus characterized by comprising:
the image determining module is used for determining N image frames to be fused in the image sequence to be processed;
the frame determining module is used for determining a reference frame and the remaining N-1 target frames from the N image frames to be fused;
a first fusion module, configured to fuse the target frame and the reference frame for a 1 st target frame, to obtain a 1 st fusion frame;
a second fusion module, configured to fuse, for an ith target frame, the ith target frame and an (i-1) th fusion frame to obtain an ith fusion frame, where a value of N is a positive integer greater than or equal to 3, and a value of i is greater than or equal to 2 and less than or equal to N;
the size of the N image frames to be fused is the same, each image fusion process is realized by respectively fusing each pixel position based on a fusion parameter set corresponding to each pixel position obtained by the dynamic calculation, the fusion parameter set comprises a first parameter, a second parameter and a third parameter, the first parameter is used for representing the weight of the value of the pixel position in the reference frame or the fusion frame, the second parameter is used for representing the weight of the value of the pixel position in the target frame, and the third parameter is used for representing the correction parameter of the pixel position;
the process of determining the fusion parameter set for each pixel position comprises:
determining a target pixel position in at least one pixel position and a pixel area where the target pixel position is located;
determining a similarity function according to a reference pixel value, a target pixel value and a current fusion count of each pixel position of the pixel region, wherein the similarity function represents the similarity of an expected fusion pixel value and a reference pixel value of the target pixel position, the similarity function comprises a first argument, a second argument and a third argument, the reference pixel value is a pixel value in the reference frame or the fusion frame, and the target pixel value is a pixel value in the target frame;
determining the value of the first argument, the value of the second argument, and the value of the third argument when the similarity function is at a minimum position;
determining a first parameter, a second parameter and a third parameter according to the value of the first argument, the value of the second argument and the value of the third argument;
and determining a fusion parameter set of the target pixel position according to the first parameter, the second parameter and the third parameter.
17. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to invoke the memory-stored instructions to perform the image processing method of any of claims 1 to 15.
18. A computer readable storage medium having computer program instructions stored thereon, which when executed by a processor implement the image processing method of any one of claims 1 to 15.
CN202111251407.8A 2021-10-27 2021-10-27 Image processing method and device, electronic equipment and storage medium Active CN113689362B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111251407.8A CN113689362B (en) 2021-10-27 2021-10-27 Image processing method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111251407.8A CN113689362B (en) 2021-10-27 2021-10-27 Image processing method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113689362A CN113689362A (en) 2021-11-23
CN113689362B true CN113689362B (en) 2022-02-22

Family

ID=78588271

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111251407.8A Active CN113689362B (en) 2021-10-27 2021-10-27 Image processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113689362B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115361533B (en) * 2022-08-19 2023-04-18 深圳市汇顶科技股份有限公司 Image data processing method and electronic device

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109978808B (en) * 2019-04-25 2022-02-01 北京迈格威科技有限公司 Method and device for image fusion and electronic equipment
CN110428391B (en) * 2019-08-02 2022-05-03 格兰菲智能科技有限公司 Image fusion method and device for removing ghost artifacts
CN113284077A (en) * 2020-02-19 2021-08-20 华为技术有限公司 Image processing method, image processing device, communication equipment and readable storage medium
CN111583151B (en) * 2020-05-09 2023-05-12 浙江大华技术股份有限公司 Video noise reduction method and device, and computer readable storage medium
CN111784734A (en) * 2020-07-17 2020-10-16 Oppo广东移动通信有限公司 Image processing method and device, storage medium and electronic equipment
CN112949662B (en) * 2021-05-13 2021-11-16 北京市商汤科技开发有限公司 Image processing method and device, computer equipment and storage medium
CN113362260A (en) * 2021-07-21 2021-09-07 Oppo广东移动通信有限公司 Image optimization method and device, storage medium and electronic equipment
CN113674189A (en) * 2021-08-17 2021-11-19 Oppo广东移动通信有限公司 Image processing method, image processing device, electronic equipment and computer readable storage medium

Also Published As

Publication number Publication date
CN113689362A (en) 2021-11-23

Similar Documents

Publication Publication Date Title
CN109800737B (en) Face recognition method and device, electronic equipment and storage medium
CN107692997B (en) Heart rate detection method and device
CN110557547B (en) Lens position adjusting method and device
CN111340731B (en) Image processing method and device, electronic equipment and storage medium
CN113538519A (en) Target tracking method and device, electronic equipment and storage medium
CN111553864B (en) Image restoration method and device, electronic equipment and storage medium
CN109840939B (en) Three-dimensional reconstruction method, three-dimensional reconstruction device, electronic equipment and storage medium
CN109377446B (en) Face image processing method and device, electronic equipment and storage medium
CN109934240B (en) Feature updating method and device, electronic equipment and storage medium
CN111369482B (en) Image processing method and device, electronic equipment and storage medium
CN113139947A (en) Image processing method and device, electronic equipment and storage medium
CN113689361B (en) Image processing method and device, electronic equipment and storage medium
CN111583142A (en) Image noise reduction method and device, electronic equipment and storage medium
CN113689362B (en) Image processing method and device, electronic equipment and storage medium
CN107730443B (en) Image processing method and device and user equipment
CN110826463B (en) Face recognition method and device, electronic equipment and storage medium
CN112102300A (en) Counting method and device, electronic equipment and storage medium
CN111861942A (en) Noise reduction method and device, electronic equipment and storage medium
CN110121115B (en) Method and device for determining wonderful video clip
CN112651880B (en) Video data processing method and device, electronic equipment and storage medium
CN115457024A (en) Method and device for processing cryoelectron microscope image, electronic equipment and storage medium
US11792518B2 (en) Method and apparatus for processing image
CN114549983A (en) Computer vision model training method and device, electronic equipment and storage medium
CN114445298A (en) Image processing method and device, electronic equipment and storage medium
CN110896492B (en) Image processing method, device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant