CN118365554B - Video noise reduction method, device, electronic equipment and computer readable storage medium - Google Patents
- Publication number
- CN118365554B (application CN202410789407.0A)
- Authority
- CN
- China
- Prior art keywords
- noise reduction
- frame image
- image
- optical flow
- current frame
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Landscapes
- Picture Signal Circuits (AREA)
Abstract
The embodiment of the application discloses a video noise reduction method and apparatus, an electronic device, and a computer readable storage medium. The method comprises the following steps: acquiring a current frame image and a previous frame image of a target video, where the previous frame image comprises both the image output in the previous cycle and the previous frame image that has not undergone noise reduction; acquiring fusion factors corresponding to the current frame image and the previous frame image of the target video, where the fusion factors are related to a first adaptive noise reduction factor and a second adaptive noise reduction factor; performing motion estimation on the current frame image of the target video and the previous frame image that has not undergone noise reduction to obtain an optical flow result, where the optical flow result is used to warp the image output in the previous cycle into an image to be fused that is aligned with the current frame image; and fusing the current frame image with the image to be fused to generate the target noise-reduced image corresponding to the target video in the current cycle. The application can reduce the computational cost of noise reduction and improve the noise reduction effect of the video.
Description
Technical Field
The present application relates to the field of video processing technologies, and in particular, to a method and apparatus for video noise reduction, an electronic device, and a computer readable storage medium.
Background
With the development of image display devices, the demand for high-quality, high-definition video has grown steadily. In practice, however, video images are often corrupted by noise during digitization and transmission, which degrades their quality and definition. Effectively suppressing noise in video images is therefore important.
At present, the mainstream approach to video noise reduction proceeds as follows: motion estimation is performed between the current frame and each of several frames before and after it, producing images aligned with the current frame; the fusion weight of each frame is then computed; and finally a noise-reduced video frame is generated by multi-frame fusion. This approach requires many video frames to achieve a good noise reduction effect, and the additional frames greatly increase the computational complexity, so that mobile devices cannot process in real time. Moreover, it cannot preserve fine texture detail while reducing noise, which limits the quality of the video noise reduction.
Disclosure of Invention
The embodiment of the application provides a video noise reduction method and apparatus, an electronic device, and a computer readable storage medium, which can reduce the computational cost of noise reduction and improve the noise reduction effect of a video.
In a first aspect, an embodiment of the present application provides a video noise reduction method, which may include the following steps:
Acquiring a current frame image and a previous frame image of a target video, where the previous frame image comprises both the image output in the previous cycle and the previous frame image that has not undergone noise reduction;
acquiring fusion factors corresponding to the current frame image and the previous frame image of the target video, where the fusion factors are related to a first adaptive noise reduction factor and a second adaptive noise reduction factor, the first adaptive noise reduction factor being used to indicate the position offset between the current frame image and the previous frame image, and the second adaptive noise reduction factor being used to indicate the noise level of the current frame image;
performing motion estimation on the current frame image of the target video and the previous frame image that has not undergone noise reduction to obtain an optical flow result, where the optical flow result is used to warp the image output in the previous cycle into an image to be fused that is aligned with the current frame image;
and fusing the current frame image with the image to be fused to generate the target noise-reduced image corresponding to the target video in the current cycle.
When the embodiment of the application is implemented, noise reduction of a given frame of the target video takes full account of the image output in the previous cycle and of fusion factors related to the first and second adaptive noise reduction factors when generating the target noise-reduced image for the current cycle. This reduces the computational cost of noise reduction, preserves the original texture of the image, and improves the noise reduction effect of the video.
In one possible implementation, the method further includes:
Processing the optical flow result to obtain the position offset between the current frame image and the previous frame image;
and acquiring the first adaptive noise reduction factor according to the position offset.
In this way, the first adaptive noise reduction factor can be obtained from the position offset.
In one possible implementation, the optical flow result includes an optical flow in an x direction and an optical flow in a y direction, the x direction being orthogonal to the y direction; and processing the optical flow result to obtain the position offset between the current frame image and the previous frame image includes:
taking the mean of the optical flow in the x direction and the mean of the optical flow in the y direction and then taking their absolute values, to obtain a first optical flow value and a second optical flow value;
and taking the maximum of the first optical flow value and the second optical flow value, and determining that maximum as the position offset.
In one possible implementation, obtaining the first adaptive noise reduction factor according to the position offset includes:
when determining the first adaptive noise reduction factor, calculating it according to the following formula:
where Y1 represents the first adaptive noise reduction factor and D represents the position offset.
In one possible implementation, the method further includes:
Acquiring a first pixel value difference between the current frame image and the previous frame image, where the first pixel value difference is the pixel difference between a first pixel point and its corresponding pixel point;
acquiring the noise level of the first pixel point based on the first pixel value difference;
and obtaining the second adaptive noise reduction factor according to the noise level of the first pixel point.
In this way, the second adaptive noise reduction factor can be obtained from the noise level of the pixel point.
In one possible implementation, obtaining the second adaptive noise reduction factor according to the noise level of the first pixel point includes:
when determining the second adaptive noise reduction factor, calculating it according to the following formula:
where Y2 represents the second adaptive noise reduction factor and diff represents the average of the pixel differences.
In a second aspect, the present application provides a video noise reduction method, which may include the following step: in response to a user operation, displaying a video frame on a display screen of an electronic device, where the video frame includes a target noise-reduced image obtained by performing video noise reduction, according to the video noise reduction method provided in any implementation of the first aspect, on a target video acquired by a camera of the electronic device.
In a possible implementation, the user operation is an operation of turning on a video noise reduction mode, the video noise reduction mode being used to instruct noise reduction on video frames acquired by the electronic device; or the user operation is an operation of starting video shooting.
In a third aspect, the present application provides a video noise reduction apparatus comprising an image obtaining unit, a fusion factor obtaining unit, an image processing unit, and a fusion unit, the video noise reduction apparatus being configured to perform the video noise reduction method of any implementation of the first or second aspect.
In a fourth aspect, the present application also provides a video noise reduction device, including a memory, a processor, and a program stored in the memory and executable on the processor, wherein the video noise reduction method according to any one of the first or second aspects is performed when the processor executes the program.
In a fifth aspect, the present application also provides a computer readable storage medium having a computer program stored therein, which when executed by a processor, implements the video denoising method of any one of the first or second aspects above.
In a sixth aspect, the application also provides a computer program product comprising computer programs/instructions which when executed by a processor implement the steps of the video noise reduction method according to any of the first or second aspects above.
Drawings
In order to more clearly illustrate the technical solution of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly described.
Fig. 1 is a schematic diagram of an application scenario of a video denoising method according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 3 is a schematic flow chart of a video denoising method according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a video noise reduction device according to an embodiment of the present application;
Fig. 5 is a schematic structural diagram of another electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the accompanying drawings in the embodiments of the present application.
Fig. 1 is a schematic diagram of an application scenario of a video denoising method according to an embodiment of the present application.
As shown in fig. 1, the video denoising method according to the embodiment of the present application may be applied to an electronic device. When an object is recorded by a camera of an electronic device, due to limitations of hardware, environment and the like, the acquired video may have noise problems such as noise points, plaques, ghosts and the like which affect visual effects. At this time, noise reduction processing can be performed on the original video which is acquired by the camera in the electronic device and is not subjected to noise reduction, so as to obtain a noise-reduced video.
The electronic device may be mobile or fixed. For example, the electronic device may be a mobile phone, a camera, a video camera, a vehicle, a tablet personal computer (TPC), a media player, a smart television, a notebook computer, a personal digital assistant (PDA), a personal computer (PC), a smart watch, an augmented reality (AR)/virtual reality (VR) device, a wearable device (WD), a game console, and the like; the embodiments of the present application do not limit the specific type of the electronic device.
The implementation process of the noise reduction may include: acquiring a current frame image and a previous frame image of a target video, where the previous frame image comprises both the image output in the previous cycle and the previous frame image that has not undergone noise reduction; acquiring fusion factors corresponding to the current frame image and the previous frame image of the target video, where the fusion factors are related to a first adaptive noise reduction factor and a second adaptive noise reduction factor, the first adaptive noise reduction factor indicating the position offset between the current frame image and the previous frame image, and the second adaptive noise reduction factor indicating the noise level of the current frame image; performing motion estimation on the current frame image of the target video and the previous frame image that has not undergone noise reduction to obtain an optical flow result, where the optical flow result is used to warp the image output in the previous cycle into an image to be fused that is aligned with the current frame image; and fusing the current frame image with the image to be fused to generate the target noise-reduced image corresponding to the target video in the current cycle.
In one possible implementation, the video frames after the multi-frame fusion noise reduction may be output after subsequent image processing. The subsequent image processing may include performing image processing such as white balance, color correction, tone mapping, etc. on the video frame after the multi-frame fusion noise reduction.
In one possible implementation, outputting the denoised image (or video) includes displaying on a screen of the electronic device and/or saving to an album of the electronic device.
An example electronic device 100 for implementing the video noise reduction method and apparatus according to embodiments of the present application is described with reference to fig. 2. As shown in fig. 2, the electronic device 100 includes one or more processors 102, one or more storage devices 104, an input device 106, and an output device 108. The electronic device 100 may also include a data acquisition device 110 and/or an image acquisition device 112, these components being interconnected by a bus system 114 and/or other forms of connection mechanisms (not shown). It should be noted that the components and structures of the electronic device 100 shown in fig. 2 are exemplary only and not limiting; the electronic device may have other components and structures as desired.
The processor 102 may be a Central Processing Unit (CPU) or other form of processing unit having data processing and/or instruction execution capabilities, and may control other components in the electronic device 100 to perform desired functions.
The storage device 104 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random access memory (RAM) and/or cache memory. The non-volatile memory may include, for example, read-only memory (ROM), hard disk, flash memory, and the like. One or more computer program instructions may be stored on the computer-readable storage medium and may be executed by the processor 102 to implement the client functions and/or other desired functions in the embodiments of the present application described below. Various applications and various data, such as data used and/or generated by those applications, may also be stored in the computer-readable storage medium.
The input device 106 may be a device used by a user to input instructions and may include one or more of a keyboard, mouse, microphone, touch screen, and the like.
The output device 108 may output various information (e.g., images and/or sounds) to the outside (e.g., a user), and may include one or more of a display, a speaker, and the like.
In the case where the electronic device 100 is used to implement the video denoising method and apparatus according to the embodiment of the present application, the electronic device 100 may include the data acquisition apparatus 110. The data acquisition device 110 may acquire a video stream, then acquire video frames in the video stream, and store the acquired images in the storage device 104 for use by other components. For example, the data acquisition device 110 may include one or more of a wired or wireless network interface, a Universal Serial Bus (USB) interface, an optical disk drive, and the like.
In case the electronic device 100 is used to implement the image processing method and apparatus according to an embodiment of the present application, the electronic device 100 may include the image capturing apparatus 112. The image capture device 112 may capture images and store the captured images in the storage device 104 for use by other components. The image capturing device 112 may be a camera, it being understood that the image capturing device 112 is merely an example and the electronic apparatus 100 may not include the image capturing device 112. In this case, the image to be processed may be acquired by other devices having image acquisition capability, and the acquired image may be transmitted to the electronic apparatus 100.
Exemplary electronic devices for implementing the video denoising method and apparatus according to embodiments of the present application may be implemented on devices such as a personal computer or a remote server.
The following describes in detail a video denoising method according to an embodiment of the present application with reference to fig. 3.
Fig. 3 is a flowchart of a video denoising method according to an embodiment of the present application, which may be performed by the electronic device 100, and may include, but is not limited to, the following steps:
step S301, acquiring a current frame image and a previous frame image of a video; wherein the previous frame image includes the image output in the last cycle and the previous frame image which has not undergone the noise reduction process.
In one embodiment, the current frame image and the previous frame image are buffered in time sequence.
Step S302, acquiring fusion factors corresponding to the current frame image and the previous frame image of the target video, where the fusion factors are related to a first adaptive noise reduction factor and a second adaptive noise reduction factor, the first adaptive noise reduction factor being used to indicate the position offset between the current frame image and the previous frame image, and the second adaptive noise reduction factor being used to indicate the noise level of the current frame image.
In one embodiment, the implementation process of obtaining the first adaptive noise reduction factor may include: processing the optical flow result to obtain the position offset between the current frame image and the previous frame image; and acquiring the first adaptive noise reduction factor according to that position offset. Specifically, the position offset allows the noise reduction strength to be adjusted adaptively: for example, the strength is reduced when the video is at a scene transition, and increased when it is not. In this way, artifacts caused by noise reduction at scene transitions can be avoided.
In one embodiment, the optical flow result includes an optical flow in an x direction and an optical flow in a y direction, the x direction being orthogonal to the y direction. Processing the optical flow result to obtain the position offset between the current frame image and the previous frame image (i.e. the two adjacent frames) includes: taking the mean of the optical flow in the x direction and the mean of the optical flow in the y direction and then taking their absolute values, to obtain a first optical flow value and a second optical flow value; for example, the optical flow in the x direction may be denoted flow(x) and the optical flow in the y direction flow(y). The maximum of the first optical flow value and the second optical flow value is then taken and determined as the position offset. In practical application, the position offset may be obtained by the following formula:
D = max(abs(mean(flow(x))), abs(mean(flow(y))))    Formula (1)
where D represents the position offset, max() is the maximum function, abs() is the absolute value function, and mean() is the averaging function.
Further, the first adaptive noise reduction factor may be obtained by the following formula:
formula (2)
Wherein Y1 represents a first adaptive noise reduction factor and D represents a positional offset.
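As a non-authoritative illustration, the position offset D of formula (1) could be computed from dense optical flow arrays as follows (formula (2), which maps D to Y1, is not reproduced in this text, so only D is shown; the function name and NumPy-array inputs are assumptions):

```python
import numpy as np

def position_offset(flow_x: np.ndarray, flow_y: np.ndarray) -> float:
    """Formula (1): D = max(abs(mean(flow(x))), abs(mean(flow(y)))).

    A large D indicates large global motion (e.g. a scene transition),
    which the method uses to weaken the temporal noise reduction.
    """
    return max(abs(float(np.mean(flow_x))), abs(float(np.mean(flow_y))))
```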
In one embodiment, the implementation of obtaining the optical flow result may include: reducing the width and height of the image to one quarter of the original size and constructing an image pyramid; partitioning each pyramid level into a grid of 8x8 blocks; performing an inverse search on each grid cell to solve for its motion vector; densifying the sparse optical flow and applying variational optimization; passing the optical flow result of the current level down as the initial optical flow of the next level; repeating this process until the last pyramid level has been processed; and finally upscaling the optical flow result to the original image size and outputting the final optical flow result.
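The pyramid-and-grid scheme above resembles dense inverse search (DIS) optical flow. The following is a much-simplified, non-authoritative sketch of the per-grid motion search only: exhaustive block matching over 8x8 cells, standing in for the inverse search; the densification, variational optimization, and pyramid loop are omitted, and all names here are illustrative, not the patent's.

```python
import numpy as np

def grid_motion(prev: np.ndarray, cur: np.ndarray,
                block: int = 8, radius: int = 2) -> np.ndarray:
    """Estimate one (u, v) motion vector per 8x8 grid cell of `cur` by
    exhaustively searching `prev` within +/-radius pixels, using the sum
    of absolute differences as the matching cost. Returns an array of
    shape (grid_h, grid_w, 2) of (u, v) per cell, where prev[y+v, x+u]
    is matched to cur[y, x]."""
    h, w = cur.shape
    gh, gw = h // block, w // block
    flow = np.zeros((gh, gw, 2))
    for gy in range(gh):
        for gx in range(gw):
            y0, x0 = gy * block, gx * block
            patch = cur[y0:y0 + block, x0:x0 + block]
            best_cost, best_uv = np.inf, (0, 0)
            for v in range(-radius, radius + 1):
                for u in range(-radius, radius + 1):
                    yy, xx = y0 + v, x0 + u
                    if yy < 0 or xx < 0 or yy + block > h or xx + block > w:
                        continue  # candidate window falls outside the frame
                    cand = prev[yy:yy + block, xx:xx + block]
                    cost = np.abs(patch - cand).sum()
                    if cost < best_cost:
                        best_cost, best_uv = cost, (u, v)
            flow[gy, gx] = best_uv
    return flow
```

In the patent's pipeline, a search of this kind would run on each pyramid level, with each level's result upscaled and used to initialize the next.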
In one embodiment, the implementation process of obtaining the second adaptive noise reduction factor may include: acquiring a first pixel value difference between the current frame image and the previous frame image, where the first pixel value difference is the pixel difference between a first pixel point and its corresponding pixel point;
acquiring the noise level of the first pixel point based on the first pixel value difference;
and obtaining the second adaptive noise reduction factor according to the noise level of the first pixel point.
Specifically, in embodiments of the present application, the noise level is estimated by comparing pixel differences between adjacent frames, which exploits the spatio-temporal correlation in a video sequence to improve the accuracy of the noise estimate. The motion vector from each pixel of the current frame to the corresponding pixel of the previous frame is obtained by the motion estimation; it is then only necessary to traverse each pixel of the current frame, subtract the value of the previous-frame pixel at the coordinates offset by the motion vector, sum the differences, and finally average them to obtain the noise level of the current frame. For real-time processing, the input image may be a thumbnail whose width and height are 1/16 of the original image size. Assuming the pixel value range of the image is 0 to 255, let the pixel value of the current frame be p_ij and the pixel value of the previous frame be q_(i+u,j+v), where i is the row index, j is the column index, u and v are the motion vector components computed in the motion-estimation step, and the image has m rows and n columns. The sum of the corresponding pixel differences of the two frames is given by formula (3):
S = Σ_{i=1..m} Σ_{j=1..n} |p_ij − q_(i+u,j+v)|    Formula (3)
The average is then taken as shown in formula (4):
diff = S / (m·n)    Formula (4)
The result of the noise estimation ultimately affects the noise reduction strength; therefore, when determining the second adaptive noise reduction factor, it is calculated according to the following formula:
where Y2 represents the second adaptive noise reduction factor and diff represents the average of the pixel differences.
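A minimal sketch of the noise estimate of formulas (3) and (4) — the mean absolute difference between the current frame and the motion-compensated previous frame — assuming the motion compensation has already been applied (the function name and array inputs are illustrative, not from the patent):

```python
import numpy as np

def noise_level(cur: np.ndarray, prev_aligned: np.ndarray) -> float:
    """diff = (1 / (m * n)) * sum_ij |p_ij - q_(i+u, j+v)|, where
    `prev_aligned` already holds q at the motion-compensated coordinates."""
    s = np.abs(cur.astype(np.float64) - prev_aligned.astype(np.float64)).sum()
    return s / cur.size
```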
Step S303, performing motion estimation on the current frame image of the target video and the previous frame image that has not undergone noise reduction to obtain an optical flow result, where the optical flow result is used to warp the image output in the previous cycle into an image to be fused that is aligned with the current frame image.
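The alignment in step S303 can be sketched as a per-pixel warp of the previously output image by the optical flow. The sketch below uses nearest-neighbor sampling for brevity; a real implementation would typically interpolate bilinearly, and the function is an illustration under those assumptions, not the patent's implementation:

```python
import numpy as np

def warp_by_flow(img: np.ndarray, flow_x: np.ndarray,
                 flow_y: np.ndarray) -> np.ndarray:
    """Return `aligned` with aligned[y, x] = img[y + flow_y[y, x],
    x + flow_x[y, x]], rounded to the nearest pixel and clamped at the
    image border."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(np.rint(ys + flow_y), 0, h - 1).astype(int)
    src_x = np.clip(np.rint(xs + flow_x), 0, w - 1).astype(int)
    return img[src_y, src_x]
```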
Step S304, fusing the current frame image and the image to be fused to generate the target noise-reduced image corresponding to the target video in the current cycle.
The video noise reduction method provided by the application only needs to fuse two frames: one is the current frame, and the other is the previous frame that has already been denoised, i.e. the image computed in the previous cycle. Assuming the pixel values of the image are normalized to the range 0.0 to 1.0, let the pixel value of the current frame be p_ij and the pixel value of the previous noise-reduced frame be q_(i+u,j+v), where i is the row index, j is the column index, u and v are the motion vector components computed in the motion-estimation step, and the image has m rows and n columns. The pixel difference diff is computed according to formula (6); the beta value (i.e. the fusion factor) to be added to or subtracted from the pixel value of the current frame is then computed according to formula (7), where gamma is the constant 18.0, pow is an exponential (power) function, and the first and second adaptive noise reduction factors are the adaptive noise reduction strength values computed in the steps above; finally, the output pixel value is computed according to formula (8).
Formula (6)
Formula (7)
where Y1 represents the first adaptive noise reduction factor, Y2 represents the second adaptive noise reduction factor, gamma is the constant 18.0, and pow is an exponential (power) function.
Formula (8)
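Formulas (6) through (8) appear only as images in the patent and are not reproduced in this text, so the following is a hypothetical reconstruction of the mechanism the prose describes: a per-pixel difference, a fusion factor beta shaped by pow(., gamma) and scaled by the two adaptive factors, and an output that moves the current pixel toward the aligned previous output by at most beta. Every expression in this sketch is an assumption, not the patent's actual formula.

```python
import numpy as np

GAMMA = 18.0  # constant from the description

def fuse_frames(cur: np.ndarray, aligned_prev: np.ndarray,
                y1: float, y2: float) -> np.ndarray:
    """Hypothetical two-frame fusion on pixel values in [0.0, 1.0]:
    beta (the fusion factor) shrinks as the two frames disagree, and the
    current pixel is moved toward the aligned previous output by beta.
    The exact formulas (6)-(8) in the patent may differ."""
    diff = np.abs(cur - aligned_prev)             # assumed form of formula (6)
    beta = np.power(1.0 - diff, GAMMA) * y1 * y2  # assumed form of formula (7)
    step = np.minimum(diff, beta)                 # never overshoot the target
    return cur + np.sign(aligned_prev - cur) * step  # assumed formula (8)
```

The shaping constant GAMMA = 18.0 makes beta fall off very quickly with disagreement, so moving pixels (large diff) are left almost untouched while static pixels are strongly denoised.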
When the embodiment of the application is implemented, noise reduction of a given frame of the target video takes full account of the image output in the previous cycle and of fusion factors related to the first and second adaptive noise reduction factors when generating the target noise-reduced image for the current cycle. This reduces the computational cost of noise reduction, preserves the original texture of the image, and improves the noise reduction effect of the video.
In a possible implementation, the video noise reduction function enabled by the video noise reduction method provided in the first aspect may be integrated directly into the system and used by default when the user shoots a video.
In another possible implementation, when shooting a video, the user can choose whether to use the video noise reduction method provided by the application, giving the user more autonomy. For example, in the system settings or in the camera application, the user may turn on or off a switch indicating the video noise reduction function achievable by the method provided in the first aspect. In one possible implementation, in response to a shooting operation by the user, a video frame is displayed on the display screen of the mobile phone, where the video frame is obtained from a noise-reduced video frame, and the noise-reduced video frame is obtained by two-frame fusion noise reduction of the video frames acquired by the camera of the mobile phone.
The displayed video frame may be a standard full-color image obtained by subsequent image processing of the fused, noise-reduced video frame, where the subsequent image processing may include, but is not limited to, white balance, color correction, tone mapping, and the like.
In general, in the noise reduction method provided by the application, when there are multiple video frames, it is first determined whether the current video frame is the first frame. If it is not, motion estimation is performed between the current frame image and the previous frame that has not undergone noise reduction, the noise level of the current frame is then determined, and the current frame is fused with the image output in the previous cycle (an image that has already been denoised) to generate the output frame of the current cycle. This repeats in sequence until the last frame of the video has undergone the same noise reduction processing. In this way, the original texture of the image can be preserved while the computational cost of noise reduction is reduced, improving the noise reduction effect of the video.
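The recursive loop described above can be sketched as follows. The four callables are stand-ins for the motion estimation, fusion-factor computation, alignment, and fusion steps described in this document; their names and signatures are illustrative assumptions:

```python
def denoise_video(frames, estimate_flow, fusion_factor, warp, fuse):
    """Two-frame recursive temporal denoising: each cycle fuses the current
    raw frame with the previous cycle's *denoised* output, aligned via
    optical flow computed against the previous *raw* frame."""
    outputs = []
    prev_raw, prev_out = None, None
    for cur in frames:
        if prev_out is None:
            out = cur                                  # first frame: passed through
        else:
            flow = estimate_flow(prev_raw, cur)        # vs. raw previous frame
            beta = fusion_factor(cur, prev_raw, flow)  # adaptive fusion factor
            aligned = warp(prev_out, flow)             # align denoised output
            out = fuse(cur, aligned, beta)
        outputs.append(out)
        prev_raw, prev_out = cur, out                  # feed back for next cycle
    return outputs
```

The key design point, per the description, is that only two frames are ever held: the raw previous frame (for motion estimation and noise estimation) and the denoised previous output (for fusion).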
Based on the video noise reduction method described in the above embodiments, the embodiments of the present application further provide a video noise reduction device 40, which includes one or more functional units for performing the video noise reduction method described in the above embodiments. The functional units may be implemented in software, in hardware (such as a processor), or in a suitable combination of software, hardware, and/or firmware; for example, some functions may be implemented by an application processor executing a computer program, and others by a wireless communication module (such as a Bluetooth or Wi-Fi module), an MCU, an ISP, and so on.
In one embodiment, as shown in FIG. 4, the video noise reduction device 40 may include at least:
An image obtaining unit 400, configured to obtain a current frame image and a previous frame image of a target video; the previous frame image comprises an image output in the last cycle and a previous frame image which is not subjected to noise reduction treatment;
A fusion factor obtaining unit 402, configured to obtain fusion factors corresponding to a current frame image and a previous frame image of the target video, where the fusion factors relate to a first adaptive noise reduction factor and a second adaptive noise reduction factor, and the first adaptive noise reduction factor is used to indicate a position offset of the current frame image and the previous frame image; the second adaptive noise reduction factor is used for indicating the noise level of the current frame image;
an image processing unit 404, configured to perform motion estimation on the current frame image of the target video and the previous frame image that has not undergone noise reduction processing, to obtain an optical flow result, wherein the optical flow result is applied to the image output in the previous cycle to obtain an image to be fused that is aligned with the current frame image;
And a fusion unit 406, configured to fuse the current frame image and the image to be fused, and generate a target noise reduction image corresponding to the target video in the current cycle.
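A minimal sketch of the per-pixel fusion step performed by this unit. The patent text does not state how the first and second adaptive noise reduction factors combine into the fusion factor, so the product-and-clamp rule below is an assumption for illustration only.

```python
def fuse_pixel(cur, aligned, y1, y2):
    # Assumed combination of the two adaptive factors into one fusion weight.
    # A larger position offset (y1) or noise level (y2) pushes the output
    # toward the current frame; a small weight trusts the denoised history.
    k = min(max(y1 * y2, 0.0), 1.0)
    return k * cur + (1.0 - k) * aligned  # target noise-reduction pixel value
```

With y1 * y2 = 0.5 the output is the midpoint of the current and aligned pixels; with a weight at or above 1 the current frame passes through unchanged, avoiding ghosting on fast motion.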
In one possible implementation, the video noise reduction device 40 further includes:
A first adaptive noise reduction factor acquisition unit 408, configured to:
Processing the optical flow result to obtain the position offset of the current frame image and the previous frame image;
and acquiring the first adaptive noise reduction factor according to the position offset.
In one possible implementation, the optical-flow result includes an optical flow in an x-direction and an optical flow in a y-direction, the x-direction being orthogonal to the y-direction; the first adaptive noise reduction factor obtaining unit 408 is specifically configured to:
averaging the absolute values of the optical flow in the x direction and of the optical flow in the y direction, respectively, to obtain a first optical flow value and a second optical flow value;
and acquiring the maximum of the first optical flow value and the second optical flow value, and determining the maximum as the position offset.
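This offset computation can be sketched directly. The flat-list layout of the flow components is an assumption made for brevity; only the mean-absolute-value-then-maximum rule comes from the text.

```python
def position_offset(flow_x, flow_y):
    # Mean absolute flow per direction: the "first" and "second" optical
    # flow values of the text.
    d_x = sum(abs(v) for v in flow_x) / len(flow_x)
    d_y = sum(abs(v) for v in flow_y) / len(flow_y)
    # The position offset D is the larger of the two.
    return max(d_x, d_y)
```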
In one possible implementation manner, the first adaptive noise reduction factor obtaining unit 408 is specifically configured to:
calculate the first adaptive noise reduction factor according to the following formula:
where Y1 represents the first adaptive noise reduction factor and D represents the position offset.
In one possible implementation, the video noise reduction device 40 further includes:
a second adaptive noise reduction factor acquisition unit 4010 for:
Acquiring a first pixel value difference in the current frame image and the previous frame image, wherein the first pixel value difference is a pixel difference value between a first pixel point and a corresponding pixel point;
acquiring the noise level of the first pixel point based on the first pixel value difference;
and obtaining the second adaptive noise reduction factor according to the noise level of the first pixel point.
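The pixel-difference step can be sketched as follows. The mapping from the averaged difference to the second adaptive noise reduction factor is not reproduced here, since the patent's formula is not shown in this text; `mean_pixel_diff` only computes the `diff` quantity that that formula consumes.

```python
def mean_pixel_diff(cur, prev):
    # Absolute differences between each pixel of the current frame and the
    # corresponding pixel of the previous frame, averaged into "diff".
    diffs = [abs(c - p)
             for row_c, row_p in zip(cur, prev)
             for c, p in zip(row_c, row_p)]
    return sum(diffs) / len(diffs)
```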
In one possible implementation manner, the second adaptive noise reduction factor obtaining unit 4010 is specifically configured to:
calculate the second adaptive noise reduction factor according to the following formula:
where Y2 represents the second adaptive noise reduction factor and diff represents the average of the pixel difference values.
In order to facilitate better implementation of the foregoing aspects of the embodiments of the present application, the present application further provides an electronic device 500, which is described in detail below with reference to the accompanying drawings:
As shown in FIG. 5, which is a schematic structural diagram of the electronic device according to an embodiment of the present application, the electronic device 500 may include a processor 501, a memory 504, and a communication module 505, which may be connected to one another through a bus 506. The memory 504 may be a high-speed random access memory (RAM) or a non-volatile memory, such as at least one disk memory. Optionally, the memory 504 may also be at least one storage device located remotely from the processor 501. The memory 504 is used for storing application program code, which may include an operating system, a network communication module, a user interface module, and a data processing program; the communication module 505 is used for information exchange with external devices; and the processor 501 is configured to invoke the program code to perform the video noise reduction method proposed by the present application.
It should be understood that the video noise reduction device shown in the embodiment of the present application may be a server, for example, may be a cloud server, or may also be a chip configured in the cloud server; or the video noise reduction device in the embodiment of the application can be electronic equipment or a chip configured in the electronic equipment.
Embodiments of the present application also provide a computer storage medium having instructions stored therein which, when executed on a computer or processor, cause the computer or processor to perform one or more steps of the method described in any of the embodiments above. If the individual constituent modules of the apparatus described above are implemented in the form of software functional units and sold or used as separate products, they can be stored in the computer-readable storage medium. Based on such an understanding, the technical solution of the present application may be embodied, in whole or in part, in the form of a software product stored in the computer-readable storage medium.
The computer-readable storage medium may be an internal storage unit of the apparatus according to the foregoing embodiments, such as a hard disk or a memory. It may also be an external storage device of the apparatus, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card. Further, the computer-readable storage medium may include both an internal storage unit and an external storage device of the apparatus. It is used to store the computer program and any other programs and data required by the apparatus, and may also be used to temporarily store data that has been output or is to be output.
It should be understood that, in various embodiments of the present application, the sequence numbers of the foregoing processes do not mean the order of execution, and the order of execution of the processes should be determined by the functions and internal logic thereof, and should not constitute any limitation on the implementation process of the embodiments of the present application.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It should be noted that, the terms "executable program", "computer program", and "program" as used in the embodiments of the present application should be interpreted broadly to include, but are not limited to: instructions, instruction sets, code segments, subroutines, software modules, applications, software packages, threads, processes, functions, firmware, middleware, and the like. It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the apparatus and units described above may refer to corresponding procedures in the foregoing method embodiments, which are not described herein again.
In the several embodiments provided by the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
In addition, the functional units in the embodiments of the present application may be integrated into one processing module, or each unit may exist alone physically, or two or more units may be integrated into one unit. If the functions are implemented in the form of software functional units and sold or used as a stand-alone product, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing executable programs, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The foregoing is merely illustrative of the present application, and the present application is not limited thereto, and any person skilled in the art will readily recognize that variations or substitutions are within the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
Claims (7)
1. A method of video denoising, comprising:
Acquiring a current frame image and a previous frame image of a target video; the previous frame image comprises an image output in the last cycle and a previous frame image which is not subjected to noise reduction treatment;
acquiring fusion factors corresponding to a current frame image and a previous frame image of the target video, wherein the fusion factors are related to a first adaptive noise reduction factor and a second adaptive noise reduction factor, and the first adaptive noise reduction factor is used for indicating the position offset of the current frame image and the previous frame image; the second adaptive noise reduction factor is used for indicating the noise level of the current frame image;
performing motion estimation on the current frame image of the target video and the previous frame image that has not undergone noise reduction processing, to obtain an optical flow result, wherein the optical flow result is applied to the image output in the previous cycle to obtain an image to be fused that is aligned with the current frame image;
fusing the current frame image and the image to be fused to generate a target noise reduction image corresponding to the target video in the current cycle;
the method further comprises the steps of:
Processing the optical flow result to obtain the position offset of the current frame image and the previous frame image;
acquiring the first adaptive noise reduction factor according to the position offset;
The optical-flow results include optical flow in an x-direction and optical flow in a y-direction, the x-direction being orthogonal to the y-direction; the processing the optical flow result to obtain the position offset of the current frame image and the previous frame image includes:
averaging the absolute values of the optical flow in the x direction and of the optical flow in the y direction, respectively, to obtain a first optical flow value and a second optical flow value;
obtaining the maximum of the first optical flow value and the second optical flow value, and determining the maximum as the position offset;
the obtaining the first adaptive noise reduction factor according to the position offset includes:
calculating the first adaptive noise reduction factor according to the following formula:
wherein Y1 represents a first adaptive noise reduction factor and D represents a position offset;
the method further comprises the steps of:
Acquiring a first pixel value difference in the current frame image and the previous frame image, wherein the first pixel value difference is a pixel difference value between a first pixel point and a corresponding pixel point;
acquiring the noise level of the first pixel point based on the first pixel value difference;
obtaining the second adaptive noise reduction factor according to the noise level of the first pixel point comprises the following steps:
calculating the second adaptive noise reduction factor according to the following formula:
where Y2 represents the second adaptive noise reduction factor and diff represents the average of the pixel difference values.
2. A method of video denoising, comprising:
In response to a user operation, displaying a video frame in a display screen of an electronic device, wherein the video frame comprises a target noise reduction image after noise reduction, the target noise reduction image after noise reduction is obtained by noise reduction of a current frame and a previous frame image of a target video acquired by the electronic device, and noise reduction of the current frame and the previous frame image of the target video comprises:
Acquiring a current frame image and a previous frame image of a target video; the previous frame image comprises an image output in the last cycle and a previous frame image which is not subjected to noise reduction treatment;
acquiring fusion factors corresponding to a current frame image and a previous frame image of the target video, wherein the fusion factors are related to a first adaptive noise reduction factor and a second adaptive noise reduction factor, and the first adaptive noise reduction factor is used for indicating the position offset of the current frame image and the previous frame image; the second adaptive noise reduction factor is used for indicating the noise level of the current frame image;
performing motion estimation on the current frame image of the target video and the previous frame image that has not undergone noise reduction processing, to obtain an optical flow result, wherein the optical flow result is applied to the image output in the previous cycle to obtain an image to be fused that is aligned with the current frame image;
fusing the current frame image and the image to be fused to generate a target noise reduction image corresponding to the target video in the current cycle;
the method further comprises the steps of:
Processing the optical flow result to obtain the position offset of the current frame image and the previous frame image;
acquiring the first adaptive noise reduction factor according to the position offset;
The optical-flow results include optical flow in an x-direction and optical flow in a y-direction, the x-direction being orthogonal to the y-direction; the processing the optical flow result to obtain the position offset of the current frame image and the previous frame image includes:
averaging the absolute values of the optical flow in the x direction and of the optical flow in the y direction, respectively, to obtain a first optical flow value and a second optical flow value;
obtaining the maximum of the first optical flow value and the second optical flow value, and determining the maximum as the position offset;
the obtaining the first adaptive noise reduction factor according to the position offset includes:
calculating the first adaptive noise reduction factor according to the following formula:
wherein Y1 represents a first adaptive noise reduction factor and D represents a position offset;
the method further comprises the steps of:
Acquiring a first pixel value difference in the current frame image and the previous frame image, wherein the first pixel value difference is a pixel difference value between a first pixel point and a corresponding pixel point;
acquiring the noise level of the first pixel point based on the first pixel value difference;
obtaining the second adaptive noise reduction factor according to the noise level of the first pixel point comprises the following steps:
calculating the second adaptive noise reduction factor according to the following formula:
where Y2 represents the second adaptive noise reduction factor and diff represents the average of the pixel difference values.
3. The method of claim 2, wherein the user operation is an operation to turn on a video noise reduction mode for indicating to reduce noise of video frames acquired by the electronic device; or the user operation is an operation to start video shooting.
4. A video noise reduction device, comprising:
The image acquisition unit is used for acquiring a current frame image and a previous frame image of the target video; the previous frame image comprises an image output in the last cycle and a previous frame image which is not subjected to noise reduction treatment;
The fusion factor acquisition unit is used for acquiring fusion factors corresponding to a current frame image and a previous frame image of the target video, wherein the fusion factors are related to a first adaptive noise reduction factor and a second adaptive noise reduction factor, and the first adaptive noise reduction factor is used for indicating the position offset of the current frame image and the previous frame image; the second adaptive noise reduction factor is used for indicating the noise level of the current frame image;
The image processing unit is used for performing motion estimation on the current frame image of the target video and the previous frame image that has not undergone noise reduction processing, to obtain an optical flow result, wherein the optical flow result is applied to the image output in the previous cycle to obtain an image to be fused that is aligned with the current frame image;
The fusion unit is used for fusing the current frame image and the image to be fused to generate a target noise reduction image corresponding to the target video in the current cycle;
a first adaptive noise reduction factor acquisition unit configured to:
Processing the optical flow result to obtain the position offset of the current frame image and the previous frame image, and acquiring the first adaptive noise reduction factor according to the position offset;
The optical-flow results include optical flow in an x-direction and optical flow in a y-direction, the x-direction being orthogonal to the y-direction; the processing the optical flow result to obtain the position offset of the current frame image and the previous frame image includes:
averaging the absolute values of the optical flow in the x direction and of the optical flow in the y direction, respectively, to obtain a first optical flow value and a second optical flow value;
obtaining the maximum of the first optical flow value and the second optical flow value, and determining the maximum as the position offset;
the obtaining the first adaptive noise reduction factor according to the position offset includes:
calculating the first adaptive noise reduction factor according to the following formula:
wherein Y1 represents a first adaptive noise reduction factor and D represents a position offset;
a second adaptive noise reduction factor acquisition unit configured to:
Acquiring a first pixel value difference in the current frame image and the previous frame image, wherein the first pixel value difference is a pixel difference value between a first pixel point and a corresponding pixel point;
acquiring the noise level of the first pixel point based on the first pixel value difference;
obtaining the second adaptive noise reduction factor according to the noise level of the first pixel point comprises the following steps:
calculating the second adaptive noise reduction factor according to the following formula:
where Y2 represents the second adaptive noise reduction factor and diff represents the average of the pixel difference values.
5. A video noise reduction device, comprising:
the video noise reduction processing module is configured to respond to a user operation, display a video frame in a display screen of an electronic device, where the video frame includes a target noise reduction image after noise reduction, the target noise reduction image after noise reduction is obtained by noise reduction of a current frame and a previous frame of a target video acquired by the electronic device, and noise reduction of the current frame and the previous frame of the target video includes:
The image acquisition unit is used for acquiring a current frame image and a previous frame image of the target video; the previous frame image comprises an image output in the last cycle and a previous frame image which is not subjected to noise reduction treatment;
The fusion factor acquisition unit is used for acquiring fusion factors corresponding to a current frame image and a previous frame image of the target video, wherein the fusion factors are related to a first adaptive noise reduction factor and a second adaptive noise reduction factor, and the first adaptive noise reduction factor is used for indicating the position offset of the current frame image and the previous frame image; the second adaptive noise reduction factor is used for indicating the noise level of the current frame image;
the image processing unit is used for performing motion estimation on the current frame image of the target video and the previous frame image that has not undergone noise reduction processing, to obtain an optical flow result, wherein the optical flow result is applied to the image output in the previous cycle to obtain an image to be fused that is aligned with the current frame image;
The fusion unit is used for fusing the current frame image and the image to be fused to generate a target noise reduction image corresponding to the target video in the current cycle;
a first adaptive noise reduction factor acquisition unit configured to:
Processing the optical flow result to obtain the position offset of the current frame image and the previous frame image, and acquiring the first adaptive noise reduction factor according to the position offset;
The optical-flow results include optical flow in an x-direction and optical flow in a y-direction, the x-direction being orthogonal to the y-direction; the processing the optical flow result to obtain the position offset of the current frame image and the previous frame image includes:
averaging the absolute values of the optical flow in the x direction and of the optical flow in the y direction, respectively, to obtain a first optical flow value and a second optical flow value;
obtaining the maximum of the first optical flow value and the second optical flow value, and determining the maximum as the position offset;
the obtaining the first adaptive noise reduction factor according to the position offset includes:
calculating the first adaptive noise reduction factor according to the following formula:
wherein Y1 represents a first adaptive noise reduction factor and D represents a position offset;
a second adaptive noise reduction factor acquisition unit configured to:
Acquiring a first pixel value difference in the current frame image and the previous frame image, wherein the first pixel value difference is a pixel difference value between a first pixel point and a corresponding pixel point;
acquiring the noise level of the first pixel point based on the first pixel value difference;
obtaining the second adaptive noise reduction factor according to the noise level of the first pixel point comprises the following steps:
calculating the second adaptive noise reduction factor according to the following formula:
where Y2 represents the second adaptive noise reduction factor and diff represents the average of the pixel difference values.
6. A video noise reduction device comprising a memory, a processor and a program stored on the memory and executable on the processor, characterized in that the processor, when executing the program, performs the video noise reduction method of any one of claims 1 to 3.
7. A computer readable storage medium, characterized in that the computer readable storage medium has stored therein a computer program which, when executed by a processor, implements the video denoising method of any one of claims 1 to 3.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410789407.0A CN118365554B (en) | 2024-06-19 | 2024-06-19 | Video noise reduction method, device, electronic equipment and computer readable storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN118365554A CN118365554A (en) | 2024-07-19 |
CN118365554B true CN118365554B (en) | 2024-08-23 |
Family
ID=91887666
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202410789407.0A Active CN118365554B (en) | 2024-06-19 | 2024-06-19 | Video noise reduction method, device, electronic equipment and computer readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN118365554B (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111127347A (en) * | 2019-12-09 | 2020-05-08 | Oppo广东移动通信有限公司 | Noise reduction method, terminal and storage medium |
CN114612312A (en) * | 2020-12-08 | 2022-06-10 | 武汉Tcl集团工业研究院有限公司 | Video noise reduction method, intelligent terminal and computer readable storage medium |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110933334B (en) * | 2019-12-12 | 2021-08-03 | 腾讯科技(深圳)有限公司 | Video noise reduction method, device, terminal and storage medium |
CN117880645A (en) * | 2022-10-10 | 2024-04-12 | 华为技术有限公司 | Image processing method and device, electronic equipment and storage medium |
-
2024
- 2024-06-19 CN CN202410789407.0A patent/CN118365554B/en active Active
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10979612B2 (en) | Electronic device comprising plurality of cameras using rolling shutter mode | |
US10911691B1 (en) | System and method for dynamic selection of reference image frame | |
CN110996170B (en) | Video file playing method and related equipment | |
CN111225150A (en) | Method for processing interpolation frame and related product | |
CN112767281A (en) | Image ghost eliminating method, device, electronic equipment and storage medium | |
KR20200011000A (en) | Device and method for augmented reality preview and positional tracking | |
WO2020171300A1 (en) | Processing image data in a composite image | |
CN110889809A (en) | Image processing method and device, electronic device and storage medium | |
US9451165B2 (en) | Image processing apparatus | |
CN108574803B (en) | Image selection method and device, storage medium and electronic equipment | |
CN115546043B (en) | Video processing method and related equipment thereof | |
US11200653B2 (en) | Local histogram matching with global regularization and motion exclusion for multi-exposure image fusion | |
CN115701125A (en) | Image anti-shake method and electronic equipment | |
KR20210101941A (en) | Electronic device and method for generating high dynamic range images | |
US10277844B2 (en) | Processing images based on generated motion data | |
WO2024104439A1 (en) | Image frame interpolation method and apparatus, device, and computer readable storage medium | |
CN112565603B (en) | Image processing method and device and electronic equipment | |
CN118365554B (en) | Video noise reduction method, device, electronic equipment and computer readable storage medium | |
CN108027646B (en) | Anti-shaking method and device for terminal display | |
CN118696331A (en) | Method and electronic device for synthesizing image training data and image processing using artificial intelligence | |
US20240062342A1 (en) | Machine learning-based approaches for synthetic training data generation and image sharpening | |
CN115834795A (en) | Image processing method, device, equipment and computer readable storage medium | |
CN111462008B (en) | Low-illumination image enhancement method, low-illumination image enhancement device and electronic equipment | |
CN111754417B (en) | Noise reduction method for video image, video matting method, device and electronic system | |
EP4280154A1 (en) | Image blurriness determination method and device related thereto |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||