CN107016650B - 3D noise reduction method and device for video image - Google Patents

3D noise reduction method and device for video image

Info

Publication number
CN107016650B
CN107016650B (application CN201710107692.3A)
Authority
CN
China
Prior art keywords
current
image data
image
noise reduction
pixel point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710107692.3A
Other languages
Chinese (zh)
Other versions
CN107016650A (en)
Inventor
熊超
章勇
曹李军
陈卫东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Keda Technology Co Ltd
Original Assignee
Suzhou Keda Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Keda Technology Co Ltd filed Critical Suzhou Keda Technology Co Ltd
Priority to CN201710107692.3A priority Critical patent/CN107016650B/en
Publication of CN107016650A publication Critical patent/CN107016650A/en
Priority to PCT/CN2017/117164 priority patent/WO2018153150A1/en
Application granted granted Critical
Publication of CN107016650B publication Critical patent/CN107016650B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/10 Image enhancement or restoration using non-spatial domain filtering
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/14 Picture signal circuitry for video frequency region
    • H04N5/21 Circuitry for suppressing or minimising disturbance, e.g. moiré or halo
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G06T2207/10021 Stereoscopic video; Stereoscopic image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20172 Image enhancement details
    • G06T2207/20182 Noise reduction or smoothing in the temporal domain; Spatio-temporal filtering

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Picture Signal Circuits (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a 3D noise reduction method and device for a video image. The method comprises the following steps: acquiring current first image data from a video image; performing spatial-domain 2D noise reduction on the first image data to obtain current second image data; obtaining a binary image from the current second image data, the binary image comprising a background region and a foreground region; obtaining a temporal filter strength coefficient for each pixel in the current second image data; and performing 3D noise reduction on the current second image data according to the filter strength coefficient and the 3D noise reduction result of the previous frame of the current first image data. This addresses the difficulty existing 3D noise reduction techniques have in obtaining accurate motion information, and avoids the increased device storage overhead of temporal FIR filtering. The noise reduction effect of the video image is thereby improved, and 3D noise reduction can be applied in a wider range of scenes.

Description

3D noise reduction method and device for video image
Technical Field
The invention relates to the field of video image processing, in particular to a 3D noise reduction method and device for a video image.
Background
Video and images are favored by users for their strong visual impact, and the quality of a video image directly affects its usability; the noise reduction function therefore plays a key role. A good image noise reduction technique makes moving objects clearly identifiable in low-illumination scenes, and 3D noise reduction has become a research hotspot in the field of video image noise reduction. In general, denoising a video image in the spatial domain alone or the temporal domain alone tends to cause over-smoothed images, loss of detail, or inter-frame jumping noise, whereas a 3D method combining the temporal and spatial domains avoids these phenomena as much as possible. Among existing 3D noise reduction techniques for video images, one class is based on motion estimation and compensation: motion estimation is performed on macroblocks between two frames to obtain motion information of the image, the image is then motion-compensated according to that information, and finally an FIR (Finite Impulse Response) filter is applied in the temporal domain to output the noise reduction result. Another class is motion-adaptive 3D noise reduction: the motion intensity of pixels or macroblocks between frames is analyzed, and according to this information, spatial noise reduction is favored where the motion intensity is large and temporal noise reduction is favored otherwise.
The main defects are as follows. In low-illumination scenes such as night-time, whether inter-frame macroblock matching or inter-frame motion intensity calculation is used, it is difficult to obtain accurate motion information, which easily leads to misclassification of background and foreground pixels, unreduced background noise, and severe trailing of foreground moving objects. Meanwhile, temporal FIR filtering requires storing multiple frames of historical image data, which increases the storage overhead of the device and harms the real-time performance of a 3D noise reduction method in a video image system.
Disclosure of Invention
In view of this, the technical problem to be solved by the present invention is to overcome the defects of the prior art: in low-illumination scenes it is difficult to obtain accurate motion information for 3D noise reduction, which easily leads to misclassification of background and foreground pixels, unreduced background noise, and severe trailing of foreground moving objects; meanwhile, temporal FIR filtering requires storing multiple frames of historical image data, which increases the storage overhead of the device and harms the real-time performance of the 3D noise reduction method in a video image system. To this end, a 3D noise reduction method and device for a video image are provided.
Therefore, the embodiment of the invention provides the following technical scheme:
The embodiment of the invention provides a 3D noise reduction method for a video image, comprising the following steps: acquiring current first image data from a video image; performing spatial-domain 2D noise reduction on the first image data to obtain current second image data; obtaining a binary image from the current second image data, wherein the binary image comprises a background region and a foreground region; obtaining a temporal filter strength coefficient for each pixel in the current second image data; and performing 3D noise reduction on the current second image data according to the filter strength coefficient and the 3D noise reduction result of the previous frame of image data of the current first image data.
Optionally, obtaining the binary image from the current second image data includes: judging whether the current pixel belongs to the background region or the foreground region; if the current pixel belongs to the background region, counting the pixels belonging to the foreground region in a first predetermined region near the current pixel; and when the count is larger than a first threshold, setting the pixels in a second predetermined region near the current pixel as belonging to the foreground region.
Optionally, the second predetermined region is a neighborhood window centered on the current pixel with the second threshold as its radius.
Optionally, obtaining the binary image from the current second image data includes: when the current pixel belongs to the foreground region, obtaining the motion intensity information of the current second image data; and when the motion intensity information is greater than or equal to a third threshold and the number of pixels belonging to the foreground region in a third predetermined region around the same coordinate position is smaller than a fourth threshold, resetting the current pixel to the background region, wherein the same coordinate position refers to the same position in the previous frame image of the current first image and in the frame before that; and/or,
when the current pixel belongs to the background region, obtaining the motion intensity information of the current second image data; and when the motion intensity information is less than or equal to a fifth threshold and the number of pixels belonging to the foreground region in a fourth predetermined region around the same coordinate position is greater than a sixth threshold, resetting the current pixel to the foreground region, wherein the same coordinate position refers to the same position in the previous frame image of the current first image and in the frame before that.
Optionally, obtaining the motion intensity information of the current second image data includes: calculating the motion intensity information of the current second image data by a SAD (Sum of Absolute Differences) algorithm.
Optionally, performing 3D noise reduction on the current second image data according to the 3D noise reduction result of the previous frame of image data of the current first image data and the filter strength coefficient includes obtaining the result by the following formula:
cur_3D=α*pre_3D+(1-α)*cur_2D
wherein cur_3D represents the 3D noise reduction output of the current second image data, cur_2D represents the 2D noise reduction result of the current first image data, pre_3D represents the 3D noise reduction result of the previous frame of image data of the current first image data, and α represents the temporal filter strength coefficient.
The embodiment of the invention also provides a 3D noise reduction device for a video image, comprising: an acquisition module for acquiring current first image data from a video image; a first noise reduction module for performing spatial-domain 2D noise reduction on the first image data to obtain current second image data; a first obtaining module for obtaining a binary image from the current second image data, wherein the binary image comprises a background region and a foreground region; a second obtaining module for obtaining a temporal filter strength coefficient for each pixel in the current second image data; and a second noise reduction module for performing 3D noise reduction on the current second image data according to the filter strength coefficient and the 3D noise reduction result of the previous frame of image data of the current first image data.
Optionally, the first obtaining module includes: a judging unit for judging whether the current pixel belongs to the background region or the foreground region; an obtaining unit for counting the pixels belonging to the foreground region in a first predetermined region near the current pixel when the current pixel belongs to the background region; and a setting unit for setting the pixels in a second predetermined region near the current pixel as belonging to the foreground region when the count is larger than a first threshold.
Optionally, the second predetermined region is a neighborhood window centered on the current pixel with the second threshold as its radius.
Optionally, the first obtaining module includes: a first processing unit for obtaining the motion intensity information of the current second image data when the current pixel belongs to the foreground region, and resetting the current pixel to the background region when the motion intensity information is greater than or equal to a third threshold and the number of foreground pixels in a third predetermined region around the same coordinate position is smaller than a fourth threshold, wherein the same coordinate position refers to the same position in the previous frame image of the current first image and in the frame before that; and/or a second processing unit for obtaining the motion intensity information of the current second image data when the current pixel belongs to the background region, and resetting the current pixel to the foreground region when the motion intensity information is less than or equal to a fifth threshold and the number of foreground pixels in a fourth predetermined region around the same coordinate position is greater than a sixth threshold, wherein the same coordinate position is defined as above.
Optionally, the first processing unit or the second processing unit is further configured to calculate motion intensity information of the current second image data by a SAD algorithm.
Optionally, the second denoising module is further configured to obtain a result of performing 3D denoising processing on the current second image data according to the following formula:
cur_3D=α*pre_3D+(1-α)*cur_2D
wherein cur_3D represents the 3D noise reduction output of the current second image data, cur_2D represents the 2D noise reduction result of the current first image data, pre_3D represents the 3D noise reduction result of the previous frame of image data of the current first image data, and α represents the temporal filter strength coefficient.
The technical scheme of the embodiment of the invention has the following advantages:
the embodiment of the invention provides a 3D noise reduction method and a device for a video image, wherein the method comprises the steps of acquiring current first image data from the video image; 2D denoising the first image data based on a space domain to obtain current second image data; acquiring a binary image according to the current second image data; wherein the binary image comprises a background region and a foreground region; acquiring a filtering intensity coefficient of each pixel point in the current second image data on time domain filtering; and performing 3D noise reduction processing on the current second image data according to the 3D noise reduction result of the last frame of image data of the current first image data and the filtering intensity coefficient. For the existing 3D noise reduction technology, in a scene with lower illumination, it is difficult to obtain relatively accurate motion information, which easily results in the defects of error analysis of background and foreground pixels, no reduction of background noise, severe tailing of foreground moving objects, and the like, meanwhile, FIR filtering in the time domain requires the storage of multiple frames of historical image data, which increases the storage overhead of the device and is not beneficial to the real-time performance of the 3D noise reduction method in a video image system.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
FIG. 1 is a flowchart of a 3D noise reduction method for a video image according to an embodiment of the present invention;
FIG. 2 is a table of filter strength coefficient parameters according to an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of 3D noise reduction for a video image according to an embodiment of the present invention;
FIG. 4 is another flowchart of a 3D noise reduction method for a video image according to an embodiment of the present invention;
FIG. 5 is a block diagram of a 3D noise reduction apparatus for a video image according to an embodiment of the present invention;
FIG. 6 is a block diagram of a first obtaining module according to an embodiment of the present invention;
FIG. 7 is another block diagram of the first obtaining module according to an embodiment of the present invention.
Detailed Description
The technical solutions of the present invention will be described clearly and completely with reference to the accompanying drawings, and it should be understood that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the description of the present invention, it should be noted that the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc., indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, and are only for convenience of description and simplicity of description, but do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus, should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
In the description of the present invention, it should be noted that, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "connected" are to be construed broadly, e.g., as meaning either a fixed connection, a removable connection, or an integral connection; can be mechanically or electrically connected; the two elements may be directly connected or indirectly connected through an intermediate medium, or may be communicated with each other inside the two elements, or may be wirelessly connected or wired connected. The specific meanings of the above terms in the present invention can be understood in specific cases to those skilled in the art.
In addition, the technical features involved in the different embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
Example 1
An embodiment of the present invention provides a 3D denoising method for a video image, and fig. 1 is a flowchart of the 3D denoising method for a video image according to an embodiment of the present invention, as shown in fig. 1, the flowchart includes the following steps:
Step S101: acquiring current first image data from a video image, for example one frame of image data of an input video image.
Step S102: performing spatial-domain 2D noise reduction on the first image data to obtain current second image data. Spatial 2D noise reduction is applied to the YUV data of the current frame; the 2D noise reduction method used in this implementation may be the 2D-DCT noise reduction method, which is among the more practical 2D noise reduction methods.
Step S103: obtaining a binary image from the current second image data, wherein the binary image comprises a background region and a foreground region. Moving target detection is performed on the second image obtained from the 2D noise reduction result using ViBe (Visual Background Extractor), a background-modeling-based moving target detection method, to obtain a binary image containing a static background area and a moving foreground area; ViBe is among the better background-modeling-based moving target detection methods.
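The segmentation in step S103 can be sketched with a toy stand-in for ViBe: a simple per-pixel comparison of the denoised luma against a background model. The model and the threshold below are illustrative assumptions for the sketch, not the ViBe algorithm itself:

```python
import numpy as np

def foreground_mask(cur_2d, background_model, diff_threshold=20):
    """Toy stand-in for the ViBe detector named in the text: a pixel whose
    luma differs from a per-pixel background model by more than the
    threshold is marked foreground (255), otherwise background (0).
    The model and threshold are assumptions, not ViBe itself."""
    diff = np.abs(cur_2d.astype(np.int16) - background_model.astype(np.int16))
    return np.where(diff > diff_threshold, 255, 0).astype(np.uint8)
```

A real implementation would maintain a sample-based background model per pixel, as ViBe does, rather than a single reference frame.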
Step S104: obtaining the temporal filter strength coefficient of each pixel in the current second image data. The temporal filter strength coefficient is calculated. Practical simulation tests show that this calculation can be associated with several properties of the video scene, such as its noise standard deviation and its digital image gain. For example, at a given image gain value, a fixed value between 0 and 1 is assigned to the foreground region and to the background region according to the finally generated binary image. Taking a digital image gain range of 0-60 dB as an example, reference may be made to fig. 2; the data values in fig. 2 were obtained through extensive experiments, and the parameters may be modified for specific devices and application scenarios.
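Step S104 amounts to a per-pixel lookup: the gain selects a pair of coefficients, and the binary image picks the foreground or background value for each pixel. The gain buckets and alpha values below are placeholders, since the real numbers live in the patent's fig. 2 table and are tuned per device:

```python
import numpy as np

def filter_strength(binary_mask, gain_db, table=None):
    """Per-pixel temporal filter strength from the binary image and the
    digital gain. The buckets and alpha values are placeholder assumptions;
    the patent's actual values are in its fig. 2 and are device-specific."""
    if table is None:
        table = {(0, 20): (0.90, 0.30),   # (alpha_background, alpha_foreground)
                 (20, 40): (0.80, 0.40),
                 (40, 61): (0.70, 0.50)}
    for (lo, hi), (a_bg, a_fg) in table.items():
        if lo <= gain_db < hi:
            return np.where(binary_mask > 0, a_fg, a_bg)
    raise ValueError("gain outside the 0-60 dB range")
```

Background pixels get the larger coefficient (stronger temporal smoothing); foreground pixels get the smaller one.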
Step S105: performing 3D noise reduction on the current second image data according to the 3D noise reduction result of the previous frame of image data of the current first image data and the filter strength coefficient. An IIR filter is used: the 3D noise reduction result of the previous frame, the 2D noise reduction result of the current frame, and the filter strength coefficient are the inputs of the IIR filter, and its output is the 3D noise reduction result. If the temporal filter coefficient is large, the pixel likely belongs to a static background region, and more of the previous frame's 3D noise reduction result is weighted into the final result; if the coefficient is small, the pixel likely belongs to a foreground motion region, and more of the current 2D noise reduction result is weighted in. With reference to fig. 3, IIR-based temporal filtering is applied to the data of the current frame, with the following IIR formula:
cur_3D=α*pre_3D+(1-α)*cur_2D
wherein cur_3D represents the 3D noise reduction output of the current frame, cur_2D represents the 2D noise reduction result of the current frame, pre_3D represents the 3D noise reduction result of the previous frame, and α represents the temporal filter strength coefficient.
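The blend itself is a one-line per-pixel IIR update. For instance, with α = 0.8, pre_3D = 100 and cur_2D = 120, the output is 0.8*100 + 0.2*120 = 104:

```python
def iir_3d(pre_3d, cur_2d, alpha):
    """Per-pixel IIR blend: cur_3D = alpha * pre_3D + (1 - alpha) * cur_2D.
    Works on scalars or, unchanged, on NumPy arrays."""
    return alpha * pre_3d + (1.0 - alpha) * cur_2d
```

Only the previous frame's 3D output has to be stored, which is the memory advantage over temporal FIR filtering noted above.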
Through the above steps, current first image data is acquired from a video image; spatial-domain 2D noise reduction is applied to the first image data to obtain current second image data; a binary image comprising a background region and a foreground region is obtained from the current second image data; a temporal filter strength coefficient is obtained for each pixel of the current second image data; and 3D noise reduction is applied to the current second image data according to the filter strength coefficient and the 3D noise reduction result of the previous frame of image data of the current first image data. This overcomes the difficulty of obtaining accurate motion information in low-illumination scenes, which easily leads to misclassification of background and foreground pixels, unreduced background noise, and severe trailing of foreground moving objects; it also avoids temporal FIR filtering, which requires storing multiple frames of historical image data, increasing the storage overhead of the device and harming the real-time performance of the 3D noise reduction method in a video image system.
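The five steps above can be chained into one self-contained sketch. The box blur, the differencing detector, and the two alpha values below are illustrative stand-ins for the 2D-DCT denoiser, ViBe, and the fig. 2 coefficient table, none of which are specified numerically in the text:

```python
import numpy as np

def denoise_frame(cur_raw, pre_3d, background_model,
                  alpha_bg=0.9, alpha_fg=0.3):
    """One iteration of steps S101-S105 on a luma frame (all stand-ins
    are assumptions; see the lead-in)."""
    # S102: spatial 2D denoise (a 3x3 box blur stands in for 2D-DCT)
    pad = np.pad(cur_raw.astype(np.float32), 1, mode='edge')
    h, w = cur_raw.shape
    cur_2d = sum(pad[dy:dy + h, dx:dx + w]
                 for dy in range(3) for dx in range(3)) / 9.0
    # S103: binary foreground mask (simple differencing stands in for ViBe)
    mask = np.abs(cur_2d - background_model) > 20
    # S104: per-pixel temporal filter strength from the mask
    alpha = np.where(mask, alpha_fg, alpha_bg)
    # S105: IIR temporal blend with the previous frame's 3D result
    return alpha * pre_3d + (1.0 - alpha) * cur_2d
```

Feeding each output back in as `pre_3d` for the next frame gives the recursive temporal filtering the method relies on.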
Step S103 involves obtaining a binary image from the current second image data. In an optional embodiment, it is judged whether the current pixel belongs to the background region or the foreground region; if the current pixel belongs to the background region, the number of pixels belonging to the foreground region in a first predetermined region near the current pixel is counted, and when this number is larger than a first threshold, the pixels in a second predetermined region near the current pixel are set as belonging to the foreground region. The second predetermined region is a neighborhood window centered on the current pixel with a second threshold as its radius. Specifically, if the current pixel is judged to belong to the background region, the numbers of foreground pixels in the horizontal and vertical directions are counted within a cross-shaped window centered on the current pixel with a radius of 7 in the up, down, left, and right directions. If the number of foreground pixels obtained in the horizontal or the vertical direction is greater than a preset threshold Th1, the current pixel is filled: all values in a neighborhood window of radius 2 centered on the current pixel are set as belonging to the foreground region. If the current pixel is judged to belong to the foreground region, no processing is performed. Here the threshold Th1 is set to half the window radius.
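The fill step just described can be sketched as follows; bounds handling is simplified for brevity, which is an assumption of the sketch rather than part of the method:

```python
import numpy as np

def fill_isolated_background(mask, x, y, radius=7, fill_radius=2):
    """Fill step from the text above: for a background pixel, count
    foreground pixels along the horizontal and vertical arms of a
    cross-shaped window (radius 7); if either count exceeds Th1 (half the
    window radius), mark the radius-2 neighborhood as foreground."""
    if mask[y, x] != 0:                  # foreground pixels: no processing
        return
    th1 = radius // 2                    # Th1 = half the window radius
    horiz = np.count_nonzero(mask[y, max(x - radius, 0):x + radius + 1])
    vert = np.count_nonzero(mask[max(y - radius, 0):y + radius + 1, x])
    if horiz > th1 or vert > th1:
        mask[max(y - fill_radius, 0):y + fill_radius + 1,
             max(x - fill_radius, 0):x + fill_radius + 1] = 255
```

The effect is to close small holes inside foreground objects left by the initial detection.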
Step S103 involves obtaining a binary image from the current second image data. In an optional embodiment, when the current pixel belongs to the foreground region, the motion intensity information of the current second image data is obtained; when the motion intensity information is greater than or equal to a third threshold and the number of foreground pixels in a third predetermined region around the same coordinate position is smaller than a fourth threshold, the current pixel is reset to the background region, wherein the same coordinate position refers to the same position in the previous frame image of the current first image and in the frame before that. In another optional embodiment, when the current pixel belongs to the background region, the motion intensity information of the current second image data is obtained; when the motion intensity information is less than or equal to a fifth threshold and the number of foreground pixels in a fourth predetermined region around the same coordinate position is greater than a sixth threshold, the current pixel is reset to the foreground region, with the same coordinate position defined as above. Specifically, the saved binary images of the previous frame of image data and of the frame before that are acquired, and the obtained SAD value information and the more complete binary image generated by the morphological method are read and analyzed pixel by pixel.
If the current pixel belongs to the foreground region, and furthermore its SAD value is greater than or equal to a preset threshold Th2 while the number of foreground pixels in the radius-2 neighborhood windows of the binary images of the previous frame and the frame before it, at the same coordinate position, is less than a preset threshold Th3, the pixel is re-labeled as belonging to the background region. If the current pixel belongs to the background region, and furthermore its SAD value is less than or equal to the preset threshold Th2 while the number of foreground pixels in the radius-2 neighborhood windows of the binary images of the previous frame and the frame before it, at the same coordinate position, is greater than or equal to a preset threshold Th4, the pixel is re-labeled as belonging to the foreground region. The threshold Th2 is set to 50, and the thresholds Th3 and Th4 are set to the window radius size. The final binary image of the current scene image is obtained by this method.
The above steps involve obtaining the motion intensity information of the current second image data, which in an optional embodiment is calculated by the SAD algorithm. Specifically, the inter-frame difference between the spatially 2D-denoised image data and the 3D noise reduction output of the previous frame is taken over a spatial neighborhood window of a certain size, and the absolute values are summed, i.e. the SAD (Sum of Absolute Differences) is computed; this SAD value serves as the motion intensity information of the current image. The SAD value is linearly mapped to the interval 0-255, the neighborhood window radius is set to 1, and the SAD is calculated as follows:
SAD(i, j) = Σ_{m=-1..1} Σ_{n=-1..1} |cur_Y_2D(i+m, j+n) - pre_Y_3D(i+m, j+n)|
wherein pre_Y_3D represents the 3D noise reduction result of the Y component of the previous frame, cur_Y_2D represents the 2D noise reduction result of the Y component of the current frame, and i, j represent the horizontal and vertical coordinates of the pixel, respectively.
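The SAD computation can be sketched as below. The text does not spell out the linear 0-255 mapping, so the window mean of absolute differences is used here as an assumption (for 8-bit luma this is automatically within 0-255):

```python
import numpy as np

def sad_motion(cur_y_2d, pre_y_3d, i, j, r=1):
    """SAD over a (2r+1)x(2r+1) window centered at (i, j), r = 1 as in the
    text; (i, j) is treated as (row, col). The division by the window size
    is an assumed form of the linear 0-255 mapping."""
    win_c = cur_y_2d[i - r:i + r + 1, j - r:j + r + 1].astype(np.int32)
    win_p = pre_y_3d[i - r:i + r + 1, j - r:j + r + 1].astype(np.int32)
    n = (2 * r + 1) ** 2
    return int(np.abs(win_c - win_p).sum() // n)
```

A large value indicates strong inter-frame change (motion) around the pixel; a small value indicates a static neighborhood.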
Step S105 involves performing 3D noise reduction on the current second image data according to the 3D noise reduction result of the previous frame of image data of the current first image data and the filter strength coefficient. In an optional embodiment, the result is obtained by the following formula:
cur_3D=α*pre_3D+(1-α)*cur_2D
wherein cur_3D represents the 3D noise reduction output result of the current second image data, cur_2D represents the 2D noise reduction result of the current first image data, pre_3D represents the 3D noise reduction result of the previous frame of image data of the current first image data, and α represents the temporal filtering strength coefficient.
Specifically, the current second image data is subjected to 3D noise reduction processing according to the 3D noise reduction result of the previous frame of image data of the current first image data and the filtering strength coefficient. An IIR filter is used: the 3D noise reduction result of the previous frame, the 2D noise reduction result of the current frame, and the filtering strength coefficient serve as the inputs of the IIR filter, and the output of the IIR filter serves as the 3D noise reduction output. If the temporal filter coefficient is larger, the pixel point is likely in a foreground motion region, and more of the previous frame's 3D noise reduction result is carried into the final 3D noise reduction result; if the temporal filter coefficient is smaller, the pixel point is likely in a static background region, and more of the 2D noise reduction result is carried into the final 3D noise reduction result.
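The first-order IIR blend cur_3D = α·pre_3D + (1−α)·cur_2D can be sketched directly (function name and the rounding/clipping to 8-bit output are assumptions):

```python
import numpy as np

def iir_blend(pre_3d, cur_2d, alpha):
    """First-order temporal IIR filter: cur_3D = a*pre_3D + (1-a)*cur_2D.

    alpha may be a scalar or a per-pixel coefficient map in [0, 1];
    larger alpha carries more of the previous frame's 3D result.
    """
    out = (alpha * pre_3d.astype(np.float32)
           + (1.0 - alpha) * cur_2d.astype(np.float32))
    # round and clip back to 8-bit pixel values (assumption)
    return np.clip(np.rint(out), 0, 255).astype(np.uint8)
```

Because per-pixel α maps are supported, motion regions and static regions can be blended with different strengths in a single call.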
Fig. 4 is another flowchart of a 3D denoising method for a video image according to an embodiment of the present invention, which includes the following specific steps:
Firstly, the image input information is acquired; the method then proceeds as follows. Step 1: perform 2D noise reduction on the video image data. Step 2: process the result of step 1 with a moving-object detection method based on background modeling, obtaining a binary image containing a static background region and a foreground motion region. Step 3: analyze the spatial neighborhood information of pixel points in the binary image of step 2 that are preliminarily judged as background, i.e. the distribution of foreground pixels above, below, to the left, and to the right of each such pixel; if the number of foreground pixels distributed in these four directions meets a given threshold, fill a neighborhood window of a certain size around the pixel with foreground pixels; otherwise, perform no processing. Step 4: process the result of step 3 with morphological dilation and erosion respectively, removing pseudo background points and pseudo foreground points to obtain more complete binary image information. Step 5: take the inter-frame difference between the result of step 1 and the 3D noise reduction output of the previous frame within a spatial neighborhood window of a certain size and take absolute values, i.e. compute the SAD (Sum of Absolute Differences), using the SAD value as the motion intensity information of the current image. Step 6: combining the result of step 5 with the binary image information of the previous frame and the frame before it, further analyze the binary image of step 4 to obtain the final binary image of the current scene image. Step 7: based on the binary image result of step 6, calculate the temporal filtering strength coefficient of each pixel point of the current image. Step 8: use an IIR filter, taking the 3D noise reduction result of the previous frame, the result of step 1, and the filtering strength coefficient of step 7 as its inputs, and taking the filter output as the 3D noise reduction output; that is, if the temporal filter coefficient is large, the pixel point is likely a foreground motion region and more of the previous frame's 3D noise reduction result is carried into the final result, while if the coefficient is small, the pixel point is likely a static background region and more of the step 1 result is carried into the final result. Finally, the 3D noise reduction result is output.
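Step 4's dilation-and-erosion cleanup can be sketched with plain numpy binary morphology (function names, square structuring element, and zero border handling are assumptions; dilating then eroding fills pseudo background holes, eroding then dilating removes pseudo foreground specks):

```python
import numpy as np

def dilate(mask, radius=1):
    """Binary dilation with a (2*radius+1)^2 square structuring element."""
    pad = np.pad(mask, radius)  # pixels outside the image count as 0
    h, w = mask.shape
    out = np.zeros_like(mask)
    win = 2 * radius + 1
    for m in range(win):
        for n in range(win):
            out |= pad[m:m+h, n:n+w]
    return out

def erode(mask, radius=1):
    """Binary erosion; pixels outside the image are treated as 0."""
    pad = np.pad(mask, radius)
    h, w = mask.shape
    out = np.ones_like(mask)
    win = 2 * radius + 1
    for m in range(win):
        for n in range(win):
            out &= pad[m:m+h, n:n+w]
    return out
```

For example, `erode(dilate(mask))` (a morphological closing) removes a one-pixel pseudo background hole inside a foreground blob, and `dilate(erode(mask))` (an opening) removes an isolated pseudo foreground pixel.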
Example 2
In this embodiment, a 3D noise reduction apparatus for video images is further provided. The apparatus is used to implement the foregoing embodiments and preferred implementations; what has already been described is not repeated. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. Although the apparatus described in the embodiments below is preferably implemented in software, an implementation in hardware, or in a combination of software and hardware, is also possible and contemplated.
Fig. 5 is a block diagram of a 3D noise reduction apparatus for a video image according to an embodiment of the present invention, as shown in fig. 5, the apparatus including: an acquisition module 51, configured to acquire current first image data from a video image; a first denoising module 52, configured to perform spatial domain-based 2D denoising on the first image data to obtain current second image data; a first obtaining module 53, configured to obtain a binary image according to the current second image data; the binary image comprises a background area and a foreground area; a second obtaining module 54, configured to obtain a filtering strength coefficient of each pixel point in the current second image data on the time domain filtering; and a second denoising module 55, configured to perform 3D denoising on the current second image data according to the 3D denoising result of the previous frame of image data of the current first image data and the filtering strength coefficient.
Fig. 6 is a block diagram of a first obtaining module according to an embodiment of the present invention, and as shown in fig. 6, the first obtaining module 53 further includes: a judging unit 531, configured to judge whether the current pixel belongs to the background area or the foreground area; an obtaining unit 532, configured to obtain, when the current pixel belongs to the background region, the number of pixels belonging to the foreground region in a first predetermined region near the current pixel; the setting unit 533 is configured to, when the number is greater than the first threshold, set a pixel point in a second predetermined region near the current pixel point as a pixel point belonging to the foreground region.
Optionally, the second predetermined region includes a neighborhood window centered on the current pixel point and having a radius of the second threshold.
Fig. 7 is another structural block diagram of the first obtaining module according to the embodiment of the present invention, and as shown in fig. 7, the first obtaining module 53 further includes: the first processing unit 534, configured to obtain motion intensity information of the current second image data when the current pixel belongs to the foreground region, and to reset the current pixel to the background region when the motion intensity information is greater than or equal to a third threshold and the number of pixels belonging to the foreground region in a third predetermined region near the same coordinate position is less than a fourth threshold; wherein the same coordinate position refers to the same position in the previous frame image of the current first image and in the frame before it; and/or, the second processing unit 535, configured to obtain motion intensity information of the current second image data when the current pixel belongs to the background region, and to reset the current pixel to the foreground region when the motion intensity information is less than or equal to a fifth threshold and the number of pixels belonging to the foreground region in a fourth predetermined region near the same coordinate position is greater than a sixth threshold; wherein the same coordinate position refers to the same position in the previous frame image of the current first image and in the frame before it.
Optionally, the first processing unit 534 or the second processing unit 535 of the apparatus is further configured to calculate the motion intensity information of the current second image data by the SAD algorithm.
Optionally, the second denoising module 55 of the apparatus is further configured to obtain a result of performing 3D denoising processing on the current second image data by the following formula:
cur_3D=α*pre_3D+(1-α)*cur_2D
wherein cur _3D represents the 3D noise reduction output result of the current second image data, cur _2D represents the 2D noise reduction result of the current first image data, pre _3D represents the 3D noise reduction result of the last frame of image data of the current first image data, and α represents the temporal filtering strength coefficient.
It should be understood that the above examples are only for clarity of illustration and are not intended to limit the embodiments. Other variations and modifications will be apparent to persons skilled in the art in light of the above description. It is neither necessary nor possible to exhaustively list all embodiments here, and obvious variations or modifications derived therefrom remain within the scope of the invention.

Claims (12)

1. A method for 3D denoising a video image, comprising:
acquiring current first image data from a video image;
2D denoising the first image data based on a space domain to obtain current second image data;
acquiring a binary image according to the current second image data; wherein the binary image comprises a background region and a foreground region;
acquiring a filtering intensity coefficient of each pixel point in the current second image data on time domain filtering; wherein, the filtering intensity coefficient is calculated according to the binary image;
and performing 3D noise reduction processing on the current second image data according to the 3D noise reduction result of the last frame of image data of the current first image data and the filtering intensity coefficient.
2. The method of claim 1, wherein obtaining a binary image from the current second image data comprises:
judging whether the current pixel belongs to the background area or the foreground area;
under the condition that the current pixel point belongs to the background area, acquiring the number of pixel points belonging to the foreground area in a first preset area near the current pixel point;
and when the number is larger than a first threshold value, setting the pixel points in a second preset area near the current pixel point as the pixel points belonging to the foreground area.
3. The method of claim 2, wherein the second predetermined region comprises a neighborhood window centered at the current pixel point and having a radius of a second threshold.
4. The method of claim 1, wherein obtaining a binary image from the current second image data comprises:
when the current pixel point belongs to the foreground area, acquiring the motion intensity information of the current second image data; when the motion intensity information is larger than or equal to a third threshold and the number of pixel points belonging to the foreground area in a third preset area near the same coordinate position is smaller than a fourth threshold, resetting the current pixel point to belong to the background area; wherein the same coordinate position refers to the same position in the previous frame image of the current first image and in the frame before it; and/or,
when the current pixel point belongs to the background area, acquiring the motion intensity information of the current second image data; under the condition that the motion intensity information is smaller than or equal to a fifth threshold value and the number of pixel points belonging to the foreground area in a fourth preset area near the same coordinate position is larger than a sixth threshold value, resetting the current pixel point to belong to the foreground area; wherein the same coordinate position refers to the same position in the previous frame image of the current first image and in the frame before it.
5. The method of claim 4, wherein obtaining motion intensity information for the current second image data comprises:
motion intensity information of the current second image data is calculated by a SAD algorithm.
6. The method according to any one of claims 1 to 5, wherein performing 3D noise reduction processing on the current second image data according to the filtering strength coefficient and the 3D noise reduction result of the last frame of image data of the current first image data comprises:
obtaining a result of performing 3D noise reduction processing on the current second image data by the following formula:
cur_3D=α*pre_3D+(1-α)*cur_2D
wherein cur _3D represents a 3D noise reduction output result of the current second image data, cur _2D represents a 2D noise reduction result of the current first image data, pre _3D represents a 3D noise reduction result of a previous frame image data of the current first image data, and α represents a temporal filtering strength coefficient.
7. A 3D noise reduction apparatus for video images, comprising:
the acquisition module is used for acquiring current first image data from a video image;
the first noise reduction module is used for carrying out 2D noise reduction on the first image data based on a space domain to obtain current second image data;
the first acquisition module is used for acquiring a binary image according to the current second image data; wherein the binary image comprises a background region and a foreground region;
the second acquisition module is used for acquiring a filtering intensity coefficient of each pixel point in the current second image data on time domain filtering; wherein, the filtering intensity coefficient is calculated according to the binary image;
and the second noise reduction module is used for carrying out 3D noise reduction processing on the current second image data according to the 3D noise reduction result of the last frame of image data of the current first image data and the filtering intensity coefficient.
8. The apparatus of claim 7, wherein the first obtaining module comprises:
the judging unit is used for judging whether the current pixel point belongs to the background area or the foreground area;
the obtaining unit is used for obtaining the number of pixel points belonging to the foreground area in a first preset area near the current pixel point under the condition that the current pixel point belongs to the background area;
and the setting unit is used for setting the pixel points in a second preset area near the current pixel point as the pixel points belonging to the foreground area when the number is larger than a first threshold value.
9. The apparatus of claim 8, wherein the second predetermined region comprises a neighborhood window centered at the current pixel point and having a radius of a second threshold.
10. The apparatus of claim 7, wherein the first obtaining module comprises:
the first processing unit is used for acquiring the motion intensity information of the current second image data when the current pixel point belongs to the foreground area; when the motion intensity information is larger than or equal to a third threshold and the number of pixel points belonging to the foreground area in a third preset area near the same coordinate position is smaller than a fourth threshold, resetting the current pixel point to belong to the background area; wherein the same coordinate position refers to the same position in the previous frame image of the current first image and in the frame before it; and/or,
the second processing unit is used for acquiring the motion intensity information of the current second image data when the current pixel point belongs to the background area; under the condition that the motion intensity information is smaller than or equal to a fifth threshold value and the number of pixel points belonging to the foreground area in a fourth preset area near the same coordinate position is larger than a sixth threshold value, resetting the current pixel point to belong to the foreground area; wherein the same coordinate position refers to the same position in the previous frame image of the current first image and in the frame before it.
11. The apparatus of claim 10, wherein the first processing unit or the second processing unit is further configured to calculate motion intensity information of the current second image data by a SAD algorithm.
12. The apparatus according to any one of claims 7 to 11, wherein the second denoising module is further configured to obtain a result of performing 3D denoising processing on the current second image data by the following formula:
cur_3D=α*pre_3D+(1-α)*cur_2D
wherein cur _3D represents a 3D noise reduction output result of the current second image data, cur _2D represents a 2D noise reduction result of the current first image data, pre _3D represents a 3D noise reduction result of a previous frame image data of the current first image data, and α represents a temporal filtering strength coefficient.
CN201710107692.3A 2017-02-27 2017-02-27 3D noise reduction method and device for video image Active CN107016650B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201710107692.3A CN107016650B (en) 2017-02-27 2017-02-27 3D noise reduction method and device for video image
PCT/CN2017/117164 WO2018153150A1 (en) 2017-02-27 2017-12-19 Video image 3d denoising method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710107692.3A CN107016650B (en) 2017-02-27 2017-02-27 3D noise reduction method and device for video image

Publications (2)

Publication Number Publication Date
CN107016650A CN107016650A (en) 2017-08-04
CN107016650B true CN107016650B (en) 2020-12-29

Family

ID=59440606

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710107692.3A Active CN107016650B (en) 2017-02-27 2017-02-27 3D noise reduction method and device for video image

Country Status (2)

Country Link
CN (1) CN107016650B (en)
WO (1) WO2018153150A1 (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107016650B (en) * 2017-02-27 2020-12-29 苏州科达科技股份有限公司 3D noise reduction method and device for video image
CN112311962B (en) * 2019-07-29 2023-11-24 深圳市中兴微电子技术有限公司 Video denoising method and device and computer readable storage medium
CN111754437B (en) * 2020-06-24 2023-07-14 成都国科微电子有限公司 3D noise reduction method and device based on motion intensity
CN113628138B (en) * 2021-08-06 2023-10-20 北京爱芯科技有限公司 Hardware multiplexing image noise reduction device
CN114331899A (en) * 2021-12-31 2022-04-12 上海宇思微电子有限公司 Image noise reduction method and device
CN115937013B (en) * 2022-10-08 2023-08-11 上海为旌科技有限公司 Luminance denoising method and device based on airspace

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101448077A (en) * 2008-12-26 2009-06-03 四川虹微技术有限公司 Self-adapting video image 3D denoise method
CN101964863A (en) * 2010-05-07 2011-02-02 镇江唐桥微电子有限公司 Self-adaptive time-space domain video image denoising method
CN102238316A (en) * 2010-04-29 2011-11-09 北京科迪讯通科技有限公司 Self-adaptive real-time denoising scheme for 3D digital video image
CN103108109A (en) * 2013-01-31 2013-05-15 深圳英飞拓科技股份有限公司 Digital video noise reduction system and method
CN103369209A (en) * 2013-07-31 2013-10-23 上海通途半导体科技有限公司 Video noise reduction device and video noise reduction method

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4693546B2 (en) * 2005-08-19 2011-06-01 株式会社東芝 Digital noise reduction apparatus and method, and video signal processing apparatus
US20110149040A1 (en) * 2009-12-17 2011-06-23 Ilya Klebanov Method and system for interlacing 3d video
CN103679196A (en) * 2013-12-05 2014-03-26 河海大学 Method for automatically classifying people and vehicles in video surveillance
CN104915655A (en) * 2015-06-15 2015-09-16 西安电子科技大学 Multi-path monitor video management method and device
CN107016650B (en) * 2017-02-27 2020-12-29 苏州科达科技股份有限公司 3D noise reduction method and device for video image


Also Published As

Publication number Publication date
WO2018153150A1 (en) 2018-08-30
CN107016650A (en) 2017-08-04

Similar Documents

Publication Publication Date Title
CN107016650B (en) 3D noise reduction method and device for video image
WO2021217643A1 (en) Method and device for infrared image processing, and movable platform
KR100754181B1 (en) Method and apparatus for reducing mosquito noise in fecoded video sequence
US10521885B2 (en) Image processing device and image processing method
CN109003249B (en) Method, device and terminal for enhancing image details
Liu et al. A perceptually relevant no-reference blockiness metric based on local image characteristics
CN112311962B (en) Video denoising method and device and computer readable storage medium
JP5708916B2 (en) Image evaluation method, image evaluation system, and program
KR101761928B1 (en) Blur measurement in a block-based compressed image
CN107481271B (en) Stereo matching method, system and mobile terminal
KR20070116717A (en) Method and device for measuring mpeg noise strength of compressed digital image
RU2603529C2 (en) Noise reduction for image sequences
JPH0799660A (en) Motion compensation predicting device
CN101123681A (en) A digital image noise reduction method and device
KR20110014067A (en) Method and system for transformation of stereo content
WO2014070273A1 (en) Recursive conditional means image denoising
CN110866882B (en) Layered joint bilateral filtering depth map repairing method based on depth confidence
US9813698B2 (en) Image processing device, image processing method, and electronic apparatus
JP2009534902A (en) Image improvement to increase accuracy smoothing characteristics
JP2002539657A (en) Process, apparatus and use for evaluating an encoded image
KR101907451B1 (en) Filter based high resolution color image restoration and image quality enhancement apparatus and method
KR20140046187A (en) Motion estimation apparatus and method thereof in a video system
Sonawane et al. Image quality assessment techniques: An overview
Stankiewicz et al. Estimation of temporally-consistent depth maps from video with reduced noise
ITMI970575A1 (en) METHOD FOR ESTIMATING THE MOVEMENT IN SEQUENCES OF BLOCK-CODED IMAGES IN PARTICULAR FOR THE PROCESSING OF THE VIDEO SIGNAL

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant