CN107016650A - Video image 3D noise-reduction method and device - Google Patents

Video image 3D noise-reduction method and device

Info

Publication number
CN107016650A
CN107016650A (application CN201710107692.3A)
Authority
CN
China
Prior art keywords
current
view data
image
noise
pixel point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710107692.3A
Other languages
Chinese (zh)
Other versions
CN107016650B (en)
Inventor
熊超
章勇
曹李军
陈卫东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Keda Technology Co Ltd
Original Assignee
Suzhou Keda Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Keda Technology Co Ltd filed Critical Suzhou Keda Technology Co Ltd
Priority to CN201710107692.3A priority Critical patent/CN107016650B/en
Publication of CN107016650A publication Critical patent/CN107016650A/en
Priority to PCT/CN2017/117164 priority patent/WO2018153150A1/en
Application granted granted Critical
Publication of CN107016650B publication Critical patent/CN107016650B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/10 Image enhancement or restoration using non-spatial domain filtering
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/14 Picture signal circuitry for video frequency region
    • H04N5/21 Circuitry for suppressing or minimising disturbance, e.g. moiré or halo
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G06T2207/10021 Stereoscopic video; Stereoscopic image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20172 Image enhancement details
    • G06T2207/20182 Noise reduction or smoothing in the temporal domain; Spatio-temporal filtering

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Picture Signal Circuits (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a video image 3D noise-reduction method and device. The method includes: acquiring current first image data from a video image; performing spatial-domain 2D noise reduction on the first image data to obtain current second image data; obtaining a binary image from the current second image data, the binary image comprising a background region and a foreground region; obtaining, for each pixel in the current second image data, a filter strength coefficient for temporal filtering; and performing 3D noise reduction on the current second image data according to the 3D noise-reduction result of the frame of image data preceding the current first image data and the filter strength coefficient. This addresses two problems of existing 3D noise-reduction techniques: the difficulty of obtaining accurate motion information, and the extra storage overhead that FIR temporal filtering imposes on the device. The noise reduction of video images is thereby improved, and 3D noise reduction gains a wider range of application.

Description

Video image 3D noise-reduction method and device
Technical field
The present invention relates to the field of video image processing, and in particular to a video image 3D noise-reduction method and device.
Background technology
Video and images attract users because they convey information intuitively, yet the quality of a video image directly affects the user experience, and the noise-reduction function of a video system undoubtedly plays a key role in that quality: with good noise reduction, moving objects can be recognized clearly even in low-illumination scenes. 3D noise-reduction techniques have therefore become a research focus in the field of video image noise reduction. In general, denoising a video image with a purely spatial-domain or purely temporal-domain method easily produces over-smoothing, loss of detail, inter-frame flicker noise and similar artifacts, whereas a 3D method that combines the temporal and spatial domains can largely avoid them. Among existing 3D noise-reduction techniques, one class is based on motion estimation and compensation: macroblock-based motion estimation between two frames yields the motion information of the image, that information is used to motion-compensate the image, and finally an FIR (Finite Impulse Response) filter performs temporal filtering and outputs the noise-reduction result. Another class is based on motion adaptation: the motion strength of pixels or macroblocks is analyzed between frames, and temporal and spatial noise reduction are weighted according to the size of that motion strength.
The main defect is that in low-illumination scenes such as night, both inter-macroblock matching and inter-frame motion-strength computation find it difficult to obtain accurate motion information. This easily leads to misclassification of background and foreground pixels, so that background noise is not reduced and moving foreground objects show severe smearing. Meanwhile, FIR filtering in the temporal domain requires storing several frames of history image data, which increases the storage overhead of the device and harms the real-time performance of 3D noise reduction in a video imaging system.
Summary of the invention
In view of this, the technical problem the present invention aims to solve is to overcome the defects of prior-art 3D noise reduction in low-illumination scenes: accurate motion information is difficult to obtain, background and foreground pixels are easily misclassified, background noise is not reduced while moving foreground objects show severe smearing, and FIR temporal filtering requires storing several frames of history image data, increasing the storage overhead of the device and harming the real-time performance of 3D noise reduction in a video imaging system. A video image 3D noise-reduction method and device are therefore provided.
To this end, the embodiments of the invention provide the following technical solutions:
An embodiment of the invention provides a video image 3D noise-reduction method, including: acquiring current first image data from a video image; performing spatial-domain 2D noise reduction on the first image data to obtain current second image data; obtaining a binary image from the current second image data, the binary image comprising a background region and a foreground region; obtaining, for each pixel in the current second image data, a filter strength coefficient for temporal filtering; and performing 3D noise reduction on the current second image data according to the 3D noise-reduction result of the frame of image data preceding the current first image data and the filter strength coefficient.
Optionally, obtaining the binary image from the current second image data includes: judging whether the current pixel belongs to the background region or to the foreground region; when the current pixel belongs to the background region, obtaining the number of pixels belonging to the foreground region within a first predetermined area around the current pixel; and, when that number exceeds a first threshold, setting the pixels within a second predetermined area around the current pixel to be pixels belonging to the foreground region.
Optionally, the second predetermined area comprises a neighbourhood window centred on the current pixel with the second threshold as its radius.
Optionally, obtaining the binary image from the current second image data includes: when the current pixel belongs to the foreground region, obtaining the motion strength information of the current second image data; and, when the motion strength information is greater than or equal to a third threshold and the number of foreground pixels at the same coordinate position within a third predetermined area is less than a fourth threshold, re-setting the current pixel to belong to the background region; wherein the same coordinate position refers to the same position in the previous frame and in the frame before the previous frame of the current first image; and/or,

when the current pixel belongs to the background region, obtaining the motion strength information of the current second image data; and, when the motion strength information is less than or equal to a fifth threshold and the number of foreground pixels at the same coordinate position within a fourth predetermined area is greater than a sixth threshold, re-setting the current pixel to belong to the foreground region; wherein the same coordinate position refers to the same position in the previous frame and in the frame before the previous frame of the current first image.
Optionally, obtaining the motion strength information of the current second image data includes: calculating the motion strength information of the current second image data by an SAD (Sum of Absolute Differences) algorithm.
Optionally, performing 3D noise reduction on the current second image data according to the 3D noise-reduction result of the preceding frame of image data and the filter strength coefficient includes obtaining the 3D noise-reduction result of the current second image data by the following formula:

cur_3D = α * pre_3D + (1 − α) * cur_2D

where cur_3D denotes the 3D noise-reduction output of the current second image data, cur_2D denotes the 2D noise-reduction result of the current first image data, pre_3D denotes the 3D noise-reduction result of the frame of image data preceding the current first image data, and α denotes the temporal filter strength coefficient.
An embodiment of the invention further provides a video image 3D noise-reduction device, including: an acquisition module for acquiring current first image data from a video image; a first noise-reduction module for performing spatial-domain 2D noise reduction on the first image data to obtain current second image data; a first obtaining module for obtaining a binary image from the current second image data, the binary image comprising a background region and a foreground region; a second obtaining module for obtaining, for each pixel in the current second image data, a filter strength coefficient for temporal filtering; and a second noise-reduction module for performing 3D noise reduction on the current second image data according to the 3D noise-reduction result of the frame of image data preceding the current first image data and the filter strength coefficient.
Optionally, the first obtaining module includes: a judging unit for judging whether the current pixel belongs to the background region or to the foreground region; an obtaining unit for, when the current pixel belongs to the background region, obtaining the number of pixels belonging to the foreground region within a first predetermined area around the current pixel; and a setting unit for, when that number exceeds a first threshold, setting the pixels within a second predetermined area around the current pixel to be pixels belonging to the foreground region.
Optionally, the second predetermined area comprises a neighbourhood window centred on the current pixel with the second threshold as its radius.
Optionally, the first obtaining module includes: a first processing unit for, when the current pixel belongs to the foreground region, obtaining the motion strength information of the current second image data, and, when the motion strength information is greater than or equal to a third threshold and the number of foreground pixels at the same coordinate position within a third predetermined area is less than a fourth threshold, re-setting the current pixel to belong to the background region, wherein the same coordinate position refers to the same position in the previous frame and in the frame before the previous frame of the current first image; and/or a second processing unit for, when the current pixel belongs to the background region, obtaining the motion strength information of the current second image data, and, when the motion strength information is less than or equal to a fifth threshold and the number of foreground pixels at the same coordinate position within a fourth predetermined area is greater than a sixth threshold, re-setting the current pixel to belong to the foreground region, wherein the same coordinate position refers to the same position in the previous frame and in the frame before the previous frame of the current first image.
Optionally, the first processing unit or the second processing unit is further configured to calculate the motion strength information of the current second image data by an SAD algorithm.
Optionally, the second noise-reduction module is further configured to obtain the 3D noise-reduction result of the current second image data by the following formula:

cur_3D = α * pre_3D + (1 − α) * cur_2D

where cur_3D denotes the 3D noise-reduction output of the current second image data, cur_2D denotes the 2D noise-reduction result of the current first image data, pre_3D denotes the 3D noise-reduction result of the frame of image data preceding the current first image data, and α denotes the temporal filter strength coefficient.
The technical solutions of the embodiments of the invention have the following advantages:
The embodiments of the invention provide a video image 3D noise-reduction method and device: current first image data is acquired from a video image; spatial-domain 2D noise reduction is performed on the first image data to obtain current second image data; a binary image comprising a background region and a foreground region is obtained from the current second image data; a filter strength coefficient for temporal filtering is obtained for each pixel in the current second image data; and 3D noise reduction is performed on the current second image data according to the 3D noise-reduction result of the preceding frame of image data and the filter strength coefficient. Existing 3D noise-reduction techniques find it difficult to obtain accurate motion information in low-illumination scenes, easily misclassify background and foreground pixels, leave background noise unreduced while moving foreground objects show severe smearing, and, because FIR temporal filtering requires storing several frames of history image data, increase the storage overhead of the device and harm real-time performance in a video imaging system. The embodiments of the invention solve these prior-art problems.
Brief description of the drawings
To explain the specific embodiments of the invention or the prior-art technical solutions more clearly, the drawings needed for describing the specific embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show some embodiments of the invention, and those of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a flow chart of the video image 3D noise-reduction method according to an embodiment of the present invention;
Fig. 2 is a filter strength coefficient parameter table according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of the video image 3D noise-reduction method according to an embodiment of the present invention;
Fig. 4 is another flow chart of the video image 3D noise-reduction method according to an embodiment of the present invention;
Fig. 5 is a structural block diagram of the video image 3D noise-reduction device according to an embodiment of the present invention;
Fig. 6 is a structural block diagram of the first obtaining module according to an embodiment of the present invention;
Fig. 7 is another structural block diagram of the first obtaining module according to an embodiment of the present invention.
Detailed description of the embodiments
The technical solutions of the present invention are described below clearly and completely with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by those of ordinary skill in the art without creative effort, based on the embodiments of the invention, fall within the protection scope of the invention.
In the description of the invention, it should be noted that orientation or position terms such as "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner" and "outer" indicate orientations or positions based on the drawings; they are used only to simplify the description of the invention and do not indicate or imply that the referenced device or element must have a specific orientation or be constructed and operated in a specific orientation, and therefore must not be understood as limiting the invention. In addition, the terms "first", "second" and "third" are used for description only and must not be understood as indicating or implying relative importance.
In the description of the invention, it should also be noted that, unless otherwise explicitly specified and defined, the terms "mounted", "connected" and "coupled" are to be understood broadly: the connection may be fixed, detachable or integral; mechanical or electrical; direct, or indirect through an intermediary; internal between two elements; wireless or wired. Those of ordinary skill in the art can understand the specific meanings of these terms in the invention according to the specific situation.
In addition, the technical features involved in the different embodiments of the invention described below may be combined with one another as long as they do not conflict.
Embodiment 1
An embodiment of the invention provides a video image 3D noise-reduction method. Fig. 1 is a flow chart of the method, which, as shown in Fig. 1, comprises the following steps:
Step S101: acquire current first image data from a video image, for example one frame of image data of the input video;
Step S102: perform spatial-domain 2D noise reduction on the first image data to obtain current second image data. The YUV data of the current frame undergoes 2D noise reduction based on spatial-domain relationships; the 2D method used in this implementation may be a 2D-DCT noise-reduction method, which is among the more practical 2D methods;
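The 2D-DCT spatial denoising named above can be sketched as blockwise DCT coefficient thresholding: small AC coefficients are treated as noise and zeroed before the inverse transform. The block size and threshold below are illustrative assumptions, not values from the patent.

```python
import numpy as np

def dct_matrix(n=8):
    # Orthonormal DCT-II basis: row k is the k-th cosine basis vector.
    k = np.arange(n)
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] /= np.sqrt(2.0)
    return C

def dct2d_denoise(img, block=8, thresh=20.0):
    # Blockwise hard-thresholding of 2D DCT coefficients; the DC term is kept.
    C = dct_matrix(block)
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.float64)
    for y in range(0, h - h % block, block):
        for x in range(0, w - w % block, block):
            b = img[y:y + block, x:x + block].astype(np.float64)
            coef = C @ b @ C.T              # forward 2D DCT
            dc = coef[0, 0]
            coef[np.abs(coef) < thresh] = 0.0
            coef[0, 0] = dc                 # never threshold the DC term
            out[y:y + block, x:x + block] = C.T @ coef @ C  # inverse 2D DCT
    return out
```

A flat region survives unchanged (only its DC term is non-zero), while low-amplitude AC noise is suppressed.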
Step S103: obtain a binary image from the current second image data, the binary image comprising a background region and a foreground region. The second image obtained from the 2D noise-reduction result is analysed with the ViBe (Visual Background Extractor) moving-object detection method, which is based on background modelling, yielding a binary image that contains a stationary background region and a moving foreground region; ViBe is one of the better methods in the background-modelling family of moving-object detectors.
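ViBe keeps a set of background samples per pixel and classifies a pixel as background when enough samples lie within a colour radius, refreshing samples conservatively. A minimal sketch of that idea for a grey-scale frame follows; the parameter values are the usual defaults from the ViBe literature, not taken from the patent.

```python
import numpy as np

def vibe_step(frame, samples, radius=20, min_matches=2, subsample=16, rng=None):
    # samples: (N, H, W) background samples per pixel; frame: (H, W) grey image.
    # A pixel is background if at least min_matches samples are within radius.
    if rng is None:
        rng = np.random.default_rng(0)
    dist = np.abs(samples.astype(np.int32) - frame.astype(np.int32)[None])
    matches = (dist < radius).sum(axis=0)
    fg = (matches < min_matches).astype(np.uint8)   # 1 = moving foreground
    # Conservative in-place update: background pixels occasionally refresh
    # one randomly chosen sample with the current value.
    refresh = (fg == 0) & (rng.random(frame.shape) < 1.0 / subsample)
    samples[rng.integers(samples.shape[0])][refresh] = frame[refresh]
    return fg
```

The sample stack would normally be initialised from the first frame plus small noise; here it is supplied by the caller.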
Step S104: obtain, for each pixel in the current second image data, a filter strength coefficient for temporal filtering. Practical simulation tests show that the computation of the temporal filter strength coefficient can be tied to several pieces of scene information, such as the noise standard deviation of the scene or the digital image gain; this implementation computes it from the digital image gain. For a given gain value, a fixed value between 0 and 1 is assigned to the foreground region and to the background region respectively, according to the finally generated binary image. Taking a digital image gain range of 0 to 60 dB as an example, refer to Fig. 2; the values in Fig. 2 were obtained through extensive experiments, and the parameters of Fig. 2 may be adjusted for specific devices and application scenarios.
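The gain-to-coefficient lookup can be sketched as below. The table values, the gain steps, and the choice of which region receives the stronger temporal weight are all placeholders, since the patent's Fig. 2 values are not reproduced here.

```python
import numpy as np

# Hypothetical stand-in for the patent's Fig. 2 table: one coefficient per
# gain step for background and foreground pixels. Values are illustrative only.
GAIN_STEPS_DB = (0, 12, 24, 36, 48, 60)
ALPHA_BG = (0.50, 0.60, 0.70, 0.75, 0.80, 0.85)  # assumed: stronger temporal weight
ALPHA_FG = (0.05, 0.10, 0.15, 0.20, 0.25, 0.30)  # assumed: weaker temporal weight

def alpha_map(binary_mask, gain_db):
    # Per-pixel temporal filter strength in [0, 1], taken from the table
    # row nearest the current digital image gain (mask: 1 = foreground).
    i = min(range(len(GAIN_STEPS_DB)), key=lambda k: abs(GAIN_STEPS_DB[k] - gain_db))
    return np.where(binary_mask == 1, ALPHA_FG[i], ALPHA_BG[i]).astype(np.float64)
```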
Step S105: perform 3D noise reduction on the current second image data according to the 3D noise-reduction result of the frame of image data preceding the current first image data and the filter strength coefficient. An IIR filter is used: the 3D noise-reduction result of the previous frame, the 2D noise-reduction result of the current frame and the filter strength coefficient form the filter input, and the filter output is the 3D noise-reduction result. In other words, if the temporal filter coefficient of a pixel is large, more of the previous frame's 3D noise-reduction result is carried into the final result; if it is small, more of the current frame's 2D noise-reduction result is used, so that the pixels of the two regions of the binary image are filtered with different strengths. With reference to Fig. 3, the data of the current frame is temporally filtered with an IIR filter whose formula is:
cur_3D = α * pre_3D + (1 − α) * cur_2D
where cur_3D denotes the 3D noise-reduction output of the current frame, cur_2D denotes the 2D noise-reduction result of the current frame, pre_3D denotes the 3D noise-reduction result of the previous frame, and α denotes the temporal filter strength coefficient.
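The IIR step needs only the previous frame's 3D result as history, which is the storage advantage over FIR filtering noted earlier. A minimal per-pixel sketch:

```python
import numpy as np

def iir_3d_step(cur_2d, pre_3d, alpha):
    # cur_3D = alpha * pre_3D + (1 - alpha) * cur_2D; alpha may be a scalar
    # or a per-pixel array such as the filter strength map of step S104.
    return alpha * np.asarray(pre_3d, dtype=np.float64) + \
           (1.0 - alpha) * np.asarray(cur_2d, dtype=np.float64)
```

Only pre_3d must be kept between frames, so the temporal filter stores a single frame of history regardless of how long the sequence runs.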
Through the above steps, current first image data is acquired from a video image; spatial-domain 2D noise reduction is performed on the first image data to obtain current second image data; a binary image comprising a background region and a foreground region is obtained from the current second image data; a temporal filter strength coefficient is obtained for each pixel in the current second image data; and 3D noise reduction is performed on the current second image data according to the previous frame's 3D noise-reduction result and the filter strength coefficient. This solves the prior-art problems that, in low-illumination scenes, accurate motion information is difficult to obtain and background and foreground pixels are easily misclassified, leaving background noise unreduced and causing severe smearing of moving foreground objects, while FIR temporal filtering requires storing several frames of history image data, increasing the storage overhead of the device and harming real-time performance in a video imaging system.
Step S103 above involves obtaining the binary image from the current second image data. In an alternative embodiment, it is judged whether the current pixel belongs to the background region or to the foreground region; when the current pixel belongs to the background region, the number of foreground pixels within a first predetermined area around it is obtained, and, when that number exceeds a first threshold, the pixels within a second predetermined area around the current pixel are set to be foreground pixels. The second predetermined area comprises a neighbourhood window centred on the current pixel with the second threshold as its radius. Specifically, if the current pixel is judged to belong to the background region, the numbers of foreground pixels along the horizontal and the vertical arms of a cross window of radius 7 centred on the current pixel (up, down, left and right) are counted separately. If the number of foreground pixels obtained in the horizontal or the vertical direction exceeds a preset threshold Th1, the current pixel is filled: all values in the neighbourhood window of radius 2 centred on the current pixel are set to belong to the foreground region. If the current pixel is judged to belong to the foreground region, nothing is done. The threshold Th1 is set to half the window radius.
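The cross-window fill just described can be sketched as follows (mask convention: 1 = foreground, 0 = background; the radii and the Th1 = half-radius rule follow the text, and the function operates on a copy):

```python
import numpy as np

def fill_isolated_background(mask, cross_r=7, fill_r=2):
    # For each background pixel, count foreground pixels along the horizontal
    # and vertical arms of a cross of radius cross_r; if either count exceeds
    # half the radius, mark the surrounding (2*fill_r+1)^2 window as foreground.
    h, w = mask.shape
    out = mask.copy()
    th = cross_r // 2
    for y in range(h):
        for x in range(w):
            if mask[y, x] != 0:
                continue  # foreground pixels are left untouched
            horiz = mask[y, max(0, x - cross_r):x + cross_r + 1].sum()
            vert = mask[max(0, y - cross_r):y + cross_r + 1, x].sum()
            if horiz > th or vert > th:
                out[max(0, y - fill_r):y + fill_r + 1,
                    max(0, x - fill_r):x + fill_r + 1] = 1
    return out
```

A small background gap inside a foreground structure gets filled, while isolated background far from any foreground stays background.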
Step S103 also involves the following. In one alternative embodiment, when the current pixel belongs to the foreground region, the motion strength information of the current second image data is obtained; if the motion strength information is greater than or equal to a third threshold and the number of foreground pixels at the same coordinate position within a third predetermined area is less than a fourth threshold, the current pixel is re-set to belong to the background region, the same coordinate position being the same position in the previous frame and in the frame before the previous frame of the current first image. In another alternative embodiment, when the current pixel belongs to the background region, the motion strength information of the current second image data is obtained; if the motion strength information is less than or equal to a fifth threshold and the number of foreground pixels at the same coordinate position within a fourth predetermined area is greater than a sixth threshold, the current pixel is re-set to belong to the foreground region. Specifically, the saved binary images of the previous frame and of the frame before it are obtained, together with the obtained SAD values, and the refined binary image produced by the morphological processing is read and analysed pixel by pixel. If the current pixel belongs to the foreground region, its SAD value is greater than or equal to a preset threshold Th2, and the numbers of foreground pixels in the radius-2 neighbourhood windows at the same coordinate position in the previous frame's and the frame before it's binary images are both less than a preset threshold Th3, the pixel is re-judged to belong to the background region. If the current pixel belongs to the background region, its SAD value is less than or equal to Th2, and the numbers of foreground pixels in the radius-2 neighbourhood windows at the same coordinate position in those two binary images are both greater than or equal to a preset threshold Th4, the pixel is re-judged to belong to the foreground region. Th2 is set to 50, and Th3 and Th4 are set to the window radius. The final binary image of the current scene is obtained by the above method.
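The re-judging rule above can be sketched directly (thresholds follow the text: Th2 = 50, Th3 = Th4 = the window radius, here 2; masks use 1 = foreground):

```python
import numpy as np

def refine_mask(mask, sad, prev_mask, prev2_mask, th2=50, th3=2, th4=2, r=2):
    # Re-judge each pixel using its SAD motion strength and the foreground
    # counts in the radius-r windows of the two previous binary images.
    h, w = mask.shape
    out = mask.copy()
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - r), y + r + 1
            x0, x1 = max(0, x - r), x + r + 1
            n1 = int(prev_mask[y0:y1, x0:x1].sum())
            n2 = int(prev2_mask[y0:y1, x0:x1].sum())
            if mask[y, x] == 1 and sad[y, x] >= th2 and n1 < th3 and n2 < th3:
                out[y, x] = 0   # foreground with no support in history -> background
            elif mask[y, x] == 0 and sad[y, x] <= th2 and n1 >= th4 and n2 >= th4:
                out[y, x] = 1   # background contradicted by history -> foreground
    return out
```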
The above steps involve obtaining the motion strength information of the current second image data. In an alternative embodiment, the motion strength information of the current second image data is calculated by an SAD algorithm. Specifically, the result of the spatial-domain 2D noise reduction and the 3D noise-reduction output of the previous frame are differenced between frames within a spatial neighbourhood window of a certain size and the absolute values are summed, the so-called SAD (Sum of Absolute Differences) computation; the SAD value serves as the motion strength information of the current image. The SAD values are then linearly mapped to the interval 0 to 255. With the neighbourhood window radius set to 1, the SAD formula is:

SAD(i, j) = Σ_{m=−1..1} Σ_{n=−1..1} | cur_y_2D(i+m, j+n) − pre_y_3D(i+m, j+n) |

where pre_y_3D denotes the 3D noise-reduction result of the Y component of the previous frame, cur_y_2D denotes the 2D noise-reduction result of the Y component of the current frame, and i and j denote the horizontal and vertical coordinates of the pixel, respectively.
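A direct sketch of the SAD computation with the radius-1 window follows; the 0-to-255 linear mapping here normalizes by the maximum, which is one possible reading of the text.

```python
import numpy as np

def sad_map(cur_y_2d, pre_y_3d, r=1):
    # Per-pixel SAD between the current frame's 2D result and the previous
    # frame's 3D result over a (2r+1)x(2r+1) window, mapped linearly to 0..255.
    cur = cur_y_2d.astype(np.int32)
    pre = pre_y_3d.astype(np.int32)
    h, w = cur.shape
    sad = np.zeros((h, w), dtype=np.float64)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            sad[y, x] = np.abs(cur[y0:y1, x0:x1] - pre[y0:y1, x0:x1]).sum()
    if sad.max() > 0:
        sad = sad * 255.0 / sad.max()
    return sad
```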
Step S105 involves performing 3D noise reduction on the current second image data according to the 3D noise-reduction result of the frame of image data preceding the current first image data and the filter strength coefficient. In an alternative embodiment, the 3D noise-reduction result of the current second image data is obtained by the following formula:

cur_3D = α * pre_3D + (1 − α) * cur_2D

where cur_3D denotes the 3D noise-reduction output of the current second image data, cur_2D denotes the 2D noise-reduction result of the current first image data, pre_3D denotes the 3D noise-reduction result of the frame of image data preceding the current first image data, and α denotes the temporal filter strength coefficient.
Specifically, the 3D denoising of the current second image data is performed with an IIR filter according to the previous frame's 3D denoising result and the filter strength coefficient: the previous frame's 3D result, the current frame's 2D result, and the filter strength coefficient serve as the inputs of the IIR filter, and the filter's output serves as the 3D denoising result. That is, if the temporal filter coefficient of a pixel is large, the pixel is likely in a moving foreground region, and more of the previous frame's 3D result is carried into the final 3D result; if the coefficient is small, the pixel is likely in a static background region, and more of the 2D result is carried into the final 3D result.
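The blend the IIR filter performs is just the per-pixel recursion given above; a one-line sketch with a hypothetical function name:

```python
def temporal_3d_denoise(pre_3d, cur_2d, alpha):
    """cur_3D = alpha * pre_3D + (1 - alpha) * cur_2D.
    Works per pixel on scalars or on NumPy arrays of equal shape;
    a larger alpha carries more of the previous 3D result forward."""
    return alpha * pre_3d + (1.0 - alpha) * cur_2d
```

Because the previous output feeds back into the next blend, the effective temporal support grows with every frame, which is what makes this a first-order IIR rather than a finite moving average.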
Fig. 4 is another flowchart of the video-image 3D denoising method according to an embodiment of the present invention. The steps are as follows:

First, the input image information is obtained. Step 1: perform 2D denoising on the video image data. Step 2: process the result of Step 1 with a moving-target detection method based on background modeling, obtaining a binary map comprising a static background region and a moving foreground region. Step 3: for each pixel of the binary map of Step 2 preliminarily judged as background, analyze its spatial neighborhood information, i.e., the distribution of foreground pixels in its four directions (up, down, left, right); if the number of foreground pixels in those four directions meets the given threshold, fill a neighborhood window of a given size around the pixel with foreground pixels; otherwise, leave it unchanged. Step 4: process the result of Step 3 with morphological dilation and erosion, removing spurious background and foreground points to obtain a cleaner binary map. Step 5: within a spatial neighborhood window of a given size, take the frame-to-frame absolute differences between the result of Step 1 and the 3D denoising output of the previous frame, i.e., the so-called SAD (Sum of Absolute Differences) computation, and use the SAD value as the motion-intensity information of the current image. Step 6: combining the result of Step 5 with the binary maps of the previous frame and of the frame before it, further analyze the binary map of Step 4 to obtain the final binary map of the current scene image. Step 7: based on the binary map of Step 6, compute the temporal filter strength coefficient of each pixel of the current image. Step 8: with an IIR filter, take the previous frame's 3D denoising result, the result of Step 1, and the filter strength coefficient of Step 7 as the filter's inputs; the filter's output serves as the 3D denoising result. That is, if a pixel's temporal filter coefficient is large, the pixel is likely in a moving foreground region, and more of the previous frame's 3D result is carried into the final 3D result; if it is small, the pixel is likely in a static background region, and more of the Step 1 result is carried in. Finally, the 3D denoising result is output.
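The eight steps can be sketched as one skeleton function. Every helper below is a deliberately trivial stand-in, since the flowchart leaves their internals open (the background-model detector, hole filling, morphology, binary refinement, and the mask-to-α mapping are all assumptions), so treat this as a structural illustration only:

```python
import numpy as np

def spatial_2d_denoise(y):                        # step 1 (placeholder: identity)
    return y.astype(np.float64)

def detect_foreground(y, background, thresh=20):  # step 2 (simple differencing)
    return np.abs(y - background) > thresh

def fill_and_cleanup(mask):                       # steps 3-4 (placeholder)
    return mask

def sad_map(cur_2d, pre_3d):                      # step 5 (radius-0 SAD)
    return np.abs(cur_2d - pre_3d)

def refine_binary(mask, sad, mask1, mask2):       # step 6 (placeholder)
    return mask

def alpha_from_mask(mask, a_fg=0.8, a_bg=0.2):    # step 7 (assumed mapping)
    return np.where(mask, a_fg, a_bg)

def denoise_frame(frame_y, state):
    """One pass of the eight-step flow; state carries the previous 3D
    result and the two most recent binary maps between calls."""
    cur_2d = spatial_2d_denoise(frame_y)                             # step 1
    mask = detect_foreground(cur_2d, state['background'])            # step 2
    mask = fill_and_cleanup(mask)                                    # steps 3-4
    sad = sad_map(cur_2d, state['pre_3d'])                           # step 5
    mask = refine_binary(mask, sad, state['mask1'], state['mask2'])  # step 6
    alpha = alpha_from_mask(mask)                                    # step 7
    cur_3d = alpha * state['pre_3d'] + (1 - alpha) * cur_2d          # step 8 (IIR)
    state.update(pre_3d=cur_3d, mask2=state['mask1'], mask1=mask)
    return cur_3d
```

The state dictionary makes explicit which quantities must persist across frames: the 3D output (for the IIR recursion and the SAD) and two generations of binary maps (for the Step 6 refinement).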
Embodiment 2
This embodiment further provides a video-image 3D denoising device, used to implement the above embodiment and its preferred modes; what has already been explained will not be repeated. As used below, the term "module" may denote a combination of software and/or hardware realizing a predetermined function. Although the device described in the following embodiment is preferably realized in software, realization in hardware, or in a combination of software and hardware, is also possible and conceivable.
Fig. 5 is a structural block diagram of the video-image 3D denoising device according to an embodiment of the present invention. As shown in Fig. 5, the device includes: an acquisition module 51, configured to acquire current first image data from a video image; a first denoising module 52, configured to perform spatial 2D denoising on the first image data to obtain current second image data; a first obtaining module 53, configured to obtain a binary map from the current second image data, the binary map comprising a background region and a foreground region; a second obtaining module 54, configured to obtain the temporal filter strength coefficient of each pixel in the current second image data; and a second denoising module 55, configured to perform 3D denoising on the current second image data according to the 3D denoising result of the frame preceding the current first image data and the filter strength coefficient.
Fig. 6 is a structural block diagram of the first obtaining module according to an embodiment of the present invention. As shown in Fig. 6, the first obtaining module 53 further includes: a judging unit 531, configured to judge whether the current pixel belongs to the background region or to the foreground region; an acquiring unit 532, configured to obtain, when the current pixel belongs to the background region, the number of pixels belonging to the foreground region within a first predetermined area near the current pixel; and a setting unit 533, configured to set, when that number is greater than a first threshold, the pixels within a second predetermined area near the current pixel as belonging to the foreground region.
Optionally, the second predetermined area comprises a neighborhood window centered on the current pixel with the second threshold as its radius.
Fig. 7 is another structural block diagram of the first obtaining module according to an embodiment of the present invention. As shown in Fig. 7, the first obtaining module 53 further includes: a first processing unit 534, configured to obtain, when the current pixel belongs to the foreground region, the motion-intensity information of the current second image data, and, when the motion intensity is greater than or equal to a third threshold and the number of foreground pixels within a third predetermined area near the same coordinate position is less than a fourth threshold, re-set the current pixel as belonging to the background region, the same coordinate position referring to the corresponding position in the frame preceding the current first image and in the frame before that one; and/or a second processing unit 535, configured to obtain, when the current pixel belongs to the background region, the motion-intensity information of the current second image data, and, when the motion intensity is less than or equal to a fifth threshold and the number of foreground pixels within a fourth predetermined area near the same coordinate position is greater than a sixth threshold, re-set the current pixel as belonging to the foreground region, with the same meaning of the same coordinate position.
Optionally, the first processing unit 534 or the second processing unit 535 is further configured to compute the motion-intensity information of the current second image data with the SAD algorithm.
Optionally, the second denoising module 55 is further configured to obtain the result of the 3D denoising of the current second image data by the following formula:

cur_3D = α * pre_3D + (1 − α) * cur_2D

where cur_3D denotes the 3D denoising output of the current second image data, cur_2D denotes the 2D denoising result of the current first image data, pre_3D denotes the 3D denoising result of the frame preceding the current first image data, and α denotes the temporal filter strength coefficient.
Obviously, the above embodiments are merely examples given for clarity of illustration and are not a limitation on the embodiments. For those of ordinary skill in the art, other changes or variations in different forms can also be made on the basis of the above description. There is no need, and no way, to exhaust all the embodiments here; obvious changes or variations derived therefrom remain within the protection scope of the invention.

Claims (12)

1. A video-image 3D denoising method, characterized by comprising:

acquiring current first image data from a video image;

performing spatial 2D denoising on the first image data to obtain current second image data;

obtaining a binary map from the current second image data, wherein the binary map comprises a background region and a foreground region;

obtaining a temporal filter strength coefficient of each pixel in the current second image data; and

performing 3D denoising on the current second image data according to a 3D denoising result of a frame preceding the current first image data and the filter strength coefficient.

2. The method according to claim 1, characterized in that obtaining the binary map from the current second image data comprises:

judging whether a current pixel belongs to the background region or to the foreground region;

when the current pixel belongs to the background region, obtaining the number of pixels belonging to the foreground region within a first predetermined area near the current pixel; and

when said number is greater than a first threshold, setting the pixels within a second predetermined area near the current pixel as belonging to the foreground region.

3. The method according to claim 2, characterized in that the second predetermined area comprises a neighborhood window centered on the current pixel with a second threshold as its radius.

4. The method according to claim 1, characterized in that obtaining the binary map from the current second image data comprises:

when a current pixel belongs to the foreground region, obtaining motion-intensity information of the current second image data; and when the motion-intensity information is greater than or equal to a third threshold and the number of pixels belonging to the foreground region within a third predetermined area near the same coordinate position is less than a fourth threshold, re-setting the current pixel as belonging to the background region; wherein the same coordinate position comprises the corresponding position in the frame preceding the current first image and in the frame preceding that one; and/or,

when a current pixel belongs to the background region, obtaining motion-intensity information of the current second image data; and when the motion-intensity information is less than or equal to a fifth threshold and the number of pixels belonging to the foreground region within a fourth predetermined area near the same coordinate position is greater than a sixth threshold, re-setting the current pixel as belonging to the foreground region; wherein the same coordinate position comprises the corresponding position in the frame preceding the current first image and in the frame preceding that one.

5. The method according to claim 4, characterized in that obtaining the motion-intensity information of the current second image data comprises:

computing the motion-intensity information of the current second image data with a SAD algorithm.

6. The method according to any one of claims 1 to 5, characterized in that performing 3D denoising on the current second image data according to the 3D denoising result of the frame preceding the current first image data and the filter strength coefficient comprises:

obtaining the result of performing 3D denoising on the current second image data by the following formula:

cur_3D = α * pre_3D + (1 − α) * cur_2D

wherein cur_3D denotes the 3D denoising output of the current second image data, cur_2D denotes the 2D denoising result of the current first image data, pre_3D denotes the 3D denoising result of the frame preceding the current first image data, and α denotes the temporal filter strength coefficient.
7. A video-image 3D denoising device, characterized by comprising:

an acquisition module, configured to acquire current first image data from a video image;

a first denoising module, configured to perform spatial 2D denoising on the first image data to obtain current second image data;

a first obtaining module, configured to obtain a binary map from the current second image data; wherein the binary map comprises a background region and a foreground region;

a second obtaining module, configured to obtain a temporal filter strength coefficient of each pixel in the current second image data; and

a second denoising module, configured to perform 3D denoising on the current second image data according to a 3D denoising result of a frame preceding the current first image data and the filter strength coefficient.

8. The device according to claim 7, characterized in that the first obtaining module comprises:

a judging unit, configured to judge whether a current pixel belongs to the background region or to the foreground region;

an acquiring unit, configured to obtain, when the current pixel belongs to the background region, the number of pixels belonging to the foreground region within a first predetermined area near the current pixel; and

a setting unit, configured to set, when said number is greater than a first threshold, the pixels within a second predetermined area near the current pixel as belonging to the foreground region.

9. The device according to claim 8, characterized in that the second predetermined area comprises a neighborhood window centered on the current pixel with a second threshold as its radius.

10. The device according to claim 7, characterized in that the first obtaining module comprises:

a first processing unit, configured to obtain, when a current pixel belongs to the foreground region, motion-intensity information of the current second image data; and, when the motion-intensity information is greater than or equal to a third threshold and the number of pixels belonging to the foreground region within a third predetermined area near the same coordinate position is less than a fourth threshold, re-set the current pixel as belonging to the background region; wherein the same coordinate position comprises the corresponding position in the frame preceding the current first image and in the frame preceding that one; and/or,

a second processing unit, configured to obtain, when a current pixel belongs to the background region, motion-intensity information of the current second image data; and, when the motion-intensity information is less than or equal to a fifth threshold and the number of pixels belonging to the foreground region within a fourth predetermined area near the same coordinate position is greater than a sixth threshold, re-set the current pixel as belonging to the foreground region; wherein the same coordinate position comprises the corresponding position in the frame preceding the current first image and in the frame preceding that one.

11. The device according to claim 10, characterized in that the first processing unit or the second processing unit is further configured to compute the motion-intensity information of the current second image data with a SAD algorithm.

12. The device according to any one of claims 7 to 11, characterized in that the second denoising module is further configured to obtain the result of performing 3D denoising on the current second image data by the following formula:

cur_3D = α * pre_3D + (1 − α) * cur_2D

wherein cur_3D denotes the 3D denoising output of the current second image data, cur_2D denotes the 2D denoising result of the current first image data, pre_3D denotes the 3D denoising result of the frame preceding the current first image data, and α denotes the temporal filter strength coefficient.
CN201710107692.3A 2017-02-27 2017-02-27 3D noise reduction method and device for video image Active CN107016650B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201710107692.3A CN107016650B (en) 2017-02-27 2017-02-27 3D noise reduction method and device for video image
PCT/CN2017/117164 WO2018153150A1 (en) 2017-02-27 2017-12-19 Video image 3d denoising method and device


Publications (2)

Publication Number Publication Date
CN107016650A true CN107016650A (en) 2017-08-04
CN107016650B CN107016650B (en) 2020-12-29

Family

ID=59440606

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710107692.3A Active CN107016650B (en) 2017-02-27 2017-02-27 3D noise reduction method and device for video image

Country Status (2)

Country Link
CN (1) CN107016650B (en)
WO (1) WO2018153150A1 (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018153150A1 (en) * 2017-02-27 2018-08-30 苏州科达科技股份有限公司 Video image 3d denoising method and device
CN111754437A (en) * 2020-06-24 2020-10-09 成都国科微电子有限公司 3D noise reduction method and device based on motion intensity
CN113628138A (en) * 2021-08-06 2021-11-09 北京爱芯科技有限公司 Hardware multiplexing image noise reduction device
CN114331899A (en) * 2021-12-31 2022-04-12 上海宇思微电子有限公司 Image noise reduction method and device
CN115937013A (en) * 2022-10-08 2023-04-07 上海为旌科技有限公司 Method and device for denoising brightness based on airspace

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112311962B (en) * 2019-07-29 2023-11-24 深圳市中兴微电子技术有限公司 Video denoising method and device and computer readable storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101448077A (en) * 2008-12-26 2009-06-03 四川虹微技术有限公司 Self-adapting video image 3D denoise method
CN101964863A (en) * 2010-05-07 2011-02-02 镇江唐桥微电子有限公司 Self-adaptive time-space domain video image denoising method
US20110149040A1 (en) * 2009-12-17 2011-06-23 Ilya Klebanov Method and system for interlacing 3d video
CN102238316A (en) * 2010-04-29 2011-11-09 北京科迪讯通科技有限公司 Self-adaptive real-time denoising scheme for 3D digital video image
CN103108109A (en) * 2013-01-31 2013-05-15 深圳英飞拓科技股份有限公司 Digital video noise reduction system and method
CN103369209A (en) * 2013-07-31 2013-10-23 上海通途半导体科技有限公司 Video noise reduction device and video noise reduction method

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4693546B2 (en) * 2005-08-19 2011-06-01 株式会社東芝 Digital noise reduction apparatus and method, and video signal processing apparatus
CN103679196A (en) * 2013-12-05 2014-03-26 河海大学 Method for automatically classifying people and vehicles in video surveillance
CN104915655A (en) * 2015-06-15 2015-09-16 西安电子科技大学 Multi-path monitor video management method and device
CN107016650B (en) * 2017-02-27 2020-12-29 苏州科达科技股份有限公司 3D noise reduction method and device for video image





Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant