US20130083245A1 - Compression error handling for temporal noise reduction - Google Patents
- Publication number
- US20130083245A1 (application US13/250,530)
- Authority
- US
- United States
- Prior art keywords
- value
- image
- current
- edge slope
- previous
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/14—Picture signal circuitry for video frequency region
- H04N5/21—Circuitry for suppressing or minimising disturbance, e.g. moiré or halo
- H04N5/213—Circuitry for suppressing or minimising impulsive noise
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/14—Picture signal circuitry for video frequency region
- H04N5/142—Edging; Contouring
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/64—Circuits for processing colour signals
- H04N9/643—Hue control means, e.g. flesh tone control
Definitions
- the present invention relates to video processing.
- the invention further relates to but is not limited to noise reduction in video processing.
- Video processing where video image data is processed to ‘improve’ the image is applied in many devices. Furthermore image quality can be improved by the use of video processing in the form of a noise reduction block or processor to reduce noise in the image.
- the video noise reduction block can often be the first stage of any video processing pipeline, where the video image frames are noise reduced before any other processing and enhancements in the digital domain are applied.
- noise reduction can be employed for television applications, wherein the video signal can be input or received from any one of a radio frequency (RF) channel, a component input channel, a high definition multimedia interface (HDMI), a composite (CVBS) channel, and an s-video input channel.
- Noise reduction can for example be spatial or temporal noise reduction.
- Spatial noise reduction is where areas surrounding a picture element (or pixel) or block of pixels on the same field or frame can be analysed to determine whether the pixel is similar to the surrounding areas and whether a correction or noise reduction operation can be carried out.
- the noise reduction operation can be for example an averaging across displayed lines or within a line (interline and intraline noise reduction).
- temporal noise reduction is where a pixel is compared to preceding or succeeding field or frame pixels to determine whether the pixel differs significantly from previous or following fields or frames, and whether an averaging or filtering across fields or frames should be carried out.
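As a concrete illustration of the field/frame averaging described above, the following Python sketch blends a pixel with its co-located predecessor only when their difference looks like noise rather than motion. The function name, the threshold, and the 50/50 weighting are hypothetical illustrations, not taken from the patent:

```python
def temporal_nr_pixel(curr, prev, noise_threshold=10, k=0.5):
    """Blend current and previous co-located pixels when their
    difference is small enough to be attributed to noise."""
    if abs(curr - prev) < noise_threshold:
        # Small difference: likely noise, so average across frames.
        return k * curr + (1.0 - k) * prev
    # Large difference: likely real motion, keep the current pixel.
    return float(curr)
```

A practical implementation would derive the threshold from a measured noise level rather than a fixed constant.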
- Temporal noise reduction requires previous frames or fields to be stored in order that they can be compared to the current frame or field to determine whether there is image motion.
- Any such low noise still images in such examples can have greater noise levels when aggressive compression is used, as less noise reduction is applied and the image suffers resolution loss.
- Embodiments of the present application aim to address the above problems.
- a video processor comprising a spatio-temporal noise reduction controller configured to determine current and previous image edge slopes and adaptively control a spatio-temporal noise reduction processor to blend current and previous images dependent on the current and previous image edge slope values.
- the video processor may further comprise a spatio-temporal noise reduction processor configured to blend current and previous images dependent on the spatial noise reduction controller.
- the spatiotemporal noise reduction controller may comprise: a current image edge slope determiner configured to generate a current image edge slope value; a previous image edge slope determiner configured to generate a previous image edge slope value; and an edge slope processor configured to determine a blending control signal dependent on the current image edge slope value and previous image edge slope value for the spatiotemporal noise reduction processor.
- the edge slope processor may comprise a slope selector configured to select one of the current image edge slope value and the previous image edge slope value and generate the blending control signal value dependent on the selected edge slope value.
- the edge slope processor may comprise a slope value generator configured to combine a portion of the selected edge slope value with a blended gain value to generate a slope value, the control signal value being dependent on the slope value.
- the edge slope processor may comprise a first blended gain value generator configured to generate the blended gain value dependent on at least one of: the noise level status of the image region, and an image region motion status value.
- the slope selector may be configured to select at least one of: the minimum of the current image edge slope value and the previous image edge slope value, the maximum of the current image edge slope value and the previous image edge slope value, and the average of the current image edge slope value and the previous image edge slope value.
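The minimum/maximum/average selection recited above can be sketched as follows. The function name and `mode` parameter are illustrative assumptions; per the surrounding text, the choice of mode could itself depend on the noise level status or the motion status of the image region:

```python
def select_slope(curr_slope, prev_slope, mode="min"):
    """Select an edge slope from the current and previous image edge
    slope values (hypothetical parameterisation of the slope selector)."""
    if mode == "min":
        return min(curr_slope, prev_slope)
    if mode == "max":
        return max(curr_slope, prev_slope)
    if mode == "avg":
        return (curr_slope + prev_slope) / 2.0
    raise ValueError("unknown mode: " + mode)
```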
- the slope selector may be configured to select at least one of the edge slope values dependent on at least one of: the noise level status of the image region, and an image region motion status value.
- the video processor may further comprise a motion noise detector configured to determine at least one of: the noise level status of the image region, and an image region motion status value.
- the edge slope processor may be configured to selectively nullify the motion noise value for at least one of: image edges, and regions of high frequency image components.
- the current and previous image edge slopes may be current luma image edge slopes and previous luma image edge slopes.
- the current and previous images may comprise: a current frame image and a previous frame image respectively; and a current field image and a previous field image respectively.
- the edge slope value may comprise at least one of: horizontal pixel difference, and vertical pixel difference.
- a television receiver may comprise the video processor as discussed herein.
- An integrated circuit may comprise the video processor as discussed herein.
- a video player may comprise the video processor as discussed herein.
- a method of processing video signals comprising: determining current and previous image edge slopes; and adaptively controlling spatio-temporal noise reduction blending of current and previous images dependent on the current and previous image edge slope values.
- the method may further comprise spatio-temporal noise reduction blending of current and previous images dependent on the current and previous image edge slope values.
- Adaptively controlling spatio-temporal noise reduction blending of current and previous images may comprise: generating a current image edge slope value; generating a previous image edge slope value; and determining a blending control signal dependent on the current image edge slope value and previous image edge slope value for the spatio-temporal noise reduction processor.
- Determining a blending control signal may comprise: selecting one of the current image edge slope value and the previous image edge slope value; and generating the blending control signal value dependent on the selected edge slope value.
- Determining a blending control signal may comprise combining a portion of the selected edge slope value with a blended gain value to generate a slope value, the control signal value being dependent on the slope value.
- Determining a blending control signal may comprise generating the blended gain value dependent on at least one of: the noise level status of the image region, and an image region motion status value.
- Selecting one of the current image edge slope value and the previous image edge slope value may comprise selecting at least one of: the minimum of the current image edge slope value and the previous image edge slope value, the maximum of the current image edge slope value and the previous image edge slope value, and the average of the current image edge slope value and the previous image edge slope value.
- Selecting one of the current image edge slope value and the previous image edge slope value may comprise selecting at least one of the edge slope values dependent on at least one of the noise level status of the image region, and an image region motion status value.
- the method may further comprise determining the portion of the selected edge slope value dependent on at least one of: a noise level status of the image region, and an image region motion status value.
- the method may further comprise determining at least one of: the noise level status of the image region, and an image region motion status value.
- Determining the noise level of the image may comprise selectively nullifying the motion noise value for at least one of: image edges, and regions of high frequency image components.
- the current and previous image edge slopes may be current luma image edge slopes and previous luma image edge slopes.
- the current and previous images may comprise: a current frame image and a previous frame image respectively; and a current field image and a previous field image respectively.
- the edge slope value may comprise at least one of: horizontal pixel difference, and vertical pixel difference.
- Apparatus comprising at least one processor and at least one memory including computer code for one or more programs, the at least one memory and the computer code configured, with the at least one processor, to cause the apparatus to at least perform a method as discussed herein.
- a video processor comprising: means for determining current and previous image edge slopes; and means for adaptively controlling spatiotemporal noise reduction blending of current and previous images dependent on the current and previous image edge slope values.
- the video processor may further comprise means for spatiotemporal noise reduction blending of current and previous images dependent on the current and previous image edge slope values.
- the means for adaptively controlling spatio-temporal noise reduction blending of current and previous images may comprise: means for generating a current image edge slope value; means for generating a previous image edge slope value; and means for determining a blending control signal dependent on the current image edge slope value and previous image edge slope value for the spatio-temporal noise reduction processor.
- the means for determining a blending control signal may comprise: means for selecting one of the current image edge slope value and the previous image edge slope value; and means for generating the blending control signal value dependent on the selected edge slope value.
- the means for determining a blending control signal may comprise means for combining a portion of the selected edge slope value with a blended gain value to generate a slope value, the control signal value being dependent on the slope value.
- the means for determining a blending control signal may comprise means for generating the blended gain value dependent on at least one of: the noise level status of the image region, and an image region motion status value.
- the means for selecting one of the current image edge slope value and the previous image edge slope value may comprise means for selecting at least one of: the minimum of the current image edge slope value and the previous image edge slope value, the maximum of the current image edge slope value and the previous image edge slope value, and the average of the current image edge slope value and the previous image edge slope value.
- the means for selecting one of the current image edge slope value and the previous image edge slope value may comprise means for selecting at least one of the edge slope values dependent on at least one of: the noise level status of the image region, and an image region motion status value.
- the means for combining a portion of the selected edge slope value with a blended gain value to generate a slope value may comprise means for determining the portion of the selected edge slope value dependent on at least one of: a noise level status of the image region, and an image region motion status value.
- the video processor may further comprise means for determining at least one of: the noise level status of the image region, and the image region motion status value.
- the means for determining the blending control signal may comprise means for selectively nullifying the motion noise value for at least one of: image edges, and regions of high frequency image components.
- the current and previous image edge slopes may be current luma image edge slopes and previous luma image edge slopes.
- the current and previous images may comprise: a current frame image and a previous frame image respectively; and a current field image and a previous field image respectively.
- the edge slope value may comprise at least one of: horizontal pixel difference, and vertical pixel difference.
- FIG. 1 shows schematically a system suitable for employing a video processor according to some embodiments of the application;
- FIG. 2 shows schematically a video noise reduction apparatus according to some embodiments of the application;
- FIG. 3 shows schematically the edge adaptive threshold determiner as shown in FIG. 2 in further detail according to some embodiments of the application;
- FIG. 4 shows the noise reduction K logic as shown in FIG. 2 in further detail according to some embodiments of the application;
- FIG. 5 shows an example of the K logic plot of the noise reducing apparatus according to some embodiments of the application;
- FIG. 6 shows a method of operating the edge adaptive threshold determiner;
- FIG. 7 shows a method of operating the noise reduction apparatus according to some embodiments of the application; and
- FIG. 8 shows the operation of the K logic.
- In FIG. 1 an example electronic device or apparatus 10 is shown within which embodiments of the application can be implemented.
- the apparatus 10 in some embodiments comprises a receiver configured to receive a radio frequency modulated television and/or video signal and output the analogue encoded video signal to the processor 5 .
- the receiver can be controlled by the processor to demodulate/select the channel to be received.
- the apparatus 10 in some embodiments comprises a processor 5 which can be configured to execute various program codes.
- the implemented program codes can comprise video processing for processing the received video data, for example by noise reduction video processing, and outputting the data to the display 7 .
- the implemented program codes can be stored within a suitable memory.
- the processor 5 can be coupled to memory 21 .
- the memory 21 can further comprise an instruction code section 23 suitable for storing program codes implementable upon the processor 5 .
- the memory 21 can comprise a stored data section 25 for storing data, for example video data.
- the memory 21 can be any suitable storage means.
- the memory 21 can be implemented as part of the processor in a system-on-chip configuration.
- the apparatus 10 can further comprise a display 7 .
- the display can be any suitable display means, employing technology such as cathode ray tube (CRT), light emitting diode (LED), variable backlight liquid crystal display (LCD) (for example an LED-lit LCD), organic light emitting diode (OLED), or plasma display.
- the display 7 can furthermore be considered to provide a graphical user interface (GUI) providing a dialog window in which a user can implement and input how the apparatus 10 displays the video.
- the apparatus can be configured to communicate with a display remote from the physical apparatus by a suitable display interface, for example a High Definition Multimedia Interface (HDMI) or a Digital Video Interface (DVI) or be remodulated and transmitted to the display.
- the apparatus 10 further can comprise a user input or user settings input apparatus 11 .
- the user input can in some embodiments be a series of buttons, switches or adjustable elements providing an input to the processor 5 .
- the user input 11 and display 7 can be combined as a touch sensitive surface on the display, also known as a touch screen or touch display apparatus.
- the noise reduction apparatus comprises an input line delay 101 .
- the input line delay 101 is configured to receive the original (ORG) input video signal on a line by line basis and can comprise any suitable means for delaying or storing a previous line or previous lines of input data.
- the input to the line delay 101 comprises either the input chroma or luma data.
- the input line delay 101 can in some embodiments be configured to output current, previous, and next line data to a spatial noise reduction apparatus 105 , a spatial-temporal noise reducer blender 107 , an edge adaptive threshold determiner 109 , a motion detector 111 , and a flesh tone detector 113 .
- the noise reduction apparatus can further comprise a noise reduced line delay 103 .
- the noise reduced line delay 103 can be configured to receive the previous frame (or field) delayed video input on a line by line basis in such a manner that the line data held in the noise reduced line delay are those associated or synchronised with the line data held in the input line delay.
- the noise reduced line delay samples are one frame away from the line data samples held in the input line delay.
- the previous frame/field information can in some embodiments comprise data which had been stored after a first compression sequence and then decompressed after storage.
- the output of the noise reduced line delay 103 can in some embodiments be passed to an edge adaptive threshold determiner 109 , the motion detector 111 , and a temporal noise reduction blender 119 .
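The feedback path described here (noise reduced output compressed, stored for one frame or field, then decompressed and reused as the previous image) can be sketched as one step of a loop. This is a hypothetical skeleton, assuming pluggable `compress`, `decompress`, and `blend` callables rather than the actual REMPEG codec:

```python
def nr_feedback_step(curr_line, stored_compressed, compress, decompress, blend):
    """One line-step of the noise reduction feedback loop: recover the
    previous noise-reduced line from compressed storage, blend it with
    the current input, and re-compress the result for the next frame."""
    prev_line = decompress(stored_compressed)   # previous frame, after a compression round trip
    nr_line = blend(curr_line, prev_line)       # temporal blend (control logic elided)
    return nr_line, compress(nr_line)           # noise-reduced output and new stored data
```

With identity `compress`/`decompress` and a simple per-pixel average as `blend`, one step averages the current line with the stored previous line; with a lossy codec, the compression error in `prev_line` is exactly what the edge slope handling in this patent is designed to guard against.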
- The operation of receiving the original chroma or luma data and the decompressed “noise reduced” chroma or luma data is shown in FIG. 7 by step 701.
- The delaying of the original and decompressed lines to generate current, previous, and next line data for the original and decompressed noise reduced chroma or luma data is shown in FIG. 7 by step 703.
- the noise reduction apparatus comprises a spatial noise reducer 105 .
- the spatial noise reducer 105 can be configured to attempt to reduce the noise level for a line of data dependent on the input line information from the input line delay 101 .
- the spatial noise reducer 105 can perform any suitable spatial noise reduction algorithm such as, but not exclusively, linear interpolation between lines, line sharpening, edge sharpening, and low pass filtering.
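As one hedged example of the low pass filtering option mentioned above, a simple 1-2-1 kernel applied along a line might look like the following; the `strength` parameter and the choice to leave the end pixels unchanged are illustrative assumptions, not details from the patent:

```python
def spatial_lowpass(line, strength=1.0):
    """Apply a 1-2-1 low-pass filter along one line of pixels.
    strength=0 leaves the line unchanged; strength=1 applies the
    full smoothing. End pixels are passed through untouched."""
    out = list(line)
    for i in range(1, len(line) - 1):
        smoothed = (line[i - 1] + 2 * line[i] + line[i + 1]) / 4.0
        # Move each pixel toward its smoothed value by `strength`.
        out[i] = line[i] + strength * (smoothed - line[i])
    return out
```

In the described apparatus the amount of smoothing would be driven by the noise level value input rather than a fixed `strength`.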
- the output of the spatial noise reducer 105 can be output to the spatial temporal noise reducer blender 107 .
- the spatial noise reducer 105 can be configured to further receive a noise level value input for controlling the amount of spatial noise reduction performed.
- the noise level value can be received from a noise measurement block not shown in the figure.
- The operation of performing spatial noise reduction is shown in FIG. 7 by step 711.
- the noise reduction apparatus further comprises a motion detector 111 .
- the motion detector 111 can be configured to receive the input video data from the first line delay 101 , and the previous frame or field(s) video data via the noise reduction line delay 103 . The motion detector can thus compare the difference between the various line delay inputs from frame-to-frame or field-to-field to determine whether or not there is any motion between the original and delayed/stored image.
- the motion detector can be configured to perform some noise detection.
- the noise detection value is based on determining how many motion samples are above the noise level value (the noise level present in the input noisy video signal) and can in some embodiments be determined in a noise measure block (not shown) in a specified spatial image kernel.
- the motion is determined to be due to real motion or noise by this measure of the number of motion samples above the noise level. If, in a specific region (defined by the kernel size), more motion samples are above the noise level, then that region is assumed to have real motion; otherwise the region is assumed to have false motion due to noise. Based on this determination a different motion filter is used for deriving the actual motion value.
- the motion detector outputs a control signal ND, which indicates whether the motion is due to noise or real motion.
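The counting rule described above, where a region is deemed to have real motion when enough motion samples in the kernel exceed the noise level, might be sketched as follows (function name and the threshold parameterisation are hypothetical):

```python
def region_has_real_motion(motion_samples, noise_level, count_threshold):
    """Count how many motion samples in the kernel exceed the noise
    level; return True (real motion) when the count passes the
    threshold, else False (false motion attributed to noise)."""
    above = sum(1 for m in motion_samples if m > noise_level)
    return above >= count_threshold
```

The boolean result corresponds to the ND control signal: a different motion filter would then be selected for deriving the actual motion value.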
- the motion detector 111 can thus be configured to output a determination or value of the motion between the original and delayed frame lines. This motion determination can be output to the spatial temporal noise reduction control logic 115 and to the temporal noise reduction control logic 117. Furthermore in some embodiments the noise detection output from the motion detector 111 can be configured to be output to the edge adaptive threshold determiner 109, the spatial temporal noise reduction control logic 115, and the temporal noise reduction control logic 117.
- The operation of determining motion and/or determining noise is shown in FIG. 7 by step 705.
- the noise reduction apparatus comprises a flesh tone detector 113 .
- the flesh tone detector 113 can be configured to determine whether or not the current line image contains flesh tone information.
- the output of the flesh tone detector 113 can be configured to output to the spatial temporal noise reduction control logic 115 and the temporal noise reduction control logic 117 .
- the flesh tone detection therefore helps in identifying human skin colour, and in those areas the noise reduction strength can be different from that in non-flesh-tone areas. This therefore assists in some embodiments by preventing ‘plastic’ skin effects.
- The operation of determining flesh tone in the current line is shown in FIG. 7 by step 709.
- the noise reduction apparatus comprises an edge adaptive threshold determiner 109 .
- the edge adaptive threshold determiner 109 can be configured to receive the line information from the original image (from the input line delay 101), and the previous frame or field image data (from the noise reduced line delay 103).
- the edge adaptive threshold determiner 109 can further be configured to receive an input from the noise detector output from the motion detector 111 .
- the edge adaptive threshold determiner 109 can be configured to generate an edge detection threshold value such that luma edge image portions can be detected.
- the edge detection threshold can be used so that for spatial or temporal noise reduction, a suitable edge profile or slope can be allowed for.
- the output of the edge threshold value from the edge adaptive threshold determiner 109 can be passed to the spatial temporal noise reduction control logic 115 and the temporal noise reduction control logic 117 .
- the determination of the edge threshold value is shown in FIG. 7 by step 707 .
- The performing of spatial noise reduction is shown in FIG. 7 by step 711.
- the noise reduction apparatus can further comprise in some embodiments a spatial temporal noise reduction (STNR) control logic 115 .
- the spatial temporal noise reduction control logic 115 can be configured to receive the motion detection output from the motion detector 111 , the noise detection output from the motion detector 111 , the edge threshold value from the edge adaptive threshold determiner 109 , as well as a big noise flag input and noise level value input and a flesh tone determination input.
- the spatial temporal noise reduction control logic 115 can be configured to generate a control value to be passed to a spatial temporal noise reduction blender 107 such that the output of the spatial noise reduction circuit and the original line to be analysed are blended according to the various inputs to the STNR control logic 115 .
- the noise reduction apparatus can comprise a spatial temporal noise reduction blender 107 .
- the spatial temporal noise reduction (STNR) blender 107 can be configured to receive data inputs from the spatial noise reduction apparatus 105 and the input line delay 101 and further be controlled by the STNR control logic 115 to blend the line information.
- the STNR blender 107 can be configured to perform any suitable blending operation dependent on the output of the STNR control logic 115 .
- the output of the STNR blender 107 can be passed to the temporal noise reduction blender 119 .
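The blending performed by the STNR blender 107 under control of the STNR control logic 115 can be sketched as a linear mix of the original line and the spatially noise-reduced line. The control value `k` and its derivation from edge threshold, motion/noise status, and flesh tone inputs are abstracted away here as an assumption:

```python
def stnr_blend(orig, spatial_nr, k):
    """Linearly blend the original line with the spatially
    noise-reduced line; k in [0, 1], with k=0 keeping the original
    and k=1 taking the full spatial noise reduction output."""
    assert 0.0 <= k <= 1.0
    return [k * s + (1.0 - k) * o for o, s in zip(orig, spatial_nr)]
```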
- The operation of blending the spatial noise reduction output and the current frame line data is shown in FIG. 7 by step 713.
- The performing of temporal noise reduction is shown in FIG. 7 by step 712.
- the noise reduction apparatus comprises a temporal noise reduction (TNR) control logic 117 .
- the temporal noise reduction control logic 117 can be configured to receive the motion detection output from the motion detector 111, the noise detection output from the motion detector 111, the edge threshold value from the edge adaptive threshold determiner 109, the flesh tone detection output from the flesh tone detector 113, and the noise and big noise flag inputs to determine a suitable control value to be passed to the temporal noise reduction blender 119.
- the noise input is, as discussed herein, the input video signal noise level value from the noise measure block, and the Big Noise flag (also from the noise measure block) is an indicator that the input video has a high level of noise.
- the noise reduction apparatus can further comprise a temporal noise reduction blender 119 .
- the temporal noise reduction blender 119 can be configured to receive the output from the spatial temporal noise reduction blender 107 and further the previous frame or field image data from the previous noise reduction line delay 103 and further to blend the inputs dependent on the output of the temporal noise reduction control logic 117 to produce a blended TNR output.
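The temporal blend performed by the TNR blender 119 can similarly be sketched with a per-pixel control value from the TNR control logic. The convention that k grows toward 1 in static regions (averaging in the previous frame) and falls toward 0 where motion is detected (keeping the current frame) is an assumption for illustration:

```python
def tnr_blend(stnr_line, prev_line, k_values):
    """Per-pixel temporal blend of the STNR blender output with the
    previous frame/field line; each k in [0, 1] comes from the TNR
    control logic (motion, noise, edge threshold, flesh tone)."""
    return [(1.0 - k) * c + k * p
            for c, p, k in zip(stnr_line, prev_line, k_values)]
```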
- the blended TNR output 119 can be passed in some embodiments to a sticky bit converter 121 .
- The operation of blending the current and previous image data to perform temporal noise reduction is shown in FIG. 7 by step 715.
- the temporal reduction apparatus comprises a sticky bit converter 121 .
- the sticky bit converter 121 can be configured to output a sticky bit converted TNR blended output.
- the sticky bit output thus forms the output image line data or the noise reduced data output (NR_OUT).
- The operation of performing a sticky bit process on the TNR output is shown in FIG. 7 by step 717.
- Furthermore the outputting of the noise reduced image is shown in FIG. 7 by step 719.
- the noise reduced output can be passed to a REMPEG compression circuit 123 .
- the REMPEG compressor 123 can be configured to perform a REMPEG compression on the output of the noise reduced image data. This output can further pass to a frame/field delay 125 .
- the frame/field delay 125 in some embodiments can comprise memory configured to store the output of the REMPEG compression.
- the noise reduction apparatus can further comprise a REMPEG decompressor 127 configured to read the memory at a suitable point and further process the REMPEG compressed data by decompressing the data passing the previous frame or field delayed noise reduction output to the noise reduction line delay 103 .
- the input line delay 101 is in some embodiments configured to receive the original luma information.
- the original signal line delay 101 can be configured to output the current input line luma information (ORG_Y) to a first difference engine 209 of the edge adaptive threshold determiner 109 and to a first pixel delay element 201 .
- the output of the first pixel delay element 201 can be configured to output a delayed pixel information to an input of a second pixel delay 203 , and to the first difference engine 209 and to a second difference engine 211 .
- the second pixel delay element 203 can be configured to output the second delayed pixel information to the second difference engine 211 .
- the noise reduced delay line 103 input luma information (NR_Y) is input to a third difference engine 213 and a first noise reduced pixel delay 205 which is configured to perform a pixel delay on the received frame/field delayed data and output the delayed pixel information to the third difference engine 213 , a fourth difference engine 215 and a second noise reduced pixel delay 207 .
- the output of the second noise reduced pixel delay is passed to the fourth difference engine 215 of the edge adaptive threshold determiner 109 .
- in some embodiments a vertical difference engine operating across lines can be used (not shown in the figure).
- The reception of the luma data and the performing of the line delay and pixel delay operations are shown in FIG. 6 by step 601 .
- the edge adaptive threshold determiner comprises four difference engines.
- the difference engines are configured to receive spatially different pixel luma information to determine edges.
- the first difference engine 209 can be configured to receive the original input luma data and a first pixel delayed luma data input and output a first difference value to a first absolute value determiner 217 .
- the second difference engine 211 can be configured to receive the first pixel delay and second pixel delay outputs from the input line delay 101 and be configured to output a second difference value to a second absolute value determiner 219 .
- the third difference engine 213 and the fourth difference engine 215 can be configured to produce the same difference values for the noise reduced image and pass the difference between the non delayed and first pixel delay to a third absolute value determiner 221 and the difference between the first pixel delay and second pixel delay to a fourth absolute value determiner 223 .
- the absolute value determiners, the first absolute value determiner 217 , second absolute value determiner 219 , third absolute value determiner 221 , and fourth absolute value determiner 223 can be configured to receive a difference value and output an absolute or modulus value of the input value. In other words the absolute value determiner generates a positive value output independent of the sign of the input value.
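The pixel-delay, difference-engine and absolute-value arrangement above amounts to computing absolute differences between horizontally neighbouring luma samples. A sketch using simple list indexing in place of the hardware pixel delays (the function name is hypothetical):

```python
def horizontal_slopes(line, x):
    """Absolute luma differences around pixel x, as produced by the
    difference engines and absolute value determiners. A sketch only:
    the hardware uses a pipeline of pixel delays rather than indexing.
    """
    d0 = abs(line[x] - line[x - 1])      # non-delayed vs first pixel delay
    d1 = abs(line[x - 1] - line[x - 2])  # first vs second pixel delay
    return d0, d1

line = [10, 10, 50, 52]
print(horizontal_slopes(line, 3))  # edge between indices 1 and 2 -> (2, 40)
```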
- the first and second absolute value determiners 217 and 219 are configured to output the current frame/field luma line pixel difference values to a first minimum/maximum/averager (associated with the current frame/field 225 ) and the third and fourth absolute value determiners 221 and 223 are configured to output the previous frame/field luma line pixel difference data to a second minimum/maximum/averager 227 .
- the minimum/maximum/averager 225 , 227 comprises any suitable processor configured to determine the minimum of the values, the maximum of the values and the average of the values and a multiplexer configured to select and output one of the minimum, maximum or average values dependent on the min_max.sel signal.
- the first minimum/maximum/averager 225 associated with the current frame/field can be configured to output a selected value to the first minimiser 229 and the second minimum/maximum/averager 227 is configured to output a selected minimum/maximum/average value to the first minimiser 229 .
- the minimum/maximum/averagers can be considered to be providing edge determination values for the current and previous frames/fields.
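The minimum/maximum/averager can be sketched as computing all three measures and multiplexing one out. The string-valued min_max_sel below is a hypothetical stand-in for the hardware select signal:

```python
def min_max_avg_select(a: int, b: int, min_max_sel: str) -> float:
    """Compute the minimum, maximum and average of two edge values and
    select one of them, mimicking the minimum/maximum/averager and its
    output multiplexer (select encoding is an assumption)."""
    values = {
        "min": min(a, b),
        "max": max(a, b),
        "avg": (a + b) / 2,
    }
    return values[min_max_sel]

assert min_max_avg_select(2, 40, "min") == 2
assert min_max_avg_select(2, 40, "max") == 40
assert min_max_avg_select(2, 40, "avg") == 21.0
```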
- the first minimiser block 229 can perform a minimum or maximum/average selection based on the ND signal (the noise detect signal from the motion detector).
- The luma edge determination operation is shown in FIG. 6 by step 605 .
- Furthermore the determination of luma edge slopes for the current and previous frames/fields is shown in FIG. 6 by step 607 .
- the minimiser 229 is configured in some embodiments to receive the edge slope values for the current and previous frames/fields and output the minimum value.
- the minimum edge slope of the current and previous frames/fields can be considered to be the “fine” slope value.
- the output of the minimiser can be passed to a first multiplier 230 .
- the edge adaptive threshold determiner 109 further comprises a blender 231 configured to receive the M1 gain and M2 gains.
- the M1 and M2 gain values are different user programmable gains and can be adaptively selected or blended based on the value of the ND-Out signal. For example when the ND-Out signal indicates a detection of noise then the M2 gain value can be selected, else the M1 gain value can be selected.
- the blender 231 can further be configured to receive the noise detection output (ND-OUT) from the motion detector.
- the blender 231 is configured to blend the M1 gains and M2 gains to produce a gain output to the multiplier 230 .
- the output of the minimiser 229 and the output of the blender can then be combined in the multiplier 230 to output an adaptive gain value.
- the adaptive gain value can be output by the multiplier 230 to a 3×5 tap expander or low pass filter device 233 .
- the multiplier 230 can be configured to output an adaptive threshold value as a percentage of the fine slope scaled by the blended M1 and M2 gains.
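Taken together, the fine-slope selection, the M1/M2 gain blend and the 8-bit clipper described above can be summarised as below. The normalised nd_out blend factor is an assumption for illustration; the hardware blend encoding is not specified here:

```python
def adaptive_threshold(cur_slope, prev_slope, m1_gain, m2_gain, nd_out):
    """Sketch of the edge adaptive threshold path: the 'fine' slope is
    the minimum of the current and previous edge slopes; it is scaled by
    a gain blended between M1 (real motion) and M2 (motion due to noise)
    and clipped to 0..255 to prevent rollover.

    nd_out in [0, 1] is a hypothetical normalised noise-detect level
    (0 = no noise detected -> M1 gain, 1 = noise detected -> M2 gain).
    """
    fine_slope = min(cur_slope, prev_slope)
    gain = (1.0 - nd_out) * m1_gain + nd_out * m2_gain
    return max(0, min(255, int(fine_slope * gain)))

# A strong edge with a large gain saturates at the 8-bit clip value;
# a shallow slope yields a small threshold.
assert adaptive_threshold(200, 180, 2.0, 4.0, 1.0) == 255
assert adaptive_threshold(4, 6, 2.0, 4.0, 0.0) == 8
```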
- the tap expander or LPF 233 can be configured to further receive an LPF selection (LPF-SEL) input to determine whether or not to low pass filter the output of the multiplier 230 .
- LPF-SEL LPF selection
- the output of the expander or LPF can be passed to a clipper 235 .
- the clipper 235 can be configured to clip the value of the adaptive gain value to a value between 0 and 255 to prevent a value rollover occurring producing an incorrect edge threshold value.
- The generation of the adaptive threshold by a percentage of the fine slope added to the user or gain noise level value threshold is shown in FIG. 6 by step 611 .
- the logic can comprise an edge threshold multiplexer 301 .
- the edge threshold multiplexer 301 can be configured to receive the edge adaptive threshold determined value from the edge adaptive threshold determiner 109 as a first data input, a null or zero value as the second data input, and an enable input such as an edge adaptive enable input (EDGE_ADAP_ENB).
- EDGE_ADAP_ENB edge adaptive enable input
- the output of the edge adaptive multiplexer 301 can thus be either the edge adaptive threshold value or zero (or null) value which can be passed to a summer 302 .
- the receiving of the edge threshold values is shown in FIG. 8 by step 801 .
- the K logic circuitry can comprise a separate noise gain/offset path determiner block 307 .
- the determiner block 307 can be configured to receive the noise level value of the input video signal from the noise meter (Fine_Noise_NM) or via a user input. This noise input is multiplied by the user noise_gain input and then added with a signed user noise offset input.
- the gain and offset processing is performed separately for M1 (real motion) and M2 (Motion due to Noise) to generate the M1_noise_value and the M2_noise_value.
- M1_noise_gain and M1_noise_offset values are the user input for generating M1_noise_value output and M2_noise_gain and M2_noise_offset are the user input for generating the M2_noise_value output.
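The gain/offset path can be sketched as a single multiply-accumulate applied once per motion class; the parameter values in the usage lines are arbitrary illustration values:

```python
def noise_value(fine_noise_nm, noise_gain, noise_offset):
    """Separate gain/offset path: the measured noise level is multiplied
    by a user gain and a signed user offset is added. The same formula
    is applied twice, once with the M1 (real motion) parameters and once
    with the M2 (motion due to noise) parameters."""
    return fine_noise_nm * noise_gain + noise_offset

m1_noise_value = noise_value(10, 2, -4)   # M1 path with example parameters
m2_noise_value = noise_value(10, 3, 5)    # M2 path with example parameters
```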
- the logic circuitry comprises a first or noise determination blender 303 .
- the first blender can blend the above outputs (M1_noise_value, M2_noise_value) based on the ND control from the motion detector 111 to generate the Noise_level threshold passed to the summer 302 .
- the K logic circuitry can comprise a summer 302 configured to receive the output of the edge adaptive threshold multiplexer 301 and also a noise level threshold (Noise_level_threshold) from the blender 303 .
- the output of the combination can be passed as a first input to the first subtractor 309 .
- The operation of generating the final noise offset value from the edge threshold value and the initial noise gain values is shown in FIG. 8 by step 803 .
- the logic circuitry can in some embodiments comprise a first or motion subtractor 309 .
- the first subtractor 309 can be configured to receive both the final noise offset value from the noise summer 302 and also a motion detection input from the motion detector 111 to find the difference between these values.
- the output of the first subtractor 309 can in some embodiments be passed to a clipping determiner 311 .
- The application of the final noise offset to the motion detection output is shown in FIG. 8 by step 805 .
- the logic comprises a clipping determiner 311 .
- the clipping determiner can be configured to receive the difference between the motion detection value and the final offset value and clip these in such a way that values below 0 are set to 0 and values above 63 are set to 63 in order that the value does not overflow or underflow the value range.
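The subtract-and-clip stage (the subtractor 309 followed by the clipping determiner 311) can be sketched as:

```python
def clipped_motion(motion, final_noise_offset):
    """Subtract the final noise offset from the motion detector output
    and clip to the 6-bit range 0..63 so the result can neither
    underflow nor overflow the value range."""
    return max(0, min(63, motion - final_noise_offset))

assert clipped_motion(100, 20) == 63   # clipped at the top of the range
assert clipped_motion(10, 20) == 0     # negative difference clipped to 0
assert clipped_motion(30, 20) == 10    # in-range difference passes through
```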
- the output of the clipper 311 can be passed to the second multiplier 317 .
- the logic circuitry comprises a gain multiplexer 313 .
- the gain multiplexer can be configured to receive a series of input gains such as shown in FIG. 4 by gain 0, gain 1, gain 2 and gain 3 as input gain data values which are selectable dependent on the selection inputs provided from the flesh tone flag value and the big noise flag value.
- the gains are for example user defined gains defining the gains for different combinations of Fleshtone and Big noise status values (00, 01, 10, 11).
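The 4-to-1 selection can be sketched as indexing with a 2-bit value formed from the two flags; the bit ordering of the select is an assumption for illustration:

```python
def select_gain(gains, fleshtone: bool, big_noise: bool):
    """4-to-1 gain multiplexer: the fleshtone and big-noise flags form a
    2-bit select into the four user-programmed gains (the mapping of
    flags to select bits is an assumption)."""
    index = (int(fleshtone) << 1) | int(big_noise)
    return gains[index]

gains = [16, 24, 8, 12]   # hypothetical gain 0..3 for select 00, 01, 10, 11
assert select_gain(gains, False, False) == 16
assert select_gain(gains, True, True) == 12
```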
- the output of the gain multiplexer can be passed to the second blender 315 .
- the logic circuitry further comprises an M2 multiplier 314 .
- the M2 multiplier can be configured to receive the gain multiplexer 313 output and multiply the gain output value by the user M2_gain value, which then can be output to the second input to the second blender 315 .
- the logic circuitry comprises a second blender 315 configured to receive as a first data input the output of the 4 to 1 gain multiplexer 313 and as a second data input the gained value from the M2 multiplier 314 (the gain multiplied by the user input M2_gain). This blending operation can further be controlled based on the noise detection control signal (ND_CNTRL).
- the second blender 315 can be configured to output a blended value of these two inputs to the second multiplier 317 .
- the second multiplier 317 can be configured to multiply the clipper 311 values and the second blender 315 values and pass the multiplied value to a further clipper 318 .
- the logic circuitry comprises a further clipper 318 .
- the further clipper can be configured to receive the output of the second multiplier and clip it to a maximum value of 255 .
- the output of the second clipper can in some embodiments be passed to a look-up table 319 .
- a limit multiplexer 321 can be configured to receive a series of associated limit values such as shown in FIG. 4 by the associated limit values limit 0, limit 1, limit 2, and limit 3 as data input values selectable by the selection input provided from the big noise and the flesh tone status value combinations.
- the output of the limit multiplexer 321 can be passed to the look up table 319 as a limit input.
- the circuit logic in some embodiments comprises a look-up table (LUT) 319 configured to receive the clipped and multiplied motion value as an input and generate a non-linear TNR K control parameter.
- the generation of the K parameter can thus in some embodiments be non-linear, based on how the LUT is filled.
- the LUT can perform any suitable conversion such as for example a scale/inverse/linear/non-linear conversion of the motion value to K value.
- the LUT can in some embodiments be fully user programmable.
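The LUT stage can be sketched as a plain table look-up. Since the table is fully user programmable, the inverse filling below (high motion yields low K, i.e. less temporal blending on moving pixels) is only one plausible programming, not the required one:

```python
def motion_to_k(motion: int, lut) -> int:
    """Look-up-table conversion of the clipped motion value to the K
    control parameter. Any scale/inverse/linear/non-linear mapping can
    be programmed; the table contents decide the conversion."""
    return lut[motion]

# Hypothetical 256-entry inverse LUT for an 8-bit motion input.
lut = [255 - m for m in range(256)]
assert motion_to_k(0, lut) == 255     # still pixel: maximum blending
assert motion_to_k(255, lut) == 0     # fast motion: no temporal blending
```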
- the output of the look-up table 319 can be passed to a limit clipper 325 and a third “limit” multiplier 323 .
- The application of the look-up table to the output of the multiplier is shown in FIG. 8 by step 809 .
- the logic circuit comprises a third multiplier 323 configured to receive the output of the look-up table 319 and the selected limit value and output the product of this combination to a divider 327 .
- the logic circuitry comprises a limit clipper 325 configured to clip the output of the look-up table 319 to the limit value and output this value to a clip selection multiplexer 329 .
- the logic circuitry comprises a divider 327 configured to receive the output of the third or “limit” multiplier 323 and divide the input by a value of 1023.
- the output of the divider 327 can be passed as a second input to the clip selection multiplexer 329 .
- the logic circuitry comprises a clip selection multiplexer 329 configured to receive the output of the limit clipper 325 as a first data input, the output of the divider 327 as a second data input and further to receive a selection input shown in FIG. 4 by the clip selection input (clip_select).
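The two limit paths and the clip_select choice between them can be sketched as below, assuming the limit multiplier scales the LUT output by the selected limit value before the divide-by-1023:

```python
def apply_limit(k: int, limit: int, clip_select: bool) -> int:
    """Two ways of bounding the LUT output by the selected limit value:
    either clip K to the limit (the limit clipper path), or scale K by
    limit/1023 (the third multiplier followed by the divide-by-1023),
    chosen by the clip_select input of the multiplexer."""
    if clip_select:
        return min(k, limit)           # limit clipper path
    return (k * limit) // 1023         # multiply-and-divide path

assert apply_limit(200, 120, clip_select=True) == 120
assert apply_limit(1023, 512, clip_select=False) == 512
```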
- the output of the clip selection multiplexer 329 can be output to a logic processor 331 .
- The operation of clipping to limit or applying a second gain at the second multiplier and selection of one of these is shown in FIG. 8 by step 811 .
- the logic circuitry further comprises a minimiser/low pass filter 331 .
- the minimiser/low pass filter 331 can be configured to receive as a data input the output of the clip selection multiplexer 329 and further receive a low pass filter selection signal to determine whether or not the output of the minimiser/low pass filter processor 331 is either the 3-tap horizontal minimum function of the input signal or a low pass filtered output version.
- the output of the minimiser/low pass filter 331 can be the K value.
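The 3-tap horizontal minimum option of the minimiser/low pass filter can be sketched as below; the pass-through handling at the line edges is an assumption:

```python
def three_tap_min(values):
    """3-tap horizontal minimum over the K stream: each output sample is
    the minimum of the sample and its two horizontal neighbours, which
    erodes isolated K spikes. Edge samples are passed through unchanged
    here (an assumption about the boundary behaviour)."""
    out = list(values)
    for i in range(1, len(values) - 1):
        out[i] = min(values[i - 1], values[i], values[i + 1])
    return out

# An isolated spike of 200 is removed by the horizontal minimum.
assert three_tap_min([10, 200, 12, 11]) == [10, 10, 11, 11]
```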
- The operation of minimising or low pass filtering is shown in FIG. 8 by step 813 . Furthermore the outputting of the K value is shown in FIG. 8 by step 815 .
- with respect to FIG. 5 a series of examples of the error correction available by performing embodiments of the application is shown.
- the first third of the image 401 is shown as the K parameter dump without the use of a compression algorithm such as REMPEG.
- the second third of the image 403 is the same image K parameter dump after compression, showing motion errors occurring through the field/frame.
- the final third of the image 405 is the output following error correction as shown in embodiments of the application.
- the various embodiments of the invention may be implemented in hardware or special purpose circuits, software, logic or any combination thereof.
- some aspects may be implemented in hardware, while other aspects may be implemented in firmware or software which may be executed by a controller, microprocessor or other computing device, although the invention is not limited thereto.
- While various aspects of the invention may be illustrated and described as block diagrams, flow charts, or using some other pictorial representation, it is well understood that these blocks, apparatus, systems, techniques or methods described herein may be implemented in, as non-limiting examples, hardware, software, firmware, special purpose circuits or logic, general purpose hardware or controller or other computing devices, or some combination thereof.
- the embodiments of this application may be implemented by computer software executable by a data processor of the mobile device, such as in the processor entity, or by hardware, or by a combination of software and hardware.
- any blocks of the logic flow as in the Figures may represent program steps, or interconnected logic circuits, blocks and functions, or a combination of program steps and logic circuits, blocks and functions.
- the software may be stored on such physical media as memory chips, or memory blocks implemented within the processor, magnetic media such as hard disk or floppy disks, and optical media such as for example DVD and the data variants thereof, CD.
- the memory may be of any type suitable to the local technical environment and may be implemented using any suitable data storage technology, such as semiconductor-based memory devices, magnetic memory devices and systems, optical memory devices and systems, fixed memory and removable memory.
- the data processors may be of any type suitable to the local technical environment, and may include one or more of general purpose computers, special purpose computers, microprocessors, digital signal processors (DSPs), application specific integrated circuits (ASIC), gate level circuits and processors based on multicore processor architecture, as non-limiting examples.
- Embodiments of the inventions may be practiced in various components such as integrated circuit modules.
- the design of integrated circuits is by and large a highly automated process.
- Complex and powerful software tools are available for converting a logic level design into a semiconductor circuit design ready to be etched and formed on a semiconductor substrate.
- Programs such as those provided by Synopsys, Inc. of Mountain View, Calif. and Cadence Design, of San Jose, Calif. automatically route conductors and locate components on a semiconductor chip using well established rules of design as well as libraries of pre-stored design modules.
- the resultant design in a standardized electronic format (e.g., Opus, GDSII, or the like) may be transmitted to a semiconductor fabrication facility or “fab” for fabrication.
Description
- The present invention relates to video processing. The invention further relates to but is not limited to noise reduction in video processing.
- Video processing, where video image data is processed to ‘improve’ the image is applied in many devices. Furthermore image quality can be improved by the use of video processing in the form of a noise reduction block or processor to reduce noise in the image.
- The video noise reduction block can often be the first stage of any video processing pipeline, where the video image frames are noise reduced before any other processing and enhancements in the digital domain are applied. For example noise reduction can be employed for television applications, where in the video signal can be input or received from any one of radio frequency (RF) channel, component input channel, a high definition multimedia interface (HDMI), a composite (CVBS) channel, and s-video input channel.
- Noise reduction can for example be spatial or temporal noise reduction. Spatial noise reduction is where areas surrounding a picture element (or pixel) or block of pixels on the same field or frame can be analysed to determine whether the pixel is similar to the surrounding areas and whether a correction or noise reduction operation can be carried out. The noise reduction operation can be for example an averaging across displayed lines or within a line (intraline and interline noise reduction). Furthermore temporal noise reduction is where a pixel is compared to proceeding or succeeding fields or frame pixels to determine whether or not the area pixel differs significantly from previous or following fields or frames, and whether an averaging or filtering across fields or frames should be carried out. Temporal noise reduction requires previous frames or fields to be stored in order that they can be compared to the current frame or field to determine whether there is image motion.
- To reduce the memory size required to store the previous frames or fields, aggressive compression modes such as REMPEG are typically performed. The application of REMPEG compression modes introduces an average error of 35 for a 10 bit domain (i.e. an average error of 35 in the range 0 to 1023) at the high frequency regions or edges. This error can create false motion detection in still image regions, and therefore reduce temporal noise reduction application. This reduces the beneficial effect of temporal noise reduction (TNR) and causes noise reduction to be biased towards applying spatial noise reduction (SNR).
- Any such low noise still images in such examples can have greater noise levels when aggressive compression is used as less noise reduction is applied and the images suffer resolution loss.
- Embodiments of the present application aim to address the above problems.
- There is provided according to the disclosure a video processor comprising a spatio-temporal noise reduction controller configured to determine current and previous image edge slopes and adaptively control a spatio-temporal noise reduction processor to blend current and previous images dependent on the current and previous image edge slope values.
- The video processor may further comprise a spatio-temporal noise reduction processor configured to blend current and previous images dependent on the spatial noise reduction controller.
- The spatiotemporal noise reduction controller may comprise: a current image edge slope determiner configured to generate a current image edge slope value; a previous image edge slope determiner configured to generate a previous image edge slope value; and an edge slope processor configured to determine a blending control signal dependent on the current image edge slope value and previous image edge slope value for the spatiotemporal noise reduction processor.
- The edge slope processor may comprise a slope selector configured to select one of the current image edge slope value and the previous image edge slope value and generate the blending control signal value dependent on the selected edge slope value.
- The edge slope processor may comprise a slope value generator configured to combine a portion of the selected edge slope value to a blended gain value to generate a slope value, the control signal value being dependent on the slope value.
- The edge slope processor may comprise a first blended gain value generator configured to generate the blended gain value dependent on at least one of: the noise level status of the image region, and an image region motion status value.
- The slope selector may be configured to select at least one of: the minimum of the current image edge slope value and the previous image edge slope value, the maximum of the current image edge slope value and the previous image edge slope value, and the average of the current image edge slope value and the previous image edge slope value.
- The slope selector may be configured to select at least one of the edge slope values dependent on at least one of: the noise level status of the image region, and an image region motion status value.
- The video processor may further comprise a motion noise detector configured to determine at least one of: the noise level status of the image region, and an image region motion status value.
- The edge slope processor may be configured to selectively nullify the motion noise value for at least one of: image edges, and regions of high frequency image components.
- The current and previous image edge slopes may be current luma image edge slopes and previous luma image edge slopes.
- The current and previous images may comprise: a current frame image and a previous frame image respectively; and a current field image and a previous field image respectively.
- The edge slope value may comprise at least one of: horizontal pixel difference, and vertical pixel difference.
- A television receiver may comprise the video processor as discussed herein.
- An integrated circuit may comprise the video processor as discussed herein.
- A video player may comprise the video processor as discussed herein.
- According to a second aspect there is provided a method of processing video signals comprising: determining current and previous image edge slopes; and adaptively controlling spatio-temporal noise reduction blending of current and previous images dependent on the current and previous image edge slope values.
- The method may further comprise spatio-temporal noise reduction blending of current and previous images dependent on the current and previous image edge slope values.
- Adaptively controlling spatio-temporal noise reduction blending of current and previous images may comprise: generating a current image edge slope value; generating a previous image edge slope value; and determining a blending control signal dependent on the current image edge slope value and previous image edge slope value for the spatio-temporal noise reduction processor.
- Determining a blending control signal may comprise: selecting one of the current image edge slope value and the previous image edge slope value; and generating the blending control signal value dependent on the selected edge slope value.
- Determining a blending control signal may comprise combining a portion of the selected edge slope value to a blended gain value to generate a slope value, the control signal value being dependent on the slope value.
- Determining a blending control signal may comprise generating the blended gain value dependent on at least one of: the noise level status of the image region, and an image region motion status value.
- Selecting one of the current image edge slope value and the previous image edge slope value may comprise selecting at least one of: the minimum of the current image edge slope value and the previous image edge slope value, the maximum of the current image edge slope value and the previous image edge slope value, and the average of the current image edge slope value and the previous image edge slope value.
- Selecting one of the current image edge slope value and the previous image edge slope value may comprise selecting at least one of the edge slope values dependent on at least one of the noise level status of the image region, and an image region motion status value.
- The method may further comprise determining the portion of the selected edge slope value dependent on at least one of: a noise level status of the image region, and an image region motion status value.
- The method may further comprise determining at least one of: the noise level status of the image region, and an image region motion status value.
- Determining the noise level of the image may comprise selectively nullifying the motion noise value for at least one of: image edges, and regions of high frequency image components.
- The current and previous image edge slopes may be current luma image edge slopes and previous luma image edge slopes.
- The current and previous images may comprise: a current frame image and a previous frame image respectively; and a current field image and a previous field image respectively.
- The edge slope value may comprise at least one of: horizontal pixel difference, and vertical pixel difference.
- A processor-readable medium encoded with instructions that, when executed by a processor, may perform a method for processing video as discussed herein.
- Apparatus comprising at least one processor and at least one memory including computer code for one or more programs, the at least one memory and the computer code configured to, with the at least one processor, cause the apparatus to at least perform a method as discussed herein.
- According to a third aspect there is provided a video processor comprising: means for determining current and previous image edge slopes; and means for adaptively controlling spatiotemporal noise reduction blending of current and previous images dependent on the current and previous image edge slope values.
- The video processor may further comprise means for spatiotemporal noise reduction blending of current and previous images dependent on the current and previous image edge slope values.
- The means for adaptively controlling spatio-temporal noise reduction blending of current and previous images may comprise: means for generating a current image edge slope value; means for generating a previous image edge slope value; and means for determining a blending control signal dependent on the current image edge slope value and previous image edge slope value for the spatio-temporal noise reduction processor.
- The means for determining a blending control signal may comprise: means for selecting one of the current image edge slope value and the previous image edge slope value; and means for generating the blending control signal value dependent on the selected edge slope value.
- The means for determining a blending control signal may comprise means for combining a portion of the selected edge slope value to a blended gain value to generate a slope value, the control signal value being dependent on the slope value.
- The means for determining a blending control signal may comprise means for generating the blended gain value dependent on at least one of: the noise level status of the image region, and an image region motion status value.
- The means for selecting one of the current image edge slope value and the previous image edge slope value may comprise means for selecting at least one of: the minimum of the current image edge slope value and the previous image edge slope value, the maximum of the current image edge slope value and the previous image edge slope value, and the average of the current image edge slope value and the previous image edge slope value.
- The means for selecting one of the current image edge slope value and the previous image edge slope value may comprise means for selecting at least one of the edge slope values dependent on at least one of: the noise level status of the image region, and an image region motion status value.
- The means for combining a portion of the selected edge slope value to a blended gain value to generate a slope value may comprise means for determining the portion of the selected edge slope value dependent on at least one of: a noise level status of the image region, and an image region motion status value.
- The video processor may further comprise means for determining at least one of: the noise level status of the image region, and the image region motion status value.
- The means for determining the blending control signal edge slope may comprise means for selectively nullifying the motion noise value for at least one of: image edges, and regions of high frequency image components.
- The current and previous image edge slopes may be current luma image edge slopes and previous luma image edge slopes.
- The current and previous images may comprise: a current frame image and a previous frame image respectively; and a current field image and a previous field image respectively.
- The edge slope value may comprise at least one of: horizontal pixel difference, and vertical pixel difference.
- For better understanding of the present application, reference will now be made by way of example to the accompanying drawings in which:
- FIG. 1 shows schematically a system suitable for employing a video processor according to some embodiments of the application;
- FIG. 2 shows schematically a video noise reduction apparatus according to some embodiments of the application;
- FIG. 3 shows schematically the edge adaptive threshold determiner as shown in FIG. 2 in further detail according to some embodiments of the application;
- FIG. 4 shows the noise reduction K logic as shown in FIG. 2 in further detail according to some embodiments of the application;
- FIG. 5 shows an example of the K logic plot of the noise reducing apparatus according to some embodiments of the application;
- FIG. 6 shows a method of operating the edge adaptive threshold determiner;
- FIG. 7 shows a method of operating the noise reduction apparatus according to some embodiments of the application; and
- FIG. 8 shows the operation of the K logic.
- The following describes in further detail suitable apparatus and possible mechanisms for the provision of video processing and noise reduction.
- With respect to
FIG. 1 an example electronic device orapparatus 10 is shown within which embodiments of the application can be implemented. - The
apparatus 10 in some embodiments comprises a receiver configured to receive a radio frequency modulated television and/or video signal and output the analogue encoded video signal to theprocessor 5. In some embodiments the receiver can be controlled by the processor to demodulate/select the channel to be received. - The
apparatus 10 in some embodiments comprises a processor 5 which can be configured to execute various program codes. The implemented program codes can comprise video processing for processing the received video data, for example by noise reduction video processing, and outputting the data to the display 7. The implemented program codes can be stored within a suitable memory. - In some embodiments the
processor 5 can be coupled to memory 21. The memory 21 can further comprise an instruction code section 23 suitable for storing program codes implementable upon the processor 5. Furthermore in some embodiments the memory 21 can comprise a stored data section 25 for storing data, for example video data. The memory 21 can be any suitable storage means. In some embodiments the memory 21 can be implemented as part of the processor in a system-on-chip configuration. - The
apparatus 10 can further comprise a display 7. The display can be any suitable display means featuring any suitable technology, for example cathode ray tube (CRT), light emitting diode (LED), variable backlight liquid crystal display (LCD) for example LED lit LCD, organic light emitting diode (OLED), and plasma display. The display 7 can furthermore be considered to provide a graphical user interface (GUI) providing a dialog window in which a user can implement and input how the apparatus 10 displays the video. In some embodiments the apparatus can be configured to communicate with a display remote from the physical apparatus by a suitable display interface, for example a High Definition Multimedia Interface (HDMI) or a Digital Video Interface (DVI), or the video can be remodulated and transmitted to the display. - The
apparatus 10 can further comprise a user input or user settings input apparatus 11. The user input can in some embodiments be a series of buttons, switches or adjustable elements providing an input to the processor 5. In some embodiments the user input 11 and display 7 can be combined as a touch sensitive surface on the display, also known as a touch screen or touch display apparatus. - With respect to
FIG. 2, the noise reduction apparatus is shown in further detail. Furthermore with respect to FIG. 7 the operation of the noise reduction apparatus is described. In some embodiments the noise reduction apparatus comprises an input line delay 101. The input line delay 101 is configured to receive the original (ORG) input video signal on a line by line basis and can comprise any suitable means for delaying or storing a previous line or previous lines of input data. In some embodiments the input to the line delay 101 comprises either the input chroma or luma data. The input line delay 101 can in some embodiments be configured to output current, previous, and next line data to a spatial noise reduction apparatus 105, a spatial-temporal noise reducer blender 107, an edge adaptive threshold determiner 109, a motion detector 111, and a flesh tone detector 113. - In some embodiments the noise reduction apparatus can further comprise a noise reduced
line delay 103. The noise reduced line delay 103 can be configured to receive the previous frame (or field) delayed video input on a line by line basis in such a manner that the line data held in the noise reduced line delay are those associated or synchronised with the line data held in the input line delay. In other words the noise reduced line delay samples are one frame away from the line data samples held in the input line delay. - The previous frame/field information can in some embodiments comprise data which had been stored after a first compression sequence and then decompressed after storage. The output of the noise reduced
line delay 103 can in some embodiments be passed to an edge adaptive threshold determiner 109, the motion detector 111, and a temporal noise reduction blender 119. - The operation of receiving the original chroma or luma data and the decompressed "noise reduced" chroma or luma data is shown in
FIG. 7 by step 701. - The delaying of the original and decompressed lines to generate current, previous, and next line data for the original and decompressed noise reduced chroma or luma data is shown in
FIG. 7 by step 703. - In some embodiments the noise reduction apparatus comprises a
spatial noise reducer 105. The spatial noise reducer 105 can be configured to attempt to reduce the noise level for a line of data dependent on the input line information from the input line delay 101. The spatial noise reducer 105 can perform any suitable spatial noise reduction algorithm such as, but not exclusively, linear interpolation between lines, line sharpening, edge sharpening, and low pass filtering. The output of the spatial noise reducer 105 can be output to the spatial temporal noise reducer blender 107. In some embodiments the spatial noise reducer 105 can be configured to further receive a noise level value input for controlling the amount of spatial noise reduction performed. The noise level value can be received from a noise measurement block not shown in the figure. - The operation of performing spatial noise reduction is shown in
FIG. 7 by step 711. - In some embodiments the noise reduction apparatus further comprises a
motion detector 111. The motion detector 111 can be configured to receive the input video data from the first line delay 101, and the previous frame or field(s) video data via the noise reduction line delay 103. The motion detector can thus compare the difference between the various line delay inputs from frame-to-frame or field-to-field to determine whether or not there is any motion between the original and delayed/stored image. - Furthermore in some embodiments the motion detector can be configured to perform some noise detection. The noise detection value is based on determining how many motion samples are above the noise level value (the noise level present in the input noisy video signal) and can in some embodiments be determined in a noise measure block (not shown) over a specified spatial image kernel. The motion is determined to be real motion or noise by this count of motion samples above the noise level: if, in a specific region (defined by the kernel size), a sufficient number of motion samples is above the noise level, then that region is assumed to have real motion, else that region is assumed to have false motion due to noise. Based on this determination a different motion filter is used for deriving the actual motion value. The motion detector outputs a control signal ND, which indicates whether the motion is due to noise or is real motion.
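The region decision just described can be sketched as follows (a minimal Python illustration; the kernel contents and the count threshold are assumed tuning values, not figures from the embodiments):

```python
# Sketch of the noise-versus-motion decision: within a spatial kernel,
# count the motion samples that exceed the noise level; only when
# enough of them do is the region treated as real motion. The count
# threshold is a hypothetical tuning parameter.
def region_has_real_motion(motion_samples, noise_level, count_threshold):
    above = sum(1 for m in motion_samples if m > noise_level)
    return above > count_threshold

# 3x3 kernel of per-pixel motion values, flattened
kernel = [12, 14, 3, 15, 11, 13, 2, 16, 12]
print(region_has_real_motion(kernel, noise_level=10, count_threshold=4))  # True

# An isolated spike in an otherwise quiet kernel reads as noise
kernel = [2, 1, 30, 2, 1, 2, 3, 1, 2]
print(region_has_real_motion(kernel, noise_level=10, count_threshold=4))  # False
```

In this sketch the boolean plays the role of the ND control signal; the actual motion filtering that follows the decision is not modelled.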
- The
motion detector 111 can thus be configured to output a determination or value of the motion between the original and delayed frame lines. This motion determination can be output to the spatial temporal noise reduction control logic 115 and to the temporal noise reduction control logic 117. Furthermore in some embodiments the noise detection output from the motion detector 111 can be configured to be output to the edge adaptive threshold determiner 109, spatial temporal noise reduction control logic 115 and temporal noise reduction control logic 117. - The operation of determining motion and/or determining noise is shown in
FIG. 7 by step 705. - In some embodiments the noise reduction apparatus comprises a
flesh tone detector 113. The flesh tone detector 113 can be configured to determine whether or not the current line image contains flesh tone information. The output of the flesh tone detector 113 can be configured to output to the spatial temporal noise reduction control logic 115 and the temporal noise reduction control logic 117. The flesh tone detection therefore helps in identifying human skin colour, and in those areas the noise reduction strength can be different than in the non-flesh-tone areas. This therefore assists in some embodiments by preventing 'plastic' skin effects. - The operation of determining flesh tone in the current line is shown in
FIG. 7 by step 709. - In some embodiments the noise reduction apparatus comprises an edge
adaptive threshold determiner 109. The edge adaptive threshold determiner 109 can be configured to receive the line information from the original image (from the image line delay 101), and the previous frame or field image data (from the noise reduced line delay 103). The edge adaptive threshold determiner 109 can further be configured to receive an input from the noise detector output from the motion detector 111. The edge adaptive threshold determiner 109 can be configured to generate an edge detection threshold value such that luma edge image portions can be detected. The edge detection threshold can be used so that for spatial or temporal noise reduction, a suitable edge profile or slope can be allowed for. The output of the edge threshold value from the edge adaptive threshold determiner 109 can be passed to the spatial temporal noise reduction control logic 115 and the temporal noise reduction control logic 117. - The determination of the edge threshold value is shown in
FIG. 7 by step 707. - The performing of spatial noise reduction is shown in
FIG. 7 by step 711. - The noise reduction apparatus can further comprise in some embodiments a spatial temporal noise reduction (STNR)
control logic 115. The spatial temporal noise reduction control logic 115 can be configured to receive the motion detection output from the motion detector 111, the noise detection output from the motion detector 111, the edge threshold value from the edge adaptive threshold determiner 109, as well as a big noise flag input and noise level value input and a flesh tone determination input. The spatial temporal noise reduction control logic 115 can be configured to generate a control value to be passed to a spatial temporal noise reduction blender 107 such that the output of the spatial noise reduction circuit and the original line to be analysed are blended according to the various inputs to the STNR control logic 115. - In some embodiments the noise reduction apparatus can comprise a spatial temporal
noise reduction blender 107. The spatial temporal noise reduction (STNR) blender 107 can be configured to receive data inputs from the spatial noise reduction apparatus 105 and the input line delay 101 and further be controlled by the STNR control logic 115 to blend the line information. - The
STNR blender 107 can be configured to perform any suitable blending operation dependent on the output of the STNR control logic 115. The output of the STNR blender 107 can be passed to the temporal noise reduction blender 119. - The operation of blending the spatial noise reduction output and the current frame line data is shown in
FIG. 7 by step 713. - The performing of temporal noise reduction is shown in
FIG. 7 by step 712. - Furthermore in some embodiments the noise reduction apparatus comprises a temporal noise reduction (TNR) control logic 117. The temporal noise reduction control logic 117 can be configured to receive the motion detection output from the
motion detector 111, the noise detection output from the motion detector 111, the edge threshold value from the edge adaptive threshold determiner 109, the flesh tone detection output from the flesh tone detector 113, and the noise and big noise flag inputs to determine a suitable control value to be passed to the temporal noise reduction blender 119. The noise input is, as discussed herein, the input video signal noise level value from the noise measure block, and the Big Noise flag (also from the noise measure block) is an indicator that the input video has a high level of noise. - In some embodiments the noise reduction apparatus can further comprise a temporal
noise reduction blender 119. The temporal noise reduction blender 119 can be configured to receive the output from the spatial temporal noise reduction blender 107 and further the previous frame or field image data from the previous noise reduction line delay 103 and further to blend the inputs dependent on the output of the temporal noise reduction control logic 117 to produce a blended TNR output. The blended TNR output can be passed in some embodiments to a sticky bit converter 121. - The operation of blending the current and previous image data to perform temporal noise reduction is shown in
FIG. 7 by step 715. - In some embodiments the noise reduction apparatus comprises a
sticky bit converter 121. The sticky bit converter 121 can be configured to output a sticky bit converted TNR blended output. The sticky bit output thus forms the output image line data or the noise reduced data output (NR_OUT). - The operation of performing a sticky bit process on the TNR output is shown in
FIG. 7 by step 717. - Furthermore the outputting of the noise reduced image is shown in
FIG. 7 by step 719. - Furthermore in some embodiments the noise reduced output can be passed to a
REMPEG compression circuit 123. The REMPEG compressor 123 can be configured to perform a REMPEG compression on the output of the noise reduced image data. This output can further pass to a frame/field delay 125. The frame/field delay 125 in some embodiments can comprise memory configured to store the output of the REMPEG compression. The noise reduction apparatus can further comprise a REMPEG decompressor 127 configured to read the memory at a suitable point and further process the REMPEG compressed data by decompressing the data, passing the previous frame or field delayed noise reduction output to the noise reduction line delay 103. - With respect to
FIG. 3, the video line delay 101 and noise reduced line delay 103 parts are shown in further detail together with the edge adaptive threshold determiner with respect to luma noise reduction embodiments. The input line delay 101 is in some embodiments configured to receive the original luma information. The original signal line delay 101 can be configured to output the current input line luma information (ORG_Y) to a first difference engine 209 of the edge adaptive threshold determiner 109 and to a first pixel delay element 201. The output of the first pixel delay element 201 can be configured to output delayed pixel information to an input of a second pixel delay 203, and to the first difference engine 209 and to a second difference engine 211. The second pixel delay element 203 can be configured to output the second delayed pixel information to the second difference engine 211. - Similarly the noise reduced
delay line 103 input luma information (NR_Y) is input to a third difference engine 213 and a first noise reduced pixel delay 205 which is configured to perform a pixel delay on the received frame/field delayed data and output the delayed pixel information to the third difference engine 213, a fourth difference engine 215 and a second noise reduced pixel delay 207. The output of the second noise reduced pixel delay is passed to the fourth difference engine 215 of the edge adaptive threshold determiner 109. Similarly in some embodiments a vertical difference engine across lines can be used (not shown in the figure). - The reception of the luma data and the performing of the line delay and pixel delay operations are shown in
FIG. 6 by step 601. - In some embodiments the edge adaptive threshold determiner comprises four difference engines. The difference engines are configured to receive spatially different pixel luma information to determine edges. Thus, for example, the
first difference engine 209 can be configured to receive the original input luma data and a first pixel delayed luma data input and output a first difference value to a first absolute value determiner 217. - Similarly the
second difference engine 211 can be configured to receive the first pixel delay and second pixel delay outputs from the input line delay 101 and be configured to output a second difference value to a second absolute value determiner 219. - The
third difference engine 213 and the fourth difference engine 215 can be configured to produce the same difference values for the noise reduced image and pass the difference between the non-delayed and first pixel delay to a third absolute value determiner 221 and the difference between the first pixel delay and second pixel delay to a fourth absolute value determiner 223. - The absolute value determiners, the first
absolute value determiner 217, second absolute value determiner 219, third absolute value determiner 221, and fourth absolute value determiner 223 can be configured to receive a difference value and output an absolute or modulus value of the input value. In other words the absolute value determiner generates a positive value output independent of the sign of the input value. The first and second absolute value determiners can be configured to output their values to a first minimum/maximum/averager 225, and the third and fourth absolute value determiners to a second minimum/maximum/averager 227. - In some embodiments the minimum/maximum/
averager 225 associated with the current frame/field can be configured to output a selected value to a first minimum selector 229 and the second minimum/maximum/averager 227 is configured to output a selected minimum/maximum/average value to the first minimiser 229. The minimum/maximum/averagers can be considered to be providing edge determination values for the current and previous frames/fields. Also in some embodiments the first minimiser block 229 can be a minimum or maximum/average selection based on the ND signal (the noise detect signal from the motion detector). - The determination of luma edges is shown in
FIG. 6 by step 605. - Furthermore the determination of luma edge slopes for the current and previous frames/fields is shown in
FIG. 6 by step 607. - The
minimiser 229 is configured in some embodiments to receive the edge slope values for the current and previous frames/fields and output the minimum value. The minimum edge slope of the current and previous frames/fields can be considered to be the "fine" slope value. The output of the minimiser can be passed to a first multiplier 230. - In some embodiments the edge
adaptive threshold determiner 109 further comprises a blender 231 configured to receive the M1 and M2 gains. The M1 and M2 gain values are different user programmable gains and can be adaptively selected or blended based on the value of the ND-Out signal. For example when the ND-Out signal indicates a detection of noise then the M2 gain value can be selected, else the M1 gain value can be selected. The blender 231 can further be configured to receive the noise detection output (ND-OUT) from the motion detector. The blender 231 is configured to blend the M1 and M2 gains to produce a gain output to the multiplier 230. The output of the minimiser 229 and the output of the blender can then be combined in the multiplier 230 to output an adaptive gain value. The adaptive gain value can be output by the multiplier 230 to a 3/5 tap expander or low pass filter device 233. In other words the multiplier 230 can be configured to output an adaptive threshold value as a percentage of the fine slope scaled by the blended M1 and M2 gains. - The tap expander or
LPF 233 can be configured to further receive an LPF selection (LPF-SEL) input to determine whether or not to low pass filter the output of the multiplier 230. The output of the expander or LPF can be passed to a clipper 235. - The
clipper 235 can be configured to clip the adaptive gain value to a value between 0 and 255 to prevent a value rollover occurring, which would produce an incorrect edge threshold value. - Thus the generation of the adaptive threshold by a percentage of the fine slope added to the user or gain noise level value threshold is shown in
FIG. 6 by step 611. - With respect to
FIG. 4, the operation of the spatiotemporal noise reduction (STNR) control logic 115 and temporal noise reduction (TNR) K control logic 117 is shown. Furthermore with respect to FIG. 8 the operation of the STNR and TNR K logic is shown in further detail. The K parameter is the control factor of the blend equation. This will control the amount of noise reduction performed. In some embodiments the logic can comprise an edge threshold multiplexer 301. The edge threshold multiplexer 301 can be configured to receive the edge adaptive threshold determined value from the edge adaptive threshold determiner 109 as a first data input, a null or zero value as the second data input, and an enable input such as an edge adaptive enable input (EDGE_ADAP_ENB). The output of the edge adaptive multiplexer 301 can thus be either the edge adaptive threshold value or the zero (or null) value, which can be passed to a summer 302. - The receiving of the edge threshold values is shown in
FIG. 8 by step 801. - In some embodiments the K logic circuitry can comprise a separate noise gain/offset path determiner
block 307. The determiner block 307 can be configured to receive the noise level value of the input video signal from the noise meter (Fine_Noise_NM) or via a user input. This noise input is multiplied by the user noise_gain input and then added with a signed user noise offset input.
- The M1_noise_gain and M1_noise_offset values are the user input for generating M1_noise_value output and M2_noise_gain and M2_noise_offset are the user input for generating the M2_noise_value output.
- In some embodiments the logic circuitry comprises a first or
noise determination blender 303. The first blender can blends the above outputs (M1_noise_value, M2_noise_value) based on the ND control frommotion detector 111 to generate the Noise_level threshold tosummer 302. - Furthermore the K logic circuitry can comprise a
summer 302 configured to receive the output of the edge adaptive threshold multiplexer 301 and also a noise level threshold (Noise_level_threshold) from the blender 303. The output of the combination can be passed as a first input to the first subtractor 309. - The operation of generating the final noise offset value from the edge threshold value and the initial noise gain values is shown in
FIG. 8 by step 803. - Furthermore the logic circuitry can in some embodiments comprise a first or
motion subtracter 309. The first subtracter 309 can be configured to receive both the final noise offset value from the noise summer 302 and also a motion detection input from the motion detector 111 to find the difference between these values. The output of the first subtracter 309 can in some embodiments be passed to a clipping determiner 311. - The application of the final noise offset to the motion detection output is shown in
FIG. 8 by step 805. - In some embodiments the logic comprises a
clipping determiner 311. The clipping determiner can be configured to receive the difference between the motion detection value and the final offset value and clip it such that values below 0 are set to 0 and values above 63 are set to 63, in order that the value does not overflow or underflow the value range. The output of the clipper 311 can be passed to the second multiplier 317. - In some embodiments the logic circuitry comprises a
gain multiplexer 313. The gain multiplexer can be configured to receive a series of input gains such as shown in FIG. 4 by gain 0, gain 1, gain 2 and gain 3 as input gain data values which are selectable dependent on the selection inputs provided from the flesh tone flag value and the big noise flag value. The gains (gain 0, gain 1, gain 2 and gain 3) are for example user defined gains defining the gains for the different combinations of flesh tone and big noise status values (00, 01, 10, 11). The output of the gain multiplexer can be passed to the second blender 315. - In some embodiments the logic circuitry further comprises an
M2 multiplier 314. The M2 multiplier can be configured to receive the gain multiplexer 313 output and multiply the gain output value by the user M2_gain value, which then can be output to the second input of the second blender 315. - In some embodiments the logic circuitry comprises a
second blender 315 configured to receive as a first data input the output of the 4 to 1 gain multiplexer 313 and as a second data input the gained value (multiplied by the user input M2_gain). This blending operation can further be controlled based on the noise detection control signal (ND_CNTRL). The second blender 315 can be configured to output a blended value of these two inputs to the second multiplier 317. The second multiplier 317 can be configured to multiply the clipper 311 values and the second blender 315 values and pass the multiplied value to a further clipper 318. - In some embodiments the logic circuitry comprises a
further clipper 318. The further clipper can be configured to receive the output of the second multiplier and clip it to a value of 255. The output of the second clipper can in some embodiments be passed to a look-up table 319. - In some embodiments a
limit multiplexer 321 can be configured to receive a series of associated limit values such as shown in FIG. 4 by the associated limit values limit 0, limit 1, limit 2, and limit 3 as data input values selectable by the selection input provided from the big noise and the flesh tone status value combinations. The output of the limit multiplexer 321 can be passed to the look-up table 319 as a limit input. - The circuit logic in some embodiments comprises a look-up table (LUT) 319 configured to receive the input product of the clipped motion value and to generate a non-linear TNR K control parameter. The generation of the K parameter can thus in some embodiments be non-linear based on the LUT filling. In some embodiments the LUT can perform any suitable conversion such as for example a scale/inverse/linear/non-linear conversion of the motion value to a K value. The LUT can in some embodiments be fully user programmable.
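The clipped-motion, gain, look-up table and normalisation stages of the K logic described in this section can be sketched end to end as follows; the LUT filling, the parameter values and the final blend form are assumptions for illustration, since the real LUT is user programmable and the patent does not give the blend equation explicitly:

```python
# Illustrative end-to-end sketch of the K derivation: the motion/offset
# difference is clipped to 0..63 (clipping determiner 311), scaled and
# clipped to 255 (multiplier 317 and clipper 318), converted through a
# LUT (319), and normalised by 1023 (the divider described later in
# this section). The linear-ramp LUT filling below is a hypothetical
# choice: higher motion yields higher K, i.e. less temporal filtering.
LUT = [min(1023, v * 5) for v in range(256)]

def k_value(motion, noise_offset, gain):
    clipped = max(0, min(63, motion - noise_offset))  # 6-bit clip
    product = min(255, int(clipped * gain))           # 8-bit clip
    return LUT[product] / 1023.0                      # normalised K

k = k_value(motion=40, noise_offset=8, gain=4.0)
print(round(k, 3))  # 0.626

# An assumed blend using K (K = 1 keeps the current pixel, K = 0
# repeats the previous noise reduced pixel):
current, previous = 180, 100
print(round(k * current + (1.0 - k) * previous, 2))  # 150.05
```

The two-branch clip-to-limit versus multiply-and-divide selection (clip_select) is omitted here; only the divider branch is modelled.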
- The output of the look-up table 319 can be passed to a
limit clipper 325 and a third "limit" multiplier 323. - The application of the look-up table to the output of the multiplier is shown in
FIG. 8 by step 809. - In some embodiments the logic circuit comprises a
third multiplier 323 configured to receive the output of the look-up table 319 and output the product of this combination to a divider 327. - Furthermore in some embodiments the logic circuitry comprises a
limit clipper 325 configured to clip the output of the look-up table 319 to the limit value and output this value to a clip selection multiplexer 329. - In some embodiments the logic circuitry comprises a
divider 327 configured to receive the output of the third or "limit" multiplier 323 and divide the input by a value of 1023. The output of the divider 327 can be passed as a second input to the clip selection multiplexer 329. - In some embodiments the logic circuitry comprises a
clip selection multiplexer 329 configured to receive the output of the limit clipper 325 as a first data input, the output of the divider 327 as a second data input and further to receive a selection input shown in FIG. 4 by the clip selection input (clip_select). The output of the clip selection multiplexer 329 can be output to a logic processor 331. - The operation of clipping to the limit or applying a second gain at the second multiplier and selection of one of these is shown in
FIG. 8 by step 811. - In some embodiments the logic circuitry further comprises a minimiser/
low pass filter 331. The minimiser/low pass filter 331 can be configured to receive as a data input the output of the clip selection multiplexer 329 and further receive a low pass filter selection signal to determine whether the output of the minimiser/low pass filter processor 331 is the 3-tap horizontal minimum function of the input signal or a low pass filtered version. The output of the minimiser/low pass filter 331 can be the K value. - The operation of minimising or low pass filtering is shown in
FIG. 8 by step 813. Furthermore the outputting of the K value is shown in FIG. 8 by step 815. - With respect to
FIG. 5, a series of examples of the error correction available by performing embodiments of the application are shown. For example in FIG. 5 the first third of the image 401 is shown as the K parameter dump without the use of a compression algorithm such as REMPEG, the second third of the image 403 is the same image K parameter dump after compression showing motion errors occurring through the field/frame, and the final third of the image 405 is the output following error correction as shown in embodiments of the application. - In general, the various embodiments of the invention may be implemented in hardware or special purpose circuits, software, logic or any combination thereof. For example, some aspects may be implemented in hardware, while other aspects may be implemented in firmware or software which may be executed by a controller, microprocessor or other computing device, although the invention is not limited thereto. While various aspects of the invention may be illustrated and described as block diagrams, flow charts, or using some other pictorial representation, it is well understood that these blocks, apparatus, systems, techniques or methods described herein may be implemented in, as non-limiting examples, hardware, software, firmware, special purpose circuits or logic, general purpose hardware or controller or other computing devices, or some combination thereof.
- The embodiments of this application may be implemented by computer software executable by a data processor of the mobile device, such as in the processor entity, or by hardware, or by a combination of software and hardware. Further in this regard it should be noted that any blocks of the logic flow as in the Figures may represent program steps, or interconnected logic circuits, blocks and functions, or a combination of program steps and logic circuits, blocks and functions. The software may be stored on such physical media as memory chips, or memory blocks implemented within the processor, magnetic media such as hard disk or floppy disks, and optical media such as for example DVD and the data variants thereof, CD.
- The memory may be of any type suitable to the local technical environment and may be implemented using any suitable data storage technology, such as semiconductor-based memory devices, magnetic memory devices and systems, optical memory devices and systems, fixed memory and removable memory. The data processors may be of any type suitable to the local technical environment, and may include one or more of general purpose computers, special purpose computers, microprocessors, digital signal processors (DSPs), application specific integrated circuits (ASIC), gate level circuits and processors based on multicore processor architecture, as non-limiting examples.
- Embodiments of the inventions may be practiced in various components such as integrated circuit modules. The design of integrated circuits is by and large a highly automated process. Complex and powerful software tools are available for converting a logic level design into a semiconductor circuit design ready to be etched and formed on a semiconductor substrate.
- Programs, such as those provided by Synopsys, Inc. of Mountain View, Calif. and Cadence Design, of San Jose, Calif. automatically route conductors and locate components on a semiconductor chip using well established rules of design as well as libraries of pre-stored design modules. Once the design for a semiconductor circuit has been completed, the resultant design, in a standardized electronic format (e.g., Opus, GDSII, or the like) may be transmitted to a semiconductor fabrication facility or “fab” for fabrication.
- The foregoing description has provided by way of exemplary and non-limiting examples a full and informative description of the exemplary embodiment of this invention. However, various modifications and adaptations may become apparent to those skilled in the relevant arts in view of the foregoing description, when read in conjunction with the accompanying drawings and the appended claims. However, all such and similar modifications of the teachings of this invention will still fall within the scope of this invention as defined in the appended claims.
Claims (47)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/250,530 US8774549B2 (en) | 2011-09-30 | 2011-09-30 | Compression error handling for temporal noise reduction |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/250,530 US8774549B2 (en) | 2011-09-30 | 2011-09-30 | Compression error handling for temporal noise reduction |
Publications (2)
Publication Number | Publication Date |
---|---|
US20130083245A1 true US20130083245A1 (en) | 2013-04-04 |
US8774549B2 US8774549B2 (en) | 2014-07-08 |
Family
ID=47992254
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/250,530 Active 2032-07-06 US8774549B2 (en) | 2011-09-30 | 2011-09-30 | Compression error handling for temporal noise reduction |
Country Status (1)
Country | Link |
---|---|
US (1) | US8774549B2 (en) |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6184974B1 (en) * | 1999-07-01 | 2001-02-06 | Wavefront Sciences, Inc. | Apparatus and method for evaluating a target larger than a measuring aperture of a sensor |
US6388736B1 (en) * | 1999-11-15 | 2002-05-14 | Asm Lithography B.V. | Imaging method using phase boundary masking with modified illumination |
US6461064B1 (en) * | 1996-09-10 | 2002-10-08 | Benjamin Patrick Leonard | Service station assembly for a drum-based wide format print engine |
US6487307B1 (en) * | 1994-11-30 | 2002-11-26 | Isoa, Inc. | System and method of optically inspecting structures on an object |
US6563566B2 (en) * | 2001-01-29 | 2003-05-13 | International Business Machines Corporation | System and method for printing semiconductor patterns using an optimized illumination and reticle |
US6793390B2 (en) * | 2002-10-10 | 2004-09-21 | Eastman Kodak Company | Method for automatic arrangement determination of partial radiation images for reconstructing a stitched full image |
US6817982B2 (en) * | 2002-04-19 | 2004-11-16 | Sonosite, Inc. | Method, apparatus, and product for accurately determining the intima-media thickness of a blood vessel |
US8031193B1 (en) * | 2007-01-25 | 2011-10-04 | Rockwell Collins, Inc. | Dynamic light shading in terrain rendering applications |
US8443461B2 (en) * | 2010-10-20 | 2013-05-14 | Frank Michael Ohuesorge | Interatomic force measurements using passively drift compensated non-contact in situ calibrated atomic force microscopy—quantifying chemical bond forces between electronic orbitals by direct force measurements at subatomic lateral resolution |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20160275897A1 (en) * | 2015-03-17 | 2016-09-22 | Apple Inc. | Content-driven slew rate control for display driver |
US9818367B2 (en) * | 2015-03-17 | 2017-11-14 | Apple Inc. | Content-driven slew rate control for display driver |
Also Published As
Publication number | Publication date |
---|---|
US8774549B2 (en) | 2014-07-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110545413B (en) | Method and apparatus for performing tone mapping of high dynamic range video | |
US8098329B2 (en) | Image determination apparatus, image determination method, and program, and image processing apparatus, image processing method, and program | |
US8305397B2 (en) | Edge adjustment method, image processing device and display apparatus | |
US6600517B1 (en) | System and method for improving the sharpness of a video image | |
JP6757890B2 (en) | Signal processors, display devices, signal processing methods, and programs | |
US7995146B2 (en) | Image processing apparatus and image processing method | |
JP5089783B2 (en) | Image processing apparatus and control method thereof | |
US20080123984A1 (en) | System and method for efficiently enhancing videos and images | |
JP5056242B2 (en) | Image determination apparatus, image determination method, and program | |
US8446532B2 (en) | Image processing apparatus for improving sharpness and image processing method | |
US7903126B2 (en) | Image processing apparatus and image processing method thereof | |
US8159617B2 (en) | Universal, highly configurable video and graphic measurement device | |
US8774549B2 (en) | Compression error handling for temporal noise reduction | |
JP2009225349A (en) | Image signal processing apparatus, display, and image signal processing method | |
US8224120B2 (en) | Image signal processing apparatus and image signal processing method | |
US10109040B2 (en) | Image processing apparatus, image processing method, and non-transitory computer-readable storage medium | |
JP4506548B2 (en) | Video processing apparatus and video display apparatus | |
US20100157161A1 (en) | Gamma Correction Apparatus and Method | |
US20170278286A1 (en) | Method and electronic device for creating title background in video frame | |
US9615072B2 (en) | Adaptive PAL field comber | |
WO2012073865A1 (en) | Image processing device, image processing method, image processing program, and display device | |
CN105141870A (en) | Television signal processing method and television signal processing device | |
Cho et al. | Color transient improvement with transient detection and variable length nonlinear filtering | |
JP2004266387A (en) | Video signal processor and method therefor | |
JP2010109858A (en) | Image processing apparatus and image processing method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: STMICROELECTRONICS, INC., TEXAS
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:GOPALAKRISHNA, VATSALA;ANANTHAPURBACCHE, RAVI;SWARTZ, PETER;REEL/FRAME:027375/0460
Effective date: 20111116
Owner name: STMICROELECTRONICS PVT LTD., INDIA
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:GOPALAKRISHNA, VATSALA;ANANTHAPURBACCHE, RAVI;SWARTZ, PETER;REEL/FRAME:027375/0460
Effective date: 20111116
|
AS | Assignment |
Owner name: STMICROELECTRONICS INTERNATIONAL N.V., NETHERLANDS
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:STMICROELECTRONICS PVT LTD.;REEL/FRAME:032655/0562
Effective date: 20140408
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551)
Year of fee payment: 4
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY
Year of fee payment: 8
|
AS | Assignment |
Owner name: STMICROELECTRONICS INTERNATIONAL N.V., SWITZERLAND
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:STMICROELECTRONICS, INC.;REEL/FRAME:068433/0883
Effective date: 20240627