US20080095433A1 - Signal intensity range transformation apparatus and method
- Publication number: US20080095433A1 (application US 12/002,674)
- Authority: US (United States)
- Legal status: Abandoned
Classifications
- G06T5/70—Denoising; Smoothing
- G06T5/40—Image enhancement or restoration using histogram techniques
- G06T5/90—Dynamic range modification of images or parts thereof
- G06T5/94—Dynamic range modification of images or parts thereof based on local image properties, e.g. for local contrast enhancement
- H04N23/84—Camera processing pipelines; Components thereof for processing colour signals
- H04N5/20—Circuitry for controlling amplitude response
- G06T2207/10016—Video; Image sequence
- G06T2207/20032—Median filtering
- H04N9/64—Circuits for processing colour signals
Definitions
- FIG. 1 is a simplified block diagram of a device in accordance with the present invention, including a logical device;
- FIG. 2 is a simplified functional block diagram of a process performed by the logical device of FIG. 1, the process having a color transform, an image pre-contrast conditioner, a contrast enhancement, an image post-contrast conditioner, and a color inverse transform;
- FIG. 3 is a simplified functional block diagram of the image pre-contrast conditioner of FIG. 2, the pre-contrast conditioner comprising a system noise reduction filter, a deblurring module, and a pixel noise reduction module;
- FIGS. 4(a)-(f) depict various exemplary kernel shapes that can be used with the system noise reduction filter, deblurring module, and pixel noise reduction module of FIG. 3;
- FIG. 5 is a simplified functional block diagram of the contrast enhancement block of FIG. 2, the contrast enhancement comprising an equalized lookup table construction block and an enhanced luminance image generation block;
- FIG. 6 is a simplified functional block diagram of the equalized lookup table construction block of FIG. 5;
- FIG. 7 is a simplified functional block diagram of another embodiment of the contrast enhancement block of FIG. 2;
- FIGS. 8 and 9 are simplified functional block diagrams of yet another embodiment of the contrast enhancement block of FIG. 2.
- each of the FIGURES depicts a simplified block diagram wherein each block provides hardware (i.e., circuitry), firmware, software, or any combination thereof that performs one or more operations.
- Each block can be self-contained or integral with other hardware, firmware, or software associated with one or more other blocks.
- FIG. 1 depicts a device 10 for enhancing, through transformation, the luminance range of an image input signal.
- the device 10 includes an input connector 12 , logic circuitry 14 , and an output connector 16 .
- the connectors 12 and 16 are mounted in a conventional manner to an enclosed housing 13 , constructed of a metal, metal alloy, rigid plastic, or combinations of the above, that contains the logic circuitry 14 .
- one or more of the modules described herein are performed by the logic circuitry 14 comprising of one or more integrated circuits, commonly referred to as “ICs,” placed on one or more printed circuit boards mounted within the housing 13 .
- the device 10 is a stand-alone or embedded system.
- the term “stand-alone” refers to a device that is self-contained, one that does not require any other devices to perform its primary functions.
- a fax machine is a stand-alone device because it does not require a computer, printer, modem, or other device.
- the device 10 does not need to provide ports for connecting a disk drive, display screen, or a keyboard.
- the device 10 could provide one or more ports (e.g., RS-232) for supporting field interactivity.
- an embedded system is a system that is not a desktop, workstation, or mainframe computer designed to admit facile human interactivity.
- Another delineator between embedded and "desktop" systems (including workstations, etc.) is that desktop systems present the status of the computer state to the human operator via a display screen; the internal state of the computer is represented by icons on the screen, and thus the person can interact with the computer's internal state via control of the icons.
- Such a computer uses a software layer called an “operating system” through which the human operator can interact with the internal state of the computer.
- in an embedded system, while it is performing its work function, the human operator cannot interact with the work process except to stop it.
- the input connector 12 provides for operably connecting the device 10 to an image input signal 17 generated by a video camera (not shown), or the like, having a video output.
- the input connector 12 consists of an F connector, BNC connector, RCA jacks, or the like.
- the input connector 12 is operably connected to the logic circuitry 14 by way of a conductive path attached to the input connector and the printed circuit board contained within the housing 13 .
- the logic circuitry could also be coupled through other than a conductive path such as through optical coupling.
- the image input signal 17 is a conventional analog video signal containing a plurality of still images or fixed image frames taken in a sequential manner.
- Each frame provided by the image input signal is also referred to herein as an image input frame.
- Each image or frame includes data regarding an array of pixels contained therein.
- the output connector 16 of the device 10 provides for connecting the device to an output device such as a monitor (not shown). Like the input connector 12 , the output connector 16 consists of any means for outputting the signal to other devices such as, an F connector, BNC connector, RCA jacks, or the like.
- the output connector 16 is operably connected to the logic circuitry 14 by way of a conductive or coupled path attached to the output connector and the printed circuit board contained within the housing 13 .
- the output signal provided by connector 16, and thus by the device 10, includes data resulting from transforming or other operations carried out on the received image input signal 17 (the "transformed output signal").
- the output signal can include a plurality of image output frames and be formatted as a conventional analog video signal, a digital signal, or the like.
- the output signal can be in a format as defined by NTSC, VGA, HDTV, or other desired output formats.
- the logic circuitry 14 within the device 10 includes, inter alia, circuitry configured to transform the variable range of grey-scale values in the image input signal 17 received by the input connector 12 .
- the logic circuitry 14 includes a logical device 18 with corresponding support circuitry 20 , a video decoder 22 , and a video encoder 23 .
- the support circuitry 20 preferably includes a microcontroller 24 , a read only memory 26 , and a random access memory 28 comprising a synchronous dynamic random access memory.
- an optional switch 29 is provided for configuring the logic circuitry 14 within the device 10 .
- the switch 29 is operably connected to the microcontroller 24 and logical device 18 .
- the switch 29 allows a user to enable or disable features or processes provided by the logic circuitry 14 within the device 10.
- the switch 29 consists of a conventional DIP switch.
- the configuration of the circuitry 14 within the device 10 is hardwired, or can be set via software commands, instead of using a switch.
- video decoder 22 is operably connected to the input connector 12 and the logical device 18 . Accordingly, the video decoder 22 receives the image input signal 17 that can consist of live video from a television broadcast, a video tape, a camera, or any other desired signals containing or representing image content.
- the video decoder 22 preferably is a conventional device for tracking the video image input signal 17 , digitizing the input signal (if required), separating out the brightness and color information from the input signal, and forwarding the digital video signal 30 to the logical device 18 on a frame by frame basis.
- the input image signal 17 received by the video decoder 22 is an analog signal formatted in a predefined manner such as PAL, NTSC, or another conventional format.
- the video decoder 22 converts the analog signal 17 into a digital video signal 30 using a conventional analog-to-digital conversion algorithm.
- the digital video signal 30 provided by the video decoder 22 includes luminance information and color information in any conventional manner such as, specified by YUV format, YCbCr format, super video, S-video, or the like.
- the digital video signal 30 can have the luminance information embedded therein such as that provided by digital RGB, for example.
- the video decoder 22 is capable of converting a plurality of different analog video formats into digital video signals suitable for processing by the logical device 18 as described in detail further herein.
- the microcontroller 24 configures the video decoder 22 for converting the image input signal 17 into a digital video signal 30 having a specific format type (e.g., CCIR601, RGB, etc.). If desired, the microcontroller 24 determines the format of the image input signal 17 , and configures the video decoder 22 accordingly. The determination can be accomplished by the microcontroller 24 checking the user or device manufacturer configured settings of the DIP switch corresponding with the format of the image input signal 17 expected to be received. Alternatively, the video decoder 22 can include circuitry for automatically detecting the format of the image input signal 17 , instead of using preset DIP switch settings.
- the gain of the video decoder 22 is set to reduce overall contrast on the luminance for reducing the probability of image saturation.
- the video decoder 22 provides a fixed bit resolution output range (e.g., 8 bit, 16 bit, 32 bit, etc.) and a digitizer maps the image input signal in a conventional manner for effective use of the resolution output range.
- the full range of the digitizer output is utilized.
- the logical device 18 receives digital video signals 30 and provides digital output signals 31 (i.e., transformed pixel array or frame data) in response to the digital video signals 30 .
- the device 10 receives analog image input signals 17 that are converted by the video decoder 22 into digital video signals 30 for manipulation by the logical device 18 .
- the digital output 31 of the logical device 18 can be converted (if desired) into an analog output by the video encoder 23 connected between the logical device 18 and the output connector 16 .
- the logical device 18 consists of a conventional field programmable gate array (FPGA).
- the logical device 18 can alternatively consist of a Digital Signal Processor (DSP), a microcontroller, an Application Specific Integrated Circuit (ASIC), or other suitable device.
- instead of receiving analog input signals 17, the device 10 can receive digital image input signals that are provided, without any conversion, to the logical device 18.
- the video decoder 22 can be omitted.
- the logical device 18 is operably connected between connectors 12 and 16 . Accordingly, an image input signal enters the input connector 12 , is modified by the logical device 18 , and then exits via the output connector 16 as a transformed output signal.
- the frame output rate of the device 10 is substantially equal to the frame input rate to the device.
- the device 10 provides an output of thirty frames per second in response to a frame input rate of thirty frames per second.
- the logical device 18 provides a sequence of image processing functional modules or blocks 32 within a process as illustrated in FIG. 2 .
- the blocks 32 represent a color transform 34 , an image pre-contrast conditioner 36 , a contrast enhancement 38 , an image post-contrast conditioner 40 , and a color inverse transform 42 .
- Each of the modules 32 preferably performs a specific functional step or plurality of functional steps as described in detail further herein.
- static data for configuring the logical device 18 is stored within the read only memory 26 operably connected thereto.
- the read only memory 26 provides for storing data that is not alterable by computer instructions.
- a high speed random access memory is provided by the support circuitry 20 to reduce performance bottlenecks.
- this high speed memory is used as a field buffer.
- the color transform 34 provides for separating the luminance from the digital video signal 30 , as desired.
- the digital video signal 30 provided by the video decoder 22 consists of a digital RGB signal.
- luminance information is not separated from color information.
- the color transform 34 provides for separating the luminance information for each pixel, within each frame, from the digital video signal 30 .
- the color transform 34 uses a conventional algorithm for converting a digital RGB video input signal into a YUV format signal, YCbCr format signal, or other desired format signal. Accordingly, the color transform 34 provides an output comprising three channels: luminance 43, and U-V or other values ascribed per pixel location.
- the digital video signal 30 provided by the video decoder 22 consists of a super video or S-video.
- the digital video signal 30 consists of two different signals: chrominance and luminance. Accordingly, the color transform 34 can be omitted from the process of FIG. 2 because the luminance 43 is separately provided by the S-video input.
- a separate channel or channels comprising the steps to be discussed below with respect to luminance information, could be provided to any one of RGB/chrominance values independently for transforming the values and redistributing the color values in the image.
- the luminance information 43 contained within the digital video signal 30 is received by the image pre-contrast conditioner 36 .
- the image pre-contrast conditioner 36 can include a video time integration module or block 44 , a system noise reduction module or block 46 , a deblurring module or block 48 , and a pixel noise reduction module or block 50 .
- any one or all of the blocks within FIG. 3 can be omitted including, but not limited to, the video time integration module 44 .
- the video time integration module 44 consists of a low-pass filter for noise reduction of the luminance signal 43 .
- each frame includes an array of pixels wherein each pixel has a specific i,j coordinate or address.
- Noise reduction by the video time integration module 44 is preferably achieved by combining the current frame (n) with the immediately preceding frame (n−1) stored by the logical device 18 within the memory 28.
- from each pixel c(i,j) of the current frame (n), the corresponding pixel p(i,j) from the preceding captured frame (n−1) is subtracted to result in a difference d(i,j).
- the absolute value of the difference d(i,j) is determined and compared to a predetermined threshold value. If the absolute value of the difference d(i,j) is greater than the predetermined threshold, then the time integration output signal 45 of the video time integration module 44 is c(i,j). Otherwise, the time integration output signal 45 of the video time integration module 44 is the average of c(i,j) and p(i,j).
- the video time integration module 44 can operate such that object motion, which ordinarily creates large pixel differences, is represented by current pixels while background differences due to noise are suppressed by the averaging. Accordingly, the video time integration module 44 provides for avoiding the image blurring that would result from low-pass spatial domain filtering on the current image.
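- The thresholded temporal averaging just described reduces to a few array operations; the sketch below is illustrative only, since the patent leaves the threshold value unspecified (the value used here is an assumption) and frames are modeled as 8-bit numpy arrays.

```python
import numpy as np

def time_integrate(current, previous, threshold=16):
    """Thresholded temporal averaging (video time integration sketch).

    Where the frame-to-frame difference is large (likely object motion),
    the current pixel c(i,j) is passed through; elsewhere the current and
    previous pixels are averaged to suppress background noise.
    threshold=16 is an assumed value; the patent says only "predetermined".
    """
    c = current.astype(np.int16)   # widen to avoid uint8 wrap-around
    p = previous.astype(np.int16)
    d = np.abs(c - p)              # |c(i,j) - p(i,j)|
    out = np.where(d > threshold, c, (c + p) // 2)
    return out.astype(np.uint8)
```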
- the median filter or system noise reduction module 46 receives the time integration output signal 45 (i.e., transformed pixel frame data) and, in response thereto, provides for reducing pixel noise caused by the system sensor (e.g., camera), electromagnetic, and/or thermal noise.
- the median filter 46 can include a user selectable level of noise reduction by selecting from a plurality of filter kernels via DIP switch 29 , or the like, operably coupled to the logical device 18 .
- the filter kernel applied by the system noise reduction module 46 to the current frame, as preferably modified by time integration module 44, is designed to achieve noise reduction while minimizing the adverse effect of the filter on perceived image quality. For instance, a 3×1 kernel (FIG. 4(a)) provides for a reduction in row noise due to the kernel shape, while having minimal adverse effect on the image.
- a 5-point or "plus" kernel (FIG. 4(b)) offers symmetric noise reduction with low adverse impact to image quality.
- a hybrid median filter (FIG. 4(c)) offers symmetric noise reduction with lower adverse impact than the full 3×3 kernel.
- a user or the manufacturer can select from the kernel shapes shown in FIGS. 4 ( a )-( c ).
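- A median-filter sketch with selectable footprints follows. The exact kernel layouts of FIGS. 4(a)-(c) are not reproduced in this text, so the footprints here (a 3×1 line, a five-point "plus", and a plus/diagonal hybrid) are assumptions consistent with the description.

```python
import numpy as np
from scipy.ndimage import median_filter

PLUS = np.array([[0, 1, 0],
                 [1, 1, 1],
                 [0, 1, 0]], dtype=bool)   # five-point "plus" footprint
CROSS = np.array([[1, 0, 1],
                  [0, 1, 0],
                  [1, 0, 1]], dtype=bool)  # diagonal footprint for the hybrid

def system_noise_filter(frame, kernel="plus"):
    if kernel == "3x1":
        # line footprint; the orientation for row-noise reduction is assumed
        return median_filter(frame, footprint=np.ones((3, 1), dtype=bool))
    if kernel == "plus":
        return median_filter(frame, footprint=PLUS)
    if kernel == "hybrid":
        # hybrid median: median of the two sub-medians and the original pixel
        m1 = median_filter(frame, footprint=PLUS)
        m2 = median_filter(frame, footprint=CROSS)
        return np.median(np.stack([m1, m2, frame]), axis=0).astype(frame.dtype)
    raise ValueError(kernel)
```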
- the deblurring module or block 48 provides for counteracting image blurring attributable to the point spread function of the imaging system (not shown).
- the deblurring module or block 48 uses a Laplacian filter to sharpen the output 47 (i.e., transformed pixel frame data) received from the filter 46 .
- the deblurring module or block 48 can include a user-selectable level of image sharpening from a plurality of Laplacian filter center pixel weights.
- the user or manufacturer can select the center pixel weight via the DIP switch 29 operably coupled to the logical device 18 .
- the Laplacian filter uses a 3×3 kernel (FIG. 4(d)) with selectable center pixel weights of 9, 10, 12, or 16, requiring normalization divisors of 1, 2, 4, or 8, respectively.
- the remaining kernel pixels are preferably assigned a weight of −1.
- the result of applying the kernel is normalized by dividing the convolution result for the current pixel element by the normalization divisor.
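- A minimal sketch of this Laplacian sharpening step, assuming 8-bit luminance frames: note that the divisor equals the kernel sum (center weight minus the eight −1 taps), which matches the 1, 2, 4, 8 divisors listed above.

```python
import numpy as np
from scipy.ndimage import convolve

def laplacian_sharpen(frame, center_weight=9):
    """3x3 Laplacian-style deblurring: center weight in {9, 10, 12, 16},
    remaining taps -1, result divided by the normalization divisor
    (the kernel sum: 1, 2, 4 or 8)."""
    kernel = -np.ones((3, 3), dtype=np.int32)
    kernel[1, 1] = center_weight
    divisor = center_weight - 8
    out = convolve(frame.astype(np.int32), kernel) // divisor
    return np.clip(out, 0, 255).astype(np.uint8)
```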
- the deblurring module or block 48 can use other filters in place of a Laplacian type filter.
- a general filter with kernel as shown in FIG. 4 ( e ) can be used for deblurring.
- the output 49 (i.e., transformed pixel frame data) of the deblurring module 48 emphasizes isolated pixel noise present in the input image signal.
- the pixel noise reduction module 50 provides for reducing isolated pixel noise emphasized within the output 49 of the deblurring module 48 .
- the pixel noise reduction module 50 employs a small-kernel median filter (i.e., a small pixel neighborhood such as 3×3) to reduce isolated pixel noise.
- the pixel noise reduction filter or module 50 can provide a user or manufacturer selectable level of noise reduction by allowing for the selection from a plurality of filter kernels.
- the DIP switch 29, software, or the like, is used to select from a five-point "plus patterned" kernel (FIG. 4(b)) or a hybrid median filter (FIG. 4(c)).
- the contrast enhancement module or block 38 enhances the contrast of the conditioned luminance signal 53 (i.e., transformed pixel frame data) provided by the image pre-contrast conditioner 36 in response to the original luminance input signal 17 .
- the contrast enhancement module 38 directly receives the image input signal 17 , and in response thereto, generates an enhanced image signal 55 .
- the contrast enhancement module or block 38 preferably includes a sample area selection module or block 52 , an equalized lookup table construction module or block 54 , and an enhanced luminance generation module or block 56 .
- "lookup table" and "equalized lookup table" both refer to a lookup table derived from grey-level values or a lookup table representative of the inversion of the cumulative distribution function derived from an accumulated histogram.
- the sample area selection module or block 52 receives the conditioned luminance image signal 53 provided by the image pre-contrast conditioner 36 ( FIGS. 2 and 3 ) and a selected portion of the input image, via input 57 , to be enhanced.
- the sample area selection module provides a selected conditioned luminance image signal 59 comprising conditioned luminance pixel data from the pixels within the selected region.
- the selected portion of the image can be selected in a conventional manner such as by using a point-and-click interface (e.g., a mouse) to select a rectangular sub-image of an image displayed on a monitor (not shown).
- the selected portion of the image can alternatively be selected by automated means, such as by a computer, based upon movement detection within a portion of the image or other criteria.
- the equalized look-up table construction module or block 54 receives the selected conditioned image signal 59 from the sample area selection module or block 52 and, in response thereto, provides for creating a lookup table 61 that is used in generating the enhanced image signal 55 (i.e., transformed pixel frame data).
- the selected conditioned luminance signal 59 received from the sample area selection module or block 52 can be a sub-image comprising a portion of the overall conditioned luminance image signal 53 received from the image pre-contrast conditioner 36 ( FIGS. 2 and 3 ).
- the selected conditioned luminance signal 59 can consist of the complete conditioned luminance signal 53 , without any apportioning by the sample area selection module 52 .
- the equalized lookup table construction module, or block 54 of FIG. 5 includes an input smoothing module or block 58 , a histogram accumulation module or block 60 , a histogram smoothing module or block 62 , a cumulative distribution function integration module or block 64 , a saturation bias removal module or block 66 , a linear cumulative distribution function scaling module or block 68 , and a lookup table construction module or block 70 .
- the input image smoothing module, or block 58 receives the selected conditioned luminance image signal 59 and, in response thereto, generates a smoothed conditioned luminance image signal on a frame-by-frame basis.
- the input image smoothing module or block 58 applies a Gaussian filter with a traditional 3×3 kernel to the selected signal 59 for smoothing the image within each frame.
- a histogram accumulation module, or block 60 provides for generating a histogram of the selected conditioned image signal 59 , as modified by the input image smoothing module.
- the histogram accumulation module, or block 60 accumulates a conventional histogram from the received input. Accordingly, the histogram can be used to provide a graph of luminance levels to the number of pixels at each luminance level. Stated another way, the histogram reflects the relationship between luminance levels and the number of pixels at each luminance level.
- the histogram accumulation module or block 60 includes a plurality of data storage locations or “bins” for tracking the number of occurrences of each grey-level occurring in the received input signal, for example, such as that received by block 58 .
- the histogram accumulation module or block 60 includes a bin for every discrete grey-level in the input signal.
- the histogram accumulation module or block 60 determines the number of pixels in the input luminance image with the corresponding grey-level after being filtered, if desired. Accordingly, the histogram result for any given grey-level k is the number of pixels in the luminance image input having that grey-level.
- the histogram smoothing module or block 62 provides for reducing noise in the histogram created from the input image.
- the histogram smoothing module, or block 62 receives the histogram from the histogram accumulation, module or block 60 , and in response thereto, generates a smoothed histogram.
- the histogram smoothing module or block 62 applies a 5-point symmetric kernel (FIG. 4(f)) to filter the histogram calculated for each frame.
- the cumulative distribution function integration module or block 64 provides for integrating the histogram to create a cumulative distribution function.
- the cumulative distribution function integration module or block 64 receives the histogram from the histogram smoothing module 62 , or optionally from the histogram accumulation module 60 .
- the cumulative distribution function integration module 64 integrates the received histogram to generate a cumulative distribution function. It has been observed that integrating a smoothed histogram can result in a cumulative distribution function having an increased accuracy over a cumulative distribution function resulting from an unsmoothed histogram.
- the cumulative distribution function integration module or block 64 includes a plurality of data storage locations or “bins” which hold the cumulative distribution function result for each input grey-level.
- each bin is filled according to the recurrence CDF(k) = H(k) + CDF(k−1), where CDF(k) is the cumulative distribution function result for grey-level k, H(k) is the histogram value for grey-level k, and CDF(k−1) is the cumulative distribution function result for the grey-level one increment lower than grey-level k.
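- The accumulate-smooth-integrate chain of blocks 60, 62 and 64 reduces to a few array operations. The 5-point smoothing weights below are an assumption (a binomial kernel), since the FIG. 4(f) values are not reproduced in this text.

```python
import numpy as np

def equalization_cdf(image, levels=256):
    """Histogram accumulation, 5-point smoothing, and integration via
    CDF(k) = H(k) + CDF(k - 1) (realized here as a cumulative sum)."""
    hist = np.bincount(image.ravel(), minlength=levels).astype(np.float64)
    kernel = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0  # assumed weights
    hist = np.convolve(hist, kernel, mode="same")         # histogram smoothing
    return np.cumsum(hist)                                # integration step
```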
- the saturation bias identification module provides for identifying the grey-levels that bound the useful information in the luminance signal 43.
- the saturation bias identification module 66 receives the cumulative distribution function from the cumulative distribution function integration module 64 and determines the grey-levels at which the first unsaturated grey-level, kf, and the last unsaturated grey-level, kl, occur.
- the first unsaturated grey-level, kf, is determined by identifying the first grey-level k0 for which the cumulative distribution function returns a non-zero value.
- the saturation bias identification module 66 identifies kf as the sum of k0 plus one additional grey-level.
- the last unsaturated grey-level, kl, is determined by identifying the first grey-level kn for which the cumulative distribution function returns the number of pixels in the image.
- the grey-level kn is treated as saturated, and the saturation bias identification module or block 66 identifies kl as the difference of kn minus one additional grey-level.
- the useful grey-levels identified by the saturation bias identification module 66 are the range from, and including, k0 plus one additional grey-level through kn minus one additional grey-level.
- alternatively, the useful grey-levels are the range from, and including, k0 through kn minus one additional grey-level.
- alternatively, the useful grey-levels are the range from, and including, k0 plus one additional grey-level through kn.
- more generally, the useful grey-levels are the range from, and including, k0 plus X additional grey-level(s) through kn minus Y additional grey-level(s), wherein X and Y are whole numbers greater than zero.
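- In code, this bound-finding step amounts to two scans of the cumulative distribution function; x = y = 1 below reproduces the preferred kf = k0 + 1 and kl = kn − 1 variant.

```python
import numpy as np

def useful_grey_bounds(cdf, x=1, y=1):
    """Locate the grey-levels bounding the useful (unsaturated) range.

    k0: first grey-level whose CDF value is non-zero.
    kn: first grey-level whose CDF value reaches the total pixel count
        (compared against cdf[-1], which keeps this robust when the
        histogram was smoothed).
    kf = k0 + x and kl = kn - y step inward past the saturated levels.
    """
    k0 = int(np.argmax(cdf > 0))
    kn = int(np.argmax(cdf >= cdf[-1]))
    return k0 + x, kn - y
```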
- the linear cumulative distribution function scaling module or block 68 provides for scaling the portion of the cumulative distribution function corresponding to the unsaturated, useful grey-levels in the luminance input image across the entire range of cumulative distribution function grey-level inputs.
- the linear cumulative distribution function scaling module 68 receives the cumulative distribution function provided by the cumulative distribution function integration module 64, and the first unsaturated grey-level kf and the last unsaturated grey-level kl from the saturation bias identification module 66.
- the linear cumulative distribution function scaling module 68 includes a plurality of data storage locations or “bins” equal to the number of bins in the cumulative distribution function for holding the linearly mapped cumulative distribution function result for each input grey-level.
- each bin is computed as LCDF(k) = (CDF(k) − CDF(kf)) / (CDF(kl) − CDF(kf)), where CDF(k) is the cumulative distribution function result for grey-level k, and CDF(kf) and CDF(kl) are the cumulative distribution function results for kf and kl, respectively.
- Each linearly mapped cumulative distribution function result is stored in the bin corresponding to the current grey-level k. If LCDF(k) is negative for any input grey-level k, the linearly scaled cumulative distribution function result in the bin corresponding to the input grey-level k is set to zero.
- the determination of the cumulative distribution function output value can be calculated using known methodologies for improving computation ease.
- the numerator can be scaled by a multiplier, before the division operation, to obtain proper integer representation.
- the result of the division can then be inversely scaled by the same multiplier to produce the LCDF(k) value.
- the scale factor is an adequately large constant, such as the number of pixels in the input image.
- the lookup table construction module or block 70 generates a lookup table 61 that is used by the enhanced luminance image generation module ( FIG. 5 ) to produce the enhanced luminance image signal 55 .
- the lookup table construction module 70 receives the linearly scaled cumulative distribution function from the linear cumulative distribution function scaling module 68. In response to the scaled cumulative distribution function, the lookup table construction module 70 provides the lookup table 61.
- the lookup table 61 is constructed by multiplying each linearly scaled cumulative distribution function result by the number of discrete grey-levels in the output image resolution range which can be provided by the DIP switch 29 ( FIG. 1 ), software, or other desired means. If the multiplication result is greater than the maximum value in the output image resolution range for any input grey-level, the linearly scaled cumulative distribution function result for that input grey-level is set to the maximum value in the output image resolution range. Accordingly, for each grey-level input in the linearly scaled cumulative distribution function, the result is a value in the output image resolution range.
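- A sketch of the scaling and table-construction steps (blocks 68 and 70): the LCDF formula is the reconstruction given above, and floating point is used here for clarity in place of the integer multiplier trick.

```python
import numpy as np

def build_lookup_table(cdf, kf, kl, out_levels=256):
    """Linearly rescale the CDF over the useful range [kf, kl] and
    quantize to the output image resolution range."""
    lcdf = (cdf - cdf[kf]) / float(cdf[kl] - cdf[kf])
    lcdf = np.clip(lcdf, 0.0, None)        # negative results are set to zero
    lut = lcdf * out_levels                # scale to the output range
    return np.minimum(lut, out_levels - 1).astype(np.uint8)  # clamp to max
```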
- the enhanced luminance image generation module, or block 56 is responsive to the lookup table 61 and the conditioned luminance signal 53 .
- the luminance of each pixel within each input frame can be reassigned a different value based upon the lookup table corresponding to the same frame. Stated another way, the luminance value for each pixel in the conditioned luminance input image is used as an index in the look-up table, and the value at that index location is used as the luminance value for the corresponding pixel in the enhanced luminance output image.
- the reassigned frame data is then forwarded, preferably to image post contrast conditioner 40 , as the enhanced image signal 55 .
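- Chaining the sketches above end to end, the per-pixel remap of block 56 is a single indexing operation (the random frame below stands in for a conditioned luminance image):

```python
import numpy as np

rng = np.random.default_rng(0)
conditioned = rng.integers(40, 220, size=(480, 640), dtype=np.uint8)

cdf = equalization_cdf(conditioned)        # blocks 60/62/64 sketch
kf, kl = useful_grey_bounds(cdf)           # block 66 sketch
lut = build_lookup_table(cdf, kf, kl)      # blocks 68/70 sketch
enhanced = lut[conditioned]                # block 56: LUT indexing per pixel
```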
- the image post contrast conditioner 40 provides for filtering the enhanced image signal 55 to generate an enhanced conditioned image signal 72 .
- the conditioner 40 can apply to the image signal 55 one or more conventional imaging filters, including an averaging filter, a Gaussian filter, a median filter, a Laplace filter, a sharpen filter, a Sobel filter, and a Prewitt filter.
- the color inverse transform 42 generates the contrast enhanced digital video output signal 31 , having an expanded range, in response to the enhanced image signal 72 and the color information for the same frame.
- the enhanced image signal and color information are combined in a conventional manner wherein luminance is combined with color information.
- the color information can be stored within memory 28 , for example. Once the luminance has been enhanced, it can be re-combined with the color information stored in memory.
- in response to the digital video output signal 31, the video encoder 23 preferably provides an analog output signal via output connector 16.
- the video encoder 23 preferably utilizes conventional circuitry for converting digital video signals into corresponding analog video signals.
- in FIG. 7, an alternate embodiment of the contrast enhancement module is depicted.
- the last two digits of the reference numbers used therein correspond to like two-digit reference numbers used for like elements in FIGS. 5 and 6. Accordingly, no further description of these elements is provided.
- the median bin identification module or block 174 determines the median of the histogram provided by the histogram accumulation module 160 .
- the brightness midpoint value is calculated by the weighted midpoint calculation module or block 176 .
- the brightness midpoint value preferably is calculated as the weighted sum of the actual grey-level midpoint of the input frame and the median of the input distribution.
- the weights assigned to the grey-level midpoint and the median of the input distribution are configured as compile-time parameters.
- the image segmentation module, or block 178 receives the luminance image data for the selected sample area, via module 152 , and provides for segmenting the luminance image data into light and dark regimes and performing enhancement operations separately (i.e., luminance redistribution) on each of the regimes. Stated another way, the image segmentation module 178 assigns each pixel in the selected sample area to either the light or dark regime based on each pixel's relationship to the brightness midpoint value received from the midpoint calculation module 176 . Pixels with a brightness value greater than the midpoint are assigned to the light regime, and pixels with a brightness value less than the midpoint value are assigned to the dark regime.
- Pixel values assigned by the image segmentation module 178 are received by the lookup table construction modules, wherein lookup tables are constructed and forwarded to the enhanced luminance image generation module 156 for remapping of the conditioned luminance signal 53 .
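- A sketch of the weighted-midpoint segmentation of blocks 174-178 follows; the equal weights are an assumption, since the patent leaves them as compile-time parameters.

```python
import numpy as np

def split_regimes(sample, w_mid=0.5, w_med=0.5):
    """Blend the frame's actual grey-level midpoint with the median of
    its distribution, then split pixels into light and dark regimes.
    w_mid = w_med = 0.5 is an assumed weighting; pixels exactly equal to
    the midpoint fall into the dark regime here (the patent is silent)."""
    grey_mid = (int(sample.min()) + int(sample.max())) / 2.0
    median = float(np.median(sample))           # median of the distribution
    midpoint = w_mid * grey_mid + w_med * median
    light = sample > midpoint                   # mask of light-regime pixels
    dark = ~light                               # remaining pixels -> dark
    return midpoint, light, dark
```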
- contrast enhancement is provided by heuristic and algorithmic techniques, including the calculation of the image grey-level histogram and sample cumulative distribution function.
- maximum utilization of available grey-levels for representation of image detail is provided.
- Entropy is a measure of the information content of the image; generally, it is the average number of bits required to represent a pixel of the image. Accordingly, a completely saturated image containing only black and white can be represented by one bit per pixel (on and off), and thus the image entropy is one bit. A one-bit image contains less information than an eight-bit image with 256 shades of grey.
- optimized entropy is provided while maintaining a monotonic grey-level relationship, wherein "monotonic" generally means that the precedence order of the input grey-levels is not changed by the desired grey-level transformation employed.
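- For reference, the entropy being discussed is the Shannon entropy of the grey-level distribution, H = −Σ p(k)·log2 p(k) bits per pixel; a minimal sketch:

```python
import numpy as np

def image_entropy(image, levels=256):
    """Average bits per pixel. A saturated two-level (black/white) image
    yields at most 1 bit; a uniform 256-level histogram yields 8 bits."""
    p = np.bincount(image.ravel(), minlength=levels) / image.size
    p = p[p > 0]                     # skip empty bins (0 * log 0 -> 0)
    return float(-(p * np.log2(p)).sum())
```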
- contrast enhancement includes calculation of the sample cumulative distribution function of the image grey-levels.
- the inverted sample cumulative distribution is applied to map the input grey-levels. Because the inverted sample cumulative distribution function is used, the resulting transform approximates uniformly distributed grey levels within the input range.
- the uniform density histogram generally has the maximum entropy.
- when the image is saturated, the histogram has a pathology: an inordinate number of pixels are black, white, or both. Direct calculation of the sample cumulative distribution function produces large gaps at the endpoints of the grey-level transform. Such gaps mean that the maximum number of grey-level values is not being used, and the resulting output distribution does not closely approximate the desired uniform distribution. Moreover, the resulting image will contain broad areas of a single grey-level (for both black and white saturation). Sometimes such areas are referred to as "puddles."
- FIGS. 8 and 9 are simplified functional block diagrams.
- the contrast enhancement block diagrams of FIGS. 8 and 9 include: an identity lookup table builder 211 , a global equalized lookup table builder 213 , a plurality of zone equalized lookup table builders 215 , 217 , 219 and 221 ; a plurality of non-linear transform blocks 223 , 225 , 227 , 229 and 231 ; a global weight functional block 233 ; a plurality of zone weighting functional blocks 235 , 237 , 239 and 241 ; and a plurality of functional operators including adders, multipliers, and dividers.
- the identity lookup table builder 211 constructs an identity lookup table for the image frame received. Thus, for each pixel having a contrast k, the lookup table output provided by block 211 provides the same contrast, k.
- the global equalized lookup table builder 213 constructs a global equalized lookup table for the image data received.
- the functionality of the builder 213 within FIG. 8 is like that of block 54 within FIG. 6 .
- a range of useful grey-levels is identified.
- the range can include all grey-levels within a cumulative distribution function or any lesser or greater amount as desired, for example all grey levels except for the first and/or last unsaturated grey-level within the distribution function.
- the range is then scaled to include an extended range of grey-levels including, but not limited to, the full range of grey-levels.
- the builder 213 provides, as an output, the scaled range as a lookup table to the non-linear transform 223 .
- each of the zone equalized lookup table builders 215 , 217 , 219 and 221 constructs an equalized lookup table for that portion of the image data within the particular table builder's zone.
- a range of useful grey-levels is identified by the builder assigned to the zone.
- the range for each zone can include all grey-levels within a cumulative distribution function, except the first and/or last unsaturated grey-level within the zone's distribution function, or the like.
- the range is then scaled to include an extended range of grey-levels including, but not limited to, the full range of grey-levels.
- the lookup table builder for each zone then provides, as an output, its scaled range as a lookup table to a non-linear transform 223 , 225 , 227 , 229 and 231 .
- FIG. 8 discloses schematically, that the non-linear transforms 223 , 225 , 227 , 229 and 231 , provide for weighting grey-level values towards white, thus brightening the image.
- the non-linear transforms are a power function wherein the input to the non-linear transform for each pixel is normalized over the range of grey-level values. For example, if there are 256 grey-level values available for each pixel, then the input for each pixel is normalized over those 256 grey-level values. A power is then applied, wherein the power is preferably a real number in the range of about 0 to about 1, and preferably 0.8. Therefore, the output of each non-linear transform 223, 225, 227, 229 and 231 is a lookup table (no reference number) whose values are skewed towards brightening the image.
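- A sketch of one such non-linear transform applied to a lookup table, reading the exponent as 0.8 (an assumption; the original "preferably 8" conflicts with the stated 0-to-1 range, and exponents below 1 push normalized values towards white):

```python
import numpy as np

def brighten_transform(lut, levels=256, power=0.8):
    """Normalize table entries over the grey-level range, apply the
    power function, and rescale; power=0.8 is the assumed exponent."""
    x = lut.astype(np.float64) / (levels - 1)       # normalize to [0, 1]
    return np.rint((x ** power) * (levels - 1)).astype(np.uint8)
```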
- the output of each non-linear transform 223-231 is multiplied by a weight, as shown by blocks 233-241 of FIG. 8.
- the weighting provided by global weight 233 determines the balance between combining together two or more inputs.
- the global weight determines the balance between the identity lookup table and the transformed global equalized lookup table.
- the combination or blending together of the identity lookup table 211 and the transformed global equalized lookup table 213 results in a contrast enhanced global lookup table 243 .
- weighting provided by weights 235 - 241 determines the balance between the contrast enhanced global lookup table 243 and each transformed zone equalized lookup table.
- the outputs from the zonal balanced lookup tables 245 , 247 , 249 and 251 of FIG. 8 are combined with corresponding pixel position weights for determining relative zonal pixel grey-levels 253 , 255 , 257 and 259 .
- the relative zonal pixel grey-levels 253 , 255 , 257 and 259 are summed together to provide a final or composite enhanced pixel grey-level value 261 for each pixel.
- each lookup table 245, 247, 249 and 251 receives the original grey-level image data input for each pixel within the image frame.
- the outputs 263, 265, 267 and 269 of the lookup tables 245, 247, 249 and 251 provide new grey-level values for each pixel within the corresponding zone. These new grey-level values can be increased, decreased, or the same as the original luminance.
- the outputs 263 , 265 , 267 and 269 , of the lookup tables are multiplied by a corresponding position weighting factor 271 , 273 , 275 and 277 .
- the new grey-level values for each pixel is multiplied by a weighting factor 271 , 273 , 275 and 277 , based upon the pixel's location relative to one or more reference positions within the image frame.
- the weighting factors 271, 273, 275 and 277 govern how the grey-levels within each zone affect the new grey-level values provided for each pixel within the image.
- a pixel is affected more by pixel values within the zone in which it is located, and is affected less dramatically by pixel values from a zone furthest from the pixel.
- the weightings can be normalized so their total sum is 1.
- the reference positions within the image frame are located at the corners of the image frame. In a further embodiment, the reference positions can be at the centers of the defined zones.
- the weightings are based upon the distance the pixel is located from the reference positions. For example, if the image frame is a square containing four equally sized zones with the reference positions at the corners of the image frame, then the weighting factors 271, 273, 275 and 277 for the center pixel within the image frame would each be 0.25, because the pixel is equidistant from each zone's reference position (i.e., the corners of the image frame). Likewise, each pixel at the corners of the image frame would be weighted such that only the lookup table results for the zone containing the pixel would be applied in determining the pixel's new grey-level.
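- The corner-referenced position weighting described above behaves like bilinear interpolation between four zone tables. A sketch for a four-zone frame (zone LUT construction as in the earlier sketches):

```python
import numpy as np

def composite_enhance(image, zone_luts):
    """Blend four zonal lookup tables with position weights.

    zone_luts: tables for the (top-left, top-right, bottom-left,
    bottom-right) zones, with reference positions at the frame corners.
    The weights sum to 1 at every pixel; the center pixel weights each
    zone 0.25, and a corner pixel uses only its own zone's table.
    """
    h, w = image.shape
    v = np.linspace(0.0, 1.0, h)[:, None]       # vertical position 0..1
    u = np.linspace(0.0, 1.0, w)[None, :]       # horizontal position 0..1
    weights = ((1 - u) * (1 - v), u * (1 - v),  # top-left, top-right
               (1 - u) * v,       u * v)        # bottom-left, bottom-right
    out = np.zeros((h, w), dtype=np.float64)
    for lut, wgt in zip(zone_luts, weights):
        out += wgt * lut[image]                 # relative zonal grey-levels
    return np.clip(np.rint(out), 0, 255).astype(np.uint8)
```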
- only a single zone can be used within an image frame.
- the zone is centered in the image frame and is smaller in size than the entire image frame.
- the device 10, its combination of elements, and derivatives thereof provide circuits and implement techniques that are highly efficient to implement yet work very well. Others have proposed various forms of contrast enhancement solutions, but those proposals are much more complex to implement. Apparatus and methods according to the invention create an algorithmically efficient procedure that results in cost-efficient real-time implementation. This, in turn, provides lower product costs.
Abstract
A system and method for manipulating image data is disclosed. Generally, the system and method identifies useful grey-levels within an input image. The image is then scaled based upon the identified useful grey-levels.
Description
- This application is a divisional application claiming the benefit of U.S. application Ser. No. 10/657,723 filed Sep. 8, 2003; which claims the benefit of U.S. Provisional Application No. 60/408,663, filed Sep. 6, 2002; the contents of both of these applications are herein incorporated by reference.
- The present invention generally relates to imaging systems, and in particular, to a system for manipulating image data.
- Human interpretation of video or other imagery can be made difficult or even impossible by system noise, image blur, and poor contrast. These limitations are observed, for example, in most video and closed circuit television systems, and others, including such technology as RS-170 monochrome video, NTSC/PAL/SECAM video or digital color video formats.
- Extreme lighting variation, for example, due to sunlight beams, can cause typical video cameras and imaging sensors to saturate (i.e., become unable to represent the real-world luminance range), resulting in wide-scale bright and dark regions having extremely low contrast wherein objects are difficult or impossible to discern. At outdoor automated teller machines, for example, sunlight beams or strong lighting in the background can cause a person in a dark area to become unidentifiable due to low contrast. This weakness is due to the limited luminance range of the imaging system.
- The electronic iris and automatic gain control provided by some imaging systems are designed to try to optimally map the wide luminance values within a real-world light situation into a limited range digital representation, often resulting in a poor compromise. To adequately represent the bright areas, less illuminated areas become dramatically compressed in contrast and thus become very dark.
- Besides having limited range, video or imagery from lower-cost imaging sensors can have significant noise due to a number of basic system limitations, as well as significant blur due to lower-cost, small, or simple optical configurations. Reducing noise and blur within these systems can improve the ability for a human viewer to effectively interpret image content.
- Moreover, digital samples of interlaced analog video from a video field are typically taken by imaging systems. The noise inherent in such digital samples can make human interpretation of important details in the image difficult.
- Hence, a need exists for a luminance range transformation apparatus and method that manipulates image data for improved interpretation thereof.
- Others have provided some image post processing devices which can enhance contrast of an image. However, in many areas such as in security monitoring, real-time evaluation of images is highly beneficial or necessary. Accordingly, there is also a need to provide a luminance range transformation apparatus which can enhance an image in real-time or near-real time.
- According to the present invention, a system and method has been developed that considers the mechanisms and defects of imagery from video or other sources to prescribe a sequence of tailored image processing operations designed to improve human interpretation of resulting images.
- However, in a broad aspect of the invention, methods and devices are provided to redistribute discrete signal intensity values within groups of signal intensity values. The signal intensity values are supplied directly or indirectly from a sensor sensing an environment. It is proposed that the inventions can be used for enhanced interpretation of any array of signal intensities or variable values in any group of such values that have spatial or geometric relationships to one another (e.g. coordinates).
- For example, the detailed disclosures below are directed to redistribution of grey-scale or luminance values in a rectilinear array (pixels) from a camera. It is also contemplated, however, that the inventions may be applied to other values in a video image, such as redistribution of chrominance values. It is also proposed that the inventions may be employed with other sensing modalities such as magnetic resonance imagery, radar, sonar, infrared, ultraviolet, microwave, X-ray, radio wave, and the like.
- According to another aspect of the invention, the spatial scale of an entire group of signal intensity values is considered, for example, the luminance in an entire pixel image, so that the overall brightness from corner to corner is taken into account. In an orthogonal direction, all the frequency content of an image must be represented by the one global mapping, including the very lowest spatial frequency. Often, this single global mapping has to stretch to map all of the low frequency variation in the image, and then fails to enhance higher frequency structure of interest. The result can be a bright "bloom" on one side of the image, with too dark an area on the other side of the image. As such, there may not be optimal recognition of spatial structure because of the need to represent the large scale variation across the entire image.
- It is proposed to further improve overall contrast balance, revealing image structure at scales of interest, by applying equalization at spatial scales representative of those scales. Accordingly, the inventions propose generating subset partitions of an image (group of signal intensities) representative of the spatial scales of interest. A transform mapping (e.g., of luminance) is generated for each subset of signal values at a reduced spatial scale (e.g., ¼ of the global image for a quadrant subset), so as to mitigate or eliminate the lowest frequency from consideration. This improves representation of contrast for structures at this scale.
- Computing a luminance transformation at every pixel (with, for example, a filter kernel applied to a neighborhood around that pixel) would result in a large computational burden, and the resulting spatial scales would be too small. In contrast, the present invention accumulates pixel samples across larger "right-sized" spatial scales, while using efficient interpolation to produce the correct transform (e.g., for luminance) representation at each pixel.
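By way of illustration only, the following Python sketch shows the general approach of accumulating grey-level statistics over a coarse grid of zones and interpolating between the resulting per-zone mappings, rather than computing a transform at every pixel. The function names are hypothetical and this is a sketch of the idea, not the disclosed apparatus:

```python
import numpy as np

def zone_luts(image, zones=2, levels=256):
    """Build one equalized lookup table per zone of a 2-D grey image."""
    h, w = image.shape
    luts = np.empty((zones, zones, levels))
    for zi in range(zones):
        for zj in range(zones):
            tile = image[zi * h // zones:(zi + 1) * h // zones,
                         zj * w // zones:(zj + 1) * w // zones]
            hist = np.bincount(tile.ravel(), minlength=levels)
            cdf = hist.cumsum()
            luts[zi, zj] = (levels - 1) * cdf / cdf[-1]  # equalized mapping
    return luts

def interpolate_luts(image, luts):
    """Blend the per-zone mappings at every pixel by bilinear interpolation."""
    h, w = image.shape
    zones = luts.shape[0]
    # fractional zone coordinates of each row/column, measured to zone centers
    yi = np.clip(np.arange(h) * zones / h - 0.5, 0, zones - 1)
    xi = np.clip(np.arange(w) * zones / w - 0.5, 0, zones - 1)
    y0, x0 = np.floor(yi).astype(int), np.floor(xi).astype(int)
    y1, x1 = np.minimum(y0 + 1, zones - 1), np.minimum(x0 + 1, zones - 1)
    fy, fx = (yi - y0)[:, None], (xi - x0)[None, :]
    g = image  # original grey-levels index each zone's table
    out = ((1 - fy) * (1 - fx) * luts[y0[:, None], x0[None, :], g]
         + (1 - fy) * fx       * luts[y0[:, None], x1[None, :], g]
         + fy * (1 - fx)       * luts[y1[:, None], x0[None, :], g]
         + fy * fx             * luts[y1[:, None], x1[None, :], g])
    return out.astype(np.uint8)
```

With a small number of zones (e.g., 2×2 quadrants), the per-pixel work reduces to a few table lookups and a bilinear blend, which is what makes a real-time implementation tractable.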
- According to another aspect of the invention, apparatus and methods include decomposing a group of signal values into subgroup partitions from which to sample values and construct associated transforms, and combining those transform values at every coordinate (e.g., pixel) according to a rule which weights the contribution of each mapping in accordance with "image" geometry and value coordinate or position.
- According to another aspect of the invention, apparatus and methods are provided for blending transformed values from the global set of values (e.g. entire pixel image) with transformed values from the spatial segments to adjust contributions from several spatial scales as desired.
- While overall interpretation of groups of signal intensity values is provided according to the invention, it is of particular note that the inventions effectively mitigate signal values which are out of limit for a sensor system, for example, saturated sensor response.
- In a preferred embodiment for enhancing luminance contrast in video signals, preferred operations can include digital sampling and luminance/color separation, noise reduction, deblurring, pixel noise reduction, histogram smoothing, contrast stretching, and luminance and chrominance re-combination. One or more of the operators can have configurable attributes, such as degree of noise reduction, brightness, degree of deblurring, and determined range of useful grey-levels.
- Other advantages and features of the present invention will be apparent from the following description of a specific embodiment illustrated in the accompanying drawings.
- FIG. 1 is a simplified block diagram of a device in accordance with the present invention, including a logical device;
- FIG. 2 is a simplified functional block diagram of a process performed by the logical device of FIG. 1, the process having a color transform, an image pre-contrast conditioner, a contrast enhancement, an image post-contrast conditioner, and a color inverse transform;
- FIG. 3 is a simplified functional block diagram of the image pre-contrast conditioner of FIG. 2, the pre-contrast conditioner comprising a system noise reduction filter, a deblurring module, and a pixel noise reduction module;
- FIGS. 4(a)-(f) depict various exemplary kernel shapes that can be used with the system noise reduction filter, deblurring module, and pixel noise reduction module of FIG. 3;
- FIG. 5 is a simplified functional block diagram of the contrast enhancement block of FIG. 2, the contrast enhancement comprising an equalized lookup table construction block and an enhanced luminance image generation block;
- FIG. 6 is a simplified functional block diagram of the equalized lookup table construction block of FIG. 5;
- FIG. 7 is a simplified functional block diagram of another embodiment of the contrast enhancement block of FIG. 2; and,
- FIGS. 8 and 9 are simplified functional block diagrams of yet another embodiment of the contrast enhancement block of FIG. 2.
- This invention is susceptible of embodiments in many different forms. While there is shown in the drawings, and will herein be described in detail, a preferred embodiment of the invention, the present disclosure is to be considered as an exemplification of the principles of the invention and is not intended to limit the broad aspect of the invention to the embodiment illustrated.
- Referring now to the drawings, and as will be appreciated by those having skill in the art, each of the FIGURES depicts a simplified block diagram wherein each block provides hardware (i.e., circuitry), firmware, software, or any combination thereof that performs one or more operations. Each block can be self-contained or integral with other hardware, firmware, or software associated with one or more other blocks.
- Turning particularly to
FIG. 1 , a device 10 is disclosed for enhancing, through transformation, the luminance range of an image input signal. The device 10 includes an input connector 12, logic circuitry 14, and an output connector 16. The connectors 12 and 16 are attached to a housing 13, constructed of a metal, metal alloy, rigid plastic, or combinations of the above, that contains the logic circuitry 14. In one embodiment, one or more of the modules described herein are performed by the logic circuitry 14 comprising one or more integrated circuits, commonly referred to as "ICs," placed on one or more printed circuit boards mounted within the housing 13. - Preferably, the
device 10 is a stand-alone or embedded system. As used herein, the term “stand-alone” refers to a device that is self-contained, one that does not require any other devices to perform its primary functions. For example, a fax machine is a stand-alone device because it does not require a computer, printer, modem, or other device. Accordingly, in an embodiment, thedevice 10 does not need to provide ports for connecting a disk drive, display screen, or a keyboard. However, in an alternative embodiment, thedevice 10 could provide one or more ports (e.g., RS-232) for supporting field interactivity. - Also, as used herein, an embedded system is a system that is not a desktop computer or a workstation computer or a mainframe computer designed to admit facile human interactivity. Another delineator between embedded and “desktop” systems is that desktop systems (and workstation, etc.) present the status of the computer state to the human operator via a display screen and the internal state of the computer is represented by icons on the screen, and thus the person can interact with the computer internal state via control of the icons. Moreover, such a computer uses a software layer called an “operating system” through which the human operator can interact with the internal state of the computer. Conversely, with an embedded system, while it is performing its work function, the human operator cannot interact with the work process except to stop it.
- The
input connector 12 provides for operably connecting thedevice 10 to animage input signal 17 generated by a video camera (not shown), or the like, having a video output. In one embodiment, theinput connector 12 consists of an F connector, BNC connector, RCA jacks, or the like. Theinput connector 12 is operably connected to thelogic circuitry 14 by way of a conductive path attached to the input connector and the printed circuit board contained within thehousing 13. The logic circuitry could also be coupled through other than a conductive path such as through optical coupling. - Preferably, but not necessarily, the
image input signal 17 is a conventional analog video signal containing a plurality of still images or fixed image frames taken in a sequential manner. Each frame provided by the image input signal is also referred to herein as an image input frame. Each image or frame includes data regarding an array of pixels contained therein. - The
output connector 16 of thedevice 10 provides for connecting the device to an output device such as a monitor (not shown). Like theinput connector 12, theoutput connector 16 consists of any means for outputting the signal to other devices such as, an F connector, BNC connector, RCA jacks, or the like. Theoutput connector 16 is operably connected to thelogic circuitry 14 by way of a conductive or coupled path attached to the output connector and the printed circuit board contained within thehousing 13. As explained in detail further herein, the output signal provided byconnector 16, and thus thedevice 10, provides an output signal which includes data resulting from transforming or other operations carried out with respect to imageinput signal 17 received (“transformed output signal”). The output signal can include a plurality of image output frames and be formatted as a conventional analog video signal, a digital signal, or the like. For example, but by no means exclusive, the output signal can be in a format as defined by NTSC, VGA, HDTV, or other desired output formats. - In one embodiment, the
logic circuitry 14 within thedevice 10 includes, inter alia, circuitry configured to transform the variable range of grey-scale values in theimage input signal 17 received by theinput connector 12. Preferably, thelogic circuitry 14 includes alogical device 18 withcorresponding support circuitry 20, avideo decoder 22, and avideo encoder 23. Thesupport circuitry 20 preferably includes amicrocontroller 24, a read onlymemory 26, and arandom access memory 28 comprising a synchronous dynamic random access memory. - In an embodiment, an
optional switch 29 is provided for configuring thelogic circuitry 14 within thedevice 10. Theswitch 29 is operably connected to themicrocontroller 24 andlogical device 18. Theswitch 29 allows a user to enable or disable, features or processes provided by thelogic circuitry 14 within thedevice 10. In one embodiment, theswitch 29 consists of a conventional DIP switch. In an alternative embodiment, the configuration of thecircuitry 14 within thedevice 10 is hardwired, or can be set via software commands, instead of using a switch. - Preferably,
video decoder 22 is operably connected to theinput connector 12 and thelogical device 18. Accordingly, thevideo decoder 22 receives theimage input signal 17 that can consist of live video from a television broadcast, a video tape, a camera, or any other desired signals containing or representing image content. Thevideo decoder 22 preferably is a conventional device for tracking the videoimage input signal 17, digitizing the input signal (if required), separating out the brightness and color information from the input signal, and forwarding thedigital video signal 30 to thelogical device 18 on a frame by frame basis. - In one embodiment, the
input image signal 17 received by thevideo decoder 22 is an analog signal formatted in a predefined manner such as PAL, NTSC, or another conventional format. Thevideo decoder 22 converts theanalog signal 17 into adigital video signal 30 using a conventional analog-to-digital conversion algorithm. Preferably, thedigital video signal 30 provided by thevideo decoder 22 includes luminance information and color information in any conventional manner such as, specified by YUV format, YCbCr format, super video, S-video, or the like. Alternatively, thedigital video signal 30 can have the luminance information embedded therein such as that provided by digital RGB, for example. - In one embodiment, the
video decoder 22 is capable of converting a plurality of different analog video formats into digital video signals suitable for processing by thelogical device 18 as described in detail further herein. In one embodiment, themicrocontroller 24 configures thevideo decoder 22 for converting theimage input signal 17 into adigital video signal 30 having a specific format type (e.g., CCIR601, RGB, etc.). If desired, themicrocontroller 24 determines the format of theimage input signal 17, and configures thevideo decoder 22 accordingly. The determination can be accomplished by themicrocontroller 24 checking the user or device manufacturer configured settings of the DIP switch corresponding with the format of theimage input signal 17 expected to be received. Alternatively, thevideo decoder 22 can include circuitry for automatically detecting the format of theimage input signal 17, instead of using preset DIP switch settings. - Preferably, the gain of the
video decoder 22 is set to reduce overall contrast on the luminance for reducing the probability of image saturation. In an alternative embodiment, thevideo decoder 22 provides a fixed bit resolution output range (e.g., 8 bit, 16 bit, 32 bit, etc.) and a digitizer maps the image input signal in a conventional manner for effective use of the resolution output range. Preferably, the full range of the digitizer output is utilized. - Turning to the
logical device 18, as will be understood by those having skill in the art, the logical device receives digital video signals 30 and provides digital output signals 31 (i.e., transformed pixel array or frame data) in response to the digital video signals 30. As indicated previously, in one embodiment, thedevice 10 receives analog image input signals 17 that are converted by thevideo decoder 22 into digital video signals 30 for manipulation by thelogical device 18. Moreover, as explained in detail further herein, thedigital output 31 of thelogical device 18 can be converted (if desired) into an analog output by thevideo encoder 23 connected between thelogical device 18 and theoutput connector 16. - Preferably, the
logical device 18 consists of a conventional field programmable gate array (FPGA). However, the logical device 18 can consist of a Digital Signal Processor (DSP), a microcontroller, an Application Specific Integrated Circuit (ASIC), or other suitable device. - In an alternative embodiment, instead of receiving analog input signals 17, the
device 10 can receive digital image input signals that are provided, without any conversion, to thelogical device 18. Thus, in this alternative embodiment, thevideo decoder 22 can be omitted. - Regardless of whether a video encoder or decoder is provided, it is to be understood that the
logical device 18 is operably connected between connectors 12 and 16. The image input signal enters via the input connector 12, is modified by the logical device 18, and then exits via the output connector 16 as a transformed output signal. Preferably, the frame output rate of the device 10 is substantially equal to the frame input rate to the device. For example, the device 10 provides an output of thirty frames per second in response to a frame input rate of thirty frames per second. - In one embodiment, the
logical device 18 provides a sequence of image processing functional modules or blocks 32 within a process as illustrated inFIG. 2 . In one embodiment, theblocks 32 represent acolor transform 34, animage pre-contrast conditioner 36, acontrast enhancement 38, animage post-contrast conditioner 40, and a colorinverse transform 42. - Each of the
modules 32 preferably performs a specific functional step or plurality of functional steps as described in detail further herein. Turning back toFIG. 1 , static data for configuring thelogical device 18, if needed or desired, is stored within the read onlymemory 26 operably connected thereto. As appreciated by those having skill in the art, the read onlymemory 26 provides for storing data that is not alterable by computer instructions. - Storage for data manipulation and the like is provided by the synchronous dynamic random access memory (SDRAM) 28. Preferably, a high speed random access memory is provided by the
support circuitry 20 to reduce performance bottlenecks. In one embodiment, thememory 20 is used as a field buffer. - Turning back to
FIG. 2 , thecolor transform 34 provides for separating the luminance from thedigital video signal 30, as desired. In one embodiment, thedigital video signal 30 provided by thevideo decoder 22 consists of a digital RGB signal. As such, luminance information is not separated from color information. Thus, thecolor transform 34 provides for separating the luminance information for each pixel, within each frame, from thedigital video signal 30. - Preferably, the color transform 34 uses a conventional algorithm for converting a digital RGB video input signal into a Yuv format signal, YCbCr format signal, or other desired format signal. Accordingly, the
color transform 34 provides an output comprising three channels: luminance 43 and U-V or other values ascribed per pixel location. - In an alternative embodiment, the
digital video signal 30 provided by thevideo decoder 22 consists of a super video or S-video. As such, thedigital video signal 30 consists of two different signals: chrominance and luminance. Accordingly, the color transform 34 can be omitted from the process ofFIG. 2 because theluminance 43 is separately provided by the S-video input. - It should be understood as noted above, that a separate channel or channels (not shown) comprising the steps to be discussed below with respect to luminance information, could be provided to any one of RGB/chrominance values independently for transforming the values and redistributing the color values in the image.
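As a minimal sketch of the luminance/color separation described above, assuming the common BT.601 full-range weighting (the helper below is illustrative only; it is not the color transform 34 itself):

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Separate an RGB frame into luminance (Y) and chrominance (Cb, Cr)
    using the conventional BT.601 full-range equations."""
    rgb = rgb.astype(np.float32)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  = 0.299 * r + 0.587 * g + 0.114 * b            # luminance channel
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr
```

The luminance channel is then enhanced while the chrominance channels are held aside for later recombination.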
- As shown in
FIG. 2 , theluminance information 43 contained within thedigital video signal 30 is received by theimage pre-contrast conditioner 36. As shown inFIG. 3 , theimage pre-contrast conditioner 36 can include a video time integration module or block 44, a system noise reduction module or block 46, a deblurring module or block 48, and a pixel noise reduction module or block 50. As will be appreciated by those having ordinary skill in the art, any one or all of the blocks withinFIG. 3 can be omitted including, but not limited to, the videotime integration module 44. - In one embodiment, the video
time integration module 44 consists of a low-pass filter for noise reduction of the luminance signal 43. As understood by those having skill in the art, each frame includes an array of pixels wherein each pixel has a specific i,j coordinate or address. Noise reduction by the video time integration module 44 is preferably achieved by combining the current frame (n) with the immediately preceding frame (n−1) stored by the logical device 18 within the memory 28. Desirably, for each pixel c(i,j) in the current image frame (n), the corresponding pixel p(i,j) from the preceding captured frame (n−1) is subtracted to result in a difference d(i,j). The absolute value of the difference d(i,j) is determined and compared to a predetermined threshold value. If the absolute value of the difference d(i,j) is greater than the predetermined threshold, then the time integration output signal 45 of the video time integration module 44 is c(i,j). Otherwise, the time integration output signal 45 of the video time integration module 44 is the average of c(i,j) and p(i,j).
- As indicated above, the video time integration module 44 can operate such that object motion, which ordinarily creates large pixel differences, is represented by current pixels, while background differences due to noise are suppressed by the averaging. Accordingly, the video time integration module 44 provides for avoiding the image blurring that would result from low-pass spatial domain filtering on the current image.
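The thresholded frame-to-frame combination described above can be sketched as follows (Python; the threshold value shown is an assumed placeholder):

```python
import numpy as np

def time_integrate(current, previous, threshold=12):
    """Thresholded temporal noise reduction, per the rule described above:
    keep the current pixel where |c - p| exceeds the threshold (motion),
    otherwise output the average of current and previous (static noise)."""
    c = current.astype(np.int16)   # widen to avoid uint8 overflow
    p = previous.astype(np.int16)
    moving = np.abs(c - p) > threshold
    averaged = (c + p) // 2
    return np.where(moving, c, averaged).astype(np.uint8)
```

Object motion thus passes through unaveraged, while static background noise is suppressed. - In one embodiment, the median filter or system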
noise reduction module 46 receives the time integration output signal 45 (i.e., transformed pixel frame data) and, in response thereto, provides for reducing pixel noise caused by the system sensor (e.g., camera), electromagnetic, and/or thermal noise. The median filter 46 can include a user selectable level of noise reduction by selecting from a plurality of filter kernels via DIP switch 29, or the like, operably coupled to the logical device 18. The filter kernel applied by the noise reduction module 46 to the current frame, as preferably modified by time integration module 44, is designed to achieve noise reduction while minimizing the adverse effect of the filter on perceived image quality. For instance, a 3×1 kernel (FIG. 4(a)) provides for a reduction in row noise due to the kernel shape, while having minimal adverse effect on the image. Also, a 5-point or plus kernel (FIG. 4(b)) offers symmetric noise reduction with low adverse impact to image quality. Moreover, a hybrid median filter (FIG. 4(c)) offers symmetric noise reduction with lower adverse impact than the full 3×3 kernel. In one embodiment, a user or the manufacturer can select from the kernel shapes shown in FIGS. 4(a)-(c). - The deblurring module or block 48 provides for counteracting image blurring attributable to the point spread function of the imaging system (not shown). In an embodiment, the deblurring module or block 48 uses a Laplacian filter to sharpen the output 47 (i.e., transformed pixel frame data) received from the
filter 46. The deblurring module or block 48 can include a user-selectable level of image sharpening from a plurality of Laplacian filter center pixel weights. In one embodiment, the user or manufacturer can select the center pixel weight via the DIP switch 29 operably coupled to the logical device 18. - Preferably, the Laplacian filter uses a 3×3 kernel (FIG. 4(d)) with selectable center pixel weights of 9, 10, 12, or 16, requiring normalization divisors of 1, 2, 4, or 8, respectively. As shown in FIG. 4(d), the remaining kernel pixels are preferably assigned a weight of −1. The result of applying the kernel is normalized by dividing the convolution result for the current pixel element by the normalization divisor. - As will be appreciated by those having ordinary skill in the art, the deblurring module or block 48 can use other filters in place of a Laplacian type filter. In an embodiment, and in like fashion to the use of the Laplacian filter with various weights, a general filter with kernel as shown in FIG. 4(e) can be used for deblurring.
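A minimal sketch of the selectable-weight Laplacian sharpening described above (Python, using SciPy; the kernel shape follows FIG. 4(d)):

```python
import numpy as np
from scipy.ndimage import convolve

def laplacian_sharpen(image, center=9):
    """3x3 sharpening kernel: all off-center weights are -1 and the center
    weight is 9, 10, 12, or 16, normalized by 1, 2, 4, or 8 respectively."""
    divisors = {9: 1, 10: 2, 12: 4, 16: 8}
    kernel = -np.ones((3, 3), dtype=np.float32)
    kernel[1, 1] = center
    out = convolve(image.astype(np.float32), kernel) / divisors[center]
    return np.clip(out, 0, 255).astype(np.uint8)
```

Note that each center weight and its divisor keep the kernel's normalized sum at one, so flat regions pass through unchanged while edges are emphasized. - The output 49 (i.e., transformed pixel frame data) of the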
deblurring module 48 emphasizes isolated pixel noise present in the input image signal. Preferably, the pixel noise reduction module 50 provides for reducing isolated pixel noise emphasized within the output 49 of the deblurring module 48. In one embodiment, the pixel noise reduction module 50 employs a small-kernel median filter (i.e., a small pixel neighborhood such as 3×3) to reduce isolated pixel noise. The pixel noise reduction filter or module 50 can provide a user or manufacturer selectable level of noise reduction by allowing for the selection from a plurality of filter kernels. In a preferred embodiment, the DIP switch 29, software, or the like, is used to select from a five-point "plus patterned" kernel (FIG. 4(b)) or a hybrid median filter (FIG. 4(c)). - As shown in
FIG. 2 , the contrast enhancement module or block 38 enhances the contrast of the conditioned luminance signal 53 (i.e., transformed pixel frame data) provided by theimage pre-contrast conditioner 36 in response to the originalluminance input signal 17. In an alternative embodiment, no image pre-contrast is provided. Instead, thecontrast enhancement module 38 directly receives theimage input signal 17, and in response thereto, generates anenhanced image signal 55. - Turning to
FIG. 5 , the contrast enhancement module or block 38 preferably includes a sample area selection module or block 52, an equalized lookup table construction module or block 54, and an enhanced luminance generation module or block 56. As used herein, and as will be appreciated by those having ordinary skill in the art after reading this specification, the phrases “lookup table” and “equalized lookup table” both refer to a lookup table derived from grey-level values or a lookup table representative of the inversion of the cumulative distribution function derived from an accumulated histogram. - In one embodiment, the sample area selection module or block 52 receives the conditioned
luminance image signal 53 provided by the image pre-contrast conditioner 36 (FIGS. 2 and 3 ) and a selected portion of the input image, viainput 57, to be enhanced. In response to the conditionedluminance input signal 53 and theimage portion selection 57, the sample area selection module provides a selected conditionedluminance image signal 59 comprising conditioned luminance pixel data from the pixels within the selected region. - Depending upon the overall embodiment of a system, the selected portion of the image can be selected in a conventional manner such as by using a point-and-click interface (e.g., a mouse) to select a rectangular sub-image of an image displayed on a monitor (not shown). Alternatively, the selected portion of the image can be selected by automated means such as by a computer, based upon movement detection, with a portion of the image or other criteria.
- In one embodiment, the equalized look-up table construction module or block 54 receives the selected
conditioned image signal 59 from the sample area selection module or block 52 and, in response thereto, provides for creating a lookup table 61 that is used in generating the enhanced image signal 55 (i.e., transformed pixel frame data). As previously indicated, the selected conditionedluminance signal 59 received from the sample area selection module or block 52 can be a sub-image comprising a portion of the overall conditionedluminance image signal 53 received from the image pre-contrast conditioner 36 (FIGS. 2 and 3 ). Alternatively, the selected conditionedluminance signal 59 can consist of the complete conditionedluminance signal 53, without any apportioning by the samplearea selection module 52. - As shown in
FIG. 6 , the equalized lookup table construction module, or block 54 ofFIG. 5 , includes an input smoothing module or block 58, a histogram accumulation module or block 60, a histogram smoothing module or block 62, a cumulative distribution function integration module or block 64, a saturation bias removal module or block 66, a linear cumulative distribution function scaling module or block 68, and a lookup table construction module or block 70. - In one embodiment, the input image smoothing module, or block 58, receives the selected conditioned
luminance image signal 59 and, in response thereto, generates a smoothed conditioned luminance image signal on a frame-by-frame basis. Preferably, the input image smoothing module or block 58 applies a Gaussian filter with a traditional 3×3 kernel to the selected signal 59 for smoothing the image within each frame. As recognized by those having skill in the art, the Gaussian filter performs a weighted sum (center pixel = highest weight), wherein the result is normalized by the total kernel weight. - In one embodiment, a histogram accumulation module, or block 60, provides for generating a histogram of the selected
conditioned image signal 59, as modified by the input image smoothing module. The histogram accumulation module, or block 60, accumulates a conventional histogram from the received input. Accordingly, the histogram can be used to provide a graph of luminance levels to the number of pixels at each luminance level. Stated another way, the histogram reflects the relationship between luminance levels and the number of pixels at each luminance level. - Accordingly, the histogram accumulation module or block 60 includes a plurality of data storage locations or “bins” for tracking the number of occurrences of each grey-level occurring in the received input signal, for example, such as that received by
block 58. Preferably, the histogram accumulation module or block 60 includes a bin for every discrete grey-level in the input signal. As stated previously, for each grey-level in the input signal, the histogram accumulation module or block 60 determines the number of pixels in the input luminance image with the corresponding grey-level after being filtered, if desired. Accordingly, the histogram result for any given grey-level k is the number of pixels in the luminance image input having that grey-level. - In one embodiment, the histogram smoothing module or block 62 provides for reducing noise in the histogram created from the input image. The histogram smoothing module, or block 62, receives the histogram from the histogram accumulation module, or block 60, and in response thereto, generates a smoothed histogram. Preferably, the histogram smoothing module or block 62 applies a 5-point symmetric kernel (
FIG. 4 (f)) to filter the histogram calculated for each frame. - In one embodiment, the cumulative distribution function integration module or block 64 provides for integrating the histogram to create a cumulative distribution function. The cumulative distribution function integration module or block 64 receives the histogram from the
histogram smoothing module 62, or optionally from the histogram accumulation module 60. The cumulative distribution function integration module 64 integrates the received histogram to generate a cumulative distribution function. It has been observed that integrating a smoothed histogram can result in a cumulative distribution function having an increased accuracy over a cumulative distribution function resulting from an unsmoothed histogram. - The cumulative distribution function integration module or block 64 includes a plurality of data storage locations or "bins" which hold the cumulative distribution function result for each input grey-level. Preferably, the cumulative distribution function integration module, or block 64, includes a bin for every discrete grey-level in the image resolution input range. For each grey-level in the image resolution input range, the cumulative distribution function integration module or block 64 determines the cumulative distribution function result and stores that result in the bin corresponding to that grey-level. For each grey-level k, the cumulative distribution function result is the sum of the histogram value for grey-level k and the cumulative distribution function result for the previous grey-level k−1. Accordingly, the following equation describes the cumulative distribution function result for a given grey-level k:
CDF(k)=H(k)+CDF(k−1) - where CDF(k) is the cumulative distribution function result for grey-level k, H(k) is the histogram value for grey-level k, and CDF(k−1) is the cumulative distribution function result for the grey-level one increment lower than grey-level k.
- The saturation bias identification module, or block 66, provides for identifying the grey-levels that bound the useful information in the
luminance signal 43. In one embodiment, the saturation bias identification module 66 receives the cumulative distribution function from the cumulative distribution function integration module 64 and determines the grey-levels at which the first unsaturated grey-level, kf, and the last unsaturated grey-level, kl, occur. The first unsaturated grey-level, kf, is determined by identifying the first grey-level k0 for which the cumulative distribution function returns a non-zero value. The grey-level k0 is treated as saturated, and the saturation bias identification module 66 identifies kf as the sum of k0 plus one additional grey-level. The last unsaturated grey-level, kl, is determined by identifying the first grey-level kn for which the cumulative distribution function returns the number of pixels in the image. The grey-level kn is treated as saturated, and the saturation bias identification module or block 66 identifies kl as kn minus one additional grey-level.
- Accordingly, the useful grey-levels identified by the saturation bias identification module 66 are the range from, and including, k0 plus one additional grey-level through kn minus one additional grey-level. In another embodiment, the useful grey-levels identified by the saturation bias identification module 66 are the range from, and including, k0 through kn minus one additional grey-level. In yet another embodiment, the useful grey-levels identified by the saturation bias identification module 66 are the range from, and including, k0 plus one additional grey-level through kn. In a further embodiment, the useful grey-levels identified by the saturation bias identification module 66 are the range from, and including, k0 plus X additional grey-level(s) through kn minus Y additional grey-level(s), wherein X and Y are whole numbers greater than zero.
- The linear cumulative distribution function scaling module or block 68 provides for scaling the portion of the cumulative distribution function corresponding to the unsaturated, useful grey-levels in the luminance input image across the entire range of cumulative distribution function grey-level inputs. In a preferred embodiment, the linear cumulative distribution
function scaling module 68 receives the cumulative distribution function, provided by the cumulative distribution function integration module 64, and the first unsaturated grey-level kf and the last unsaturated grey-level kl from the saturation bias identification module 66. - The linear cumulative distribution
function scaling module 68 includes a plurality of data storage locations or “bins” equal to the number of bins in the cumulative distribution function for holding the linearly mapped cumulative distribution function result for each input grey-level. In a preferred embodiment, the cumulative distributionfunction scaling module 68 scales the portion of the cumulative distribution function between k.sub.f and k.sub.1 inclusively across the entire range of the linearly scaled cumulative distribution function. Accordingly, for each grey-level k in the input range, the linearly scaled cumulative distribution function output value LCDF(k) is determined by the following equation:
LCDF(k)=(CDF(k)−CDF(kf))/(CDF(kl)−CDF(kf))
- The determination of the cumulative distribution function output value can be calculated using known methodologies for improving computation ease. For instance, the numerator can be scaled by a multiplier, before the division operation, to obtain proper integer representation. The result of the division can then be inversely scaled by the same multiplier to produce the LCDF(k) value. Preferably, the scale factor is an adequately large constant, such as the number of pixels in the input image.
- In one embodiment, the lookup table construction module or block 70 generates a lookup table 61 that is used by the enhanced luminance image generation module (
FIG. 5 ) to produce the enhancedluminance image signal 55. The lookuptable construction module 70 receives the linearly scaled cumulative distribution function from the linear cumulative distributionfunction scaling module 70. In response to the scaled cumulative distribution function, the lookuptable construction module 70 provides the lookup table 61. - Preferably, the lookup table 61 is constructed by multiplying each linearly scaled cumulative distribution function result by the number of discrete grey-levels in the output image resolution range which can be provided by the DIP switch 29 (
FIG. 1 ), software, or other desired means. If the multiplication result is greater than the maximum value in the output image resolution range for any input grey-level, the linearly scaled cumulative distribution function result for that input grey-level is set to the maximum value in the output image resolution range. Accordingly, for each grey-level input in the linearly scaled cumulative distribution function, the result is a value in the output image resolution range. - Turning back to
FIG. 5 , the enhanced luminance image generation module, or block 56, is responsive to the lookup table 61 and the conditionedluminance signal 53. The luminance of each pixel within each input frame can be reassigned a different value based upon the lookup table corresponding to the same frame. Stated another way, the luminance value for each pixel in the conditioned luminance input image is used as an index in the look-up table, and the value at that index location is used as the luminance value for the corresponding pixel in the enhanced luminance output image. The reassigned frame data is then forwarded, preferably to imagepost contrast conditioner 40, as theenhanced image signal 55. - As shown in
FIG. 2 , the image post-contrast conditioner 40 provides for filtering the enhanced image signal 55 to generate an enhanced conditioned image signal 72. The conditioner 40 can apply to the image signal 55 one or more conventional imaging filters, including an averaging filter, a Gaussian filter, a median filter, a Laplace filter, a sharpen filter, a Sobel filter, and a Prewitt filter. - The color inverse transform 42 generates the contrast enhanced digital
video output signal 31, having an expanded range, in response to theenhanced image signal 72 and the color information for the same frame. The enhanced image signal and color information are combined in a conventional manner wherein luminance is combined with color information. - As will be appreciated by those having skill in the art, while the luminance of the input signal is being enhanced, the color information can be stored within
memory 28, for example. Once the luminance has been enhanced, it can be re-combined with the color information stored in memory. - In response to the digital
video output signal 31, thevideo encoder 23 preferably provides an analog output signal viaoutput connector 16. Thevideo encoder 23 preferably utilizes conventional circuitry for converting digital video signals into corresponding analog video signals. - Turning to
FIG. 7 , an alternate embodiment of the contrast enhancement module is depicted. Within FIG. 7, the last two digits of the reference numbers used therein correspond to like two-digit reference numbers used for like elements in FIGS. 5 and 6. Accordingly, no further description of these elements is provided. - The median bin identification module or block 174 determines the median of the histogram provided by the
histogram accumulation module 160. Next, the brightness midpoint value is calculated by the weighted midpoint calculation module or block 176. The brightness midpoint value preferably is calculated as the weighted sum of the actual grey-level midpoint of the input frame and the median of the input distribution. Preferably, the weights assigned to the grey-level midpoint and the median of the input distribution are configured as compile-time parameters. - The image segmentation module, or block 178, receives the luminance image data for the selected sample area, via
module 152, and provides for segmenting the luminance image data into light and dark regimes and performing enhancement operations (i.e., luminance redistribution) separately on each of the regimes. Stated another way, the image segmentation module 178 assigns each pixel in the selected sample area to either the light or dark regime based on each pixel's relationship to the brightness midpoint value received from the midpoint calculation module 176. Pixels with a brightness value greater than the midpoint are assigned to the light regime, and pixels with a brightness value less than the midpoint value are assigned to the dark regime.
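A sketch of the weighted midpoint and the light/dark split (Python; the frame's grey-level midpoint is assumed here to be (min+max)/2, and the two weights are assumed compile-time constants as stated above):

```python
import numpy as np

def split_regimes(pixels, w_mid=0.5, w_median=0.5):
    """Weighted brightness midpoint: a blend of the frame's grey-level
    midpoint (taken here as (min+max)/2, an assumption) and the median."""
    frame_mid = (int(pixels.min()) + int(pixels.max())) / 2.0
    midpoint = w_mid * frame_mid + w_median * float(np.median(pixels))
    light = pixels[pixels > midpoint]   # brighter than the midpoint
    dark = pixels[pixels <= midpoint]   # darker than the midpoint
    return midpoint, light, dark
```

- Pixel values assigned by the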
image segmentation module 178 are received by the lookup table construction modules, wherein lookup tables are constructed and forwarded to the enhanced luminance image generation module 156 for remapping of the conditioned luminance signal 53. - According to broad aspects of the invention, contrast enhancement is provided by heuristic and algorithmic techniques, including the calculation of the image grey-level histogram and sample cumulative distribution function. Preferably, maximum utilization of available grey-levels for representation of image detail is provided.
- Entropy is a measure of the information content of the image. Entropy is generally the average number of bits required to represent a pixel of the image. Accordingly, a completely saturated image with only black and white can be represented by one bit per pixel, on and off; thus, the image entropy is one bit. A one-bit image contains less information than an eight-bit image with 256 shades of grey.
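In code, the entropy discussed here is the Shannon entropy of the grey-level distribution (a standard formula, sketched in Python):

```python
import numpy as np

def image_entropy(image, levels=256):
    """Average bits per pixel: -sum(p(k) * log2(p(k))) over occupied bins."""
    hist = np.bincount(image.ravel(), minlength=levels)
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log2(p)).sum())
```

For an image that is half black and half white, this returns 1.0 bit, matching the example above.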
- According to broad aspects of the invention, optimized entropy is provided while maintaining a monotonic grey-level relationship, wherein "monotonic" generally means that the precedence order of the input grey-levels is not changed by the desired grey-level transformation employed.
- In one embodiment, contrast enhancement includes calculation of the sample cumulative distribution function of the image grey-levels. The inverted sample cumulative distribution is applied to map the input grey-levels. Because the inverted sample cumulative distribution function is used, the resulting transform approximates uniformly distributed grey levels within the input range. The uniform density histogram generally has the maximum entropy.
- As known by those having skill in the art, classical “histogram equalization” directly applies the inverted sample cumulative distribution function to form the grey-level mapping lookup table. In an embodiment in accordance with the present invention, along with the inverted sample cumulative distribution data, it is preferred to combine a linear mapping to ensure full-range output, and also combine a saturation bias identification to improve grey-level uniformity in the presence of saturation. As will be appreciated by those having skill in the art, image saturation creates a pathology in the inverted sample cumulative distribution mapping that is not addressed in classical histogram equalization.
- When the image is saturated the histogram has a pathology: an inordinate number of pixels are black or white or both. Direct calculation of the sample cumulative distribution function produces large gaps in the endpoints of the grey-level transform. Such gaps mean that the maximum number of grey-level values are not being used, and the resulting output distribution does not closely approximate the desired uniform distribution. Moreover, the resulting image will contain broad areas of a single grey-level (for both black and white saturation). Sometimes, such areas are referred to as "puddles."
- An embodiment of another broad aspect of the invention is disclosed in
FIGS. 8 and 9 , which are simplified functional block diagrams. The contrast enhancement block diagrams of FIGS. 8 and 9 include: an identity lookup table builder 211; a global equalized lookup table builder 213; a plurality of zone equalized lookup table builders; a plurality of non-linear transforms 221-231; a global weighting functional block 233; a plurality of zone weighting functional blocks 235-241; and a plurality of balanced lookup tables 243-251. - The identity
lookup table builder 211 constructs an identity lookup table for the image frame received. Thus, for each pixel having a contrast k, the lookup table output provided by block 211 provides the same contrast, k. - The global equalized
lookup table builder 213 constructs a global equalized lookup table for the image data received. In particular, the functionality of the builder 213 within FIG. 8 is like that of block 54 within FIG. 6. Accordingly, a range of useful grey-levels is identified. As described above, the range can include all grey-levels within a cumulative distribution function or any lesser or greater amount as desired, for example all grey-levels except for the first and/or last unsaturated grey-level within the distribution function. The range is then scaled to include an extended range of grey-levels including, but not limited to, the full range of grey-levels. The builder 213 provides, as an output, the scaled range as a lookup table to the non-linear transform 223. - Similar to the global equalized
lookup table builder 213, each of the zone equalized lookup table builders constructs an equalized lookup table for a respective zone of the image data and provides that lookup table to a corresponding non-linear transform.
- FIG. 8 discloses schematically that the non-linear transforms 221-231 receive the outputs of the respective lookup table builders. - The output of each non-linear transform 221-231 is multiplied by a weight as shown by blocks 233-241 of
FIG. 8 . Accordingly, the weighting provided by global weight 233 determines the balance when combining together two or more inputs. In particular, the global weight determines the balance between the identity lookup table and the transformed global equalized lookup table. The combination or blending together of the identity lookup table 211 and the transformed global equalized lookup table 213 results in a contrast enhanced global lookup table 243. - Likewise, the weighting provided by weights 235-241 determines the balance between the contrast enhanced global lookup table 243 and each transformed zone equalized lookup table. These combinations or blends result in a plurality of zonal balanced lookup tables 245-251, wherein the balanced lookup tables are contrast enhanced and relate, in part, to a particular zone of the original input image.
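The balance described here amounts to a convex blend of lookup tables; a minimal sketch (Python; the weight value is an assumed tuning parameter):

```python
import numpy as np

def blend_luts(lut_a, lut_b, weight):
    """Blend two grey-level lookup tables; weight = 0 keeps lut_a
    (e.g., the identity table) and weight = 1 keeps lut_b entirely."""
    return (1.0 - weight) * np.asarray(lut_a) + weight * np.asarray(lut_b)

identity = np.arange(256, dtype=np.float32)
global_eq = np.sqrt(np.arange(256) / 255.0) * 255.0  # stand-in equalized table
global_lut = blend_luts(identity, global_eq, weight=0.7)
```

The same blend, applied again between the global table and each zone table, yields the zonal balanced lookup tables.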
- Turning to
FIG. 9 , the outputs from the zonal balanced lookup tables 245, 247, 249 and 251 of FIG. 8 are combined with corresponding pixel position weights for determining relative zonal pixel grey-levels and, ultimately, an enhanced grey-level value 261 for each pixel.
- The
outputs - The
outputs position weighting factor weighting factor - In one embodiment, the reference positions within the image frame are located at the corners of the image frame. In a further embodiment, the reference positions can be at the centers of the defined zones.
- The weightings are based upon the distance the pixel is located from the reference positions. For example, if the image frame is a square containing four equally sized zones with the reference positions at the corners of the image frame, then the weighting factors 271, 273, 275 and 277, for the center pixel within the image frame would be 0.25 because the pixel is equal distant from each zone's reference position (i.e., the corners of the image frame). Likewise, each pixel at the corners of the image frame would be weighted such that only the lookup table results for the zone containing the pixel would be applied in determining the pixel's new grey-level.
- In yet another embodiment, and as will be appreciated by those having ordinary skill in the art, only a single zone can be used within an image frame. Preferably, the zone is centered in the image frame and is small in size than the entire image frame.
- It should be understood that the
device 10 and its combination of elements, and derivatives thereof, provide circuits and implement techniques that are highly efficient to implement yet work very well. Others have proposed various forms of contrast enhancement solutions, but those proposals are much more complex to implement. Apparatus and methods according to the invention create an algorithmically efficient procedure that results in cost efficient real-time implementation. This, in turn, provides lower product costs. - While the specific embodiments have been illustrated and described, numerous modifications come to mind without significantly departing from the spirit of the invention, and the scope of protection is only limited by the scope of the accompanying claims.
Claims (20)
1. A device providing an image output frame in response to an image input frame, the device comprising: a housing having a printed circuit board contained therein; an input connector attached to the housing and having a conductive path attached to the printed circuit board; an output connector attached to the housing and having a conductive path attached to the printed circuit board; an integrated circuit placed on the printed circuit board, the integrated circuit having an output responsive to the image input frame, the output comprising transformed pixel frame data; and wherein the device does not require a keyboard to operate.
2. The device of claim 1 wherein the device is an embedded system.
3. The device of claim 1 wherein the input connector receives thirty frames per second and the output connector provides thirty frames per second.
4. The device of claim 1 wherein a port is not provided for operably attaching the device to the keyboard.
5. The device of claim 1 , further comprising a saturation bias identification circuit having a range of useful grey-levels output responsive to the image input frame, and a cumulative distribution function scaling circuit having a scaled output responsive to the useful grey-levels output.
6. A method for providing an image output frame in response to an image input frame, the method comprising the steps of: segmenting the image input frame into one or more zones; determining a plurality of grey-level values for a pixel based, at least in part, on grey-level data contained within the one or more zones; and, calculating a composite enhanced pixel grey-level value for the pixel by blending the plurality of grey-level values.
7. The method of claim 6 wherein the image input frame is segmented into a plurality of zones.
8. The method of claim 6 further comprising establishing reference points within the image input frame, and wherein the step of calculating a composite enhanced pixel grey-level value is based, at least in part, on distance from the reference points.
9. The method of claim 6 further comprising the step of generating a digital output responsive to the image input frame.
10. The method of claim 6 further comprising the step of generating an analog output in response to the image input frame.
11. The method of claim 6 further comprising the step of generating a luminance output signal in response to the image input frame.
12. The method of claim 6 further comprising the step of generating a conditioned output responsive to the image input frame.
13. A method for providing an image output frame in response to an image input frame, the method comprising the steps of: establishing reference points within the image input frame; calculating grey-level values for a plurality of pixels within the image input frame; and, calculating a composite enhanced pixel grey-level value for the pixels based, at least in part, on distance from the reference points.
14. The method of claim 13 further comprising segmenting the image input frame into a plurality of zones, and wherein at least one of the reference points is located within one of the zones.
15. The method of claim 13 further comprising generating a range of useful grey-levels in response to the image input frame, and generating a scaled output in response to the range of useful grey-levels.
16. A method for providing an image output frame in response to an image input frame, the method comprising the steps of: constructing an equalized lookup table for the image input frame; constructing an equalized lookup table for a zone within the image input frame; and, utilizing the lookup tables to build a balanced lookup table.
17. The method of claim 16 further comprising the step of utilizing a lookup table representative of the image input frame to build the balanced lookup table.
18. A method for providing an output of signal values in response to an input array of signal values derived directly or indirectly from a sensor, wherein each input value is variable within a range of values based upon an environment sensed by the sensor, and each value of the input signal has coordinates with a spatial and/or geometric relationship to other values in the input signal, comprising the steps of: segmenting the image input array into one or more spatial zones; determining a plurality of signal intensity values for a coordinate based, at least in part, on signal intensity contained at coordinates within the one or more other zones; and, calculating a composite enhanced coordinate signal value for the coordinate by blending the plurality of signal intensity values.
19. A method for providing an output of signal values in response to an input array of signal values derived directly or indirectly from a sensor, wherein each input value is variable within a range of values based upon an environment sensed by the sensor, and each value of the input signal has coordinates with a spatial and/or geometric relationship to other values in the input signal, comprising the steps of: establishing reference points within the array of signal values; calculating signal intensity values for a plurality of coordinates within the array of input values; and, calculating a composite enhanced coordinate signal intensity value for the coordinates based, at least in part, on a distance of the coordinate from the reference points.
20. A method providing an output of signal values in response to an input array of signal values derived directly or indirectly from a sensor, wherein each input value is variable within a range of values based upon an environment sensed by the sensor, and each value of the input signal has coordinates with a spatial and/or geometric relationship to other values in the input signal, comprising the steps of: constructing an equalized lookup table for the array of signal intensity values; constructing an equalized lookup table for a zone within the array of signal intensity values; and, utilizing the lookup tables to build a balanced lookup table.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12/002,674 US20080095433A1 (en) | 2002-09-06 | 2007-12-18 | Signal intensity range transformation apparatus and method |
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US40866302P | 2002-09-06 | 2002-09-06 | |
US10/657,723 US7321699B2 (en) | 2002-09-06 | 2003-09-08 | Signal intensity range transformation apparatus and method |
US12/002,674 US20080095433A1 (en) | 2002-09-06 | 2007-12-18 | Signal intensity range transformation apparatus and method |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/657,723 Division US7321699B2 (en) | 2002-09-06 | 2003-09-08 | Signal intensity range transformation apparatus and method |
Publications (1)
Publication Number | Publication Date |
---|---|
US20080095433A1 (en) | 2008-04-24 |
Family
ID=31978654
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/657,723 Expired - Fee Related US7321699B2 (en) | 2002-09-06 | 2003-09-08 | Signal intensity range transformation apparatus and method |
US12/002,674 Abandoned US20080095433A1 (en) | 2002-09-06 | 2007-12-18 | Signal intensity range transformation apparatus and method |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/657,723 Expired - Fee Related US7321699B2 (en) | 2002-09-06 | 2003-09-08 | Signal intensity range transformation apparatus and method |
Country Status (3)
Country | Link |
---|---|
US (2) | US7321699B2 (en) |
AU (1) | AU2003270386A1 (en) |
WO (1) | WO2004023787A2 (en) |
Families Citing this family (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6937274B2 (en) * | 2001-12-17 | 2005-08-30 | Motorola, Inc. | Dynamic range compression of output channel data of an image sensor |
KR100513273B1 (en) * | 2003-07-04 | 2005-09-09 | 이디텍 주식회사 | Apparatus and method for real-time brightness control of moving images |
JP4428159B2 (en) * | 2003-11-05 | 2010-03-10 | Seiko Epson Corp. | Image data generation apparatus, image quality correction apparatus, image data generation method, and image quality correction method
US7355639B2 (en) * | 2003-11-06 | 2008-04-08 | Omnivision Technologies, Inc. | Lens correction using processed YUV data |
US20060008174A1 (en) * | 2004-07-07 | 2006-01-12 | Ge Medical Systems Global Technology | Count adaptive noise reduction method of x-ray images |
JP4533330B2 (en) * | 2005-04-12 | 2010-09-01 | Canon Inc. | Image forming apparatus and image forming method
JP4427001B2 (en) * | 2005-05-13 | 2010-03-03 | Olympus Corp. | Image processing apparatus and image processing program
US7859574B1 (en) * | 2005-07-19 | 2010-12-28 | Maxim Integrated Products, Inc. | Integrated camera image signal processor and video encoder |
JP2007060169A (en) * | 2005-08-23 | 2007-03-08 | Sony Corp | Image processing apparatus, image display apparatus and image processing method |
US7512574B2 (en) * | 2005-09-30 | 2009-03-31 | International Business Machines Corporation | Consistent histogram maintenance using query feedback |
US7894686B2 (en) * | 2006-01-05 | 2011-02-22 | Lsi Corporation | Adaptive video enhancement gain control |
KR100717401B1 (en) * | 2006-03-02 | 2007-05-11 | 삼성전자주식회사 | Method and apparatus for normalizing voice feature vector by backward cumulative histogram |
KR100849845B1 (en) * | 2006-09-05 | 2008-08-01 | 삼성전자주식회사 | Method and apparatus for Image enhancement |
US7912307B2 (en) * | 2007-02-08 | 2011-03-22 | Yu Wang | Deconvolution method using neighboring-pixel-optical-transfer-function in fourier domain |
WO2008132698A1 (en) * | 2007-04-30 | 2008-11-06 | Koninklijke Philips Electronics N.V. | Positive contrast mr susceptibility imaging |
US9002131B2 (en) | 2011-09-18 | 2015-04-07 | Forus Health Pvt. Ltd. | Method and system for enhancing image quality |
US9646366B2 (en) * | 2012-11-30 | 2017-05-09 | Change Healthcare Llc | Method and apparatus for enhancing medical images |
JP6816018B2 (en) * | 2015-04-14 | 2021-01-20 | Koninklijke Philips N.V. | Equipment and methods for improving medical image quality
CN104900209A (en) * | 2015-06-29 | 2015-09-09 | 深圳市华星光电技术有限公司 | Overdriven target value calculating method based on sub-pixel signal bright-dark switching |
US10477251B2 (en) | 2016-11-04 | 2019-11-12 | Google Llc | Restoration for video coding with self-guided filtering and subspace projection |
JP6684971B2 (en) * | 2017-01-18 | 2020-04-22 | Dolby Laboratories Licensing Corporation | Segment-based reconstruction for encoding high dynamic range video
CN114972218B (en) * | 2022-05-12 | 2023-03-24 | 中海油信息科技有限公司 | Pointer meter reading identification method and system |
Family Cites Families (172)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US2758676A (en) | 1954-02-16 | 1956-08-14 | Haughton Elevator Company | Variable standing time control |
US3195126A (en) | 1957-05-13 | 1965-07-13 | Lab For Electronics Inc | Traffic supervisory system |
US3686434A (en) | 1957-06-27 | 1972-08-22 | Jerome H Lemelson | Area surveillance system |
US3255434A (en) | 1961-11-01 | 1966-06-07 | Peter D Schwarz | Vehicle detection and counting system |
US3590151A (en) | 1966-12-30 | 1971-06-29 | Jackson & Church Electronics C | Television surveillance system |
US3562423A (en) | 1967-08-15 | 1971-02-09 | Univ Northwestern | Pictorial tracking and recognition system which provides display of target identified by brilliance and spatial characteristics
US3924130A (en) | 1968-02-12 | 1975-12-02 | Us Navy | Body exposure indicator |
US3534499A (en) | 1969-01-27 | 1970-10-20 | John L Czinger Jr | Door opening apparatus |
US3685012A (en) | 1970-04-16 | 1972-08-15 | Sperry Rand Corp | Apparatus for determining data associated with objects |
US3691556A (en) | 1970-06-03 | 1972-09-12 | Memco Electronics Ltd | Detection of movement in confined spaces |
US3663937A (en) | 1970-06-08 | 1972-05-16 | Thiokol Chemical Corp | Intersection ingress-egress automatic electronic traffic monitoring equipment |
US3668625A (en) | 1970-09-21 | 1972-06-06 | David Wolf | Monitoring system |
US3740466A (en) | 1970-12-14 | 1973-06-19 | Jackson & Church Electronics C | Surveillance system |
US3691302A (en) | 1971-02-25 | 1972-09-12 | Gte Sylvania Inc | Automatic light control for low light level television camera |
GB1378754A (en) | 1971-09-07 | 1974-12-27 | Peak Technologies Ltd | Patient monitoring |
US3816648A (en) | 1972-03-13 | 1974-06-11 | Magnavox Co | Scene intrusion alarm |
US3890463A (en) | 1972-03-15 | 1975-06-17 | Konan Camera Res Inst | System for use in the supervision of a motor-boat race or a similar timed event |
US3852592A (en) | 1973-06-07 | 1974-12-03 | Stanley Works | Automatic door operator |
US3988533A (en) | 1974-09-30 | 1976-10-26 | Video Tek, Inc. | Video-type universal motion and intrusion detection system |
US3947833A (en) | 1974-11-20 | 1976-03-30 | The United States Of America As Represented By The Secretary Of The Navy | Automatic target detection system |
US3930735A (en) | 1974-12-11 | 1976-01-06 | The United States Of America As Represented By The United States National Aeronautics And Space Administration | Traffic survey system |
JPS5197155A (en) | 1975-02-21 | 1976-08-26 | Elevator passenger data collection device | |
FR2321229A1 (en) | 1975-08-13 | 1977-03-11 | Cit Alcatel | METHOD AND APPARATUS FOR AUTOMATIC GRAPHIC CONTROL |
SE394146B (en) | 1975-10-16 | 1977-06-06 | L Olesen | DEVICE FOR MEASURING OR MONITORING AN OBJECT, IN PARTICULAR THE SPEED OF A VEHICLE. |
DE2617111C3 (en) | 1976-04-17 | 1986-02-20 | Robert Bosch Gmbh, 7000 Stuttgart | Method for detecting movement in the surveillance area of a television camera |
DE2617112C3 (en) | 1976-04-17 | 1982-01-14 | Robert Bosch Gmbh, 7000 Stuttgart | Method for determining a movement or a change in the surveillance area of a television camera |
US4063282A (en) | 1976-07-20 | 1977-12-13 | The United States Of America As Represented By The Administrator Of The National Aeronautics And Space Administration | TV fatigue crack monitoring system |
DE2638138C3 (en) | 1976-08-25 | 1979-05-03 | Kloeckner-Werke Ag, 4100 Duisburg | Device for recognizing and sorting out defective packs that are transported along a conveyor line |
US4240109A (en) | 1976-10-14 | 1980-12-16 | Micro Consultants, Limited | Video movement detection |
US4136950A (en) | 1976-11-08 | 1979-01-30 | Labrum Engineering, Inc. | Microscope system for observing moving particles |
US4183013A (en) | 1976-11-29 | 1980-01-08 | Coulter Electronics, Inc. | System for extracting shape features from an image |
JPS53110823A (en) | 1977-03-10 | 1978-09-27 | Ricoh Co Ltd | Optical information processor |
DE2715083C3 (en) | 1977-04-04 | 1983-02-24 | Robert Bosch Gmbh, 7000 Stuttgart | System for the discrimination of a video signal |
US4141062A (en) * | 1977-05-06 | 1979-02-20 | Trueblood, Inc. | Trouble light unit |
DE2720865A1 (en) | 1977-05-10 | 1978-11-23 | Philips Patentverwaltung | ARRANGEMENT FOR THE EXAMINATION OF OBJECTS |
US4163357A (en) | 1977-06-13 | 1979-08-07 | Hamel Gmbh, Zwirnmaschinen | Apparatus for cable-twisting two yarns |
US4133004A (en) | 1977-11-02 | 1979-01-02 | Hughes Aircraft Company | Video correlation tracker |
JPS5474700A (en) | 1977-11-26 | 1979-06-14 | Agency Of Ind Science & Technol | Collection and delivery system for traffic information by photo electric conversion element group |
CA1103803A (en) | 1978-03-01 | 1981-06-23 | National Research Council Of Canada | Method and apparatus of determining the center of area or centroid of a geometrical area of unspecified shape lying in a larger x-y scan field |
JPS54151322A (en) | 1978-05-19 | 1979-11-28 | Tokyo Hoso:Kk | Stroboscopic effect generator for television
US4187519A (en) | 1978-08-17 | 1980-02-05 | Rockwell International Corporation | System for expanding the video contrast of an image |
CA1116286A (en) | 1979-02-20 | 1982-01-12 | Control Data Canada, Ltd. | Perimeter surveillance system |
US4257063A (en) | 1979-03-23 | 1981-03-17 | Ham Industries, Inc. | Video monitoring system and method |
US4219845A (en) | 1979-04-12 | 1980-08-26 | The United States Of America As Represented By The Secretary Of The Air Force | Sense and inject moving target indicator apparatus |
DE2934038C2 (en) | 1979-08-23 | 1982-02-25 | Deutsche Forschungs- und Versuchsanstalt für Luft- und Raumfahrt e.V., 5000 Köln | Crack propagation measuring device |
US4414685A (en) | 1979-09-10 | 1983-11-08 | Sternberg Stanley R | Method and apparatus for pattern recognition and detection |
US4395699A (en) | 1979-09-10 | 1983-07-26 | Environmental Research Institute Of Michigan | Method and apparatus for pattern recognition and detection |
US4317130A (en) | 1979-10-10 | 1982-02-23 | Motorola, Inc. | Narrow band television transmission system |
JPS56132505A (en) | 1980-03-24 | 1981-10-16 | Hitachi Ltd | Position detecting method |
US4298858A (en) | 1980-03-27 | 1981-11-03 | The United States Of America As Represented By The Secretary Of The Air Force | Method and apparatus for augmenting binary patterns |
JPS56160183A (en) | 1980-05-09 | 1981-12-09 | Hajime Sangyo Kk | Method and device for monitoring |
US4337481A (en) | 1980-06-10 | 1982-06-29 | Peter Mick | Motion and intrusion detecting system |
US4410910A (en) | 1980-09-18 | 1983-10-18 | Advanced Diagnostic Research Corp. | Motion detecting method and apparatus |
US4433325A (en) | 1980-09-30 | 1984-02-21 | Omron Tateisi Electronics, Co. | Optical vehicle detection system |
JPS6360960B2 (en) | 1980-10-22 | 1988-11-25 | ||
US4455550A (en) | 1980-11-06 | 1984-06-19 | General Research Of Electronics, Inc. | Detection circuit for a video intrusion monitoring apparatus |
US4520343A (en) | 1980-12-16 | 1985-05-28 | Hiroshi Koh | Lift control system |
JPS57125496A (en) | 1981-01-27 | 1982-08-04 | Fujitec Kk | Condition detector |
EP0070286B1 (en) | 1981-01-26 | 1985-10-09 | Memco-Med Limited | Proximity detector circuitry especially for lift doors |
US4493420A (en) | 1981-01-29 | 1985-01-15 | Lockwood Graders (U.K.) Limited | Method and apparatus for detecting bounded regions of images, and method and apparatus for sorting articles and detecting flaws |
DE3107901A1 (en) | 1981-03-02 | 1982-09-16 | Siemens AG, 1000 Berlin und 8000 München | DIGITAL REAL-TIME TELEVISION IMAGE DEVICE |
US4449144A (en) | 1981-06-26 | 1984-05-15 | Omron Tateisi Electronics Co. | Apparatus for detecting moving body |
US4479145A (en) | 1981-07-29 | 1984-10-23 | Nippon Kogaku K.K. | Apparatus for detecting the defect of pattern |
US4433438A (en) | 1981-11-25 | 1984-02-21 | The United States Of America As Represented By The Secretary Of The Air Force | Sobel edge extraction circuit for image processing |
US4589139A (en) | 1982-02-04 | 1986-05-13 | Nippon Kogaku K. K. | Apparatus for detecting defects in pattern |
US4490851A (en) | 1982-04-16 | 1984-12-25 | The United States Of America As Represented By The Secretary Of The Army | Two-dimensional image data reducer and classifier |
US4520504A (en) | 1982-07-29 | 1985-05-28 | The United States Of America As Represented By The Secretary Of The Air Force | Infrared system with computerized image display |
US4569078A (en) | 1982-09-17 | 1986-02-04 | Environmental Research Institute Of Michigan | Image sensor |
JPS5994045A (en) | 1982-11-22 | 1984-05-30 | Toshiba Corp | Image input apparatus |
US4577344A (en) | 1983-01-17 | 1986-03-18 | Automatix Incorporated | Vision system |
US4543567A (en) | 1983-04-14 | 1985-09-24 | Tokyo Shibaura Denki Kabushiki Kaisha | Method for controlling output of alarm information |
US4574393A (en) | 1983-04-14 | 1986-03-04 | Blackwell George F | Gray scale image processor |
US4556900A (en) | 1983-05-25 | 1985-12-03 | Rca Corporation | Scaling device as for quantized B-Y signal |
US4665554A (en) | 1983-07-13 | 1987-05-12 | Machine Vision International Corporation | Apparatus and method for implementing dilation and erosion transformations in digital image processing |
KR910009880B1 (en) | 1983-07-25 | 1991-12-03 | 가부시기가이샤 히다찌세이사꾸쇼 | Image motion detecting circuit of interlacing television signal |
US4589030A (en) | 1983-07-25 | 1986-05-13 | Kley Victor B | Solid state camera |
IL69327A (en) | 1983-07-26 | 1986-11-30 | Elscint Ltd | Automatic misregistration correction |
US4639767A (en) | 1983-09-08 | 1987-01-27 | Nec Corporation | Apparatus for detecting movement in a television signal based on taking ratio of signal representing frame difference to signal representing sum of picture element differences |
US4555724A (en) | 1983-10-21 | 1985-11-26 | Westinghouse Electric Corp. | Elevator system |
US4698937A (en) | 1983-11-28 | 1987-10-13 | The Stanley Works | Traffic responsive control system for automatic swinging door |
US4565029A (en) | 1983-11-28 | 1986-01-21 | The Stanley Works | Traffic responsive control system for automatic swinging door |
US4669218A (en) | 1984-03-08 | 1987-06-02 | The Stanley Works | Traffic responsive control system |
US4694329A (en) | 1984-04-09 | 1987-09-15 | Corporate Communications Consultants, Inc. | Color correction system and method with scene-change detection |
US4653109A (en) | 1984-07-30 | 1987-03-24 | Lemelson Jerome H | Image analysis system and method |
US4641356A (en) | 1984-08-24 | 1987-02-03 | Machine Vision International Corporation | Apparatus and method for implementing dilation and erosion transformations in grayscale image processing |
FI70651C (en) | 1984-10-05 | 1986-09-24 | Kone Oy | OVERHEAD FREQUENCY FOR OIL FITTINGS |
US4679077A (en) | 1984-11-10 | 1987-07-07 | Matsushita Electric Works, Ltd. | Visual Image sensor system |
DE3513833A1 (en) | 1984-11-14 | 1986-05-22 | Karl-Walter Prof. Dr.-Ing. 5910 Kreuztal Bonfig | FUSE PROTECTION INSERT WITH OPTOELECTRICAL DISPLAY DEVICE |
US4685145A (en) | 1984-12-07 | 1987-08-04 | Fingermatrix, Inc. | Conversion of an image represented by a field of pixels in a gray scale to a field of pixels in binary scale |
US4680704A (en) | 1984-12-28 | 1987-07-14 | Telemeter Corporation | Optical sensor apparatus and method for remotely monitoring a utility meter or the like |
US4662479A (en) | 1985-01-22 | 1987-05-05 | Mitsubishi Denki Kabushiki Kaisha | Operating apparatus for elevator |
US4739401A (en) | 1985-01-25 | 1988-04-19 | Hughes Aircraft Company | Target acquisition system and method |
GB8518803D0 (en) | 1985-07-25 | 1985-08-29 | Rca Corp | Locating target patterns within images |
US4779131A (en) | 1985-07-26 | 1988-10-18 | Sony Corporation | Apparatus for detecting television image movement |
US4697594A (en) * | 1985-08-21 | 1987-10-06 | North American Philips Corporation | Displaying a single parameter image |
JPS6278979A (en) | 1985-10-02 | 1987-04-11 | Toshiba Corp | Picture processor |
GB2183878B (en) | 1985-10-11 | 1989-09-20 | Matsushita Electric Works Ltd | Abnormality supervising system |
JPH0744689B2 (en) | 1985-11-22 | 1995-05-15 | Hitachi, Ltd. | Motion detection circuit
JPH0766446B2 (en) | 1985-11-27 | 1995-07-19 | Hitachi, Ltd. | Method of extracting moving object image
US5187747A (en) * | 1986-01-07 | 1993-02-16 | Capello Richard D | Method and apparatus for contextual data enhancement |
US4825393A (en) * | 1986-04-23 | 1989-04-25 | Hitachi, Ltd. | Position measuring method |
US4760607A (en) | 1986-07-31 | 1988-07-26 | Machine Vision International Corporation | Apparatus and method for implementing transformations in grayscale image processing |
JPS63222589A (en) * | 1987-03-12 | 1988-09-16 | Toshiba Corp | Noise reducing circuit |
US4823010A (en) * | 1987-05-11 | 1989-04-18 | The Stanley Works | Sliding door threshold sensor |
US4906940A (en) * | 1987-08-24 | 1990-03-06 | Science Applications International Corporation | Process and apparatus for the automatic detection and extraction of features in images and displays |
US4799243A (en) * | 1987-09-01 | 1989-01-17 | Otis Elevator Company | Directional people counting arrangement |
JPH0695008B2 (en) * | 1987-12-11 | 1994-11-24 | 株式会社東芝 | Monitoring device |
JPH0672770B2 (en) * | 1988-02-01 | 1994-09-14 | Toyoda Machine Works, Ltd. | Robot object recognition device
EP0336430B1 (en) * | 1988-04-08 | 1994-10-19 | Dainippon Screen Mfg. Co., Ltd. | Method of extracting contour of subject image from original |
DE3818534A1 (en) * | 1988-05-31 | 1989-12-07 | Brunner Wolfgang | METHOD FOR DISPLAYING THE SPATIALLY RESOLVED DISTRIBUTION OF PHYSICAL QUANTITIES ON A DISPLAY, AND DEVICE FOR CARRYING OUT THE METHOD
ES2041850T3 (en) * | 1988-06-03 | 1993-12-01 | Inventio Ag | PROCEDURE AND DEVICE FOR CONTROLLING THE POSITION OF AN AUTOMATIC DOOR. |
US4985618A (en) * | 1988-06-16 | 1991-01-15 | Ricoh Company, Ltd. | Parallel image processing system
FR2634551B1 (en) * | 1988-07-20 | 1990-11-02 | Siderurgie Fse Inst Rech | METHOD AND DEVICE FOR IDENTIFYING THE FINISH OF A METAL SURFACE |
US4991092A (en) * | 1988-08-12 | 1991-02-05 | The Regents Of The University Of California | Image processor for enhancing contrast between subregions of a region of interest |
DE68926702T2 (en) * | 1988-09-08 | 1996-12-19 | Sony Corp | Image processing device |
JPH0683373B2 (en) * | 1988-12-09 | 1994-10-19 | Dainippon Screen Mfg. Co., Ltd. | Method for setting a reference density point
US5008739A (en) * | 1989-02-13 | 1991-04-16 | Eastman Kodak Company | Real-time digital processor for producing full resolution color signals from a multi-color image sensor |
JPH0335399A (en) * | 1989-06-30 | 1991-02-15 | Toshiba Corp | Change area integrating device |
JP2953712B2 (en) * | 1989-09-27 | 1999-09-27 | Toshiba Corp | Moving object detection device
DE69129568T2 (en) * | 1990-02-26 | 1998-12-10 | Matsushita Electric Industrial Co., Ltd., Kadoma, Osaka | TRAFFIC MONITOR DEVICE |
KR100204101B1 (en) * | 1990-03-02 | 1999-06-15 | 가나이 쓰도무 | Image processing apparatus |
JP2712844B2 (en) * | 1990-04-27 | 1998-02-16 | Hitachi, Ltd. | Traffic flow measurement device and traffic flow measurement control device
AU7974491A (en) * | 1990-05-25 | 1991-12-31 | European Vision Systems Centre Limited | An image acquisition system |
US5305395A (en) * | 1990-06-08 | 1994-04-19 | Xerox Corporation | Exhaustive hierarchical near neighbor operations on an image |
US5319547A (en) * | 1990-08-10 | 1994-06-07 | Vivid Technologies, Inc. | Device and method for inspection of baggage and other objects |
US5596418A (en) * | 1990-08-17 | 1997-01-21 | Samsung Electronics Co., Ltd. | Deemphasis and subsequent reemphasis of high-energy reversed-spectrum components of a folded video signal |
US5182778A (en) * | 1990-08-31 | 1993-01-26 | Eastman Kodak Company | Dot-matrix video enhancement for optical character recognition |
US5181254A (en) * | 1990-12-14 | 1993-01-19 | Westinghouse Electric Corp. | Method for automatically identifying targets in sonar images |
FR2670978A1 (en) * | 1990-12-21 | 1992-06-26 | Philips Electronique Lab | MOTION EXTRACTION METHOD COMPRISING THE FORMATION OF DIFFERENCE IMAGES AND THREE DIMENSIONAL FILTERING. |
JPH04241077A (en) * | 1991-01-24 | 1992-08-28 | Mitsubishi Electric Corp | Moving body recognizing method |
US5296852A (en) * | 1991-02-27 | 1994-03-22 | Rathi Rajendra P | Method and apparatus for monitoring traffic flow |
EP0505858B1 (en) * | 1991-03-19 | 2002-08-14 | Mitsubishi Denki Kabushiki Kaisha | A moving body measuring device and an image processing device for measuring traffic flows |
JP2936791B2 (en) * | 1991-05-28 | 1999-08-23 | Matsushita Electric Industrial Co., Ltd. | Gradation correction device
US5509082A (en) * | 1991-05-30 | 1996-04-16 | Matsushita Electric Industrial Co., Ltd. | Vehicle movement measuring apparatus |
JPH0578048A (en) * | 1991-09-19 | 1993-03-30 | Hitachi Ltd | Detecting device for waiting passenger in elevator hall |
US5289520A (en) * | 1991-11-27 | 1994-02-22 | Lorad Corporation | Stereotactic mammography imaging system with prone position examination table and CCD camera |
KR940001054B1 (en) * | 1992-02-27 | 1994-02-08 | 삼성전자 주식회사 | Automatic contrast compensating method and it's apparatus in color video printer |
US5500904A (en) * | 1992-04-22 | 1996-03-19 | Texas Instruments Incorporated | System and method for indicating a change between images |
JP2917661B2 (en) * | 1992-04-28 | 1999-07-12 | Sumitomo Electric Industries, Ltd. | Traffic flow measurement processing method and device
US5300739A (en) * | 1992-05-26 | 1994-04-05 | Otis Elevator Company | Cyclically varying an elevator car's assigned group in a system where each group has a separate lobby corridor |
US5612928A (en) * | 1992-05-28 | 1997-03-18 | Northrop Grumman Corporation | Method and apparatus for classifying objects in sonar images |
US5410418A (en) * | 1992-06-24 | 1995-04-25 | Dainippon Screen Mfg. Co., Ltd. | Apparatus for converting image signal representing image having gradation |
US5483351A (en) * | 1992-09-25 | 1996-01-09 | Xerox Corporation | Dilation of images without resolution conversion to compensate for printer characteristics |
US5617484A (en) * | 1992-09-25 | 1997-04-01 | Olympus Optical Co., Ltd. | Image binarizing apparatus |
US5387768A (en) * | 1993-09-27 | 1995-02-07 | Otis Elevator Company | Elevator passenger detector and door control system which masks portions of a hall image to determine motion and count passengers
EP0669034B1 (en) * | 1992-11-10 | 1997-01-15 | Siemens Aktiengesellschaft | Process for detecting and eliminating the shadow of moving objects in a sequence of digital images |
CA2112737C (en) * | 1993-01-01 | 2002-01-29 | Nobuatsu Sasanuma | Image processing machine with visible and invisible information discriminating means |
US5282337A (en) * | 1993-02-22 | 1994-02-01 | Stanley Home Automation | Garage door operator with pedestrian light control |
WO1994023375A1 (en) * | 1993-03-31 | 1994-10-13 | Luma Corporation | Managing information in an endoscopy system |
BE1007608A3 (en) * | 1993-10-08 | 1995-08-22 | Philips Electronics Nv | Improving picture signal circuit. |
US5604822A (en) * | 1993-11-12 | 1997-02-18 | Martin Marietta Corporation | Methods and apparatus for centroid based object segmentation in object recognition-type image processing system |
US5875264A (en) * | 1993-12-03 | 1999-02-23 | Kaman Sciences Corporation | Pixel hashing image recognition system |
JP3721206B2 (en) * | 1993-12-24 | 2005-11-30 | Fuji Photo Film Co., Ltd. | Image reproduction device
US5621868A (en) * | 1994-04-15 | 1997-04-15 | Sony Corporation | Generating imitation custom artwork by simulating brush strokes and enhancing edges |
US5625709A (en) * | 1994-12-23 | 1997-04-29 | International Remote Imaging Systems, Inc. | Method and apparatus for identifying characteristics of an object in a field of view |
US5982926A (en) * | 1995-01-17 | 1999-11-09 | At & T Ipm Corp. | Real-time image enhancement techniques |
US5727080A (en) * | 1995-05-03 | 1998-03-10 | Nec Research Institute, Inc. | Dynamic histogram warping of image histograms for constant image brightness, histogram matching and histogram specification |
FR2734911B1 (en) * | 1995-06-01 | 1997-08-01 | Aerospatiale | METHOD AND DEVICE FOR DETECTING THE MOVEMENT OF A TARGET AND THEIR APPLICATIONS |
US5857029A (en) * | 1995-06-05 | 1999-01-05 | United Parcel Service Of America, Inc. | Method and apparatus for non-contact signature imaging |
US5793883A (en) * | 1995-09-29 | 1998-08-11 | Siemens Medical Systems, Inc. | Method for enhancing ultrasound image |
US5978506A (en) * | 1995-12-28 | 1999-11-02 | Ricoh & Company, Ltd. | Colorant-independent color balancing methods and systems |
EP0801360B1 (en) * | 1996-04-10 | 2002-11-06 | Samsung Electronics Co., Ltd. | Image quality enhancing method using mean-matching histogram equalization and a circuit therefor |
US6020931A (en) * | 1996-04-25 | 2000-02-01 | George S. Sheng | Video composition and position system and media signal communication system |
US5872857A (en) * | 1996-05-22 | 1999-02-16 | Raytheon Company | Generalized biased centroid edge locator |
JP3602659B2 (en) * | 1996-08-05 | 2004-12-15 | Ricoh Co., Ltd. | Method and device for extracting features from a grey-value document image
KR100200628B1 (en) * | 1996-09-30 | 1999-06-15 | 윤종용 | Image quality enhancement circuit and method thereof |
US5859698A (en) * | 1997-05-07 | 1999-01-12 | Nikon Corporation | Method and apparatus for macro defect detection using scattered light |
US5949918A (en) * | 1997-05-21 | 1999-09-07 | Sarnoff Corporation | Method and apparatus for performing image enhancement |
US6760086B2 (en) * | 1998-03-26 | 2004-07-06 | Tomoegawa Paper Co., Ltd. | Attachment film for electronic display device |
EP0989516B1 (en) * | 1998-09-22 | 2005-06-15 | Hewlett-Packard Company, A Delaware Corporation | Image data processing method and corresponding device |
KR100322596B1 (en) * | 1998-12-15 | 2002-07-18 | 윤종용 | Apparatus and method for improving image quality maintaining brightness of input image |
US7006688B2 (en) * | 2001-07-05 | 2006-02-28 | Corel Corporation | Histogram adjustment features for use in imaging technologies |
2003
- 2003-09-08 US US10/657,723 patent/US7321699B2/en not_active Expired - Fee Related
- 2003-09-08 AU AU2003270386A patent/AU2003270386A1/en not_active Abandoned
- 2003-09-08 WO PCT/US2003/028028 patent/WO2004023787A2/en not_active Application Discontinuation

2007
- 2007-12-18 US US12/002,674 patent/US20080095433A1/en not_active Abandoned
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6763364B1 (en) * | 1995-02-14 | 2004-07-13 | Scott A. Wilber | Random number generator and generation method |
US7283683B1 (en) * | 1998-01-23 | 2007-10-16 | Sharp Kabushiki Kaisha | Image processing device and image processing method |
US6538405B1 (en) * | 2000-04-28 | 2003-03-25 | The Cherry Corporation | Accessory control system |
US7102696B2 (en) * | 2001-04-03 | 2006-09-05 | Chunghwa Tubes, Ltd. | Method of effecting various anti compensation processes on segmented gray level of input image on plasma display panel |
Cited By (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110075946A1 (en) * | 2005-08-01 | 2011-03-31 | Buckland Eric L | Methods, Systems and Computer Program Products for Analyzing Three Dimensional Data Sets Obtained from a Sample |
US8442356B2 (en) | 2005-08-01 | 2013-05-14 | Bioptgien, Inc. | Methods, systems and computer program products for analyzing three dimensional data sets obtained from a sample |
US7562169B2 (en) * | 2005-12-30 | 2009-07-14 | Symwave, Inc. | Video and audio front end assembly and method |
US20070174895A1 (en) * | 2005-12-30 | 2007-07-26 | Yao Zhigang | Video and audio front end assembly and method |
US8744159B2 (en) * | 2010-03-05 | 2014-06-03 | Bioptigen, Inc. | Methods, systems and computer program products for collapsing volume data to lower dimensional representations thereof using histogram projection |
US20110216956A1 (en) * | 2010-03-05 | 2011-09-08 | Bower Bradley A | Methods, Systems and Computer Program Products for Collapsing Volume Data to Lower Dimensional Representations Thereof |
US11140443B2 (en) | 2012-09-19 | 2021-10-05 | Google Llc | Identification and presentation of content associated with currently playing television programs |
US11006175B2 (en) | 2012-09-19 | 2021-05-11 | Google Llc | Systems and methods for operating a set top box |
US11729459B2 (en) | 2012-09-19 | 2023-08-15 | Google Llc | Systems and methods for operating a set top box |
US11917242B2 (en) | 2012-09-19 | 2024-02-27 | Google Llc | Identification and presentation of content associated with currently playing television programs |
US8907973B2 (en) * | 2012-10-22 | 2014-12-09 | Stmicroelectronics International N.V. | Content adaptive image restoration, scaling and enhancement for high definition display |
US20140111532A1 (en) * | 2012-10-22 | 2014-04-24 | Stmicroelectronics International N.V. | Content adaptive image restoration, scaling and enhancement for high definition display |
US10048057B2 (en) | 2013-12-05 | 2018-08-14 | Bioptigen, Inc. | Image registration, averaging, and compounding for high speed extended depth optical coherence tomography |
US20160253788A1 (en) * | 2015-02-27 | 2016-09-01 | Siliconfile Technologies Inc. | Device for removing noise on image using cross-kernel type median filter and method therefor |
US9875529B2 (en) * | 2015-02-27 | 2018-01-23 | SK Hynix Inc. | Device for removing noise on image using cross-kernel type median filter and method therefor |
US11740897B2 (en) | 2020-07-15 | 2023-08-29 | Copado, Inc. | Methods for software development and operation process analytics and devices thereof |
US11775910B2 (en) | 2020-07-15 | 2023-10-03 | Copado, Inc. | Applied computer technology for high efficiency value stream management and mapping and process tracking |
Also Published As
Publication number | Publication date |
---|---|
US20040131273A1 (en) | 2004-07-08 |
US7321699B2 (en) | 2008-01-22 |
WO2004023787A3 (en) | 2004-05-13 |
WO2004023787A2 (en) | 2004-03-18 |
AU2003270386A1 (en) | 2004-03-29 |
AU2003270386A8 (en) | 2004-03-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7321699B2 (en) | Signal intensity range transformation apparatus and method | |
US8570396B2 (en) | Multiple exposure high dynamic range image capture | |
US8525900B2 (en) | Multiple exposure high dynamic range image capture | |
EP2515273B1 (en) | Multiple exposure high dynamic range image capture | |
US6094508A (en) | Perceptual thresholding for gradient-based local edge detection | |
US8237813B2 (en) | Multiple exposure high dynamic range image capture | |
US8594451B2 (en) | Edge mapping incorporating panchromatic pixels | |
JP4460839B2 (en) | Digital image sharpening device | |
US6548800B2 (en) | Image blur detection methods and arrangements | |
JP2010525486A (en) | Image segmentation and image enhancement | |
US20110115815A1 (en) | Methods and Systems for Image Enhancement | |
WO2003061266A2 (en) | System and method for compressing the dynamic range of an image | |
Mitsunaga et al. | Autokey: Human assisted key extraction | |
KR102462265B1 (en) | Directional scaling systems and methods | |
US10762604B2 (en) | Chrominance and luminance enhancing systems and methods | |
US8897378B2 (en) | Selective perceptual masking via scale separation in the spatial and temporal domains using intrinsic images for use in data compression | |
Bajpai et al. | High quality real-time panorama on mobile devices | |
Tsai et al. | An adaptive dynamic range compression with local contrast enhancement algorithm for real-time color image enhancement | |
US10719916B2 (en) | Statistical noise estimation systems and methods | |
EP1384204A2 (en) | Apparatus and method for boundary detection in vector sequences and edge detection in color image signals | |
GB2490231A (en) | Multiple exposure High Dynamic Range image capture | |
CN114972087A (en) | Video processing method, device, equipment and computer storage medium | |
CN118799348A (en) | Tone and gray level edge detection method and system based on self-adaptive fusion method | |
Neuenhahn et al. | Pareto optimal design of an FPGA-based real-time watershed image segmentation | |
Steux et al. | Report on Computer Vision Algorithms |
Legal Events
Date | Code | Title | Description
---|---|---|---|
| AS | Assignment | Owner name: RYTEC CORPORATION, WISCONSIN Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:ANASTASIA, CHARLES M;REEL/FRAME:021279/0767 Effective date: 20040212
| STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION