GB2360895A - Image processor and method of image processing - Google Patents

Image processor and method of image processing

Info

Publication number
GB2360895A
Authority
GB
United Kingdom
Prior art keywords
image
colour
processor
component
components
Prior art date
Legal status
Withdrawn
Application number
GB0007936A
Other versions
GB0007936D0 (en)
Inventor
Stephen Mark Keating
Matthew Patrick Compton
Stephen John Forde
Clive Gillard
Morgan William Amos David
Current Assignee
Sony Europe Ltd
Original Assignee
Sony United Kingdom Ltd
Priority date
Filing date
Publication date
Application filed by Sony United Kingdom Ltd filed Critical Sony United Kingdom Ltd
Priority to GB0007936A priority Critical patent/GB2360895A/en
Publication of GB0007936D0 publication Critical patent/GB0007936D0/en
Publication of GB2360895A publication Critical patent/GB2360895A/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00 Details of colour television systems
    • H04N9/64 Circuits for processing colour signals
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/10 Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths
    • H04N23/13 Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths with multiple sensors
    • H04N23/15 Image signal generation with circuitry for avoiding or correcting image misregistration
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00 Details of colour television systems
    • H04N9/64 Circuits for processing colour signals
    • H04N9/646 Circuits for processing colour signals for image enhancement, e.g. vertical detail restoration, cross-colour elimination, contour correction, chrominance trapping filters

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Color Television Image Signal Generators (AREA)

Abstract

An image processor for detecting chromatic error in a colour image represented by a video signal, the video signal having at least first and second colour components, the detector comprising an image analysis processor arranged in operation to receive the video signal, to detect a part of the image in the first colour component, and to detect the corresponding part of the image in the second colour component, and a comparison processor coupled to the image analysis processor arranged in operation to detect the chromatic error consequent upon a spatial displacement between the corresponding parts of the image in the first and the second of the colour components. The comparison may be effected by calculating a difference in pixel values between blocks of the image at a plurality of displacements between the first and the second colour components. A minimum in the difference between the pixel values provides an indication of the chromatic error between the first and second components. The chromatic error can be evaluated for each block, to establish the chromatic error at different parts of the image. The detected chromatic error is used to correct the chromatic error in the video signal, which may be from a video camera.

Description

Image Processor and Method of Image Processing
Field of Invention
The present invention relates to methods of processing colour images and to image processors. The present invention also relates to video signal processing, to video signal processors and to video cameras embodying such video signal processors.
More particularly, but not exclusively, the present invention relates to image processors which are arranged to process colour images formed by an imaging lens, and to methods of processing images formed by an imaging lens.
Background of Invention
Cameras and light projectors are examples of optical imaging equipment which use an imaging lens to focus light to form an image. For cameras, the imaging lens is provided in order to focus an image falling within the field of view of the lens onto a sensor. Typically the sensor in, for example, digital cameras, camcorders and video cameras is provided with a dichroic element which serves to divide the colour image formed by the lens into red, green and blue components. The red, green and blue components are then sampled in order to produce a colour image signal representing the red, green and blue components. In the case of still image digital cameras, the colour image signal is represented in digital form. In this form, the data represented by the colour image signals may be stored in order to be reproduced or processed in some way. In the case of video cameras or camcorders, the colour signals in analogue or digital form may be recorded or communicated to a mixing apparatus where, for example, the camera is used in a television production studio. In the case of conventional cameras, the sensor is a film which is exposed to a predetermined amount of light produced from the image focused by the lens.
A further example in which a lens is used to form an image is a projector in which the projector provides the light source which is modulated in some way by an image, such as by introducing a coloured slide which is to be displayed.
In the above examples a lens is used in order to focus the image falling within a field of view of the lens. However, lenses do not form a perfect representation of the image falling within the field of view of the lens. This is because optical properties of the lens itself cause distortion in the focused image formed by the lens. One example of such distortion is chromatic aberration. Chromatic aberration arises from dispersion, which is a property of the lens resulting from the refractive index of the material forming the lens, such as glass, differing with wavelength. As a result the quality of the colour image is impaired, particularly at the boundaries of the image, where the chromatic aberration causes the greatest error.
In order to correct the chromatic aberration in a colour image, it is necessary to estimate the chromatic aberration error caused by the imaging lens, as a function of the focus, zoom and iris settings of the lens. To this end, it is known for some high definition television camera lenses to provide a lens aberration indicator which produces, from a look-up table, a signal representative of the lens aberration for the particular focus, zoom and iris settings. However, not all lenses are provided with such aberration indicators.
Summary of the Invention
According to the present invention there is provided a method of processing a colour image represented by a video signal having at least first and second colour components, the method comprising the steps of detecting a part of the image in the first colour component, detecting the corresponding part of the image in the second colour component, and detecting the chromatic error consequent upon a spatial displacement between the part of the image in the first colour component and the corresponding part of the image in the second colour component.
Each of the colour components of the video signal is derived from light having at least one different wavelength. The term light as used herein should be interpreted broadly to include both visible and invisible light. Correspondingly the term 'colour' refers to and includes light of at least one wavelength, but more particularly a band of wavelengths, which may be visible or invisible to the human eye.
The method of processing the colour image represented by a video signal serves to detect and evaluate the chromatic error produced in a colour image by, for example, an imaging lens. The method provides a substantial improvement with respect to known techniques because the chromatic error is measured in the video signal itself rather than estimating the chromatic error caused by the imaging lens with respect to predetermined characteristics as a function of the focus, zoom and iris settings. Therefore by detecting a part of the image in a first colour component, detecting the corresponding part of the image in a second colour component and detecting a spatial displacement between the part of the image in the first colour component and the corresponding part of the image in the second colour component, the chromatic error may be detected and evaluated in accordance with the spatial displacement.
In a further aspect of the present invention there is provided a method of processing a colour image which includes the steps of generating a video signal representing a colour image, the video signal having at least first and second colour components, each component being derived from light of at least one different wavelength, detecting a part of the image in said first colour component, detecting the corresponding part of the image in said second colour component, and detecting a chromatic error consequent upon a spatial displacement between the corresponding parts of the image in the first and the second colour components.
As will be appreciated, therefore, the method of image processing, which provides a measure of a chromatic error in a colour image, could be applied to any colour image and is not limited in application to video signals of existing systems. As such the method could include the further step of generating the video signal from the colour image.
In preferred embodiments the step of detecting the chromatic error consequent upon the spatial displacement may comprise the steps of detecting a feature of the part of the image in the first and the second colour components and comparing the position of the feature in the first and the second colour components, the difference in the position of the feature being indicative of the spatial displacement caused by the chromatic error. The comparison may be effected in several ways. The comparison may be effected by, for example, cross correlating the part of the image in the first colour component and the corresponding part of the image in the second colour component. However in preferred embodiments, and in particular in order to improve the efficiency with which the comparison is made, the step of comparing the position of the feature in the first and the second components may comprise the step of comparing a difference in pixel values between the part of the image in the first colour component and the corresponding part of the image in the second colour component at a plurality of displacements, the spatial displacement which is indicative of the chromatic error corresponding to that of a substantial minimum in the difference in pixel values.
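By way of illustration only, the sketch below compares a row of pixels from one colour component against the corresponding row of a second component at a range of integer displacements and returns the displacement giving the smallest mean absolute difference. It is a minimal sketch under stated assumptions: the function name, the wrap-around shift and the integer-only search are illustrative choices and not part of the described embodiment, which works to sub-pixel resolution as explained later in the description.

```python
import numpy as np

def estimate_displacement(part_a, part_b, max_shift=4):
    """Illustrative sketch: compare a row of pixels from one colour component
    (part_a) with the corresponding row from a second component (part_b) at a
    range of integer displacements and return the displacement that gives the
    smallest mean absolute difference."""
    part_a = np.asarray(part_a, dtype=float)
    part_b = np.asarray(part_b, dtype=float)
    best_shift, best_diff = 0, np.inf
    for shift in range(-max_shift, max_shift + 1):
        shifted = np.roll(part_b, shift)      # wrap-around shift, an illustrative simplification
        diff = np.mean(np.abs(part_a - shifted))
        if diff < best_diff:
            best_shift, best_diff = shift, diff
    return best_shift                         # displacement of the minimum, indicative of the chromatic error
```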
Although any feature in the image can be used to determine and detect the spatial displacement between the parts of the image in the first and second colour components, advantageously the feature may be at least part of an edge of an object in the image. By detecting the edge of an object, the minimum of the comparison results may be more easily detected.
A further improvement is provided to the image processing method by evaluating the chromatic error with respect to a predetermined orientation of the image and as a function of the position along this predetermined orientation. To this end the image processing method may comprise the steps of dividing the image into a plurality of image blocks and evaluating for each block the chromatic error consequent upon the spatial displacement between the first and the second components of the part of the image within the block.
To provide the relationship of chromatic error with respect to position in a predetermined orientation, in preferred embodiments of the present invention the step of evaluating for each block the chromatic error may comprise the steps of producing for each of the plurality of blocks a chromatic error value in accordance with the spatial displacement at which the comparison between the edge in the first and second components is a minimum, and the method of evaluating may comprise the steps of forming from the chromatic error values a relationship of chromatic error with respect to the spatial position in the image in the predetermined orientation, and producing error data representative of the relationship.
Although each of the blocks may provide an estimate of the chromatic error, some blocks produce a less accurate estimate of the chromatic error than others. This is a result of the features of the image which are used to compare the first and second colour components. Therefore an improvement may be provided to the image processing method which in preferred embodiments includes the steps of detecting a second minimum in the comparison values with respect to the spatial displacement, the second minimum being greater than the minimum detected as representing the chromatic error for an image block, determining a relative difference between the position of the detected minimum and the position of the second minimum with respect to a difference between the detected minimum and a maximum of the comparison values, and, consequent upon the determined relative difference, discounting the chromatic error detected for the image block.
For a conventional camera, or a conventional projector, the colour components of the image correspond to red, green and blue light. As such in preferred embodiments the plurality of colour components may be three components derived substantially from red, green and blue light, wherein a first chromatic error is detected consequent upon a spatial displacement between the red component and the green component, the first colour component being the red component and the second colour component being the green component, and a second chromatic error is detected for the blue component and the green component, the first colour component in this case being the blue component and the second colour component in this case being the green component.
It is known that the human eye is more sensitive to green light than to either red or blue light. For this reason the green component is made a reference: a first chromatic error is determined for the red component with respect to the green component and a second chromatic error for the blue component with respect to the green component, providing two error signals which are indicative of the chromatic error for the red component and the blue component. Thus with this information the chromatic error may be corrected or at least substantially reduced. To this end, a further aspect of the present invention is to provide a method of processing a video signal according to patent claim 13.
According to a further aspect of the present invention there is provided an image processor for detecting chromatic error in a colour image according to patent claim 14.
According to a further aspect of the present invention there is provided a camera having a video signal processor according to patent claim 28.
A further advantage is provided to embodiments of the present invention which may be, for example, video cameras. As already explained, not all lenses provide an indication of the chromatic error that is produced by the lens as a function of the iris, focus and zoom settings. By providing an image processor which operates in accordance with the present invention, the video camera may be provided with a measure of the chromatic error produced by the imaging lens, even though the imaging lens itself may not provide an indication of the chromatic error. This is particularly advantageous because not all high definition television lenses provide an indication of the chromatic error. Furthermore, an advantage is provided in that the video camera can operate with a standard definition lens which is more prone to chromatic aberration. As such, providing an image processor to detect the chromatic aberration can provide a way of reducing the chromatic error produced by a standard definition lens.
Various aspects and features of the present invention are defined in the appended claims.
Brief Description of Drawings
Embodiments of the present invention will now be described, by way of example only, with reference to the accompanying drawings in which:
Figure 1 is a schematic block diagram of a video camera,
Figure 2 is a schematic block diagram of parts of the video camera shown in figure 1,
Figure 3(a) is an illustrative representation of three components of a colour image formed by the imaging lens shown in figure 2, and Figure 3(b) is also an illustrative representation of components of a colour image formed by the imaging lens shown in figure 2,
Figure 4 is a schematic block diagram of a data processor which operates to reduce a chromatic error caused by the imaging lens shown in figure 2,
Figure 5 is a schematic block diagram of a chromatic error processor,
Figure 6 is a graphical representation of amplitude against pixel position for an image block,
Figure 7 is an illustrative representation of an image block comparison process,
Figure 8 is a part graphical, part schematic representation of a process of discounting image blocks from the process of estimating the chromatic error,
Figure 9(a) is a graphical representation of a relationship between comparison values and displacements for a good image block, and Figure 9(b) is a graphical representation of comparison values and displacements for a bad image block,
Figure 10(a) is an illustrative representation of a test chart, and Figure 10(b) is a graphical representation of a relationship of amplitude against time for red and green components for the test chart,
Figure 11 is a flow diagram representing the method of detecting chromatic error performed by the chromatic error processor shown in Figure 5, and
Figure 12 is a flow diagram representing further parts of the method represented by the flow diagram of figure 11.
Description of Preferred Embodiments
There are, in general, two types of chromatic aberration: longitudinal aberration, which corresponds to a tracking error; and lateral aberration, which corresponds to a registration error. The longitudinal chromatic aberration causes different wavelengths of light, and therefore colours, to focus on different image planes which are formed respectively along an axis of the lens at different distances from the lens. This produces a tracking error in that different colours of an image will be focused at different points or different image planes. Furthermore, in the case of a zoom lens the amount of longitudinal chromatic aberration varies as the lens is zoomed, and the relative position of the image planes on which different wavelengths or different colours of light are formed changes between a wide angle focus and a telephoto focus of the zoom lens. Lateral chromatic aberration occurs because the magnification of the image formed by the lens differs with wavelength. In the field of television this type of chromatic aberration is referred to as causing registration error, which is produced because different wavelengths of light, and therefore colours, from a colour image will be focused at different points laterally displaced from the axis of the lens on an imaging plane.
Although for explanation the chromatic error has been described as being produced by an imaging lens, the chromatic error could be caused by other optical components such as, inter alia, prisms and filters. Embodiments of the present invention are therefore not limited to applications in correcting chromatic error from lenses but may be applied to any colour image in which there is some form of chromatic error without dependence on how this is caused.
An example of an item of optical imaging equipment with which embodiments of the invention find application is shown in figure 1. In figure 1 a video camera 1 is shown to comprise an imaging lens 2 having a lens body 22 which is coupled to a camera body 4 and is arranged in operation to focus an image falling within a field of view of the imaging lens 2 onto a sensor within the body of the camera 4. The video camera is also provided with a view finder 6 which provides an operator with a view of the image focused by the imaging lens of the camera so that the operator may adjust the position, focus and other parameters of the camera in order to optimise the image produced by the imaging lens 2. Typically the sensor is arranged to generate colour image signals which may be displayed for example on a display means 8 to provide a further illustration of the colour image produced by the camera 1. The use of the display means 8 is more common on hand held video cameras such as "camcorders".
The video camera 1 may also include a tape drive 10 which is arranged to record the colour image signals, or alternatively the colour image signals may be presented at an output channel 12 to be fed to a separate recording apparatus or a mixing studio. Parts of the television camera 1 which are particularly relevant for facilitating understanding of the present invention are shown in figure 2 where parts also appearing in figure 1 bear identical numerical designations.
In figure 2 the lens body 22 is shown to have three output channels 14, 16, 18 which are connected to a data processor 20. As will be explained, the data processor operates to detect and evaluate a chromatic error in the colour image represented by the video signal, and to correct or at least reduce this chromatic error. The processed colour signals are presented at output channels 56, 58, 60. The three output channels 14, 16, 18 are arranged to convey signals representative of three colour components of the colour image formed by the imaging lens 2. Conventionally, the three colour components are representative of red, green and blue light. The red, green and blue light components of the colour image are produced by a dichroic element 24 disposed at an imaging plane 32 embodied within the body of the imaging lens 22, which divides the colour image into red, green and blue light components which are arranged to be detected by a corresponding sensor 26, 28, 30. The sensor is shown in figure 2 to comprise three CCD elements 26, 28, 30. However, the sensor could be formed from a single CCD element from which the three colour components are recovered. The focus of the lens takes into account the effect of the dichroic element 24, which is usually formed as a splitter prism, whereby the focus accommodates the refraction introduced by the prism. A sampling processor (not shown) forming part of the sensors 26, 28, 30 operates to sample the red, green and blue light components and to generate the three colour signal components which are representative of the samples of pixels within horizontal lines which make up the red, green and blue image components. Therefore, in effect the dichroic element in combination with the sensors and the sampling processor forms a colour pick-up which generates a colour video signal representative of the colour image.
Although in the example embodiment the three colour components are representative of red, green and blue light, the components may be representative of light of any wavelength both visible and invisible. Furthermore, the image may be comprised of only two components which suffer from chromatic aberration and are therefore of different sizes. An example of an application involving only two components is the processing of different image components produced from a camera from infra-red light and low intensity visible light, such as might be used as a security camera.
As will be appreciated from the explanation given above, the imaging lens 2 suffers from a chromatic aberration so that, at an imaging plane 32, each of the red, green and blue image components will differ in size as a result of the distortion produced by the chromatic aberration of the lens. This is illustrated in a somewhat exaggerated way by the representation shown in figures 3(a) and 3(b).
In figure 3(a) a reference area represented by the solid square 34 provides an illustration of a detection area which can be utilised and is formed by the dichroic element 24 in combination with the sensors 26, 28, 30. As shown within the reference square 34, a red component of the image R is represented by a dot-dashed line as a square and, within the square, a triangle. Correspondingly, the green light component representing the same image is shown and illustrated by the solid line G, whereas the blue light component is represented by the dotted line B. The same image is represented in figure 3(b). However, because the imaging lens 2 is a zoom lens, the representation in figure 3(a) is shown to illustrate a situation in which the zoom lens is set at a wide angle zoom. Correspondingly, figure 3(b) is representative of a telephoto zoom. In this focus, the blue light component now appears as the largest of the three components, and the red light component now appears as the smallest of the three components. This is an illustration of a characteristic of chromatic aberration. The relative size of the different components with respect to the focus of the lens depends on the particular lens being used. In other examples, the red component could appear as the largest component and the blue component the smallest component, or alternatively both the red and blue components could be smaller or larger than the green component. However, in the present example it will be appreciated from the representations shown in figures 3(a) and 3(b) that the red, green and blue light components of the image differ in size as a result of the chromatic aberration. This can therefore be represented as a difference in area formed by the images within the reference frame 34 formed on the sensors at the imaging plane 32.
As a result of the chromatic aberration, the colour image will contain imperfections and artefacts, particularly at extreme edges of objects. The camera, according to the example embodiment of the invention, is therefore arranged to correct this chromatic error, by processing the video signal to the effect of adjusting the relative size of the different colour components. As a result the camera can operate to produce an improved quality image. Furthermore the video camera can provide an image of improved quality even when fitted with a standard definition lens. However in order to correct the chromatic error, the error must first be estimated. The chromatic error is estimated and corrected by the data processor 20, which is shown in more detail in figure 4.
In figure 4 the data processor 20 is shown to comprise a control processor 40 which is arranged to receive on the three colour channels 32, 34, 36 the red, green and blue image component signals. The control processor 40 operates to determine which two of the three image components are to be processed to the effect of adjusting the size of these components to match the remaining reference component. The components to be adjusted are fed respectively to first and second colour component adjustment processors 44, 46 via two input channels 48, 50. Also conveyed to the first and second colour correction processors 44, 46, via control input channels 52, 54, is an amount by which each respective colour component differs from the remaining reference colour component. It is by this amount that the colour component in question is to be increased in size. The remaining reference colour component is then fed unprocessed to the first output channel 56. After being adjusted by the colour component processors 44, 46, the adjusted image components are presented on two output channels 58, 60 respectively. However, in order to correct the chromatic error by altering the size of two of the components, the chromatic error must first be estimated.
The chromatic error is estimated by a chromatic error processor 62. The three colour components of the video signal are fed to the chromatic error processor 62 via the colour connecting channels 32, 34, 36. The chromatic error processor 62 is arranged in operation to generate two error signals representative of the chromatic error, in the horizontal direction of the image, of the red colour component with reference to the green colour component and of the blue colour component with reference to the green colour component. These are presented respectively on red and blue error channels 64, 66. The chromatic error processor 62 is shown in more detail in figure 5.
The chromatic error processor 62 operates generally, in preferred embodiments, to generate an estimate of the chromatic error in the colour image with respect to horizontal positions across the image for each of the red and blue components with reference to the green component. This is because typically the green component is used as a reference, the human eye being most sensitive to the green component.
In figure 5 the chromatic error image processor 62 is shown to receive the three colour image components corresponding to red, green and blue light on the three connecting channels 32, 34, 36. The green component fed on the connecting channel 32 is used in the chromatic error image processor as a reference signal for the reasons already mentioned. The red and blue signals are therefore those for which the chromatic error is being evaluated. The red and blue components are therefore referred to in the following description as the input signals. The three colour components red, green and blue are received from the colour component input channels 32, 34, 36 by a block analysis processor 70. For each of the red, green and blue components the block analysis processor serves to divide the colour image corresponding to a field of the video signal into a series of image blocks 72 from which the colour image component is comprised. Each of the blocks of each of the colour image components is then pre-processed to the effect of substantially enhancing the isolation of horizontal frequency components from which each colour image component is made up. A pre-processor 74 which performs the pre-processing is comprised of a vertical low pass filter 76, a horizontal high pass filter 78 and an edge detection processor 80. The pre-processed image blocks are then fed to three connecting channel ports 82, 84, 86 which respectively convey the pre-processed colour image blocks for the red, green and blue components to a block match processor 100 forming the lower half of the chromatic error processor 62.
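The filtering stage of the pre-processor could be sketched as follows: a vertical low pass filter followed by a horizontal high pass filter applied to an image block. This is only a sketch: the function name and the use of scipy's separable 1-D convolution are assumptions, and the actual tap values (fifteen taps in the preferred embodiment, as noted below) are not specified here.

```python
import numpy as np
from scipy.ndimage import convolve1d

def preprocess_block(block, lp_taps, hp_taps):
    """Sketch of the pre-processing described above: a vertical low pass
    filter followed by a horizontal high pass filter, so that vertical
    detail (horizontal frequency components) in the block is emphasised."""
    block = np.asarray(block, dtype=float)
    vertically_smoothed = convolve1d(block, np.asarray(lp_taps, dtype=float),
                                     axis=0, mode='nearest')
    return convolve1d(vertically_smoothed, np.asarray(hp_taps, dtype=float),
                      axis=1, mode='nearest')
```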
Essentially, in preferred embodiments, the reference colour component (green component) and the input signal colour components (red or blue components) of a video image are divided into blocks of 64x32 pixels for each video field. A series of processes are then performed on each block. Each process validates the usefulness of the block which determines whether the block should be used for further processing.
In effect, the chromatic error processor 62 is arranged to characterise a lens during use of the camera to the effect that processing is applied to a frame of an image to obtain a maximum number of sample points. However, it should be possible to get corresponding performance for interlaced scenes by applying the process to individual fields.
In the upper part of the chromatic error image processor 62, both the reference signal and the input signals are pre-processed. The effect of pre-processing is to remove DC components and high frequency vertical components of the image signals. This has the effect of increasing the probability that the measurements are influenced only by the vertical lines, and reduces the likelihood of noise in the image blocks. In preferred embodiments, the filters, both the low pass and the high pass, have fifteen taps because it was found that these visually emphasise edges in the image.
The edge detection processor 80 forming a part of the pre-processor 74 operates to identify a suitable edge in the image block from which the chromatic error can be detected and evaluated. In operation the chromatic error image processor analyses the horizontal line of pixels along the vertical centre of the image block.
The operation of the edge detection processor 80 is illustrated by the graphical representation of the relationship between pixel amplitude and pixel position shown in figure 6. In figure 6 a test block 90 is shown to have a test area 92.
For this test area the pixel amplitude is plotted with reference to the horizontal pixel position, shown as the solid line 94, for eight pixel positions. The edge detection processor requires that two successive pixels have differences "diff 1" and "diff 2" which are greater than a predetermined threshold. A typical threshold is, for example, an amplitude value of 7 for a video signal having eight bit samples. As shown in figure 6, if the differences between two successive pixel positions are "diff 1" and "diff 2" and these are greater than the value 7, then the edge detection processor 80 will select this image block for further processing. If the difference is less than 7 then the image block is discounted. In effect, therefore, the edge detection processors serve to validate all blocks in the input signal and reference signal images. This is because, for a correlation measurement to be made between corresponding blocks in different colour image components, both the input signal blocks and the reference signal blocks must have a feature which will give a strong correlation result when the image blocks are compared.
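A minimal sketch of this validation test is given below. The use of the centre row of the block and the threshold of 7 follow the description above; the helper itself, its name and its defaults are illustrative assumptions rather than the actual implementation.

```python
import numpy as np

def block_has_edge(block, threshold=7):
    """Sketch of the edge validation step: examine the horizontal line of
    pixels along the vertical centre of the block and require two successive
    pixel-to-pixel differences ("diff 1" and "diff 2") each to exceed the
    threshold (7 for 8-bit samples in the example embodiment)."""
    block = np.asarray(block, dtype=float)
    centre_row = block[block.shape[0] // 2, :]
    diffs = np.abs(np.diff(centre_row))
    # two successive differences must both exceed the threshold
    return bool(np.any((diffs[:-1] > threshold) & (diffs[1:] > threshold)))
```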
The further parts of the chromatic error image processor 62 to which the connecting ports 82, 84, 86 are fed are shown in the lower half of figure 5. The connecting ports 82, 84, 86 feed the image blocks from the red, green and blue components respectively to a block match processor 100. The block match processor serves to generate comparison values between the two input signal components, which are the red and the blue components, and the reference signal component, which is the green component, as a function of displacement between the image blocks being compared. A first data set of comparison values formed between the red component and the green component is communicated to a block selection processor 102 via a first connecting channel 104. A second data set of comparison values derived from a comparison of image blocks from the blue component with respect to the green component is fed to the block select processor 102 via a second connecting channel 106. Following processing by the block select processor 102, the first and second comparison values for the image blocks are fed via further connecting channels 108, to a chromatic error evaluation processor 112. The chromatic error evaluation processor presents on the output channels 64, 66 the chromatic error with respect to position along the horizontal axis of the image for the red and blue colour components, which are then fed to the control processor 40 of the chromatic error correction processor 20 as shown in figure 4.
The operation of the lower part of the chromatic error image processor 62 will now be explained. Generally, in preferred embodiments the block match processor is arranged to compare each image block of one of the components with the corresponding image block of the reference component (green component) for a predetermined number of displacements. This can be performed by cross correlating one of the image blocks against the corresponding image block of the other component. However, in preferred embodiments the block match processor 100 serves to calculate a sum of absolute differences in pixel values for corresponding horizontal displacements between co-sited reference and input signal blocks. The absolute difference calculation is preferred as a comparison because it provides the comparison values as a function of the horizontal displacement for a smaller number of calculations. For each of the horizontal displacements, the absolute difference in pixel values is calculated in accordance with equation (1), where a(n) and b(n) are the image block pixel values.
comparison value = Σ (n = 1 to N) | a(n) - b(n) |    (1)
In the block match processor 100 of the illustrative embodiment, the absolute difference calculation is performed using an eleven tap, eight sub-position interpolation filter providing eight horizontally shifted variations of the input signal block with respect to the reference block. This comparison is represented pictorially in figure 7.
In figure 7 a reference signal block 120 is compared with a sample signal block 122 at each of eight horizontal displacements within a complete horizontal search area represented by a line 124. The dark areas are representative of an extended search area which is required in order to provide some overflow on either side of the signal block, because the search range extends for nearly 5 pixels. The complete horizontal search therefore provides a search area of 4 pixels at a resolution of 1/8th of a pixel for each of the positions of the horizontal displacements. As will be explained, the search resolution can be further increased by an averaging effect provided by a curve fitting process performed by the chromatic error evaluation processor 112. Thus the block match processor 100 provides a data set representing, for each of the eight horizontal displacements, an absolute difference between the input signal block and the reference block. So, for each image block for the two red and blue input signals, a data set is generated and fed to the block select processor 102 via the connecting channels 104, 106.
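The comparison of equation (1) computed at a set of sub-pixel displacements might be sketched as below. Linear interpolation (numpy.interp) stands in for the eleven-tap sub-position interpolation filter of the described embodiment, and the function name, the symmetric search range and the 1/8-pixel step are illustrative assumptions rather than the actual search geometry.

```python
import numpy as np

def comparison_values(reference_row, input_row, n_shifts=8, step=0.125):
    """Sketch of the block-match comparison of equation (1): the mean
    absolute difference between a reference row and an interpolated,
    horizontally shifted version of the input row, for a set of
    sub-pixel displacements."""
    reference_row = np.asarray(reference_row, dtype=float)
    input_row = np.asarray(input_row, dtype=float)
    x = np.arange(len(input_row), dtype=float)
    values = {}
    for k in range(-n_shifts, n_shifts + 1):
        shift = k * step
        # linear interpolation as a stand-in for the 11-tap interpolation filter
        shifted = np.interp(x + shift, x, input_row)
        values[shift] = float(np.mean(np.abs(reference_row - shifted)))
    return values   # the displacement with the smallest value marks the estimated error
```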
In preferred embodiments, the block select processor 102 operates generally to discount data sets comprising comparison values for image blocks which do not produce a sufficiently sharp minimum in the curve which is fitted to the comparison values as a function of the displacement. For an example block, this is shown in a graphical representation in figure 8. In figure 8 a curve 130 represents a plot of the comparison value on the x-axis 132 with respect to the sub-pixel displacement of the input signal block and the reference signal block on the y-axis 134. The curve 130 is generated as a representation of a curve fitted to the comparison values provided for a data set for a sample block. The curve has a minimum at a point 136. However, the block select processor then searches in the shaded areas 138, 140 for a further minimum value of the curve. In this example the further minimum is found at point 142. This minimum is outside the exclusion zone of eight sub-pixels which is represented by the arrow 144.
In order to provide a sufficiently accurate detection of the chromatic error, the plot of comparison values against sub-pixel displacements should provide a sufficiently sharp minimum point for that minimum point to be counted in the evaluation of the chromatic error. The curve of comparison value with respect to sub-pixel displacement is representative of a correlation between the part of the image within the input signal block and the reference block. The curve will therefore vary in dependence upon the contents of the block. If there is a sharp edge then the curve will have a substantial minimum and the displacement representative of the chromatic error can be regarded as highly reliable. If however there is not a sharp minimum then the likelihood that the minimum provides an accurate representation of the actual chromatic error is substantially reduced. In this case the minimum displacement produced for such a block should be discounted when evaluating the chromatic error. A block providing a good estimate of the minimum displacement, and therefore of the chromatic error, is shown in figure 9(a), where the axes correspond to those of figure 8 and so bear the same numerical references.
Correspondingly, a block which does not provide a good estimate of the chromatic error is shown in figure 9(b). In figure 9(b) it can be seen that there are two minima 150, 152 forming part of the curve. In preferred embodiments, if the difference in the comparison values of the two minima, compared to the total dynamic range in the comparison values, is less than 50%, then the correlation values of the data set for this block are discounted. This is represented by equation (2), where G is the ratio compared with the 50% threshold.
G = (NextMin - Min) × 100 / (Max - Min)    (2)
The output channels from the block select processor 102 therefore provide comparison data sets for each of the image blocks which are to be counted in estimating the chromatic error. Essentially the collection of data sets represents the horizontal error across the input source video, for each of the red and blue components. The block select processor, for each component, selects the displacement corresponding to the minimum value for each image block. This provides, for each component, the chromatic error for this block. The chromatic error evaluation processor 112 then operates to perform a curve fitting process to generate a curve from the selected minimum values for each data set. The curve which best fits the selected minimum values is representative of the chromatic error as a function of horizontal displacement across the image. In preferred embodiments, the curve fitting is performed by producing a third order curve to fit the data set produced. The third order curve is represented in generic form by equation (3). However, because the chromatic error is being determined for the horizontal displacement across the image, the curve according to equation (3) is resolved in the x direction as represented by equation (4). As such, for a third order system there is a dependence on the vertical component (y) for the horizontal component error (Xerror). The horizontal error component for the third order system is dependent on the vertical position (y) and the horizontal position (x). However, for linear systems such as, for example, a tele-cine application, the Xerror is dependent on the horizontal position (x) only.
Error(r) = Ar³ + Br + C    (3)
where r is a variable representing values along the y-axis.
Xerror = Ax(x² + y²) + Bx + Cx    (4)
where Xerror is the chromatic error at a horizontal displacement x.
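The block-selection test of equation (2) and the curve fit of equation (4) can be sketched together as follows. The representation of the comparison values, the handling of the case where no second minimum lies outside the exclusion zone, and the treatment of the final term of equation (4) as a constant offset are all assumptions (that term is not fully legible in the source); the function names and the use of block-centre coordinates are likewise illustrative.

```python
import numpy as np

def block_is_reliable(comparison, exclusion=8, threshold=50.0):
    """Sketch of the block selection test of equation (2): find the lowest
    comparison value (Min), the next-lowest minimum outside an exclusion zone
    around it (NextMin) and the maximum (Max), and keep the block only if
    G = (NextMin - Min) * 100 / (Max - Min) reaches the threshold."""
    shifts = sorted(comparison)                       # comparison: {shift index: value}
    values = [comparison[s] for s in shifts]
    i_min = min(range(len(values)), key=values.__getitem__)
    v_min, v_max = values[i_min], max(values)
    if v_max == v_min:
        return False                                  # flat curve: no usable minimum
    outside = [v for i, v in enumerate(values) if abs(i - i_min) > exclusion]
    if not outside:
        return True                                   # nothing competes with the detected minimum
    g = (min(outside) - v_min) * 100.0 / (v_max - v_min)   # equation (2)
    return g >= threshold

def fit_error_surface(xs, ys, errors):
    """Sketch of the curve-fitting step: a least-squares fit of a model of the
    shape of equation (4) to the per-block minimum displacements measured at
    block centres (xs, ys).  Returns the fitted coefficients."""
    xs, ys, errors = (np.asarray(v, dtype=float) for v in (xs, ys, errors))
    # columns: third-order term x(x^2 + y^2), linear term x, constant offset (an assumption)
    design = np.column_stack([xs * (xs**2 + ys**2), xs, np.ones_like(xs)])
    coeffs, *_ = np.linalg.lstsq(design, errors, rcond=None)
    return coeffs   # the modelled Xerror at any (x, y) follows from the fitted curve
```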
The chromatic error image processor 62 generates from the red, green and blue colour components two sets of error data presented on the output channels 64, 66 to the control processor 40. The error data indicates the chromatic error in the colour image as a function of the horizontal axis of the image. However, the chromatic error processor 62 could operate instead, or in addition, in the vertical direction to estimate the chromatic error in the vertical axis. This may also be presented on the output channels 64, 66. Rather than repeat the explanation of the chromatic error processor when estimating the chromatic error in the vertical plane, it will be understood that the operation of the chromatic error processor is substantially the same, except that the displacement of the input signal block with respect to the reference block would be in the vertical direction as performed by the block match processor 100, and the pre-processor 74 would operate to isolate the vertical frequency components. As such, the low pass filters 76 of the pre-processor 74 would be horizontal low pass filters and the high pass filters 78 would be vertical high pass filters. Correspondingly, it will be appreciated that the chromatic error could also be evaluated in any other direction within the colour image by correspondingly orientating the displacement of the input signal block with respect to the reference block.
The chromatic error processor 62 can determine the chromatic error in a colour image even at values which are less than one pixel. The embodiment of the invention can therefore be arranged to correct the chromatic aberration produced by a high definition television camera lens which produces sub-pixel chromatic aberration, as illustrated in figure 10. In figure 10(a) a test chart 160 is shown to have a single black vertical line 162. For this test chart 160, a graph is shown in figure 10(b), which represents a relationship of amplitude 164 on the x-axis against time on the y-axis 166 for a red colour component represented by a broken line 168 and a green colour component represented by a solid line 170. As can be seen, the chromatic aberration between the two signals is represented by the minimum values corresponding to the position of the line on the test chart. Thus, as the signal is generated by scanning the test chart, it will be appreciated that the chromatic error 172 is representative of a sub-pixel displacement in the minima of the two curves 168, 170.
The operation of the chromatic error image processor 62 is summarised by the flow diagram shown in figure 11. At a first process step 200 each of the colour components of the video signal, which are the reference and two input signal components, is divided into image blocks of 64x32 pixels. In process step 202, the image blocks are then pre-processed to the effect of isolating vertical lines of the image in each block by vertical low pass filtering and horizontal high pass filtering. In process step 204, the edges within the image blocks are detected by comparing the difference in absolute value between successive pixels. If the image block has two successive values which differ in absolute value by more than a predetermined threshold (by a value of 7 in the example embodiment) then this block is selected for further processing. Otherwise the block is discarded. At process step 206, the sets of image blocks are divided with respect to the two chromatic error evaluations for the red and the blue components. The red component is evaluated on branch 208 and the blue component is evaluated on branch 210. Since the following process steps are repeated substantially in each of the branches, the subsequent process steps will only be described for one branch. At process step 212 comparison values are generated between the red signal component image blocks and the corresponding green component image blocks as a function of the eight sub-pixel displacements. At process step 214, the minimum displacement value for each image block is determined and added to a data set which represents values from which the chromatic error is evaluated. At process step 216, the chromatic error for the red signal component is evaluated with respect to displacement in the horizontal axis of the colour image by fitting a curve to the data set corresponding to the minimum displacement values for each of the image blocks. The corresponding steps for the blue comparison are performed on the branch 210 and correspondingly designated 212', 214', 216'.
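As a usage illustration only, the hypothetical helpers sketched earlier can be strung together in the order of figure 11 for one input component (red or blue) against the green reference. This sketch is not self-contained: it reuses preprocess_block, block_has_edge, comparison_values, block_is_reliable and fit_error_surface from the earlier sketches, and the block geometry, default arguments and step annotations are assumptions rather than the actual implementation.

```python
def estimate_component_error(reference_plane, input_plane,
                             block_h=32, block_w=64, lp_taps=None, hp_taps=None):
    """Illustrative end-to-end flow for one input component against the green
    reference: divide into 64x32-pixel blocks, pre-process, keep blocks with a
    usable edge, block-match at sub-pixel shifts, keep reliable minima, then
    fit the error curve."""
    xs, ys, errors = [], [], []
    for top in range(0, reference_plane.shape[0] - block_h + 1, block_h):
        for left in range(0, reference_plane.shape[1] - block_w + 1, block_w):
            ref_blk = reference_plane[top:top + block_h, left:left + block_w]
            in_blk = input_plane[top:top + block_h, left:left + block_w]
            if lp_taps is not None and hp_taps is not None:
                ref_blk = preprocess_block(ref_blk, lp_taps, hp_taps)   # step 202
                in_blk = preprocess_block(in_blk, lp_taps, hp_taps)
            if not (block_has_edge(ref_blk) and block_has_edge(in_blk)):
                continue                                                # step 204: no usable edge
            centre = block_h // 2
            comp = comparison_values(ref_blk[centre], in_blk[centre])   # step 212
            indexed = {i: v for i, v in enumerate(comp.values())}
            if not block_is_reliable(indexed, exclusion=4):
                continue                                                # figure 12: ambiguous minimum
            best_shift = min(comp, key=comp.get)                        # step 214
            xs.append(left + block_w / 2.0)
            ys.append(top + block_h / 2.0)
            errors.append(best_shift)
    return fit_error_surface(xs, ys, errors)                            # step 216: curve fit
```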
The further step of discarding image blocks which do not provide a sufficiently sharp minimum comparison value with respect to displacement is illustrated by the flow diagram shown in figure 12. Between the process steps 212 and 214, there is the step of evaluating the comparison value against displacement curves for each block and so at connecting arrow 218 the steps represented in figure 12 may be performed.
In figure 12 the first decision box 220 analyses, for each block, the comparison values with respect to displacement, and determines whether there is more than one minimum value present in the comparison against displacement curve. If there is more than one minimum then processing proceeds to process decision block 222. If there is only one minimum present then processing passes to step 214 as shown in figure 11. At processing decision step 222, a relative quality of the curve is evaluated in accordance with equation (2) with reference to the predetermined threshold (50%). If the ratio G is less than 50% then processing passes to block 224, at which the displacement values for that block are discounted in calculating the chromatic error. If the ratio G is greater than or equal to 50% then processing passes to process step 214 as shown in figure 11.
Returning to figure 4, the operation of the data processor to correct the chromatic error will now be explained. The control processor 40 receives the chromatic error data from the two control channels 64, 66. The control processor is then arranged to compare the lines of data representing the sampled red, green and blue images. The control processor 40 determines which two of the red, green and blue image components are the smallest and an amount, in terms of lines of the image, by which the smallest components differ from the largest of the three image components. The first, second or third data representative of the largest of the components is then fed via a first output channel 48 as an output version of the colour image signals without being further affected. However, the two image components corresponding to those with smaller areas are fed respectively to the first and second data processors 44, 46 via two further output channels 52, 54. On two further output channels 56, 58, the control processor generates an indication of an amount by which the two smallest image components must increase in size in order to match the largest of the components presented at the output 50. This is derived from the chromatic error data. The first and second data processors 44, 46 then operate to interpolate the two smallest image components in order to increase the size of these components by the amount determined with reference to the largest component. Following the interpolation the two smallest image components will be increased in size, so that the contents of these components correspond with those of the largest component. The two interpolated components are therefore output on the output channels 58, 60.
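The interpolation step of the correction could be sketched as below: a colour plane is enlarged by a small scale factor and cropped back to the original raster. The function name, the use of scipy's zoom with bilinear interpolation, the centred crop and the assumption that the scale factor is at least one are all illustrative; in the described embodiment the amount of enlargement would be derived from the error data produced by the chromatic error processor.

```python
import numpy as np
from scipy.ndimage import zoom

def resize_component_to_match(component, scale):
    """Sketch of the correction step: interpolate a colour component up by a
    small scale factor (assumed >= 1) derived from the evaluated chromatic
    error, then crop it back to the original raster so that it registers with
    the largest (reference) component."""
    h, w = component.shape
    enlarged = zoom(np.asarray(component, dtype=float), scale, order=1)  # bilinear interpolation
    top = (enlarged.shape[0] - h) // 2
    left = (enlarged.shape[1] - w) // 2
    return enlarged[top:top + h, left:left + w]   # crop back to the original raster
```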
Although the invention has been described with reference to a video camera, it will be appreciated that in other embodiments, the data processor 20 or the chromatic error processor 62 may be embodied within an image projector or any other optical instrument which processes colour images as video signals. Furthermore although the example embodiment has been described with the chromatic data processor 20 and the chromatic error image processor 62 embodied within the television camera, it will be appreciated that the chromatic error detection processor 62 could form a separate element such as an Application Specific Integrated Circuit (ASIC) which could be connected to an existing video camera or other optical instrument generating a representation of a colour image as a video signal.
Various modifications may be made to the embodiments hereinbefore described without departing from the scope of the present invention. It will be appreciated that a further aspect of the present invention is a chromatic error detection processor which operates to determine the chromatic error with reference to any two image components which are generated from light components having at least one different wavelength.

Claims (35)

1. A method of processing a colour image represented by a video signal having at least first and second colour components, said method comprising the steps of - detecting a part of said image in said first colour component, - detecting the corresponding part of said image in said second colour component, and - detecting a chromatic error consequent upon a spatial displacement between the part of said image in said first colour component and the corresponding part of the image in said second colour component.
2. A method of processing a colour image comprising the steps of - generating a video signal representing said colour image, said video signal having at least first and second colour components, each component being derived from light of at least one different wavelength; - detecting a part of said image in said first colour component, - detecting the corresponding part of said image in said second colour component, and - detecting a chromatic error consequent upon a spatial displacement between corresponding parts of said image in said first and said second of said plurality of colour components.
3. A method of image processing as claimed in Claims 1 or 2, wherein the step of detecting said chromatic error consequent upon said spatial displacement comprises the steps of - detecting a feature of said part of said image in said first and said second colour components, and - comparing the position of said feature in said first colour component and the position of said feature in said second colour component, the difference in the position of said feature in said first and second components being indicative of said spatial displacement caused by said chromatic error.
4. A method of image processing as claimed in Claim 3, wherein the step of comparing the position of said feature in said first and said second components comprises the step of cross correlating said part of said image in said first colour component and said corresponding part of said image in said second colour component.
5. A method of image processing as claimed in Claim 3, wherein the step of comparing the position of said feature in said first and said second components comprises the step of comparing a difference in pixel values between said part of said image in said first colour component and said corresponding part of said image in said second colour component at a plurality of displacements, said spatial displacement which is indicative of said chromatic error corresponding to that of a substantial minimum in said difference in pixel values.
6. A method of image processing in a colour image as claimed in any of Claims 3 to 5, wherein said feature is at least part of an edge of an object in said image.
7. A method of image processing as claimed in any preceding Claim, comprising the steps of - dividing said image into a plurality of image blocks, and - evaluating for each block said chromatic error consequent upon said spatial displacement between said first and said second colour components of the part of the image within the block.
8. A method of image processing as claimed in Claim 7, wherein the step of evaluating for each block said chromatic error comprises the step of determining for each of said plurality of blocks a chromatic error value in accordance with the spatial displacement at which said comparison values between the edge in said first and second components is a minimum, and the method of evaluating comprises the steps of - forming from said chromatic error values a relationship of chromatic error with respect to a spatial position in said image in a predetermined orientation, and - producing error data representative of said relationship.
9. A method of image processing as claimed in Claim 8, comprising the steps of - detecting a second minimum in said comparison values with respect to said spatial displacement, said second minimum being greater than the minimum detected as representing said chromatic error for an image block, determining a relative difference between the position of said detected minimum and the position of said second minimum, with respect to a difference between said detected minimum and a maximum of said comparison values, and - consequent upon said determined relative difference, discounting said chromatic error detected for the image block.
10. A method of image processing as claimed in any preceding Claim, wherein said predetermined orientation is in a substantially horizontal direction, said chromatic error being evaluated with respect to horizontal displacements in said image.
11. A method of image processing as claimed in Claim 10, comprising the step of - pre-processing said video signals to the effect of isolating vertical lines of said image.
12. A method of image processing as claimed in any preceding Claim, wherein said plurality of colour components are three components derived substantially from red, green and blue light, wherein a first chromatic error is detected consequent upon a spatial displacement between said red component and said green component, said first colour component being said red component and said second colour component being said green component, and a second chromatic error is detected for said blue component and said green component, said first colour component being said blue component and said second colour component being said green component.
13. A method of processing a video signal for reducing an effect of chromatic error in images represented by said video signal, said method comprising the steps of - evaluating said chromatic error in accordance with the method claimed in any of Claims 6 to 12, and - changing the size of the image in at least one of said colour components of said video signal, in accordance with said evaluated chromatic error.
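The correction step of claim 13 amounts to resampling at least one colour component with a slightly different magnification. A minimal sketch follows; horizontal-only scaling about the image centre and linear interpolation with np.interp are illustrative assumptions, not requirements of the claim.

```python
import numpy as np

def rescale_component_horizontally(component, scale):
    """Resample each row so the component is magnified by `scale` about the
    horizontal centre of the image (scale is typically very close to 1.0)."""
    h, w = component.shape
    centre = (w - 1) / 2.0
    x_out = np.arange(w, dtype=float)
    # Source coordinate from which each output column is taken.
    x_src = centre + (x_out - centre) / scale
    corrected = np.empty((h, w), dtype=float)
    for row in range(h):
        corrected[row] = np.interp(x_src, x_out, component[row].astype(float))
    return corrected

# Example: shrink the red component by 0.1% to pull its edges back into
# register with the green component (the sign and size of the factor would
# come from the evaluated chromatic error).
# red_corrected = rescale_component_horizontally(red, 1.0 / 1.001)
```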
14. An image processor for detecting chromatic error in a colour image, said detector comprising - image pick-up means for generating a video signal representing said image, said video signal having at least first and second colour components, each component being derived from light of at least one different wavelength, - an image analysis processor arranged in operation to receive said video signal, to detect a part of said image in said first colour component, and to detect the corresponding part of said image in said second colour component, and - a comparison processor which is arranged in operation to detect said chromatic error consequent upon a spatial displacement between the corresponding parts of said image in said first and said second of said colour components.
15. An image processor for detecting chromatic error in a colour image represented by a video signal having at least first and second colour components, said detector comprising - an image analysis processor arranged in operation to receive said video signal, to detect a part of said image in the first colour component, and to detect the corresponding part of said image in the second colour components, and - a comparison processor coupled to said image analysis processor and arranged in operation to detect said chromatic error consequent upon a spatial displacement between the corresponding parts of said image in said first and said second of said colour components.
16. An image processor as claimed in Claims 14 or 15, wherein said comparison processor is arranged in operation - to detect a feature of said part of said image in said first and said second colour components, and - to compare the position of said feature in said first colour component and the position of said feature in said second colour components, the difference in the position of said feature in said first and second components being indicative of said spatial displacement caused by said chromatic error.
17. An image processor as claimed in Claim 16, wherein said comparison processor is arranged in operation to compare the position of said feature in said first and said second components by cross correlating said part of said image in said first colour component and said corresponding part of said image in said second colour component.
18. An image processor as claimed in Claim 17, wherein said comparison processor is arranged in operation to compare the position of said feature in said first and said second components by comparing a difference in pixel values between said part of said image in said first colour component and said corresponding part of said image in said second colour component at a plurality of displacements, said spatial displacement which is indicative of said chromatic error corresponding to that of a substantial minimum in said difference in pixel values.
19. An image processor as claimed in any of Claims 14 to 18, wherein said image analysis processor is arranged in operation - to partition said image into a plurality of image blocks, and said comparison processor is arranged in operation - to evaluate for each image block said chromatic error consequent upon said spatial displacement between said first and said second colour components at which said comparison between the feature in said first and second components is a minimum.
20. An image processor as claimed in Claim 19, comprising - an evaluation processor arranged in operation - to receive the chromatic error values for each of said image blocks, - to form, from said chromatic error values, a relationship of chromatic error with respect to a spatial position in said image in a predetermined orientation, and - to generate error data representative of said relationship.
21. An image processor as claimed in Claim 20, wherein said evaluation processor is arranged in operation - to detect a second minimum in said comparison values with respect to said spatial displacement, said second minimum being greater than the minimum detected as representing said chromatic error for an image block, to determine a relative difference between the position of said detected minimum and the position of said second minimum, with respect to a difference between said detected minimum and a maximum of said comparison values, and - to discount said chromatic error detected for the image block, consequent upon said determined relative difference.
22. An image processor as claimed in any of Claims 16 to 21, wherein said feature is at least part of an edge of an object in said image.
23. An image processor as claimed in Claim 20, 21 or 22, wherein said predetermined orientation is in a substantially horizontal direction, said chromatic error being evaluated with respect to horizontal displacements in said image.
24. An image processor as claimed in Claim 23, comprising - a pre-processor arranged to receive said video signals and coupled to an input of said image analysis processor, and arranged in operation to isolate vertical lines of said image.
25. An image processor as claimed in Claim 24, wherein said pre-processor comprises - a vertical low pass filter arranged to attenuate high frequency components in the vertical parts of said video signal, and - a horizontal high pass filter arranged to attenuate low frequency components in the horizontal parts of said video signal.
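The pre-processing of claims 11 and 25 can be sketched with two simple separable filters: a vertical low-pass followed by a horizontal high-pass, which together leave mainly the vertical edges used in the horizontal displacement search. The 3-tap kernels below are illustrative assumptions only; neither the filter lengths nor the wrap-around edge handling is taken from the claims.

```python
import numpy as np

def isolate_vertical_lines(component):
    """Emphasise vertical lines: vertical low-pass then horizontal high-pass."""
    comp = component.astype(float)
    # Vertical low-pass: average each pixel with its vertical neighbours
    # (edge handling by wrap-around is ignored for brevity).
    v_lp = (np.roll(comp, 1, axis=0) + comp + np.roll(comp, -1, axis=0)) / 3.0
    # Horizontal high-pass: centred difference of horizontal neighbours.
    h_hp = np.roll(v_lp, -1, axis=1) - np.roll(v_lp, 1, axis=1)
    return np.abs(h_hp)
```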
26. An image processor as claimed in any of Claims 14 to 25, wherein said plurality of colour components are three components derived substantially from red, green and blue light respectively, wherein a first chromatic error is detected consequent upon a spatial displacement between said red component and said green component, said first colour component being said red component and said second colour component being said green component, and a second chromatic error is detected for said blue component and said green component, said first colour component being said blue component and said second colour component being said green component.
27. A video signal processor for reducing an effect of chromatic error in images represented by said video signal, said processor comprising - an image processor as claimed in any of Claims 14 to 26, arranged in operation to detect said chromatic error, and - a chromatic error correction processor which is arranged in operation to change the size of the image in each of a plurality of colour components of said video signal, consequent upon said chromatic error.
28. A video camera having a video signal processor as claimed in Claim 27.
29. A computer program providing computer executable instructions, which when loaded onto a computer configures the computer to operate as an image processor as claimed in any of Claims 14 to 26.
30. A computer program providing computer executable instructions, which when loaded on to a computer causes the computer to perform the method according to Claims 1 to 13.
31. A computer program product having a computer readable medium and having recorded thereon information signals representative of the computer program claimed in any of Claims 29 or 30.
32. An image processor as herein before described with reference to the accompanying drawings.
33. A video camera as herein before described with reference to the accompanying drawings.
34. A method of detecting/evaluating chromatic error as herein before described with reference to the accompanying drawings.
35. A method of processing a video signal as herein before described with reference to the accompanying drawings.
GB0007936A 2000-03-31 2000-03-31 Image processor and method of image processing Withdrawn GB2360895A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB0007936A GB2360895A (en) 2000-03-31 2000-03-31 Image processor and method of image processing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB0007936A GB2360895A (en) 2000-03-31 2000-03-31 Image processor and method of image processing

Publications (2)

Publication Number Publication Date
GB0007936D0 GB0007936D0 (en) 2000-05-17
GB2360895A true GB2360895A (en) 2001-10-03

Family

ID=9888914

Family Applications (1)

Application Number Title Priority Date Filing Date
GB0007936A Withdrawn GB2360895A (en) 2000-03-31 2000-03-31 Image processor and method of image processing

Country Status (1)

Country Link
GB (1) GB2360895A (en)

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0878970A2 (en) * 1997-05-16 1998-11-18 Matsushita Electric Industrial Co., Ltd. Imager registration error and chromatic aberration measurement system for a video camera

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006080953A1 (en) * 2005-01-27 2006-08-03 Thomson Licensing Method for edge matching in film and image processing
WO2006080950A1 (en) * 2005-01-27 2006-08-03 Thomson Licensing Edge based cmy automatic picture registration
CN101112105B (en) * 2005-01-27 2010-05-26 汤姆森特许公司 Edge matching method in film and image processing
US8014592B2 (en) 2005-01-27 2011-09-06 Thomson Licensing Edge based CMY automatic picture registration
US8090205B2 (en) 2005-01-27 2012-01-03 Thomson Licensing Method for edge matching in film and image processing
WO2007070051A1 (en) * 2005-12-16 2007-06-21 Thomson Licensing Method, apparatus and system for color component registration
CN101326548B (en) * 2005-12-16 2011-09-07 汤姆森许可贸易公司 Method, apparatus and system for registering color component
US8126290B2 (en) * 2005-12-16 2012-02-28 Thomson Licensing Method, apparatus and system for color component registration
CN103403790A (en) * 2010-12-30 2013-11-20 汤姆逊许可公司 Method of processing a video content allowing the adaptation to several types of display devices
CN103403790B (en) * 2010-12-30 2016-08-31 汤姆逊许可公司 Process method and the video content sink of video content
US10298897B2 (en) 2010-12-30 2019-05-21 Interdigital Madison Patent Holdings Method of processing a video content allowing the adaptation to several types of display devices

Also Published As

Publication number Publication date
GB0007936D0 (en) 2000-05-17

Similar Documents

Publication Publication Date Title
Lin et al. Determining the radiometric response function from a single grayscale image
Boult et al. Correcting chromatic aberrations using image warping.
EP0878970A2 (en) Imager registration error and chromatic aberration measurement system for a video camera
US7544919B2 (en) Focus assist system and method
US6023056A (en) Scene-based autofocus method
US5170441A (en) Apparatus for detecting registration error using the image signal of the same screen
JP6013284B2 (en) Imaging apparatus and imaging method
US6252659B1 (en) Three dimensional measurement apparatus
JP6173065B2 (en) Imaging apparatus, image processing apparatus, imaging method, and image processing method
JPH09116809A (en) Picture steadiness correcting method in television film scanning of prescribed sequence changing method for performing picture object and device for performing the correcting method
US7064793B2 (en) Method and apparatus for measuring the noise contained in a picture
CN101771882A (en) Image processing apparatus and image processing method
US7123300B2 (en) Image processor and method of processing images
CN105444888A (en) Chromatic aberration compensation method of hyperspectral imaging system
GB2360895A (en) Image processor and method of image processing
JP2001245307A (en) Image pickup device
JP7445508B2 (en) Imaging device
EP0176406B1 (en) Device for the correction of uniformity errors induced in signals generated by a television camera by the variations of the scanning speed
JPH02312459A (en) Picture processor
van Zwanenberg et al. Camera system performance derived from natural scenes
WO2013125398A1 (en) Imaging device and focus control method
GB2360896A (en) Image processing apparatus and method of processing images
JP2022036505A (en) Imaging device
JP2024002255A (en) Imaging device
JPH09189609A (en) Color classifying device

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)