EP2377306A1 - Control of optical defects in an image capture system - Google Patents
Control of optical defects in an image capture system
Info
- Publication number
- EP2377306A1 (application EP10706293A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- sensor
- photosensitive elements
- responses
- image capture
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Links
- 230000003287 optical effect Effects 0.000 title claims abstract description 140
- 230000007547 defect Effects 0.000 title claims abstract description 95
- 238000012544 monitoring process Methods 0.000 title abstract 2
- 230000004044 response Effects 0.000 claims abstract description 94
- 238000000034 method Methods 0.000 claims abstract description 28
- 230000009471 action Effects 0.000 claims abstract description 7
- 230000003595 spectral effect Effects 0.000 claims description 23
- 230000002093 peripheral effect Effects 0.000 claims description 12
- 238000005286 illumination Methods 0.000 description 49
- 230000000875 corresponding effect Effects 0.000 description 19
- 238000005259 measurement Methods 0.000 description 16
- 230000006870 function Effects 0.000 description 15
- 238000012545 processing Methods 0.000 description 13
- 238000012512 characterization method Methods 0.000 description 9
- 230000007704 transition Effects 0.000 description 9
- 230000000694 effects Effects 0.000 description 7
- 238000004364 calculation method Methods 0.000 description 6
- 238000012937 correction Methods 0.000 description 6
- 238000001914 filtration Methods 0.000 description 4
- 238000004519 manufacturing process Methods 0.000 description 4
- 230000008859 change Effects 0.000 description 3
- 239000000243 solution Substances 0.000 description 3
- 239000000758 substrate Substances 0.000 description 3
- 239000003086 colorant Substances 0.000 description 2
- 230000001276 controlling effect Effects 0.000 description 2
- 238000001514 detection method Methods 0.000 description 2
- 238000010586 diagram Methods 0.000 description 2
- 229910052745 lead Inorganic materials 0.000 description 2
- 239000004065 semiconductor Substances 0.000 description 2
- 230000035939 shock Effects 0.000 description 2
- 239000007787 solid Substances 0.000 description 2
- 238000001228 spectrum Methods 0.000 description 2
- 238000012360 testing method Methods 0.000 description 2
- 238000013519 translation Methods 0.000 description 2
- 238000004458 analytical method Methods 0.000 description 1
- 238000013459 approach Methods 0.000 description 1
- 230000005540 biological transmission Effects 0.000 description 1
- 239000012482 calibration solution Substances 0.000 description 1
- 230000015556 catabolic process Effects 0.000 description 1
- 230000000295 complement effect Effects 0.000 description 1
- 230000002596 correlated effect Effects 0.000 description 1
- 238000006731 degradation reaction Methods 0.000 description 1
- 230000002939 deleterious effect Effects 0.000 description 1
- 230000006866 deterioration Effects 0.000 description 1
- 239000012212 insulator Substances 0.000 description 1
- 230000007246 mechanism Effects 0.000 description 1
- 239000002184 metal Substances 0.000 description 1
- 229910052751 metal Inorganic materials 0.000 description 1
- 229910044991 metal oxide Inorganic materials 0.000 description 1
- 150000004706 metal oxides Chemical class 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000000737 periodic effect Effects 0.000 description 1
- 230000008569 process Effects 0.000 description 1
- 238000011002 quantification Methods 0.000 description 1
- 238000010079 rubber tapping Methods 0.000 description 1
- 238000002604 ultrasonography Methods 0.000 description 1
- 238000010200 validation analysis Methods 0.000 description 1
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N17/00—Diagnosis, testing or measuring for television systems or their details
- H04N17/002—Diagnosis, testing or measuring for television systems or their details for television cameras
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
- H04N23/81—Camera processing pipelines; Components thereof for suppressing or minimising disturbance in the image signal generation
Definitions
- the present invention relates to the field of digital image capture systems.
- Such an image capture system may for example be a module suitable for use in a digital camera, an SLR camera, a scanner, a fax machine, an endoscope, a camcorder, a surveillance camera, a toy, a camera integrated in or connected to a telephone, personal assistant or computer, a thermal camera, an ultrasound machine, an MRI (magnetic resonance imaging) machine, an X-ray machine, etc.
- Such a system conventionally comprises a sensor including a plurality of photosensitive elements (e.g. pixels) that transform a received amount of light into digital values, and an optical device comprising one or more lenses for focusing light onto the sensor. Together, these two elements are commonly called a “sensor-optical module”.
- the sensor may be for example a CCD (Charge-Coupled Device), a CMOS (Complementary Metal Oxide Semiconductor) sensor, a CID (Charge Injection Device), an IRCCD (Infra-Red CCD), an ICCD (Intensified CCD), an EBCCD (Electron Bombarded CCD), a MIS (Metal Insulator Semiconductor) sensor, a quantum-well sensor, or any other type. It can optionally be associated with a Bayer filter to obtain a color image.
- FIGS. 1A-1C show examples of a first positioning fault due to a tilt defect in a sensor-optical module.
- the sensor-optical module presented in these figures comprises an optical device L and a sensor C, as indicated above.
- a housing B receives the sensor C and has an optical support H (commonly called "holder" in English) for positioning the optical device L relative to the housing B, with a screw thread for example.
- in FIG. 1A, the different elements of the sensor-optical module are correctly mounted, in other words, the sensor C and the optical device L are parallel to each other, which represents a good optical assembly.
- FIG. 1B illustrates a first example of a relative inclination defect between a sensor C and an optical device L. It can be seen here that the sensor C is mounted at an incline in the housing B of this module. This results in an asymmetrical change in the sharpness of the image rendered by the sensor C. This is called an ortho-frontality defect of the sensor, or equivalently “sensor tilt”.
- FIG. 1C illustrates another example of a relative tilt defect between a sensor C and an optical device L.
- here it is the optical holder H that is positioned obliquely, which causes a non-parallelism of the optical device L with respect to the sensor C.
- Such a verticality defect of the optical holder H can be called “holder tilt” and likewise leads to an asymmetrical modification of the sharpness of the image returned by the sensor C.
- a sensor-optical module may exhibit a relative decentration defect, illustrated by FIGS. 2A-2D.
- FIG. 2A shows the sensor-optical module of an image capture system, comprising a circular optical device L, of center O L , projecting the light that it receives onto a sensor C in a circular illumination zone I.
- the sensor C has a central zone Z comprising a certain number of pixels dedicated to image capture, surrounded by a peripheral zone P.
- in FIG. 2A, the optical device L is perfectly centered with respect to the central zone Z, that is to say that the center O z of the central zone Z, situated at the intersection of its diagonals, coincides with the center O I of the illumination zone I. This ensures optimum illumination of the central zone Z and thus a certain luminance homogeneity at the center of the image, illustrated by FIG. 2B.
- FIG. 2B shows a reference image, consisting of a series of regularly spaced points, as received by a sensor-optical module according to FIG. 2A.
- Such an image exhibits an effect called “vignetting” on its edges, these being less bright at the periphery of the illuminated area of the sensor.
- colored vignetting may also appear on the edges of the image; this is due to the vignetting defined above differing from one color plane to another, which results in the appearance of certain colors in some regions of the edge of the image.
- the vignetting, in the case of the module of FIG. 2A, is centered and thus concerns only the photosensitive elements situated at the periphery of the image. Such vignetting can be corrected by digital processing downstream of the sensor-optical module.
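Such a downstream digital correction of a centered vignetting can be sketched as follows. This is a minimal illustration only: the quadratic radial gain law and the value of `alpha` are assumptions for this sketch, not the model used in the patent.

```python
def correct_centered_vignetting(image, alpha=0.3):
    """Correct a centered vignetting by re-applying a radial gain.

    Illustrative model: the gain grows with the squared distance r2 of
    the pixel from the image center, normalised so that the gain at the
    corners is 1 + alpha. `image` is a list of rows of pixel values.
    """
    h, w = len(image), len(image[0])
    cx, cy = (w - 1) / 2.0, (h - 1) / 2.0
    r2_max = cx * cx + cy * cy
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            r2 = (x - cx) ** 2 + (y - cy) ** 2
            gain = 1.0 + alpha * (r2 / r2_max)  # 1.0 at center, 1+alpha at corners
            row.append(image[y][x] * gain)
        out.append(row)
    return out
```

Note that this only works while the vignetting is centered; as discussed below, a decentered vignetting invalidates the fixed radial model.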
- FIG. 2C shows the same sensor-optical module in which, this time, the optical device L is off-center with respect to the central image-capture zone Z of the sensor C. It can be seen that the right part of the central zone Z is this time at the center of the illumination zone I and will therefore be more illuminated than the left part of the central zone Z, which will receive much lower light levels, or even no light at all if the decentering becomes too large.
- FIG. 2D illustrates the consequences of this decentering on a reference image identical to that used in FIG. 2B, but this time received by the off-center sensor-optical module of FIG. 2C.
- the vignetting effect, decentered to the right, constitutes an optical defect that can no longer be corrected by the same digital processing adjusted to correct a centered vignetting as illustrated in FIG. 2A.
- in production, sensor-optical modules may exhibit an offset of the center O L of the optical device with respect to the center O z of the active zone Z of the sensor of up to 200 µm, which can therefore have a negative impact on the correction of the vignetting effect.
- a sensor-optical module may have a defocus defect of the optics relative to the sensor, which constitutes another type of optical defect, illustrated in FIGS. 3A and 3B.
- the sensor C and the optical device L of a module must be separated by a certain distance F, typically a distance allowing the image to focus on the sensor C.
- the illumination circle I then has a radius R 1 .
- the optical assembly step of a module is usually followed by a characterization step of the mounted module, during which it is determined whether the quality of the assembly is acceptable or not.
- the solution generally adopted is to characterize one or more optical defects of a sensor-optical module, and to correct the effect by digital processing, where possible.
- This solution is usually implemented by exposing the sensor-optical module to a reference scene, such as a test pattern, in order to observe the image obtained at the output of this module.
- the reference scene, as well as the shooting conditions, are chosen specifically to test certain properties of the module. They can differ according to the defect that one seeks to characterize, which makes the validation step long and expensive.
- the characterization of a defect can be carried out in several ways. One can take a measurement on the image. One can also compare the image acquired by the module to be characterized to a reference image representing the same scene taken under the same conditions.
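As a hedged illustration of the second approach, a simple comparison measure such as the root-mean-square error between the acquired image and the reference image could be used. The patent does not prescribe a specific measure; RMSE is an assumption chosen here for illustration.

```python
def rmse(image, reference):
    """Root-mean-square error between an acquired image and a reference
    image of the same scene taken under the same conditions.

    Both arguments are lists of rows of pixel values of the same shape.
    A larger value indicates a larger departure from the reference.
    """
    n, acc = 0, 0.0
    for row_a, row_b in zip(image, reference):
        for a, b in zip(row_a, row_b):
            acc += (a - b) ** 2
            n += 1
    return (acc / n) ** 0.5
```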
- This characterization step makes it possible to detect the unusable modules, for example by using a criterion of quality applicable on the image at the output of the module. It also makes it possible to categorize the modules according to the quality of their optical assembly. Finally, it makes it possible to correct the effect of a positioning defect of each module by an individual calibration of the image processing chain associated with the corresponding module. This is called a unit calibration.
- if the sensor-optical module suffers a shock, as when the device falls, which is common with digital cameras for example, the relative positioning of the sensor and the optical device will be disturbed, which will result in a degradation of the quality of the photographs.
- An object of the present invention is to provide an image capture system which does not require a calibration step as described above, but which can self-calibrate.
- the self-calibration of such a system can be useful on the production line, but also after its assembly and outside the assembly plant, especially after a shock, without the need for a external intervention.
- a method of controlling an image capture system comprising a sensor including a plurality of photosensitive elements and an optical device for focusing the light emitted from a scene towards the sensor, the method comprising a step of obtaining respective responses of at least some of the photosensitive elements of the sensor to an exposure of the image capture system to any scene, followed by a step of determining at least one difference between at least one magnitude deduced from the responses obtained and at least one reference magnitude.
- the exposure of the image capture system to any scene makes it possible to dispense with the initial calibration of the prior art, which requires the acquisition, under controlled conditions, and the analysis of a reference scene, such as a test pattern. It also makes it possible to control the image capture system at any time and in any place.
- a control method comprising, in addition to the steps previously described in the paragraph above, a step of estimating an optical defect of the image capture system from said determined difference.
- a step of implementing an action capable of at least partially compensating for the estimated optical defect can also be provided.
- the estimate of the optical defect and/or its compensation can be implemented by the image capture system itself, by means located downstream of this system (for example by a third party to whom said determined deviation or an estimate of the optical defect is supplied), or shared between the image capture system and means downstream of this system.
- the responses obtained comprise the responses of photosensitive elements sensitive to at least one common spectral band. This makes it possible to use spectrally homogeneous responses, without the need for a specific equalizing treatment for each spectral band. It is thus possible to indirectly detect a tilt defect between the sensor and its associated optical device.
- the responses obtained comprise the responses of photosensitive elements sensitive to at least the spectral band of green. This makes it possible to use any type of scene to be able to detect a defect of the sensor-optical module, in addition to offering a more sensitive response.
- the magnitude deduced from the responses obtained comprises a mathematical comparison between at least some of the responses obtained.
- a mathematical comparison makes it possible to dispense with the components of the response related to the content of the image as such, and more clearly highlights the component related to the defect to be detected.
- At least some of the photosensitive elements whose respective responses are obtained are neighbors of first or second order on the sensor. With such proximity, the area of the observed image has a very high homogeneity, which will eliminate the components of the response related to the content of the image, whatever it is.
- the responses obtained comprise the responses of a plurality of pairs of photosensitive elements; for each of said pairs of photosensitive elements, a difference is determined between a magnitude deduced from the responses of the photosensitive elements belonging to said pair and a reference magnitude.
- this plurality of pairs of photosensitive elements is positioned in a selected region of the sensor.
- a region of the sensor receiving a portion of the image not subject to high frequency variations can be chosen, which will give a more reliable fault determination.
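As an illustrative sketch (not a method prescribed by the patent), such a region could be chosen by minimizing the local variance of the responses, a simple proxy for the absence of high-frequency variations:

```python
def select_homogeneous_region(image, size):
    """Pick a size x size sensor region not subject to high-frequency
    variations, by minimizing the local variance of the responses.

    `image` is a list of rows of responses. Returns the (top, left)
    corner of the chosen region. The variance criterion is an
    illustrative assumption.
    """
    h, w = len(image), len(image[0])
    best, best_var = (0, 0), float("inf")
    for top in range(h - size + 1):
        for left in range(w - size + 1):
            vals = [image[top + j][left + i]
                    for j in range(size) for i in range(size)]
            mean = sum(vals) / len(vals)
            var = sum((v - mean) ** 2 for v in vals) / len(vals)
            if var < best_var:
                best, best_var = (top, left), var
    return best
```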
- the responses obtained comprise the responses of photosensitive elements located at the periphery of the sensor.
- Such a configuration makes it possible to detect, for example, an optical decentering defect, or information on the defocusing state of the lens.
- the sensor comprises a central image-capturing zone and a peripheral zone not participating in the image capture, and said photosensitive elements situated at the periphery of the sensor belong to said peripheral zone. This makes it possible to detect an optical positioning fault before it has an impact on the central zone of the sensor.
- the responses obtained comprise the responses of at least two photosensitive elements positioned on a first axis crossing the central image-capture zone, on either side of this central zone. This gives an indication of the direction and the sense of the decentering.
- the responses obtained furthermore comprise the responses of at least two other photosensitive elements positioned on a second axis, crossing the central image-capture zone and substantially orthogonal to the first axis, on either side of the central zone. This makes it possible to characterize an optical defect such as a decentering in the two dimensions of the sensor.
- the responses obtained comprise the responses of at least a first plurality of photosensitive elements, positioned on a first axis crossing the central image-capturing zone, belonging to a first secondary region of the sensor and separated consecutively from one another by a determined distance, and a second plurality of photosensitive elements, positioned on a second axis crossing the central image-capturing zone and substantially orthogonal to said first axis, belonging to a second secondary region of the sensor distinct from said first secondary region and separated consecutively from one another by a determined distance.
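A minimal sketch of how peripheral responses on two orthogonal axes could indicate the direction and sense of a decentering. The sign convention and the normalisation by a reference response are assumptions for illustration, not the patent's formula:

```python
def estimate_decentering(i_left, i_right, i_top, i_bottom, i_ref):
    """Estimate the sign and relative amplitude of a decentering defect.

    i_left / i_right are mean responses of peripheral elements on either
    side of the central zone along a first axis; i_top / i_bottom along
    a substantially orthogonal second axis. i_ref is the response
    expected for a centered optic. A positive dx means the illumination
    circle is shifted towards the 'right' elements (illustrative
    convention only).
    """
    dx = (i_right - i_left) / i_ref
    dy = (i_bottom - i_top) / i_ref
    return dx, dy
```

With a centered optic the opposing responses are equal and both components are zero; any imbalance yields a signed two-dimensional estimate.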
- the present invention also provides an image capture system comprising means for implementing the method above.
- the image capture system includes:
- a sensor including a plurality of photosensitive elements
- an optical device for focusing onto the sensor the light emitted from a scene; and means for determining at least one difference between at least one magnitude, deduced from respective responses of at least some of the photosensitive elements of the sensor to an exposure of the image capture system to any scene, and at least one reference magnitude.
- This system also advantageously comprises a means for estimating an optical defect of the image capture system from said determined deviation, and possibly a means of at least partial compensation of the estimated optical defect.
- the present invention also relates to a digital camera comprising an image capture system as above.
- FIGS. 1A-1C, already commented on, illustrate an optical tilt defect in a sensor-optical module;
- FIG. 4 is a block diagram showing an image capture system according to an exemplary embodiment of the present invention.
- FIG. 5 is a flowchart illustrating a method of controlling an image capture system according to an exemplary embodiment of the present invention
- FIGS. 6A and 6B illustrate a first embodiment of the invention for detecting an optical inclination defect of the module
- FIG. 7 illustrates the concept of first, second and third order neighborhoods for photosensitive elements of a sensor
- FIG. 8 shows a particular example of a sensor-optical module comprising a sensor with a so-called "Bayer" colored filter
- FIG. 9 illustrates the notion of image field in a conventional rectangular sensor
- FIG. 10A shows a characterization curve of the mean ray angle of an optic as a function of the position in the image field;
- FIG. 10B shows a curve of the intensity difference received between a Gr element and a Gb element as a function of the angle of incidence of the light rays, for photosensitive elements Gr and Gb positioned at 60% of the image field;
- FIG. 10C shows a characterization curve of the difference in intensity received between Gr and Gb elements of a Bayer filter across the image field of a sensor, along its X axis;
- FIGS. 11A-11C show a first example of a sensor of an image capture system according to a second embodiment of the invention.
- FIG. 12 shows a second example of a sensor of an image capture system according to a second embodiment of the invention
- FIG. 13 shows a third example of a sensor of an image capture system according to a second embodiment of the invention
- FIG. 14 shows a fourth example of a sensor of an image capture system according to a second embodiment of the invention.
- FIGS. 15A-15C show a fifth example of a sensor of an image capture system according to a second embodiment of the invention.
- Fig. 4 is a block diagram showing an exemplary image capture system according to a possible embodiment of the present invention.
- the image capture system 1 receives light from any scene S to be captured.
- the system 1 comprises an optical device L and a sensor C, the optical device L serving to focus the light emitted from the scene S onto the sensor C.
- the optical device L and the sensor C form what is commonly called a sensor-optical module.
- the sensor C comprises a plurality of photosensitive elements (for example pixels); each photosensitive element, in response to the quantity of light it receives, delivers an electrical intensity that can be converted into a numerical value.
- the sensor C transforms the light received from the optical device L into a series of digital values corresponding to an image in digital form. This raw digital image may be affected by certain optical defects, including those presented above.
- the system 1 of the present invention further comprises a means for determining at least one deviation DET.
- This determination means DET, which may for example take the form of a calculation module within a processor, receives the respective responses of certain photosensitive elements of the sensor C to an exposure of the image capture system to the scene S, as explained below, and deduces at least one magnitude G from these responses. In some cases, this magnitude G may be representative of the relative positioning state of the optical device L with respect to the sensor C.
- the determination means DET also has access to at least one reference magnitude Gref.
- This magnitude Gref corresponds for example to a situation where the optical device L and the sensor C are correctly positioned. It serves as a standard measure to which the magnitude G is subsequently compared.
- Such a reference magnitude Gref can also be defined, inter alia, by an initial characterization of the sensor C under different conditions, for example under several illumination angles. Such a characterization is performed only once for a given type of sensor, and not systematically for each mounting of an optical device with a sensor. A magnitude G obtained subsequently, during the current use of the system 1, can then be calculated from any shot, without requiring the use of a special scene.
- a difference ⁇ between the magnitude G and the reference variable Gref is then calculated by the determination means DET.
- This difference ⁇ gives, for example, an indication of the positioning state of the optical device L with respect to the sensor C.
- This difference ⁇ can for example be proportional to the difference G - Gref between these two magnitudes, or to the ratio G/Gref between them.
- This difference ⁇ can also take any other form allowing the mathematical comparison of the two quantities G and Gref.
- This deviation can finally take the form of an index in a correspondence table between measured data and predetermined reference data.
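The possible forms of the deviation mentioned above can be sketched as follows; the function names, the thresholds, and the shape of the correspondence table are hypothetical, chosen only to illustrate the three variants:

```python
def deviation(g: float, g_ref: float, mode: str = "difference") -> float:
    """Deviation between a measured magnitude G and a reference Gref.

    Two of the forms mentioned above: the difference G - Gref or the
    ratio G / Gref.
    """
    if mode == "difference":
        return g - g_ref
    if mode == "ratio":
        return g / g_ref
    raise ValueError("unknown mode")

def table_index(delta: float, thresholds: list) -> int:
    """Third form: map a deviation onto an index in a correspondence
    table. E.g. thresholds [0.1, 0.5] give index 0 (negligible),
    1 (mild defect) or 2 (severe defect)."""
    for i, t in enumerate(thresholds):
        if abs(delta) < t:
            return i
    return len(thresholds)
```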
- with the system 1 it is possible to detect an optical defect of the sensor-optical module. From this detection, the system 1 can advantageously be recalibrated, at a repair shop or at the factory, for example.
- the detection of an optical defect of the sensor-optical module may be used for other purposes than a correction of this defect.
- it can be used as a diagnostic for the module without any subsequent correction.
- a selection of one or more sensor-optical modules can also be performed taking into account their respective optical defects.
- Other control mechanisms are also conceivable as will be apparent to those skilled in the art.
- the system 1 further comprises means DEF for estimating an optical defect and possibly compensation means COMP for the estimated optical defect.
- the means DEF receives the difference ⁇ determined by the determination means DET, and estimates as a function of this difference ⁇ the type of optical defect incriminated as well as its amplitude.
- the means DEF then sends this information to the compensation means COMP.
- This compensation means COMP also receives from the sensor C the raw digital image affected by the optical defects. Taking into account the information from the estimation means DEF, the compensation means COMP can compensate for the determined optical defect, either totally or partially.
- the compensation can be done without human intervention, for example at periodic time intervals, or following the appearance of certain events, such as shocks suffered by the system 1.
- Such self-calibration is therefore much more flexible than the calibration of the prior art mentioned in the introduction.
- the compensation in question can take various forms, in particular depending on the fault detected. It may comprise mechanical actions, for example a change of inclination of the sensor and/or the optical device to reduce or eliminate a relative inclination defect between these elements, a translation of the sensor and/or the optical device in a plane substantially parallel to the sensor to reduce or eliminate a relative decentering defect between these elements, a translation of the sensor and/or the optical device in a direction substantially orthogonal to the sensor to reduce or eliminate a defocus defect of the optics relative to the sensor, or others.
- These mechanical actions are for example carried out using mechanical means, possibly controlled electronically.
- the abovementioned compensation may comprise an appropriate digital processing.
- This digital processing can be implemented by the image capture system 1 itself, by digital processing means located downstream of this system, or shared between the image capture system 1 and the digital processing means downstream.
- a decentering defect of the optics relative to the sensor can modify the properties of the vignetting phenomenon (illustrated in FIG. 2C, commented on above).
- the "original" vignetting phenomenon (i.e. regardless of decentering) is usually corrected numerically either on the capture system or by a specific downstream device.
- the digital correction may for example be based on a vignetting correction model that is a function of, among other things, the coordinates in the image of the pixel to be processed, the sensor-optics pair used, etc.
- the estimated decentering (dx, dy) can then be supplied as an additional parameter of this correction model.
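As a minimal sketch, assuming a simple quadratic fall-off law (not the patent's actual model), a correction gain parameterized by the pixel coordinates and a decentering (dx, dy) could be evaluated around the shifted optical center:

```python
def vignetting_gain(x, y, width, height, dx=0.0, dy=0.0, alpha=0.3):
    """Vignetting-correction gain for one pixel (illustrative model).

    The model is a function of the pixel coordinates (x, y) in the
    image and of the estimated decentering (dx, dy) of the optical
    center: the radial fall-off is simply evaluated around the shifted
    center. The quadratic law and alpha are assumptions.
    """
    cx = (width - 1) / 2.0 + dx   # optical center shifted by the decentering
    cy = (height - 1) / 2.0 + dy
    r2_max = (width / 2.0) ** 2 + (height / 2.0) ** 2
    r2 = (x - cx) ** 2 + (y - cy) ** 2
    return 1.0 + alpha * (r2 / r2_max)
```

With dx = dy = 0 this reduces to the centered correction; a non-zero decentering moves the unity-gain point with the illumination circle.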
- a defocusing defect (as illustrated in FIGS. 3A and 3B) generates a blur in the image, since the focusing distance is not optimal.
- the knowledge of the characteristics of the optics makes it possible to quantify the level of blur as a function of the defocusing.
- the blur can then be reduced by a deblurring technique such as deconvolution or another method.
- the compensation performed by the compensation means COMP of the system 1 may be partial, limited to modifying the received image to bring it back to an image having a degree of defect that can then be corrected by said downstream digital processing means. It may also consist in changing the parameters of the model used in the processing means located downstream, without affecting the image.
- FIG. 5 is a flowchart illustrating a method of controlling an image capture system according to an exemplary embodiment of the present invention, as described for example in FIG. 4.
- during a first step 100, the determination means DET obtains respective responses from certain photosensitive elements of the sensor C of the image capture system 1. These photosensitive elements are at least two in number.
- the determination means DET determines, during a second step 200, the difference ⁇ between a quantity G, deduced from the responses obtained during step 100, and a reference quantity Gref, determined as explained above.
- This gap makes it possible to detect a possible optical defect of the capture system, such as for example a bad positioning in the sensor-optical module.
- the method further comprises a step 300, during which the estimation means DEF estimates the optical defect of the capture system 1 from the difference ⁇ determined during the second step 200.
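The three steps 100, 200 and 300 can be sketched as follows; the averaging used to deduce G and the threshold used by the estimation step are illustrative assumptions, not the patent's formulas:

```python
def control_method(responses, g_ref, threshold=0.2):
    """Sketch of the control method of FIG. 5 (names hypothetical).

    Step 100: obtain the responses of at least two photosensitive
    elements. Step 200: deduce a magnitude G and its deviation from the
    reference Gref. Step 300: estimate whether an optical defect is
    present from the deviation.
    """
    assert len(responses) >= 2           # step 100: at least two elements
    g = sum(responses) / len(responses)  # step 200: deduce the magnitude G
    delta = g - g_ref                    #           deviation from Gref
    defect = abs(delta) > threshold      # step 300: estimate the defect
    return delta, defect
```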
- FIGS. 6A and 6B illustrate the sensor-optical module of an image capture system according to a first embodiment of the invention, for detecting for example an optical defect related to a relative inclination between the sensor and the optical device, as shown above in FIGS. 1A-1C.
- in FIGS. 6A and 6B are represented the optical device L and the sensor C of the image capture system, without the housing or the optical holder that the module may possibly comprise, these elements not being essential to the understanding of the present embodiment. Also represented is the means DET for determining a difference, receiving the responses of certain elements of the sensor C.
- the sensor C comprises a zone Z in which a plurality of photosensitive elements Z 1 , Z 2 , ..., Z n are located.
- Each of these photosensitive elements is sensitive to a particular spectral band. Of all these photosensitive elements, some may be sensitive to spectral bands of which at least part is common. It is also possible to have, among the photosensitive elements of the sensor C, identical photosensitive elements and therefore sensitive to the same spectral band.
- the sensor C has, among the plurality of photosensitive elements Z 1 , Z 2 ,..., Z n , at least two photosensitive elements E 1 and E 1 ', situated at different locations on the sensor and sensitive to at least one common spectral band. Due to their different positions, the light reaching these elements comes from different angles, and consequently their respective responses, in terms of intensities I(E 1 ) and I(E 1 ') representative of the amount of light received respectively by each of these two elements, will be different.
- This difference in intensity response can be expressed as a parameter G, corresponding to the magnitude presented above, which is a function of the intensity responses I(E 1 ) and I(E 1 ').
- when the sensor-optical module is optimally positioned in terms of parallelism, the parameter G calculated according to one of the preceding formulas takes a reference value Gref.
- the reference parameter Gref can also be calculated from response measurements made under initial illumination of the sensor alone, under certain particular conditions, for example under particular lighting angles. Such a reference value Gref can then be stored in the determination means DET, for example.
- the elements E 1 and E 1 ', whose responses are used by the determination means DET, can be chosen from any of the photosensitive elements Z i of the sensor C, as long as they are sensitive to at least one common spectral band.
- Their intensity responses will then have a high probability of being substantially homogeneous in terms of image content, and can be compared directly, without the need for an equalizing treatment between different spectral bands whose spectral responses are more or less sensitive.
- Other functions can be used insofar as they are indicative of a difference in the luminous intensity received by these two elements. Such a mathematical comparison makes it possible to eliminate the intensity component common to the two elements E1 and E1', corresponding to the content of the image acquired in the common spectral band; it is then easier to distinguish the intensity component related to the angle of inclination θ.
- The two elements E1 and E1' whose responses are used by the means DET are advantageously chosen in close proximity to one another. They are, for example, either first-order neighbors, that is to say adjacent to each other, or second-order neighbors, that is to say not adjacent to one another but with at least one other photosensitive element to which they are both adjacent, without this example being limiting.
- This more or less close neighborhood concept is illustrated in Figure 7.
- FIG. 7 shows a conventional sensor C, viewed from above, and comprising a plurality of photosensitive elements.
- the photosensitive elements of a sensor are usually arranged in a grid in two directions.
- For a given photosensitive element V0, its first-order neighboring elements are those which are adjacent to V0 in one of the two grid directions.
- These first-order neighboring elements are the elements V 1 of FIG. 7.
- The second-order neighboring elements V2 of V0 are the first-order neighboring elements of the first-order neighboring elements V1 of the element V0, this element V0 itself being excluded.
- The third-order neighboring elements V3 of V0 are the first-order neighboring elements of the second-order neighboring elements V2 of the element V0, excluding the elements V1, and so on.
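The recursive neighborhood definition above can be sketched as follows, assuming a 4-connected grid (elements adjacent "in one of the two grid directions"); the function name `neighbors_of_order` is hypothetical.

```python
def neighbors_of_order(v0, k):
    """Return the set of k-th order neighbors of grid element v0.

    Order-1 neighbors are adjacent in one of the two grid directions;
    order-k neighbors are the order-1 neighbors of the order-(k-1)
    neighbors, excluding all lower-order neighbors and v0 itself.
    """
    def adjacent(p):
        x, y = p
        return {(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)}

    seen = {v0}
    frontier = {v0}
    for _ in range(k):
        # expand one order, discarding elements of strictly lower order
        frontier = {q for p in frontier for q in adjacent(p)} - seen
        seen |= frontier
    return frontier
```

With this definition, the order-k neighborhood of V0 is the set of elements at Manhattan distance k, matching the rings V1, V2, V3 of FIG. 7.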
- This advantageous characteristic makes it possible to obtain a more reliable measurement, insofar as the acquired image may have more or less dark areas at different locations.
- Comparing the intensity received by a first element located in a dark zone with the intensity received by a second element located in a light zone would distort the measurement of the angle of inclination θ. If the two elements are chosen in a region of small size, they will receive relatively homogeneous light information, and the comparison of their respective intensities will therefore more effectively eliminate the intensity component that is useless here, bringing out more clearly the component related to the angle θ.
- the invention is not limited to using the response of neighboring elements of first or second order.
- The reasoning made above with only two photosensitive elements E1 and E1' can be applied to a plurality of pairs of photosensitive elements (E1, E1'), advantageously sensitive to the same common spectral band.
- Increasing the number of elements used to provide a response makes it possible to avoid transition zones of the image, in which the process would no longer work with only two elements if those elements were located on either side of such a transition zone. It also further reduces the effect of noise on the measurement.
- A global deviation ΔG may possibly be calculated from these particular deviations Δi.
- This global deviation ΔG can be, for example, the average of all the particular deviations Δi, or their median value, or any value globally characterizing all of these particular deviations Δi.
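A minimal sketch of this aggregation of particular deviations Δi into a global deviation ΔG, assuming each pair's magnitude G is computed as a plain intensity difference (one of the comparison functions envisaged above; the function name `global_gap` is hypothetical):

```python
from statistics import mean, median

def global_gap(pairs, gref, aggregate="mean"):
    """Particular deviations Δi = Gi - Gref for each pair of intensity
    responses (I(Ei), I(Ei')), aggregated into a global deviation ΔG.

    Gi is computed here as a plain difference of intensities (an
    assumption); 'mean' and 'median' are the aggregations the text cites.
    """
    deltas = [(i1 - i2) - gref for (i1, i2) in pairs]
    agg = mean if aggregate == "mean" else median
    return agg(deltas), deltas
```

Either the global ΔG or all or part of the individual Δi can then serve as the basis for a subsequent action, as the text indicates.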
- A possible action after estimating an optical defect of the image capture system may be based on the global deviation ΔG. As a variant, it can be based on all or part of the particular deviations Δi.
- Calculating a global deviation ΔG in the manner indicated above is particularly advantageous in the case where the different pairs of photosensitive elements (E1, E1') whose responses are used are substantially close to one another. Since the reference quantity changes as a function of the image field, it can also be envisaged, in the case where the different pairs of photosensitive elements (E1, E1') whose responses are used are not close to each other, to calculate particular angles θi for each of the pairs and to derive from them a global inclination angle θG, corresponding for example to the average of the particular angles θi.
- This example with two pairs of elements is of course not limiting.
- When the determination means DET uses the responses of a plurality of pairs (E1, E1'), these pairs may advantageously belong to a selected region E of the sensor C.
- This region E, which represents a sub-part of the surface of the sensor, can be chosen so as to receive a homogeneous area of the image, excluding high-frequency areas, synonymous with transitions and therefore with potential measurement errors.
- Such a zone can be determined by methods known to those skilled in the art, for example using the noise curves of the sensor, or using information from other channels, located in spectral bands other than that of the photosensitive elements used.
- Figure 8 illustrates in part an image capture system having an exemplary sensor including a popular example of a color filter, called a Bayer filter, commonly used in today's digital image capture apparatus.
- The sensor C is formed by the superposition of a filter FIL and a substrate SUB sensitive to light.
- The light coming from a source S and passing through an optical device L as described above illuminates the color filter FIL.
- Such an arrangement makes it possible to break down the light into different components according to a defined pattern, for example a grid, which allows a more suitable processing and transmission of information.
- The filter FIL breaks down the filtered light into its three components: green, blue and red. The substrate will thus receive, at different points, intensities corresponding to these different components, in order to use them to restore the image later.
- The so-called "Bayer" filter consists of the repetition, in two dimensions, of a basic pattern Mb of 2x2 filtering elements: a blue filtering element B, a red filtering element R, and two green filtering elements Gr and Gb. Since the spectral band of green is the central band of the light spectrum, this band will generally contain more information than the others; moreover, as the human eye is more sensitive to this band, the choice of having two elements Gr and Gb detecting green has been made for this type of filter.
- A typical defect of this kind of filter is what is called "cross-talk". It manifests itself in the fact that, when photons arrive on a photosensitive element, they are partially deflected towards other nearby photosensitive elements.
- The present invention uses such a normally deleterious phenomenon to advantage, in order to better detect an optical defect.
- The comparison of the intensity responses I(E1) and I(E1'), as explained above, will here be applied to the intensity responses of two elements Gr and Gb of the same basic pattern Mb of a Bayer filter. As these two elements are close, they undergo the cross-talk phenomenon and their responses therefore contain a correlated information component, which can be all the more easily suppressed by the comparison of their two intensities; this will improve the observation of the angle of inclination θ.
- The magnitude G as defined above will therefore be derived from the comparison of the intensity I(Gr) received by an element Gr and the intensity I(Gb) received by an element Gb of the same basic pattern. Again, this comparison can be made by difference, by ratio or by deviation from the mean between the two respective intensity values, among others. In the present example, the difference, expressed in percentages, is the measure used for the comparison between these two values I(Gr) and I(Gb).
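As a sketch of the Gr/Gb comparison described here, the normalization by the mean of the two intensities is an assumption; the text only states that the difference is expressed in percentages.

```python
def gr_gb_disparity_percent(i_gr, i_gb):
    """Difference between the intensities I(Gr) and I(Gb) of one Bayer
    basic pattern, expressed as a percentage.

    The difference is normalized here by the mean of the two intensities,
    one plausible normalization among others.
    """
    mean_intensity = (i_gr + i_gb) / 2.0
    return 100.0 * (i_gr - i_gb) / mean_intensity
```

Applied across the image field, such percentage values give curves of the kind described in connection with FIGS. 10B and 10C.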
- The sensor shown in the example of FIG. 9 has a rectangular "4:3" shape, that is to say that its image width h is equivalent to 4/3 times its image height v.
- The term "image field" serves to indicate the distance from the center of the sensor of FIG. 9.
- A position at 100% of the image field corresponds to a position on one of the vertices of the rectangular sensor, which corresponds to a maximum distance d relative to the center, equal to the half-diagonal of the rectangle.
- FIG. 10A shows a curve representing the chief ray angle ("CRA") characteristic of an optical system, as a function of the position in the image field.
- FIG. 10B shows a characterization curve of the difference between the responses of photosensitive elements Gr and Gb, expressed in percentages, for a given sensor, as a function of the angle of incidence of the rays thereon, for a given position in the image field.
- the given position corresponds to 60% of the field on the X axis of the sensor. This curve does not depend on the optical device associated with the sensor.
- FIG. 10C shows a record of the difference, in percentages, between the intensities received by photosensitive elements Gr and Gb belonging to the same basic pattern, as a function of the position along a horizontal axis X of a rectangular sensor.
- the abscissas are expressed in image field, as explained above.
- Such an estimate of the difference between the intensities received by the photosensitive elements Gr and Gb of a basic pattern can be made from an image, from a zone of the image or from a set of areas of the image.
- Such an estimate of the difference between the intensities of the elements Gr and Gb is also possible on a video stream, or on a sub-sampled version of the image.
- The estimation can be done with prior knowledge of the variation model of the cross-talk between elements Gr and Gb, as illustrated in FIG. 10C.
- The measurements made on the image are thus used to fit a parametric model, which makes it possible to reduce the errors due to the measurements and to build more advanced applications based on these estimates.
- Photosensitive elements sensitive to the green color were used here to measure the inclination defect, because the Bayer filter has the particularity of having a microstructure with two offset green elements.
- The use of this particular spectral band is advantageous because, on the one hand, the spectral response in this band is more sensitive, in particular that of the human eye.
- On the other hand, this spectral band being in the middle of the optical spectrum, most images will have components in this band, and in any case many more than in the other, red and blue, spectral bands.
- any image can be used here to detect a relative tilt angle, without the need to make a specific choice of a certain type of image to make the measurement.
- the invention is not limited to this example, and any elements sensitive to another color can be used.
- An example of a sensor-optical module of an image capture system making it possible to detect a decentering or defocusing defect of this module will now be presented.
- FIG 11A shows a top view of a sensor C belonging to a sensor-optical module similar to those presented above.
- Such a sensor advantageously has a central image-capture zone Z, comprising a certain number of photosensitive elements Zi dedicated to the capture of the incident photons in order to restore an image, surrounded by a peripheral zone P which does not include pixels specifically dedicated to image capture and therefore does not participate in it.
- the central zone Z is described in this example as being rectangular, with a center O z located at the intersection of its diagonals, that is to say at the intersection of its two orthogonal axes of symmetry X and Y.
- any other form of image capture area may be considered here, such as a circular shape for example.
- The optical device L, associated with the sensor C in the image capture system of the present invention and not shown in this top view, will illuminate the sensor C with the light coming from the source S, according to an illumination zone whose shape will depend on the shape of the device L itself.
- The optical device L is of circular shape and will therefore produce an illumination circle Iref on the sensor C, the light intensity received outside this illumination circle Iref being almost zero.
- the size of this circle will depend on the distance separating the optical device L and the sensor C, as illustrated in FIGS. 3A and 3B above.
- FIG. 11A represents the case where the optical device L is centered on the sensor C, that is to say the case where the center of the illumination circle Iref and the center Oz of the central zone Z coincide, and where an optimal focusing distance F separates the optical device L and the sensor C.
- The illumination circle Iref then has a radius R.
- The sensor C has a certain number of photosensitive elements Pi located in the peripheral zone P of the sensor C, whose responses will make it possible to detect a decentering or defocusing defect as illustrated respectively in FIGS. 2A-2D and 3A-3B above.
- These photosensitive elements Pi may for example have a binary digital response, that is to say provide a "0" response when the received light intensity is less than a first threshold and provide a "1" response when the received light intensity is greater than a second threshold, itself greater than the first.
- the invention is not limited to this type of photosensitive element, and any type of element making it possible to distinguish a level of high light intensity from a level of low light intensity can be used here.
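The two-threshold binary response can be sketched as follows; the behaviour between the two thresholds is not specified by the text, so keeping the previous state (a hysteresis) is an assumption, as is the function name.

```python
def binary_response(intensity, low, high, previous=0):
    """Binary digital response of a peripheral photosensitive element.

    Returns 0 below the first (low) threshold and 1 above the second
    (high) threshold; between the two thresholds the element keeps its
    previous state, an assumed hysteresis behaviour.
    """
    if intensity < low:
        return 0
    if intensity > high:
        return 1
    return previous
```

The two separated thresholds make the 0/1 response robust against noise near a single decision level.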
- The sensor C has a first photosensitive element P1 located inside the reference illumination circle Iref, and a second photosensitive element P2 located outside the reference illumination circle Iref.
- The reference quantity may for example be defined as Gref = I(P1) + I(P2), which here takes the value "1" with the binary responses described above, since only P1 is illuminated.
- FIG. 11B illustrates two cases in which the optical device L is off-center with respect to the sensor C in the direction of its axis X.
- In a first case, where the optical device L is shifted to the right of the sensor C, it will project an illumination circle I1 of center OI1 on the sensor C.
- The two elements P1 and P2 then both belong to the illumination circle I1, and the magnitude G, determined from their intensity responses identically to the reference quantity Gref, will have a value equal to 2. Determining a deviation Δ between this magnitude G and the reference quantity Gref, by a calculation as described above, the fact that this deviation Δ is substantial will be indicative of a decentering defect. If, for example, the deviation Δ corresponds to the difference between G and Gref, Δ is no longer zero but here takes the value "1".
- In the second case, where the optical device L is shifted to the left of the sensor C, the two elements P1 and P2 no longer belong to the illumination circle I2, and the magnitude G, still determined identically to the reference quantity Gref, will have a value of zero.
- FIG. 11C illustrates two other cases where the optical device L is this time defocused with respect to the sensor C.
- In a first case, where the optical device L moves away from the sensor C, the illumination circle I3 projected on the sensor C will be larger than the reference illumination circle Iref.
- The two elements P1 and P2 then both belong to the illumination circle I3, similarly to the case of the decentering of the optical device L to the right of the sensor C presented previously in FIG. 11B, and the magnitude G, determined similarly to the previous cases, will have a value of "2".
- In a second case, where the optical device L approaches the sensor C, the illumination circle I4 projected on the sensor C will this time be smaller than the reference illumination circle Iref. The two elements P1 and P2 then no longer belong to the illumination circle I4, similarly to the case of the decentering of the optical device L to the left of the sensor C presented previously in FIG. 11B, and the magnitude G, determined from their intensity responses identically to the reference quantity Gref, will have a value of zero.
- an optical defect of decentering or defocusing type can therefore be detected, without being necessarily distinguishable.
- the following examples of sensor advantageously make it possible to distinguish the type of optical defect incriminated.
- A second example of a sensor C is shown from above in FIG. 12.
- This sensor C is similar to the one shown in FIGS. 11A-11C, with the difference that it has two photosensitive elements P3 and P4, located in the peripheral zone P, for example inside the illumination circle Iref, and on a Y axis passing through the center Oz of the central rectangular zone Z, advantageously at an equal distance from this center Oz, in order to obtain more useful information.
- This axis Y may for example be an axis of symmetry of the zone Z.
- the reference variable Gref may correspond to the sum of the responses of the two elements P 3 and P 4 for example, which gives here a value of "2".
- If the optical device L is decentered along the Y axis, downwards for example, the element P3 will no longer be illuminated while the element P4 will remain illuminated.
- the magnitude G will then have a value of "1".
- If the optical device L approaches the sensor C, the illumination circle will shrink to the point where the two elements P3 and P4 will no longer be illuminated.
- the magnitude G will then take a value of zero.
- With elements P3 and P4 located outside the illumination circle Iref, it is possible to detect in the same way a moving away of the optical device L with respect to the sensor C.
- The determination of the deviation Δ between the magnitude G obtained and the reference quantity Gref then makes it possible to distinguish the type of optical defect involved.
- If the deviation Δ corresponds to the difference between G and Gref, then this deviation Δ will be "1" in absolute value when there is a decentering, and "2" in absolute value when the optical device L and the sensor C move closer together.
- the difference ⁇ is therefore indicative of the type of fault.
- A shift along the Y axis of the zone Z may thus be detected and discriminated from a defect related to the optical device L being too close to the sensor C.
- A third example, illustrated in FIG. 13, consists of extrapolating the example of FIG. 12 by furthermore using the responses of two other photosensitive elements P5 and P6, still located in the peripheral zone P, inside the illumination circle Iref and on the axis X of symmetry of the central rectangular zone Z, equidistant from its center Oz.
- The reference quantity Gref can then correspond to a pair of relative reference quantities (Gref1, Gref2), respectively corresponding to the sum of the responses of the elements P3 and P4 and to the sum of the responses of the elements P5 and P6, for example.
- the reference variable Gref will here take the value (2,2).
- If the magnitude G takes the value (1, 2), the deviation Δ calculated as (G1 - Gref1, G2 - Gref2), taken in absolute value, then takes the value (1, 0), which indicates a shift along the Y axis. If the magnitude G takes the value (2, 1), the deviation Δ then takes the value (0, 1), which indicates a shift along the X axis. Finally, if the magnitude G takes the value (0, 0), the deviation Δ then takes the value (2, 2), which indicates a focusing defect due to the optical device L being too close to the sensor C. In this third example of FIG. 13, a shift along the Y axis or the X axis of the zone Z can thus be detected and discriminated from a defect related to the optical device L being too close to the sensor C.
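The discrimination logic of this third example can be sketched as follows, using the reference pair Gref = (2, 2); the function name `classify_defect` and the sign convention Δ = Gref - G are illustrative assumptions.

```python
def classify_defect(g, gref=(2, 2)):
    """Discriminate the defect type from the pair of relative magnitudes
    (G1, G2) measured along the Y and X axes of the sensor.

    A deviation on a single axis indicates a decentering along that axis;
    a deviation on both axes simultaneously indicates a defocus.
    """
    d1, d2 = gref[0] - g[0], gref[1] - g[1]
    if (d1, d2) == (0, 0):
        return "no defect"
    if d1 and d2:
        return "defocus"  # both axes affected at once
    return "decentering along Y" if d1 else "decentering along X"
```

As in the text, this configuration alone cannot tell whether a defocus comes from the device moving closer or farther; the fourth example adds elements outside the illumination circle to resolve that ambiguity.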
- A fourth example, illustrated in FIG. 14, consists in extrapolating the example of FIG. 13 by furthermore using the responses of four other photosensitive elements P3', P4', P5' and P6', situated in the peripheral zone P, outside the illumination circle Iref, on the X and Y axes of the central rectangular zone Z, equidistant from its center Oz.
- The reference quantity Gref can then correspond to a series of four relative reference quantities Grefi, one for each pair of elements (Pi, Pi'), corresponding for example to the sum of the responses of the associated elements Pi and Pi'.
- the global reference variable Gref will here take the value (1, 1, 1, 1).
- If the overall magnitude G takes the value (0, 0, 0, 0), that is to say if none of the elements is illuminated, the deviation Δ then takes the value (-1, -1, -1, -1), thus indicating a focusing defect due to the optical device L being too close to the sensor C.
- If the overall magnitude G takes the value (2, 2, 2, 2), that is to say if all the elements are illuminated, the deviation Δ then takes the value (1, 1, 1, 1), thus indicating a focusing defect due this time to the optical device L moving away from the sensor C.
- With the sensors of FIGS. 12 to 14, it is thus possible to detect and discriminate an optical defect of the decentering or defocusing type.
- the fifth example below also makes it possible to estimate the amplitude of the detected fault, in order to possibly perform a compensating action.
- FIG. 15A shows a top view of a sensor C similar to those shown in FIGS. 12-14 above.
- Such a sensor has secondary regions Pa, Pb, Pc and Pd, belonging to the peripheral region P, in which are respectively located a number of photosensitive elements Pai, Pbi, Pci and Pdi, on the X or Y axes of symmetry of the central zone Z.
- each secondary region comprises four photosensitive elements, but the present invention is not limited to such a number.
- the definition of a different number of secondary regions is also conceivable. By way of example, only the secondary regions Pa and Pb could be exploited.
- The four photosensitive elements Pai are consecutively spaced apart from each other by a determined distance. This spacing distance may be used in all secondary regions, or each region may have its own spacing distance between the photosensitive elements.
- The sensor of FIG. 15A is illuminated by an illumination circle Iref identical to that of FIG. 11A, that is to say corresponding to the optimal positioning of the optical device L with respect to the sensor C in terms of focusing and decentering.
- The illumination circle Iref crosses all the secondary regions.
- The illumination circle Iref passes between the elements Pa2 and Pa3, which implies that the elements Pa1 and Pa2 are substantially well lit while the elements Pa3 and Pa4 are hardly lit, if at all.
- A relative reference quantity Grefa can be defined for the region Pa, corresponding to the illumination of the region Pa by a properly positioned optical device L.
- Such a magnitude may for example be the sum of the intensities received by the photosensitive elements of the Pa region.
- This sum equals the value "2" and reflects the fact that the illumination circle passes between the elements Pa2 and Pa3.
- a similar calculation can be done for each of the other secondary regions.
- FIG. 15B now illustrates the case where the optical device L is off-center with respect to the sensor C in the direction of its axis X.
- The illumination circle I5 of center OI5 passes this time between the photosensitive elements Pc3 and Pc4 of the secondary region Pc, as well as between the photosensitive elements Pd1 and Pd2 of the secondary region Pd.
- The magnitude Gd relative to the region Pd will therefore decrease relative to the optimal case and take the value "1" in the present example, while the magnitude Gc relative to the region Pc will increase compared to the optimal case and take the value "3".
- FIG. 15C illustrates two other cases where this time the optical device L is defocused with respect to the sensor C, in the absence of decentering between the device L and the sensor C.
- In a first case, where the optical device L approaches the sensor C, the illumination circle I6 this time passes between the first and the second photosensitive elements of each secondary region.
- The magnitudes Gi relative to each region will therefore decrease compared to the optimal case and take the value "1" in the present example.
- The comparison of these magnitudes Gi with the relative reference quantities Gref, which have the value "2" in the present example, will indicate a decrease in the radius of the illumination circle, and therefore a decrease in the distance separating the sensor C from the optical device L.
- The fact that the quantities relative to opposite secondary regions decrease simultaneously and by the same amount indicates that there is no decentering defect.
- In a second case, where the optical device L moves away from the sensor C, the illumination circle I7 this time passes between the second and the third photosensitive elements of each secondary region.
- The magnitudes Gi relative to each region will thus increase with respect to the optimal case and take the value "3" in the present example.
- The comparison of these relative magnitudes Gi with the relative reference quantities Gref, which still have the value "2" in the present example, will indicate an increase in the radius of the illumination circle, and therefore an increase in the distance separating the sensor C from the optical device L.
- each optical defect has been presented separately for the sake of simplification. It is obvious, however, that a decentering defect can occur simultaneously with a defocus.
- The sensor-optical module shown in FIGS. 15A-15C will, by its configuration, be able to detect and estimate each of these defects independently of one another.
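The way this fifth example separates the two defect types can be sketched as a common-mode/differential-mode decomposition of the counts of illuminated elements in opposite secondary regions; the decomposition and the function name `estimate_defects` are illustrative assumptions, with amplitudes expressed in element spacings.

```python
def estimate_defects(ga, gb, gc, gd, gref=2):
    """Estimate decentering and defocus amplitudes from the counts of
    illuminated elements in the four secondary regions (Pa, Pb on one
    axis, Pc, Pd on the other).

    A common variation of opposite regions indicates a radius (focus)
    change; an opposite (differential) variation indicates a shift.
    """
    focus_y = ((ga - gref) + (gb - gref)) / 2.0  # common mode, Y axis
    shift_y = ((ga - gref) - (gb - gref)) / 2.0  # differential mode, Y axis
    focus_x = ((gc - gref) + (gd - gref)) / 2.0  # common mode, X axis
    shift_x = ((gc - gref) - (gd - gref)) / 2.0  # differential mode, X axis
    return {
        "defocus": (focus_x + focus_y) / 2.0,
        "shift_x": shift_x,
        "shift_y": shift_y,
    }
```

With the counts of FIG. 15B (Gc = 3, Gd = 1, Ga = Gb = 2) this yields a pure shift along X, while the counts of the first case of FIG. 15C (all equal to 1) yield a pure defocus, so simultaneous defects separate cleanly.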
- The control of the image capture system includes the estimation of an optical defect, for example for compensation purposes.
- The latter may be one of the optical defects mentioned above, or any other optically conceivable defect detectable from the respective responses of at least some of the photosensitive elements of the sensor.
Applications Claiming Priority (2)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| FR0950192A (FR2941067B1) | 2009-01-14 | 2009-01-14 | Controle de defauts optiques dans un systeme de capture d'images |
| PCT/FR2010/050034 (WO2010081982A1) | 2009-01-14 | 2010-01-11 | Controle de defauts optiques dans un systeme de capture d'images |

Publications (1)

| Publication Number | Publication Date |
|---|---|
| EP2377306A1 | 2011-10-19 |
Family
ID=40532679
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP10706293A Withdrawn EP2377306A1 (fr) | 2009-01-14 | 2010-01-11 | Controle de defauts optiques dans un systeme de capture d'images |
Country Status (4)
Country | Link |
---|---|
US (1) | US8634004B2 (fr) |
EP (1) | EP2377306A1 (fr) |
FR (1) | FR2941067B1 (fr) |
WO (1) | WO2010081982A1 (fr) |
Families Citing this family (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2010055809A1 (fr) * | 2008-11-12 | 2010-05-20 | コニカミノルタオプト株式会社 | Procédé de réglage de dispositif d'imagerie et dispositif d'imagerie |
US8350934B2 (en) | 2010-10-21 | 2013-01-08 | Taiwan Semiconductor Manufacturing Co., Ltd. | Color image sensor array with color crosstalk test patterns |
US8924168B2 (en) | 2011-01-27 | 2014-12-30 | General Electric Company | Method and system to detect actuation of a switch using vibrations or vibration signatures |
TW201326738A (zh) * | 2011-12-16 | 2013-07-01 | Hon Hai Prec Ind Co Ltd | 相機模組檢測裝置及檢測方法 |
JP2019009653A (ja) * | 2017-06-26 | 2019-01-17 | アイホン株式会社 | インターホン機器におけるカメラモジュールの調整方法 |
US11555789B2 (en) | 2018-09-20 | 2023-01-17 | Siemens Healthcare Diagnostics Inc. | Systems, methods and apparatus for autonomous diagnostic verification of optical components of vision-based inspection systems |
US11277544B2 (en) * | 2019-08-07 | 2022-03-15 | Microsoft Technology Licensing, Llc | Camera-specific distortion correction |
EP4222723A1 (fr) | 2020-10-02 | 2023-08-09 | Google LLC | Dispositif de sonnette à capture d'image |
US11663704B2 (en) | 2021-04-28 | 2023-05-30 | Microsoft Technology Licensing, Llc | Distortion correction via modified analytical projection |
CN117529926A (zh) * | 2021-08-02 | 2024-02-06 | 谷歌有限责任公司 | 用于增强的包裹检测的非对称相机传感器定位 |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2008152095A1 (fr) * | 2007-06-15 | 2008-12-18 | Iee International Electronics & Engineering S.A. | Procédé de détection de contamination dans une caméra à plage tof |
Family Cites Families (27)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6864916B1 (en) * | 1999-06-04 | 2005-03-08 | The Trustees Of Columbia University In The City Of New York | Apparatus and method for high dynamic range imaging using spatially varying exposures |
JP3405302B2 (ja) * | 1999-12-24 | 2003-05-12 | 株式会社豊田自動織機 | 画像歪補正装置 |
US20060125945A1 (en) * | 2001-08-07 | 2006-06-15 | Satoshi Suzuki | Solid-state imaging device and electronic camera and shading compensaton method |
KR20040058277A (ko) * | 2001-11-13 | 2004-07-03 | 코닌클리즈케 필립스 일렉트로닉스 엔.브이. | 캘리브레이션 유도 방법, 이미지 처리 방법 및 이미지시스템 |
US7032472B2 (en) * | 2002-05-10 | 2006-04-25 | Caterpillar Inc. | Counterbalance for linkage assembly |
US7071966B2 (en) * | 2003-06-13 | 2006-07-04 | Benq Corporation | Method of aligning lens and sensor of camera |
TWI232349B (en) * | 2003-07-07 | 2005-05-11 | Benq Corp | Method for adjusting relative position of lens module by using uniform light source |
US7961973B2 (en) * | 2004-09-02 | 2011-06-14 | Qualcomm Incorporated | Lens roll-off correction method and apparatus |
JP4754939B2 (ja) * | 2005-11-01 | 2011-08-24 | オリンパス株式会社 | 画像処理装置 |
JP4014612B2 (ja) * | 2005-11-09 | 2007-11-28 | シャープ株式会社 | 周辺光量補正装置、周辺光量補正方法、電子情報機器、制御プログラムおよび可読記録媒体 |
US20070133893A1 (en) * | 2005-12-14 | 2007-06-14 | Micron Technology, Inc. | Method and apparatus for image noise reduction |
KR100808493B1 (ko) * | 2005-12-28 | 2008-03-03 | 엠텍비젼 주식회사 | 렌즈 셰이딩 보상 장치, 보상 방법 및 이를 이용한 이미지프로세서 |
US20080111912A1 (en) * | 2006-11-09 | 2008-05-15 | Mei-Ju Chen | Methods and apparatuses for generating information regarding spatial relationship between a lens and an image sensor of a digital imaging apparatus and related assembling methods |
US8405711B2 (en) * | 2007-01-09 | 2013-03-26 | Capso Vision, Inc. | Methods to compensate manufacturing variations and design imperfections in a capsule camera |
KR100911378B1 (ko) * | 2007-02-23 | 2009-08-10 | 삼성전자주식회사 | 렌즈 교정 방법 및 렌즈 교정 장치 |
JP2008227582A (ja) * | 2007-03-08 | 2008-09-25 | Hoya Corp | 撮像装置 |
US7907185B2 (en) * | 2007-07-16 | 2011-03-15 | Aptina Imaging Corporation | Lens correction logic for image sensors |
US8085391B2 (en) * | 2007-08-02 | 2011-12-27 | Aptina Imaging Corporation | Integrated optical characteristic measurements in a CMOS image sensor |
JP2009205479A (ja) * | 2008-02-28 | 2009-09-10 | Kddi Corp | 撮像装置のキャリブレーション装置、方法及びプログラム |
US8970744B2 (en) * | 2009-07-02 | 2015-03-03 | Imagination Technologies Limited | Two-dimensional lens shading correction |
CN102035988A (zh) * | 2009-09-29 | 2011-04-27 | 深圳富泰宏精密工业有限公司 | 手机摄像头拍照效果测试系统及方法 |
CN102096307A (zh) * | 2009-12-14 | 2011-06-15 | 鸿富锦精密工业(深圳)有限公司 | 测试相机镜头偏转角度的方法和装置 |
US8477195B2 (en) * | 2010-06-21 | 2013-07-02 | Omnivision Technologies, Inc. | Optical alignment structures and associated methods |
JP2012070046A (ja) * | 2010-09-21 | 2012-04-05 | Sony Corp | 収差補正装置、収差補正方法、および、プログラム |
TWI417640B (zh) * | 2010-12-31 | 2013-12-01 | Altek Corp | 鏡頭校準系統 |
US8659689B2 (en) * | 2011-05-17 | 2014-02-25 | Rpx Corporation | Fast measurement of alignment data of a camera system |
US8786737B2 (en) * | 2011-08-26 | 2014-07-22 | Novatek Microelectronics Corp. | Image correction device and image correction method |
2009
- 2009-01-14 FR FR0950192A patent/FR2941067B1/fr not_active Expired - Fee Related

2010
- 2010-01-11 US US13/144,293 patent/US8634004B2/en not_active Expired - Fee Related
- 2010-01-11 WO PCT/FR2010/050034 patent/WO2010081982A1/fr active Application Filing
- 2010-01-11 EP EP10706293A patent/EP2377306A1/fr not_active Withdrawn
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2008152095A1 (fr) * | 2007-06-15 | 2008-12-18 | Iee International Electronics & Engineering S.A. | Method of detecting contamination in a TOF range camera |
Also Published As
Publication number | Publication date |
---|---|
FR2941067A1 (fr) | 2010-07-16 |
WO2010081982A1 (fr) | 2010-07-22 |
US20110273569A1 (en) | 2011-11-10 |
US8634004B2 (en) | 2014-01-21 |
FR2941067B1 (fr) | 2011-10-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP2377306A1 (fr) | Monitoring of optical defects in an image capture system | |
EP2457379B1 (fr) | Method for estimating a defect of an image capture system, and associated systems | |
EP2780763B1 (fr) | Method and system for capturing an image sequence with compensation for magnification variations | |
EP2374270B1 (fr) | Image capture system and associated use for improving the quality of acquired images | |
EP2465255A1 (fr) | Image capture system and method with two operating modes | |
FR2833743A1 (fr) | Method and device with low acquisition resolution for checking a display screen | |
BE1019646A3 (fr) | Inspection system and high-speed imaging method | |
EP2852815A1 (fr) | Chromatic altimetry converter | |
CA2976931A1 (fr) | Method and device for characterizing the aberrations of an optical system | |
FR3040798A1 (fr) | Plenoptic camera | |
JP5920608B2 (ja) | Imaging system and method for evaluating image sharpness | |
CA2701151A1 (fr) | Wavefront-modifying imaging system and method for increasing the depth of field of an imaging system | |
WO2019166720A1 (fr) | Dynamic detection of stray light in a digital image | |
EP2746830B1 (fr) | Optical focusing of an image capture instrument | |
FR3102845A1 (fr) | Determination of at least one optical parameter of an optical lens | |
WO2020104509A1 (fr) | Apparatus and method for observing a scene containing a target | |
BE1015708A3 (fr) | Method for measuring the height of spheres or hemispheres | |
FR3130060A1 (fr) | Method and device for characterizing distortions in a plenoptic camera | |
FR3054678B1 (fr) | Kit for an imaging device | |
WO2023187170A1 (fr) | Method for correcting optical aberrations introduced by an optical lens in an image, and apparatus and system implementing such a method | |
FR2688969A1 (fr) | Method for correcting pixel uniformity defects of a solid-state sensor, and device for implementing the method | |
FR3122938A1 (fr) | Videoconferencing system for reducing a parallax effect associated with the direction of a user's gaze | |
WO2003023715A2 (fr) | Method for determining a reference image of an image sensor |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PUAI | Public reference made under article 153(3) EPC to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
17P | Request for examination filed |
Effective date: 20110706 |
AK | Designated contracting states |
Kind code of ref document: A1 |
Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO SE SI SK SM TR |
RIN1 | Information on inventor provided before grant (corrected) |
Inventor name: DOUADY, CESAR |
Inventor name: GUICHARD, FREDERIC |
Inventor name: TARCHOUNA, IMENE |
DAX | Request for extension of the european patent (deleted) | ||
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: EXAMINATION IS IN PROGRESS |
17Q | First examination report despatched |
Effective date: 20170322 |
RAP1 | Party data changed (applicant data changed or rights of an application transferred) |
Owner name: LENS CORRECTION TECHNOLOGIES |
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN |
18D | Application deemed to be withdrawn |
Effective date: 20190330 |