EP1686793B1 - Automatic focus for image sensors (Focalisation automatique pour des capteurs d'image) - Google Patents


Info

Publication number
EP1686793B1
EP1686793B1 (granted from application EP06250389A; also published as EP1686793A1)
Authority
EP
European Patent Office
Prior art keywords
focus
pixel data
accumulated
image
focus condition
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
EP06250389A
Other languages
German (de)
English (en)
Other versions
EP1686793A1 (fr)
Inventor
Xiaolin Huang
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Omnivision Technologies Inc
Original Assignee
Omnivision Technologies Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Omnivision Technologies Inc filed Critical Omnivision Technologies Inc
Publication of EP1686793A1 publication Critical patent/EP1686793A1/fr
Application granted granted Critical
Publication of EP1686793B1 publication Critical patent/EP1686793B1/fr


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/67Focus control based on electronic image sensor signals
    • H04N23/673Focus control based on electronic image sensor signals based on contrast or high frequency components of image signals, e.g. hill climbing method

Definitions

  • the embodiments of this invention relate generally to digital imaging and, more specifically, to architectures and methods for the automatic focusing of digital imaging devices.
  • digital imaging devices may be one of two types: automatic focus (auto-focus) and fixed-focus.
  • the fixed-focus devices usually cannot adjust the lens or change the aperture and, instead, rely on a large depth of field within which objects appear to be in focus.
  • the images captured by fixed-focus devices are not as sharp as those captured by auto-focus devices.
  • Images captured at the focal point of a lens will be in sharp focus, where the focal point is defined as the point on the axis of the camera lens at which light rays converge or appear to converge.
  • an adequately sharp image can be produced, provided the object is within the depth of field of the lens.
  • the depth of field is a range of distance from a camera within which the captured image of an object is sufficiently focused.
  • the depth of field spans a range on both sides of the exact focal point.
  • auto-focusing means rely mainly on the data obtained from the main imaging data path.
  • the basic assumption is that the best focus condition is achieved when the image contains the maximum amount of high frequency information, measured by applying digital filtering to a portion of the digitized image data.
  • the computed energy of the filtered spectrum is employed as a measure of frequency content.
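The energy-of-the-filtered-spectrum measure described above can be sketched in a few lines of Python. This is an illustrative reconstruction, not the patent's implementation: the function name is ours, and the default kernel is simply the example tap set mentioned later in the text.

```python
def focus_measure(region, kernel=(-1, 0, 2, 0, -1)):
    """Sketch: estimate focus quality of an image region as the energy
    of its horizontally high-pass-filtered rows.

    region: list of rows of pixel intensities (a portion of the image).
    kernel: an illustrative FIR tap set (assumption, from the example
            taps given in the text); sharper images yield more
            high-frequency content and hence higher energy."""
    n = len(kernel)
    energy = 0.0
    for row in region:
        # 1-D horizontal filtering, then accumulate the squared output
        for i in range(len(row) - n + 1):
            v = sum(k * row[i + j] for j, k in enumerate(kernel))
            energy += v * v
    return energy
```

A uniform (maximally defocused) region filters to zero energy, while a region containing an edge yields a strictly positive measure, which is the monotonic behavior the auto-focus loop relies on.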
  • Figure 1 is a high-level schematic diagram of an auto-focus process, according to an embodiment of this invention.
  • Figure 2 is a block diagram of processes performed by the windowing unit depicted in Figure 1 , according to another embodiment of the invention.
  • Figure 3 illustrates the window processing circuit depicted in Figure 1 , according to yet another embodiment of this invention.
  • Figure 4 illustrates the focus condition change detection circuit depicted in Figure 1 , according to yet another embodiment of this invention.
  • Figure 5 illustrates a control flow diagram of the tracking state-machine depicted in Figure 1 , according to another embodiment of this invention.
  • Embodiments of the invention relate to automatic focusing methods and apparatus for use in digital imaging devices.
  • the proposed methods and apparatus evaluate the focus condition of the optical system by evaluating the digitized image, without requiring any additional hardware besides the digital image-capture signal flow path.
  • This application discloses a method, based on the evaluated focus conditions, for computing the direction and the magnitude of the lens movement.
  • the computed quantities are communicated to the lens-driving-apparatus for incremental adjustment of the optical system and attainment of a better focus.
  • upon reaching a desired focusing quality, the device is prevented from controlling the lens-driving-apparatus.
  • Figure 1 is a schematic diagram of an auto-focus process 100, according to an embodiment of this invention, which is flexible for implementation in various imaging applications.
  • a unique method is provided for measuring the frequency content of a captured image. In a computationally cost-effective manner, the proposed method iteratively approximates the optimal lens position for best focus.
  • the method pertains to a non-linear control method to control the optical system driver, which has low complexity while being highly effective for both video and still imaging applications and can be used in any digital imaging device that employs focus adjustable lenses.
  • the raw image data 102 captured by a two dimensional array of light sensors, or the pre-processed image data 104, from the primary image capturing device 106, is fed into the windowing unit 108, which computes values quantifying the quality of the current focus state.
  • the quantified focus state values are subsequently utilized by the focus condition change detection circuit 110 to compute an out-of-focus indicator value.
  • the quantified focus state values and the out-of-focus indicator values are employed by the tracking state-machine 112 to compute a set of directional and magnitude quantities 114 for driving the optical system towards a better focusing position.
  • the out-of-focus indicator values determine whether or not the optical system needs to be driven. They enable or disable the tracking state-machine 112 from computing the directional and magnitude quantities 114 or from sending these quantities to the lens driving apparatus.
  • Figure 2 is a block diagram of the processes performed by the windowing unit 108.
  • the primary task of the windowing unit 108 is to compute values that suitably describe the focus state.
  • the values that correspond to the desired focus state are not absolute and may vary under different imaging environments.
  • the windowing unit 108 is comprised of a number of window processing circuits 202, each of which only processes the data from an assigned region of the image. Each assigned region is defined by location and size parameters that are programmable.
  • the region is rectangular in shape and is typically located such that it fits entirely within the image (i.e. no portion of the region lies outside the image).
  • a typical size is 64x32 and can be varied up to the full length and width of the image. Regions can be of different sizes. Regions may also overlap each other within an image, depending on the needs of the application developer. The relative importance of each region is determined by its weighting function.
  • typically 6 or fewer window processing circuits 202 are needed for practical applications. However, the user/application-developer can decide on the number of circuits 202; for example, the user may find that only 2 window processing circuits are required and apply zero weights to the remaining 4. The more window processing circuits there are, the more flexibility the application developers have.
  • the mode select unit 208 distinguishes Window Processing unit 0 from the remaining Window Processing units 202, 1 to 5. It gives application developers a flexible means to differentiate weighted window-processed results from multiple regions from non-weighted window-processed results from the single region defined by Window Processing unit 0. For instance, Window Processing unit 0 may be defined to include the entire image, whereas Window Processing units 1-5 may be defined by small non-overlapping regions within the image. An example of a situation where Window Processing unit 0 is selected over the remaining units is a change in focus, which may cause the small windows to lose effectiveness in tracking focus; Window Processing unit 0 may then be selected to extract a more general focus condition based on a larger window size. Which section of the image goes to Window Processing unit 0 is specified by the application developer and can be up to the entire image.
  • Figure 3 illustrates the processing architecture of the window processing circuit 202.
  • the window extraction circuit 302 determines when to extract pixel data corresponding to its assigned region from the main image data stream. If a pixel is determined to belong to the pre-specified region, then it is retained and fed to the digital filters A (304) and B (306). Otherwise, it is ignored in all subsequent processes.
  • Each of the two digital filters 304 and 306 can be programmed with filtering coefficients selected from a set of digital filter coefficients. Filters A and B are programmed by the application developer to extract different spectral information of the image where one may be more sensitive to high frequency image data and the other may be more sensitive to low frequency image data. The outcomes of both filters are used by the focus condition change detection unit. But the tracking state machine uses only the outcome of one of the filters.
  • the digital filters are the primary means of evaluating focus condition and extract and evaluate certain frequency components of the image region.
  • the filters are one-dimensional filters and filter only in the "horizontal" direction (if the image data is scanned in a horizontal raster scan order). They involve a finite number of filter taps, such as 3 to 15 taps (e.g., [-1 0 2 0 -1]), and they are programmable.
  • the filters are FIR filters which means that each window processed data is first stored in a delay chain. The FIR filter taps are then multiplied by the corresponding delayed data in the chain and the multiplication results are accumulated to produce the filter results.
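The delay-chain multiply-accumulate operation just described can be sketched as a simple software model (the hardware processes window pixels as they stream in; names here are illustrative):

```python
def fir_filter(pixels, taps):
    """Sketch of the delay-chain FIR described above: each incoming
    window pixel is shifted into a delay chain, the programmable taps
    are multiplied by the corresponding delayed samples, and the
    products are accumulated into one filter output per input pixel."""
    delay = [0] * len(taps)   # delay chain, newest sample first
    out = []
    for p in pixels:
        delay = [p] + delay[:-1]                        # shift in new pixel
        acc = sum(t * d for t, d in zip(taps, delay))   # multiply-accumulate
        out.append(acc)
    return out
```

For example, a two-tap moving sum `taps=[1, 1]` applied to the stream `[1, 2, 3]` produces `[1, 3, 5]`, since each output adds the current pixel to the previous one held in the delay chain.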
  • a digitally filtered data sequence is then accumulated in accumulators 308 and 310, as illustrated in Figure 3 , until the entire region is processed. The final accumulated sums are provided as outputs of the window processing circuit 202.
  • the final accumulated sums from each of the window processing circuits 202 are then arithmetically weighted in the two arithmetic weighting units A (204) and B (206), to produce a weighted accumulated sum.
  • These weighting units 204 and 206 are distinguished by their digital filter type.
  • Arithmetic weighting unit A (204) is associated with final accumulated sums produced from the digital filter A (304) filtered sequence.
  • the arithmetic weighting unit B (206) is associated with digital filter B (306).
  • each filter's results are multiplied by a weight and the weighted results are summed; the weighting is thus a weighted sum, not an array of individually weighted results.
  • the weights are programmable and are tuned by the application developers. The tuning process seeks the best focus tracking, since the weights directly affect tracking performance. Because the weights are programmable, they can either be hard-coded into the design or, if the application developers wish, generated using additional processing on the image and fed back to the focus tracking module.
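The arithmetic weighting unit therefore reduces the per-window final accumulated sums to a single scalar. A minimal sketch (function and parameter names are ours):

```python
def weighted_accumulated_sum(window_sums, weights):
    """Sketch of an arithmetic weighting unit: each window processing
    circuit's final accumulated sum is multiplied by its programmable
    weight, and the products are summed into one weighted accumulated
    sum. Zero weights effectively disable unused windows."""
    if len(window_sums) != len(weights):
        raise ValueError("one weight per window is required")
    return sum(w * s for w, s in zip(weights, window_sums))
```

With sums `[10, 20, 30]` and weights `[1, 0, 2]`, the middle window is ignored and the result is `1*10 + 2*30 = 70`.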
  • Figure 4 illustrates the process architecture of the focus condition change detection circuit 110.
  • the two final accumulated sums from the windowing unit are sent to a difference circuit 402 and a divide circuit 404. These circuits first perform a pre-scaling of the final accumulated sums and then subtract and divide the two final accumulated sums, respectively. Pre-scaling simply throws out (truncates/rounds) excess bits so that the resulting value is aligned properly before performing difference or division, and is primarily to align the results between Arithmetic Weighting A and Arithmetic Weighting B. The scaling is only applied to one of the two arithmetic weighting units.
  • the out-of-focus decision circuit 406 then makes a decision on the focus condition by evaluating the relative magnitudes of the difference and division results and by comparing them to a predefined threshold value.
  • as the lens moves, the focus condition changes.
  • each set of changes can be traced out as a curve (e.g., a Gaussian curve).
  • when the lens is far from the focus point, the sums A and B will return low values; when the lens is approaching the focus point, the two sums will return their maximum values.
  • the two curves will contain a significant amount of noise, even at the focus point.
  • two sets of filters, A and B are employed.
  • the use of two different filters implies that the two curves (being sensitive to different spectral contents) will have different amplitudes. This allows taking the difference and the ratio between the two curves; using both the difference and the ratio makes the method more robust against noise during focus tracking.
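One way to read the difference-and-ratio scheme is the sketch below. It is an assumption for illustration: the text leaves the exact decision rule to the threshold detection logic, so here a focus condition change is flagged when either the difference or the ratio of the two weighted sums moves by more than a programmable threshold between frames.

```python
def focus_condition_changed(sum_a, sum_b, prev_diff, prev_ratio,
                            diff_thresh, ratio_thresh):
    """Illustrative sketch of the focus condition change detector:
    compute the difference and ratio of the two weighted accumulated
    sums, and compare their frame-to-frame change against thresholds.
    All names and the exact comparison logic are assumptions."""
    diff = sum_a - sum_b
    ratio = sum_a / sum_b if sum_b else float("inf")
    changed = (abs(diff - prev_diff) > diff_thresh or
               abs(ratio - prev_ratio) > ratio_thresh)
    return changed, diff, ratio
```

A stable scene leaves both quantities near their previous values and the detector stays quiet; a scene or focus change perturbs at least one of them past its threshold and re-enables the tracking state machine.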
  • Figure 5 depicts the control flow diagram of the tracking state-machine 112, which keeps track of up to two previous positions by storing their corresponding tracking state information in memory. Based on the current state and the stored state information, the tracking state machine 112 calculates the desired direction and magnitude of the lens adjustment.
  • the stored state information includes the weighted accumulated sum associated with digital filter B from the windowing unit, the previously computed direction and magnitude quantities, and the out-of-focus indicator values.
  • the computation of the desired direction is based on comparing the weighted accumulated sum of the current position with the weighted accumulated sums of adjacent tracking positions (i.e. the previous one or two positions). The desired direction is the previous direction if the lens has been moving in the same direction for the previous two positions; otherwise, if the lens reversed its direction, there may be one position to the left and one position to the right of the current position.
  • the direction is adjusted such that the weighted accumulated sum approaches a maximum value.
  • the magnitude is adjusted based on two assumptions: (1) that none of the previous positions visited is visited again (unless there is a focus condition change), and (2) that the lens is brought to the optimal focus point (lens adjustment magnitudes may reduce when approaching optimal focus).
  • the tracking state-machine 112 is reset to a neutral state in which the lens is placed in a middle (or relaxed) position.
  • the tracking state information for this position is then computed and stored in memory.
  • a positive step (corresponding to a positive direction and a non-zero magnitude quantity) is then applied to the lens-driving-apparatus.
  • the tracking state for this new position is then computed and stored in memory.
  • Information from the two states, the current and the initial information, are then used to calculate the desired direction and magnitude quantities for adjusting the lens position.
  • a positive direction is chosen as the desired direction if the current position results in better focus quality. Otherwise, a negative direction is chosen.
  • the desired magnitude quantity in this step is typically double that of the previous magnitude quantity.
  • the tracking state-machine 112 operates in the "Global Search" mode.
  • the determination of the desired direction remains the same, but the magnitude quantities are computed differently. If the current position results in better focus quality, both the desired direction and magnitude quantities remain the same; if not, the desired direction is reversed and the magnitude quantity is halved. In the latter case, the next lens position will lie between the current and the previous lens positions.
  • the condition for reversing the search direction is a simple condition based on comparing the current weighted accumulated sum against a previously stored weighted accumulated sum. More elaborate conditions may also be implemented to reduce the amount of "overshoot" during global search in which the lens is driven past the optimal focal point. Such conditions may utilize information from other properties of the image sequence measured at different times.
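The Global Search update rule just described (keep direction and step size while focus improves; reverse and halve on degradation) can be sketched as follows. This is a minimal model of that one rule; the full state machine also doubles the step during the initial search and may use the more elaborate overshoot conditions mentioned above.

```python
def global_search_step(curr_quality, prev_quality, direction, magnitude,
                       min_step=1):
    """Sketch of the Global Search update: compare the current weighted
    accumulated sum (focus quality) against the previous one.
    direction is +1/-1; magnitude is the lens step size. min_step is an
    assumed floor so the step never collapses to zero."""
    if curr_quality > prev_quality:
        return direction, magnitude                   # keep climbing
    return -direction, max(magnitude // 2, min_step)  # reverse and halve
```

Repeated application converges on the peak of the focus-quality curve: each reversal halves the step, bracketing the optimal lens position between successive samples.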
  • when the lens is repositioned, a new weighted accumulated sum is calculated and the tracking state-machine 112 operates in the "Local Search" mode.
  • the main characteristic of this search mode is that, during the update, if the current position does not have a better focus quality than the previous positions, then the tracking state information of the best focus position is kept. On the other hand, if the current position has the best focus quality, then the previous tracking state information is discarded and only the current tracking state information is stored.
  • the direction and magnitude quantities are determined to approach the optimal focus point by comparing the current tracking state information against the tracking state information having the best focus quality. After the best focus position has been determined through Local Search, the tracking state-machine 112 is placed in the Idle mode and remains in the Idle mode until an out-of-focus signal is generated by the focus condition change detection unit 110.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Automatic Focus Adjustment (AREA)
  • Studio Devices (AREA)
  • Focusing (AREA)

Claims (19)

  1. Automatic focusing means for use in a digital imaging device, comprising:
    means (106) for producing pixel data (102) representing an image;
    means for extracting the pixel data of a plurality of regions of the image, wherein the extracted pixel data come from a green component or a luminance component of the image;
    means (108) for computing at least one value corresponding to a focus state of the plurality of regions based on the extracted pixel data, the means (108) for computing the at least one focus state value comprising:
    a plurality of window processing circuits (202), each for computing first and second accumulated sums for a respective one of the regions based on its extracted pixel data by extracting first spectral information from the extracted pixel data using a first finite impulse response, FIR, filter (304) involving a finite number of taps and extracting second spectral information from the extracted pixel data using a second FIR filter (306) involving a finite number of taps, the first FIR filter being more sensitive to high-frequency image data and the second FIR filter being more sensitive to low-frequency image data, the first and second FIR filters each respectively being for delaying pixel data, multiplying the taps by the delayed data and accumulating the results to produce a first digitally filtered sequence of pixel data and a second digitally filtered sequence of pixel data, the first digitally filtered sequence of pixel data being accumulated to produce the first accumulated sum, and the second digitally filtered sequence of pixel data being accumulated to produce the second accumulated sum;
    a first arithmetic weighting unit (204) for arithmetically weighting the first accumulated sums from the plurality of window processing circuits (202) to produce a first weighted accumulated sum;
    a second arithmetic weighting unit (206) for arithmetically weighting the second accumulated sums from the plurality of window processing circuits (202) to produce a second weighted accumulated sum, the first and second weighted accumulated sums providing an overall measure of image statistics;
    means (110) for computing an out-of-focus indicator value based on the first and second weighted accumulated sums; and
    means (112) for computing a direction and magnitude value for adjusting the optical system, based on the out-of-focus indicator value.
  2. Apparatus according to claim 1, further comprising:
    means for storing the focus state values of previous positions, wherein:
    in a first mode, the focus state values of the two previous consecutive positions are stored;
    in a second mode, the focus state values of a position having the best focus quality are stored and the other values are discarded; and
    in a third mode, all stored focus state values are discarded;
    means for comparing the current focus state values with the stored focus state values and for determining a desired position for adjusting the optical system, wherein the direction value and the magnitude value are computed based on the stored focus state values and the previous direction and magnitude values;
    means for stopping the focus adjustment when a minimum magnitude quantity is reached and the focus state is determined to be more optimal than any previously stored focus state; and
    means for restarting the focus adjustment when the out-of-focus indicator value signifies an out-of-focus situation.
  3. Apparatus according to claim 1, comprising windowing means (108) having at least one window processing circuit (202) for each region of the image for extracting said pixel data, wherein an image region is specified by providing the windowing means with its location and size, and wherein the regions may overlap.
  4. Apparatus according to claim 1, comprising windowing means (108) comprising at least one main window processing circuit (202) and one secondary window processing circuit (202), and wherein:
    the main window processing circuit is dedicated to auto-focus video applications;
    the secondary window processing circuit is dedicated to auto-focus still-image applications; and
    subsequent processing of the results of the main window processing circuit and subsequent processing of the results of the secondary window processing circuit are performed independently.
  5. Apparatus according to claim 4, further comprising a mode select (208) for selecting between said main window processing circuit (202) and said secondary window processing circuit (202) as an active processing resource.
  6. Apparatus according to claim 1, comprising windowing means (108) having at least one window processing circuit (202) for each region of the image, wherein each window processing resource implements the first or second FIR filters.
  7. Apparatus according to claim 6, wherein a set of digital filter coefficients used in each of said first and second FIR filters is selected from one or more predetermined sets of digital filter coefficients.
  8. Apparatus according to claim 6, further comprising a plurality of accumulator units (308, 310) for accumulating each of said digitally filtered sequences by adding the pixel data in the sequence to a running total representing the sum of all previous pixel data in the sequence, wherein the final accumulated result is the running total computed by an accumulator unit after adding the last pixel data of the digitally filtered sequence.
  9. Apparatus according to claim 8, further operable to compute an arithmetically weighted accumulated sum by weighting and adding final accumulated results from the window processing circuits (202).
  10. Apparatus according to claim 9, wherein the arithmetic weights may be updated for each new image.
  11. Apparatus according to claim 9, wherein the arithmetic weights are separated into at least one curve group, and wherein each curve group is defined by:
    containing exactly one final accumulated result of a window processing circuit (202); and
    each final accumulated result being associated with one curve group.
  12. Apparatus according to claim 4, further operable to compute an arithmetically weighted accumulated sum by weighting and adding final accumulated results from window processing resources (202), wherein the main window uses a weight of one.
  13. Apparatus according to claim 1, wherein the mentioned processes are programmed into at least one integrated circuit of the digital imaging device.
  14. Apparatus according to claim 3, wherein the window processing circuits (202) operate on a pixel sequence from a single readout of the image.
  15. Apparatus according to claim 11, wherein focus change detection means (110) compute the out-of-focus indicator value using results from at least two of the curves and produce two sets of intermediate results by operating on the weighted accumulated sums of each curve group, and wherein:
    the first set of intermediate results is computed by taking the difference between two predetermined weighted accumulated sums of the same curve group;
    the second set of intermediate results is computed by dividing two predetermined weighted accumulated sums of the same curve group; and
    the out-of-focus indicator value is computed by threshold detection means that perform logical operations on the four sets of values: two sets of intermediate results from the current image and two sets of intermediate results from the previous image.
  16. An automatic focusing method for a digital image capture device (106), the method comprising:
    producing pixel data (102) representing an image, using a two-dimensional array of light sensors located substantially at a focal plane of an optical system;
    extracting the pixel data of a plurality of regions of the image;
    computing at least one value corresponding to the focus state of the at least one region based on the extracted pixel data by:
    computing first and second accumulated sums for each of the regions based on its extracted pixel data by extracting first spectral information from the extracted pixel data using a first finite impulse response, FIR, filter (304) involving a finite number of taps and extracting second spectral information from the extracted pixel data using a second FIR filter (306) involving a finite number of taps, the first FIR filter being more sensitive to high-frequency image data and the second FIR filter being more sensitive to low-frequency image data, the first and second FIR filters each respectively being for delaying pixel data, multiplying the taps by the delayed data and accumulating the results to produce a first digitally filtered sequence of pixel data and a second digitally filtered sequence of pixel data, the first digitally filtered sequence of pixel data being accumulated to produce the first accumulated sum, and the second digitally filtered sequence of pixel data being accumulated to produce the second accumulated sum;
    arithmetically weighting the first accumulated sums to produce a first weighted accumulated sum;
    arithmetically weighting the second accumulated sums to produce a second weighted accumulated sum, the first and second weighted accumulated sums providing an overall measure of image statistics;
    computing an out-of-focus indicator value based on the first and second weighted accumulated sums; and
    computing a direction and magnitude value for adjusting and repositioning the optical system from a current position, based on the out-of-focus indicator value.
  17. Method according to claim 16, wherein computing the direction value and the magnitude value comprises:
    storing the focus condition values of zero, one or two previous positions;
    comparing the current focus condition values with the stored focus condition values to determine a desired position for adjusting the optical system;
    computing the direction value and the magnitude value based on the stored focus condition values and the previous direction and magnitude values;
    stopping the focus adjustment when a minimum magnitude quantity is reached and the focus condition is determined to be more optimal than any previously stored focus condition; and
    restarting the focus adjustment when the out-of-focus indicator value signifies an out-of-focus situation.
  18. Method according to claim 16, wherein computing the direction value and the magnitude value comprises:
    choosing a position for adjusting the optical system that is:
    on the same side of a previous position as the current position, if said focus condition improves; between the current position and the previous position, if the focus condition degrades and said direction value was previously computed; and
    on the opposite side of the current position relative to the previous position, if said focus condition degrades and said direction quantity was not previously determined.
  19. Method according to claim 16, wherein said functions are programmed into at least one of the integrated circuits of the digital imaging device (106).
EP06250389A 2005-01-26 2006-01-25 Automatic focus for image sensors Active EP1686793B1 (fr)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US11/044,137 US7589781B2 (en) 2005-01-26 2005-01-26 Automatic focus for image sensors

Publications (2)

Publication Number Publication Date
EP1686793A1 EP1686793A1 (fr) 2006-08-02
EP1686793B1 true EP1686793B1 (fr) 2011-03-09

Family

ID=36127909

Family Applications (1)

Application Number Title Priority Date Filing Date
EP06250389A Active EP1686793B1 Automatic focus for image sensors

Country Status (6)

Country Link
US (1) US7589781B2 (fr)
EP (1) EP1686793B1 (fr)
CN (1) CN1825906B (fr)
AT (1) ATE501592T1 (fr)
DE (1) DE602006020513D1 (fr)
TW (1) TWI325087B (fr)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080266310A1 (en) * 2006-03-31 2008-10-30 Kayla Chalmers System and method for multiple color format spatial scaling
US8238681B2 (en) * 2008-11-25 2012-08-07 Nokia Corporation Adaptive configuration of windows-of-interest for accurate and robust focusing in multispot autofocus cameras
NO330275B1 (no) * 2008-12-19 2011-03-21 Tandberg Telecom As Method in a video coding/decoding process
TWI463415B (zh) * 2009-03-06 2014-12-01 Omnivision Tech Inc Preprocessing algorithm for object-based optical character recognition
JP5951211B2 (ja) * 2011-10-04 2016-07-13 Olympus Corp Focus control device and endoscope device
CN105763802B (zh) * 2016-02-29 2019-03-01 Guangdong Oppo Mobile Telecommunications Corp Ltd Control method, control device and electronic device
US11082606B1 (en) * 2018-09-11 2021-08-03 Apple Inc. Method and system for robust contrast based auto focus in low light

Family Cites Families (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4928170A (en) * 1988-06-21 1990-05-22 Visualtek, Inc. Automatic focus control for an image magnification system
US20050083428A1 (en) * 1995-03-17 2005-04-21 Hiroto Ohkawara Image pickup apparatus
US5644519A (en) * 1995-04-07 1997-07-01 Motorola, Inc. Method and apparatus for a multiply and accumulate circuit having a dynamic saturation range
US6563543B1 (en) * 1998-03-31 2003-05-13 Hewlett-Packard Development Company, L.P. Digital camera and method of using same
US6151415A (en) * 1998-12-14 2000-11-21 Intel Corporation Auto-focusing algorithm using discrete wavelet transform
JP3649043B2 (ja) * 1999-06-07 2005-05-18 Seiko Epson Corp Image display apparatus and method, and image processing apparatus and method
US6441855B1 (en) * 2000-05-09 2002-08-27 Eastman Kodak Company Focusing device
JP3666429B2 (ja) * 2001-09-03 2005-06-29 Konica Minolta Photo Imaging Inc Autofocus apparatus and method, and camera
JP4281311B2 (ja) * 2001-09-11 2009-06-17 Seiko Epson Corp Image processing using subject information
US7158182B2 (en) * 2001-09-28 2007-01-02 Nikon Corporation Camera that engages in a focusing operation through a contrast method
US7187413B2 (en) * 2002-07-25 2007-03-06 Lockheed Martin Corporation Method and system for using an image based autofocus algorithm
JP4143394B2 (ja) * 2002-12-13 2008-09-03 Canon Inc Autofocus apparatus
JP4647885B2 (ja) * 2003-02-17 2011-03-09 Nikon Corp Camera
JP3867687B2 (ja) * 2003-07-08 2007-01-10 Konica Minolta Photo Imaging Inc Imaging apparatus
JP4532865B2 (ja) * 2003-09-09 2010-08-25 Canon Inc Imaging apparatus and focus control method for imaging apparatus
JP5464771B2 (ja) * 2003-10-15 2014-04-09 Canon Inc Imaging apparatus and focus control method therefor
US20050212950A1 (en) * 2004-03-26 2005-09-29 Chinon Kabushiki Kaisha Focal length detecting method, focusing device, image capturing method and image capturing apparatus

Also Published As

Publication number Publication date
ATE501592T1 (de) 2011-03-15
US7589781B2 (en) 2009-09-15
EP1686793A1 (fr) 2006-08-02
TW200627047A (en) 2006-08-01
TWI325087B (en) 2010-05-21
CN1825906A (zh) 2006-08-30
DE602006020513D1 (de) 2011-04-21
CN1825906B (zh) 2011-06-29
US20060164934A1 (en) 2006-07-27

Similar Documents

Publication Publication Date Title
EP2214139B1 (en) Two-dimensional polynomial model for depth estimation based on two-picture matching
EP1686793B1 (en) Automatic focus for image sensors
EP2615484B1 (en) Autofocus apparatus and method with calibration and slope correction
US8023000B2 (en) Image pickup apparatus, image processing apparatus, image pickup method, and image processing method
US9501834B2 (en) Image capture for later refocusing or focus-manipulation
US9531944B2 (en) Focus detection apparatus and control method thereof
EP2378760A2 (en) Four-dimensional polynomial model for depth estimation based on two-picture matching
US9204034B2 (en) Image processing apparatus and image processing method
US20090079862A1 (en) Method and apparatus providing imaging auto-focus utilizing absolute blur value
US20160198107A1 (en) Focal point detection device and focal point detection method
WO2007058100A1 (en) Focus detector
US8855479B2 (en) Imaging apparatus and method for controlling same
US20120249833A1 (en) Motion robust depth estimation using convolution and wavelet transforms
WO2011093923A1 (en) Depth from defocus calibration
US20120249816A1 (en) Focus direction detection confidence system and method
US20130342751A1 (en) Image pickup apparatus and its control method
JP2008276217A (ja) Apparatus and method for automatic focus adjustment of an image sensor
US20070140677A1 (en) Automatic focusing methods and image capture devices utilizing the same
KR19980084175A (ko) Focus adjustment apparatus and method using an adaptive filter
KR20170101532A (ko) Image fusion method, computer program therefor, and recording medium thereof
EP3051799A1 (en) Imaging device and image processing method
US10747089B2 (en) Imaging apparatus and control method of the same
CN102833484A (zh) Imaging apparatus and control method thereof
US20140340566A1 (en) Imaging device
US11854239B2 (en) Image processing device, imaging device, image processing method, and recording medium

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20060214

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC NL PL PT RO SE SI SK TR

AX Request for extension of the european patent

Extension state: AL BA HR MK YU

17Q First examination report despatched

Effective date: 20070223

AKX Designation fees paid

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC NL PL PT RO SE SI SK TR

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC NL PL PT RO SE SI SK TR

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: CH

Ref legal event code: EP

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

REF Corresponds to:

Ref document number: 602006020513

Country of ref document: DE

Date of ref document: 20110421

Kind code of ref document: P

REG Reference to a national code

Ref country code: DE

Ref legal event code: R096

Ref document number: 602006020513

Country of ref document: DE

Effective date: 20110421

REG Reference to a national code

Ref country code: NL

Ref legal event code: VDEP

Effective date: 20110309

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20110309

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20110610

Ref country code: ES

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20110620

Ref country code: LV

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20110309

Ref country code: SE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20110309

LTIE Lt: invalidation of european patent or patent extension

Effective date: 20110309

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CY

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20110309

Ref country code: FI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20110309

Ref country code: SI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20110309

Ref country code: AT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20110309

Ref country code: BG

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20110609

Ref country code: NL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20110309

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: BE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20110309

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: PT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20110711

Ref country code: EE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20110309

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CZ

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20110309

Ref country code: IS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20110709

Ref country code: RO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20110309

Ref country code: SK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20110309

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

26N No opposition filed

Effective date: 20111212

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: DK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20110309

Ref country code: PL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20110309

REG Reference to a national code

Ref country code: DE

Ref legal event code: R097

Ref document number: 602006020513

Country of ref document: DE

Effective date: 20111212

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20110309

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MC

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20120131

REG Reference to a national code

Ref country code: CH

Ref legal event code: PL

REG Reference to a national code

Ref country code: IE

Ref legal event code: MM4A

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CH

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20120131

Ref country code: LI

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20120131

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20120125

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: TR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20110309

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LU

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20120125

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: HU

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20060125

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 11

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 12

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 13

REG Reference to a national code

Ref country code: DE

Ref legal event code: R079

Ref document number: 602006020513

Country of ref document: DE

Free format text: PREVIOUS MAIN CLASS: H04N0005232000

Ipc: H04N0023600000

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: GB

Payment date: 20231218

Year of fee payment: 19

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: FR

Payment date: 20231214

Year of fee payment: 19

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: DE

Payment date: 20231215

Year of fee payment: 19