WO2011070514A1 - Auto-focus image system - Google Patents

Auto-focus image system

Info

Publication number
WO2011070514A1
WO2011070514A1 (PCT/IB2010/055649)
Authority
WO
WIPO (PCT)
Prior art keywords
gradient
edge
width
focus
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/IB2010/055649
Other languages
English (en)
French (fr)
Inventor
Hiok Nam Tay
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to JP2012542670A priority Critical patent/JP2013527483A/ja
Priority to SG2012041588A priority patent/SG181539A1/en
Priority to GB1209942.0A priority patent/GB2488482A/en
Priority to US12/962,649 priority patent/US8159600B2/en
Priority to AU2010329534A priority patent/AU2010329534A1/en
Priority to MX2012006469A priority patent/MX2012006469A/es
Priority to DE112011104256.6T priority patent/DE112011104256T5/de
Priority to JP2013542632A priority patent/JP2014504375A/ja
Priority to BR112013014240A priority patent/BR112013014240A2/pt
Priority to CA2820856A priority patent/CA2820856A1/en
Priority to SG2013044201A priority patent/SG190451A1/en
Priority to AU2011340207A priority patent/AU2011340207A1/en
Priority to GB1311741.1A priority patent/GB2501414A/en
Priority to PCT/IB2011/052515 priority patent/WO2012076992A1/en
Priority to EP11735545.3A priority patent/EP2649787A1/en
Priority to MX2013006517A priority patent/MX2013006517A/es
Publication of WO2011070514A1 publication Critical patent/WO2011070514A1/en
Priority to US13/491,590 priority patent/US20130044255A1/en
Anticipated expiration legal-status Critical
Priority to US13/492,825 priority patent/US20120314960A1/en
Priority to JP2016078083A priority patent/JP6179827B2/ja
Ceased legal-status Critical Current


Classifications

    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B7/00Mountings, adjusting means, or light-tight connections, for optical elements
    • G02B7/28Systems for automatic generation of focusing signals
    • G02B7/36Systems for automatic generation of focusing signals using image sharpness techniques, e.g. image processing techniques for generating autofocus signals
    • GPHYSICS
    • G03PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03BAPPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B13/00Viewfinders; Focusing aids for cameras; Means for focusing for cameras; Autofocus systems for cameras
    • G03B13/32Means for focusing
    • G03B13/34Power focusing
    • G03B13/36Autofocus systems
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/73Deblurring; Sharpening
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/63Control of cameras or camera modules by using electronic viewfinders
    • H04N23/631Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N23/632Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/63Control of cameras or camera modules by using electronic viewfinders
    • H04N23/633Control of cameras or camera modules by using electronic viewfinders for displaying additional information relating to control or operation of the camera
    • H04N23/635Region indicators; Field of view indicators
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/67Focus control based on electronic image sensor signals
    • H04N23/673Focus control based on electronic image sensor signals based on contrast or high frequency components of image signals, e.g. hill climbing method
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof
    • H04N23/84Camera processing pipelines; Components thereof for processing colour signals
    • H04N23/843Demosaicing, e.g. interpolating colour pixel values
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image

Definitions

  • The subject matter disclosed generally relates to auto-focusing electronically captured images.
  • Photographic equipment such as digital cameras and digital camcorders may contain electronic image sensors that capture light for processing into still or video images, respectively.
  • Electronic image sensors typically contain millions of light capturing elements such as photodiodes.
  • The process of auto-focusing includes the steps of capturing an image, processing the image to determine whether it is in focus, and, if not, generating a feedback signal that is used to vary the position of a focus lens (the "focus position").
  • There are two primary auto-focus techniques: one analyzes the contrast within the captured image, the other looks at a phase difference between a pair of images. In the contrast method, the intensity difference between adjacent pixels is analyzed, and the focus position is adjusted until a maximum contrast is detected.
  • the phase difference method includes splitting an incoming image into two images that are captured by separate image sensors. The two images are compared to determine a phase difference. The focus position is adjusted until the two images match.
  • the phase difference method requires additional parts such as a beam splitter and an extra image sensor.
  • the phase difference approach analyzes a relatively small band of fixed detection points. Having a small group of detection points is prone to error because noise may be superimposed onto one or more points. This technique is also ineffective if the detection points do not coincide with an image edge.
  • Because the phase difference method splits the light, the amount of light that impinges on a light sensor is cut in half or even more. This can be problematic in dim settings where the image light intensity is already low.
  • An auto-focus image system includes a pixel array coupled to a focus signal generator.
  • the pixel array captures an image that has at least one edge with a width.
  • the generator may eliminate an edge having an asymmetry of a gradient profile of an image signal.
  • the generator may also eliminate an edge that fails a template for an associated peaking of the gradient.
  • FIG. 1 is a schematic of an embodiment of an auto-focus image pickup apparatus
  • FIG. 2 is a schematic of an alternate embodiment of an auto-focus image pickup apparatus
  • FIG. 3 is a block diagram of a focus signal generator
  • FIG. 4 is an illustration of horizontal and vertical (Sobel) gradients and edge-direction tags on a 6-by-6 array of pixels;
  • FIG. 5 illustrates a calculation of edge width from a horizontal gradient
  • FIG. 6A, 6B are illustrations of a calculation of an edge width of a vertical edge having a slant angle φ;
  • FIG. 6C, 6D are illustrations of a calculation of an edge width of a horizontal edge having a slant angle φ;
  • FIG. 7 is a flowchart of a process to calculate a slant angle φ and correct an edge width for a vertical edge having a slant;
  • FIG. 8 is an illustration of a vertical concatenated edge
  • FIG. 9A is an illustration of a group of closely-packed vertical bars
  • FIG. 9B is a graph of an image signal across FIG. 9A;
  • FIG. 9C is a graph of a horizontal Sobel gradient across FIG. 9A;
  • FIG. 10 is a flowchart of a process to eliminate closely-packed edges having shallow depths of modulation;
  • FIG. 11 is a histogram of edge widths illustrating a range of edge widths for calculating a fine focus signal
  • FIG. 12 is an illustration of a scene
  • FIG. 13 is a graph illustrating a variation of a narrow-edge count during a focus scan of the scene of FIG. 12;
  • FIG. 14 is a graph illustrating a variation of a gross focus signal during a focus scan of the scene of FIG. 12;
  • FIG. 15 is a graph illustrating a variation of a fine focus signal across a range of focus positions;
  • FIG. 16 is an illustration of an apparatus displaying multiple objects in a scene and a selection mark over one of the objects;
  • FIG. 17 is a block diagram of an alternate embodiment of a focus signal generator;
  • FIG. 18 is a schematic of an alternate embodiment of an auto-focus image pickup apparatus
  • FIG. 19 is a schematic of an embodiment of an auto-focus image pickup apparatus having a main pixel array and an auxiliary pixel array;
  • FIG. 20 is a schematic of an alternate embodiment of an auto-focus image pickup apparatus having a main pixel array and an auxiliary pixel array
  • FIG. 21 is a schematic of an alternate embodiment of an auto-focus image pickup apparatus having a main pixel array and an auxiliary pixel array
  • FIG. 22 is an illustration of a variation of an edge width from a main pixel array and a variation of an edge width from an auxiliary pixel array at different focus positions;
  • FIG. 23A illustrates a gradient of an image signal across an edge
  • FIG. 23B illustrates a gradient of an image signal across a spurious edge
  • FIG. 23C illustrates a typical gradient profile whose peak is an interpolated peak
  • FIG. 23D illustrates five reference gradient profiles having different widths at one normalized gradient level that are proportional to their widths at another normalized gradient level;
  • FIG. 24A shows a first set of two pairs of min-max width constraints for the first and narrowest reference gradient profile;
  • FIG. 24B shows a second set of two pairs of min-max width constraints for the second and next wider reference gradient profile;
  • FIG. 24C shows a third set of two pairs of min-max width constraints for a third reference gradient profile;
  • FIG. 24D shows a fourth set of two pairs of min-max width constraints for a fourth reference gradient profile;
  • FIG. 25A illustrates a pair of min-max width constraints at a gradient level for selecting one of the reference gradient profiles and another pair of min-max width constraints at a different gradient level for detecting deviation from the selected reference gradient profile;
  • FIG. 25B illustrates that the template of FIG. 25A is selected for a gradient profile due to fitting the width constraints at one gradient level, and that the gradient profile also passes the other width constraint at the other gradient level;
  • FIG. 25C illustrates that the template of FIG. 25A is selected for a fat-top spurious gradient profile due to fitting the width constraints at one gradient level, but the profile violates the maximal width constraint at the other gradient level;
  • FIG. 25D illustrates that the template of FIG. 25A is selected for a fat-top spurious gradient profile due to fitting the width constraints at one gradient level, but the profile violates the minimal width constraint at the other gradient level;
  • FIG. 26 shows an interpolation of an expected good gradient profile width at one gradient level from a width at another gradient level, using the reference gradient profiles of FIG. 23D.
  • An auto-focus image system includes a pixel array coupled to a focus signal generator.
  • the pixel array captures an image that has at least one edge with a width.
  • the focus signal generator may generate a focus signal that is a function of the edge width and/or statistics of edge widths.
  • the generator may eliminate an edge having an asymmetry of a gradient of an image signal.
  • the generator may also eliminate an edge that fails a template for an associated peaking in the gradient.
  • A processor receives the focus signal and/or the statistics of edge widths and adjusts a focus position of a focus lens in response.
  • The edge width can be used to determine whether an image is in focus. A histogram of edge widths may be used to determine whether a particular image is focused or unfocused. A histogram with a large population of thin edge widths is indicative of a focused image.
  • Figure 1 shows an embodiment of an auto-focus image capture system 102.
  • The system 102 may be part of a digital still camera, but it is to be understood that the system can be embodied in any device that requires controlled focusing of an image.
  • The system 102 may include a focus lens 104, a pixel array and circuits 108, an A/D converter 110, a processor 112, a display 114, a memory card 116 and a drive motor/circuit 118. Light from a scene enters through the lens 104.
  • The pixel array and circuits 108 generate an analog signal that is converted to a digital signal by the A/D Converter 110.
  • The pixel array 108 may be covered by a mosaic pattern of color filters, e.g. the Bayer pattern, so that each pixel captures a single color.
  • The digital signal may be sent to the processor 112, which performs various processes, e.g. color interpolation, color correction, and image compression/decompression.
  • A color interpolation unit 148 may be implemented to perform color interpolation on the digital signal 130 to estimate the missing color signals at each pixel for the focus signal generator 120. Alternately, where the focus signal generator 120 and the processor 112 reside together within a camera controller 160, the focus signal generator 120 may input interpolated color images from the processor 112 on bus 146, as shown in Figure 2, or a single image signal derived from the original image signal generated by the A/D converter 110, for example a grayscale signal.
  • In addition, the focus signal generator 120 receives a group of control signals 132 from the processor 112 and may output signals 134 to the processor 112.
  • The output signals 134 may comprise one or more of the following: a focus signal 134, a narrow-edge count, and a set of numbers representing statistics of edge widths in the image.
  • The processor 112 may generate a focus control signal 136 that is sent to the drive motor/circuit 118 to control the focus lens 104.
  • a focused image is ultimately provided to the display 114 and/or stored in the memory card 116.
  • the algorithm(s) used to adjust a focus position may be performed by the processor 112.
  • the pixel array and circuits 108, A/D Converter 110, focus signal generator 120, and processor 112 may all reside within a package. Alternately, the pixel array and circuits 108, A/D Converter 110, and focus signal generator 120 may reside within a package 142 as image sensor 150 shown in Figure 1, separate from the processor 112. Alternately, the focus signal generator 120 and processor 112 may together reside within a package 144 as a camera controller 160 shown in Figure 2, separate from the pixel array 108 and A/D Converter 110.
  • Focus Signal Generator Figure 3 shows an embodiment of a focus signal generator 120 receiving image(s) from an image providing unit 202.
  • The image providing unit 202 may be the color interpolator 148 in Figure 1 or the processor 112 in Figure 2.
  • The focus signal generator 120 may comprise an edge detection & width measurement (EDWM) unit 206, a focus signal calculator 210, a length filter 212, and a width filter 209. It may further comprise a fine switch 220 controlled by input 'fine' 222.
  • The focus signal generator 120 may provide a narrow-edge count from the width filter 209 and a focus signal from the focus signal calculator 210, the focus signal being configurable between a fine focus signal and a gross focus signal, selectable by input 'fine' 222. Alternately, both the fine focus signal and the gross focus signal may be calculated and output as part of output signals 134.
  • The edge detection & width measurement unit 206 receives image(s) provided by the image providing unit 202.
  • Control signals, such as control signal 'fine' 222, may be provided by the processor 112 in signals 132.
  • The output signals 134 may be provided to the processor 112, which functions as a focus system controller that controls the focus position of the focus lens 104 to bring images of objects into sharp focus on the pixel array 108 by analyzing the output signals 134 to detect a sharp object in the image.
  • Various components of the focus signal generator 120 are described below.
  • The EDWM unit 206 may transform the input image such that the three signals of the image, red (R), green (G) and blue (B), are converted to a single image signal.
  • RGB values can be used to calculate a luminance or chrominance value, or a specific ratio of RGB values can be taken to form the single image signal.
  • The single image signal may then be processed by a Gaussian filter or any lowpass filter to smooth out pixel signal values among neighboring pixels to remove noise.
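  • For illustration, a minimal Python sketch of this preprocessing follows; the function names, the luma coefficients, and the sigma value are illustrative assumptions rather than part of the disclosure:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def to_single_signal(rgb):
    """Collapse an RGB image (H x W x 3) into one image signal; a
    luminance-style weighted sum is used here, though a chrominance
    value or a specific ratio of RGB values works equally well."""
    rgb = np.asarray(rgb, dtype=float)
    return 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]

def smooth(signal, sigma=1.0):
    """Gaussian lowpass over neighboring pixels to remove noise before
    the gradient calculation; sigma is an assumed value."""
    return gaussian_filter(signal, sigma)
```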
  • The focus signal generator 120, 120', 120" is not limited to a grayscale signal. It may operate on any one image signal to detect one or more edges in the image signal, or on any combination of the image signals, for example Y, R-G, or B-G. It may operate on each of the R, G and B image signals separately to detect edges, may form statistics of edge widths for each of the R, G, B image signals or any combination thereof, and may form a focus signal from statistics of edge widths from one or more image signals.
  • a gradient of the processed image is then calculated.
  • Gradients across the columns and the rows may be calculated to detect vertical and horizontal edges respectively, for example using a Sobel-X operator and a Sobel-Y operator, respectively.
  • Each pixel is tagged either a horizontal edge ('H') or a vertical edge ('V') if either the vertical or the horizontal gradient magnitude exceeds a predetermined lower limit ("elimination threshold"), e.g. 5 for an 8-bit image, or no edge if neither is true.
  • This lower limit eliminates spurious edges due to gentle shading or noise.
  • A pixel is tagged a vertical edge if its horizontal gradient magnitude exceeds its vertical gradient magnitude by a predetermined hysteresis amount or more, e.g. 2 for an 8-bit image, and vice versa.
  • If both gradient magnitudes differ by less than the hysteresis amount, the pixel gets a direction tag the same as that of its nearest neighbor that has a direction tag already determined. For example, if the image is scanned from left to right in each row and from row to row downwards, a sequence of inspection of neighboring pixels may be the pixel above first, the pixel above-left second, the pixel on the left third, and the pixel above-right last. Applying this hysteresis helps to ensure that adjacent pixels get similar tags if each of them has nearly identical horizontal and vertical gradient magnitudes.
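  • A minimal sketch of this tagging rule follows (a pure-Python loop for clarity; the array layout and helper names are assumptions). It applies the elimination threshold of 5 and the hysteresis of 2 from the text, falling back to the nearest previously tagged neighbor in the stated inspection order:

```python
import numpy as np
from scipy.ndimage import sobel

ELIM_THRESHOLD = 5  # elimination threshold for an 8-bit image, per the text
HYSTERESIS = 2      # hysteresis amount for an 8-bit image, per the text

def tag_directions(img):
    """Tag each pixel 'V' (vertical edge), 'H' (horizontal edge) or
    ' ' (no edge) from its Sobel gradients."""
    gx = sobel(img.astype(float), axis=1)  # horizontal gradient (Sobel-X)
    gy = sobel(img.astype(float), axis=0)  # vertical gradient (Sobel-Y)
    tags = np.full(img.shape, ' ', dtype='<U1')
    rows, cols = img.shape
    for y in range(rows):
        for x in range(cols):
            ax, ay = abs(gx[y, x]), abs(gy[y, x])
            if max(ax, ay) <= ELIM_THRESHOLD:
                continue  # gentle shading or noise: no edge
            if ax >= ay + HYSTERESIS:
                tags[y, x] = 'V'
            elif ay >= ax + HYSTERESIS:
                tags[y, x] = 'H'
            else:
                # nearly equal magnitudes: copy the nearest neighbor already
                # tagged, inspected above, above-left, left, then above-right
                for dy, dx in ((-1, 0), (-1, -1), (0, -1), (-1, 1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < rows and 0 <= nx < cols and tags[ny, nx] != ' ':
                        tags[y, x] = tags[ny, nx]
                        break
    return gx, gy, tags
```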
  • Figure 4 illustrates the result of tagging on a 6-by-6 array of horizontal and vertical gradients. In each cell, the horizontal gradient is in the upper-left, the vertical gradient is on the right, and the direction tag is at the bottom. Only pixels that have either horizontal or vertical gradient magnitude exceeding the elimination threshold qualify at this step as edge pixels and are printed.
  • The image, gradients and tags may be scanned horizontally for vertical edges, and vertically for horizontal edges.
  • Each group of consecutive pixels in a same row, having a same horizontal gradient polarity and all tagged for vertical edge, may be designated a vertical edge if no adjacent pixel on the left or right of the group satisfies the same.
  • Likewise, each group of consecutive pixels in a same column, having a same vertical gradient polarity and all tagged for horizontal edge, may be designated a horizontal edge if no adjacent pixel above or below the group satisfies the same.
  • In this manner, horizontal and vertical edges may be identified.
  • Edge Width Each edge may be refined by removing pixels whose gradient magnitudes are less than a given fraction of the peak gradient magnitude within the edge.
  • Figure 5 illustrates this step using a refinement threshold equal to one third of the edge's peak gradient magnitude, refining the edge width down to 3 from the original 9.
  • This edge refinement distinguishes the dominant gradient component that sets the apparent edge width, which dominates visual perception of the edge's sharpness, despite an image having multiple overlapping shadings that may cause gradients to decay gently over many pixels.
  • Edge width may be calculated in any one of known methods.
  • One method of calculating edge width is simply counting the number of pixels within an edge.
  • In Figure 5, a first fractional pixel position (2.4) is found between a first outer pixel (pixel 3) of a refined edge and the adjacent outside pixel (pixel 2) by an interpolation from the refinement threshold 304.
  • Likewise, a second fractional pixel position (5.5) is found between a second outer pixel (pixel 5) and its adjacent outside pixel (pixel 6). The edge width may then be taken as the difference between the two fractional pixel positions, 5.5 − 2.4 = 3.1.
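  • A minimal sketch of this refined width measurement follows; the function signature and data layout are assumptions:

```python
def refined_edge_width(grad, lo, hi, frac=1.0 / 3.0):
    """Edge width of the edge occupying pixel indices [lo, hi] of `grad`
    (one row of gradient magnitudes), measured between the two fractional
    pixel positions where the profile crosses the refinement threshold
    (`frac` of the peak).  For the profile of Figure 5 this yields
    5.5 - 2.4 = 3.1.  Assumes a pixel below the threshold exists just
    outside the edge on both sides."""
    seg = grad[lo:hi + 1]
    thresh = max(seg) * frac
    kept = [lo + i for i, g in enumerate(seg) if g >= thresh]
    first, last = kept[0], kept[-1]
    # linear interpolation across the threshold on each side
    left = first - (grad[first] - thresh) / (grad[first] - grad[first - 1])
    right = last + (grad[last] - thresh) / (grad[last] - grad[last + 1])
    return right - left
```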
  • In general, each edge may be assigned to one prescribed direction (e.g. vertical direction or horizontal direction).
  • In Figure 6A, a boundary (shaded band) is shown inclined at a slant angle φ with respect to the vertical dashed line, and a width a is shown measured in the perpendicular direction (i.e. horizontal direction).
  • a width b (as indicated in the drawing) measured in a direction perpendicular to the direction of the boundary (also direction of an edge that forms a part of the boundary) is more appropriate as the width of the boundary (and also of the edge) than width a.
  • However, the edge widths measured in one or the other of those prescribed directions are to be corrected by reducing them down to widths in the directions perpendicular to the respective edge directions.
  • the Edge Detection and Width Measurement Unit 206 performs such a correction on edge widths.
  • In Figure 6A, the measured width a is the length of the hypotenuse of a right-angled triangle that has its base (marked with width b) straddling across the shaded boundary perpendicularly.
  • The corrected width b may then be obtained from a projection of the measured width a onto the direction perpendicular to the edge direction. From elementary trigonometry, such a projection may be calculated as b = a·cos(φ), though the calculation need not be done to high precision.
  • The angle φ, or cos(φ) itself, may be found by any method known in the art for finding a direction of an edge in an image, or by the more accurate method described in the flowchart shown in Figure 7.
  • Each horizontal or vertical edge's edge width may be corrected for its slant from either the horizontal or vertical orientation (the prescribed directions).
  • Figures 6A, 6B illustrate a correction calculation for an edge width measured in the horizontal direction for a boundary (and hence the edges that form the boundary) that has a slant from the vertical line.
  • Figures 6C, 6D illustrate a correction calculation for an edge width measured in the vertical direction for a boundary (and hence the edges that form the boundary) that has a slant from the horizontal line.
  • The correction may be made by multiplying the edge width measured in a prescribed direction, such as a vertical direction or a horizontal direction, by a factor of cos(φ), where φ is the angle of slant from the prescribed direction.
  • Figure 7 shows a flowchart of a process to correct edge widths for slant, illustrated for vertical edges.
  • First, a slant angle φ is found. For each vertical edge, at step 502, locate the column position where the horizontal gradient magnitude peaks, and find the horizontal gradient x. At step 504, find where the vertical gradient magnitude peaks along the column position and within two pixels away, and find the vertical gradient y. At step 506, calculate the slant angle φ = tan⁻¹(y/x).
  • At step 506, the slant angle may alternately be found by looking up a lookup table.
  • At step 508, scale down the edge width by multiplying with cos(φ), or with an approximation thereto, as one skilled in the art usually does in practice.
  • A first modification of the process shown in Figure 7 is to substitute for step 506 and part of step 508 a lookup table that has entries for various combinations of input values of x and y. For each combination of x and y, the lookup table returns an edge width correction factor.
  • The edge width correction factor output by the lookup table may be an approximation to cos(tan⁻¹(y/x)) to within 20%, preferably within 5%. The edge width is then multiplied with this correction factor to produce a slant-corrected edge width.
  • A second modification is to calculate a quotient q = y/x between the vertical gradient y and the horizontal gradient x, then use q as input to a lookup table that has entries for various values of q. For each value of q, the lookup table returns an edge width correction factor. The edge width correction factor may be an approximation to cos(tan⁻¹(q)) to within 20%, preferably within 5%.
  • The values of x and y may be obtained in steps 502 to 506, but other methods may be employed instead.
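  • A minimal sketch of both the direct projection and the lookup-table variant follows; the table grid below is an illustrative choice whose entries approximate cos(tan⁻¹(q)):

```python
import math

def slant_corrected_width(width, x, y):
    """b = a * cos(phi) with phi = atan(y/x): project the width measured
    in the prescribed direction onto the direction perpendicular to the
    edge (x and y found per steps 502 and 504)."""
    return width * math.cos(math.atan2(abs(y), abs(x)))

# Lookup-table variant on q = y/x; entries approximate cos(atan(q)).
Q_TABLE = [(0.00, 1.000), (0.25, 0.970), (0.50, 0.894),
           (0.75, 0.800), (1.00, 0.707)]

def correction_factor(q):
    """Piecewise-linear interpolation of cos(atan(q)) from Q_TABLE."""
    q = min(abs(q), 1.0)
    for (q0, f0), (q1, f1) in zip(Q_TABLE, Q_TABLE[1:]):
        if q <= q1:
            return f0 + (f1 - f0) * (q - q0) / (q1 - q0)
    return Q_TABLE[-1][1]
```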
  • Adjacent edges may be prevented altogether from contributing to a focus signal, or may have their contributions attenuated, if their peak gradient magnitudes fall below a screen threshold, as described below.
  • Figures 9A, 9B, and 9C illustrate the problem that is being addressed.
  • Figure 9A illustrates three vertical white bars separated by two narrow black spaces each 2 pixels wide.
  • the middle white bar is a narrow bar 2 pixels wide.
  • Figure 9B shows an image signal plotted horizontally across the image in Figure 9A for each of a sharp image and a blurred image.
  • Figure 9C plots Sobel-x gradients of Figure 9B for the sharp image and blurred image.
  • The first edge (pixels 2-5) for the blurred image is wider than that for the sharp image, whereas the two narrowest edges (pixels 9 & 10, and pixels 11 & 12) have widths of two in both images.
  • In Figure 9B, the corresponding slopes at pixels 9 & 10, and pixels 11 & 12, each take two pixels to complete a transition.
  • The blurred image, however, has a shallower depth of modulation across these closely-packed edges.
  • A minimum edge gap may be required between two successive edges; the edge gap is in terms of a number of pixels, e.g. 1, or 2, or in between.
  • In addition, given that edges may have been eliminated due to having a peak gradient less than the elimination threshold, two successive edges having an identical gradient polarity and spaced no more than two times the minimum edge gap plus a sharp_edge_width apart may be used as an elimination condition as well, where sharp_edge_width is a number assigned to designate an edge width of a sharp edge.
  • The Edge Detection and Width Measurement Unit 206 may execute the following algorithm for eliminating closely-packed narrower edges, based on a screen threshold and a screen flag.
  • For each edge, the screen threshold and screen flag to be used for the immediate next edge of an opposite polarity are determined according to the process of the flowchart shown in Figure 10. Given the screen threshold and screen flag, an edge may be eliminated unless one of the following conditions is true: (a) the screen flag is off for this edge; (b) the peak gradient magnitude of the edge is not smaller than the screen threshold for this edge; (c) the edge width is not less than sharp_edge_width + 1, where a number has been assigned for sharp_edge_width to designate an edge width of a sharp edge, and where the "+1" may be varied to set a range of edge widths above sharp_edge_width within which edges may be eliminated if they fail (a) and (b).
  • For Figures 9A-9C, sharp_edge_width may be 2.
  • Figure 10 is a flowchart to determine a screen threshold and a screen flag for each edge. For vertical edges, assume scanning from left to right along a row, though this is not required. (For horizontal edges, assume scanning from top to bottom along a column, though this is not required.) A number is assigned for sharp_edge_width and may be 2 for the example shown in Figures 9A-9C.
  • Each edge is queried at step 720 as to whether its edge width is greater than or equal to one plus sharp_edge_width, the value of one being the minimum edge gap value used for this illustration, though a different value may be used, such as between 0.5 and 2.0. If yes, the edge is a wider edge, and step 706 follows to set the screen threshold for the immediate next edge that has an opposite polarity to beta times the peak gradient magnitude of the edge, beta being from 0.3 to 0.7, preferably 0.55; step 708 then follows to turn on the screen flag for the next edge, and the process proceeds to the next edge.
  • If no, step 730 follows to check whether the spacing from the prior edge of the same gradient polarity is greater than two times the minimum edge gap (or a different predetermined number) plus sharp_edge_width, and whether the immediate prior edge of an opposite polarity, if any, is more than the minimum edge gap away. If yes, step 710 follows to turn off the screen flag for the next edge. If no, keep the screen flag and the screen threshold for the next edge and proceed to the next edge.
  • Beta may be a predetermined fraction, or it may be a fraction calculated following a predetermined formula, such as a function of an edge width. In the latter case, beta may vary from one part of the image to another part.
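  • A simplified sketch of this screening follows. The edge-record layout is an assumption, and for brevity the threshold set by a wide edge is applied to the immediate next edge regardless of polarity, a simplification of the text's opposite-polarity rule:

```python
SHARP_EDGE_WIDTH = 2.0   # per the example of Figures 9A-9C
MIN_EDGE_GAP = 1.0       # minimum edge gap used in the illustration
BETA = 0.55              # preferred value from the text

def screen_edges(edges):
    """Mark closely-packed narrower edges for elimination, following the
    flow of Figure 10.  `edges` is an ordered list of dicts with 'pos',
    'width', 'polarity' (+1/-1) and 'peak' (peak gradient magnitude)."""
    threshold, flag = 0.0, False
    last_pos = {+1: None, -1: None}   # last edge position per polarity
    for e in edges:
        # test this edge against the state prepared by the previous edge:
        # keep it if the flag is off, its peak clears the threshold, or
        # it is wide enough in its own right
        e['keep'] = (not flag
                     or e['peak'] >= threshold
                     or e['width'] >= SHARP_EDGE_WIDTH + 1)
        if e['width'] >= 1 + SHARP_EDGE_WIDTH:          # steps 720/706/708
            threshold = BETA * e['peak']
            flag = True
        else:                                           # step 730
            prior_same = last_pos[e['polarity']]
            prior_opp = last_pos[-e['polarity']]
            if ((prior_same is None or
                 e['pos'] - prior_same > 2 * MIN_EDGE_GAP + SHARP_EDGE_WIDTH)
                    and (prior_opp is None or
                         e['pos'] - prior_opp > MIN_EDGE_GAP)):
                flag = False                            # step 710
            # otherwise keep flag and threshold for the next edge
        last_pos[e['polarity']] = e['pos']
    return edges
```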
  • Figures 23A and 23B illustrate a method in which the focus signal generator compares a gradient peaking template with a gradient profile about a peak. If a mismatch is detected, the focus signal generator reduces or eliminates altogether the associated edge and its edge width from entering a calculation for a focus signal or edge count or focus control.
  • A gradient peaking template may be specified in terms of a function, e.g. a difference or a ratio, of a width of the peaking gradient profile at an upper gradient magnitude and a width at a lower gradient magnitude, constraining one by the other.
  • In Figure 23A, an upper gradient magnitude of 0.85 and a lower gradient magnitude of 0.3 are indicated with dotted and dashed lines, respectively.
  • The width of the gradient profile at the upper gradient magnitude is about 1.5 pixels, being the distance between the two positions where the interpolated gradient profile intersects the upper gradient level.
  • At the lower gradient magnitude, the gradient profile width is about 4.5 pixels.
  • The difference between the two widths at the upper and lower gradient levels is 3.0 pixels.
  • The ratio between the two widths at the upper and lower gradient levels is 1 to 3.
  • In Figure 23B, on the other hand, the widths are about 3.2 pixels and 5.5 pixels, respectively, giving a difference of 2.3 pixels and a ratio of 1 to 1.7, clearly much unlike the genuine gradient profile in Figure 23A.
  • A template may thus be specified as a constraint that the difference in the gradient profile's width between the upper and lower gradient levels shall lie between 2.5 and 3.5 pixels, and/or that the ratio shall lie between 1-to-2.6 and 1-to-3.45, failing which the edge associated with the peaking gradient profile may be rejected or de-emphasized.
  • Under this template, the edge associated with the peaking gradient profile in Figure 23B is rejected or de-emphasized, because neither the difference nor the ratio lies within the acceptance ranges of the constraints of the gradient peaking template.
  • In essence, the gradient peaking template's constraint stipulates that the width of a good gradient profile at a first gradient level is dependent on the width of the gradient profile at another gradient level, i.e. there is a definite relationship that constrains one by the other. The template may be expressed as such a relationship.
  • The origin of this constraint is explained with regard to Figure 23D.
  • In Figure 23D, five good gradient profiles of different widths are shown, all normalized to a peak gradient level of 1.
  • L1 and L2 are two different gradient levels that slice through the gradient profiles.
  • The gradient profiles have widths W0a to W4a, from the smallest to the largest, at the lower gradient level L2.
  • At the upper gradient level L1, the widths follow the same order, from W0b for the narrowest to W4b for the widest gradient profile.
  • Figure 26 shows their relationship: the upper widths are clearly proportional to the lower widths. The relationship is predictable and can be described by a simple function, such as a low-degree polynomial.
  • The upper width can be found by multiplying the lower width with the function, which is dependent on the lower width, plus a tolerance to account for errors due to noise and other imperfections.
  • The function may be implemented as a lookup table that stores a small number of lower widths and their corresponding upper widths, with interpolation between entries.
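  • A minimal sketch of such a lookup-table template check follows; the reference widths and the tolerance are invented numbers for illustration only:

```python
import bisect

# Reference widths at the lower gradient level L2 and the upper level L1,
# as would be measured from profiles like those of Figure 23D.
LOWER_REF = [2.0, 3.0, 4.5, 6.0, 8.0]
UPPER_REF = [0.7, 1.0, 1.5, 2.0, 2.7]
TOLERANCE = 0.4  # allowance for noise; also an assumed value

def passes_peaking_template(w_lower, w_upper):
    """Interpolate the expected upper-level width from the measured
    lower-level width, then accept the profile only if the measured
    upper-level width matches it within the tolerance."""
    i = bisect.bisect_left(LOWER_REF, w_lower)
    if i == 0 or i == len(LOWER_REF):
        return False  # outside the table's range of good profiles
    t = (w_lower - LOWER_REF[i - 1]) / (LOWER_REF[i] - LOWER_REF[i - 1])
    expected = UPPER_REF[i - 1] + t * (UPPER_REF[i] - UPPER_REF[i - 1])
    return abs(w_upper - expected) <= TOLERANCE
```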
  • An alternate method to specify a gradient peaking template is to find a difference or ratio between the number of pixels above an upper gradient magnitude and the number above a lower gradient magnitude, each gradient magnitude being a predetermined fraction of the peak gradient value.
  • Two different templates may be specified for two different gradient profile widths.
  • For example, a gradient profile having a width of 6 at a gradient level at 50% of the peak gradient value may use a different template than another having a width of 3.
  • In other words, the predetermined fraction can be a function of a width of the gradient profile.
  • Figures 24A to 24D illustrate an alternative method to implement such a constraint.
  • In each figure there are two pairs of maximum and minimum width constraints: one pair for the upper gradient level L1, the other pair for the lower gradient level L2.
  • The sideways-pointing shaded triangles show the limits.
  • For each set of two pairs of constraints, only one among the five reference gradient profiles shown can satisfy the constraints within the set.
  • In Figures 25A to 25D, it is shown that each gradient profile being tested is assigned a set such that the gradient profile under test meets the max-min constraints at one of the two gradient levels, say the lower gradient level L2.
  • Where the edge has a slant, as described above, the edge width should be corrected for the slant, and a procedure was given to perform this correction to shrink the widths. Likewise, the width measured from the gradient profile should be corrected for a slant before further action, such as performing the lookup-table lookup and interpolation.
  • Alternately, the widths from the lookup table may be scaled up instead.
  • The above detection of, and solution for, spurious edges may be performed in the Edge Detection & Width Measurement Unit 206.
  • Length Filter The length filter 212 creates a preference for edges that each connect to one or more edges of a similar orientation.
  • A concatenated edge is less likely to be due to noise, compared with an isolated edge that does not touch any other edge of similar orientation.
  • the probability of the group being due to noise falls off exponentially as the number of edges within the group increases, and far faster than linearly.
  • This property can be harnessed to reject noise, especially under dimly-lit or short-exposure situations where the signal-to-noise ratio is weak, e.g. less than 10, within the image or within the region of interest.
  • The preference may be implemented in any reasonable method. The several ways described below are merely examples.
  • A first method is to eliminate edges that belong to vertical/horizontal concatenated edges having lengths less than a concatenated length threshold.
  • The concatenated length threshold may be larger when the region of interest is dimmer. For example, the concatenated length threshold may start as small as 2 but increase to 8 as the signal-to-noise ratio within the region of interest drops to 5.
  • The concatenated length threshold may be provided by the processor 112, 112', 112", for example through a 'length command' signal, shown in Figure 3, as part of signals 132. Alternately, the threshold may be calculated according to a formula on the focus signal generator.
  • a second method is to provide a length-weight in the length filter 212 for each edge and apply the length- weight to a calculation of focus signal in the focus signal calculator 210.
  • An edge that is part of a longer concatenated edge receives a larger weight than one that is part of a shorter concatenated edge.
  • the length-weight may be a square of the length of the concatenated edge.
  • a contribution of each edge towards the focus signal may be multiplied by a factor A/B before summing all contributions to form the focus signal, where B is a sum of the length-weights of all edges that enter the focus signal calculation, and A is a length-weight of the edge.
  • The edge-width histogram, which may be output as part of signals 134, may have edges that are members of longer concatenated edges contribute more to the bins corresponding to their respective edge widths, instead of all edges contributing the same amount, e.g. +1.
  • Thus, for example, each edge may contribute A/C, where C is an average value of A across the edges.
  • Similarly, the narrow-edge count may have edges that are members of longer concatenated edges contribute more.
  • Thus, for example, the contribution from each edge may be multiplied by A/D, where D is an average of A among edges that are counted in the narrow-edge count.
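  • A sketch of the A/B weighting follows; the edge-record layout, the choice of length-weight A = length², and taking each edge's raw contribution to be its width are assumptions within the options the text allows:

```python
def length_weighted_focus_signal(edges):
    """Form a focus signal in which each edge's contribution is scaled by
    A/B: A is the edge's length-weight (here the square of its concatenated
    edge's length, one choice the text permits) and B is the sum of the
    length-weights over all contributing edges."""
    weights = [e['concat_length'] ** 2 for e in edges]   # A, per edge
    B = float(sum(weights))
    return sum(A * e['width'] for A, e in zip(weights, edges)) / B
```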
  • A vertical (horizontal) concatenated edge may be defined as a group of N vertical (horizontal) edges in which, with the exception of the top (leftmost) and the bottom (rightmost) edges, each edge touches two other vertical (horizontal) edges, one above (to the left of) itself and the other below (to the right of) itself.
  • Figure 8 illustrates a vertical concatenated edge and its length.
  • cells R2C3 and R2C4 form a first vertical edge
  • cells R3C3, R3C4, and R3C5 together form a second vertical edge
  • cells R4C4 and R4C5 together form a third vertical edge.
  • the first and the third vertical edges each touches only one other vertical edge
  • the second vertical edge touches two other vertical edges.
  • the first, second and third vertical edges together form a vertical concatenated edge having a length of 3.
  • Where a vertical (horizontal) concatenated edge has two or more branches, i.e. has two edges in a row (column), the length may be defined as the total number of edges within the concatenated edge. Alternately, the length may be defined as the vertical (horizontal) distance from the topmost (leftmost) edge to the bottommost (rightmost) edge therein plus one.
  • Whatever the definition, a length for a concatenated edge shall have the property that the length is proportional to the number of member edges within the concatenated edge, at least up to three. This is to be consistent with the previously stated reasoning that the more edges a concatenated edge has, the less likely it is to be due to noise.
  • The length filter 212 may de-emphasize or eliminate, and thus, broadly speaking, discriminate against, an edge having a concatenated length of one.
  • The length filter 212 may discriminate against an edge having a concatenated length of two.
  • The length filter 212 may discriminate against an edge having a concatenated length of three, to further reduce an influence of noise.
  • the length filter 212 may do any one of these actions under a command from the processor.
  • The Length Filter 212 may be inserted before the focus signal calculator 210, wherein the edges processed by the Length Filter 212 are those that pass through the width filter 209, depending on the 'fine' signal.
  • Alternately, the fine switch 220 may be removed so that the focus signal calculation unit 210 receives a first set of data not filtered by the width filter 209 and a second set so filtered, and for each calculates a different focus signal: the gross focus signal for the former, the fine focus signal for the latter, and outputs both to the processor 112, 112'.
  • FIG. 11 plots a histogram of edge widths, i.e. a graph of edge counts against edge widths.
  • At the edge width of 2, i.e. the aforementioned sharp_edge_width, there is a peak, indicating a presence of sharp edges in the image.
  • At edge widths of 4 and 5, however, there are peaks indicating edges that are blurred, possibly due to the corresponding imaged objects being out of focus, being at a different distance away from the focus lens than those objects that give rise to the sharp edges.
  • edges whose widths lie outside a predetermined range (“narrow- edge range”) may be de-emphasized using the Width Filter 209.
  • The Width Filter 209 may create a lesser weight for edge widths outside the narrow-edge range for use in the focus signal calculation. For example, edge widths within the narrow-edge range may be assigned a weight of 1.0, edge widths more than +1 to the right of the upper limit 840 a weight of 0, and edge widths in between weights between 0 and 1.0, falling monotonically with edge width. Alternately, the Width Filter 209 may prevent such edges from entering the focus signal calculation altogether. Appropriate upper and lower limits 830, 840 depend on several factors, including crosstalk in the pixel array 108 and the interpolation method used to generate missing colors for the image received by the focus signal generator 120.
  • The upper and lower limits 830, 840 and the parameter sharp_edge_width may be determined for the image pickup apparatus 102, 102' by capturing images of various degrees of sharpness and inspecting the edge width histograms. For example, if a sharp image has a peak at edge width of 2, an appropriate lower and upper limit may be 1.5 and 3, respectively, and the sharp_edge_width may be set to 2.0.
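  • A sketch of such a weighting function follows, using the example limits of 1.5 and 3 above; the linear fall-off and the handling below the lower limit are assumptions, since the text only pins down the behavior above the upper limit:

```python
def width_weight(e, lower=1.5, upper=3.0):
    """Weight an edge width for the focus signal calculation: 1.0 inside
    the narrow-edge range, 0 beyond upper + 1, and a monotonic fall in
    between."""
    if lower <= e <= upper:
        return 1.0
    if e > upper + 1.0 or e < lower:
        return 0.0
    return upper + 1.0 - e  # falls linearly from 1 at `upper` to 0 at `upper + 1`
```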
  • The lower and upper limits and sharp_edge_width may be determined as above and provided to the focus signal generator 120, 120', 120" by the processor 112, 112', 112". When the 'fine command' is ON, the fine focus signal thus calculated de-emphasizes edge widths outside the narrow-edge range.
  • In addition, the Width Filter 209 may calculate a total count of the edges whose edge widths fall within the narrow-edge range and output it as part of output signals 134. The Narrow-Edge Count may be input to and used by the focus system controller (processor 112) to detect a presence of a sharp image and/or for initiating tracking.
  • the focus signal calculator 210 receives edge widths and outputs a focus signal.
  • The focus signal may be calculated as a weighted average of the edge widths, where the weight at each edge width is the edge count for that edge width multiplied by the edge width itself, i.e. w_i = c_i·e_i; the focus signal is then Σ(w_i·e_i) / Σ(w_i).
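  • In code, this is a direct transcription of the formula above (the histogram representation as parallel lists is an assumption):

```python
def focus_signal(widths, counts):
    """Weighted average of edge widths with weights w_i = c_i * e_i:
    focus = sum(w_i * e_i) / sum(w_i), where e_i is an edge width and
    c_i its count in the edge-width histogram."""
    num = sum(c * e * e for e, c in zip(widths, counts))
    den = sum(c * e for e, c in zip(widths, counts))
    return num / den
```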
  • Assuming the control signal 'fine' is OFF and 'exclude' is OFF, the focus signal may be a value close to 5.0, indicating that there are substantial details of the image that are out of focus. Turning ON the fine switch 220 allows the focus signal to respond more to objects that are slightly blurred and less to those that are completely blurred.
  • When the fine switch 220 is ON, we shall refer to the focus signal as a fine focus signal, whereas when the fine switch 220 is OFF, a gross focus signal.
  • The emphasis expressed by the Length Filter 212 may be incorporated into the focus signal in one of several ways, such as eliminating an edge that is de-emphasized from entering the focus signal calculation, or reducing the weight of the edge's contribution towards the count c_i of the corresponding edge width bin.
  • Figure 15 sketches a response of the fine focus signal to an adjustment of the focus position in the vicinity of where an object is in sharp focus.
  • The fine focus signal reaches a minimum value, approximately at sharp_edge_width, where the focus position brings an image into sharp focus, and increases otherwise.
  • The fine focus signal may be used for tracking objects already in focus or very nearly so. For moving objects, the fine focus signal allows the focus control system to keep the objects in sharp focus even if the focus distance keeps changing.
  • The fine focus signal may also be used to acquire a sharp focus ("acquisition") of an object that is not yet in sharp focus but close enough that the object gives rise to edges whose widths fall within the narrow-edge range. The edge width histogram then exhibits a peak at an edge width corresponding to the object, away from sharp_edge_width, resulting in the fine focus signal being larger than sharp_edge_width.
  • The focus control system may respond by adjusting the focus position to bring the fine focus signal value towards sharp_edge_width, thus centering the peak of edge width due to the object at the edge width value equal to sharp_edge_width.
  • Figures 12-16 illustrate how the narrow-edge count, gross focus signal, and fine focus signal may be used to perform focus control to achieve sharp images.
  • Figure 12 illustrates an outdoor scene having 3 groups of objects at different focus distances: "person" in the foreground, "mountain, sun, and horizon" in the background, and "car" in between.
  • Figure 13 is an illustration of the narrow-edge count plotted against time when the focus position of the focus lens 104 sweeps from far to near for the scene of Figure 12.
  • the narrow-edge count peaks when the focus position brings an object into a sharp image on the pixel array 108.
  • the narrow-edge count plot exhibits 3 peaks, one each for "mountain, sun, and horizon", “car”, and “person”, in this order, during the sweep .
  • Figure 14 shows the gross focus signal plotted against time.
  • The gross focus signal exhibits a minimum when the focus position is near each of the 3 focus positions where the narrow-edge count peaks. However, at each minimum, the gross focus signal is not at the sharp edge width level, which is 2.0 in this example, due to bigger edge widths contributed by the other objects that are out of focus.
  • Figure 15 illustrates the fine focus signal plotted against the focus position in the vicinity of the sharp focus position for "car” in the scene of Figure 12.
  • The fine focus signal achieves essentially the sharp edge width, which is 2 in this example, despite the presence of blurred objects ("person" and "mountain, sun, and horizon").
  • Referring again to Figure 11, where two peaks at widths of 4 and 5 are contributed by those two groups of blurred objects, this can be understood as the Width Filter 209 having reduced the weight of, or eliminated altogether, the contributions from edge widths outside the narrow-edge range.
  • a focus control system may use the gross focus signal to search for the nearest sharp focus position in a search mode. It can move the focus position away from the current focus position to determine whether the gross focus signal increases or decreases. For example, if the gross focus signal increases (decreases) when the focus position moves inwards (outwards) , there is a sharp focus position farther from the current focus position.
  • The processor 112, 112', 112" can then provide a focus drive signal to move the focus lens 104 in the direction towards the adjacent sharp focus position.
  • A focus control system may use the fine focus signal to track an object already in sharp focus to maintain the corresponding image sharp (thus a "tracking mode").
  • any shift in the fine focus signal level immediately informs the processor 112, 112', 112" of a change in the focus distance of the object.
  • the processor 112, 112', 112" can then determine a direction and cause the focus lens 104 to move to bring the fine focus signal level back to the "locked" level.
  • the image pickup apparatus 102, 103, 103', 103" is able to track a moving object.
  • a focus control system may use narrow- edge count to trigger a change from a search mode to a tracking mode.
  • the focus control system uses the fine focus signal to "lock" the object.
  • the focus control system may use the gross focus signal to identify the direction to move and regulate the speed of movement of the lens.
  • Near each sharp focus position, the narrow-edge count peaks sharply.
  • the processor 112, 112', 112" may switch into the tracking mode and use the fine focus signal for focus position control upon detection of a sharp rise in the narrow-edge count or a peaking or both.
  • A threshold, which may be different for each different sharp focus position, may be assigned to each group of objects found from an end-to-end focus position "scan"; subsequently, when the narrow-edge count surpasses this threshold, the corresponding group of objects is detected.
  • an end-to-end focus position scan can return a list of maximum counts, one maximum count for each peaking of the narrow-edge count.
  • a list of thresholds may be generated from the list of maximum counts, for example by taking 50% of the maximum counts.
  • Figure 16 illustrates an image pickup apparatus 102 having a display 114, an input device 107 comprising buttons, and selection marker 1920 highlighted in the display 114.
  • a user can create, shape and maneuver the selection marker 1920 using input device 107.
  • input device 107 may comprise a touch-screen overlaying the display 114 to detect positions of touches or strokes on the display 114.
  • Input device 107 and processor 112, 112', 112" or a separate dedicated controller (not shown) for the input device 107 may determine the selection region.
  • the parameters for describing the selection region may be transmitted to the focus signal generator 120, 120', 120" over bus 132 (or internally within the processor 112 in the case where focus signal generator 120 is part of the processor 112) .
  • The focus signal generator 120 may limit the focus signal calculation or the narrow-edge count or both to edges within the selection region described by said parameters, or de-emphasize edges outside the selection region. Doing so can de-emphasize unintended objects from the focus signal, and then even the gross focus signal will exhibit a single minimum and a minimum level within 1.0 or less of the sharp edge width.
  • Figure 17 shows an alternate embodiment of a focus signal generator 120'.
  • Focus signal generator 120' outputs statistics of edges and edge widths.
  • The edge-width statistics that the generator 120' outputs may be one or more of the following: an edge-width histogram comprising edge counts at different edge widths; an edge width where the edge width count reaches a maximum; a set of coefficients representing a spline function that approximates edge counts at different edge widths; and any data that can represent a function of edge width.
  • The Census Unit 240 may receive data computed in one or more of the other units within the focus signal generator 120' to calculate statistics of edge widths.
  • the focus signal generator 120' may output a signal that has an indication of a distribution of edge widths.
  • The edge-width statistics thus provided in signals 134 to an alternate embodiment of processor 112' in an alternate auto-focus image pickup apparatus 102' may be used by the processor 112' to compute a gross and/or fine focus signal and a narrow-edge count in accordance with the methods discussed above or equivalents thereof.
  • any data computed in the focus signal generator 120' may be output to the processor 112' as part of the output signals 134.
  • the processor 112' may internally generate a focus signal and/or a narrow-edge count in addition to the functions included in the processor 112 of Figure 1.
  • The pixel array 108, A/D converter 110, color interpolator 148, and generator 120' may reside within a package 142, together comprising an image sensor 150', separate from the processor 112'.
  • Yet another embodiment of a focus signal generator may add a census unit 240 to the generator 120 of Figure 1 and output one or more statistics calculated in such a generator to the processor 112.
  • Auxiliary Pixel Array Figure 19 shows an alternate embodiment of an auto-focus image pickup system 103.
  • the system 103 may include a partial mirror 2850, a full mirror 2852, an optical lowpass filter 2840, a main pixel array 2808, and a main A/D Converter 2810.
  • the partial mirror 2850 may split the incoming light beam into a first split beam and a second split beam, one transmitted, the other reflected.
  • the first split beam may further pass through the optical lowpass filter 2840 before finally reaching the main pixel array 2808, which detects the first split beam and converts to analog signals.
  • the second split beam may be reflected by the full mirror 2852 before finally reaching the auxiliary pixel array 108", which corresponds to the pixel array 108 in system 102 shown in Figure 1.
  • The ratio of light intensity of the first beam to the second beam may be 1-to-1 or greater than 1-to-1.
  • For example, the ratio may be 4-to-1.
  • the main pixel array 2808 may be covered by a color filter array of a color mosaic pattern, e.g. the Bayer pattern.
  • The optical lowpass filter 2840 prevents the smallest light spot focused on the pixel array 2808 from being so small as to cause aliasing.
  • Aliasing can give rise to color moire artifacts after a color interpolation.
  • For example, the smallest diameter of a circle encircling 84% of the visible light power of a light spot on the main pixel array 2808 ("smallest main diameter") may be kept larger than one and a half pixel widths but less than two pixel widths by use of the optical lowpass filter.
  • The optical lowpass filter 2840 may be selected to achieve this.
  • The auxiliary pixel array 108" may comprise one or more arrays of photodetectors. Each of the arrays may or may not be covered by a color filter array of a color mosaic pattern.
  • The array(s) in auxiliary pixel array 108" output image(s) in analog signals that are converted to digital signals 130 by the A/D Converter 110.
  • the images are sent to the focus signal generator 120.
  • A color interpolator 148 may generate the missing colors for images generated from pixels covered by color filters.
  • Where the auxiliary pixel array 108" comprises multiple arrays of photodetectors, each array may capture a sub-image that corresponds to a portion of the image captured by the main pixel array 2808.
  • the multiple arrays may be physically apart by more than a hundred pixel widths, and may or may not share a semiconductor substrate. Where the pixel arrays within auxiliary pixel array 108" do not share a semiconductor substrate, they may be housed together in a package (not shown) .
  • Main A/D Converter 2810 converts analog signals from the Main Pixel Array 2808 into digital main image data signal 2830, which is sent to the processor 112, where the image captured on the Main Pixel Array 2808 may receive image processing such as color interpolation, color correction, and image compression/decompression and finally be stored in memory card 116.
  • An array of photodetectors in the auxiliary pixel array 108" may have a pixel width ("auxiliary pixel width") that is smaller than a pixel width of the main pixel array 2808 ("main pixel width”) .
  • The auxiliary pixel width may be as small as half of the main pixel width. If an auxiliary pixel is covered by a color filter and the auxiliary pixel width is less than 1.3 times the smallest spot of visible light without optical lowpass filtering, a second optical lowpass filter may be inserted in front of the auxiliary array 108" to increase the smallest diameter on the auxiliary pixel array 108" ("smallest auxiliary diameter") to between 1.3 and 2 times as large, but still smaller than the smallest main diameter.
  • the slight moire in the auxiliary image is not an issue as the auxiliary image is not presented to the user as the final captured image.
  • Figure 22 illustrates how edge widths may vary about a sharp focus position for main images from the main pixel array 2808 (solid curve) and auxiliary images from the auxiliary pixel array 108" (dashed curve).
  • the auxiliary images give sharper slopes even as the main images reach the targeted sharp edge width of 2.
  • the auxiliary image is permitted to reach below the targeted sharp edge width, since moire due to aliasing is not as critical in the auxiliary image, as it is not presented to the user as a final image. This helps to sharpen the slope below and above the sharp edge width.
  • the sharper slope is also helped by the auxiliary pixel width being smaller than the main pixel width.
  • The shaded region in Figure 22 indicates a good region within which to control the focus position to keep the main image in sharp focus. A change in focus position away from this region causes the edge width in the auxiliary images to rise before the main image loses sharpness, providing an early feedback indication.
  • A linear feedback control system may be employed to target the middle auxiliary edge width value within the shaded region and to use the edge widths generated from the auxiliary images as the feedback signal.
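  • A minimal sketch of one step of such a loop follows; the gain, the sign convention, and the function name are assumptions, and a real system would be calibrated to the particular lens drive:

```python
def focus_feedback_step(aux_edge_width, target_width, gain=0.5):
    """One iteration of a linear feedback loop: the focus-motor command
    is proportional to the deviation of the auxiliary-image edge width
    from the target (the middle of the good region in Figure 22)."""
    error = aux_edge_width - target_width
    return -gain * error  # signed focus-motor command
```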
  • the auxiliary pixel array 108", A/D Converter 110, focus signal generator 120 together may be housed in a package 142 and constitute an auxiliary sensor 150.
  • the auxiliary sensor 150 may further comprise a color interpolator 148.
  • Figure 48 shows an alternate embodiment of auto-focus image pickup apparatus 103', similar to apparatus 103 except that focus signal generator 120' replaces focus signal generator 120.
  • the auxiliary pixel array 108", A/D Converter 110, and focus signal generator 120' together may be housed in a package 142 and constitute an auxiliary sensor 150'.
  • the auxiliary sensor 150' may further comprise a color interpolator 148.
  • Figure 49 shows an alternate embodiment of auto-focus image pickup apparatus 103".
  • the focus signal generator 120 and the processor 112" may be housed in a package 144 as a camera controller, separate from the auxiliary pixel array 108".
  • the processor 112" is similar to processor 112 except that processor 112" receives images from the main pixel array 2808 as well as the auxiliary pixel array 108".
  • the processor 112" may perform a color interpolation, a color correction, a
  • the processor 112" may perform color interpolation on images received on signal 130 for pixels that are covered by color filters in the auxiliary pixel array 108" and send the color interpolated images to the focus signal generator 120 on signal 146.
  • the auto-focus image pickup system 102, 102', 103, 103', 103" may include a computer program storage medium (not shown) that comprises instructions that cause the processor 112, 112', 112", respectively, and/or the focus signal generator 120, 120' to perform one or more of the functions described herein.
  • the instructions may cause the processor 112 or the generator 120' to perform a slant correction for an edge width in accordance with the flowchart of Figure 7.
  • the instructions may cause the processor 112' or the generator 120 to perform an edge width filtering in accordance with the above description for Width Filter 209.
  • alternately, the processor 112, 112' or the generator 120, 120' may be configured to have a combination of firmware and hardware, or a pure hardware implementation, for one or more of the functions described herein; for example, a slant correction may be performed in pure hardware and a length filter 212 performed according to instructions in a firmware.
  • any non-volatile storage medium may be used instead, e.g. a hard disk drive, wherein images stored therein are accessible by a user and may be copied to a different location outside and away from the system 102.
  • One or more parameters for use in the system may be stored in a non-volatile memory in a device within the system.
  • the device may be a flash memory device, the processor, the image sensor, or the focus signal generator as a separate device from those.
  • One or more formulae for use in the system, for example for calculating the sharp_edge_width, the concatenated length threshold, or for calculating beta, may likewise be stored as parameters or as computer-executable instructions in a non-volatile memory in one or more of those devices (see the storage sketch after this list). While certain exemplary embodiments have been described and shown in the accompanying drawings, it is to be understood that such embodiments are merely illustrative of and not restrictive on the broad invention, and that this invention is not limited to the specific constructions and arrangements shown and described, since various other modifications may occur to those ordinarily skilled in the art.
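
The optical lowpass filter sizing rule above reduces to simple arithmetic. The following is a minimal sketch, not taken from the patent: it assumes the spot diameters and pixel widths share one length unit (e.g. micrometres), that the 1.3x-2x target band is measured against the auxiliary pixel width, and that the function names are illustrative only.

    # Minimal sketch of the auxiliary-array lowpass-filter sizing rule above.
    # Assumptions (not from the patent text): one common length unit, and the
    # 1.3x-2x band is measured against the auxiliary pixel width.

    def needs_second_olpf(aux_pixel_width: float,
                          smallest_spot: float,
                          color_filtered: bool) -> bool:
        """True when a second optical lowpass filter should be inserted:
        the pixel is color-filtered and the unfiltered spot is smaller
        than 1.3 times the auxiliary pixel width."""
        return color_filtered and smallest_spot < 1.3 * aux_pixel_width

    def target_aux_diameter(aux_pixel_width: float,
                            smallest_main_diameter: float) -> tuple[float, float]:
        """Target band for the smallest auxiliary diameter after filtering:
        1.3 to 2 times the auxiliary pixel width, capped below the
        smallest main diameter."""
        return (1.3 * aux_pixel_width,
                min(2.0 * aux_pixel_width, smallest_main_diameter))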
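The linear feedback control mentioned above can be sketched as a plain proportional loop. The patent text states only that the controller targets the middle auxiliary edge-width value within the shaded region and uses auxiliary edge widths as feedback; the gain, the stopping tolerance, and the measure_aux_edge_width/move_focus callables below are hypothetical stand-ins for the focus signal generator output and the lens driver.

    # Minimal proportional-feedback sketch (not the patent's implementation).
    # measure_aux_edge_width: hypothetical callable returning the edge width,
    #   in pixels, computed from the latest auxiliary image.
    # move_focus: hypothetical callable that nudges the focus position.
    # slope_sign: sign of d(edge width)/d(focus position) on the current side
    #   of sharp focus, e.g. established by a short initial sweep.

    def focus_servo(measure_aux_edge_width, move_focus,
                    target_width: float, tolerance: float,
                    gain: float = 0.5, slope_sign: float = 1.0,
                    max_iters: int = 100) -> float:
        """Drive the focus position until the auxiliary edge width sits
        within `tolerance` of `target_width` (the middle of the band)."""
        width = measure_aux_edge_width()
        for _ in range(max_iters):
            error = width - target_width
            if abs(error) <= tolerance:
                break                               # inside the good region
            move_focus(-gain * slope_sign * error)  # linear corrective step
            width = measure_aux_edge_width()
        return width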
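As one illustration of keeping such parameters in non-volatile memory, the sketch below persists the named quantities to a file standing in for a device's non-volatile store. The parameter names follow the text and the sharp edge width of 2 matches the target named above; the remaining values and the JSON encoding are placeholders, not anything the patent specifies.

    # Minimal sketch: a JSON file stands in for device non-volatile memory.
    import json
    from pathlib import Path

    NV_STORE = Path("nv_params.json")  # hypothetical backing store

    DEFAULTS = {
        "sharp_edge_width": 2.0,             # targeted sharp edge width (pixels)
        "concatenated_length_threshold": 5,  # placeholder value
        "beta": 0.5,                         # placeholder value
    }

    def save_params(params: dict) -> None:
        """Write the tuning parameters to the non-volatile store."""
        NV_STORE.write_text(json.dumps(params, indent=2))

    def load_params() -> dict:
        """Read parameters back, falling back to defaults for missing keys."""
        if NV_STORE.exists():
            return {**DEFAULTS, **json.loads(NV_STORE.read_text())}
        return dict(DEFAULTS)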

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Optics & Photonics (AREA)
  • Human Computer Interaction (AREA)
  • Theoretical Computer Science (AREA)
  • Automatic Focus Adjustment (AREA)
  • Studio Devices (AREA)
  • Image Analysis (AREA)
  • Focusing (AREA)
  • Image Processing (AREA)
PCT/IB2010/055649 2009-12-07 2010-12-07 Auto-focus image system Ceased WO2011070514A1 (en)

Priority Applications (19)

Application Number Priority Date Filing Date Title
JP2012542670A JP2013527483A (ja) 2009-12-07 2010-12-07 オートフォーカス画像システム
SG2012041588A SG181539A1 (en) 2009-12-07 2010-12-07 Auto-focus image system
GB1209942.0A GB2488482A (en) 2009-12-07 2010-12-07 Auto-focus image system
US12/962,649 US8159600B2 (en) 2009-12-07 2010-12-07 Auto-focus image system
AU2010329534A AU2010329534A1 (en) 2009-12-07 2010-12-07 Auto-focus image system
MX2012006469A MX2012006469A (es) 2009-12-07 2010-12-07 Sistema de imágenes de enfoque automático.
CA2820856A CA2820856A1 (en) 2010-12-07 2011-06-09 Auto-focus image system
PCT/IB2011/052515 WO2012076992A1 (en) 2010-12-07 2011-06-09 Auto-focus image system
BR112013014240A BR112013014240A2 (pt) 2009-12-07 2011-06-09 método para gerar um sinal de foco a partir de uma pluralidade de bordas de uma imagem, meio legível por computador, circuito que gera um sinal de fogo e sistema de captura de imagem
DE112011104256.6T DE112011104256T5 (de) 2010-12-07 2011-06-09 Autofokus-Bildsystem
SG2013044201A SG190451A1 (en) 2010-12-07 2011-06-09 Auto-focus image system
AU2011340207A AU2011340207A1 (en) 2010-12-07 2011-06-09 Auto-focus image system
GB1311741.1A GB2501414A (en) 2010-12-07 2011-06-09 Auto-focus image system
JP2013542632A JP2014504375A (ja) 2010-12-07 2011-06-09 オートフォーカス画像システム
EP11735545.3A EP2649787A1 (en) 2010-12-07 2011-06-09 Auto-focus image system
MX2013006517A MX2013006517A (es) 2010-12-07 2011-06-09 Sistema de imágenes de enfoque automatico.
US13/491,590 US20130044255A1 (en) 2009-12-07 2012-06-07 Auto-focus image system
US13/492,825 US20120314960A1 (en) 2009-12-07 2012-06-09 Auto-focus image system
JP2016078083A JP6179827B2 (ja) 2009-12-07 2016-04-08 フォーカス信号を生成する方法

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US26743609P 2009-12-07 2009-12-07
US61/267,436 2009-12-07

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US13/491,590 Continuation-In-Part US20130044255A1 (en) 2009-12-07 2012-06-07 Auto-focus image system

Publications (1)

Publication Number Publication Date
WO2011070514A1 true WO2011070514A1 (en) 2011-06-16

Family

ID=42470708

Family Applications (2)

Application Number Title Priority Date Filing Date
PCT/IB2010/055641 Ceased WO2011070513A1 (en) 2009-12-07 2010-12-07 Auto-focus image system
PCT/IB2010/055649 Ceased WO2011070514A1 (en) 2009-12-07 2010-12-07 Auto-focus image system

Family Applications Before (1)

Application Number Title Priority Date Filing Date
PCT/IB2010/055641 Ceased WO2011070513A1 (en) 2009-12-07 2010-12-07 Auto-focus image system

Country Status (11)

Country Link
US (8) US8159600B2 (en)
EP (1) EP2510680B1 (en)
JP (4) JP5725380B2 (en)
CN (1) CN103416052B (en)
AU (2) AU2010329534A1 (en)
BR (3) BR112012013721A2 (en)
CA (1) CA2821143A1 (en)
GB (4) GB2504857B (en)
MX (2) MX2012006468A (en)
SG (2) SG181539A1 (en)
WO (2) WO2011070513A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE112013006265B4 (de) * 2012-12-28 2020-08-13 Fujifilm Corporation Pixelkorrekturverfahren und Bildaufnahmegerät

Families Citing this family (35)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2504857B (en) 2009-12-07 2014-12-24 Hiok-Nam Tay Auto-focus image system
DE112010005589T5 (de) * 2010-05-26 2013-03-14 Hiok Nam Tay Autofokus-bildsystem
GB2510495A (en) * 2010-05-26 2014-08-06 Hiok Nam Tay Auto-focus image system
EP2649787A1 (en) * 2010-12-07 2013-10-16 Hiok Nam Tay Auto-focus image system
SG190755A1 (en) * 2010-12-07 2013-07-31 Hiok Nam Tay Auto-focus image system
US9065999B2 (en) 2011-03-24 2015-06-23 Hiok Nam Tay Method and apparatus for evaluating sharpness of image
CA2838821A1 (en) * 2011-06-09 2012-12-13 Hiok Nam Tay Auto-focus image system
CN103283215B (zh) * 2011-06-09 2017-03-08 郑苍隆 自动聚焦图像系统
US8873837B2 (en) * 2011-08-04 2014-10-28 University Of Southern California Image-based crack detection
JP5325966B2 (ja) * 2011-11-25 2013-10-23 オリンパス株式会社 撮像装置及び撮像方法
WO2013090830A1 (en) 2011-12-16 2013-06-20 University Of Southern California Autonomous pavement condition assessment
US20130162806A1 (en) * 2011-12-23 2013-06-27 Mitutoyo Corporation Enhanced edge focus tool
US8630504B2 (en) * 2012-01-16 2014-01-14 Hiok Nam Tay Auto-focus image system
US9030591B2 (en) 2012-07-20 2015-05-12 Apple Inc. Determining an in-focus position of a lens
US8922662B1 (en) * 2012-07-25 2014-12-30 Amazon Technologies, Inc. Dynamic image selection
JP6169366B2 (ja) * 2013-02-08 2017-07-26 株式会社メガチップス 物体検出装置、プログラムおよび集積回路
US9480860B2 (en) 2013-09-27 2016-11-01 Varian Medical Systems, Inc. System and methods for processing images to measure multi-leaf collimator, collimator jaw, and collimator performance utilizing pre-entered characteristics
US9392198B2 (en) 2014-02-26 2016-07-12 Semiconductor Components Industries, Llc Backside illuminated imaging systems having auto-focus pixels
US9560259B2 (en) * 2014-06-27 2017-01-31 Sony Corporation Image processing system with blur measurement and method of operation thereof
US20170139308A1 (en) * 2014-07-03 2017-05-18 Sony Corporation Filter control device, filter controlling method, and imaging device
US9716822B2 (en) * 2014-11-14 2017-07-25 Qualcomm Incorporated Direction aware autofocus
US9478007B2 (en) * 2015-01-21 2016-10-25 Samsung Electronics Co., Ltd. Stable video super-resolution by edge strength optimization
CN106973219B (zh) * 2017-02-21 2019-06-28 苏州科达科技股份有限公司 一种基于感兴趣区域的自动聚焦方法及装置
US10945657B2 (en) * 2017-08-18 2021-03-16 Massachusetts Institute Of Technology Automated surface area assessment for dermatologic lesions
FR3080936B1 (fr) * 2018-05-04 2020-04-24 Imginit Methode de detection et de quantification du flou dans une image numerique
US20200084392A1 (en) * 2018-09-11 2020-03-12 Sony Corporation Techniques for improving photograph quality for poor focus situations
US10686991B2 (en) 2018-09-11 2020-06-16 Sony Corporation Techniques for improving photograph quality for fouled lens or sensor situations
WO2020215050A1 (en) 2019-04-19 2020-10-22 Arizona Board Of Regents On Behalf Of The University Of Arizona All-in-focus imager and associated method
US11921285B2 (en) * 2019-04-19 2024-03-05 Arizona Board Of Regents On Behalf Of The University Of Arizona On-chip signal processing method and pixel-array signal
CN112534330B (zh) 2019-06-27 2022-11-29 松下知识产权经营株式会社 摄像装置
JP7289055B2 (ja) * 2019-06-27 2023-06-09 パナソニックIpマネジメント株式会社 撮像装置
JP7390636B2 (ja) 2019-06-27 2023-12-04 パナソニックIpマネジメント株式会社 撮像装置
CN112464947B (zh) * 2020-10-30 2021-09-28 深圳市路远智能装备有限公司 一种三脚透镜的视觉识别方法
US12328502B2 (en) 2022-11-16 2025-06-10 Black Sesame Technologies Inc. System and method for image auto-focusing
CN116309539B (zh) * 2023-04-25 2025-09-09 东北大学 一种热连轧带钢的边缘快速定位方法

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020114015A1 (en) * 2000-12-21 2002-08-22 Shinichi Fujii Apparatus and method for controlling optical system
US20030099044A1 (en) * 2001-11-29 2003-05-29 Minolta Co. Ltd. Autofocusing apparatus
US20090102963A1 (en) * 2007-10-22 2009-04-23 Yunn-En Yeo Auto-focus image system

Family Cites Families (87)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS6068510U (ja) * 1983-10-17 1985-05-15 キヤノン株式会社 焦点検出装置
JPS60143311A (ja) 1983-12-29 1985-07-29 Asahi Optical Co Ltd Ttlオ−トフオ−カスビデオカメラの光学系
JPS6119211U (ja) 1984-07-06 1986-02-04 株式会社 コシナ オ−トフオ−カス付ズ−ムレンズ
JPS61210310A (ja) * 1985-03-15 1986-09-18 Nippon Kokan Kk <Nkk> 自動焦点合せ装置
JPH0779434B2 (ja) * 1986-05-16 1995-08-23 キヤノン株式会社 合焦検出装置
US5040228A (en) * 1989-08-28 1991-08-13 At&T Bell Laboratories Method and apparatus for automatically focusing an image-acquisition device
JP3167023B2 (ja) 1989-11-13 2001-05-14 キヤノン株式会社 焦点調節装置、ブレ検出装置、動き検出装置、並びに、被写体位置検出装置
DE69127850T2 (de) 1990-04-29 1998-03-12 Canon Kk Vorrichtung zum Erfassen von Bewegungen und Fokusdetektor, der eine solche Vorrichtung benutzt
US5790710A (en) * 1991-07-12 1998-08-04 Jeffrey H. Price Autofocus system for scanning microscopy
US5488429A (en) 1992-01-13 1996-01-30 Mitsubishi Denki Kabushiki Kaisha Video signal processor for detecting flesh tones in am image
JPH066661A (ja) 1992-06-19 1994-01-14 Canon Inc 合焦検出装置
JP3482013B2 (ja) 1993-09-22 2003-12-22 ペンタックス株式会社 光学系のピント評価方法、調整方法、調整装置、およびチャート装置
JPH07177414A (ja) * 1993-12-20 1995-07-14 Canon Inc 合焦検出装置
JPH07311025A (ja) 1994-05-17 1995-11-28 Komatsu Ltd 3次元形状検査装置
US5496106A (en) 1994-12-13 1996-03-05 Apple Computer, Inc. System and method for generating a contrast overlay as a focus assist for an imaging device
US5532777A (en) * 1995-06-06 1996-07-02 Zanen; Pieter O. Single lens apparatus for three-dimensional imaging having focus-related convergence compensation
US5880455A (en) 1995-08-08 1999-03-09 Nikon Corporation Focal position detection apparatus having a light reflecting member with plural reflecting films
US5875040A (en) * 1995-12-04 1999-02-23 Eastman Kodak Company Gradient based method for providing values for unknown pixels in a digital image
JP3630880B2 (ja) 1996-10-02 2005-03-23 キヤノン株式会社 視線検出装置及び光学機器
US6094508A (en) * 1997-12-08 2000-07-25 Intel Corporation Perceptual thresholding for gradient-based local edge detection
JP4018218B2 (ja) 1997-12-25 2007-12-05 キヤノン株式会社 光学装置及び測距点選択方法
US6847737B1 (en) * 1998-03-13 2005-01-25 University Of Houston System Methods for performing DAF data filtering and padding
US6415053B1 (en) * 1998-04-20 2002-07-02 Fuji Photo Film Co., Ltd. Image processing method and apparatus
US6337925B1 (en) 2000-05-08 2002-01-08 Adobe Systems Incorporated Method for determining a border in a complex scene with applications to image masking
JP2001331806A (ja) 2000-05-22 2001-11-30 Nec Corp 画像処理方式
JP2002209135A (ja) * 2001-01-11 2002-07-26 Minolta Co Ltd デジタル撮像装置および記録媒体
JP2002214513A (ja) * 2001-01-16 2002-07-31 Minolta Co Ltd 光学系制御装置、光学系制御方法および記録媒体
JP3555583B2 (ja) * 2001-01-23 2004-08-18 ミノルタ株式会社 光学系制御装置、光学系制御方法、記録媒体および撮像装置
JP3555584B2 (ja) * 2001-01-23 2004-08-18 ミノルタ株式会社 デジタル撮像装置および記録媒体
US20030123704A1 (en) 2001-05-30 2003-07-03 Eaton Corporation Motion-based image segmentor for occupant tracking
US20020191102A1 (en) * 2001-05-31 2002-12-19 Casio Computer Co., Ltd. Light emitting device, camera with light emitting device, and image pickup method
US20020191973A1 (en) 2001-06-13 2002-12-19 Hofer Gregory V. Method and apparatus for focus error reduction in a camera
US7088474B2 (en) * 2001-09-13 2006-08-08 Hewlett-Packard Development Company, Lp. Method and system for enhancing images using edge orientation
JP2003125198A (ja) 2001-10-18 2003-04-25 Sharp Corp 画像処理装置および画像処理方法、並びにそれを備えた画像形成装置、プログラム、記録媒体
US6917901B2 (en) 2002-02-20 2005-07-12 International Business Machines Corporation Contact hole profile and line edge width metrology for critical image control and feedback of lithographic focus
JP4334179B2 (ja) 2002-03-07 2009-09-30 シャープ株式会社 電子カメラ
JP2003262783A (ja) * 2002-03-08 2003-09-19 Minolta Co Ltd オートフォーカス装置および撮像装置
US6888564B2 (en) 2002-05-24 2005-05-03 Koninklijke Philips Electronics N.V. Method and system for estimating sharpness metrics based on local edge kurtosis
JP3885000B2 (ja) * 2002-06-25 2007-02-21 日本特殊陶業株式会社 スパークプラグの検査方法、スパークプラグの製造方法及びスパークプラグの検査装置
JP4208563B2 (ja) 2002-12-18 2009-01-14 キヤノン株式会社 自動焦点調節装置
JP2004219546A (ja) * 2003-01-10 2004-08-05 Pentax Corp 自動焦点調節装置および自動焦点調節方法
US7269296B2 (en) * 2003-01-16 2007-09-11 Samsung Electronics Co., Ltd. Method and apparatus for shoot suppression in image detail enhancement
SG115540A1 (en) 2003-05-17 2005-10-28 St Microelectronics Asia An edge enhancement process and system
JP2005043792A (ja) 2003-07-25 2005-02-17 Pentax Corp 内視鏡の自動焦点調節装置
US7720302B2 (en) * 2003-09-25 2010-05-18 Fujifilm Corporation Method, apparatus and program for image processing
JP2004110059A (ja) * 2003-10-27 2004-04-08 Minolta Co Ltd 光学系制御装置、光学系制御方法および記録媒体
US7391920B2 (en) * 2003-11-04 2008-06-24 Fujifilm Corporation Image processing method, apparatus, and program
WO2005065823A1 (ja) * 2004-01-09 2005-07-21 Nippon Oil Corporation 石油系炭化水素の水素化脱硫触媒および水素化脱硫方法
JP2005269604A (ja) 2004-02-20 2005-09-29 Fuji Photo Film Co Ltd 撮像装置、撮像方法、及び撮像プログラム
US7561186B2 (en) * 2004-04-19 2009-07-14 Seiko Epson Corporation Motion blur correction
JP2005309559A (ja) 2004-04-19 2005-11-04 Fuji Photo Film Co Ltd 画像処理方法および装置並びにプログラム
US20050249429A1 (en) * 2004-04-22 2005-11-10 Fuji Photo Film Co., Ltd. Method, apparatus, and program for image processing
US20050244077A1 (en) 2004-04-22 2005-11-03 Fuji Photo Film Co., Ltd. Method, apparatus and program for image processing
JP2005332383A (ja) 2004-04-23 2005-12-02 Fuji Photo Film Co Ltd 画像処理方法および装置並びにプログラム
US20060078217A1 (en) * 2004-05-20 2006-04-13 Seiko Epson Corporation Out-of-focus detection method and imaging device control method
JP2006024193A (ja) * 2004-06-07 2006-01-26 Fuji Photo Film Co Ltd 画像補正装置、画像補正プログラム、画像補正方法、および画像補正システム
EP1624672A1 (en) * 2004-08-07 2006-02-08 STMicroelectronics Limited A method of determining a measure of edge strength and focus
JP2006115446A (ja) 2004-09-14 2006-04-27 Seiko Epson Corp 撮影装置、及び画像評価方法
US7343047B2 (en) 2004-09-22 2008-03-11 Hewlett-Packard Development Company, L.P. Systems and methods for arriving at an auto focus Figure of Merit
US7454053B2 (en) * 2004-10-29 2008-11-18 Mitutoyo Corporation System and method for automatically recovering video tools in a vision system
JP4487805B2 (ja) * 2004-11-16 2010-06-23 セイコーエプソン株式会社 画像評価方法、画像評価装置、及び印刷装置
JP4539318B2 (ja) 2004-12-13 2010-09-08 セイコーエプソン株式会社 画像情報の評価方法、画像情報の評価プログラム及び画像情報評価装置
JP4759329B2 (ja) 2005-06-23 2011-08-31 Hoya株式会社 オートフォーカス機能を有する撮影装置
US7630571B2 (en) * 2005-09-15 2009-12-08 Microsoft Corporation Automatic detection of panoramic camera position and orientation table parameters
US7590288B1 (en) * 2005-11-07 2009-09-15 Maxim Integrated Products, Inc. Method and/or apparatus for detecting edges of blocks in an image processing system
WO2007075514A1 (en) 2005-12-16 2007-07-05 Thomson Licensing System and method for providing uniform brightness in seam portions of tiled images
US7792420B2 (en) 2006-03-01 2010-09-07 Nikon Corporation Focus adjustment device, imaging device and focus adjustment method
JP4743773B2 (ja) 2006-06-01 2011-08-10 コニカミノルタセンシング株式会社 エッジ検出方法、装置、及びプログラム
JP4182990B2 (ja) 2006-06-02 2008-11-19 セイコーエプソン株式会社 印刷装置、画像がぼやけているか否かを決定する方法、およびコンピュータプログラム
US20080021665A1 (en) 2006-07-20 2008-01-24 David Vaughnn Focusing method and apparatus
JP2008072696A (ja) 2006-08-14 2008-03-27 Seiko Epson Corp 合焦情報の視覚化装置、その方法、プログラム及び記録媒体
JP4781233B2 (ja) 2006-11-07 2011-09-28 キヤノン株式会社 画像処理装置、撮像装置、及び画像処理方法
US7924468B2 (en) 2006-12-20 2011-04-12 Seiko Epson Corporation Camera shake determination device, printing apparatus and camera shake determination method
KR20080081693A (ko) 2007-03-06 2008-09-10 삼성전자주식회사 카메라의 자동초점조절 방법
JP2009065224A (ja) * 2007-09-04 2009-03-26 Seiko Epson Corp 画像データ解析装置、画像データ解析方法、およびプログラム
JP2009156946A (ja) 2007-12-25 2009-07-16 Sanyo Electric Co Ltd 撮像装置の防振制御回路
JP2009218806A (ja) * 2008-03-10 2009-09-24 Seiko Epson Corp 画像処理装置及び方法、並びに、そのためのプログラム
JP2009260411A (ja) 2008-04-11 2009-11-05 Olympus Corp 撮像装置
US8184196B2 (en) 2008-08-05 2012-05-22 Qualcomm Incorporated System and method to generate depth data using edge detection
WO2010036249A1 (en) 2008-09-24 2010-04-01 Nikon Corporation Autofocus technique utilizing gradient histogram distribution characteristics
WO2010061250A1 (en) 2008-11-26 2010-06-03 Hiok-Nam Tay Auto-focus image system
WO2010061352A2 (en) * 2008-11-26 2010-06-03 Hiok Nam Tay Auto-focus image system
JP5511212B2 (ja) 2009-03-31 2014-06-04 大日本スクリーン製造株式会社 欠陥検出装置及び欠陥検出方法
JP2011003124A (ja) 2009-06-22 2011-01-06 Sony Corp 画像処理装置および画像処理プログラム
GB2504857B (en) 2009-12-07 2014-12-24 Hiok-Nam Tay Auto-focus image system
US8724009B2 (en) 2010-05-05 2014-05-13 Hiok Nam Tay Auto-focus image system
US8630504B2 (en) 2012-01-16 2014-01-14 Hiok Nam Tay Auto-focus image system

Also Published As

Publication number Publication date
JP6179827B2 (ja) 2017-08-16
JP2016130869A (ja) 2016-07-21
HK1191780A1 (en) 2014-08-01
GB201020759D0 (en) 2011-01-19
BR112012013721A2 (pt) 2016-03-15
GB201209942D0 (en) 2012-07-18
US20120314960A1 (en) 2012-12-13
GB2504857B (en) 2014-12-24
MX2012006468A (es) 2012-11-29
US20150207978A1 (en) 2015-07-23
US20130044255A1 (en) 2013-02-21
GB2488482A (en) 2012-08-29
US20150317774A1 (en) 2015-11-05
BR112013014226A2 (pt) 2017-04-04
EP2510680B1 (en) 2016-10-26
US20120314121A1 (en) 2012-12-13
US9251571B2 (en) 2016-02-02
AU2010329534A1 (en) 2012-07-26
US9734562B2 (en) 2017-08-15
US20110135215A1 (en) 2011-06-09
AU2010329533A1 (en) 2012-07-26
US8457431B2 (en) 2013-06-04
JP2013527635A (ja) 2013-06-27
SG181173A1 (en) 2012-07-30
JP2013527483A (ja) 2013-06-27
CN103416052B (zh) 2017-03-01
CN103416052A (zh) 2013-11-27
JP2016057638A (ja) 2016-04-21
SG181539A1 (en) 2012-07-30
GB2475983A (en) 2011-06-08
GB201020769D0 (en) 2011-01-19
GB2475983B (en) 2013-03-13
MX2012006469A (es) 2012-11-29
US8923645B2 (en) 2014-12-30
JP5725380B2 (ja) 2015-05-27
US20110134312A1 (en) 2011-06-09
CA2821143A1 (en) 2011-06-16
BR112013014240A2 (pt) 2017-10-31
GB201314741D0 (en) 2013-10-02
US8159600B2 (en) 2012-04-17
WO2011070513A1 (en) 2011-06-16
EP2510680A1 (en) 2012-10-17
US20130265480A1 (en) 2013-10-10
GB2504857A (en) 2014-02-12

Similar Documents

Publication Publication Date Title
US8159600B2 (en) Auto-focus image system
US8630504B2 (en) Auto-focus image system
US9065999B2 (en) Method and apparatus for evaluating sharpness of image
EP2719162B1 (en) Auto-focus image system
US20140022443A1 (en) Auto-focus image system
CN103262524B (zh) 自动聚焦图像系统
EP2649788A1 (en) Auto-focus image system
WO2012076992A1 (en) Auto-focus image system
HK1175341A (en) Auto-focus image system
HK1175341B (en) Auto-focus image system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 10809173

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 1209942

Country of ref document: GB

Kind code of ref document: A

Free format text: PCT FILING DATE = 20101207

WWE Wipo information: entry into national phase

Ref document number: 2012542670

Country of ref document: JP

Ref document number: 1209942.0

Country of ref document: GB

Ref document number: MX/A/2012/006469

Country of ref document: MX

NENP Non-entry into the national phase

Ref country code: DE

WWE Wipo information: entry into national phase

Ref document number: 2010329534

Country of ref document: AU

ENP Entry into the national phase

Ref document number: 2010329534

Country of ref document: AU

Date of ref document: 20101207

Kind code of ref document: A

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 27/09/2012)

122 Ep: pct application non-entry in european phase

Ref document number: 10809173

Country of ref document: EP

Kind code of ref document: A1