WO2012076992A1 - Auto-focus image system - Google Patents
- Publication number
- WO2012076992A1 (PCT/IB2011/052515)
- Authority
- WO
- WIPO (PCT)
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/67—Focus control based on electronic image sensor signals
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B7/00—Mountings, adjusting means, or light-tight connections, for optical elements
- G02B7/28—Systems for automatic generation of focusing signals
- G02B7/36—Systems for automatic generation of focusing signals using image sharpness techniques, e.g. image processing techniques for generating autofocus signals
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B7/00—Mountings, adjusting means, or light-tight connections, for optical elements
- G02B7/28—Systems for automatic generation of focusing signals
-
- G—PHYSICS
- G03—PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
- G03B—APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
- G03B13/00—Viewfinders; Focusing aids for cameras; Means for focusing for cameras; Autofocus systems for cameras
- G03B13/32—Means for focusing
-
- G—PHYSICS
- G03—PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
- G03B—APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
- G03B13/00—Viewfinders; Focusing aids for cameras; Means for focusing for cameras; Autofocus systems for cameras
- G03B13/32—Means for focusing
- G03B13/34—Power focusing
- G03B13/36—Autofocus systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B15/00—Optical objectives with means for varying the magnification
- G02B15/14—Optical objectives with means for varying the magnification by axial movement of one or more lenses or groups of lenses relative to the image plane for continuously varying the equivalent focal length of the objective
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B7/00—Mountings, adjusting means, or light-tight connections, for optical elements
- G02B7/02—Mountings, adjusting means, or light-tight connections, for optical elements for lenses
- G02B7/04—Mountings, adjusting means, or light-tight connections, for optical elements for lenses with mechanism for focusing or varying magnification
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B7/00—Mountings, adjusting means, or light-tight connections, for optical elements
- G02B7/02—Mountings, adjusting means, or light-tight connections, for optical elements for lenses
- G02B7/04—Mountings, adjusting means, or light-tight connections, for optical elements for lenses with mechanism for focusing or varying magnification
- G02B7/09—Mountings, adjusting means, or light-tight connections, for optical elements for lenses with mechanism for focusing or varying magnification adapted for automatic focusing or varying magnification
-
- G—PHYSICS
- G03—PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
- G03B—APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
- G03B13/00—Viewfinders; Focusing aids for cameras; Means for focusing for cameras; Autofocus systems for cameras
- G03B13/32—Means for focusing
- G03B13/34—Power focusing
-
- G—PHYSICS
- G03—PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
- G03B—APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
- G03B3/00—Focusing arrangements of general interest for cameras, projectors or printers
- G03B3/10—Power-operated focusing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10141—Special mode during image acquisition
- G06T2207/10148—Varying focus
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
Definitions
- the subject matter disclosed generally relates to auto-focusing electronically captured images.
- Photographic equipment such as digital cameras and digital camcorders may contain electronic image sensors that capture light for processing into still or video images, respectively.
- Electronic image sensors typically contain millions of light capturing elements such as photodiodes.
- the process of auto-focusing includes the steps of capturing an image, processing the image to determine whether it is in focus, and, if not, generating a feedback signal that is used to vary a position of a focus lens ("focus position").
- one technique, the contrast method, analyzes the contrast of the captured image; the other technique looks at a phase difference between a pair of images.
- the phase difference method includes splitting an incoming image into two images that are captured by separate image sensors. The two images are compared to determine a phase difference. The focus position is adjusted until the two images match.
- the phase difference method requires additional parts such as a beam splitter and an extra image sensor.
- the phase difference approach analyzes a relatively small band of fixed detection points. Having a small group of detection points is prone to error because noise may be superimposed onto one or more points. This technique is also ineffective if the detection points do not coincide with an image edge.
- because the phase difference method splits the light, the amount of light that impinges on a light sensor is cut in half or even more. This can be problematic in dim settings where the image light intensity is already low.
- an auto-focus image system includes a pixel array coupled to a focus signal generator.
- the pixel array captures an image that has a plurality of edges.
- the generator may determine to reduce the relative extent to which an edge contributes to the focus signal on the basis of a pair of shape measures of the edge that are computed from sample-pair differences.
- each sample-pair difference is a difference between a pair of samples of image data within a predetermined neighborhood of the edge.
- One of the shape measures may be the edge-sharpness measure of the edge.
- FIG. 1 is a schematic of an embodiment of an auto-focus image pickup apparatus
- FIG. 2 is a schematic of an alternate embodiment of an auto-focus image pickup apparatus
- FIG. 3 is a block diagram of a focus signal generator
- FIG. 4 is an illustration of a horizontal Sobel operator's operation on an image signal matrix
- FIG. 5 illustrates a calculation of edge width from a horizontal gradient
- FIG. 6A, 6B are illustrations of a calculation of an edge width of a vertical edge having a slant angle φ;
- FIG. 6C, 6D are illustrations of a calculation of an edge width of a horizontal edge having a slant angle φ;
- FIG. 7 is a flowchart of a process to calculate a slant angle φ and correct an edge width for a vertical edge having a slant;
- FIG. 8 is an illustration of a vertical concatenated edge
- FIG. 9A is an illustration of a group of closely-packed vertical bars
- FIG. 9B is a graph of an image signal across FIG. 9A;
- FIG. 9C is a graph of a horizontal Sobel gradient across FIG. 9A;
- FIG. 10 is a flowchart of a process to eliminate closely-packed edges having shallow depths of modulation;
- FIG. 11 is a histogram of edge widths illustrating a range of edge widths for calculating a fine focus signal
- FIG. 12 is an illustration of a scene
- FIG. 13 is a graph illustrating a variation of a narrow-edge count during a focus scan of the scene of FIG. 12;
- FIG. 14 is a graph illustrating a variation of a gross focus signal during a focus scan of the scene of FIG. 12;
- FIG. 15 is a graph illustrating a variation of a fine focus signal across a range of focus positions;
- FIG. 16 is an illustration of an apparatus displaying multiple objects in a scene and a selection mark over one of the objects;
- FIG. 17 is a block diagram of an alternate embodiment of a focus signal generator;
- FIG. 18 is a schematic of an alternate embodiment of an auto-focus image pickup apparatus
- FIG. 19 is a schematic of an embodiment of an auto-focus image pickup apparatus having a main pixel array and an auxiliary pixel array;
- FIG. 20 is a schematic of an alternate embodiment of an auto-focus image pickup apparatus having a main pixel array and an auxiliary pixel array
- FIG. 21 is a schematic of an alternate embodiment of an auto-focus image pickup apparatus having a main pixel array and an auxiliary pixel array
- FIG. 22 is an illustration of a variation of an edge width from a main pixel array and a variation of an edge width from an auxiliary pixel array at different focus positions;
- FIG. 23A illustrates a symmetrical sequence of gradients of an image signal across a good edge plotted against distance in multiples of a spacing between successive gradients, and two widths measured for two pairs of interpolated gradients, each pair at a different gradient level;
- FIG. 23B illustrates another symmetrical sequence of gradients of an image signal across a spurious edge plotted against distance in multiples of a spacing between successive gradients, and two widths measured for two pairs of interpolated gradients, each pair at a different gradient level, the ratio of the smaller width to the larger width being nearly double that shown in FIG. 23A;
- FIG. 24A illustrates a symmetrical sequence of gradients across an edge plotted against distance in multiples of a spacing between successive gradients, and a normalized gradient value of an interpolated gradient at a predefined distance from a peak gradient;
- FIG. 24B illustrates a sequence of gradients across an edge plotted against distance in multiples of a spacing between successive gradients, and an area of a region under the plotted sequence of gradients
- FIG. 24C illustrates a sequence of gradients of an image signal across an edge plotted against distance in multiples of a spacing between successive gradients, and a slope (i.e. second derivative of the image signal) of the plotted sequence of gradients taken at a gradient level defined with respect to an interpolated peak gradient;
- FIG. 24D illustrates a sequence of gradients of an image signal across an edge plotted against distance in multiples of a spacing between successive gradients, a center of gravity (i.e. center of moment), and distances of the gradients from the center of gravity;
- FIG. 25 illustrates a sequence of second derivatives of an image signal across an edge plotted against distance in multiples of a spacing between successive second derivatives, showing (a) a width W_s between a pair of positive and negative peaks, (b) a width W_1 between a pair of outermost interpolated second derivatives that have a given magnitude h_1, (c) a width W_2 between an inner pair of interpolated second derivatives that have the given magnitude h_1, and (d) a distance D_1 from a zero-crossing (between the pair of positive and negative peaks) to an outermost interpolated second derivative that has the given magnitude h_1;
- FIG. 26 illustrates a sequence of image data samples of the image signal plotted against distance in multiples of a spacing between successive samples, showing (a) a width W_edge and a contrast C_edge between two samples at the two ends of the edge, (b) a peak gradient value g_peak between a pair of samples that has a steepest change of sample value, (c) an undivided portion of the edge that has contrast C_1 and width W_part1, and (d) an undivided portion of the edge that has contrast C_2 and width W_part2;
- FIG. 26 illustrates two symmetrical sequences of gradients plotted against distance in multiples of a spacing between successive samples of each sequence, the sequences normalized with respect to their respective peak gradients, where the plot for one sequence has a triangular shape and the plot for the other sequence has a shape of a hat;
- FIG. 27 illustrates two symmetrical sequences of gradients plotted against distance in multiples of a spacing between successive samples of each sequence, the sequences normalized with respect to their respective peak gradients, where the plot for one sequence has a triangular shape down to a normalized gradient level and the plot for the other sequence has a shape of a dome;
- FIG. 28 shows a scatter plot of four pairs of expected values of first and second shape measures (w_1b, w_1a), (w_2b, w_2a), (w_3b, w_3a), (w_4b, w_4a), and illustrates that a value w'_b for the first shape measure is found by interpolation from a value w'_a for the second shape measure;
- FIG. 29 illustrates finding an interpolated peak's position by interpolation
- FIG. 30 shows an alternate embodiment of a focus signal generator.
- an auto-focus image system includes a pixel array coupled to a focus signal generator.
- the pixel array captures an image that has at least one edge with a width.
- the focus signal generator may generate a focus signal that is a function of the edge width and/or statistics of edge widths.
- the generator generates a focus signal that is a function of the edge width and various statistics of edge width.
- the generator may eliminate an edge having an asymmetry of a gradient of an image signal.
- a processor receives the focus signal and/or the statistics of edge widths and adjusts a focus position of a focus lens.
- the edge width can be determined by various techniques
- a histogram of edge widths may be used to determine whether a particular image is focused or unfocused.
- a histogram with a large population of thin edge widths is indicative of a focused image.
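The use of a histogram of edge widths to judge focus can be sketched as follows. This is only an illustration: the sample widths, the sharp_edge_width value of 2.0, and the margin of 1.0 are assumptions, not values taken from the patent.

```python
# Sketch: an image with a large population of edges whose widths lie near
# the width of a sharp edge is likely in focus. All numbers below are
# illustrative assumptions.

def narrow_edge_count(edge_widths, sharp_edge_width=2.0, margin=1.0):
    # count edges whose width falls in [sharp_edge_width, sharp_edge_width + margin]
    return sum(1 for w in edge_widths
               if sharp_edge_width <= w <= sharp_edge_width + margin)

focused = [2.0, 2.1, 2.3, 2.2, 2.8, 6.0]   # mostly thin edges
blurred = [4.5, 5.0, 6.2, 5.8, 4.9, 5.5]   # widths spread out wide
```

Here `narrow_edge_count(focused)` is large while `narrow_edge_count(blurred)` is zero, matching the statement that a large population of thin edge widths indicates a focused image.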
- Figure 1 shows an embodiment of an auto-focus image capture system 102.
- the system 102 may be part of a digital still camera, but it is to be understood that the system can be embodied in any device that requires controlled focusing of an image.
- the system 102 may include a focus lens 104, a pixel array and circuits 108, an A/D converter 110, a processor 112, a display 114, a memory card 116 and a drive motor/circuit 118.
- Light from a scene enters through the lens 104.
- the pixel array and circuits 108 generates an analog signal that is converted to a digital signal by the A/D Converter 110.
- the pixel array 108 may incorporate a mosaic color pattern, e.g. the Bayer pattern.
- the digital signal may be sent to the processor 112 that performs various processes, e.g. color interpolation, focus position control, color correction, image compression/decompression, user interface control, and display control, and to the focus signal generator 120.
- a color interpolation unit 148 may be implemented to perform color interpolation on the digital signal 130 to estimate the missing color signals on each pixel for the focus signal generator 120.
- the focus signal generator 120 may input interpolated color images from the processor 112 on bus 146 as shown in Figure 2 or a single image signal derived from the original image signal generated from the A/D converter 110, for example a grayscale signal.
- the focus signal generator 120 receives a group of control signals 132 from the processor 112, in addition, and may output signals 134 to the processor 112.
- the output signals 134 may comprise one or more of the following: a focus signal 134, a narrow-edge count, and a set of numbers representing statistics of edge widths in the image.
- the processor 112 may generate a focus control signal 136 that is sent to the drive motor/circuit 118 to control the focus lens 104.
- a focused image is ultimately provided to the display 114 and/or stored in the memory card 116.
- the algorithm(s) used to adjust a focus position may be performed by the processor 112.
- the pixel array and circuits 108, A/D Converter 110, focus signal generator 120, and processor 112 may all reside within a package. Alternately, the pixel array and circuits 108, A/D Converter 110, and focus signal generator 120 may reside within a package 142 as image sensor 150 shown in Figure 1, separate from the processor 112.
- the focus signal generator 120 and processor 112 may together reside within a package 144 as a camera controller 160 shown in Figure 2, separate from the pixel array 108 and A/D Converter 110.
- the focus signal generator 120 (or any alternative embodiment, such as one shown in Figure 30) and the processor 112 may together reside on a semiconductor substrate, such as a silicon substrate.
- Figure 3 shows an embodiment of a focus signal generator 120 receiving image(s) from an image providing unit 202.
- the image providing unit 202 may be the color interpolator 148 in Figure 1 or the processor 112 in Figure 2.
- the focus signal generator 120 may comprise an edge detection & width measurement (EDWM) unit 206, a focus signal calculator 210, a length filter 212, and a width filter 209. It may further comprise a fine switch 220 controlled by input 'fine' 222.
- the focus signal generator 120 may provide a narrow-edge count from the width filter 209 and a focus signal from the focus signal calculator 210, the focus signal being configurable between a fine focus signal and a gross focus signal, selectable by input 'fine' 222.
- both fine focus signal and gross focus signal may be calculated and output as part of output signals 134.
- the edge detection & width measurement unit 206 receives image (s) provided by the image providing unit 202.
- control signals, such as the control signal 'fine' 222, may be provided by the processor 112 in signals 132.
- the output signals 134 may be provided to the processor 112, which functions as a focus system controller that controls the focus position of the focus lens 104 to bring images of objects into sharp focus on the pixel array 108 by analyzing the output signals 134 to detect a sharp object in the image.
- various components of the focus signal generator 120 are described below.
- the EDWM unit 206 may transform the input image such that the three signals of the image, red (R), green (G), and blue (B), are converted to a single image signal.
- RGB values can be used to calculate a luminance or chrominance value or a specific ratio of RGB values can be taken to form the single image signal.
- the single image signal may then be processed by a Gaussian filter or any lowpass filter to smooth out image data sample values among neighboring pixels to remove noise.
- the focus signal generator 120, 120', 120" is not limited to a grayscale signal. It may operate on any one image signal to detect one or more edges in the image signal, or on any combination of the image signals, for example Y, R-G, or B-G. It may operate on each and every one of the R, G, B image signals separately, or any one or more combinations thereof, to detect edges. It may form statistics of edge widths for each of the R, G, B image signals, or any combination thereof, and it may form a focus signal from statistics of edge widths from one or more image signals.
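The front end described above (RGB combined into one image signal, then lowpass-filtered) can be sketched as below. The luminance weights (the conventional BT.601 values) and the 1-2-1 binomial kernel are assumptions standing in for the patent's unspecified choices of combination and filter.

```python
# Sketch of the EDWM front end: combine R, G, B into a single image
# signal and smooth it with a small lowpass kernel. The weights and the
# 1-2-1 kernel are conventional choices, not taken from the patent.

def to_luminance(r, g, b):
    # one common weighted combination of R, G, B into a single signal
    return [0.299 * ri + 0.587 * gi + 0.114 * bi for ri, gi, bi in zip(r, g, b)]

def smooth(signal):
    # 1-2-1 lowpass filter; endpoints are passed through unchanged
    out = list(signal)
    for i in range(1, len(signal) - 1):
        out[i] = (signal[i - 1] + 2 * signal[i] + signal[i + 1]) / 4
    return out

y = smooth(to_luminance([100, 100, 100], [100, 100, 100], [100, 100, 100]))
```

Any of the other combinations mentioned above (Y, R-G, B-G, or a single color channel) could be substituted for `to_luminance` without changing the rest of the pipeline.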
- the focus signal generator includes an edge detector to identify an edge in an image signal.
- the edge detector may use a first-order edge detection operator, such as a Sobel operator, a Prewitt operator, a Roberts Cross operator, or a Roberts operator.
- the edge detector may use a higher-order edge detection operator to identify the edge, for example a second order operator such as a Laplacian operator.
- the edge detector may use any one of the known edge detection operators or any improved operator that shares a common edge detection principle of any of the known operators.
- where the edge detector uses a first-order edge detection operator, a gradient (i.e. first derivative) of the image signal is computed.
- the Roberts operator has two kernels which are single column or single row matrices: [-1 +1] and its transpose.
- the Roberts Cross operator has two kernels which are 2-by-2 matrices: [+1, 0; 0, -1] and [0, +1; -1, 0], shown in the format of [first-row vector; second-row vector] like in Matlab.
- the Prewitt and Sobel operators basically have the same kernel, [-1, 0, +1], taking the gradient in the direction of the row, and its transpose taking the gradient in the direction of the column, each further multiplied by a different lowpass filter kernel performing lowpass filtering perpendicular to the respective gradient direction.
- Gradients across the columns and the rows may be calculated to detect vertical and horizontal edges respectively, for example using a Sobel-X operator and a Sobel-Y operator, respectively.
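The gradient computation just described can be sketched as follows. This is a minimal illustration, not the patent's implementation: the 3×3 kernels are the standard Sobel-X and Sobel-Y, and leaving border pixels at zero is an assumption.

```python
# Sketch: horizontal and vertical Sobel gradients on a small grayscale
# image held as a list of rows. Border pixels are left at zero.

SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # gradient across columns
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # gradient across rows

def convolve3x3(img, kernel):
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = sum(kernel[j][i] * img[y + j - 1][x + i - 1]
                            for j in range(3) for i in range(3))
    return out

# a vertical step edge: left half dark, right half bright
img = [[0, 0, 0, 100, 100, 100] for _ in range(6)]
gx = convolve3x3(img, SOBEL_X)   # strong response at the vertical edge
gy = convolve3x3(img, SOBEL_Y)   # zero: no variation down each column
```

As the text says, the Sobel-X response flags the vertical edge while the Sobel-Y response stays zero on this image, which is why the two gradients are used to detect vertical and horizontal edges respectively.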
- a second derivative (such as the Laplacian) of the image signal is computed.
- each pixel may be tagged either a horizontal edge ('H') or a vertical edge ('V') if either its vertical or horizontal gradient magnitude exceeds a predetermined lower limit.
- a pixel may be tagged a vertical edge if its horizontal gradient magnitude exceeds its vertical gradient magnitude by a hysteresis amount, e.g. 2 for an 8-bit image, and vice versa. If the two gradient magnitudes differ by less than the hysteresis amount, the pixel gets the same direction tag as its nearest neighbor that already has a determined direction tag, for example when the image is scanned from left to right in each row and from row to row downwards.
- FIG. 4 illustrates the result of tagging on a 6-by-6 array of horizontal and vertical gradients. In each cell, the horizontal gradient is in the upper-left, the vertical gradient is on the right, and the direction tag is at the bottom. Only pixels with either horizontal or vertical gradient magnitude exceeding 5 qualify at this step as edge pixels; they are printed in bold and given direction tags.
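The tagging rule above can be sketched as follows, processing per-pixel gradients in scan order. The tag names 'H'/'V', the hysteresis of 2, and the lower limit of 5 follow the examples in the text, but the exact bookkeeping (in particular which neighbor's tag is inherited on a tie) is an assumption.

```python
# Sketch of direction tagging with hysteresis: a pixel becomes 'V' if its
# horizontal gradient magnitude exceeds its vertical one by more than the
# hysteresis amount, 'H' in the opposite case, and otherwise inherits the
# most recently decided tag in scan order.

def tag_direction(gx, gy, hysteresis=2, min_mag=5):
    tags = []
    last = None  # nearest previously decided tag in scan order
    for hx, vy in zip(gx, gy):
        if max(abs(hx), abs(vy)) <= min_mag:
            tags.append(None)          # not an edge pixel
        elif abs(hx) - abs(vy) > hysteresis:
            last = 'V'; tags.append('V')
        elif abs(vy) - abs(hx) > hysteresis:
            last = 'H'; tags.append('H')
        else:
            tags.append(last)          # tie: inherit the neighbor's tag
    return tags
```

The hysteresis keeps a pixel whose two gradient magnitudes are nearly equal from flipping its tag due to noise, which is the stated purpose of the rule.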
- the image, gradients, and tags may be scanned horizontally for vertical edges, and vertically for horizontal edges.
- each group of consecutive pixels in a same row, having a same horizontal gradient polarity and all tagged for vertical edge, may be designated a vertical edge if no adjacent pixel to the left or right of the group satisfies the same.
- likewise, each group of consecutive pixels in a same column, having a same vertical gradient polarity and all tagged for horizontal edge, may be designated a horizontal edge if no adjacent pixel above or below the group satisfies the same.
- horizontal and vertical edges may be identified.
- each edge may be refined by removing pixels whose gradient magnitudes are less than a given fraction of the peak gradient magnitude within the edge.
- edge width may be calculated by any one of several known methods. One method is simply counting the number of pixels within an edge. An alternate method is shown in Figure 5, where a first fractional pixel position (2.4) is found between a first outer pixel (pixel 3) of a refined edge and the adjacent outside pixel (pixel 2) by interpolation from the refinement threshold 304.
- likewise, a second fractional pixel position (5.5) is found between a second outer pixel (pixel 5) and its adjacent outside pixel (pixel 6).
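The interpolated-width method of Figure 5 can be sketched as below: on each flank of an edge's gradient profile, the fractional position where the gradient crosses the refinement threshold is found by linear interpolation, and the edge width is the distance between the two crossings. The gradient values are made up to reproduce the fractional positions 2.4 and 5.5 of Figure 5, and the threshold value is an assumption.

```python
# Sketch of the Figure 5 edge-width calculation via linear interpolation
# of threshold crossings. Gradient values below are illustrative only.

def threshold_crossing(x_in, g_in, x_out, g_out, thr):
    # fractional position between an inside pixel (g_in >= thr) and its
    # adjacent outside pixel (g_out < thr) where the gradient equals thr
    return x_in + (x_out - x_in) * (g_in - thr) / (g_in - g_out)

def edge_width(grads, thr):
    above = [i for i, g in enumerate(grads) if g >= thr]
    left, right = above[0], above[-1]
    x_left = threshold_crossing(left, grads[left], left - 1, grads[left - 1], thr)
    x_right = threshold_crossing(right, grads[right], right + 1, grads[right + 1], thr)
    return x_right - x_left

grads = [0, 5, 10, 60, 100, 50, 10, 0]   # gradients across one edge
width = edge_width(grads, thr=30)        # crossings at 2.4 and 5.5
```

With these values the crossings land at 2.4 and 5.5, so the width is their difference, about 3.1 pixels.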
- Another alternative edge width calculation method is to calculate a difference of the image signal across the edge (with or without edge refinement) and divide it by a peak gradient of the edge.
- alternatively, edge width may be a distance between a pair of positive and negative peaks (or interpolated peaks) of a second derivative of the image signal across the edge.
- for the edge-sharpness measure, there are other alternatives than a width; a width is merely one example of an edge-sharpness measure that is essentially independent of illumination of the scene.
- each edge may be assigned to one prescribed direction (e.g. vertical direction or horizontal direction) or another, perpendicular, prescribed direction (e.g. horizontal direction or vertical direction) and may have its edge width measured in a direction perpendicular to that assigned edge direction.
- a boundary is shown to be inclined at a slant angle φ with respect to the vertical dashed line, and a width a is shown to be measured in the perpendicular direction (i.e. horizontal direction).
- a width b (as indicated in the drawing) is measured in a direction perpendicular to the direction of the boundary.
- edge widths measured in one or the other of those prescribed directions are to be corrected by reducing them to widths in directions perpendicular to the directions of the respective edges.
- the Edge Detection and Width Measurement Unit 206 performs such a correction on edge widths.
- the measured width a is the length of the hypotenuse of a right-angled triangle that has its base (marked with width b) straddling across the shaded boundary perpendicularly (thus perpendicular to the edge direction) and that has the angle φ.
- the angle φ, or cos(φ) itself, may be found by any method known in the art for finding a direction of an edge in an image, or by the more accurate method described in the flowchart shown in Figure 7.
- each horizontal or vertical edge's edge width may be corrected for its slant from either the horizontal or vertical orientation (the prescribed directions).
- Figure 6C, 6D illustrate a correction calculation for an edge width measured in the vertical direction for a boundary having a slant.
- the correction may be made by multiplying the edge width measured in a prescribed direction by cos(φ).
- Figure 7 shows a flowchart of a process to correct edge widths for slant for edges inclined from a vertical line. (For horizontal edges, substitute 'row' for 'column' and interchange 'vertical' with 'horizontal'.)
- a slant angle φ is found. For each vertical edge, at step 502, locate the column position where the horizontal gradient magnitude peaks, and find the horizontal gradient x. At step 504, find where the vertical gradient magnitude peaks along the column position and within two pixels away, and find the vertical gradient y.
- the slant angle may be found by looking up a lookup table.
- at step 508, scale down the edge width by multiplying it by cos(φ), or by an approximation thereto, as one skilled in the art usually does in practice.
- a first modification of the process shown in Figure 7 is to substitute for step 506 and part of step 508 a lookup table that has entries for various combinations of input values of x and y. For each combination of x and y, the lookup table returns an edge width correction factor.
- the edge width correction factor output by the lookup table may be an approximation to cos(tan⁻¹(y/x)) to within 20%, preferably within 5%.
- the edge width is then multiplied with this correction factor to produce a slant-corrected edge width.
- a second modification is to calculate the quotient q = y/x between a vertical gradient y and a horizontal gradient x, then use q as input to a lookup table that has entries for various values of q. For each value of q, the lookup table returns an edge width correction factor.
- the edge width correction factor may be an approximation to cos(tan⁻¹(q)) to within 20%, preferably within 5%.
- the values of x and y may be obtained in steps 502 to 506, but other methods may be employed instead.
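Assuming the slant angle satisfies φ = tan⁻¹(y/x) as in step 506, the correction of step 508 can be sketched as below; the function names and the use of `atan2` are illustrative, not taken from the patent:

```python
import math

def slant_correction_factor(x, y):
    """Correction factor cos(phi) for an edge inclined by slant angle
    phi = atan(y/x) from the prescribed (vertical) direction.
    x: horizontal gradient at the peak; y: nearby vertical gradient."""
    return math.cos(math.atan2(abs(y), abs(x)))

def corrected_edge_width(measured_width, x, y):
    # Scale down the width measured along the prescribed direction.
    return measured_width * slant_correction_factor(x, y)
```

For a 45-degree slant (x = y), the measured width is scaled by cos(45°) ≈ 0.707; for no slant (y = 0), the factor is exactly 1, leaving the width unchanged.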
- Adjacent edges may be prevented altogether from
- Figure 9A, 9B, and 9C illustrate a problem that is being addressed.
- Figure 9A illustrates three vertical white bars
- Figure 9B shows an image signal plotted horizontally across the image in Figure 9A for each of a sharp image and a blurred image.
- Figure 9C plots Sobel-x gradients of Figure 9B for the sharp image and blurred image.
- the first edge (pixels 2-5) for the blurred image is wider than that for the sharp image, and likewise the last edge (pixels 13-15), as expected.
- the two narrowest edges (pixels 9 & 10, and pixels 11 & 12) have widths of two in both images.
- the corresponding slopes at pixels 9 & 10, and pixels 11 & 12 each takes two pixels to complete a transition.
- the blurred image has a
- a narrower edge should not be relied upon as an indication that the blurred image is sharp.
- edge gap is in terms of a number of pixels, e.g. 1, or 2, or in between.
- sharp_edge_width is a number assigned to designate an edge width of a sharp edge
- the Edge Detection and Width Measurement Unit 206 may execute the following algorithm for eliminating closely-packed narrower edges based on a screen threshold
- the screen threshold and screen flag to be used for the immediate next edge of an opposite polarity are determined according to the process of the flowchart shown in Figure 10. Given the screen threshold and screen flag, an edge may be eliminated unless one of the following conditions is true: (a) the screen flag is off for this edge, (b) a peak gradient magnitude of the edge is not smaller than the screen threshold for this edge.
- condition (c): the edge width is not less than sharp_edge_width + 1, where a number has been assigned for sharp_edge_width to designate an edge width of a sharp edge, and where the "+1" may be varied to set a range of edge widths above the sharp_edge_width within which edges may be eliminated if they fail (a) and (b).
- sharp_edge_width may be 2.
- Figure 10 is a flowchart to determine a screen threshold and a screen flag for each edge. For vertical edges, assume scanning from left to right along a row, though this is not
- a number is assigned for sharp_edge_width, and it may be 2 for the example shown in Figures 9A-9C.
- each edge is queried at step 720 as to whether its edge width is greater than or equal to one plus sharp_edge_width, the value of one being the minimum edge gap value used for this illustration, but a different value may be used, such as between 0.5 and 2.0.
- step 706 follows to set the screen threshold for the immediate next edge that has an opposite polarity to beta times a peak gradient magnitude of the edge, beta being from 0.3 to 0.7, preferably 0.55
- step 708 follows to turn on the screen flag for the next edge, then proceed to the next edge.
- step 730 follows to check whether the spacing from the prior edge of the same gradient polarity is greater than two times the minimum edge gap (or a different predetermined number) plus sharp_edge_width, and whether the immediate prior edge of an opposite polarity, if any, is more than the minimum edge gap away. If yes, step 710 follows to turn off the screen flag for the next edge.
- Beta may be a predetermined fraction, or it may be a fraction calculated following a predetermined formula, such as a function of an edge width. In the latter case, beta may vary from one part of the image to another part.
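A rough sketch of this screening, combining the elimination conditions (a)-(c) with the flag and threshold updates of Figure 10, might look as follows. The edge representation and the state handling for opposite-polarity edges are assumptions; default values follow the text (sharp_edge_width = 2, minimum edge gap = 1, beta = 0.55):

```python
def screen_edges(edges, sharp_edge_width=2, min_gap=1, beta=0.55):
    """Hypothetical sketch of the screening in Figures 9-10.  Each edge is
    a dict with 'pos', 'width', 'peak' (peak gradient magnitude) and
    'polarity' (+1 or -1); edges are assumed sorted by position."""
    kept = []
    # Screen state seen by the immediate next edge, keyed by its polarity.
    threshold = {1: 0.0, -1: 0.0}
    flag = {1: False, -1: False}
    last_pos = {1: None, -1: None}
    for e in edges:
        pol = e['polarity']
        keep = (not flag[pol]                          # (a) screen flag off
                or e['peak'] >= threshold[pol]         # (b) peak not below threshold
                or e['width'] >= sharp_edge_width + 1) # (c) wide enough
        if keep:
            kept.append(e)
        # Update state for the immediate next edge of opposite polarity.
        if e['width'] >= sharp_edge_width + min_gap:
            threshold[-pol] = beta * e['peak']
            flag[-pol] = True
        else:
            prior = last_pos[pol]
            if prior is None or e['pos'] - prior > 2 * min_gap + sharp_edge_width:
                flag[-pol] = False
        last_pos[pol] = e['pos']
    return kept
```

For example, a wide edge of peak magnitude 1.0 arms a screen threshold of 0.55 for the next opposite-polarity edge, so a closely packed narrow edge with peak 0.3 is eliminated while one with peak 0.8 survives.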
- the image input by the focus signal generator 120 may have pixels laid out in a rectangular grid ("pixel grid") rotated at 45 degrees with respect to a rectangular frame of the image.
- the X- and Y-directions of the edge detection operations and width measurement operations may be rotated likewise.
- Edge-sharpness measures:
- edge-sharpness measure: a quantity that is independent of scaling the image data by, for example, 20%, or essentially independent, such as changing by not more than 5% for a 20% scaling down of the image data, thus helping to make the focus signal independent of, or far less dependent on, illumination of the scene of the image or reflectivity of objects in the scene, compared with the conventional contrast detection method.
- any edge-sharpness measure that has the above characteristic of being independent of, or essentially independent of, a 20% scaling down of the image data is, in addition, a good
- the alternative edge-sharpness measure preferably has a unit that does not include a unit of energy.
- the unit of the edge-sharpness measure is determined on the basis of two points: (a) each sample of the image data on which the first-order edge-detection operator operates has a unit of energy, (b) the distance between samples has a unit of length. On the basis of points (a) and (b), a gradient value has a unit of a unit of energy divided by a unit of length. Likewise, the contrast across the edge or across any undivided portion of the edge has a unit of energy.
- the contrast is not a good edge-sharpness measure, as its unit reveals that it is affected by illumination of the scene and reflectivity of the object. Neither is the peak gradient of the edge, because the unit of the peak gradient has a unit of energy in it, indicating also that it is responsive to a change in illumination of the scene.
- peak gradient of the edge divided by a contrast of the edge is a good edge-sharpness measure, as it has a unit of the reciprocal of a unit of length.
- the count of gradients whose gradient values exceed a certain predetermined fraction of the peak gradient is a good edge-sharpness measure, as the count is simply a measure of distance quantized to the size of the spacing between contiguous gradients, hence having a unit of length.
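This count-based measure can be sketched as below; the strict inequality and the 0.5 default fraction are assumptions for illustration:

```python
def sharpness_by_count(gradients, fraction=0.5):
    """Edge-sharpness as the count of gradients whose magnitude exceeds a
    predetermined fraction of the peak gradient magnitude.  The count is a
    distance quantized to the gradient spacing (a unit of length), so it
    is unchanged when the image data, hence all gradients, are scaled."""
    peak = max(abs(g) for g in gradients)
    return sum(1 for g in gradients if abs(g) > fraction * peak)
```

Scaling every gradient by, say, 0.8 scales the peak by the same factor, so the count, and therefore the measure, is unchanged.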
- a gradient may be generated from a first-order edge detection operator used to detect the edge, or may be generated from a different first-derivative operator (i.e. gradient operator).
- the Sobel operator or even a second-order edge detection operator, such as a Laplacian operator
- the Roberts operator whose kernels are simply [-1, +1] and its transpose, which is simply
- Edges may be detected with a higher-order edge detection operator than first-order independently of one or more derivative operators used in generating the edge- sharpness measure or any of the shape measures described in the next section.
- the edge-sharpness measure should have a unit of a power of a unit of length, for example a square of a unit of length, a reciprocal of a unit of length, the unit of length itself, or a square-root of a unit of length.
- edge-sharpness measure can replace the edge width in the focus signal generator 120.
- the correction factor as described above with reference to Figures 6A-6D and Figure 7 (hereinafter “width correction factor") should be converted to adopt the same power.
- width correction factor For example, if the edge-sharpness measure is peak gradient divided by a contrast, which gives it a unit of the reciprocal of a unit of length, then the appropriate correction factor for the edge-sharpness measure is the reciprocal of the correction factor described with reference to Figures 6A-6D and Figure 7 above.
- the slant correction factor for the edge-sharpness measure should be a square of the width correction factor.
- FIG. 24B illustrates a sequence of gradients across an edge plotted against distance in multiples of a spacing between successive gradients, and an area A3 of a shaded region under the plotted sequence of gradients.
- the region is defined between two gradient levels L₁ and L₂, which may be defined with respect to an interpolated peak gradient value (alternatively, the peak gradient value) of the sequence of gradients as, for example, a predetermined portion of the
- the shaded region has four corners of interpolated gradients .
- the area divided by the interpolated peak gradient value is a good edge-sharpness measure, as it has a unit of length. It is noted that alternative definitions of the region are possible. For example, the region may be bounded from above not by the gradient level L₁ but by the sequence of gradients.
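A sketch of the area-based measure, assuming the region is delimited by two levels defined as fractions of the peak and integrated by the trapezoidal rule at unit gradient spacing (all illustrative choices):

```python
def area_sharpness(gradients, l1=0.75, l2=0.25):
    """Hypothetical sketch of Figure 24B: area of the region under the
    sequence of gradients between levels l2*peak and l1*peak, normalized
    by the peak so the result has a unit of length and is invariant to
    scaling the image data.  Gradients are assumed non-negative here."""
    peak = max(gradients)
    hi, lo = l1 * peak, l2 * peak
    # Clip each gradient to the band [lo, hi] and shift down by lo.
    clipped = [max(min(g, hi) - lo, 0.0) for g in gradients]
    # Trapezoidal rule with unit spacing between successive gradients.
    area = sum((a + b) / 2.0 for a, b in zip(clipped, clipped[1:]))
    return area / peak
```

Because both the area and the peak scale linearly with the image data, their quotient is unchanged by a uniform scaling, which is the invariance the text asks of a good edge-sharpness measure.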
- FIG. 24D illustrates a sequence of gradients of samples of the image data across an edge plotted against distance in multiples of a spacing between successive gradients, a center of gravity 3401 (i.e. center of moment), and distances u₂, u₃, u₄, u₅ and u₆ of the gradients (having gradient values g₂, g₃, g₄, g₅ and g₆) from the center of gravity.
- a good edge-sharpness measure is a k-th central moment of the gradients about the center of gravity, namely a weighted average of the k-th powers of the distances of the gradients from the center of gravity, with the weights being magnitudes of the respective gradients, k being an even integer.
- k can be 2, which makes the edge-sharpness measure a variance as if the sequence of gradients were a probability distribution.
- the edge-sharpness measure has a unit of a square of a unit of length. More generally, the edge-sharpness measure may be a function of distances of a plurality of gradients of a sequence of gradients from a position
- the predefined position may be an interpolated peak position for the sequence of gradients.
- a proper subset of the gradients of the edge may be chosen according to a predefined criterion to participate in this calculation.
- the gradients may be required to have gradient values at least a predetermined fraction of the peak gradient or gradient value of an
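The k-th central moment measure (with k = 2, the variance-like case) might be sketched as follows, treating gradient magnitudes as weights over integer gradient positions; the representation is illustrative:

```python
def central_moment_sharpness(gradients, k=2):
    """Sketch of the k-th central moment measure of Figure 24D: a weighted
    average of the k-th power of the distances of the gradients from their
    center of gravity, with gradient magnitudes as weights.  For k = 2 the
    result is a variance-like quantity with a unit of length squared, and
    it is invariant to scaling all gradients by a common factor."""
    mags = [abs(g) for g in gradients]
    total = sum(mags)
    # Center of gravity (center of moment) of the gradient magnitudes.
    center = sum(i * m for i, m in enumerate(mags)) / total
    return sum(m * (i - center) ** k for i, m in enumerate(mags)) / total
```

A sharper edge concentrates its gradient magnitudes near the center of gravity, giving a smaller moment; a blurred edge spreads them out, giving a larger one.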
- FIG. 25 illustrates a sequence of second derivatives of a sequence of samples of image data across an edge plotted against distance in multiples of a spacing between successive second derivatives, showing (a) a width Wₛ between a pair of positive and negative peaks, (b) a width W₁ between a pair of outermost interpolated second derivatives that have a given magnitude h₁, (c) a width W₂ between an inner pair of interpolated second derivatives that have the given magnitude h₁, and (d) a distance D₁ from a zero-crossing (between the pair of positive and negative peaks) to an outermost interpolated second derivative that has the given magnitude h₁.
- the edge-sharpness measure may be a weighted sum of distances from the zero-crossing (between the pair of positive and negative peaks, and may be interpolated) of the second derivatives, with the weights being magnitudes of the respective second derivatives. More generally, the edge-sharpness measure may be a function of distances of a plurality of second derivatives across the edge from a predefined position relative to the plurality of second derivatives. Other than the zero-crossing position, a center of gravity is a good candidate for the predefined position, with the weights being magnitudes of the second derivatives. Yet another good candidate for the predefined position may be the midway point between the pair of positive and negative
- FIG. 26 illustrates a sequence of samples of image data from pixels of an edge plotted against distance in multiples of a spacing between contiguous pixels, showing (a) a width W_edge and a contrast C_edge between two samples at two ends of the edge, (b) a peak gradient value g_peak (generated by the Roberts operator) between a pair of samples that has a steepest change of sample value, (c) a narrowest undivided portion of the edge that has contrast C₁ and width W_part1, and (d) a narrowest undivided portion of the edge that has contrast C₂ and width W_part2.
- the peak gradient value g_peak divided by the contrast C_edge is a good edge-sharpness measure.
- the width W_edge is another good edge-sharpness measure.
- the widths W_part1 and W_part2 are also good alternatives.
- the contrasts C₁ and/or C₂ may be defined to be a predetermined portion of the edge contrast C_edge. Alternatively, any one of them may be defined to be a
- the "narrowest undivided portion" may be delimited by interpolated samples of image data, such as shown in squares in Figure 26, or by
- Figures 23A and 23B show a pair of symmetrical
- the EDWM unit implements such a method to qualify edges for participation in generating the focus signal.
- the method is based on taking at least two measurements made from samples of the image data within a predetermined neighborhood of the edge (hereinafter "shape measures") .
- the predetermined neighborhood may be all the image data samples from which all gradients and/or second derivatives within the edge are computed from for detection of the edge. Alternatively, the predetermined
- neighborhood may be all pixels within a predetermined distance of the edge, for example 8 pixels, or a minimal distance sufficient to include all image data samples used for detecting the edge and/or computing the edge-sharpness measure of the edge.
- Each shape measure of the edge is measured from at least two sample-pair differences, where each sample-pair difference is a difference between a pair of samples of image data, these samples being from a sequence of samples of image data arrayed across the edge.
- the method may determine to reduce a relative extent to which the edge contributes to the focus signal (as compared with other edges that contribute to the focus signal) depending on values of at least two shape measures for the edge. For example, where the focus signal is computed as a weighted average of all edges that are allowed to contribute, the weights having already been determined through other methods (such as the length filter described in the next section), the weight of the edge may be further reduced as compared to other edges by multiplying with a factor (hereinafter "shape-qualifying factor") computed from this determination as the relative extent.
- the method may determine whether together the at least two shape measures meet a criterion in order to determine the relative extent to reduce the edge's contribution to the focus signal.
- the criterion may be expressed as a boundary separating a region of no or little reduction in the relative extent from all other regions in a two-dimensional scatter plot of the first measure against the second measure. Contours may be defined such that pairs of first-measure and second-measure values that will be assigned the same relative extent lie on the same contour, and the relative extent is read from a memory by looking up which contour the pair for the edge belongs to.
- the method may evaluate whether one of the at least two shape measures meets a criterion that depends on one or more of the other shape measures.
- the criterion may require that a first shape measure is within a predetermined tolerance of an expected value, which is a function of the second shape measure. Following from the evaluating, the edge may be omitted or de-emphasized in
- the relative extent may be a function that varies between one value (e.g. one) for satisfying the criterion to another value for not satisfying the criterion (e.g. zero) and having a smooth transition with respect to variation of the difference between the value of the first measure and the expected value, and the relative extent can be used to reduce a weight of the edge in the focus signal by multiplying the weight with this relative extent prior to calculating the focus signal, where the focus signal is a weighted average from edges that contribute to it.
- Such function preferably assumes a shape of a sigmoid function with respect to the difference.
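A sigmoid-shaped relative extent could be sketched as below; the tolerance and steepness parameters are invented purely for illustration:

```python
import math

def shape_qualifying_factor(measured, expected, tol=0.5, steepness=8.0):
    """Hypothetical sketch: a relative extent that varies smoothly (as a
    sigmoid) from about 1 when the first shape measure is near its expected
    value (predicted from the second measure) toward 0 when the deviation
    greatly exceeds the tolerance `tol`.  Parameter names and values are
    illustrative, not from the patent."""
    deviation = abs(measured - expected)
    return 1.0 / (1.0 + math.exp(steepness * (deviation - tol)))
```

The factor would then multiply the edge's weight prior to forming the weighted-average focus signal, so edges whose shape measures disagree with expectation are de-emphasized rather than abruptly cut off.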
- the method may compute the relative extent as a function of the at least two shape measures.
- the relative extent may be computed as X/E, where E is the expected value (found from plugging the measured value for the edge for the second measure into the function) and X is the measured value for the first shape measure.
- the expected value of the first measure in terms of the second measure may be expressed in a mathematical formula recorded in a computer-readable medium, such as a non-volatile memory (for example flash memory), and retrieved into the EDWM unit for execution.
- a lookup table stored in the computer-readable medium can be used. Figure 28 shows a scatter plot of four pairs of values of first and second shape measures (w₁b, w₁a), (w₂b, w₂a), (w₃b, w₃a), (w₄b, w₄a), and illustrates that a value w_rb for the first shape measure is found by interpolation from a value w_ra for the second shape measure.
- the lookup table may store pairs of values of the first and second measures, and the EDWM unit may retrieve pairs for interpolation to find the expected value of one shape measure given a measured value of the other shape measure.
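The interpolation of Figure 28 might be sketched as follows, with the table stored as (second-measure, first-measure) pairs sorted by the second measure (an assumed layout):

```python
def expected_first_measure(table, second_value):
    """Sketch of the Figure 28 lookup: linearly interpolate the expected
    first shape measure from the two stored pairs that bracket the
    measured second-measure value; values outside the table are clamped
    to the nearest stored entry (an illustrative policy)."""
    if second_value <= table[0][0]:
        return table[0][1]
    if second_value >= table[-1][0]:
        return table[-1][1]
    for (a0, b0), (a1, b1) in zip(table, table[1:]):
        if a0 <= second_value <= a1:
            t = (second_value - a0) / (a1 - a0)
            return b0 + t * (b1 - b0)
```

The expected value so obtained can then be compared against the measured first shape measure, for example via a tolerance or a ratio X/E, to decide the relative extent.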
- the method does not determine the relative extent on the basis of an extent to which a sequence of gradients across the edge departs from perfect reflection symmetry.
- although Figures 23A and 23B each plot a perfectly symmetrical sequence of gradients of an edge, there are edges the method will discriminate against despite the edges having perfect reflection symmetry in their respective sequences of gradients. It will be clear from the examples later in this section that perfect reflection symmetry in a sequence of gradients across an edge will not prevent the edge from being discriminated against (i.e. having its relative extent reduced).
- there exist sequences of gradients having perfect reflection symmetry such that, if an edge has such a sequence of gradients across itself, its relative extent will still be reduced under this method.
- a sequence may be {0.1, 0.15, 0.2, 0.9, 0.9, 1, 0.9, 0.9, 0.2, 0.15, 0.1}.
- such a sequence may be {0, 0.2, 0.2, 0.7, 0.7, 1, 0.7, 0.7, 0.2, 0.2, 0}, which is sequence 8002 shown in Figure 25, taking the shape of a hat.
- a sequence may be {0, 0.25, 0.5, 0.75, 1, 0.75, 0.5, 0.25, 0}, which is shown as sequence 8000 in Figure 25 in the shape of an isosceles triangle.
- One of the shape measures may be the edge-sharpness measure, but this is not necessary. Where the edge-sharpness measure is not one of the shape measures and the edge is disqualified (i.e. omitted) from contributing to the focus signal, computing of the edge-sharpness measure for the edge may be omitted.
- the shape measures are mutually independent in a sense that any shape measure cannot be computed from the other shape measures without further involving at least one sample of image data from the predetermined neighborhood of the edge for which said any shape measure is computed.
- a shape measure is not computed from one positive gradient and one negative gradient for every edge for which the shape measure is computed. For most edges, finding an interpolated gradient on the edge does not require interpolating between a positive gradient and a negative gradient.
- evaluating a shape measure for an edge does not depend upon detection of another edge. Frequently, an edge has its own distribution of normalized gradients that is independent of another edge, and a shape measure for the edge formulated based on such a characteristic is not affected by whether or not the other edge is detected, especially if the predetermined neighborhoods of the other edge and this edge do not overlap.
- a shape measure is not chosen to measure an edge unless a 20% decrease in illumination of the scene will not result in a difference between whether the edge is omitted or allowed to contribute to the focus signal.
- a shape measure is preferably not chosen to measure an edge unless a 20% decrease in the image signal values within the predetermined neighborhood will not result in a change to whether the edge is omitted or accepted to contribute to the focus signal.
- a first shape measure may be the width W₂ between two interpolated gradients at the upper normalized gradient level 3310, and a second shape measure the width W₁ measured between a pair of interpolated
- the second shape measure may also be used as the edge-sharpness measure for the edge of this example.
- the edge-sharpness measure may be measured at a third normalized gradient level different from the upper 3310 and lower 3312 normalized gradient levels.
- either the second measure or the edge-sharpness measure may be
- a second moment of the distances is one example of such a function of distances of the gradients, or, for example, a distance between the outer pair of interpolated second derivatives interpolated from a sequence of second
- an edge-sharpness measure that uses a normalization with respect to a power of a peak gradient value or interpolated peak gradient value may bypass the normalization to generate a shape measure that is not free of a unit of energy within the unit of the shape measure.
- for example, an edge-sharpness measure is made by measuring an area of a region under a sequence of gradients (see Figure 24B and its related discussion under the heading "Edge-sharpness measures") and normalizing the area by a peak gradient value or
- the normalization may be avoided, resulting in an area for a shape measure having a unit of a unit of gradient times a unit of length, thus a unit of energy.
- a shape measure can draw on other methods. Further examples are described below.
- FIG. 24A illustrates a symmetrical sequence of gradients across an edge plotted against distance in multiples of a spacing between successive gradients, and a normalized gradient value of an interpolated gradient at a predefined distance D₃ from a peak gradient.
- This sequence of gradients has a peak gradient 3212.
- a width W₀ measured at normalized gradient level L₀ can be used as the second shape measure.
- the distance D₃ may be defined with respect to width W₀, for example as a
- FIG. 24C illustrates a sequence of gradients of an image signal across an edge plotted against distance in multiples of a spacing between successive gradients, and a slope S_L (i.e.
- both shape measures may be chosen such that neither is affected by scaling the samples of image data from the aforementioned predetermined neighborhood of the edge. For example, as in the above discussion with reference to Figure 23B, both widths W₁ and W₂ are not affected by scaling the image data that enter the computation of the gradients in the sequence of gradients displayed. Alternatively, both measures may be chosen so that they are both affected by the same scaling.
- the first measure may be the interpolated gradient value L₃ at a predefined distance D₃ from an interpolated peak or a peak gradient of the sequence of gradients, as shown in Figure 24A and discussed above.
- the second measure may be an area of a region under the sequence of gradients plotted against distance, as shown in Figure 24B and discussed above, but without normalizing.
- a quantity from an edge is said to be normalized when it is divided by, by default unless otherwise
- peak gradient 3212 has a normalized value of exactly 1
- the interpolated peak 3270 is different from the peak gradient 3212, and the gradients shown in Figure 24C are normalized with respect to the interpolated peak 3270, not the peak gradient 3212.
- length filter 212 creates a preference for edges that each connects to one or more edges of a similar orientation.
- a group of edges that are similarly oriented and mutually connected within the group (“concatenated edge”) is less likely to be due to noise, compared with an isolated edge that does not touch any other edge of similar orientation.
- the more edges of a similar orientation thus concatenated together the lesser the chance of them being due to noise.
- the probability of the group being due to noise falls off exponentially as the number of edges within the group increases, and far faster than linearly.
- This property can be harnessed to reject noise, especially under dim-lit or short-exposure situations where the signal-to-noise ratio is weak, e.g. less than 10, within the image or within the region of interest.
- the preference may be implemented in any reasonable method to express such preference. The several ways described below are merely examples .
- a first method is to eliminate edges that belong to vertical/horizontal concatenated edges having lengths lesser than a concatenated length threshold.
- concatenated length threshold may be larger when the region of interest is dimmer.
- the concatenated length threshold may start as small as 2, but increases to 8 as a signal-to-noise ratio within the region of interest drops to 5.
- the concatenated length threshold may be provided by the processor 112, 112', 112", for example through a 'length command' signal, shown in Figure 3, as part of signals 132. Alternatively, the threshold may be calculated according to a formula within the focus signal generator.
- a second method is to provide a length-weight in the length filter 212 for each edge and apply the length-weight to a calculation of focus signal in the focus signal calculator 210.
- An edge that is part of a longer concatenated edge receives a larger weight than one that is part of a shorter concatenated edge.
- the length-weight may be a square of the length of the
- each edge may be multiplied by a factor A/B before summing all contributions to form the focus signal, where B is a sum of the length-weights of all edges that enter the focus signal calculation, and A is a length- weight of the edge.
- the edge-width histogram, which may be output as part of signals 134, may have edges that are members of longer concatenated edges contribute more to the bins corresponding to their respective edge widths, and thus be preferred, instead of all edges contributing the same amount, e.g. +1.
- each edge may contribute A/C, where C is an average value of A across the edges.
- the narrow-edge count may have edges that are members of longer concatenated edges contribute more.
- the contribution from each edge may be multiplied by A/D, where D is an average of A among edges that are counted in the narrow-edge count.
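Combining the square-of-length weight option with the A/B contribution rule, a sketch might be (the edge representation is an assumption):

```python
def length_weighted_focus_signal(edges):
    """Sketch of the second method: each edge carries its width and the
    length of the concatenated edge it belongs to.  The length-weight A
    here is the square of that length (one option mentioned in the text),
    and each edge contributes A/B of its width, B being the sum of the
    length-weights of all edges entering the calculation."""
    weights = [e['concat_length'] ** 2 for e in edges]
    b = sum(weights)
    return sum(a * e['width'] for a, e in zip(weights, edges)) / b
```

An isolated edge (concatenated length 1) thus contributes far less than an edge belonging to a concatenated edge of length 3, which matches the stated goal of suppressing edges likely caused by noise.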
- a group of N vertical (horizontal) edges where, with the exception of the top (leftmost) and the bottom
- Figure 8 illustrates a vertical concatenated edge and its length.
- cells R2C3 and R2C4 form a first vertical edge
- cells R3C3, R3C4, and R3C5 together form a second vertical edge
- cells R4C4 and R4C5 together form a third vertical edge.
- the first and the third vertical edges each touches only one other vertical edge
- the second vertical edge touches two other vertical edges.
- the first, second and third vertical edges together form a vertical concatenated edge having a length of 3.
- where a vertical (horizontal) concatenated edge has two or more branches, i.e. having two edges in a row (column), the length may be defined as the total number of edges within the
- the length may be defined as the vertical (horizontal) distance from a topmost
- a definition of a length for a concatenated edge shall have a property that the length is proportional to the number of member edges within the concatenated edge at least up to three. This is to be consistent with the previously stated reasoning that more edges being mutually connected by touching each other exponentially reduces a probability that the concatenated edge is caused by a noise, and as such the length should express a proportionality to the number of member edges within the concatenated edge up to a reasonable number that sufficiently enhances a confidence in the concatenated edge beyond that for a single member.
- the length filter 212 may de-emphasize or eliminate and thus, broadly speaking, discriminate against an edge having a concatenated length of one.
- the length filter 212 may discriminate against an edge having a concatenated length of two.
- the length filter 212 may discriminate against an edge having a concatenated length of three, to further reduce an influence of noise.
- the length filter 212 may do any one of these actions under a command from the
- the Length Filter 212 may be inserted before the focus signal calculator 210, wherein the edges processed by the Length Filter 212 are those that pass through the width filter 209 depending on the 'fine' signal.
- the fine switch 220 may be removed so that the focus signal calculation unit 210 receives a first set of data not filtered by the width filter 209 and a second set filtered, and for each calculates a different focus signal, gross focus signal for the former, fine focus signal for the latter, and outputs both to the processor 112, 112' .
- Width Filter Refer next to Figure 3 to understand an operation of the Width Filter 209.
- Figure 11 plots a histogram of edge widths, i.e. a graph of edge counts against edge widths. At edge width of 2, i.e. the aforementioned sharp_edge_width, there is a peak, indicating a presence of sharp edges in the image. At edge widths of 4 and 5, however, there are also peaks, indicating edges that are
- edges whose widths lie outside a predetermined range may be de-emphasized using the Width Filter 209.
- the Width Filter 209 may create a lesser weight for edge widths outside the narrow-edge range for use in the focus signal calculation. For example, edge widths within the range may be assigned a weight of 1.0, whereas edge widths more than +1 to the right of the upper limit 840 are assigned a weight of 0, and edge widths in between are assigned weights between 0 and 1.0, falling monotonically with edge width. Alternatively, the Width Filter 209 may prevent such edges from entering the focus signal calculation altogether. Appropriate upper and lower limits 830, 840 depend on several factors, including crosstalk in the pixel array 108, the
- Appropriate upper and lower limits 830, 840 and the parameter sharp_edge_width may be determined for the image pickup apparatus 102, 102' by capturing images of various degrees of sharpness and inspecting the edge width
- an appropriate lower and upper limit may be 1.5 and 3, respectively, and the sharp_edge_width may be set to 2.0.
- sharp_edge_width may be determined as above and provided to the focus signal generator 120, 120', 120" by the processor 112, 112".
- the fine focus signal thus calculated de-emphasizes edge widths outside the narrow-edge range.
- the Width Filter 209 may calculate a total count of the edges whose edge widths fall within the narrow-edge range and output as part of output signals 134. Narrow-Edge Count may be input to and used by the focus system controller (processor 112) to detect a presence of sharp image and/or for initiating tracking.
- Focus Signal Referring next to the focus signal calculator 210 of
- the focus signal calculator 210 receives edge widths and outputs a focus signal.
- the weight at each edge width may be the edge count for the edge width multiplied by the edge width itself, i.e. wᵢ = cᵢeᵢ.
- preferences from the Width Filter 209 that are expressed in terms of weights may be further multiplied to each edge width.
- the focus signal may be calculated as ΣΩᵢwᵢeᵢ / ΣΩᵢwᵢ, where Ωᵢ are the weights from the Width Filter 209. If control signal 'fine' is ON and 'exclude' is OFF, the focus signal would be a value very close to the sharp edge width of 2.0 for the example shown in Figure 11, indicating that among object details within the focus distance range that would produce edge widths between 2.0 and 3.0, most are actually in sharp focus.
- when control signal 'fine' is OFF and 'exclude' is OFF,
- the focus signal may be a value close to 5.0, indicating that there are substantial details of the image that are out of focus .
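The weighted-average calculation, with the optional Width Filter preferences folded in as multiplicative weights (here called omega, an assumed name), might be sketched as:

```python
def focus_signal(edge_widths, counts, omega=None):
    """Sketch of the focus signal calculation: a weighted average of edge
    widths e_i with weights w_i = c_i * e_i (count times width), optionally
    further multiplied by Width Filter preferences omega_i, i.e.
    sum(omega_i * w_i * e_i) / sum(omega_i * w_i)."""
    if omega is None:
        omega = [1.0] * len(edge_widths)
    w = [c * e for c, e in zip(counts, edge_widths)]
    num = sum(o * wi * e for o, wi, e in zip(omega, w, edge_widths))
    den = sum(o * wi for o, wi in zip(omega, w))
    return num / den
```

With the 'fine' preference suppressing wide-edge bins (omega near 0 outside the narrow-edge range), the result collapses toward the sharp edge width; without it, out-of-focus details pull the gross focus signal toward larger values.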
- Turning ON the fine switch 220 allows the focus signal to respond more to objects slightly blurred while less to those that are completely blurred.
- the fine switch 220 is ON, we shall refer to the focus signal as a fine focus signal, whereas when the fine switch 220 is OFF, a gross focus signal.
- the emphasis expressed by the Length Filter 212 may be incorporated into the focus signal in one of several ways, such as eliminating an edge that is de-emphasized from entering the focus signal calculation, or reducing a weight of the edge's contribution towards a count ei of a corresponding edge width bin.
- Figure 15 sketches a response of the fine focus signal to an adjustment of the focus position in the vicinity of where an object is in sharp focus.
- the fine focus signal reaches a minimum value, approximately at sharp_edge_width, where the focus position brings an image into sharp focus, and increases otherwise.
- the fine focus signal may be used for tracking objects already in focus or very nearly so. For moving objects, the fine focus signal allows the focus control system to keep the objects in sharp focus even as the focus distance continues to change. The fine focus signal may also be used to acquire a sharp focus.
- the focus control system may respond by adjusting the focus position to bring the fine focus signal value towards sharp_edge_width, thus centering the peak of edge widths due to the object at the edge width value equal to sharp_edge_width.
- Figures 12-16 illustrate how the narrow-edge count, gross focus signal, and fine focus signal may be used to perform focus control to achieve sharp images.
- Figure 12 illustrates an outdoor scene having 3 groups of objects at different focus distances: "person" in the foreground, "mountain, sun, and horizon" in the background, and "car" in between.
- Figure 13 is an illustration of the narrow-edge count plotted against time when the focus position of the focus lens 104 sweeps from far to near for the scene illustrated in Figure 12.
- the narrow-edge count peaks when the focus position brings an object into a sharp image on the pixel array 108.
- the narrow-edge count plot exhibits 3 peaks, one each for "mountain, sun, and horizon", "car", and "person", in this order, during the sweep.
- Figure 14 shows the gross focus signal plotted against time.
- the gross focus signal exhibits a minimum when the focus position is near each of the 3 focus positions where the narrow-edge count peaks. However, at each minimum, the gross focus signal is not at the sharp edge width level, which is 2.0 in this example, due to larger edge widths contributed by the other objects that are out of focus.
- Figure 15 illustrates the fine focus signal plotted against the focus position in the vicinity of the sharp focus position for "car” in the scene of Figure 12.
- the fine focus signal achieves essentially the sharp edge width, which is 2.0 in this example, despite the presence of blurred objects ("person" and "mountain, sun, and horizon").
- a focus control system may use the gross focus signal to search for the nearest sharp focus position in a search mode. It can move the focus position away from the current focus position to determine whether the gross focus signal increases or decreases. For example, if the gross focus signal increases (decreases) when the focus position moves inwards (outwards), there is a sharp focus position farther from the current focus position.
- the processor 112, 112', 112" can then provide a focus drive signal to move the focus lens 104 in the direction towards the adjacent sharp focus position.
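The search-mode direction test above reduces to comparing the gross focus signal before and after a small probe move. A minimal sketch; the function name, probe convention, and return labels are assumptions for illustration, not the patent's interface.

```python
def search_direction(gross_after_inward_probe, gross_before):
    """Decide which way to drive the focus lens in search mode: probe a
    small inward focus move; if the gross focus signal increased (focus
    got worse), the nearest sharp focus position lies outward, and vice
    versa."""
    return "outward" if gross_after_inward_probe > gross_before else "inward"
```

The processor would then issue a focus drive signal toward the indicated direction and repeat until the gross focus signal reaches a minimum.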
- a focus control system may use the fine focus signal to track an object already in sharp focus to maintain the corresponding image sharp (thus a "tracking mode") despite changes in the scene, movement of the object, or movement of the image pickup apparatus.
- the fine focus signal level is stable despite such changes.
- a change in the fine focus signal suggests a change in the focus distance of the object from the image pickup apparatus.
- any shift in the fine focus signal level immediately informs the processor 112, 112', 112" of a change in the focus distance of the object.
- the processor 112, 112', 112" can then determine a direction and cause the focus lens 104 to move to bring the fine focus signal level back to the "locked" level.
- the image pickup apparatus 102, 103, 103', 103" is able to track a moving object.
- a focus control system, e.g. as implemented in processor 112, 112', 112", may use the narrow-edge count to trigger a change from a search mode to a tracking mode.
- the focus control system uses the fine focus signal to "lock" the object.
- the focus control system may use the gross focus signal to identify the direction to move and regulate the speed of movement of the lens.
- the processor 112, 112', 112" may switch into the tracking mode and use the fine focus signal for focus position control upon detection of a sharp rise in the narrow-edge count or a peaking or both.
- a threshold, which may be different for each different sharp focus position, may be assigned to each group of objects found from an end-to-end focus position "scan"; subsequently, when the narrow-edge count surpasses this threshold, the focus control system may switch into the tracking mode for that group of objects.
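The two triggers described above (surpassing a per-group threshold, or a sharp rise/peaking of the narrow-edge count) can be combined in a small predicate. A hypothetical sketch; the function name and the rise factor are illustrative assumptions.

```python
def should_switch_to_tracking(prev_count, curr_count, threshold, rise=1.5):
    """Switch from search mode to tracking mode when the narrow-edge
    count surpasses the per-group threshold, or when it rises sharply
    relative to the previous sample."""
    sharp_rise = prev_count > 0 and curr_count >= rise * prev_count
    return curr_count > threshold or sharp_rise
```

Once in tracking mode, the fine focus signal (rather than the gross focus signal) would drive the focus position control.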
- FIG. 16 illustrates an image pickup apparatus 102 having a display 114, an input device 107 comprising buttons, and a selection marker 1920 highlighted in the display 114. A user can create, shape, and maneuver the selection marker 1920 using the input device 107. Although shown in this example to comprise buttons, input device 107 may comprise a touch-screen overlaying the display 114 to detect positions of touches or strokes on the display 114. Input device 107 and processor 112, 112', 112", or a separate controller for the input device, may determine the selection region.
- parameters for describing the selection region may be transmitted to the focus signal generator 120, 120', 120" over bus 132 (or internally within the processor 112 in the case where focus signal generator 120 is part of the processor 112).
- the focus signal generator 120 may limit the focus signal calculation or the narrow-edge count or both to edges within the selection region described by said parameters, or de-emphasize edges outside the selection region. Doing so de-emphasizes unintended objects in the focus signal; then even the gross focus signal will exhibit a single minimum, with a minimum level within 1.0 or less of the sharp edge width.
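Restricting the calculation to the selection region might look like the following. The (x, y, width) edge representation and the rectangular (x0, y0, x1, y1) region are assumptions for illustration; as the text notes, out-of-region edges could instead be down-weighted rather than dropped.

```python
def edges_in_selection(edges, region):
    """Keep only edges whose position falls inside the user's selection
    region, so that only they contribute to the focus signal and the
    narrow-edge count."""
    x0, y0, x1, y1 = region
    return [e for e in edges if x0 <= e[0] <= x1 and y0 <= e[1] <= y1]
```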
- Focus signal generator 120' outputs statistics of edges and edge widths.
- the edge-width statistics that the generator 120' outputs may be one or more of the following: an edge-width histogram comprising edge counts at different edge widths; an edge width where the edge width count reaches maximum; a set of coefficients
- Census Unit 240 may receive data computed in one or more of the other units within the focus signal generator 120' to calculate statistics of edge widths.
- the focus signal generator 120' may output a signal that has an indication of a distribution of edge widths.
- the edge-width statistics thus provided in signals 134 to an alternative embodiment of processor 112' in an alternative auto-focus image pickup apparatus 102' may be used by the processor 112' to compute a gross and/or fine focus signal and a narrow-edge count in accordance with methods discussed above or equivalent thereof.
- any data computed in the focus signal generator 120' may be output to the processor 112' as part of the output signals 134.
- the processor 112' may internally generate a focus signal and/or a narrow-edge count in addition to the functions included in the processor 112 of Figure 1.
- The pixel array 108, A/D Converter 110, color interpolator 148, and generator 120' may reside within a package 142, together comprising an image sensor 150', separate from the processor 112'.
- Figure 19 shows an alternate embodiment of an auto- focus image pickup system 103.
- the system 103 may include a partial mirror 2850, a full mirror 2852, an optical lowpass filter 2840, a main pixel array 2808, and a main A/D Converter 2810.
- the partial mirror 2850 may split the incoming light beam into a first split beam and a second split beam, one transmitted, the other reflected.
- the first split beam may further pass through the optical lowpass filter 2840 before finally reaching the main pixel array 2808, which detects the first split beam and converts it to analog signals.
- the second split beam may be reflected by the full mirror 2852 before finally reaching the auxiliary pixel array 108".
- the ratio of light intensity of the first beam to the second beam may be 1-to-1 or greater than 1-to-1.
- the ratio may be 4-to-1.
- the main pixel array 2808 may be covered by a color filter array of a color mosaic pattern, e.g. the Bayer pattern.
- the optical lowpass filter 2840 prevents the smallest light spot focused on the pixel array 2808 from being so small as to cause aliasing. Where a color filter of a mosaic pattern covers the pixel array 2808, aliasing can give rise to color moire artifacts after color interpolation.
- the smallest diameter of a circle encircling 84% of the visible light power of a light spot on the main pixel array 2808 may be kept larger than one and a half pixel width but less than two pixel widths by use of the optical lowpass filter.
- the optical lowpass filter 2840 may be selected to make the light spot 6.7um or larger in diameter.
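The arithmetic linking the spot-diameter bound to the 6.7 um figure can be made explicit. This sketch assumes a main pixel width of about 4.5 um, which is not stated in the text but makes 1.5 pixel widths come out near the quoted 6.7 um.

```python
def min_spot_diameter(main_pixel_width_um, factor=1.5):
    """Lower bound on the diameter of the circle encircling 84% of a
    light spot's visible power: between 1.5 and 2 main pixel widths,
    per the text. With an assumed ~4.5 um main pixel width, 1.5 widths
    gives ~6.75 um, consistent with the 6.7 um figure above."""
    assert 1.5 <= factor < 2.0  # the text's stated range
    return factor * main_pixel_width_um
```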
- the auxiliary pixel array 108" may comprise one or more arrays of photodetectors. Each of the arrays may or may not be covered by a color filter array of a color mosaic pattern.
- the array(s) in auxiliary pixel array 108" output image(s) in analog signals that are converted to digital signals 130 by A/D Converter 110. The images are sent to the focus signal generator 120.
- the color interpolator 148 may generate the missing colors for images generated from pixels covered by color filters. If auxiliary pixel array 108" comprises multiple arrays of photodetectors, each array may capture a sub-image that corresponds to a portion of the image captured by the main pixel array 2808.
- the multiple arrays may be physically apart by more than a hundred pixel widths, and may or may not share a semiconductor substrate. Where the pixel arrays within auxiliary pixel array 108" do not share a semiconductor substrate, they may be housed together in a package (not shown).
- Main A/D Converter 2810 converts analog signals from the Main Pixel Array 2808 into digital main image data signal 2830, which is sent to the processor 112, where the image captured on the Main Pixel Array 2808 may receive image processing such as color interpolation, color correction, and compression/decompression.
- An array of photodetectors in the auxiliary pixel array 108" may have a pixel width ("auxiliary pixel width") that is smaller than a pixel width of the main pixel array 2808 ("main pixel width").
- the auxiliary pixel width may be as small as half of the main pixel width. If an auxiliary pixel is covered by a color filter and the auxiliary pixel width is less than 1.3 times the smallest spot of visible light without optical lowpass filtering, a second optical lowpass filter may be inserted in front of the auxiliary array 108" to increase the smallest diameter on the auxiliary pixel array 108" ("smallest auxiliary diameter") to between 1.3 and 2 times as large, but still smaller than the smallest main diameter, preferably 1.5 times.
- the slight moire in the auxiliary image is not an issue, as the auxiliary image is not presented to the user as the final captured image.
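The auxiliary-array lowpass decision above can be written out as two small checks. This is a literal, hedged reading of the text: the 1.3x condition is taken exactly as stated, and the reference scale for the 1.3-2x target (the auxiliary pixel width) is an assumption; both function names are illustrative.

```python
def needs_second_lowpass(aux_pixel_width, smallest_spot_unfiltered):
    """A color-filtered auxiliary array warrants a second lowpass filter
    when the auxiliary pixel width is less than 1.3 times the smallest
    unfiltered visible-light spot (literal reading of the text)."""
    return aux_pixel_width < 1.3 * smallest_spot_unfiltered

def target_aux_diameter(aux_pixel_width, factor=1.5):
    """Target smallest auxiliary diameter: 1.3 to 2 times the auxiliary
    pixel width (assumed reference scale), preferably 1.5; it must also
    remain below the smallest main diameter."""
    assert 1.3 <= factor <= 2.0
    return factor * aux_pixel_width
```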
- Figure 22 illustrates how edge widths may vary about a sharp focus position for main images from the main pixel array 2808 (solid curve) and auxiliary images from the auxiliary pixel array 108" (dashed curve) .
- the auxiliary images give sharper slopes even as the main images reach the targeted sharp edge width of 2.
- the auxiliary image is permitted to reach below the targeted sharp edge width, since moire due to aliasing is not as critical in the auxiliary image, as it is not presented to the user as a final image. This helps to sharpen the slope below and above the sharp edge width.
- the sharper slope is also helped by the auxiliary pixel width being smaller than the main pixel width.
- the shaded region in Figure 22 indicates a good region within which to control the focus position to keep the main image in sharp focus.
- a change in focus position outwards will cause the edge width to increase in the auxiliary image, whereas a change inwards will cause it to decrease.
- a linear feedback control system may be employed to target the middle auxiliary edge width value within the shaded region and to use as its feedback signal the edge widths generated from the auxiliary images.
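One proportional step of such a linear feedback loop might look as follows. The gain value and the sign convention for the returned focus adjustment are assumptions for illustration, not values from the text.

```python
def focus_feedback_step(aux_edge_width, target_width, gain=0.4):
    """Proportional feedback step: return a focus-position adjustment
    that drives the auxiliary-image edge width toward the middle of the
    shaded region (target_width)."""
    return gain * (target_width - aux_edge_width)
```

Because the auxiliary curve has the sharper slope (Figure 22), small focus errors produce a clear feedback signal even while the main image remains at the targeted sharp edge width.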
- the auxiliary pixel array 108", A/D Converter 110, focus signal generator 120 together may be housed in a package 142 and constitute an auxiliary sensor 150.
- the auxiliary sensor 150 may further comprise a color interpolator 148.
- Figure 20 shows an alternative embodiment of auto-focus image pickup apparatus 103' similar to apparatus 103 except focus signal generator 120' replaces focus signal generator 120.
- the auxiliary pixel array 108", A/D Converter 110, focus signal generator 120' together may be housed in a package 142 and constitute an auxiliary sensor 150' .
- the auxiliary sensor 150' may further comprise a color interpolator 148.
- Figure 21 shows an alternate embodiment of auto-focus image pickup apparatus 103".
- the focus signal generator 120 and the processor 112" may be housed in a package 144 as a camera controller, separate from the auxiliary pixel array 108".
- the processor 112" is similar to processor 112 except that processor 112" receives images from the main pixel array 2808 as well as the auxiliary pixel array 108".
- the processor 112" may perform a color interpolation, a color correction, a compression/decompression, and a storing to memory card 116 for the images received on signal 2830 similar to the processing that the processor 112 may perform on signal 130 in Figure 2.
- the auto-focus image pickup system 102, 102', 103, 103', 103" may include a computer program storage medium (not shown) that comprises instructions that cause the processor 112, 112', 112", respectively, and/or the focus signal generator 120, 120' to perform one or more of the functions described herein.
- instructions may cause the processor 112 or the generator 120' to perform a slant correction for an edge width in accordance with the flowchart of Figure 7.
- the instructions may cause the processor 112' or the generator 120 to perform an edge width filtering in accordance with the above description for Width Filter 209.
- the processor 112, 112' or the generator 120, 120' may be configured to have a combination of firmware and hardware, or a pure hardware implementation for one or more of the functions contained therein.
- a slant correction may be performed in pure hardware and a length filter 212 performed according to instructions in firmware.
- Figure 30 shows yet another embodiment of focus signal generator 120'. This embodiment may be employed in any of the above image capture systems.
- any nonvolatile storage medium may be used instead, e.g. hard disk drive, wherein images stored therein are
- One or more parameters for use in the system may be stored in a nonvolatile memory in a device within the system.
- the device may be a flash memory device, the processor, or the image sensor, or the focus signal generator as a separate device from those.
- One or more formulae for use in the system, for example for calculating the concatenated length threshold or for calculating beta, may likewise be stored as parameters or as computer-executable instructions in a non-volatile memory in one or more of those devices.
Landscapes
- Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Optics & Photonics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Signal Processing (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Studio Devices (AREA)
- Automatic Focus Adjustment (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
- Focusing (AREA)
Priority Applications (12)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB1311741.1A GB2501414A (en) | 2010-12-07 | 2011-06-09 | Auto-focus image system |
BR112013014240A BR112013014240A2 (en) | 2009-12-07 | 2011-06-09 | method for generating a focus signal from a plurality of edges of an image, computer readable medium, fire signal generating circuit and image capture system |
MX2013006517A MX2013006517A (en) | 2010-12-07 | 2011-06-09 | Auto-focus image system. |
AU2011340207A AU2011340207A1 (en) | 2010-12-07 | 2011-06-09 | Auto-focus image system |
JP2013542632A JP2014504375A (en) | 2010-12-07 | 2011-06-09 | Autofocus image system |
EP11735545.3A EP2649787A1 (en) | 2010-12-07 | 2011-06-09 | Auto-focus image system |
DE112011104256.6T DE112011104256T5 (en) | 2010-12-07 | 2011-06-09 | Autofocus imaging system |
CA2820856A CA2820856A1 (en) | 2010-12-07 | 2011-06-09 | Auto-focus image system |
CN201180059197.0A CN103262524B (en) | 2011-06-09 | 2011-06-09 | Automatic focusedimage system |
SG2013044201A SG190451A1 (en) | 2010-12-07 | 2011-06-09 | Auto-focus image system |
US13/492,825 US20120314960A1 (en) | 2009-12-07 | 2012-06-09 | Auto-focus image system |
US14/635,046 US9251571B2 (en) | 2009-12-07 | 2015-03-02 | Auto-focus image system |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
IBPCT/IB2010/055649 | 2010-12-07 | ||
PCT/IB2010/055649 WO2011070514A1 (en) | 2009-12-07 | 2010-12-07 | Auto-focus image system |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/492,825 Continuation-In-Part US20120314960A1 (en) | 2009-12-07 | 2012-06-09 | Auto-focus image system |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2012076992A1 true WO2012076992A1 (en) | 2012-06-14 |
Family
ID=44628863
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/IB2011/052515 WO2012076992A1 (en) | 2009-12-07 | 2011-06-09 | Auto-focus image system |
Country Status (9)
Country | Link |
---|---|
EP (1) | EP2649787A1 (en) |
JP (1) | JP2014504375A (en) |
AU (1) | AU2011340207A1 (en) |
CA (1) | CA2820856A1 (en) |
DE (1) | DE112011104256T5 (en) |
GB (1) | GB2501414A (en) |
MX (1) | MX2013006517A (en) |
SG (1) | SG190451A1 (en) |
WO (1) | WO2012076992A1 (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20020114015A1 (en) * | 2000-12-21 | 2002-08-22 | Shinichi Fujii | Apparatus and method for controlling optical system |
US20030099044A1 (en) * | 2001-11-29 | 2003-05-29 | Minolta Co. Ltd. | Autofocusing apparatus |
US20060029284A1 (en) * | 2004-08-07 | 2006-02-09 | Stmicroelectronics Ltd. | Method of determining a measure of edge strength and focus |
US20060062484A1 (en) * | 2004-09-22 | 2006-03-23 | Aas Eric F | Systems and methods for arriving at an auto focus Figure of Merit |
US20090102963A1 (en) * | 2007-10-22 | 2009-04-23 | Yunn-En Yeo | Auto-focus image system |
US20100128144A1 (en) * | 2008-11-26 | 2010-05-27 | Hiok Nam Tay | Auto-focus image system |
GB2475983A (en) * | 2009-12-07 | 2011-06-08 | Hiok Nam Tay | Reflection symmetry of gradient profile associated with an edge |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3482013B2 (en) * | 1993-09-22 | 2003-12-22 | ペンタックス株式会社 | Optical system focus evaluation method, adjustment method, adjustment device, and chart device |
- 2011-06-09 AU AU2011340207A patent/AU2011340207A1/en not_active Abandoned
- 2011-06-09 JP JP2013542632A patent/JP2014504375A/en not_active Ceased
- 2011-06-09 EP EP11735545.3A patent/EP2649787A1/en not_active Withdrawn
- 2011-06-09 SG SG2013044201A patent/SG190451A1/en unknown
- 2011-06-09 GB GB1311741.1A patent/GB2501414A/en not_active Withdrawn
- 2011-06-09 WO PCT/IB2011/052515 patent/WO2012076992A1/en active Application Filing
- 2011-06-09 CA CA2820856A patent/CA2820856A1/en not_active Abandoned
- 2011-06-09 MX MX2013006517A patent/MX2013006517A/en active IP Right Grant
- 2011-06-09 DE DE112011104256.6T patent/DE112011104256T5/en not_active Withdrawn
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111433811A (en) * | 2017-11-06 | 2020-07-17 | 卡尔蔡司显微镜有限责任公司 | Reducing image artifacts in images |
CN111433811B (en) * | 2017-11-06 | 2024-03-22 | 卡尔蔡司显微镜有限责任公司 | Reducing image artifacts in images |
CN112969027A (en) * | 2021-04-02 | 2021-06-15 | 浙江大华技术股份有限公司 | Focusing method and device of electric lens, storage medium and electronic equipment |
CN112969027B (en) * | 2021-04-02 | 2022-08-16 | 浙江大华技术股份有限公司 | Focusing method and device of electric lens, storage medium and electronic equipment |
CN114815121A (en) * | 2022-02-22 | 2022-07-29 | 湖北三赢兴光电科技股份有限公司 | Quick focusing method and system for camera module |
Also Published As
Publication number | Publication date |
---|---|
SG190451A1 (en) | 2013-07-31 |
GB201311741D0 (en) | 2013-08-14 |
GB2501414A (en) | 2013-10-23 |
EP2649787A1 (en) | 2013-10-16 |
MX2013006517A (en) | 2013-12-06 |
DE112011104256T5 (en) | 2014-02-13 |
CA2820856A1 (en) | 2012-06-14 |
JP2014504375A (en) | 2014-02-20 |
AU2011340207A1 (en) | 2013-07-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9251571B2 (en) | Auto-focus image system | |
US8630504B2 (en) | Auto-focus image system | |
US9065999B2 (en) | Method and apparatus for evaluating sharpness of image | |
US20140022443A1 (en) | Auto-focus image system | |
EP2719162B1 (en) | Auto-focus image system | |
EP2649787A1 (en) | Auto-focus image system | |
JP2014504375A5 (en) | ||
AU2011340208A1 (en) | Auto-focus image system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 11735545 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2013542632 Country of ref document: JP Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 2820856 Country of ref document: CA |
|
WWE | Wipo information: entry into national phase |
Ref document number: MX/A/2013/006517 Country of ref document: MX Ref document number: 112011104256 Country of ref document: DE Ref document number: 1120111042566 Country of ref document: DE |
|
ENP | Entry into the national phase |
Ref document number: 1311741 Country of ref document: GB Kind code of ref document: A Free format text: PCT FILING DATE = 20110609 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 1311741.1 Country of ref document: GB |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2011735545 Country of ref document: EP |
|
ENP | Entry into the national phase |
Ref document number: 2011340207 Country of ref document: AU Date of ref document: 20110609 Kind code of ref document: A |
|
REG | Reference to national code |
Ref country code: BR Ref legal event code: B01A Ref document number: 112013014240 Country of ref document: BR |
|
REG | Reference to national code |
Ref country code: BR Ref legal event code: B01E Ref document number: 112013014240 Country of ref document: BR |
|
ENP | Entry into the national phase |
Ref document number: 112013014240 Country of ref document: BR Kind code of ref document: A2 Effective date: 20130607 |