GB2523163A - A method, apparatus and computer program for measuring lens movements


Info

Publication number
GB2523163A
Authority
GB
United Kingdom
Prior art keywords
lens
image data
brightness information
sensor
determining
Prior art date
Legal status
Withdrawn
Application number
GB1402703.1A
Other versions
GB201402703D0 (en)
Inventor
Eero Paivansalo
Current Assignee
Nokia Technologies Oy
Original Assignee
Nokia Technologies Oy
Priority date
Application filed by Nokia Technologies Oy filed Critical Nokia Technologies Oy
Priority to GB1402703.1A
Publication of GB201402703D0
Publication of GB2523163A


Classifications

    • GPHYSICS
    • G03PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03BAPPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B13/00Viewfinders; Focusing aids for cameras; Means for focusing for cameras; Autofocus systems for cameras
    • G03B13/32Means for focusing
    • G03B13/34Power focusing
    • G03B13/36Autofocus systems
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/67Focus control based on electronic image sensor signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00Circuitry of solid-state image sensors [SSIS]; Control thereof
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/60Noise processing, e.g. detecting, correcting, reducing or removing noise
    • H04N25/61Noise processing, e.g. detecting, correcting, reducing or removing noise the noise originating only from the lens unit, e.g. flare, shading, vignetting or "cos4"
    • GPHYSICS
    • G03PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03BAPPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B2205/00Adjustment of optical system relative to image or object surface other than for focusing
    • G03B2205/0007Movement of one or more optical elements for control of motion blur
    • G03B2205/0015Movement of one or more optical elements for control of motion blur by displacing one or more optical elements normal to the optical axis
    • GPHYSICS
    • G03PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03BAPPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B2205/00Adjustment of optical system relative to image or object surface other than for focusing
    • G03B2205/0053Driving means for the movement of one or more optical element

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Studio Devices (AREA)

Abstract

A method comprises causing acquisition of first image data 10 at a sensor 12 via a lens 14, the first image data comprising at least first brightness information 16 and the lens being at a first position. Next, the lens is moved relative to the sensor to at least a second position and second image data 22 is acquired via the lens at the second position, the second image data comprising second brightness information 24. A positional parameter 26 of the lens is then determined, dependent upon a relative displacement of the lens and sensor, using at least the first brightness information of the first image data and the second brightness information of the second image data. The brightness information may comprise brightness features such as a brightness maximum or an average brightness, or signal levels of a colour channel. A plane of the lens may be parallel to a sensing plane of the sensor and an optical axis may be defined perpendicular to the plane of the lens. Movement of the lens may be along or perpendicular to the optical axis. The method may be used to calculate lens movements during auto-focus or stabilization processes.

Description

TITLE
A method, apparatus and computer program for measuring lens movements.
TECHNOLOGICAL FIELD
Examples of the present invention relate to a method, apparatus and computer program for measuring lens movements. In particular, they relate to a method, apparatus and computer program for measuring lens movements during rapid movement of a lens.
BACKGROUND
Traditionally, rapid movements of a lens in a camera are measured using a laser measurement device or position-sensing components within the camera. Such methods are expensive and can be time-consuming.
BRIEF SUMMARY
According to various, but not necessarily all examples, there is provided a method comprising: causing acquisition of first image data at a sensor via a lens, the first image data comprising at least first brightness information and the lens being at a first position; causing movement of the lens relative to the sensor to at least a second position; causing acquisition of second image data via the lens at the second position, the second image data comprising second brightness information; and determining a positional parameter of the lens, dependent upon a relative displacement of the lens and sensor, using at least the first brightness information of the first image data and the second brightness information of the second image data.
According to various, but not necessarily all examples, there is provided an apparatus comprising: circuitry configured to cause acquisition of first image data at a sensor via a lens, the first image data comprising at least first brightness information and the lens being at a first position; circuitry configured to cause movement of the lens relative to the sensor to at least a second position; circuitry configured to cause acquisition of second image data via the lens at the second position, the second image data comprising second brightness information; circuitry configured to determine a positional parameter of the lens, dependent upon a relative displacement of the lens and sensor, using at least the first brightness information of the first image data and the second brightness information of the second image data.
According to various, but not necessarily all examples, there is provided an apparatus comprising: at least one processor; and at least one memory including computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus at least to perform: causing acquisition of first image data at a sensor via a lens, the first image data comprising at least first brightness information and the lens being at a first position; causing movement of the lens relative to the sensor to at least a second position; causing acquisition of second image data via the lens at the second position, the second image data comprising second brightness information; and determining a positional parameter of the lens, dependent upon a relative displacement of the lens and sensor, using at least the first brightness information of the first image data and the second brightness information of the second image data.
According to various, but not necessarily all examples, there is provided an apparatus comprising: means for causing acquisition of first image data at a sensor via a lens, the first image data comprising at least first brightness information and the lens being at a first position; means for causing movement of the lens relative to the sensor to at least a second position; means for causing acquisition of second image data via the lens at the second position, the second image data comprising second brightness information; and means for determining a positional parameter of the lens, dependent upon a relative displacement of the lens and sensor, using at least the first brightness information of the first image data and the second brightness information of the second image data.
According to various, but not necessarily all examples, there is provided a computer program that, when run on a computer, enables performance of: causing acquisition of first image data at a sensor via a lens, the first image data comprising at least first brightness information and the lens being at a first position; causing movement of the lens relative to the sensor to at least a second position; causing acquisition of second image data via the lens at the second position, the second image data comprising second brightness information; and determining a positional parameter of the lens, dependent upon a relative displacement of the lens and sensor, using at least the first brightness information of the first image data and the second brightness information of the second image data.
BRIEF DESCRIPTION
For a better understanding of various examples that are useful for understanding the detailed description, reference will now be made by way of example only to the accompanying drawings in which: figure 1 schematically illustrates an example of a system; figure 2 schematically illustrates a further example of the system; figure 3A schematically illustrates an example of first and second image data; figure 3B schematically illustrates a further example of first and second image data; figure 4 illustrates an example of the system of figure 1; figure 5A schematically illustrates an example of first and second image data; figure 5B schematically illustrates a further example of first image data and second image data; figure 6 schematically illustrates an example of an apparatus; figure 7 illustrates a flowchart of a method; figure 8 illustrates an example plot of the determined positional parameter as a function of time for lens movement along the optical axis; and figure 9 illustrates an example plot of lens position in millimetres as a function of time.
DETAILED DESCRIPTION
The figures illustrate a method comprising: causing acquisition of first image data 10 at a sensor 12 via a lens 14, the first image data 10 comprising at least first brightness information 16 and the lens 14 being at a first position; causing movement of the lens 14 relative to the sensor 12 to at least a second position; causing acquisition of second image data 22 via the lens 14 at the second position, the second image data 22 comprising second brightness information 24; and determining a positional parameter 26 of the lens 14, dependent upon a relative displacement of the lens 14 and sensor 12, using at least the first brightness information 16 of the first image data 10 and the second brightness information 24 of the second image data 22.
The figures also illustrate an apparatus configured to perform the method and a computer program that, when run on a computer, enables the method to be performed. The apparatus may be for measuring lens movement.
Figure 1 schematically illustrates an example of a system 13. The system 13 is configured to acquire image data 10, 22, 70.
In the illustrated example, the system 13 comprises a lens 14, a sensor 12 and processing circuitry 11. The lens 14 is configured to focus light 32 onto the sensor 12. The lens 14 may be any suitable lens or lens system for focussing light 32 onto the sensor 12 and may be made from any suitable material or materials.
In the example of figure 1, a plane of the lens 14 is substantially parallel to a sensing plane of the sensor 12. The lens 14 may be held in position relative to the sensor 12 in any suitable way. For example, the lens 14 may be mounted in a housing.
The lens may be moved relative to the sensor from a first position P1 to a second position P2. In general, the lens 14 may be moved to any number of positions. The movement of the lens 14 will be discussed in greater detail with reference to figures 2 and 4.
In use, light 32 passes through the lens 14 and is focussed onto the sensor 12. The sensor 12 senses the light 32 incident upon it and produces at least one signal 34.
The sensor 12 may be any suitable sensor for sensing light 32. For example, the sensor 12 may comprise a charge-coupled device (CCD), an active pixel sensor such as a complementary metal-oxide-semiconductor (CMOS) sensor, photoresistors, photodiodes and so on.
The signal 34 produced by the sensor 12 is received by the processing circuitry 11.
The processing circuitry 11 may be any suitable circuitry for processing information received from the sensor 12. Details regarding an example of the processing circuitry 11 will be discussed in relation to figure 6.
The signal 34 may comprise image data 10, 22, 70 comprising respective brightness information 16, 24, 72.
For example, the lens 14 may be at a first position P1 and light 32 may pass through the lens 14 and be focussed onto the sensor 12. The sensor 12 senses the light 32 from the lens 14 at the first position P1 and produces a signal 34. In some examples, the sensor 12 may produce more than one signal 34.
The signal 34, produced by the sensor 12 with the lens 14 at P1, may comprise first image data 10 comprising first brightness information 16. The signal 34 may be received by the processing circuitry 11. The processing circuitry 11 may then process the signal 34 to determine a positional parameter 26 of the lens 14 at the first position P1.
The lens 14 may be moved relative to the sensor 12 to a second position P2 and light 32 may pass through the lens 14 at the second position P2 and be focussed onto the sensor 12. The sensor 12 senses the light 32 from the lens 14 at the second position P2 and produces a signal 34. In some examples, the sensor 12 may produce more than one signal 34.
The signal 34 may comprise second image data 22 comprising second brightness information 24. The signal 34 may be received at the processing circuitry 11 and processed to determine a positional parameter 26 of the lens 14 at the second position P2.
This may be repeated for any number of movements of the lens 14 to any number of positions. For example, the lens 14 may be moved to a third position P3 to produce third image data 70 comprising third brightness information 72 which may be processed to determine a positional parameter 26 of the lens 14 at the third position P3.
The system 13 illustrated in the example of figure 1 is configured to perform a method comprising: causing acquisition of first image data 10 at a sensor 12 via a lens 14, the first image data 10 comprising at least first brightness information 16 and the lens 14 being at a first position P1; causing movement of the lens 14 relative to the sensor 12 to at least a second position P2; causing acquisition of second image data 22 via the lens 14 at the second position P2, the second image data 22 comprising second brightness information 24; and determining a positional parameter 26 of the lens 14, dependent upon a relative displacement of the lens 14 and sensor 12, using at least the first brightness information 16 of the first image data 10 and the second brightness information 24 of the second image data 22.
The inventor has realised that there is a relationship between the position of the lens 14 relative to an optical axis and the brightness information 16, 24, 72 in the image data 10, 22, 70. Accordingly, by analysing the brightness information 16, 24, 72, the positional parameter 26 that is dependent upon the relative displacement of the lens 14 and the sensor 12 can be determined. In this way, movement 30 of the lens 14 can be determined from analysis of image data 10, 22, 70 produced with the lens at different positions P1, P2, P3.
This allows movement of the lens 14, such as rapid movement during an autofocus function, to be measured without use of expensive laser equipment or specialised internal position measurement components.
In addition, as the method can use only image data acquired by the system 13, it can be used in, for example, complete mobile telephones. This is not possible using laser techniques, as the phone's camera window reflects the laser and the autofocus lens unit therefore cannot be measured.
In some examples, but not necessarily all examples, determining a positional parameter 26 means processing image data 10, 22, 70 produced by the sensor 12 to allow movement and/or relative and/or absolute position of the lens 14 to be determined. For example, determining a positional parameter 26 may comprise determining an average of brightness information 16, 24, 72 from at least a part of an image frame. In other examples, determining a positional parameter 26 may comprise determining the location of a brightness feature in at least part of an image frame. Determining a positional parameter will be discussed in greater detail in relation to figures 3A, 3B, 5A and 5B.
In some examples, but not necessarily all examples, acquisition of image data 10, 22, 70 may be considered to be the sensing of light 32 that has passed through the lens 14 by the sensor 12 and the production of one or more signals 34 by the sensor 12.
In other examples, acquisition of image data 10, 22, 70 may be considered to be the sensing of light 32 that has passed through the lens 14 by the sensor 12, the production of one or more signals 34 by the sensor 12 and the processing of the produced signal or signals 34 by the processing circuitry 11.
In the example of figure 1, the various elements have been schematically illustrated for the sake of clarity. However, there may be any number of additional elements not illustrated in the example. For example, the light 32 may pass through intervening elements prior to being sensed by the sensor 12.
Additionally/Alternatively, the signal 34 produced by the sensor 12 may pass through any number of intervening elements prior to being received by the processing circuitry 11. In general, any number of intervening elements (including no intervening elements) may be present between the various elements illustrated in the example of figure 1.
In some examples, but not necessarily all examples, the processing circuitry 11 may process the image data 10, 22, 70 as it is received from the sensor 12. In other examples, the image data 10, 22, 70 produced while the lens 14 is in positions P1, P2, P3 may be stored in one or more memories and processed by the processing circuitry 11 after movement of the lens 14 has stopped to determine the positional parameter 26 that is dependent on the relative displacement of the lens 14 and sensor 12.
Figure 2 schematically illustrates a further example of the system 13 showing movement of the lens 14. In the example illustrated in figure 2, the system 13 comprises a lens 14, a sensor 12 and processing circuitry 11 as discussed above with regard to the example of figure 1.
In the example of figure 2, an optical axis 36 is shown. The optical axis 36 is substantially perpendicular to a sensing plane of the sensor 12 and a plane of the lens 14. In the example of figure 2, the lens 14 is moved along the optical axis 36 from position P1 to position P2 and position P3. In the figure, the lens is marked in a solid outline at position P1, a dashed outline at position P2 and a dot-dashed outline at position P3.
As the lens 14 is moved from position P1 to position P2 and position P3, the relative distance between the lens 14 and the sensor 12 changes. This can be seen in the illustrated example by the indicated distances between the lens 14 and the sensor 12, d1, d2 and d3.
As described above with reference to figure 1, image data 10, 22, 70 may be acquired with the lens 14 at positions P1, P2 and P3 and a positional parameter 26 of the lens may be calculated for positions P1, P2 and P3.
In the example illustrated in figure 2, the movement of the lens 14 has been exaggerated for the purpose of clarity. In some examples, the movement of the lens 14 may be less than 1 mm. For example, the movement of the lens may be on the scale of micrometres.
Furthermore, in the illustrated example, the lens 14 is moved from P1 towards the sensor 12 to P2 and then away from the sensor 12 to position P3. In some examples, the lens 14 may be moved solely towards the sensor 12 such that the distance between the lens 14 and the sensor 12 continually decreases. In other examples, the lens 14 may be moved away from the sensor 12 such that the distance between the lens 14 and the sensor 12 increases. In general, any combination of movement of the lens 14 towards and away from the sensor 12 is possible.
Movement of the lens 14 along the optical axis 36 may occur during an autofocus function that may be performed by the system 13. In some examples the movement of the lens 14 during an autofocus function may be measured by determination of the positional parameter 26. See, for example, figures 8 and 9.
Although figure 2 illustrates movement 30 of the lens 14 parallel to the optical axis 36, in other examples the movement 30 of the lens 14 may alternatively be perpendicular to the optical axis 36. This will be discussed in relation to figure 4.
Figure 3A schematically illustrates an example of first and second image data 10, 22.
In the example of figure 3A, the first image data 10 comprises a first image frame 52 captured with the lens 14 at position P1 and the second image data 22 comprises a second image frame 54 captured with the lens 14 at position P2. Accordingly, in the illustrated example, acquisition of first image data 10 comprises capturing of at least a first image frame 52 and acquisition of second image data 22 comprises capturing of at least a second image frame 54.
In some examples acquisition of first image data 10 and/or second image data 22 may comprise capturing of a plurality of image frames. See, for example, figure 8.
The first image data 10 comprises first brightness information 16 and the second image data 22 comprises second brightness information 24. For example, the first and second image frames 52, 54 may be divided into a number of pixels and the first and second image data 10, 22 may comprise brightness information 16, 24, such as pixel values, for the pixels of the first and second image frames 52, 54.
The pixel values of the first and second image data 10, 22 may be taken from part or all of the pixels of the sensor 12.
In some examples, the first and second image data 10, 22 comprise red and/or green and/or blue sub-pixel values for the pixels of the first and second image data 10, 22. However, any suitable measurement of brightness may be used.
Accordingly, the first brightness information 16 may comprise a signal level 48 of at least one color channel 50 (for example, R/G/B) and the second brightness information 24 comprises a signal level 48 of the same at least one color channel 50 (for example, R/G/B).
In general, in examples, brightness is measured against a reference value common to all pixels of the image data 10, 22. For example, red and/or green and/or blue signal levels.
In some examples, brightness can be considered to be the integral of signal intensity, in one or more channels, of a number of pixels, divided by the number of pixels used and compared to a reference value that is common to all pixels. For example, a signal intensity scale.
A region 56 of the first image data 10 has been marked in the example of figure 3A and a corresponding region 58 of the second image data 22 has also been marked.
The pixels of the region 56 of the first image data 10 have been schematically illustrated.
In the example, the region 56 is a part 44 of the first image data 10 and the corresponding region 58 is a part 46 of the second image data 22. However, in other examples the region 56 and corresponding region 58 could comprise more or less of the first image data 10 and second image data 22 including, in some examples, all of the first image data 10 and second image data 22.
In figure 3A, the region 56 and corresponding region 58 are considered corresponding because they encompass the same pixels of the first image data 10 and the second image data 22. Additionally/Alternatively, the region 56 and corresponding region 58 may encompass a known feature in the first and second image data 10, 22.
However, in general, the region 56 and the corresponding region 58 correspond to allow a like-for-like comparison between the region 56 and the corresponding region 58.
The selected regions 56, 58 correspond to allow a meaningful comparison of the brightness information 16, 24, 72 in the selected regions 56, 58 to be carried out to allow the determination of the positional parameter 26.
In examples where a target of uniform brightness is used, any region/part 56, 44 of the first image data 10 and the corresponding region/part 58, 46 of the second image data 22 can be used. For example, in the autofocus case, the sensitivity of brightness information 16, 24, 72 to lens movement is greatest in the region of the images corresponding to the centre of the lens 14, which may be the central region of the image frames 52, 54. In such examples, the central region of the image frames 52, 54 may be used to determine the positional parameter 26.
In various examples, but not necessarily all examples, determining the positional parameter 26 comprises determining an average of the signal level of at least one of the color channels for at least part 44 of the first image data 10 and determining an average of the signal level of the same channel or channels for at least part 46 of the second image data 22.
The brightness of the region 56, 58 of the first and second image data 10, 22 may be calculated by the following formula:

$$\text{brightness} = \frac{\bar{R} + \bar{G} + \bar{B}}{N} \qquad \text{(equation 1)}$$

where $\bar{R} = \frac{1}{n}\sum_{i=1}^{n} r_i$, $\bar{G} = \frac{1}{n}\sum_{i=1}^{n} g_i$ and $\bar{B} = \frac{1}{n}\sum_{i=1}^{n} b_i$; $N$ is the number of color channels used; $n$ is the number of pixels in the selected region 56, 58; and $r_i$, $g_i$ and $b_i$ are the red, green and blue signal levels in the $i$-th pixel.
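As a concrete illustration, equation 1 can be computed in a few lines. The following is a minimal sketch assuming the selected region is available as a NumPy array; the function name region_brightness is illustrative, not from the source.

```python
import numpy as np

def region_brightness(region: np.ndarray) -> float:
    """Equation 1: average each colour channel over the n pixels of the
    selected region, then average the N channel means.

    Expects a region of shape (rows, cols, N), e.g. N = 3 for R, G, B.
    """
    # Per-channel means over the n pixels in the region
    channel_means = region.reshape(-1, region.shape[-1]).mean(axis=0)
    # Mean of the N channel means, as in equation 1
    return float(channel_means.mean())
```

With a uniform target, applying this to the corresponding regions 56 and 58 yields directly comparable brightness values for the two lens positions.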
Although two image frames 52, 54 have been illustrated in the example of figure 3A, any number of image frames may be used and the positional parameter 26 calculated for the image frames. For example, multiple image frames may be captured with the lens 14 at position P1, P2 and/or P3.
The lens 14 may cause lens shading or vignetting in the captured image frames 52, 54. This effect may result in the part of the image frame 52, 54 corresponding with the centre of the lens, for example the central part of the image frame 52, 54, being brighter than the corners of the image frame 52, 54 for a uniform source of illumination.
In some examples where movement of the lens 14 is along the optical axis 36 and the first and second image data 10, 22 comprise separate image frames 52, 54, a lens shading correction may be applied to correct for the lens shading. However, in other examples, such as the example illustrated in figure 3A, a correction for lens shading may not be applied as the region 56 and corresponding region 58 are from the same position of the lens shading pattern, for example the central region of the image frames 52, 54.
As described above, the determining of the positional parameter 26 allows for the movement of the lens 14 during capture of image data 10, 22, 70 to be determined.
In some examples, such as during autofocus, rapid movement of the lens 14 may occur and accordingly rapid acquisition of image data is used. In some examples, the image frames may be captured at a rate of 100s of frames per second, for example 400 frames per second.
Figure 3B schematically illustrates a further example of first and second image data 10, 22.
In the example of figure 3B the first image data 10 is a first portion of an image frame and the second image data 22 is a second portion of the same image frame.
In some examples, the first image data 10 may comprise one or more rows of data 60 from the image frame and the second image data 22 may comprise one or more different rows of data 62 from the same image frame. In other examples the first image data 10 may comprise a portion of a row and the second image data 22 may comprise a different portion of the same row.
In the example of figure 3B, the first image data 10 and second image data 22 comprise brightness information as described above with regard to figure 3A. The first image data 10 has been acquired with the lens 14 at position P1 and the second image data 22 has been acquired with the lens 14 positioned at P2.
In some examples, all of the data from the row/rows 60 may be used in determining the positional parameter 26. In other examples, a part 44 of the first image data 10 such as region 56 indicated in figure 3B and the corresponding part 46 of the second image data 22 such as region 58 may be used in determining the positional parameter 26.
The positional parameter 26 may be calculated in the example of figure 3B as described above with regard to the example of figure 3A. For example, an average of the pixel values for one or more color channels (for example, R/G/B) may be calculated for the first region 56 and the corresponding region 58.
The example of figure 3B may be used to allow even more rapid lens movement to be determined where the frame capture rate of the system is not sufficient. By using information from rows within a single image frame, the data rate can be increased and faster lens movement measured. A further increase in speed may be obtained by modifying the length of a row, for example by centre cropping.
In such examples, the portion (for example, row/rows) of the image frame may be selected based on when movement of the lens 14 has occurred to allow the appropriate image regions to be selected. In addition, optical shading of the lens 14 within the image frame should be compensated for.
As discussed above, lens shading can cause the central part of an image frame to be brighter than the corners of the image frame. In examples where the first and second image data 10, 22 are portions of the same image frame, the effect of lens shading can prevent a meaningful comparison between the first and second image data 10, 22.
In some examples, compensation for lens shading can be based on lens design data and/or calibration data. For example, a scaling factor can be used to compensate for lens shading. A first scaling factor k1 can be used for the first region 56 and a second scaling factor k2 for the corresponding region 58. The scaling factors remove the effect of the lens shading and allow a meaningful comparison to be performed. In general, a scaling factor may be calculated for every region 56, 58 used in an image frame. The scaling factors may be calculated for the different lens positions P1, P2, P3 and so on.
In some examples the scaling factors may be calculated based on design data of the lens 14 and/or calibration data.
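As a hedged sketch of how such scaling factors might be derived from calibration data, the following assumes a single calibration frame of a uniform target held as a NumPy array; the function and variable names are illustrative, not from the source.

```python
import numpy as np

def shading_factors(calibration_frame: np.ndarray,
                    region_a: tuple, region_b: tuple) -> tuple:
    """Derive scaling factors k1, k2 from one calibration frame of a
    uniform target so that brightness from two regions of the shading
    pattern can be compared on equal terms (illustrative scheme).
    region_a and region_b are (row slice, column slice) pairs."""
    reference = calibration_frame.mean()          # common reference level
    k1 = reference / calibration_frame[region_a].mean()
    k2 = reference / calibration_frame[region_b].mean()
    return k1, k2

# Usage sketch: scale each region's measured brightness before comparing,
# e.g. k1 * brightness_56 versus k2 * brightness_58.
```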
In some examples calibration data for the lens shading correction may be obtained by capturing an image frame at each lens position and determining correction tables for each lens position from the captured image frames. In order to reduce the amount of lens shading calibration data stored, in some examples a linear approximation between different lens positions may be used.
In other examples a correction table generated from one image at a single lens position may be used to correct data obtained at all lens positions.
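One way to realise the linear approximation between calibrated lens positions is straightforward interpolation of the stored correction tables. The sketch below assumes at least two tables sorted by lens position; the names are illustrative, not from the source.

```python
import numpy as np

def shading_table_at(tables: list, positions: list, p: float) -> np.ndarray:
    """Linearly interpolate between per-position shading correction
    tables to approximate the table at an uncalibrated lens position p,
    reducing the amount of calibration data that must be stored."""
    i = int(np.searchsorted(positions, p, side="right")) - 1
    i = max(0, min(i, len(tables) - 2))            # clamp to a valid pair
    t = (p - positions[i]) / (positions[i + 1] - positions[i])
    return (1.0 - t) * tables[i] + t * tables[i + 1]
```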
In some examples, the first image data 10 and second image data 22 such as the first and second image frame 52, 54 and/or the single image frame in the example of figure 3B may be stored in a memory or may be discarded after determination of the positional parameter 26. For example, the first and second image data 10, 22 may be retrievably stored in a memory or may be temporarily stored/buffered and then discarded.
Figure 4 illustrates an example of the system 13 of figure 1. In the example of figure 4, movement of the lens 14 in a plane that is parallel to a sensing plane of the sensor 12 is illustrated.
In figure 4, the system 13 comprises a lens 14, a sensor 12 and processing circuitry 11 as described above in relation to figures 1 and 2. The optical axis 36, perpendicular to the sensing plane of the sensor 12 and the plane of the lens 14 is also illustrated.
In the illustrated example, the lens 14 is moved from a position P1 on the optical axis 36 to a position P2 and position P3, both of which are off the optical axis 36. The lens 14 is moved substantially perpendicularly to the optical axis 36. Such movement of the lens 14 may occur, for example, during an optical image stabilisation function of a camera.
The lens 14 at position P1 is illustrated with a solid outline, with a dashed line at P2 and with a dot-dash line at P3.
Light 32 passes through the lens 14 at positions P1, P2 and P3 and image data 10, 22, 70 is acquired with the lens 14 at P1, P2 and P3. The positional parameter 26 may be calculated using the image data 10, 22, 70.
The inventor has realised that brightness features in the image data 10, 22, 70 caused by the optics of the system 13, such as the lens 14, can be determined and tracked in the image data 10, 22, 70. By determining a positional parameter 26 in this way, movement of the lens 14 can be determined/measured by analysing image data 10, 22, 70.
The movement of the lens 14 in the example of figure 4 has been exaggerated for the sake of clarity. In some examples, the movement of the lens 14 may be less than 1mm. For example, the movement of the lens may be on the scale of micrometers.
Furthermore, in the illustrated example, the lens 14 is moved from P1 to P2 and then back to position P3. In some examples, the lens 14 may be moved solely in one direction, for example in the direction from P1 to P2 or from P1 to P3. In general, any combination of movement 30 of the lens 14 in a plane parallel to a sensing plane of the sensor 12 is possible.
Figure 5A schematically illustrates an example of first image data 10 and second image data 22. The first image data 10 and second image data 22 may be as described above with regard to figure 3A.
The lens 14 may cause image shading or vignetting in the image data 10 resulting in the centre of the image being brighter than the corners of the image for a uniform source of illumination.
This effect causes a brightness maximum 40 in the image data 10, 22, for example at the position corresponding to the centre of the lens 14. In the left-hand panel of figure 5A the brightness maximum 40 in the image data 10 is shown by a cross. The signal level cross-sections horizontally and vertically through the image data 10 at the position of the cross are shown at the bottom and right of the illustrated image frame 52.
The brightness maximum 40 may be located by any suitable method. For example, a surface may be fitted to the data and the maximum value of the surface found. In some examples the shape of the surface caused by the lens shading may be detected in a first image frame 52 and this shape subsequently used to fit the data of the subsequent image frames. The best fitting position can be used to determine the location of the brightness maximum 40. Additionally/alternatively, averages of sections of the image data 10 may be taken and the maximum average value located in the image data 10.
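As one concrete realisation of the surface-fitting approach, a quadratic surface can be fitted to the frame and its stationary point taken as the location of the brightness maximum. This is a minimal sketch assuming a single-channel (for example, green) frame as a NumPy array; a fuller fit might include a cross term or reuse the shading shape detected in the first frame, as described above.

```python
import numpy as np

def brightness_maximum(img: np.ndarray) -> tuple:
    """Locate the shading-induced brightness maximum by fitting a
    quadratic surface z = a*x^2 + b*y^2 + c*x + d*y + e to the frame,
    then solving for the stationary point of the fitted surface."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    x = xs.ravel().astype(float)
    y = ys.ravel().astype(float)
    z = img.ravel().astype(float)
    A = np.column_stack([x * x, y * y, x, y, np.ones_like(x)])
    a, b, c, d, _ = np.linalg.lstsq(A, z, rcond=None)[0]
    # Stationary point of the paraboloid (a maximum when a, b < 0)
    return (-c / (2 * a), -d / (2 * b))
```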
In the illustrated example, the red, green and blue signal channels have been used to determine the brightness maximum. However, in other examples, one or more of the R, G, B channels may be used in the determination.
In the right-hand panel of figure 5A, the brightness maximum 40 has moved as illustrated by the cross and the cross-sectional profiles at the bottom and right-hand side of the figure. By comparison of the location of the brightness maximum 40 in the first image data 10 and the second image data 22, the centre of the lens 14 can be tracked and the movement of the lens measured accordingly.
Although the above example has been described with reference to a brightness maximum, any brightness feature 38 introduced into the image data 10, 22 by the optics of the system 13, such as the lens 14, may be used to determine/measure movement of the lens 14 substantially perpendicular to the optical axis 36.
Figure 5B illustrates a further example of first image data 10 and second image data 22. As with the example illustrated in figure 3B, in the example of figure 5B the first image data 10 is one or more rows 60 of an image frame and the second image data 22 is one or more different rows 62 of the same image frame. In other examples the first image data 10 may comprise a portion of a row and the second image data 22 may comprise a different portion of the same row.
The first image data 10 may be analysed to determine the positional parameter 26.
This may be done by analysing the intensity profile of one or more of the color channels (for example RIG/B) to determine the location of a brightness maximum 40 in the first image data 10.
In some examples, an average may be taken of the data across all rows of the first image data 10 to obtain an average signal profile for the one or more of the color channels in the first image data 10.
An example of a signal profile is shown in the right-hand panel of figure 5B. In this panel, the top profile relates to the first image data 10 and the bottom profile to the second image data 22.
As can be seen from the signal profiles in figure 5B, the location of the brightness maximum 38, 40 has moved between the first image data 10 and the second image data 22, indicating that the lens 14 has moved position, for example from position P1 to position P2 in figure 4.
In the example of figure 5B, it may also be possible to determine a surface for the first image data 10 and second image data 22 as described in relation to figure 5A.
The location of the brightness feature 38, such as the brightness maximum 40, can be determined in any suitable fashion such as profile fitting or the taking of averages as discussed above with regard to figure 5A.
Also shown in the example of figure 5B is a vertical axis 92 and a horizontal axis 94.
In some examples the sensor 12 is read out along the horizontal axis 94 and signal profiles such as those illustrated in figure 5B may be obtained. For example, a signal profile may be determined for a single row of pixels.
In examples with horizontal read out of the sensor 12 it may only be possible to detect movement of the lens 14 along the horizontal axis 94. In such examples a lens shading correction that removes the lens shading effect along the vertical axis 92 may be used. The vertical axis lens shading compensation may be determined as described above with regard to figure 3B.
In other examples, read out of the sensor may be along the vertical axis 92. In such examples the signal profiles obtained are along the vertical axis 92. For example a signal profile may be obtained for a single column of pixels.
In examples with vertical read out of the sensor 12 it may only be possible to detect movement of the lens 14 along the vertical axis 92. In such examples a lens shading correction that removes the lens shading effect along the horizontal axis 94 may be used. The horizontal axis lens shading compensation may be determined as described above with regard to figure 3B.
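For the row-wise (or, transposed, column-wise) case, locating the brightness maximum along the read-out axis after dividing out the orthogonal-axis shading might look as follows. The stored shading profile is assumed calibration data, and the smoothing width is an illustrative choice, not from the source.

```python
import numpy as np

def profile_peak(row: np.ndarray, shading: np.ndarray) -> int:
    """Estimate the position of the brightness maximum along one
    read-out row, after dividing out a stored 1-D shading profile for
    the orthogonal axis ('shading' has the same length as the row)."""
    compensated = row.astype(float) / shading
    # Smooth lightly so single noisy pixels do not win
    kernel = np.ones(9) / 9.0
    smooth = np.convolve(compensated, kernel, mode="same")
    return int(np.argmax(smooth))
```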
Figure 6 schematically illustrates an example of an apparatus 74. The apparatus 74 comprises processing circuitry 11. The apparatus 74 may be configured to cause the method of figure 7 to be performed. For example, the apparatus may be configured to control the system 13 described above.
As illustrated in figure 6, the apparatus 74 may comprise a sensor that comprises circuitry 76.
Implementation of the processing circuitry 11 may be as controller circuitry. The processing circuitry 11 may be implemented in hardware alone, have certain aspects in software including firmware alone or can be a combination of hardware and software (including firmware).
As illustrated in figure 6 the processing circuitry 11 may be implemented using instructions that enable hardware functionality, for example, by using executable computer program instructions 86 in a general-purpose or special-purpose processor 78 that may be stored on a computer readable storage medium (disk, memory etc) to be executed by such a processor 78.
The processor 78 is configured to read from and write to the memory 80. The processor 78 may also comprise an output interface via which data and/or commands are output by the processor 78 and an input interface via which data and/or commands are input to the processor 78.
The memory 80 stores a computer program 86 comprising computer program instructions (computer program code) that controls the operation of the apparatus 74 when loaded into the processor 78. The computer program instructions, of the computer program 86, provide the logic and routines that enable the apparatus to perform the methods illustrated in figure 7. The processor 78, by reading the memory 80, is able to load and execute the computer program 86.
The apparatus 74 therefore comprises: at least one processor 78; and at least one memory 80 including computer program code 86, the at least one memory 80 and the computer program code 86 configured to, with the at least one processor 78, cause the apparatus 74 at least to perform a method comprising: causing acquisition of first image data at a sensor via a lens, the first image data comprising at least first brightness information and the lens being at a first position; causing movement of the lens relative to the sensor to at least a second position; causing acquisition of second image data via the lens at the second position, the second image data comprising second brightness information; and determining a positional parameter of the lens, dependent upon a relative displacement of the lens and sensor, using at least the first brightness information of the first image data and the second brightness information of the second image data.
The computer program 86 may arrive at the apparatus 74 via any suitable delivery mechanism 88 and 90. The delivery mechanism 88 and 90 may be, for example, a non-transitory computer-readable storage medium, a computer program product, a memory device, a record medium such as a compact disc read-only memory (CD-ROM) or digital versatile disc (DVD), an article of manufacture that tangibly embodies the computer program 86. The delivery mechanism may be a signal configured to reliably transfer the computer program 86. The apparatus 74 may propagate or transmit the computer program 86 as a computer data signal.
Although the memory 80 is illustrated as a single component/circuitry it may be implemented as one or more separate components/circuitry some or all of which may be integrated/removable and/or may provide permanent/semi-permanent/dynamic/cached storage.
Although the processor 78 is illustrated as a single component/circuitry it may be implemented as one or more separate components/circuitry some or all of which may be integrated/removable. The processor 78 may be a single core or multi-core processor.
References to 'computer-readable storage medium', 'computer program product', 'tangibly embodied computer program' etc. or a 'controller', 'computer', 'processor' etc. should be understood to encompass not only computers having different architectures such as single/multi-processor architectures and sequential (Von Neumann)/parallel architectures but also specialized circuits such as field-programmable gate arrays (FPGA), application-specific integrated circuits (ASIC), signal processing devices and other processing circuitry. References to computer program, instructions, code etc. should be understood to encompass software for a programmable processor or firmware such as, for example, the programmable content of a hardware device whether instructions for a processor, or configuration settings for a fixed-function device, gate array or programmable logic device etc. As used in this application, the term 'circuitry' refers to all of the following: (a) hardware-only circuit implementations (such as implementations in only analog and/or digital circuitry); (b) combinations of circuits and software (and/or firmware), such as (as applicable): (i) a combination of processor(s) or (ii) portions of processor(s)/software (including digital signal processor(s)), software and memory(ies) that work together to cause an apparatus, such as a mobile phone or server, to perform various functions; and (c) circuits, such as a microprocessor(s) or a portion of a microprocessor(s), that require software or firmware for operation, even if the software or firmware is not physically present.
This definition of 'circuitry' applies to all uses of this term in this application, including in any claims. As a further example, as used in this application, the term "circuitry" would also cover an implementation of merely a processor (or multiple processors) or portion of a processor and its (or their) accompanying software and/or firmware.
The term "circuitry" would also cover, for example and if applicable to the particular claim element, a baseband integrated circuit or applications processor integrated circuit for a mobile phone or a similar integrated circuit in a server, a cellular network device, or other network device.
The circuitry 76 and/or processor 78 in combination with the memory 80 storing the computer program 86 provide means for performing the method of figure 7. For example they provide the means for controlling the system 13 described herein.
Figure 7 illustrates a flowchart of a method 700.
At block 702, acquisition of image data at a sensor 12 via a lens 14 is caused. For example, first image data 10 comprising at least first brightness information 16 may be acquired with the lens 14 at a first position P1. In some examples the acquisition of image data may comprise capturing one or more image frames. As discussed above, the first brightness information 16 may, for example, comprise R, G, B signal values.
At block 704, movement of the lens 14 relative to the sensor 12 is caused. As discussed above, this may be along the optical axis 36 passing through the sensor 12 and lens 14 or perpendicular to the optical axis 36.
At block 706, acquisition of image data via the lens 14 is caused. For example, second image data 22 comprising at least second brightness information 24 may be acquired with the lens at a second position P2. In some examples the acquisition of image data may comprise capturing one or more image frames.
At block 708 it is determined if further movement of the lens 14 should occur.
If yes, the method returns to block 704 and movement of the lens 14 relative to the sensor 12 is caused.
At block 706, acquisition of image data via the lens 14 is again caused. For example, third image data 70 comprising at least third brightness information 72 may be acquired with the lens at a third position P3.
If it is determined at block 708 that no further movement of the lens 14 should occur the method may proceed to block 710.
At block 710 a uniform region/area in the image data may be determined. For example, the first image data 10 may be analysed to determine a region of substantially uniform brightness. This may be performed in examples where the target being imaged is not of uniform brightness. In some examples the corresponding regions in the other image data may be chosen on the basis of the determined uniform region.
For example, in the case of extremely rapid data acquisition it may be assumed that the region of uniform brightness may remain stationary in the image data. In other examples the image data may be analysed prior to determining the positional parameter to ensure that the location of the region of uniform brightness is tracked in the image data such that corresponding regions in the image data are used.
At block 712 a positional parameter 26 of the lens 14, dependent upon a relative displacement of the lens 14 and sensor 12, is determined using at least the first brightness information 16 of the first image data 10 and the second brightness information 24 of the second image data 22. In addition, in examples where the lens 14 is moved to further positions, for example a third position, the further brightness information of the further image data may also be used, for example third image data 70 and brightness information 72.
An example of the determined positional parameter is shown in figure 8.
At block 714 the positional parameter 26, for example the brightness information from the image data that is described above with regard to figures 3A, 3B, 5A and 5B, may be converted to a physical measurement of movement of the lens 14 using predetermined calibration data.
The blocks illustrated in figure 7 may represent steps in a method and/or sections of code in the computer program 86. The illustration of a particular order to the blocks does not necessarily imply that there is a required or preferred order for the blocks, and the order and arrangement of the blocks may be varied. Furthermore, it may be possible for some blocks to be omitted.
For example, block 710 may be omitted such as in examples where a target of uniform brightness is used.
Additionally/Alternatively block 714 may be omitted. For example, the positional parameter 26 as illustrated in the example of figure 8 may be sufficient to determine/measure movement of the lens 14 and conversion using calibration data may not be carried out.
Additionally/Alternatively block 712 may be located after block 706 so that the positional parameter is determined after the acquisition of image data.
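Putting the blocks together, an end-to-end sketch of method 700 for the uniform-target case might look as follows. The camera interface and its move_lens()/capture() methods are hypothetical, and the fixed central region is an illustrative choice, not from the source.

```python
def measure_lens_movement(camera, lens_positions):
    """Sketch of method 700: step the lens (blocks 704/708), acquire a
    frame at each position (blocks 702/706) and take the mean brightness
    of a fixed central region as the positional parameter (block 712).
    'camera' is a hypothetical interface returning NumPy frames."""
    region = (slice(200, 280), slice(300, 380))   # central region, illustrative
    parameters = []
    for position in lens_positions:
        camera.move_lens(position)
        frame = camera.capture()
        parameters.append(float(frame[region].mean()))
    return parameters   # optionally convert via calibration data (block 714)
```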
Figure 8 illustrates an example plot of the determined positional parameter 26 as a function of time for lens movement along the optical axis 36. In the illustrated example the positional parameter 26 is the brightness and has been determined as described above with regard to figure 3A. The positional parameter is indicated as 'brightness' on the Y axis in figure 8.
In this example the positional parameter has been determined from a single color channel, the green channel G. As described in relation to figure 3A the positional parameter has been determined from corresponding regions of captured image frames. In the example of figure 8 the image frames were captured at a rate of 400 frames per second and the positional parameter 26 determined for the captured image frames. In the plot shown in figure 8 each of the single frame brightness measurements corresponds to a single point (sample) in the plotted curve. Accordingly, in the plot of figure 8 the x axis could alternatively be labelled frame number.
In other examples different frame rates may be used, for example quicker or slower than 400 frames per second.
In the example of figure 8 a normal uniform light source was used as the target for the captured images and the light level was held stable during the image capture.
At time = 0 the lens 14 is at the infinity lens stopper position A. During the acquisition of image data the lens position is stepped five times, reaching the macro lens stopper position B. It can be seen from the example of figure 8 that the positional parameter 26, the green channel signal level in this example, varies with lens position. The fast "ringing" effect of the lens movement can also be seen in the example plot of figure 8. The "ringing" effect is caused by the lens oscillating after a change in position. In the example of figure 8 the image data acquisition (frame rate in this example) is sufficient to see the detail of the "ringing" effect caused by the movement of the lens 14.
Although the green signal channel has been used in the example of figure 8, any single channel/combination of the signal channels may be used as described above with regard to figure 3A and 3B for example.
Figure 9 illustrates an example plot of lens position in millimetres as a function of time. In the example of figure 9 the lens position data has been obtained with a traditional laser measurement device. In the example of figure 9 the lens movement used in the example of figure 8 has been repeated.
In some examples, the data from the traditional laser methods can be used as calibration data to allow conversion of the brightness information of figure 8 into physical measurement of movement of the lens.
In other examples the calibration data may be obtained by any suitable means, for example position sensing components within a camera system, such as system 13.
The conversion from brightness information is carried out by determining the extreme movement positions (for example, macro and infinity stopper positions) and assuming linear movement behaviour. In this way the measured brightness curves can be converted to physical measurement of movement using the direct relationship between the brightness measurements and the calibration data.
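Under that linearity assumption, the conversion reduces to interpolating between the brightness levels observed at the two stopper positions. A minimal sketch follows; the 0.3 mm travel figure and the function name are illustrative, not from the source.

```python
def brightness_to_position(brightness, b_infinity, b_macro,
                           p_infinity_mm=0.0, p_macro_mm=0.3):
    """Map a measured brightness value to a physical lens position by
    linear interpolation between the brightness levels at the infinity
    and macro stopper positions, assuming linear movement behaviour."""
    fraction = (brightness - b_infinity) / (b_macro - b_infinity)
    return p_infinity_mm + fraction * (p_macro_mm - p_infinity_mm)
```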
The system 13 described above may form part of an image acquisition system such as a camera system. The system 13 may be incorporated into any apparatus comprising a camera system. For example the system 13 may be incorporated into a mobile telephone, a personal digital assistant (PDA), a tablet, a laptop computer, a desktop computer, a single lens reflex camera, a compact camera and so on.
An apparatus into which the system is incorporated may comprise further features such as a display that may be configured to display the result of the determination of the positional parameter (for example the plot of figure 8). Additionally/Alternatively such an apparatus may comprise one or more user inputs, one or more antennas/transceivers, one or more microphones, one or more speakers and so on.
The term 'comprise' is used in this document with an inclusive not an exclusive meaning. That is, any reference to X comprising Y indicates that X may comprise only one Y or may comprise more than one Y. If it is intended to use 'comprise' with an exclusive meaning then it will be made clear in the context by referring to "comprising only one.." or by using "consisting".
In this brief description, reference has been made to various examples. The description of features or functions in relation to an example indicates that those features or functions are present in that example. The use of the term 'example' or 'for example' or 'may' in the text denotes, whether explicitly stated or not, that such features or functions are present in at least the described example, whether described as an example or not, and that they can be, but are not necessarily, present in some of or all other examples. Thus 'example', 'for example' or 'may' refers to a particular instance in a class of examples. A property of the instance can be a property of only that instance or a property of the class or a property of a sub-class of the class that includes some but not all of the instances in the class.
Although examples of the present invention have been described in the preceding paragraphs with reference to various examples, it should be appreciated that modifications to the examples given can be made without departing from the scope of the invention as claimed.
Features described in the preceding description may be used in combinations other than the combinations explicitly described.
Although functions have been described with reference to certain features, those functions may be performable by other features whether described or not.
Although features have been described with reference to certain examples, those features may also be present in other examples whether described or not.
Whilst endeavouring in the foregoing specification to draw attention to those features of the invention believed to be of particular importance, it should be understood that the Applicant claims protection in respect of any patentable feature or combination of features hereinbefore referred to and/or shown in the drawings, whether or not particular emphasis has been placed thereon.
I/we claim:

Claims (42)

  1. A method comprising: causing acquisition of first image data at a sensor via a lens, the first image data comprising at least first brightness information and the lens being at a first position; causing movement of the lens relative to the sensor to at least a second position; causing acquisition of second image data via the lens at the second position, the second image data comprising second brightness information; and determining a positional parameter of the lens, dependent upon a relative displacement of the lens and sensor, using at least the first brightness information of the first image data and the second brightness information of the second image data.
  2. A method as claimed in claim 1, wherein causing acquisition of first image data and second image data comprises causing the sensor to sense light incident upon it after passing through the lens and processing at least one signal produced by the sensor.
  3. A method as claimed in claim 2, wherein a plane of the lens is substantially parallel to a sensing plane of the sensor and an optical axis is defined perpendicular to the plane of the lens and the sensing plane of the sensor and wherein causing movement of the lens comprises causing movement of the lens along the optical axis.
  4. A method as claimed in claim 2, wherein a plane of the lens is substantially parallel to a sensing plane of the sensor and an optical axis is defined perpendicular to the plane of the lens and the sensing plane of the sensor; and wherein causing movement of the lens comprises causing movement of the lens perpendicular to the optical axis.
  5. A method as claimed in claim 4, wherein determining a positional parameter of the lens comprises determining a location of a brightness feature in the first image data based on at least the first brightness information; and determining a location of a brightness feature in the second image data based on at least the second brightness information.
  6. A method as claimed in claim 5, wherein the brightness feature is a brightness maximum.
  7. A method as claimed in any of claims 1 to 6, wherein determining a positional parameter comprises determining an average of the first brightness information from at least part of the first image data and determining an average of the second brightness information from at least part of the second image data corresponding to the at least part of the first image data.
  8. A method as claimed in any preceding claim, wherein the first brightness information comprises a signal level of at least one color channel and the second brightness information comprises a signal level of the same at least one color channel.
  9. A method as claimed in claim 8, wherein determining a positional parameter comprises determining an average of the signal level of the at least one color channel for at least part of the first image data and determining an average of the signal level of the same at least one color channel for at least part of the second image data.
  10. A method as claimed in claim 9, wherein determining a positional parameter comprises determining averages of the signal levels of a plurality of color channels for at least part of the first image data and determining averages of the signal levels of the same plurality of color channels for at least part of the second image data.
  11. A method as claimed in any preceding claim, wherein causing acquisition of first image data comprises causing capturing of at least a first image frame using the sensor and wherein causing acquisition of second image data comprises causing capturing of at least a second image frame using the sensor.
  12. A method as claimed in claim 11, wherein determining a positional parameter comprises comparing brightness information of a region of the first image frame with brightness information of a corresponding region of the second image frame.
  13. A method as claimed in claim 11 or 12, wherein determining a positional parameter comprises determining at least one region of substantially uniform brightness in the first image frame.
  14. A method as claimed in any of claims 2 to 10, wherein the first image data is a first portion of data from a first image frame and the second image data is a second portion of data from within the first image frame and wherein determining a positional parameter comprises comparing the first portion of data and the second portion of data.
  15. A method as claimed in claim 14, wherein the first portion comprises one or more rows of data and the second portion comprises one or more different rows of data.
  16. A method as claimed in any preceding claim, wherein causing movement of the lens comprises causing movement of the lens of less than one millimetre.
  17. A method as claimed in any preceding claim, wherein determining a positional parameter comprises converting the brightness information into physical measurement of movement of the lens using predetermined calibration data.
  18. A method as claimed in any preceding claim, further comprising causing movement of the lens relative to the sensor to at least a third position; and causing acquisition of third image data via the lens at the third position, the third image data comprising third brightness information; and wherein determining a positional parameter comprises using at least the first brightness information of the first image data, the second brightness information of the second image data and the third brightness information of the third image data.
  19. An apparatus comprising: circuitry configured to cause acquisition of first image data at a sensor via a lens, the first image data comprising at least first brightness information and the lens being at a first position; circuitry configured to cause movement of the lens relative to the sensor to at least a second position; circuitry configured to cause acquisition of second image data via the lens at the second position, the second image data comprising second brightness information; and circuitry configured to determine a positional parameter of the lens, dependent upon a relative displacement of the lens and sensor, using at least the first brightness information of the first image data and the second brightness information of the second image data.
  20. An apparatus as claimed in claim 19, wherein the circuitry configured to cause acquisition of first image data is configured to cause acquisition of image data comprising causing the sensor to sense light incident upon it after passing through the lens and processing at least one signal produced by the sensor and the circuitry configured to cause acquisition of second image data is configured to cause acquisition of image data comprising causing the sensor to sense light incident upon it after passing through the lens and processing at least one signal produced by the sensor.
  21. An apparatus as claimed in claim 20, wherein a plane of the lens is substantially parallel to a sensing plane of the sensor and an optical axis is defined perpendicular to the plane of the lens and the sensing plane of the sensor and wherein causing movement of the lens comprises causing movement of the lens along the optical axis.
  22. An apparatus as claimed in claim 20, wherein a plane of the lens is substantially parallel to a sensing plane of the sensor and an optical axis is defined perpendicular to the plane of the lens and the sensing plane of the sensor; and wherein causing movement of the lens comprises causing movement of the lens perpendicular to the optical axis.
  23. An apparatus as claimed in claim 22, wherein the circuitry configured to determine a positional parameter of the lens is configured to determine a positional parameter comprising determining a location of a brightness feature in the first image data based on at least the first brightness information; and determining a location of a brightness feature in the second image data based on at least the second brightness information.
  24. An apparatus as claimed in claim 23, wherein the brightness feature is a brightness maximum.
  25. An apparatus as claimed in any of claims 19 to 24, wherein the circuitry configured to determine a positional parameter is configured to determine a positional parameter comprising determining an average of the first brightness information from at least part of the first image data and determining an average of the second brightness information from at least part of the second image data corresponding to the at least part of the first image data.
  26. An apparatus as claimed in any of claims 19 to 25, wherein the first brightness information comprises a signal level of at least one color channel and the second brightness information comprises a signal level of the same at least one color channel.
  27. An apparatus as claimed in claim 26, wherein the circuitry configured to determine a positional parameter is configured to determine a positional parameter comprising determining an average of the signal level of the at least one color channel for at least part of the first image data and determining an average of the signal level of the same at least one color channel for at least part of the second image data.
  28. An apparatus as claimed in claim 27, wherein the circuitry configured to determine a positional parameter is configured to determine a positional parameter comprising determining averages of the signal levels of a plurality of color channels for at least part of the first image data and determining averages of the signal levels of the same plurality of color channels for at least part of the second image data.
  29. An apparatus as claimed in any of claims 19 to 28, wherein the circuitry configured to cause acquisition of first image data is configured to cause acquisition of image data comprising causing capturing of at least a first image frame using the sensor and wherein the circuitry configured to cause acquisition of second image data is configured to cause acquisition of image data comprising causing capturing of at least a second image frame using the sensor.
  30. An apparatus as claimed in claim 29, wherein the circuitry configured to determine a positional parameter is configured to determine a positional parameter comprising comparing brightness information of a region of the first image frame with brightness information of a corresponding region of the second image frame.
  31. An apparatus as claimed in claim 29 or 30, wherein the circuitry configured to determine a positional parameter is configured to determine a positional parameter comprising determining at least one region of substantially uniform brightness in the first image frame.
  32. An apparatus as claimed in any of claims 20 to 28, wherein the circuitry configured to determine a positional parameter is configured to determine a positional parameter comprising comparing a first portion of data from a first image frame and a second portion of data within the first image frame.
  33. An apparatus as claimed in claim 32, wherein the first portion comprises one or more rows of data and the second portion comprises one or more different rows of data.
  34. An apparatus as claimed in any of claims 19 to 33, wherein the circuitry configured to cause movement of the lens is configured to cause movement of the lens of less than one millimetre.
  35. An apparatus as claimed in any of claims 19 to 34, wherein the circuitry configured to determine a positional parameter is configured to determine a positional parameter comprising converting the brightness information into physical measurement of movement of the lens using predetermined calibration data.
  36. An apparatus as claimed in any of claims 19 to 35, further comprising: circuitry configured to cause movement of the lens relative to the sensor to at least a third position; and circuitry configured to cause acquisition of third image data via the lens at the third position, the third image data comprising third brightness information; and circuitry configured to determine a positional parameter using at least the first brightness information of the first image data, the second brightness information of the second image data and the third brightness information of the third image data.
  37. An apparatus comprising: at least one processor; and at least one memory including computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus at least to perform: causing acquisition of first image data at a sensor via a lens, the first image data comprising at least first brightness information and the lens being at a first position; causing movement of the lens relative to the sensor to at least a second position; causing acquisition of second image data via the lens at the second position, the second image data comprising second brightness information; and determining a positional parameter of the lens, dependent upon a relative displacement of the lens and sensor, using at least the first brightness information of the first image data and the second brightness information of the second image data.
  38. An apparatus comprising: at least one processor; and at least one memory including computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus at least to perform the method of one or more of claims 1 to 18.
  39. An apparatus comprising: means for causing acquisition of first image data at a sensor via a lens, the first image data comprising at least first brightness information and the lens being at a first position; means for causing movement of the lens relative to the sensor to at least a second position; means for causing acquisition of second image data via the lens at the second position, the second image data comprising second brightness information; and means for determining a positional parameter of the lens, dependent upon a relative displacement of the lens and sensor, using at least the first brightness information of the first image data and the second brightness information of the second image data.
  40. An apparatus comprising: means for performing the method of one or more of claims 1 to 18.
  41. A computer program that, when run on a computer, enables performance of: causing acquisition of first image data at a sensor via a lens, the first image data comprising at least first brightness information and the lens being at a first position; causing movement of the lens relative to the sensor to at least a second position; causing acquisition of second image data via the lens at the second position, the second image data comprising second brightness information; and determining a positional parameter of the lens, dependent upon a relative displacement of the lens and sensor, using at least the first brightness information of the first image data and the second brightness information of the second image data.
  42. A computer program that, when run on a computer, enables the method of one or more of claims 1 to 18 to be performed.
GB1402703.1A 2014-02-17 2014-02-17 A method, apparatus and computer program for measuring lens movements Withdrawn GB2523163A (en)

Priority Applications (1)

Application: GB1402703.1A (published as GB2523163A). Priority date: 2014-02-17. Filing date: 2014-02-17. Title: A method, apparatus and computer program for measuring lens movements.

Publications (2)

Publication Number Publication Date
GB201402703D0 GB201402703D0 (en) 2014-04-02
GB2523163A 2015-08-19

Family

ID=50440231


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH02234577A (en) * 1989-03-08 1990-09-17 Nec Corp Automatic focusing device
JP2002303780A (en) * 2001-04-04 2002-10-18 Ricoh Co Ltd Automatic focus controller
US20080074530A1 (en) * 2006-09-27 2008-03-27 Fujitsu Limited Imaging apparatus with AF optical zoom
US20130016277A1 (en) * 2011-07-14 2013-01-17 Satoru Ito Camera with focus detection unit

Legal Events

Code Title Description
COOA Change in applicant's name or ownership of the application. Owner name: NOKIA TECHNOLOGIES OY. Former owner: NOKIA CORPORATION.
WAP Application withdrawn, taken to be withdrawn or refused after publication under section 16(1).