WO2018087424A1 - Determining an intersection location of an optical axis of a lens with a camera sensor - Google Patents

Determining an intersection location of an optical axis of a lens with a camera sensor

Info

Publication number
WO2018087424A1
WO2018087424A1 (application PCT/FI2017/050755)
Authority
WO
WIPO (PCT)
Prior art keywords
display
lines
displayed
lens
line
Prior art date
Application number
PCT/FI2017/050755
Other languages
English (en)
Inventor
Radu Bilcu
Martin Schrader
Andrew Baldwin
Original Assignee
Nokia Technologies Oy
Priority date
Filing date
Publication date
Application filed by Nokia Technologies Oy
Publication of WO2018087424A1

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01M TESTING STATIC OR DYNAMIC BALANCE OF MACHINES OR STRUCTURES; TESTING OF STRUCTURES OR APPARATUS, NOT OTHERWISE PROVIDED FOR
    • G01M 11/00 Testing of optical apparatus; Testing structures by optical methods not otherwise provided for
    • G01M 11/02 Testing optical properties
    • G01M 11/0207 Details of measuring devices
    • G01M 11/0221 Testing optical properties by determining the optical axis or position of lenses
    • G01M 11/0242 Testing optical properties by measuring geometrical properties or aberrations
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B 11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B 11/26 Measuring arrangements characterised by the use of optical techniques for measuring angles or tapers; for testing the alignment of axes
    • G01B 11/27 Measuring arrangements characterised by the use of optical techniques for testing the alignment of axes
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B 7/00 Mountings, adjusting means, or light-tight connections, for optical elements
    • G02B 7/02 Mountings, adjusting means, or light-tight connections, for lenses

Definitions

  • This specification generally relates to determining an intersection location of an optical axis of a lens with a camera sensor.
  • With wide angle lenses, such as fisheye lenses, the projection of a straight line onto the imaging sensor is strongly distorted and may appear curved.
  • The distortion usually increases for pixels further away from the optical axis of the lens, and is zero or negligible close to the optical axis of the lens.
  • Lenses with a narrower field of view also introduce geometrical distortions, but with reduced effect.
  • In many applications the captured images are warped to correct for the lens distortion, which requires knowledge of the optical axis position and the lens distortion parameters.
  • Precise measurement of the lens distortion and optical axis position is also needed for 3D modelling of a scene.
  • Lens distortion modelling and measurement usually require knowledge of the location of the intersection of the optical axis with the sensor.
  • The specification describes a method comprising: based on a plurality of images of at least one displayed straight line captured by an image capture device having at least one lens and a sensor, the line having a different relative position in each respective image, wherein the lens introduces a degree of radial distortion to the captured images and has an optical axis, determining a location of intersection of the optical axis of the lens with the sensor by determining an intersection location of two non-parallel lines from the captured images having the smallest deviations from respective fitted straight lines.
  • The method may further comprise fitting a respective first order polynomial to each of a plurality of lines from captured images, and calculating the deviation of each line from its respective first order polynomial.
  • The method may further comprise determining the intersection location of two non-parallel lines from the captured images having the smallest deviations from their respective fitted first order polynomials.
  • The method may further comprise checking the curvature of each line in each captured image, and determining the position of the intersection location of the optical axis with the sensor relative to the line based on the curvature of the line.
  • Checking the curvature of the straight line may comprise fitting a second order polynomial to the line in each captured image.
  • The method may further comprise controlling a display to subsequently display a straight line at a position based on the curvature of at least one line from a captured image.
  • The method may further comprise performing a half-interval search of displayed lines while the number of pixels between the left-hand and right-hand lines is above a first predetermined threshold, and while the number of pixels between the top and bottom lines is above a second predetermined threshold.
  • The method may further comprise, when the first and second predetermined thresholds are reached, controlling display of a respective pair of intersecting lines passing through each respective pixel in the final interval following the half-interval search.
  • The displayed straight lines may comprise perpendicular straight lines.
  • The displayed lines may comprise horizontal and vertical straight lines.
  • The displayed straight lines may each comprise a borderline between two differently coloured areas of a displayed pattern, the borderline demarcating the two areas.
  • The method may further comprise capturing images of a displayed straight line respectively located at different quadrants of the display, e.g. the left-hand side, right-hand side, top, and bottom of the display.
  • The method may further comprise controlling a display to display the at least one straight line.
  • The method may further comprise determining a correspondence between display pixel locations and a location on the sensor of the camera module, based on at least one captured image of a displayed known pattern and a determined centrepoint of the pattern in the captured image.
  • The method may further comprise causing display of at least one straight line on a display, and causing movement of the camera module relative to the display to capture a plurality of images having lines in a different respective position in each image.
  • The specification describes a computer program comprising machine readable instructions that, when executed by computing apparatus, cause it to perform any method as described with reference to the first aspect.
  • The specification describes an apparatus configured to perform any method as described with reference to the first aspect.
  • The specification describes an apparatus comprising: at least one processor; and at least one memory including computer program code which, when executed by the at least one processor, causes the apparatus to perform a method as described with reference to the first aspect.
  • The computer program code, when executed by the at least one processor, may cause the apparatus to perform: fitting a respective first order polynomial to each of a plurality of lines from captured images, and calculating the deviation of each line from its respective first order polynomial.
  • The computer program code, when executed by the at least one processor, may cause the apparatus to perform: determining the intersection location of two non-parallel lines from the captured images having the smallest deviations from their respective fitted first order polynomials.
  • The computer program code, when executed by the at least one processor, may cause the apparatus to perform: checking the curvature of each line in each captured image, and determining the position of the intersection location of the optical axis with the sensor relative to the line based on the curvature of the line.
  • Checking the curvature of the straight line may comprise fitting a second order polynomial to the line in each captured image.
  • The computer program code, when executed by the at least one processor, may cause the apparatus to perform: controlling a display to subsequently display a straight line at a position based on the curvature of at least one line from a captured image.
  • The computer program code, when executed by the at least one processor, may cause the apparatus to perform: performing a half-interval search of displayed lines while the number of pixels between the left-hand and right-hand lines is above a first predetermined threshold, and while the number of pixels between the top and bottom lines is above a second predetermined threshold.
  • The computer program code, when executed by the at least one processor, may cause the apparatus to perform: controlling display of a respective pair of intersecting lines passing through each respective pixel in the final interval following the half-interval search.
  • The displayed straight lines may comprise perpendicular straight lines.
  • The displayed lines may comprise horizontal and vertical straight lines.
  • The displayed straight lines may each comprise a borderline between two differently coloured areas of a displayed pattern, the borderline demarcating the two areas.
  • The computer program code, when executed by the at least one processor, may cause the apparatus to perform: capturing images of a displayed straight line respectively located at different quadrants of the display, e.g. the left-hand side, right-hand side, top, and bottom of the display.
  • The computer program code, when executed by the at least one processor, may cause the apparatus to perform: controlling a display to display the at least one straight line.
  • The computer program code, when executed by the at least one processor, may cause the apparatus to perform: determining a correspondence between display pixel locations and a location on the sensor of the camera module, based on at least one captured image of a displayed known pattern and a determined centrepoint of the pattern in the captured image.
  • The computer program code, when executed by the at least one processor, may cause the apparatus to perform: causing display of at least one straight line on a display, and causing movement of the camera module relative to the display to capture a plurality of images having lines in a different respective position in each image.
  • The specification describes a computer-readable medium having computer-readable code stored thereon which, when executed by at least one processor, causes performance of: based on a plurality of images of at least one displayed straight line captured by an image capture device having at least one lens and a sensor, the line having a different relative position in each respective image, wherein the lens introduces a degree of radial distortion to the captured images and has an optical axis, determining a location of intersection of the optical axis of the lens with the sensor by determining an intersection location of two non-parallel lines from the captured images having the smallest deviations from respective fitted straight lines.
  • The specification describes an apparatus comprising means for: based on a plurality of images of at least one displayed straight line captured by an image capture device having at least one lens and a sensor, the line having a different relative position in each respective image, wherein the lens introduces a degree of radial distortion to the captured images and has an optical axis, determining a location of intersection of the optical axis of the lens with the sensor by determining an intersection location of two non-parallel lines from the captured images having the smallest deviations from respective fitted straight lines.
  • Figure 1 is a schematic illustration of a calibration system for determining the position of the intersection of the lens optical axis and the sensor plane, according to embodiments of this specification;
  • Figure 2 is a flow chart illustrating an example of operations which may be performed by a module for detection of the intersection of the optical axis and the sensor, according to embodiments of this specification;
  • Figure 3 is a flow chart illustrating an example of operations which may be performed by a module for detection of the intersection of the optical axis and the sensor, according to embodiments of this specification;
  • Figure 4 illustrates an example of a known pattern to be displayed to the camera module;
  • Figure 5 illustrates an example of a pattern to be displayed to the camera module;
  • Figure 6 illustrates an example of a pattern to be displayed to the camera module;
  • Figure 7 illustrates an example of an image captured by the camera module, and fitting a polynomial to the border line;
  • Figure 8 illustrates an example of a known pattern to be displayed to the camera module;
  • Figure 9 is a schematic illustration of an example configuration of the camera module according to embodiments of this specification; and
  • Figure 10 is a computer-readable memory medium upon which computer-readable code may be stored, according to embodiments of this specification.
  • Embodiments described herein provide identification of the location on a camera sensor at which the optical axis of the lens intersects the sensor.
  • The optical axis describes a line through the optical system with the highest degree of rotational symmetry: rotating the optical system around this line while imaging an object will not change the image of the object.
  • The embodiments can be applied to any kind of lens having radial distortion. Such lenses may include, but are by no means limited to, wide angle lenses, including fisheye lenses.
  • The embodiments can also be applied to camera modules such as catadioptric cameras, where the optical system comprises a combination of one or more lenses and mirrors.
  • Determining the position of the optical axis of the lens with respect to the camera sensor allows the camera to be calibrated against the lens, or facilitates movement of the lens relative to the camera sensor into a better position, e.g. during manufacture.
  • A display is controlled to display a plurality of images of at least one displayed straight line.
  • A camera module including the sensor and the lens is controlled to capture the displayed images.
  • A module for detection of the intersection of the lens optical axis and the sensor, which may be integral with or external to the camera module, processes the captured images. It also controls the display of images on the display. Plural images, each with straight lines at different locations, are displayed. The lines appear curved in the captured images because of the distortion caused by the lens.
  • The module for detection of the intersection of the lens optical axis and the sensor fits a first order polynomial to lines from some of the captured images, and determines the deviation of the coordinates of the (curved) lines in the captured images from their respective fitted straight lines. This is performed horizontally and vertically.
  • The display may be controlled to display straight lines at all vertical and horizontal positions, at the resolution of the display, in a relatively small area around the point on the display that corresponds to the point on the camera sensor at which the optical axis of the lens intersects the sensor.
  • The module for detection of the intersection of the lens optical axis and the sensor determines the intersection location of the lines having the smallest deviation from their respective fitted straight lines, both horizontally and vertically.
  • The fitted straight lines are the lines which best fit the coordinates of the lines in the captured images.
  • The lines may be fitted according to any suitable line fitting method.
  • In an iterative phase, the display is controlled to display straight lines at positions that are increasingly closer to the point on the display that corresponds to the point on the camera sensor at which the optical axis of the lens intersects. This is performed without needing to know the positional relationship between the display, lens and sensor.
  • A straight line is caused to be displayed at a position on the display, and the direction of curvature (left or right, up or down) in the captured image is used to determine the location of the intersection of the optical axis and the sensor relative to the displayed line.
  • A straight line is then displayed at a location on the display that lies in the determined relative direction, and the direction of curvature (left or right, up or down) in the captured image is used to determine the location of the intersection of the optical axis and the sensor relative to the newly displayed line.
  • The iteration may utilise a half-interval search, for instance.
  • A second phase may then be used to identify the location to a greater level of accuracy.
  • The iterative phase reduces the time needed to find the intersection location of the optical axis of the lens with the sensor, but may be omitted in some implementations.
  • Figure 1 is a schematic illustration of a calibration system 1.
  • The system includes a camera module 10 which comprises a lens 20 and a sensor 50.
  • The lens 20 may be a wide angle lens such as, for example, a fisheye lens.
  • The lens 20 can also be a combination of two or more individual lenses. Alternatively, the lens may be any other kind of lens which introduces some form of radial distortion to the captured image.
  • The camera module 10 can be a catadioptric module where, instead of one or more lenses 20, a combination of one or more mirrors and lenses is used to project the image of a scene onto the sensor 50.
  • The system also includes a display 30, for instance a flat display.
  • The flat display 30 may be a TV display.
  • Alternatively, the flat display 30 may comprise a pattern projected onto a flat wall by a high resolution projector.
  • The display 30 may comprise any suitable means for displaying an image to the camera module 10.
  • The camera module 10 is configured to capture at least one image displayed on the display. For example, in a configuration set up for calibration, the camera module 10 is positioned such that the lens 20 is directed at the display 30.
  • The lens 20 produces geometrical distortions in images captured by the camera module 10. For example, straight lines imaged by the camera module 10 may appear as curved lines in the captured images. The distortion increases for pixels further away from the optical axis of the lens. The least distortion is produced at the location at which the optical axis of the lens intersects the sensor.
  • The system 1 additionally comprises a detection module 40 for detection of the location of the intersection of the lens optical axis and the sensor.
  • The module 40 may form part of a computing device such as a PC or a server, for example. Alternatively, the detection module 40 may form part of the camera module 10.
  • The detection module 40 may be configured to receive captured images from the camera module 10.
  • The detection module 40 may be configured to process images captured by the camera module 10.
  • The detection module 40 may be configured to control the display 30 to display a pattern.
  • The camera module 10 is configured to capture a plurality of images, each of at least one displayed straight line, the line being located at a different relative position in each respective image.
  • The detection module 40 is configured to determine an intersection location of two non-parallel lines from the captured images having the smallest deviation from a respective fitted straight line. An example of how this determination may be performed is described in more detail with reference to Figures 2 and 3.
  • Images captured by the camera module 10 can be modified using software, e.g. to model a 3D scene.
  • The detection of the location of the intersection between the lens optical axis and the sensor may be made during the manufacturing process.
  • The determined intersection location of the optical axis with the sensor may be used to move the lens relative to the camera sensor 50 into a desired position, for instance such that the optical axis of the lens intersects with a position at or close to the centre of the camera sensor.
  • The determined intersection of the optical axis and the sensor may alternatively be used at the manufacturing stage to configure image processing software such that the software processes captured images optimally.
  • The determination of the location of the intersection of the lens optical axis and the sensor may also be made in order to detect any changes to the position of the optical axis of the lens relative to the sensor 50 during the life of the camera module. Such changes can occur if the camera module suffers an impact, for example if it is bumped or dropped.
  • The system 1 may be configured such that the display 30 appears as large as possible in the field of view of the camera module 10, advantageously occupying the whole of the field of view.
  • The display 30 may be positioned close to the lens of the camera module, without the display 30 extending beyond the field of view of the lens 20.
  • The display 30 may include a large number of pixels; for example, it may be a 4K or 8K resolution display.
  • The higher the resolution of the display, the higher the accuracy of detection of the intersection between the lens optical axis and the sensor.
  • Figure 2 is a flow chart illustrating various operations which may be performed by the calibration system.
  • In some embodiments, not all of the illustrated operations need to be performed. Operations may also be performed in a different order compared to the order presented in Figure 2.
  • The display 30 is controlled to display a plurality of images of at least one displayed straight line, and the camera module 10 is configured to capture this plurality of images.
  • The camera module 10 may provide the captured images to the detection module 40.
  • The detection module 40 may form part of the camera module 10. Alternatively, the detection module 40 may form part of a separate computing device such as a PC or a server.
  • The detection module 40 is configured to fit a first order polynomial to each of the lines from the captured images.
  • The distortion from the camera lens 20 results in the lines in the captured images having a curvature which increases with the distance of the projection of the line on the sensor 50 from the intersection location of the optical axis of the lens 20 with the sensor.
  • The detection module 40 is configured to determine the deviation of the coordinates of the imaged lines from their respective fitted straight lines.
  • In step S140, the detection module 40 is configured to determine the intersection location of the lines having the smallest deviation from their respective fitted lines.
  • The determined intersection location of the lines is taken to be the intersection location of the optical axis of the lens with the sensor of the camera module 10.
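  • As an illustrative sketch of these operations, the snippet below fits a first order polynomial to one detected line and measures how far the line departs from it (assuming the pixel coordinates of each detected line are already available as a numpy array; the helper name and the mean absolute deviation measure are illustrative choices, not prescribed by this specification):

```python
import numpy as np

def line_deviation(points_xy):
    """Fit a first order polynomial (straight line) to the pixel
    coordinates of one detected line and return the mean absolute
    deviation of the points from that fitted line."""
    x = points_xy[:, 0].astype(float)
    y = points_xy[:, 1].astype(float)
    # For near-vertical lines, swap the axes so the least-squares
    # fit stays well conditioned.
    if np.ptp(x) < np.ptp(y):
        x, y = y, x
    m, c = np.polyfit(x, y, 1)  # first order (straight line) fit
    return np.mean(np.abs(y - (m * x + c)))
```

  • The intersection location would then be taken at the crossing point of the horizontal and vertical detected lines whose deviation values are smallest.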
  • In some embodiments, the display 30 is controlled to display straight lines at positions that are increasingly closer to the point on the display that corresponds to the point on the camera sensor 50 at which the optical axis of the lens 20 intersects.
  • In some embodiments, the display 30 is controlled to display straight lines at all vertical and horizontal pixel positions, at the resolution of the display, in a relatively small area around the point on the display that corresponds to the point on the camera sensor 50 at which the optical axis of the lens 20 intersects the sensor 50.
  • Alternatively, any other suitable method of determining an intersection location of two straight lines in the captured images may be used.
  • Figure 3 is a flow chart illustrating various operations which may be performed by the calibration system. In some embodiments, not all of the illustrated operations need to be performed. Operations may also be performed in a different order compared to the order presented in Figure 3.
  • First, the detection module 40 may check the curvature of a vertical line displayed on the left hand side of the display.
  • The display is configured to show a known pattern including a centrepoint 60.
  • An example pattern is a checkerboard, such as that shown in Figure 4.
  • The checkerboard in Figure 4 is merely an example of a pattern which may be used as the known pattern in the method.
  • The checkerboard may be of any suitable size.
  • The checkerboard may be displayed over a given number of display pixels.
  • In the example of Figure 4, the checkerboard is a 4x4 checkerboard pattern. It will be appreciated that any other dimensioned checkerboard may be used, as long as the centrepoint 60 of the pattern is suitable for detection in a captured image of the pattern.
  • The centrepoint 60 of the pattern is displayed close to the left border of the display.
  • The pixel coordinates of the centrepoint 60 of the pattern in this step are dispXleft (i.e. on the left hand side of the display) and dispYres/2 (where dispYres is the vertical resolution of the display).
  • The vertical pixel coordinate may instead be any vertical pixel coordinate on the display, other than dispYres/2.
  • The camera module is configured to capture an image of the displayed pattern.
  • The centrepoint 60 of the pattern is detected in the captured image of the pattern by the detection module 40. In this way, the detection module 40 may determine the correspondence between the display coordinates (the coordinates on the display 30 at which the centre of the checkerboard is displayed) and the image coordinates (the corresponding coordinates in the captured image).
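  • A minimal sketch of this correspondence step is given below, assuming OpenCV is available (the specification does not prescribe a particular corner detector; note that cv2.findChessboardCorners expects the count of inner corners, so a 4x4 checkerboard is passed as a 3x3 grid, and strong fisheye distortion may call for a more robust detector):

```python
import cv2

def detect_pattern_centrepoint(image_gray, inner_corners=(3, 3)):
    """Locate the centrepoint of a displayed 4x4 checkerboard in a
    captured grayscale image. A 4x4 board has a 3x3 grid of inner
    corners; the middle inner corner is the pattern centrepoint."""
    found, corners = cv2.findChessboardCorners(image_gray, inner_corners)
    if not found:
        return None
    corners = corners.reshape(-1, 2)
    return corners[len(corners) // 2]  # middle corner of the 3x3 grid
```

  • Pairing the display coordinates at which the centrepoint was drawn (here dispXleft, dispYres/2) with the detected image coordinates gives one display-to-image correspondence.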
  • The display is subsequently configured to show a pattern containing two differently coloured areas, as shown for example in Figure 5.
  • The image includes one white area and one black area on different portions of the screen.
  • A vertical borderline 70 demarcates the two areas.
  • The borderline is located at horizontal pixel position dispXleft. The vertical size of each area is equal to the vertical resolution of the display.
  • The camera module 10 is configured to capture an image of the displayed pattern.
  • The borderline between the two areas is detected in the captured image.
  • The borderline 70 may appear curved in the captured image due to the distortion introduced by the wide angle lens of the camera module. The amount of distortion depends on the distance between the borderline and the intersection location of the optical axis of the lens 20 with the sensor 50.
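  • One way to extract the projected borderline is sketched below, under the assumption of a grayscale capture with a dark area on one side of the border and a bright area on the other (the threshold value and the sub-pixel interpolation are illustrative choices):

```python
import numpy as np

def detect_vertical_borderline(image_gray, threshold=128.0):
    """For each sensor row, find the sub-pixel column at which the
    intensity crosses the threshold. Returns an (N, 2) array of
    (x, y) points tracing the (possibly curved) borderline."""
    img = image_gray.astype(float)
    points = []
    for y, row in enumerate(img):
        crossings = np.where(np.diff(np.sign(row - threshold)) != 0)[0]
        if len(crossings) != 1:  # skip rows without a single clean edge
            continue
        x0 = crossings[0]
        rise = row[x0 + 1] - row[x0]
        if rise == 0:
            continue
        # Linear interpolation between the two straddling pixels.
        points.append((x0 + (threshold - row[x0]) / rise, y))
    return np.asarray(points)
```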
  • The detection module 40 is configured to calculate a second order polynomial function to fit the borderline 70 in the captured image.
  • The second order polynomial may be fitted according to any suitable fitting method.
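  • Writing the fitted polynomial for a near-vertical borderline as x = A*y^2 + B*y + C, where A is the second order coefficient, this fitting step might be sketched as follows (an illustrative numpy-based sketch; the sign convention assumes image x coordinates increase to the right):

```python
import numpy as np

def border_curvature(border_points_xy):
    """Fit x = A*y**2 + B*y + C to a detected near-vertical borderline
    and return the second order coefficient A, whose sign indicates on
    which side of the line the optical axis intersects the sensor."""
    x = border_points_xy[:, 0]
    y = border_points_xy[:, 1]
    A, _, _ = np.polyfit(y, x, 2)
    return A
```

  • For horizontal borderlines the roles of x and y are swapped (y = A*x^2 + B*x + C), and the sign of A then indicates whether the optical axis lies above or below the line.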
  • The sign of the parameter A is then determined. In this example, a positive sign of A indicates that the intersection location of the optical axis with the sensor 50 is to the right hand side of the curve. If A is negative at this step, the procedure is stopped. If A is positive, the procedure continues with the next step, which determines the curvature of a vertical line on the right hand side of the display, as follows.
  • The detection module 40 may next check the curvature of a vertical line displayed on the right hand side of the display.
  • The display is configured to show the known pattern including a centrepoint, such as the checkerboard described above and shown in Figure 4.
  • The centrepoint 60 of the pattern is displayed close to the right border of the display.
  • The pixel coordinates of the centrepoint 60 of the pattern in this step are dispXright (i.e. on the right hand side of the display) and dispYres/2 (where dispYres is the vertical resolution of the display).
  • A vertical pixel coordinate other than dispYres/2 may be used for the centrepoint coordinates.
  • The camera module 10 is configured to capture an image of the displayed pattern.
  • The centrepoint 60 of the pattern is detected in the captured image of the pattern.
  • In this way, the system may determine the correspondence between the display coordinates and the image coordinates.
  • The display is then configured to show a pattern containing two differently coloured areas, with a borderline between the two areas located at horizontal pixel position dispXright.
  • The vertical size of each area is equal to the vertical resolution of the display.
  • The camera module is configured to capture an image of the displayed pattern.
  • The borderline between the two areas is detected in the captured image.
  • The detection module 40 is configured to determine a second order polynomial to fit the borderline in the captured image.
  • The sign of the parameter A is determined. In this step, a negative sign of A indicates that the intersection location of the optical axis with the sensor 50 is to the left hand side of the curve. If A is positive at this step, the procedure is stopped. If A is negative, the procedure continues as follows.
  • The detection module 40 may then check the curvature of a horizontal line displayed at the top of the display.
  • The display is configured to show a known pattern including a centrepoint 60, such as a checkerboard.
  • The centrepoint 60 of the pattern is displayed close to the top border of the display.
  • The pixel coordinates of the centrepoint 60 of the pattern in this case are dispYtop (i.e. at the top of the display) and dispXres/2 (where dispXres is the horizontal resolution of the display).
  • A horizontal pixel coordinate other than dispXres/2 may be used for the centrepoint of the displayed checkerboard.
  • The camera module is configured to capture an image of the displayed pattern.
  • The centrepoint 60 of the pattern is detected in the captured image of the pattern.
  • The display is then configured to show a pattern containing two differently coloured areas, as shown for example in Figure 6. In the example of Figure 6, the image includes one white area and one black area on different portions of the screen.
  • The borderline is located at vertical pixel position dispYtop.
  • The horizontal size of each area is equal to the horizontal resolution of the display.
  • The camera module is configured to capture an image of the displayed pattern.
  • The borderline between the two areas is detected in the captured image.
  • The detection module 40 is configured to calculate a second order polynomial function to fit the borderline in the captured image.
  • The sign of the parameter A is determined. In this example, a positive sign of A indicates that the intersection location of the optical axis with the sensor 50 is below the curve. If A is negative at this step, the procedure is stopped. If A is positive, the procedure continues to determine the curvature of a horizontal line at the bottom of the display, as follows.
  • The detection module 40 may then check the curvature of a horizontal line displayed at the bottom of the display.
  • The display is configured to show a known pattern including a centrepoint 60, such as a checkerboard.
  • The centrepoint 60 of the pattern is displayed close to the bottom border of the display.
  • The pixel coordinates of the centrepoint 60 of the pattern in this case are dispYbottom (i.e. at the bottom of the display) and dispXres/2.
  • A horizontal pixel coordinate other than dispXres/2 may be used for the centrepoint of the displayed checkerboard.
  • The camera module is configured to capture an image of the displayed pattern.
  • The centrepoint 60 of the pattern is detected in the captured image of the pattern.
  • The display is then configured to show a pattern containing two differently coloured areas, with a borderline located at vertical pixel position dispYbottom.
  • The horizontal size of each area is equal to the horizontal resolution of the display.
  • The camera module is configured to capture an image of the displayed pattern.
  • The borderline between the two areas is detected in the captured image.
  • The system is configured to calculate a second order polynomial function to fit the borderline in the captured image.
  • The sign of the parameter A is determined. In this example, a negative sign of A indicates that the intersection location of the optical axis with the sensor is above the curve. If A is positive at this step, the procedure is stopped. If A is negative, the procedure continues as follows.
  • In step S250, a half-interval search is performed.
  • Each iteration may include the following sub-steps.
  • In sub-step (a), the known pattern described above is displayed.
  • The centrepoint of the pattern is displayed at (dispXleft + dispXright)/2 and (dispYtop + dispYbottom)/2.
  • An image of the pattern is captured.
  • The coordinates of the centrepoint of the pattern are detected in the captured image.
  • In sub-step (b), a pattern including two areas separated by a horizontal borderline is displayed.
  • The borderline is located at vertical coordinate (dispYtop + dispYbottom)/2.
  • An image is captured and the borderline is detected in the captured image.
  • A polynomial of second order is fitted to the detected borderline in the captured image, and the value of A is determined.
  • In sub-step (c), a pattern including two areas separated by a vertical borderline is displayed.
  • The borderline is located at horizontal coordinate (dispXleft + dispXright)/2.
  • An image is captured, the borderline is detected in the captured image, and a second order polynomial is again fitted and its value of A determined. Based on the signs of A, one half of each search interval is retained: for example, dispXleft is updated to the mid coordinate if the optical axis lies to the right of the displayed vertical line, and dispXright is updated to the mid coordinate otherwise, with dispYtop and dispYbottom updated analogously.
  • Sub-steps (a) to (c) are cyclically repeated with the updated coordinate values of dispXleft, dispXright, dispYtop and dispYbottom, while the difference between the left and right horizontal coordinates is greater than N pixels (i.e. while dispXright - dispXleft > N), and while the difference between the top and bottom vertical coordinates is greater than M pixels (i.e. while dispYbottom - dispYtop > M, assuming display coordinates that increase downwards).
  • In other words, the search interval is halved at each iteration until its length is no greater than a minimum number of pixels: N for the horizontal interval and M for the vertical interval.
  • The values of N and M may be set to any suitable value. In one example, N and M may each be set equal to 4. A sketch of this loop is given below.
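  • The sketch below abstracts the display-and-capture plumbing behind hypothetical callbacks that display a borderline at a given display coordinate, capture an image, and return the fitted second order coefficient A; the interval update rule and the top-left display origin are assumptions consistent with the curvature signs described above:

```python
def half_interval_search(A_for_vertical_line, A_for_horizontal_line,
                         dispXleft, dispXright, dispYtop, dispYbottom,
                         N=4, M=4):
    """Shrink the display-coordinate window around the point that maps
    to the optical axis intersection. Assumes display coordinates
    increase downwards, so dispYbottom > dispYtop."""
    while (dispXright - dispXleft > N) or (dispYbottom - dispYtop > M):
        xmid = (dispXleft + dispXright) // 2
        ymid = (dispYtop + dispYbottom) // 2
        # Vertical borderline at xmid: positive A indicates that the
        # optical axis lies to the right of the displayed line.
        if A_for_vertical_line(xmid) > 0:
            dispXleft = xmid
        else:
            dispXright = xmid
        # Horizontal borderline at ymid: positive A indicates that the
        # optical axis lies below the displayed line.
        if A_for_horizontal_line(ymid) > 0:
            dispYtop = ymid
        else:
            dispYbottom = ymid
    return dispXleft, dispXright, dispYtop, dispYbottom
```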
  • In step S260, the final search interval is scanned to find the intersection location of the optical axis with the sensor 50.
  • The search intervals lie between the final updated values of dispXleft, dispXright, dispYtop and dispYbottom after the final iteration in step S250.
  • Each pair of horizontal and vertical coordinates in the interval is scanned.
  • First, the known pattern is displayed with its centrepoint at the horizontal and vertical coordinates being scanned.
  • An image of the pattern is captured, and the coordinates of the centrepoint are detected.
  • Next, a pattern including two areas with a horizontal borderline is displayed.
  • The borderline is displayed at the vertical coordinate being scanned.
  • An image of the pattern is captured.
  • The borderline is detected in the captured image.
  • A pattern including two areas with a vertical borderline is then displayed.
  • The borderline is displayed at the horizontal coordinate being scanned.
  • An image of the pattern is then captured.
  • The borderline is detected in the captured image.
  • A first order polynomial (straight line) is fitted to each detected borderline, and the deviation of each point of the detected line from the fitted line is calculated.
  • The intersection location of the optical axis with the sensor 50 is determined to be the point in the captured image for which both the horizontal and vertical lines have the smallest deviations from their respective fitted lines. The intersection location of the optical axis with the sensor 50 may therefore be determined based on an intersection location of two lines which have a degree of curvature below a given threshold, the two intersecting lines being selected based on their deviations from the respective fitted straight lines being the smallest.
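  • A sketch of this exhaustive scan is given below, reusing the line_deviation helper from the earlier sketch (the capture callbacks are hypothetical, and summing the two deviations is one simple way of selecting the point at which both lines are straightest):

```python
def scan_final_interval(capture_vertical_line, capture_horizontal_line,
                        x_coords, y_coords):
    """Display a vertical borderline at every x and a horizontal
    borderline at every y in the final search interval, then return
    the (x, y) pair whose detected lines deviate least from straight."""
    dev_v = {x: line_deviation(capture_vertical_line(x)) for x in x_coords}
    dev_h = {y: line_deviation(capture_horizontal_line(y)) for y in y_coords}
    return min(((x, y) for x in x_coords for y in y_coords),
               key=lambda p: dev_v[p[0]] + dev_h[p[1]])
```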
  • The steps described above may be performed once, and the horizontal and vertical coordinates of the intersection location of the optical axis with the sensor 50 thereby determined.
  • Alternatively, the above procedure may be run multiple times to obtain multiple estimates of the coordinates of the intersection location of the optical axis with the sensor 50. There may be small differences between the coordinates determined in each run due to noise in the estimation. From the multiple estimates the average, median, or any other statistical measure may be computed in order to obtain a single pair of coordinates, as in the sketch below.
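  • For instance (a sketch in which estimate_intersection stands for one full run of the procedure above):

```python
import numpy as np

# estimate_intersection() is a hypothetical stand-in for one full run
# of the procedure described above. Combine several per-run estimates
# into a single coordinate pair; the median is robust to the
# occasional noisy run.
estimates = np.array([estimate_intersection() for _ in range(10)])
centre_x, centre_y = np.median(estimates, axis=0)
```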
  • The values of N and M may be set to any suitable value. However, it will be understood that larger values of N and M increase the processing time in step S260, as a larger number of optical axis intersection coordinate candidates must be scanned over the larger search interval.
  • The known pattern displayed in the above steps is not limited to being a checkerboard.
  • For example, the known pattern may be a circle.
  • The circle may be positioned at given coordinates of the display in the same way as the centrepoint of the checkerboard, in order to determine the correspondence between the display's coordinates and the coordinates of the captured images.
  • An example of a circle having a centrepoint at dispXleft, dispYres/2 is shown in Figure 8a.
  • An example of the same circle is shown in Figure 8b, having a centrepoint at dispXres/2, dispYtop.
  • Indeed, any suitable pattern may be used.
  • A checkerboard pattern as shown in Figure 4 is not limited to being a 4x4 checkerboard, but may include any suitable number of rectangles. It will also be appreciated that the patterns may be provided in any suitable colour.
  • In the examples above the patterns are in black and white, but they are not limited to such colours.
  • The centrepoint of the checkerboard is described as being detected in order to determine the correspondence between the display coordinates and the camera sensor 50 coordinates. However, it will be appreciated that any of the intersection points on the checkerboard may be chosen to be detected for the coordinate correspondence.
  • Any displayed pattern may be scaled so that it is completely contained on the display. In this way, detection of the pattern may be improved. It will be appreciated that instead of patterns including two areas demarcated by a borderline, a pattern showing a single line may be displayed. Alternatively, more than one line at a time may be displayed. For example, two intersecting lines may be displayed.
  • In this way, the detectability of the line may be improved.
  • The detectability of a single line may depend on the sensor 50 and display resolutions.
  • Where the display is a TV display, it may include a larger number of horizontal pixels than vertical pixels. Therefore, instead of displaying a vertical line on the display, the display may show only horizontal lines, and the camera module may be rotated 90 degrees relative to the display. Alternatively, the relative roll between the displayed border and the camera module may be any suitable angle, in which case the errors in the detection of the beginning and end of the borderlines may be taken into account.
  • In other embodiments, the relative rotation between the display and the sensor 50 plane of the camera module may be limited, including the roll angle.
  • The borderlines displayed need not only be horizontal and vertical lines, but may be lines displayed at any angle on the display. Additionally, the displayed lines need not be perpendicular to each other, but may be displayed at any angle to each other.
  • Alternatively, the display may be configured to display a fixed pattern, where the display is moveable relative to the camera module.
  • The display may be configured to be moved by a high precision motor so that the pattern is imaged on the sensor 50 at different locations.
  • The detection module 40 may be configured to control the position of the display 30 relative to the camera module. This may involve moving the position of the display 30, for example by controlling a motor connected to the display. Alternatively, this may involve moving the position of the camera module 10.
  • The camera module 10 may also be controlled by the detection module 40 to rotate relative to the display.
  • The display may be configured to show the displayed patterns at as large a size as possible, so that as much of the field of view of the lens as possible is covered.
  • The display may also be provided at as small a distance as possible from the lens of the camera module. This increases the coverage of the lens field of view.
  • The method is not limited to the half-interval search algorithm disclosed above; any other efficient search algorithm could be implemented.
  • In some embodiments, steps S210 to S250 are not performed. In such embodiments, only step S260 is performed, and each pixel of the display is scanned. In this case, respective first order polynomials are fitted to captured images of displayed horizontal and vertical lines for each pixel of the display. This may increase the overall processing time taken to determine the intersection location of the optical axis with the sensor. In other embodiments, step S260 is not performed. In such embodiments, only steps S210 to S250 are performed, and second order polynomials are fitted to captured images of displayed horizontal and vertical lines as described in steps S210 to S250.
  • In that case, the half-interval search may continue until two of the respective fitted lines have a value of A of approximately zero, indicating that the line is effectively a first order polynomial.
  • The intersection location of the optical axis with the sensor 50 is then determined to be the point at which the first order polynomials intersect.
  • However, because the curvature reaches a small value as the pixels being searched approach the optical axis, the uncertainty in the sign of A may increase, and so the location of the intersection of the optical axis with the sensor 50 may be determined to a lower level of accuracy compared to performing the full method depicted in Figure 3.
  • By combining both phases, a search interval including the optical axis may be determined relatively quickly using the half-interval search method, and the position of the intersection of the optical axis and the sensor may be determined to a high level of accuracy by scanning each (displayed) pixel in the final search interval and fitting respective first order polynomials to the captured lines for each pixel in the search interval.
  • Regarding steps S210 to S260, it will be appreciated that not all of the steps need to be performed in combination to obtain a determination of the intersection location of the optical axis with the sensor 50.
  • Figure 9 is a schematic block diagram of an example configuration of a camera module 10 such as described with reference to Figures 1 to 6.
  • The camera module 10 may comprise memory 11 and processing circuitry 14.
  • The memory 11 may comprise any combination of different types of memory.
  • In the example shown, the memory comprises one or more read-only memory (ROM) media 13 and one or more random access memory (RAM) media 12.
  • The camera module 10 comprises one or more sensors 50 which may be configured to receive a projection of an image through the lens 20.
  • The processing circuitry 14 may be configured to process a captured image as described with reference to Figures 2 and 3. Alternatively, the processing circuitry may be separate from the camera module 10. In this case, the camera module 10 may further comprise an output interface 15 configured for transmitting a captured image to the processing circuitry.
  • The memory described with reference to Figure 9 may have computer readable instructions 13A stored thereon which, when executed by the processing circuitry 14, cause the processing circuitry 14 to perform various ones of the operations described above.
  • The processing circuitry 14 described above with reference to Figure 9 may be of any suitable composition and may include one or more processors 14A of any suitable type or suitable combination of types.
  • For example, the processing circuitry 14 may be a programmable processor that interprets computer program instructions and processes data.
  • The processing circuitry 14 may include plural programmable processors.
  • Alternatively, the processing circuitry 14 may be, for example, programmable hardware with embedded firmware.
  • The processing circuitry 14 may be termed processing means.
  • The processing circuitry 14 may alternatively or additionally include one or more Application Specific Integrated Circuits (ASICs).
  • In some instances, the processing circuitry 14 may be referred to as computing apparatus.
  • The processing circuitry 14 described with reference to Figure 9 is coupled to the memory 11 (or one or more storage devices) and is operable to read data from, and write data to, the memory.
  • The memory may comprise a single memory unit or a plurality of memory units 13 upon which the computer readable instructions 13A (or code) are stored.
  • The memory 11 may comprise both volatile memory 12 and non-volatile memory 13.
  • The computer readable instructions 13A may be stored in the non-volatile memory 13 and may be executed by the processing circuitry 14 using the volatile memory 12 for temporary storage of data, or of data and instructions.
  • Examples of volatile memory include RAM, DRAM, and SDRAM. Examples of non-volatile memory include ROM, PROM, EEPROM, flash memory, optical storage, and magnetic storage.
  • The memories 11 in general may be referred to as non-transitory computer readable memory media.
  • The term 'memory', in addition to covering memory comprising both non-volatile memory and volatile memory, may also cover one or more volatile memories only, one or more non-volatile memories only, or one or more volatile memories and one or more non-volatile memories.
  • The computer readable instructions 13A described herein with reference to Figure 9 may be pre-programmed into the camera module 10. Alternatively, the computer readable instructions 13A may arrive at the camera module 10 via an electromagnetic carrier signal or may be copied from a physical entity such as a computer program product, a memory device or a record medium such as a CD-ROM or DVD. The computer readable instructions 13A may provide the logic and routines that enable the camera module 10 to perform the operations described above.
  • Figure 10 illustrates an example of a computer-readable medium 16 with computer-readable instructions (code) stored thereon.
  • The computer-readable instructions (code), when executed by a processor, may cause any one of, or any combination of, the operations described above to be performed.
  • The camera module 10 described herein may include various hardware components which may not have been shown in the Figures, since they may not have direct interaction with the features shown.
  • Embodiments may be implemented in software, hardware, application logic or a combination of software, hardware and application logic.
  • The software, application logic and/or hardware may reside on memory, or any computer media.
  • In an example embodiment, the application logic, software or an instruction set is maintained on any one of various conventional computer-readable media.
  • A "memory" or "computer-readable medium" may be any non-transitory media or means that can contain, store, communicate, propagate or transport the instructions for use by or in connection with an instruction execution system, apparatus, or device, such as a computer.
  • References to, where relevant, "computer-readable storage medium", "computer program product", "tangibly embodied computer program" etc., or a "processor" or "processing circuitry" etc. should be understood to encompass not only computers having differing architectures such as single/multi-processor architectures and sequencers/parallel architectures, but also specialised circuits such as field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), signal processing devices and other devices.
  • References to computer program, instructions, code etc. should be understood to encompass software for a programmable processor, or firmware such as the programmable content of a hardware device, whether as instructions for a processor or as configuration settings for a fixed-function device, gate array, programmable logic device, etc.
  • The term 'circuitry' refers to all of the following: (a) hardware-only circuit implementations (such as implementations in only analogue and/or digital circuitry); (b) combinations of circuits and software (and/or firmware), such as (as applicable): (i) a combination of processor(s), or (ii) portions of processor(s)/software (including digital signal processor(s)), software, and memory(ies) that work together to cause an apparatus, such as a mobile device or server, to perform various functions; and (c) circuits, such as a microprocessor(s) or a portion of a microprocessor(s), that require software or firmware for operation, even if the software or firmware is not physically present.
  • The term 'circuitry' would also cover an implementation of merely a processor (or multiple processors) or a portion of a processor and its (or their) accompanying software and/or firmware.
  • The term 'circuitry' would also cover, for example and if applicable to the particular claim element, a baseband integrated circuit or applications processor integrated circuit for a mobile phone, or a similar integrated circuit in a server, a cellular network device, or other network device.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Optics & Photonics (AREA)
  • Geometry (AREA)
  • Studio Devices (AREA)
  • Image Processing (AREA)

Abstract

A method is described comprising: based on a plurality of images of at least one displayed straight line captured by an image capture device (10) having at least one lens (20) and a sensor (50), the line having a different relative position in each respective image, wherein the lens introduces a degree of radial distortion to the captured images and has an optical axis, determining a location of intersection of the optical axis of the lens with the sensor by determining an intersection location of two non-parallel lines from the captured images having the smallest deviations from respective fitted straight lines. The lens may be a wide angle lens. The camera module may comprise an optical system with a combination of one or more lenses and mirrors.
PCT/FI2017/050755 2016-11-08 2017-11-02 Determining an intersection location of an optical axis of a lens with a camera sensor WO2018087424A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB1618779.1A GB2555643A (en) 2016-11-08 2016-11-08 Determining an intersection location of an optical axis of a lens with a camera sensor
GB1618779.1 2016-11-08

Publications (1)

Publication Number Publication Date
WO2018087424A1 (fr) 2018-05-17

Family

ID=61908000

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/FI2017/050755 WO2018087424A1 (fr) 2016-11-08 2017-11-02 Determining an intersection location of an optical axis of a lens with a camera sensor

Country Status (2)

Country Link
GB (1) GB2555643A (fr)
WO (1) WO2018087424A1 (fr)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112326202B (zh) * 2020-10-23 2022-12-09 歌尔光学科技有限公司 Binocular parallax testing method, apparatus and fixture for a virtual reality device

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS56103302A (en) * 1980-01-21 1981-08-18 Fuji Photo Film Co Ltd Method for positioning lens in photodetector array
EP2197018A1 (fr) * 2008-12-12 2010-06-16 FEI Company Method for determining distortions in a particle-optical apparatus
JP2014182354A (ja) * 2013-03-21 2014-09-29 Sharp Corp Lens tilt detection device and lens tilt detection method

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6002525A (en) * 1998-07-06 1999-12-14 Intel Corporation Correcting lens distortion
US20050018175A1 (en) * 2001-11-13 2005-01-27 Tzu-Hung Cheng Method for deriving a calibration and method for image processing
US20050036706A1 (en) * 2003-08-15 2005-02-17 Donghui Wu Better picture for inexpensive cameras
US20090059041A1 (en) * 2007-08-27 2009-03-05 Sung Jin Kwon Method of correcting image distortion and apparatus for processing image using the method
US20150146048A1 (en) * 2013-11-27 2015-05-28 Huawei Technologies Co., Ltd. Radial distortion parameter acquiring method and apparatus

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
BRITO, J. H. ET AL.: "Radial Distortion Self-Calibration", IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, 23 June 2013 (2013-06-23), PORTLAND, OR, USA, pages 1368 - 1375, XP032493036 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109788277A (zh) * 2019-01-08 2019-05-21 浙江大华技术股份有限公司 Compensation method, apparatus and storage medium for optical axis deviation of an anti-shake camera module
CN109788277B (zh) 2019-01-08 2020-08-04 浙江大华技术股份有限公司 Compensation method, apparatus and storage medium for optical axis deviation of an anti-shake camera module
CN112562014A (zh) * 2020-12-29 2021-03-26 纵目科技(上海)股份有限公司 Camera calibration method, system, medium and apparatus

Also Published As

Publication number Publication date
GB2555643A (en) 2018-05-09

Similar Documents

Publication Publication Date Title
US11461930B2 (en) Camera calibration plate, camera calibration method and device, and image acquisition system
US10176554B2 (en) Camera calibration using synthetic images
CN111179358B (zh) 2022-03-11 Calibration method, apparatus, device and storage medium
US9769443B2 (en) Camera-assisted two dimensional keystone correction
JP5580164B2 (ja) 2014-08-27 Optical information processing device, optical information processing method, optical information processing system, and optical information processing program
US9916689B2 (en) Apparatus and method for estimating camera pose
US9998719B2 (en) Non-planar surface projecting system, auto-calibration method thereof, and auto-calibration device thereof
US9344695B2 (en) Automatic projection image correction system, automatic projection image correction method, and non-transitory storage medium
CN108574825B (zh) 2020-07-14 Method and apparatus for adjusting a pan-tilt camera
CN109855568B (zh) 2021-03-09 Detection method and apparatus for autonomous driving sensors, electronic device, and storage medium
CN110689581B (zh) 2022-05-13 Structured light module calibration method, electronic device, and computer-readable storage medium
CN112399158B (zh) 2023-04-07 Projection image calibration method and apparatus, and projection device
CN107871329B (zh) 2021-04-20 Rapid calibration method and apparatus for the optical centre of a camera
WO2018102990A1 (fr) 2018-06-14 System and method for correcting a wide-angle image
CN113034612B (zh) 2023-02-28 Calibration apparatus and method, and depth camera
US11880993B2 (en) Image processing device, driving assistance system, image processing method, and program
US20190320166A1 (en) Calibration system and method
WO2018087424A1 (fr) 2018-05-17 Determining an intersection location of an optical axis of a lens with a camera sensor
CN112995624B (zh) 2022-11-08 Keystone error correction method and apparatus for a projector
CN110443750B (zh) 2023-06-27 Method for detecting motion in a video sequence
US20180182078A1 (en) Image processing apparatus and image processing method
CN114286075B (zh) 2024-02-02 Correction parameter adjustment method and apparatus, electronic device, and readable medium
JP7284246B2 (ja) 2023-05-30 Imaging device
KR20140068444A (ko) 2014-06-09 Apparatus and method for calibrating a camera using images of a multi-layer planar object
KR101583662B1 (ko) 2016-01-08 Method for detecting feature points in wide-angle camera images

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17869607

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17869607

Country of ref document: EP

Kind code of ref document: A1