WO2015133414A1 - Calibration method, calibration device, and computer program product - Google Patents

Calibration method, calibration device, and computer program product

Info

Publication number
WO2015133414A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
correction parameter
positional deviation
camera
photographic
Prior art date
Application number
PCT/JP2015/056016
Other languages
French (fr)
Inventor
Jun Kishiwada
Shin Aoki
Naoki Kikuchi
Kagehiro Nagao
Original Assignee
Ricoh Company, Limited
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ricoh Company, Limited filed Critical Ricoh Company, Limited
Priority to CN201580012037.9A priority Critical patent/CN106104196B/en
Priority to EP15759240.3A priority patent/EP3114430B1/en
Priority to US15/123,998 priority patent/US10218961B2/en
Priority to KR1020167024651A priority patent/KR101787304B1/en
Publication of WO2015133414A1 publication Critical patent/WO2015133414A1/en
Priority to US16/235,131 priority patent/US10701341B2/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/62 Control of parameters via user interfaces
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20 Image signal generators
    • H04N13/204 Image signal generators using stereoscopic image cameras
    • H04N13/246 Calibration of cameras
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60R VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R11/00 Arrangements for holding or mounting articles, not otherwise provided for
    • B60R11/04 Mounting of cameras operative during drive; Arrangement of controls thereof relative to the vehicle
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/85 Stereo camera calibration
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20 Image signal generators
    • H04N13/204 Image signal generators using stereoscopic image cameras
    • H04N13/239 Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/45 Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from two or more image sensors being of different type or operating in different modes, e.g. with a CMOS sensor for moving images in combination with a charge-coupled device [CCD] for still images
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/69 Control of means for changing angle of the field of view, e.g. optical zoom objectives or electronic zooming
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30204 Marker
    • G06T2207/30208 Marker matrix
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30248 Vehicle exterior or interior
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30248 Vehicle exterior or interior
    • G06T2207/30252 Vehicle exterior; Vicinity of vehicle

Definitions

  • The present invention relates to a calibration method, a calibration device, and a computer program product.
  • Stereo cameras have been used that can measure the distance to an object. For example, techniques have been in practical use that control a vehicle by measuring the distance to an object existing in front of the vehicle by using a stereo camera (hereinafter referred to as an "in-vehicle stereo camera") mounted on the vehicle.
  • The distance measured by using the in-vehicle stereo camera is used in alerting a driver and controlling the brake, steering, and/or the like for the purpose of preventing a crash of the vehicle, controlling the distance between vehicles, and others.
  • General in-vehicle stereo cameras are installed inside the windshield of a vehicle, because higher durability, particularly in water resistance and dust resistance, is required of an in-vehicle stereo camera installed outside a vehicle.
  • A stereo camera installed inside a vehicle photographs views outside the vehicle through the windshield.
  • Japanese Patent No. 4109077 describes a device that
  • A calibration method is for a photographic device that photographs an object through a transparent body.
  • The calibration method includes: acquiring a first photographic image by photographing the object without interposing the transparent body; acquiring a second photographic image by photographing the object through the transparent body;
  • A calibration device calibrates a photographic device that photographs an object through a transparent body.
  • The calibration device includes: a receiving unit that receives a first photographic image obtained by photographing the object without interposing the transparent body and a second photographic image obtained by
  • A computer program product includes a non-transitory computer-readable medium having computer-readable program codes.
  • The program codes, when executed, cause a computer that calibrates a photographic device that photographs an object through a transparent body to perform: receiving a first photographic image obtained by photographing the object without interposing the transparent body and a second photographic image obtained by photographing the object through the transparent body; calculating an
  • FIG. 1 is a drawing that illustrates a principle of measuring a distance using a stereo camera.
  • FIG. 2A is a drawing that illustrates an ideal detection position of an object image.
  • FIG. 2B is a drawing that illustrates a deviation in a detection position of the object image.
  • FIG. 3A is a drawing that illustrates an ideal condition.
  • FIG. 3B is a drawing that illustrates an absolute positional deviation in the object image due to light refracted by a windshield.
  • FIG. 3C is a drawing that illustrates the case where parallax is calculated based on the position of the image on a reference image in FIG. 3B.
  • FIG. 3D is a drawing that illustrates the case where a comparison image is calibrated such that the parallax coincides with ideal parallax D.
  • FIG. 3E is a drawing that illustrates a state where the absolute positional deviation in the object image is not calibrated.
  • FIG. 4 is a drawing that illustrates an example of an environment (without a windshield) where a calibration method in a first embodiment is implemented.
  • FIG. 5 is a drawing that illustrates an example of a pattern of a calibration chart.
  • FIG. 6 is a drawing that illustrates an example of an environment (with a windshield) where the calibration method in the first embodiment is implemented.
  • FIG. 7 is a drawing that illustrates an example of the configuration of an information processing device in the first embodiment.
  • FIG. 8 is a flowchart that illustrates an example of the calibration method in the first embodiment.
  • FIG. 9 is a drawing that illustrates an example of the configuration of the information processing device in a second embodiment.
  • FIG. 10 is a flowchart that illustrates an example of the calibration method in the second embodiment.
  • FIG. 11 is a drawing that illustrates an example of the configuration of a parallax calculating device in a third embodiment.
  • FIG. 12 is a flowchart that illustrates an example of a method for calculating parallax in the third embodiment.
  • FIG. 13 is a drawing that illustrates an example of the configuration of a stereo camera in a fourth embodiment.
  • FIG. 14 is a drawing that illustrates an example of using the stereo camera in the fourth embodiment as an in-vehicle stereo camera.
  • FIG. 15 is a drawing that illustrates an example of the hardware configuration of the information processing device and the parallax calculating device.
  • FIG. 1 is a drawing that illustrates a principle of measuring a distance using a stereo camera.
  • A first camera 1 (a focal length f, an optical center O0, an image capturing surface S0) and a second camera 2 (the focal length f, an optical center O1, an image capturing surface S1) are each arranged with the Z axis as the direction of the optical axis.
  • The first camera 1 and the second camera 2 are arranged parallel to the X axis and located apart from each other by a distance B (the baseline length).
  • An image of an object A located apart from the optical center O0 of the first camera 1 by a distance d in the direction of the optical axis is formed at P0, the intersection of the straight line A-O0 and the image capturing surface S0.
  • An image of the same object A is formed at a position P1 on the image capturing surface S1.
  • A photographic image acquired from the image capturing surface S0 is hereinafter referred to as a "comparison image",
  • and a photographic image acquired from the image capturing surface S1 is referred to as a "reference image".
  • A point P0' is defined as the point where a straight line passing through the optical center O1 of the second camera 2 and parallel to the straight line A-O0 intersects with the image capturing surface S1.
  • The distance between P0' and P1 is defined as D.
  • The distance D indicates the amount of a positional deviation (parallax) between images of the same object photographed by the two cameras.
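The geometry above gives the standard triangulation relation d = B·f/D (baseline times focal length, divided by parallax). A minimal numeric sketch; the baseline, focal length, and parallax values below are illustrative, not taken from the patent:

```python
def distance_from_parallax(baseline_m, focal_px, parallax_px):
    """Triangulate object distance from stereo parallax: d = B * f / D."""
    if parallax_px <= 0:
        raise ValueError("parallax must be positive for a finite distance")
    return baseline_m * focal_px / parallax_px

# Illustrative values: 0.3 m baseline, 1400 px focal length, 21 px parallax
d = distance_from_parallax(0.3, 1400.0, 21.0)  # 20.0 m
```

An error of even one pixel in D shifts the computed distance noticeably, which is why the positional deviations discussed below matter.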
  • The transparent body causes a positional deviation (the absolute positional deviation described above) of the object image on the photographic image.
  • FIG. 2A is a drawing that illustrates an ideal detection position of an object image.
  • A light beam from an object 13 passes through a lens 11 (an optical system), advances straight in the same direction as an optical axis 14, and reaches a position on a sensor 12. An image of the object 13 is thus detected in the position corresponding to the position of the optical axis.
  • FIG. 2B is a drawing that illustrates a deviation in a detection position of the object image.
  • FIG. 2B illustrates an example of the case where a windshield 15 is installed in front of the lens 11 of FIG. 2A.
  • A light beam output from the object 13 is refracted at the front and back surfaces of the windshield 15 and eventually reaches a position having a deviation of ΔFr from the position (see FIG. 2A) where the light beam reaches in the case of having no windshield.
  • The image of the object 13 is detected at a position that differs by ΔFr from the position corresponding to the position of the optical axis.
  • The deviation ΔFr occurs in each of the two cameras constituting the stereo camera.
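The patent does not give a formula for ΔFr, but the refraction effect can be illustrated with the textbook lateral-shift model for a ray crossing a plane-parallel glass plate; treating the windshield as flat glass of thickness t and refractive index n is a simplifying assumption here, as are the numeric values:

```python
import math

def plate_lateral_shift(thickness_mm, n_glass, incidence_deg):
    """Lateral displacement of a ray crossing a plane-parallel plate:
    s = t * sin(theta) * (1 - cos(theta) / sqrt(n^2 - sin^2(theta)))."""
    theta = math.radians(incidence_deg)
    s = math.sin(theta)
    return thickness_mm * s * (1.0 - math.cos(theta) / math.sqrt(n_glass ** 2 - s ** 2))

# Assumed values: 5 mm glass, n = 1.52, 25 degree incidence -> roughly 0.8 mm shift
shift = plate_lateral_shift(5.0, 1.52, 25.0)
```

A real windshield is curved and tilted, so the shift (and hence ΔFr on the sensor) varies across the field of view; this is consistent with the patent measuring the deviation per characteristic point rather than assuming a single global offset.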
  • The following describes the principle of the calibration.
  • FIGS. 3A to 3E are drawings for describing a principle of calibration in which a deviation in parallax (the above-described relative positional deviation) in an object image can be calibrated to the correct parallax while a positional deviation (the above-described absolute positional deviation) in the object image is not calibrated.
  • the comparison images in FIGS. 3A to 3E are photographed by the first camera 1, and the reference images in FIGS. 3A to 3E are photographed by the second camera 2.
  • FIG. 3A is a drawing that illustrates an ideal condition.
  • An image of the object is positioned at (5, 7) on the comparison image.
  • An image of the object is positioned at (5, 4) on the reference image.
  • FIG. 3B is a drawing that illustrates an absolute positional deviation in the object image due to the effect of light refraction by a windshield.
  • The object image is positioned at (7, 9) on the comparison image.
  • The deviation amount from the ideal condition is thus 2 in the vertical direction and 2 in the horizontal direction.
  • The object image is positioned at (6, 3) on the reference image.
  • The deviation amount from the ideal condition is thus 1 in the vertical direction and 1 in the horizontal direction.
  • FIG. 3C is a drawing that illustrates the case where the parallax is calculated based on the position of the image on the reference image in FIG. 3B.
  • An image serving as a reference is positioned at (6, 3), that is, at the same position as the position of the image on the reference image.
  • The parallax in FIG. 3C is 1 in the vertical direction and 6 in the horizontal direction, which means that the absolute positional deviation in the object image causes a deviation (a relative positional deviation) of 1 in the vertical direction and 3 in the horizontal direction from the ideal parallax.
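The arithmetic of FIGS. 3A to 3C can be checked directly: parallax is the component-wise difference between the image positions on the comparison image and the reference image, and the relative positional deviation is the difference between the measured parallax and the ideal parallax. Reading the figures' coordinate pairs as (vertical, horizontal) is an assumption made for this sketch:

```python
def parallax(comparison_xy, reference_xy):
    """Component-wise parallax between comparison- and reference-image positions."""
    return (comparison_xy[0] - reference_xy[0], comparison_xy[1] - reference_xy[1])

ideal = parallax((5, 7), (5, 4))    # FIG. 3A: (0, 3)
actual = parallax((7, 9), (6, 3))   # FIGS. 3B/3C: (1, 6)
relative_deviation = (actual[0] - ideal[0], actual[1] - ideal[1])  # (1, 3)
```

This reproduces the figures' numbers: an ideal parallax of 3 in the horizontal direction, a measured parallax of 1 vertical and 6 horizontal, and a relative positional deviation of 1 vertical and 3 horizontal.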
  • FIG. 3D is a drawing that illustrates the case where the comparison image is calibrated in such a manner that the parallax coincides with the ideal parallax D.
  • The ideal parallax D is calculated by using a calibration chart located at a known distance and photographed by a stereo camera.
  • The position (6, 3) of the image on the reference image, which includes the calibration chart located at a known distance as a photographic object, is set as a reference.
  • The position of the image on the comparison image, which includes the calibration chart located at a known distance as a photographic object, is calibrated in such a manner that the parallax comes to 3 (the ideal parallax D).
  • Conventional calibration methods thus calibrate the comparison image in such a manner that the position of the image on the comparison image is moved from (7, 9) to (6, 6).
  • The ideal parallax D is calculated based on the comparison image and the reference image.
  • FIG. 3E is a drawing that illustrates a state where the absolute positional deviation in the object image is not calibrated.
  • The position (6, 6) of the image on the comparison image remains different from the position (5, 7) in the ideal condition by 1 in the vertical direction and 1 in the horizontal direction.
  • The position (6, 3) of the image on the reference image also remains different from the position (5, 4) in the ideal condition by 1 in the vertical direction and 1 in the horizontal direction.
  • The results indicate that the position of the object image cannot be calibrated to the correct position even if image data is calibrated by using a pair of image data so that the ideal parallax D is achieved.
  • In the calibration method in the first embodiment, by contrast, the position of the object image is substantially calibrated to the position in the ideal condition.
  • The calibration method in the first embodiment uses photographic images (a comparison image and a reference image) obtained by photographing a calibration chart without the windshield 15 and photographic images (a comparison image and a reference image) obtained by photographing the calibration chart through the windshield 15.
  • A comparison image photographed without a windshield is referred to as a first comparison image,
  • a reference image photographed without a windshield is referred to as a first reference image,
  • a comparison image photographed with a windshield is referred to as a second comparison image,
  • and a reference image photographed with a windshield is referred to as a second reference image.
  • FIG. 4 is a drawing that illustrates an example of an environment (without the windshield 15) where the calibration method in the first embodiment is implemented.
  • A calibration chart 60 (a calibration tool) is installed within the photographic range of a stereo camera 30.
  • The calibration chart 60 has a pattern or the like from which characteristic points (corresponding points) can be detected.
  • FIG. 5 is a drawing that illustrates an example of a pattern of the calibration chart 60.
  • FIG. 5 illustrates a checkered pattern as a pattern of the calibration chart 60.
  • A smaller pitch between checks on the checkered pattern generates more characteristic points (corresponding points), and these points enable an information processing device 50, described later, to correctly detect a local absolute positional deviation resulting from the windshield 15.
  • An irregular fine pattern may be used when the pitch between lattice points is reduced. Use of a fine pattern, however,
  • It is preferable that the calibration chart 60 be large enough to be imaged on the whole of a photographic image.
  • The calibration chart 60 in such a large size enables the information processing device 50 to use corresponding points over the whole of the photographic image.
  • Any shape of pattern other than a checkered pattern is also applicable to the calibration chart 60.
  • Examples of the pattern of the calibration chart 60 may include a circular pattern.
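The effect of the pitch can be illustrated by enumerating the inner lattice points of a checkered chart: halving the pitch roughly quadruples the number of characteristic points, giving a denser sampling of the local deviation. The chart dimensions below are illustrative, not from the patent:

```python
def chart_corner_grid(width_mm, height_mm, pitch_mm):
    """Coordinates of the inner corners (characteristic points) of a
    checkered pattern with the given check pitch."""
    xs = [x * pitch_mm for x in range(1, int(width_mm // pitch_mm))]
    ys = [y * pitch_mm for y in range(1, int(height_mm // pitch_mm))]
    return [(x, y) for y in ys for x in xs]

coarse = chart_corner_grid(800, 600, 100)  # 7 x 5 = 35 inner corners
fine = chart_corner_grid(800, 600, 50)     # 15 x 11 = 165 inner corners
```

The denser grid is what lets the device resolve a deviation that varies locally across the windshield rather than a single global offset.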
  • The stereo camera 30 photographs the calibration chart 60 without a windshield and acquires the first comparison image and the first reference image.
  • The first comparison image is photographed by the first camera 1 (see FIG. 1), and the first reference image is photographed by the second camera 2 (see FIG. 1).
  • The first comparison image and the first reference image are input into the information processing device 50 serving as a calibration device.
  • FIG. 6 is a drawing that illustrates an example of an environment (with the windshield 15) where the calibration method in the first embodiment is implemented.
  • The implementation environment in FIG. 6 is the case where the vehicle in the implementation environment in FIG. 4 is equipped with the windshield 15.
  • The implementation environments of FIG. 4 and FIG. 6 differ from each other only in whether the windshield 15 is present.
  • The stereo camera 30 photographs the calibration chart 60 with the windshield 15 and acquires the second comparison image and the second reference image.
  • The second comparison image and the second reference image are input into the information processing device 50.
  • The information processing device 50 uses the first comparison image and the second comparison image to determine a correction parameter for calibrating an absolute positional deviation in the first camera 1 of the stereo camera 30, and uses the first reference image and the second reference image to determine a correction parameter for calibrating an absolute positional deviation in the second camera 2 of the stereo camera 30.
  • FIG. 7 is a drawing that illustrates an example of the configuration of the information processing device 50 in the first embodiment.
  • The information processing device 50 in the first embodiment includes a receiving unit 51, a determining unit 52, an absolute positional deviation calculating unit 53, a correction parameter calculating unit 54, and a memory control unit 55.
  • The receiving unit 51 receives, from the stereo camera 30, the first photographic images (the first comparison image and the first reference image) obtained by photographing the calibration chart 60 without interposing the windshield 15.
  • The receiving unit 51 inputs the first photographic images (the first comparison image and the first reference image) into the determining unit 52.
  • The receiving unit 51 furthermore receives, from the stereo camera 30, the second photographic images (the second comparison image and the second reference image) obtained by photographing the calibration chart 60 through the windshield 15.
  • The receiving unit 51 inputs the second photographic images (the second comparison image and the second reference image) into the determining unit 52.
  • The determining unit 52 receives the first photographic images (the first comparison image and the first reference image) from the receiving unit 51 and determines whether the first photographic images are reliable.
  • The determining unit 52 extracts the luminance of the white portions of the pattern image of the calibration chart 60 included in the first photographic images. Uneven luminance on the image of the pattern on the calibration chart 60 affects accuracy in the corresponding point detecting processing described later. The determining unit 52 thus determines whether uneven luminance markedly appears over the whole areas of the first photographic images. If, for example, uneven luminance does not markedly appear, the determining unit 52 determines that the first photographic images are reliable. When the first photographic images are determined to be reliable, the determining unit 52 inputs the first photographic images into the absolute positional deviation calculating unit 53.
  • The determining unit 52 similarly receives the second photographic images (the second comparison image and the second reference image) from the receiving unit 51 and determines whether the second photographic images are reliable.
  • The determining unit 52 determines whether differences of luminance on the second photographic images are normal and thereby detects, for example, a case where dust adheres to the windshield 15. Dust and the like adhering to the windshield 15 affect accuracy in the corresponding point detecting processing described later. If, for example, differences of luminance on the second photographic images are normal, the second photographic images are determined to be reliable. When the second photographic images are determined to be reliable, the determining unit 52 inputs the second photographic images into the absolute positional deviation calculating unit 53.
  • The absolute positional deviation calculating unit 53 receives the first photographic images (the first comparison image and the first reference image) and the second photographic images (the second comparison image and the second reference image) from the determining unit 52.
  • The absolute positional deviation calculating unit 53 calculates an absolute positional deviation in the first camera 1 and an absolute positional deviation in the second camera 2. The same method is employed to calculate the absolute positional deviations of the first camera 1 and the second camera 2, and thus described in the following is a method for calculating the absolute positional deviation in the first camera 1 using the first comparison image and the second comparison image.
  • The absolute positional deviation calculating unit 53 calculates an absolute positional deviation (a deviation in coordinates of an object image due to the windshield 15) based on the coordinates of the image of the calibration chart 60 on the first comparison image and the coordinates of the image of the calibration chart 60 on the second comparison image. Specifically, the absolute positional deviation calculating unit 53 retrieves the respective characteristic points (corresponding points) on the second comparison image that correspond to characteristic points on the first comparison image in two-dimensional image coordinates.
  • The absolute positional deviation calculating unit 53 determines those characteristic points by using the image of the pattern on the calibration chart 60.
  • The absolute positional deviation calculating unit 53 calculates the deviation in coordinates (Δx, Δy) between the coordinates (x1, y1) of a characteristic point on the first comparison image and the coordinates (x2, y2) of the characteristic point (the corresponding point) on the second comparison image that corresponds to the characteristic point on the first comparison image, as the absolute positional deviation in the vicinity of the characteristic point in the first camera 1.
  • The absolute positional deviation calculating unit 53 inputs the absolute positional deviation in the first camera 1 into the correction parameter calculating unit 54.
  • The absolute positional deviation calculating unit 53 calculates the absolute positional deviation in the second camera 2 in a manner similar to that for calculating the absolute positional deviation in the first camera 1 and inputs the absolute positional deviation in the second camera 2 into the correction parameter calculating unit 54.
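A sketch of this per-point computation follows. Matching by nearest neighbour is an assumption made for illustration (the patent only states that corresponding points are retrieved via the chart pattern), and the sample coordinates reuse the values from FIGS. 3A and 3B:

```python
def absolute_deviations(points_first, points_second):
    """For each characteristic point from the image taken without the glass,
    find the nearest point in the image taken through the glass and return
    the per-point deviation (dx, dy)."""
    deviations = []
    for x1, y1 in points_first:
        x2, y2 = min(points_second,
                     key=lambda p: (p[0] - x1) ** 2 + (p[1] - y1) ** 2)
        deviations.append((x2 - x1, y2 - y1))
    return deviations

devs = absolute_deviations([(5, 7), (5, 4)], [(7, 9), (6, 3)])  # [(2, 2), (1, -1)]
```

Each (dx, dy) plays the role of (Δx, Δy) in the text: a local absolute positional deviation valid in the vicinity of that characteristic point.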
  • The correction parameter calculating unit 54 receives the absolute positional deviation in the first camera 1 and the absolute positional deviation in the second camera 2 from the absolute positional deviation calculating unit 53.
  • The correction parameter calculating unit 54 calculates a first correction parameter for calibrating the absolute positional deviation in the first camera 1 and a second correction parameter for calibrating the absolute positional deviation in the second camera 2.
  • Examples of the first correction parameter and the second correction parameter include coefficients used in a correction formula for transforming coordinates in such a manner that an absolute positional deviation is cancelled.
  • If, for example, the absolute positional deviation is indicated as (1, 2), the correction formula transforms the coordinates by -1 in the x direction and -2 in the y direction.
  • The correction parameter calculating unit 54 inputs the first correction parameter and the second correction parameter into the memory control unit 55.
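A minimal sketch of applying such a correction; a pure translation is the simplest case and is an assumption here, since the patent's correction formula may be more general:

```python
def apply_correction(point_xy, deviation_xy):
    """Cancel a measured absolute positional deviation by shifting the
    coordinates back by that deviation."""
    return (point_xy[0] - deviation_xy[0], point_xy[1] - deviation_xy[1])

# A deviation of (1, 2) is cancelled by shifting -1 in x and -2 in y
corrected = apply_correction((6, 5), (1, 2))  # (5, 3)
```

In practice the deviation varies across the image, so a per-region or interpolated parameter would be applied rather than one global shift.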
  • The memory control unit 55 receives the first correction parameter and the second correction parameter from the correction parameter calculating unit 54.
  • The memory control unit 55 stores the first correction parameter and the second correction parameter in the stereo camera 30 by, for example, transmitting the first correction parameter and the second correction parameter to the stereo camera 30 by wired or wireless communication.
  • Alternatively, the first correction parameter and the second correction parameter may be stored once in an attachable and detachable memory medium or the like and stored in the stereo camera 30 through the memory medium.
  • FIG. 8 is a flowchart that illustrates an example of the calibration method in the first embodiment.
  • The stereo camera 30 photographs the calibration chart 60 without the windshield 15 (see FIG. 4) and acquires the first photographic images (the first comparison image and the first reference image) (Step S1).
  • The information processing device 50 (the determining unit 52) determines whether the first photographic images acquired at Step S1 are reliable (Step S2).
  • The information processing device 50 determines the reliability of the first photographic images, for example, based on whether uneven luminance markedly appears over the whole areas of the first photographic images.
  • If the first photographic images are determined to be unreliable (No at Step S2), the implementation environment is adjusted (Step S3), and the process returns to Step S1.
  • If the first photographic images are determined to be reliable (Yes at Step S2), the windshield 15 is mounted on the vehicle (Step S4). That is, the environment where the calibration method in the first embodiment is implemented is brought into the state of FIG. 6.
  • The stereo camera 30 photographs the calibration chart 60 with the windshield 15 mounted (see FIG. 6) and acquires the second photographic images (the second comparison image and the second reference image) (Step S5).
  • The information processing device 50 (the determining unit 52) determines whether the second photographic images acquired at Step S5 are reliable (Step S6).
  • The information processing device 50 determines the reliability of the second photographic images, for example, based on whether differences of luminance on the second photographic images are normal.
  • If the second photographic images are determined to be unreliable (No at Step S6), the implementation environment is adjusted (Step S7), and the process returns to Step S1. Examples of the adjustment for the implementation environment include remounting of the windshield 15.
  • If the adjustment for the implementation environment (Step S7) is minor, the process may restart from Step S4 instead of returning to Step S1.
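The control flow of Steps S1 to S10 can be sketched as follows; every callable is an injected placeholder for illustration, not an API from the patent:

```python
def run_calibration(photograph, is_reliable, mount_windshield, adjust,
                    compute_parameters, store):
    """Steps S1-S10 as a control-flow sketch with injected stub callables."""
    first = photograph()                      # Step S1: without the windshield
    while not is_reliable(first):             # Step S2
        adjust()                              # Step S3: adjust the environment
        first = photograph()
    mount_windshield()                        # Step S4
    second = photograph()                     # Step S5: through the windshield
    while not is_reliable(second):            # Step S6
        adjust()                              # Step S7 (e.g. remount the windshield)
        second = photograph()
    store(compute_parameters(first, second))  # Steps S8-S10

# Exercising the flow with trivial stubs
stored = []
run_calibration(photograph=lambda: "images",
                is_reliable=lambda images: True,
                mount_windshield=lambda: None,
                adjust=lambda: None,
                compute_parameters=lambda a, b: ("first_param", "second_param"),
                store=stored.append)
```

The retry loops mirror the No branches at Steps S2 and S6; the minor-adjustment shortcut back to Step S4 is omitted for brevity.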
  • The information processing device 50 (the absolute positional deviation calculating unit 53) calculates an absolute positional deviation in the first camera 1 and an absolute positional deviation in the second camera 2 using the above-described method (Step S8).
  • The information processing device 50 (the correction parameter calculating unit 54) calculates the first correction parameter for calibrating the absolute positional deviation in the first camera 1 and the second correction parameter for calibrating the absolute positional deviation in the second camera 2 (Step S9).
  • Examples of the first correction parameter and the second correction parameter include coefficients used in a correction formula.
  • The information processing device 50 (the memory control unit 55) stores the first correction parameter and the second correction parameter in the stereo camera 30 (Step S10).
  • The memory control unit 55 stores the first correction parameter and the second correction parameter in the stereo camera 30 by, for example, transmitting the first correction parameter and the second correction parameter to the stereo camera 30 by wired or wireless communication.
  • The calibration method in the first embodiment acquires the first photographic images (the first comparison image and the first reference image) photographed without the windshield 15 and the second photographic images (the second comparison image and the second reference image) photographed with the windshield 15.
  • The calibration method in the first embodiment thereafter calculates the difference between a characteristic point on the first comparison image and the characteristic point (corresponding point) on the second comparison image that corresponds to the characteristic point on the first comparison image as an absolute positional deviation in the vicinity of the characteristic point in the first camera 1, and similarly calculates the difference between a characteristic point on the first reference image and the characteristic point (corresponding point) on the second reference image that corresponds to the characteristic point on the first reference image as an absolute positional deviation in the vicinity of the characteristic point in the second camera 2.
  • Based on these absolute positional deviations, the calibration method in the first embodiment calculates the first correction parameter (the second correction parameter).
  • The absolute positional deviation in the first camera 1 (the second camera 2) due to the windshield 15 is therefore accurately calibrated by using the first correction parameter (the second correction parameter).
  • In the first embodiment, the stereo camera 30 mounted on a vehicle is used as an example of a photographic device to be calibrated; however, the photographic device to be calibrated is not limited to the stereo camera 30.
  • Any number of cameras is thus applicable as a photographic device to be calibrated.
  • Examples of the photographic device to be calibrated may include a monocular camera.
  • A relative positional deviation described in FIGS. 3A to 3E occurs due to factors such as an assembly tolerance of the stereo camera 30 mounted on an object.
  • The relative positional deviation resulting from the assembly tolerance and/or the like can be calibrated by firstly correcting the absolute positional deviation in the second comparison image (the second reference image) using the first correction parameter (the second correction parameter) calculated by using the calibration method in the embodiment and secondly updating the first correction parameter (the second correction parameter) of the stereo camera 30 so as to perform the calibration of the relative positional deviation as well.
  • FIG. 9 is a drawing that illustrates an example of the configuration of the information processing device 50 in the second embodiment.
  • the information processing device 50 in the second embodiment includes the receiving unit 51, the determining unit 52, the absolute positional deviation calculating unit 53, the correction parameter calculating unit 54, the memory control unit 55, and a relative positional deviation calculating unit 56. The information processing device 50 in the second embodiment thus additionally includes the relative positional deviation calculating unit 56 compared with the configuration of the information processing device 50 in the first embodiment.
  • the same description as the first embodiment will be omitted from the description of the second embodiment, and processing for calibrating a relative positional deviation will mainly be described. The information processing device 50 in the second embodiment calculates the first and the second correction parameters by using the absolute positional deviation calculating unit 53 and the correction parameter calculating unit 54 and stores the parameters in the stereo camera 30 by using the memory control unit 55 (see FIG. 8).
  • the information processing device 50 in the second embodiment then receives, from the stereo camera 30, a photographic image in which the absolute positional deviation has been calibrated. Specifically, the receiving unit 51 receives, from the stereo camera 30, the second comparison image (a comparison image photographed through the windshield 15) and the second reference image (a reference image photographed through the windshield 15).
  • the determining unit 52 determines whether the second comparison image (the second reference image) in which the absolute positional deviation has been calibrated by using the first correction parameter (the second correction parameter) is reliable. Any appropriate method may be used for the determination.
  • the determining unit 52 inputs the second comparison image (the second reference image) into the relative positional deviation calculating unit 56.
  • the relative positional deviation calculating unit 56 calculates parallax (Dx, Dy) by retrieving respective characteristic points (corresponding points) on the second reference image that correspond to characteristic points on the second comparison image. The relative positional deviation calculating unit 56 thereafter calculates the difference between the parallax (Dx, Dy) and ideal parallax (D, 0) as a relative positional deviation and inputs the relative positional deviation into the correction parameter calculating unit 54.
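As an illustrative sketch (not part of the disclosure), the relative positional deviation is simply the measured parallax minus the ideal parallax (D, 0):

```python
def relative_deviation(parallax, ideal_d):
    """Difference between measured parallax (Dx, Dy) and ideal parallax (D, 0)."""
    dx, dy = parallax
    return (dx - ideal_d, dy)

# With the values of FIG. 3C (a parallax of 6 in the horizontal direction and
# 1 in the vertical direction; ideal parallax D = 3), the relative positional
# deviation is 3 horizontal and 1 vertical.
print(relative_deviation((6, 1), 3))  # (3, 1)
```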
  • the correction parameter calculating unit 54 calculates the third correction parameter for calibrating a relative positional deviation between the second comparison image and the second reference image. Calibration using the third correction parameter is performed on the second comparison image (see FIG. 3D).
  • examples of the third correction parameter include a coefficient used in a correction formula for transforming coordinates on the second comparison image in such a manner that the relative positional deviation is cancelled.
  • the correction parameter calculating unit 54 modifies the first correction parameter by combining the first correction parameter for calibrating an absolute positional deviation with the third correction parameter and works out a modified first correction parameter.
  • the correction parameter calculating unit 54 inputs the modified first correction parameter into the memory control unit 55.
  • the memory control unit 55 stores the modified first correction parameter in the stereo camera 30, thereby updating the first correction parameter stored in the stereo camera 30.
  • FIG. 10 is a flowchart that illustrates an example of the calibration method in the second embodiment.
  • the information processing device 50 stores the first correction parameter and the second correction parameter calculated by using the calibration method in the first embodiment (see Step S1 to Step S10 in FIG. 8) in the stereo camera 30 (Step S11).
  • the stereo camera 30 photographs the calibration chart 60 serving as an object through the windshield 15 and acquires the second comparison image and the second reference image (Step S12). The stereo camera 30 calibrates the second comparison image using the first correction parameter (Step S13) and furthermore calibrates the second reference image using the second correction parameter (Step S14).
  • the information processing device 50 calculates the third correction parameter for calibrating a relative positional deviation indicating a deviation in a parallax between the object image on the calibrated second comparison image and the object image on the calibrated second reference image (Step S15).
  • the information processing device 50 modifies the first correction parameter using the third correction parameter, thereby calculating the modified first correction parameter (Step S16). The information processing device 50 stores the modified first correction parameter in the stereo camera 30, thereby updating the first correction parameter stored in the stereo camera 30 (Step S17).
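If the first and the third correction parameters are modeled as simple coordinate shifts (an assumption made here purely for illustration; the actual parameters may be coefficients of a more general correction formula), the modification of the first correction parameter amounts to composing the two corrections:

```python
def modify(first, third):
    # Combine the shift that cancels the absolute positional deviation with the
    # shift that cancels the remaining relative positional deviation.
    return (first[0] + third[0], first[1] + third[1])

print(modify((-2, -2), (1, -1)))  # (-1, -3)
```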
  • the calibration method in the second embodiment provides further modifications to the first correction parameter of the stereo camera 30, thereby acquiring three dimensional information indicating a more accurate position of the object from the object image included in the photographic image photographed by the stereo camera 30.
  • the information processing device 50 modifies the first correction parameter using the third correction parameter. In another case, the
  • information processing device 50 may modify the second correction parameter using the third correction parameter.
  • the third embodiment relates to a parallax calculating device storing therein a correction parameter calculated by using the calibration method in the second embodiment.
  • because the parallax calculating device in operation uses a correction parameter, the word "correction" is used instead of the word "calibration" in the description of the third embodiment.
  • FIG. 11 is a drawing that illustrates an example of the configuration of a parallax calculating device 20 in the third embodiment.
  • the parallax calculating device 20 in the third embodiment includes a receiving unit 21, a first correcting unit 22, a second correcting unit 23, a memory unit 24, a calculating unit 25, and a restoring unit 26.
  • the receiving unit 21 receives an input of the second comparison image (a comparison image photographed through a transparent body) and outputs the second comparison image to the first correcting unit 22.
  • the receiving unit 21 receives an input of the second reference image (a reference image photographed through a transparent body) and outputs the second reference image to the second correcting unit 23.
  • the first correcting unit 22 receives the second comparison image from the receiving unit 21, corrects the second comparison image using the modified first correction parameter in the above description, and outputs the corrected second comparison image to the calculating unit 25 and the restoring unit 26.
  • the second correcting unit 23 receives the second reference image from the receiving unit 21, corrects the second reference image using the second correction parameter, and outputs the corrected second reference image to the calculating unit 25 and the restoring unit 26.
  • the memory unit 24 stores therein the modified first correction parameter used by the first correcting unit 22 and the second correction parameter used by the second correcting unit 23.
  • the calculating unit 25 receives the corrected second comparison image from the first correcting unit 22 and receives the corrected second reference image from the second correcting unit 23.
  • the calculating unit 25 calculates the parallax based on the object image included in the corrected second comparison image and the object image included in the corrected second reference image.
  • the calculating unit 25 calculates the parallax for each pixel and generates a parallax image indicating the parallaxes by density values.
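The mapping from per-pixel parallax to density values might look like the following (a hypothetical linear mapping; the disclosure does not fix a particular scale):

```python
def parallax_image(disparities, max_d):
    # Larger parallax (a nearer object) maps to a brighter density value.
    return [[min(255, round(255 * d / max_d)) for d in row]
            for row in disparities]

img = parallax_image([[1, 2], [4, 8]], max_d=8)
print(img)  # [[32, 64], [128, 255]]
```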
  • the restoring unit 26 receives the corrected second comparison image from the first correcting unit 22 and receives the corrected second reference image from the second correcting unit 23.
  • the restoring unit 26 restores the modulation transfer function (MTF) characteristics of the second comparison image, which have been decreased by the correction. By restoring the MTF characteristics of the second comparison image, the restoring unit 26 generates a luminance image of the first camera 1 with its resolution improved.
  • the restoring unit 26 restores the MTF characteristics of the second reference image, which have been decreased by the correction. By restoring the MTF characteristics of the second reference image, the restoring unit 26 generates a luminance image of the second camera 2 with its resolution improved.
  • FIG. 12 is a flowchart that illustrates an example of the method for calculating the parallax in the third embodiment.
  • the receiving unit 21 receives an input of the second comparison image (Step S21) and receives an input of the second reference image (Step S22).
  • the first correcting unit 22 corrects the second comparison image using the modified first correction parameter (Step S23).
  • the second correcting unit 23 corrects the second reference image using the second correction parameter (Step S24).
  • the calculating unit 25 calculates parallax based on the object image included in the corrected second comparison image and the object image included in the corrected second reference image (Step S25). The calculating unit 25 generates a parallax image indicating the parallaxes by density values of pixels by using the parallaxes (the parallax calculated for each pixel) calculated at Step S25 (Step S26).
  • the first correcting unit 22 corrects the second comparison image using the modified first correction parameter
  • the second correcting unit 23 corrects the second reference image using the second correction parameter.
  • the calculating unit 25 calculates the parallax based on the object image included in the corrected second comparison image and the object image included in the corrected second reference image.
  • the parallax calculating device 20 in the third embodiment can correct a deviation in a parallax (a relative positional deviation) in an object image on image data in addition to a deviation (an absolute positional deviation) in coordinates of the object image on image data.
  • the parallax calculating device 20 in the third embodiment can more accurately calculate three dimensional coordinates indicating the position of an object based on the distance to the object calculated from the parallax of the object image and the coordinates of the object image on image data.
  • FIG. 13 is a drawing that illustrates an example of the configuration of the stereo camera 30 in the fourth embodiment.
  • the stereo camera 30 in the fourth embodiment includes the first camera 1, the second camera 2, and the parallax calculating device 20.
  • the parallax calculating device 20 includes the receiving unit 21, the first correcting unit 22, the second correcting unit 23, the memory unit 24, the calculating unit 25, and the restoring unit 26.
  • the stereo camera 30 in the fourth embodiment includes the parallax calculating device 20 of the third embodiment.
  • Examples of the application of the stereo camera 30 in the fourth embodiment include an in-vehicle stereo camera.
  • FIG. 14 is a drawing that illustrates an example of using the stereo camera 30 in the fourth embodiment as an in-vehicle stereo camera.
  • the stereo camera 30 is installed inside the windshield 15, and this arrangement makes it possible to correct a deviation (an absolute positional deviation) in coordinates of the object image on image data caused by the windshield 15.
  • the stereo camera 30 in the fourth embodiment can correct a deviation (an absolute positional deviation) in coordinates of an object image on image data on a real-time basis in addition to a deviation in a parallax (a relative positional deviation) in an object image on image data.
  • the stereo camera 30 in the fourth embodiment can accurately calculate, on a real-time basis, three dimensional coordinates indicating the position of the object based on the distance to the object calculated from the parallax of the object image and the coordinates of the object image on image data.
  • FIG. 15 is a drawing that illustrates an example of the hardware configuration of the information processing device 50 and the parallax calculating device 20. Each of the devices includes a control device 41, a main memory device 42, an auxiliary memory device 43, an external interface 44, and a communication device 45.
  • the control device 41, the main memory device 42, the auxiliary memory device 43, the external interface 44, and the communication device 45 are connected with one another via a bus 46.
  • the control device 41 executes a computer program read out on the main memory device 42 from the auxiliary memory device 43.
  • examples of the main memory device 42 include a read only memory (ROM) and a random access memory (RAM).
  • examples of the auxiliary memory device 43 include a hard disk drive (HDD) and a memory card.
  • the external interface 44 is an interface for transmitting and receiving data to and from other devices.
  • the communication device 45 is an interface for communicating with other devices by wireless communication and/or the like.
  • the program executed by the information processing device 50 and the parallax calculating device 20 may be recorded in a computer-readable memory medium such as a compact disc read only memory (CD-ROM), a memory card, a compact disc recordable (CD-R), and a digital versatile disc (DVD) as an installable or executable file and provided as a computer program product.
  • the program executed by the information processing device 50 and the parallax calculating device 20 may be stored in a computer connected to a network such as the Internet and provided by being downloaded via the network.
  • the program executed by the information processing device 50 and the parallax calculating device 20 may be provided via a network such as the Internet without being downloaded.
  • the program executed by the information processing device 50 and the parallax calculating device 20 may be preliminarily embedded in a read only memory (ROM) or the like and provided.
  • the program executed by the information processing device 50 consists of modules including the above-described functional blocks (the receiving unit 51, the determining unit 52, the absolute positional deviation calculating unit 53, the correction parameter calculating unit 54, the memory control unit 55, and the relative positional deviation calculating unit 56). The control device 41 reads out the program from a memory medium and executes the program, whereby each of the functional blocks loads on the main memory device 42. In other words, each of the functional blocks is generated on the main memory device 42.
  • the program executed by the parallax calculating device 20 consists of modules including the above-described functional blocks (the receiving unit 21, the first correcting unit 22, the second correcting unit 23, the memory unit 24, the calculating unit 25, and the restoring unit 26). The control device 41 reads out the program from a memory medium and executes the program, whereby each of the functional blocks loads on the main memory device 42. In other words, each of the functional blocks is generated on the main memory device 42.
  • An embodiment provides the effect that an absolute positional deviation in image data due to a transparent body can be accurately calibrated.


Abstract

A calibration method is for a photographic device that photographs an object through a transparent body. The calibration method includes: acquiring a first photographic image by photographing the object without interposing the transparent body; acquiring a second photographic image by photographing the object through the transparent body; calculating an absolute positional deviation that indicates a deviation in coordinates of an image of the object due to the transparent body based on coordinates of an image of the object on the first photographic image and coordinates of an image of the object on the second photographic image; calculating a correction parameter for calibrating the absolute positional deviation; and storing the correction parameter in the photographic device.

Description

DESCRIPTION
CALIBRATION METHOD, CALIBRATION DEVICE, AND COMPUTER
PROGRAM PRODUCT
TECHNICAL FIELD
The present invention relates to a calibration method, a calibration device, and a computer program product.
BACKGROUND ART
Stereo cameras have been used that can measure the distance to an object. For example, techniques have been in practical use that control a vehicle by measuring the distance to an object existing in front of the vehicle by using a stereo camera (hereinafter referred to as an "in- vehicle stereo camera") mounted on the vehicle. The distance measured by using the in-vehicle stereo camera is used in alerting a driver and controlling the brake, steering, and/or the like for the purpose of preventing a crash of the vehicle, controlling the distance between vehicles, and others.
General in-vehicle stereo cameras are installed inside a windshield of a vehicle, because higher durability particularly in water resistance and dust resistance is required of an in-vehicle stereo camera installed outside a vehicle. A stereo camera installed inside a vehicle photographs views outside the vehicle through the
windshield. General windshields, however, have a complexly curved shape, and the shape is distorted compared with optical parts such as a lens inside a camera. A windshield thus causes distortion on images photographed through the windshield.
Techniques to correct an image photographed by a stereo camera are conventionally known. For example, Japanese Patent No. 4109077 describes a device that
transforms each of a pair of image data output from a pair of cameras constituting a stereo camera by using a
calibration parameter based on a deviation in coordinates between one of the image data and the other image data and adjusts optical distortion and a positional deviation in the stereo camera through image processing.
Such conventional techniques can correctly calibrate a deviation (hereinafter referred to as a "relative
positional deviation") in a parallax (a relative position) between object images on a pair of image data; however, those techniques cannot correctly calibrate a deviation (hereinafter referred to as an "absolute positional
deviation") in coordinates of the object image on the image data due to a transparent body such as a windshield. This configuration problematically causes an error on three dimensional coordinates, which indicate the position of an object, when the three dimensional coordinates are
calculated from the distance to the object calculated based on the parallax in the object image and from the
coordinates of the object image on the image data.
In view of the above, there is a need to provide a calibration method, a calibration device, and a computer program product that can accurately calibrate an absolute positional deviation in image data due to a transparent body.
SUMMARY OF THE INVENTION
A calibration method is for a photographic device that photographs an object through a transparent body. The calibration method includes: acquiring a first photographic image by photographing the object without interposing the transparent body; acquiring a second photographic image by photographing the object through the transparent body;
calculating an absolute positional deviation that indicates a deviation in coordinates of an image of the object due to the transparent body based on coordinates of an image of the object on the first photographic image and coordinates of an image of the object on the second photographic image; calculating a correction parameter for calibrating the absolute positional deviation; and storing the correction parameter in the photographic device.
A calibration device calibrates a photographic device that photographs an object through a transparent body. The calibration device includes: a receiving unit that receives a first photographic image obtained by photographing the object without interposing the transparent body and a second photographic image obtained by
photographing the object through the transparent body; an absolute positional deviation calculating unit that
calculates an absolute positional deviation indicating a deviation in coordinates of an image of the object due to the transparent body based on coordinates of an image of the object on the first photographic image and coordinates of an image of the object on the second photographic image; a correction parameter calculating unit that calculates a correction parameter for calibrating the absolute
positional deviation; and a memory control unit that stores the correction parameter in the photographic device.
A computer program product includes a non-transitory computer-readable medium having computer readable program codes. The program codes when executed cause a computer that calibrates a photographic device that photographs an object through a transparent body to perform: receiving a first photographic image obtained by photographing the object without interposing the transparent body and a second photographic image obtained by photographing the object through the transparent body; calculating an
absolute positional deviation indicating a deviation in coordinates of an image of the object due to the
transparent body based on coordinates of an image of the object on the first photographic image and coordinates of an image of the object on the second photographic image; calculating a correction parameter for calibrating the absolute positional deviation; and storing the correction parameter in the photographic device.
BRIEF DESCRIPTION OF DRAWINGS
FIG. 1 is a drawing that illustrates a principle of measuring a distance using a stereo camera.
FIG. 2A is a drawing that illustrates an ideal
detection position of an object image.
FIG. 2B is a drawing that illustrates a deviation in a detection position of the object image.
FIG. 3A is a drawing that illustrates an ideal
condition for the object image and parallax.
FIG. 3B is a drawing that illustrates an absolute positional deviation in the object image due to light refracted by a windshield.
FIG. 3C is a drawing that illustrates the case where parallax is calculated based on the position of the image on a reference image in FIG. 3B.
FIG. 3D is a drawing that illustrates the case where a comparison image is calibrated such that the parallax coincides with ideal parallax D.
FIG. 3E is a drawing that illustrates a state where the absolute positional deviation in the object image is not calibrated.
FIG. 4 is a drawing that illustrates an example of an environment (without a windshield) where a calibration method in a first embodiment is implemented.
FIG. 5 is a drawing that illustrates an example of a pattern of a calibration chart.
FIG. 6 is a drawing that illustrates an example of an environment (with a windshield) where the calibration method in the first embodiment is implemented.
FIG. 7 is a drawing that illustrates an example of the configuration of an information processing device in the first embodiment.
FIG. 8 is a flowchart that illustrates an example of the calibration method in the first embodiment.
FIG. 9 is a drawing that illustrates an example of the configuration of the information processing device in a second embodiment.
FIG. 10 is a flowchart that illustrates an example of the calibration method in the second embodiment.
FIG. 11 is a drawing that illustrates an example of the configuration of a parallax calculating device in a third embodiment.
FIG. 12 is a flowchart that illustrates an example of a method for calculating parallax in the third embodiment.
FIG. 13 is a drawing that illustrates an example of the configuration of a stereo camera in a fourth embodiment.
FIG. 14 is a drawing that illustrates an example of using the stereo camera in the fourth embodiment as an in- vehicle stereo camera.
FIG. 15 is a drawing that illustrates an example of the hardware configuration of the information processing device and the parallax calculating device.
DESCRIPTION OF EMBODIMENTS
Embodiments of a calibration method, a calibration device, and a computer program product will be described in detail with reference to the accompanying drawings.
First Embodiment.
A first embodiment will be described with an example of the case where a photographic device to be calibrated is an in-vehicle stereo camera. Positional deviations in an image photographed by an in-vehicle stereo camera include an absolute positional deviation and a relative positional deviation. For description of the absolute positional deviation and the relative positional deviation, parallax and a principle of measuring a distance using the parallax will firstly be described. The parallax is calculated by using images photographed by a stereo camera. FIG. 1 is a drawing that illustrates a principle of measuring a
distance using a stereo camera. In the example of FIG. 1, a first camera 1 (a focal length f, an optical center O0, an image capturing surface S0) is arranged with the Z axis as the direction of an optical axis, and a second camera 2 (the focal length f, an optical center O1, an image capturing surface S1) is arranged with the Z axis as the direction of an optical axis. The first camera 1 and the second camera 2 are arranged parallel to the X axis and located in the position apart from each other by a distance B (baseline length).
An image of an object A located apart from the optical center O0 of the first camera 1 by a distance d in the direction of the optical axis is formed at P0 that is an intersection of the straight line A - O0 and the image capturing surface S0. With the second camera 2, an image of the same object A is formed at a position P1 on the image capturing surface S1. While a photographic image acquired from the image capturing surface S0 is hereinafter referred to as a "comparison image", a photographic image acquired from the image capturing surface S1 is referred to as a "reference image".
A point is defined as P0' where a straight line passing the optical center O1 of the second camera 2 and parallel to the straight line A - O0 intersects with the image capturing surface S1. The distance between P0' and P1 is defined as D. The distance D indicates the amount of a positional deviation (parallax) between images of the same object photographed by the two cameras. The triangle A - O0 - O1 and the triangle O1 - P0' - P1 are similar to each other, and the formula d = B × f/D is thus satisfied. In other words, the distance d to the object A can be worked out from the baseline length B, the focal length f, and the parallax D.
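The formula d = B × f/D can be illustrated numerically (the numbers below are arbitrary and serve only as an illustration; units must be consistent, e.g. the focal length and the parallax in pixels and the baseline in meters):

```python
def distance(b, f, d_parallax):
    # d = B * f / D
    return b * f / d_parallax

# A baseline of 0.2 m, a focal length of 1000 px, and a parallax of 40 px
# give a distance of about 5.0 m.
print(distance(0.2, 1000, 40))  # prints 5.0 (up to floating-point rounding)
```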
The above is the principle of measuring a distance using a stereo camera. In using a stereo camera
photographing an object through a transparent body (such as an in-vehicle stereo camera photographing an object through a windshield) , however, the transparent body causes a positional deviation (an absolute positional deviation described above) of the object image on the photographic image.
FIG. 2A is a drawing that illustrates an ideal
detection position of an object image. In FIG. 2A, a lens 11 (an optical system) is described as a pinhole camera for convenience. If an object 13 exists on the optical axis of the lens 11, a light beam advances straight in the same direction as an optical axis 14 and reaches a position on a sensor 12. An image of the object 13 is thus detected in the position corresponding to a position of the optical axis.
FIG. 2B is a drawing that illustrates a deviation in a detection position of the object image. FIG. 2B
illustrates an example of the case where a windshield 15 is installed in front of the lens 11 of FIG. 2A. A light beam output from the object 13 is refracted at the front and the back surfaces of the windshield 15 and eventually reaches a position having a deviation of ΔFr from a position (see FIG. 2A) where the light beam reaches in the case of having no windshields. In other words, the image of the object 13 is detected at a position that differs by ΔFr from a position corresponding to the position of the optical axis.
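The magnitude of such a refraction-induced shift can be estimated from Snell's law for a flat glass plate; a real windshield is curved, so the following is only a first-order illustration with hypothetical values (thickness t, refractive index n, incidence angle θ), where the lateral ray displacement is t·sin θ·(1 − cos θ/√(n² − sin² θ)):

```python
import math

def lateral_shift(t, n, theta):
    # Lateral displacement of a ray crossing a flat plate of thickness t and
    # refractive index n at incidence angle theta (radians).
    s = math.sin(theta)
    return t * s * (1.0 - math.cos(theta) / math.sqrt(n * n - s * s))

# A 5 mm plate with n = 1.5 at 45 degrees displaces the ray by about 1.6 mm;
# at normal incidence the displacement vanishes.
shift = lateral_shift(0.005, 1.5, math.radians(45))
```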
The deviation ΔFr occurs in each of the two cameras constituting the stereo camera. The following is a description about why a deviation in a parallax (a relative positional deviation) in the object image can be calibrated to the correct parallax but a deviation (ΔFr as an absolute positional deviation) in coordinates of the object image cannot be calibrated when calibrating the image data based on the ideal parallax and the parallax obtained from a pair of image data acquired by the stereo camera.
FIGS. 3A to 3E are drawings for describing a principle of calibration in which a deviation in a parallax (the above-described relative positional deviation) in an object image can be calibrated to the correct parallax while a positional deviation (the above-described absolute
positional deviation) in the object image cannot be
calibrated to the correct position. The comparison images in FIGS. 3A to 3E are photographed by the first camera 1, and the reference images in FIGS. 3A to 3E are photographed by the second camera 2.
FIG. 3A is a drawing that illustrates an ideal
condition for the object image and parallax. An image of the object is positioned at (5, 7) on the comparison image. On the other hand, an image of the object is positioned at (5, 4) on the reference image. Ideal
parallax D is thus 3.
FIG. 3B is a drawing that illustrates an absolute positional deviation in the object image due to the effect of light refraction by a windshield. The object image is positioned at (7, 9) on the comparison image. The
deviation amount from the ideal condition is thus 2 in the vertical direction and 2 in the horizontal direction. The object image is positioned at (6, 3) on the reference image. The deviation amount from the ideal condition is thus 1 in the vertical direction and 1 in the horizontal direction.
FIG. 3C is a drawing that illustrates the case where the parallax is calculated based on the position of the image on the reference image in FIG. 3B. On the comparison image, an image serving as a reference is positioned at (6, 3), that is, at the same position as the position of the image on the reference image. The parallax in FIG. 3C is 1 in the vertical direction and 6 in the horizontal direction, which means that the absolute positional deviation in the object image causes a deviation (a relative positional deviation) of 1 in the vertical direction and 3 in the horizontal direction from the ideal parallax.
FIG. 3D is a drawing that illustrates the case where the comparison image is calibrated in such a manner that the parallax coincides with the ideal parallax D. The ideal parallax D is calculated by using a calibration chart located at a known distance and photographed by a stereo camera. In conventional stereo camera calibration methods, the position (6, 3) of the image on the reference image, which includes the calibration chart located at a known distance as a photographic object, is set as a reference, and the position of the image on the comparison image, which includes the calibration chart located at a known distance as a photographic object, is calibrated in such a manner that the parallax comes to 3 (the ideal parallax D) . In other words, those conventional stereo camera
calibration methods calibrate the comparison image in such a manner that the position of the image on the comparison image is moved from (7, 9) to (6, 6) . With this
calibration, the ideal parallax D is calculated based on the comparison image and the reference image.
FIG. 3E is a drawing that illustrates a state where the absolute positional deviation in the object image is not calibrated. The position (6, 6) of the image on the comparison image remains different from the position (5, 7) in the ideal condition by 1 in the vertical direction and 1 in the horizontal direction. The position (6, 3) of the image on the reference image also remains different from the position (5, 4) in the ideal condition by 1 in the vertical direction and 1 in the horizontal direction. The results indicate that the position of the object image cannot be calibrated to the correct position even if image data is calibrated by using a pair of image data so that the ideal parallax D is achieved.
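The arithmetic of FIGS. 3A to 3E can be checked directly (coordinates are written as in the figures, with the second coordinate taken as the parallax direction for this illustration):

```python
# Ideal positions (FIG. 3A) and deviated positions (FIG. 3B).
ideal_comp, ideal_ref = (5, 7), (5, 4)
dev_comp, dev_ref = (7, 9), (6, 3)

ideal_parallax = (ideal_comp[0] - ideal_ref[0], ideal_comp[1] - ideal_ref[1])  # (0, 3)

# Conventional calibration (FIG. 3D): move the comparison image so that the
# parallax relative to the reference image equals the ideal parallax D = 3.
calibrated_comp = (dev_ref[0] + ideal_parallax[0], dev_ref[1] + ideal_parallax[1])
print(calibrated_comp)  # (6, 6): the ideal parallax is recovered...

# ...but both images remain one unit away from their ideal positions in each
# coordinate (FIG. 3E): the absolute positional deviation is not calibrated.
print(calibrated_comp[0] - ideal_comp[0], dev_ref[0] - ideal_ref[0])  # 1 1
```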
With the following calibration method in the first embodiment, however, the position of the object image is substantially calibrated to the position in the ideal condition.
The calibration method in the first embodiment uses photographic images (a comparison image and a reference image) obtained by photographing a calibration chart without the windshield 15 and photographic images (a
comparison image and a reference image) obtained by
photographing a calibration chart with the windshield 15. A comparison image photographed without a windshield is referred to as a first comparison image, and a reference image photographed without a windshield is referred to as a first reference image. A comparison image photographed with a windshield is referred to as a second comparison image, and a reference image photographed with a windshield is referred to as a second reference image.
FIG. 4 is a drawing that illustrates an example of an environment (without the windshield 15) where the
calibration method in the first embodiment is implemented. A calibration chart 60 (a calibration tool) is installed within a photographic range of a stereo camera 30. The calibration chart 60 has a pattern or the like that
facilitates detection of a corresponding point on the reference image that corresponds to a point on the
comparison image.
FIG. 5 is a drawing that illustrates an example of a pattern of the calibration chart 60. FIG. 5 illustrates a checkered pattern as a pattern of the calibration chart 60. In FIG. 5, a smaller pitch between checks on the checkered pattern generates more characteristic points (corresponding points), and these points enable an information processing device 50 described later to correctly detect a local absolute positional deviation resulting from the windshield 15. However, because such a small pitch is likely to cause a detection error of a corresponding point in corresponding point detecting processing described later, an irregular fine pattern may be used when a pitch between lattice points is reduced. Use of a fine pattern, however,
increases the amount of information handled by the
information processing device 50 and thus increases load on processing performed by the information processing device 50. It is preferable that the calibration chart 60 be large enough to be imaged on the whole of a photographic image. The calibration chart 60 in such a large size enables the information processing device 50 to use
information of the characteristic points (corresponding points) existing over the whole area of the photographic image and thus to correctly obtain an absolute positional deviation resulting from the windshield 15. Any shape of a pattern other than a checkered pattern is applicable to the calibration chart 60. Examples of the pattern of the calibration chart 60 may include a circular pattern.
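For simulation or bench testing, such a checkered chart image can be generated in a few lines (the pitch and size below are arbitrary choices, not values from the disclosure):

```python
import numpy as np

def checkerboard(rows, cols, pitch):
    """Return a checkered pattern as an 8-bit image; a smaller pitch
    produces more lattice points (characteristic points)."""
    y, x = np.indices((rows * pitch, cols * pitch))
    # alternate black (0) and white (255) squares of the given pitch
    return ((y // pitch + x // pitch) % 2 * 255).astype(np.uint8)

chart = checkerboard(6, 8, 40)   # 6 x 8 checks at a 40-pixel pitch
```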
Returning to FIG. 4, the stereo camera 30 photographs the calibration chart 60 without a windshield and acquires the first comparison image and the first reference image. The first comparison image is photographed by the first camera 1 (see FIG. 1), and the first reference image is photographed by the second camera 2 (see FIG. 1). The first comparison image and the first reference image are input into the information processing device 50 serving as a calibration device.
FIG. 6 is a drawing that illustrates an example of an environment (with the windshield 15) where the calibration method in the first embodiment is implemented. The
implementation environment in FIG. 6 is the case where the vehicle in the implementation environment in FIG. 4 is equipped with the windshield 15. The implementation environments of FIG. 4 and FIG. 6 differ from each other only in whether to include the windshield 15. The stereo camera 30 photographs the calibration chart 60 with the windshield 15 and acquires the second comparison image and the second reference image. The second comparison image and the second reference image are input into the
information processing device 50 serving as a calibration device.
The information processing device 50 uses the first comparison image and the second comparison image to
determine a correction parameter for calibrating an
absolute positional deviation in the first camera 1 of the stereo camera 30 and uses the first reference image and the second reference image to determine a correction parameter for calibrating an absolute positional deviation in the second camera 2 of the stereo camera 30.
FIG. 7 is a drawing that illustrates an example of the configuration of the information processing device 50 in the first embodiment. The information processing device 50 in the first embodiment includes a receiving unit 51, a determining unit 52, an absolute positional deviation calculating unit 53, a correction parameter calculating unit 54, and a memory control unit 55.
The receiving unit 51 receives, from the stereo camera 30, first photographic images (the first comparison image and the first reference image) obtained by photographing the calibration chart 60 without interposing the windshield 15. The receiving unit 51 inputs the first photographic images (the first comparison image and the first reference image) into the determining unit 52. The receiving unit 51 furthermore receives, from the stereo camera 30, second photographic images (the second comparison image and the second reference image) obtained by photographing the calibration chart 60 through the windshield 15. The receiving unit 51 inputs the second photographic images (the second comparison image and the second reference image) into the determining unit 52.
The determining unit 52 receives the first
photographic images (the first comparison image and the first reference image) from the receiving unit 51. The determining unit 52 determines whether the first
photographic images are reliable. The determining unit 52, for example, extracts the luminance of the white areas of the pattern image of the calibration chart 60 included in the first photographic images. Uneven luminance on the image of the pattern affects accuracy in the corresponding point detecting processing described later. The determining unit 52 thus determines whether uneven luminance markedly appears over the whole areas of the first photographic images. If, for example, uneven
luminance does not markedly appear over the whole areas of the first photographic images, the determining unit 52 determines that the first photographic images are reliable. When the first photographic images are determined to be reliable, the determining unit 52 inputs the first
photographic images into the absolute positional deviation calculating unit 53.
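A minimal sketch of such a reliability check, assuming a block-wise luminance statistic and an arbitrary threshold (the disclosure specifies neither):

```python
import numpy as np

def is_reliable(image, block=32, max_spread=30.0):
    """Judge whether uneven luminance markedly appears: compare the
    mean luminance of each block of the image against the others.
    The block size and threshold are assumptions for illustration."""
    h, w = image.shape
    means = [image[r:r + block, c:c + block].mean()
             for r in range(0, h - block + 1, block)
             for c in range(0, w - block + 1, block)]
    # reliable if the brightest and darkest blocks differ only modestly
    return max(means) - min(means) <= max_spread
```

A uniformly lit chart image passes this check, while a strongly vignetted or side-lit one fails it.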
The determining unit 52 similarly receives second photographic images (the second comparison image and the second reference image) from the receiving unit 51. The determining unit 52 determines whether the second
photographic images are reliable. The determining unit 52, for example, determines whether differences of luminance on the second photographic images are normal, thereby detecting a case where dust or the like adheres to the windshield 15. Dust and the like adhering to the windshield 15 affect accuracy in the corresponding point detecting processing described later. If, for example, differences of luminance on the second photographic images are normal, the second photographic images are determined to be reliable. When the second photographic images are determined to be
reliable, the determining unit 52 inputs the second
photographic images into the absolute positional deviation calculating unit 53.
The absolute positional deviation calculating unit 53 receives the first photographic images (the first
comparison image and the first reference image) and the second photographic images (the second comparison image and the second reference image) from the determining unit 52. The absolute positional deviation calculating unit 53 calculates an absolute positional deviation in the first camera 1 and an absolute positional deviation in the second camera 2. The same method is employed to calculate the absolute positional deviations of the first camera 1 and the second camera 2; the following therefore describes the method for calculating the absolute positional deviation in the first camera 1 using the first comparison image and the second comparison image.
The absolute positional deviation calculating unit 53 calculates an absolute positional deviation (a deviation in coordinates of an object image due to the windshield 15) based on the coordinates of the image of the calibration chart 60 on the first comparison image and the coordinates of the image of the calibration chart 60 on the second comparison image. Specifically, the absolute positional deviation calculating unit 53 retrieves respective characteristic points (corresponding points) on the second comparison image corresponding to characteristic points on the first comparison image in the two-dimensional directions, the x direction and the y direction (corresponding point retrieving processing). The absolute positional deviation calculating unit 53 determines those characteristic points by using the image of a pattern on the calibration chart 60. The absolute positional deviation calculating unit 53 calculates a deviation in coordinates (Δx, Δy) between the coordinates (x1, y1) of a characteristic point on the first comparison image and the coordinates (x2, y2) of a characteristic point (a corresponding point), which corresponds to the characteristic point on the first comparison image, on the second comparison image as an absolute positional deviation in the vicinity of the characteristic point in the first camera 1. The absolute positional deviation calculating unit 53 inputs the absolute positional deviation in the first camera 1 into the correction parameter calculating unit 54.
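With the characteristic points held as (x, y) arrays, the per-point deviation (Δx, Δy) reduces to a subtraction; the coordinate values below are invented for illustration:

```python
import numpy as np

# (x, y) characteristic points on the first comparison image (no windshield)
pts_first = np.array([[100.0, 80.0], [220.0, 80.0], [100.0, 200.0]])
# the corresponding points retrieved on the second comparison image
# (with the windshield); each point is locally displaced by refraction
pts_second = np.array([[101.2, 80.5], [220.9, 79.6], [100.7, 201.1]])

# (Δx, Δy) per point: the local absolute positional deviation of the
# first camera in the vicinity of each characteristic point
deviations = pts_second - pts_first
```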
The absolute positional deviation calculating unit 53 calculates an absolute positional deviation in the second camera 2 in a manner similar to that for calculating the absolute positional deviation in the first camera 1 and inputs the absolute positional deviation in the second camera 2 into the correction parameter calculating unit 54.
The correction parameter calculating unit 54 receives the absolute positional deviation in the first camera 1 and the absolute positional deviation in the second camera 2 from the absolute positional deviation calculating unit 53. The correction parameter calculating unit 54 calculates a first correction parameter for calibrating the absolute positional deviation in the first camera 1 and a second correction parameter for calibrating the absolute positional deviation in the second camera 2. Examples of the first correction parameter and the second correction parameter include a coefficient used in a correction formula for transforming coordinates in such a manner that an absolute positional deviation is cancelled. For example, when the absolute positional deviation is indicated as (1, 2), the correction formula transforms the coordinates by -1 in the x direction and -2 in the y direction. The
correction parameter calculating unit 54 inputs the first correction parameter and the second correction parameter into the memory control unit 55.
The memory control unit 55 receives the first correction parameter and the second correction parameter from the correction parameter calculating unit 54. The memory control unit 55 stores the first correction parameter and the second correction parameter in the stereo camera 30 by, for example, transmitting them to the stereo camera 30 by wired or wireless communication. Alternatively, the first correction parameter and the second correction parameter may first be stored in an attachable and detachable memory medium and then stored in the stereo camera 30 through the memory medium.
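Taking the coefficient example literally, a translation-type correction formula might be sketched as follows (the functional form is an assumption; the disclosure states only that coordinates are transformed so that the deviation is cancelled):

```python
def make_correction(deviation):
    """Build a correction formula whose coefficients cancel the given
    absolute positional deviation by a simple translation."""
    dx, dy = deviation
    # shift every coordinate by the negative of the deviation
    return lambda x, y: (x - dx, y - dy)

# deviation (1, 2) -> transform by -1 in x and -2 in y, as in the text
correct = make_correction((1, 2))
corrected = correct(10, 10)
```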
A calibration method in the first embodiment will now be described. FIG. 8 is a flowchart that illustrates an example of the calibration method in the first embodiment. The stereo camera 30 photographs the calibration chart 60 without the windshield 15 (see FIG. 4) and acquires the first photographic images (the first comparison image and the first reference image) (Step S1). The information processing device 50 (the determining unit 52) determines whether the first photographic images acquired at Step S1 are reliable (Step S2). The information processing device 50 determines reliability of the first photographic images, for example, based on whether uneven luminance markedly appears over the whole areas of the first photographic images.
When the first photographic images are determined to be unreliable (No at Step S2), the implementation environment is adjusted (Step S3), and the process returns to Step S1. Examples of the adjustment for the implementation environment include adjustments for the position and direction of the calibration chart 60. When the first photographic images are determined to be reliable (Yes at Step S2), the windshield 15 is mounted on the vehicle (Step S4). That is, the environment where the calibration method in the first embodiment is implemented is brought into the state of FIG. 6.
The stereo camera 30 photographs the calibration chart 60 with the windshield 15 mounted (see FIG. 6) and acquires the second photographic images (the second comparison image and the second reference image) (Step S5). The information processing device 50 (the determining unit 52) determines whether the second photographic images acquired at Step S5 are reliable (Step S6). The information processing device 50 determines reliability of the second photographic images, for example, based on whether differences of luminance on the second photographic images are normal.
When the second photographic images are determined to be unreliable (No at Step S6), the implementation environment is adjusted (Step S7), and the process returns to Step S1. Examples of the adjustment for the implementation environment include remounting of the windshield 15. If the adjustment for the implementation environment (Step S7) is minor, the process may restart from Step S4 instead of returning to Step S1.
When the second photographic images are determined to be reliable (Yes at Step S6), the information processing device 50 (the absolute positional deviation calculating unit 53) calculates an absolute positional deviation in the first camera 1 using the above-described method with reference to the first comparison image and the second comparison image and furthermore calculates an absolute positional deviation in the second camera 2 using the above-described method with reference to the first reference image and the second reference image (Step S8).
The information processing device 50 (the correction parameter calculating unit 54) calculates the first
correction parameter for calibrating the absolute
positional deviation in the first camera 1 and the second correction parameter for calibrating the absolute
positional deviation in the second camera 2 (Step S9).
Examples of the first correction parameter and the second correction parameter include a coefficient used in a
correction formula for transforming coordinates in such a manner that the absolute positional deviation is cancelled.
The information processing device 50 (the memory control unit 55) stores the first correction parameter and the second correction parameter in the stereo camera 30.
The memory control unit 55 stores the first correction parameter and the second correction parameter in the stereo camera 30 by, for example, transmitting the first
correction parameter and the second correction parameter to the stereo camera 30 by wired or wireless communication (Step S10).
As described above, the calibration method in the first embodiment acquires the first photographic images (the first comparison image and the first reference image) photographed without the windshield 15 and the second photographic images (the second comparison image and the second reference image) photographed with the windshield 15. The calibration method in the first embodiment thereafter calculates the difference between a characteristic point on the first comparison image and a characteristic point (a corresponding point), which corresponds to the
characteristic point on the first comparison image, on the second comparison image as an absolute positional deviation in the vicinity of the characteristic point of the first camera 1 and similarly calculates the difference between a characteristic point on the first reference image and a characteristic point (corresponding point), which
corresponds to the characteristic point on the first reference image, on the second reference image as an absolute positional deviation in the vicinity of the characteristic point of the second camera 2. Based on the absolute positional deviation in the first camera 1 (the second camera 2) calculated in this manner, the calibration method in the first embodiment calculates the first
correction parameter (the second correction parameter). The absolute positional deviation in the first camera 1 (the second camera 2) due to the windshield 15 is therefore accurately calibrated by using the first correction parameter (the second correction parameter).
In the description of the first embodiment, the stereo camera 30 mounted on a vehicle is used as an example of a photographic device to be calibrated; however, the calibration method in the first embodiment can be employed separately for a single camera. Any number of cameras is thus applicable as a photographic device to be calibrated. Examples of the photographic device to be calibrated may include a monocular camera.
Second Embodiment
A second embodiment will now be described. When the stereo camera 30 is used as a photographic device to be calibrated, a relative positional deviation described with reference to FIGS. 3A to 3E occurs due to factors such as an assembly tolerance of the stereo camera 30 mounted on an object. The relative positional deviation resulting from the assembly tolerance and/or the like can be calibrated by firstly correcting the absolute positional deviation in the second comparison image (the second reference image) using the first correction parameter (the second correction parameter) calculated by using the calibration method in the first embodiment and secondly updating the first correction parameter of the stereo camera 30 so as to perform
calibration described in FIG. 3D. In the second embodiment, a case will be described where an absolute positional deviation and a relative positional deviation in the stereo camera 30 are calibrated.
FIG. 9 is a drawing that illustrates an example of the configuration of the information processing device 50 in the second embodiment. The information processing device 50 in the second embodiment includes the receiving unit 51, the determining unit 52, the absolute positional deviation calculating unit 53, the correction parameter calculating unit 54, the memory control unit 55, and a relative
positional deviation calculating unit 56. The configuration of the information processing device 50 in the second embodiment additionally includes the relative positional deviation calculating unit 56 compared with the configuration of the information processing device 50 in the first embodiment. The description common to the first embodiment will be omitted, and processing for calibrating a relative positional deviation occurring due to a factor such as an assembly tolerance of the stereo camera 30 mounted on an object will be described in the second embodiment.
Operation when using the absolute positional deviation calculating unit 53
For calibration of an absolute positional deviation resulting from the windshield 15, the information
processing device 50 in the second embodiment calculates the first and the second correction parameters by using the absolute positional deviation calculating unit 53 and the correction parameter calculating unit 54 and stores the parameters in the stereo camera 30 by using the memory control unit 55 (see FIG. 8). The operation of the
information processing device 50 when using the absolute positional deviation calculating unit 53 is the same as that in the first embodiment, and the description of the operation is thus omitted.
Operation when using the relative positional deviation calculating unit 56
The information processing device 50 in the second embodiment receives, from the stereo camera 30, a
photographic image in which the absolute positional
deviation resulting from the windshield 15 has been
calibrated and calculates a parameter (a third correction parameter in the later description) for calibrating a relative
positional deviation occurring due to a factor such as an assembly tolerance of the stereo camera 30 by using the relative positional deviation calculating unit 56 and the correction parameter calculating unit 54. The following is a description about an operation of the information
processing device 50 when using the relative positional deviation calculating unit 56.
The receiving unit 51 receives, from the stereo camera 30, the second comparison image (a comparison image
including the calibration chart 60 photographed by the first camera 1 through the windshield 15) in which the absolute positional deviation has been calibrated by using the first correction parameter and the second reference image (a reference image including the calibration chart 60 photographed by the second camera 2 through the windshield 15) in which the absolute positional deviation has been calibrated by using the second correction parameter. The determining unit 52 determines whether the second comparison image (the second reference image) in which the absolute positional deviation has been calibrated by using the first correction parameter (the second correction parameter) is reliable. The method for determining
reliability is the same as that of the first embodiment, and the description of the method is thus omitted. If the second comparison image (the second reference image) is determined to be reliable, the determining unit 52 inputs the second comparison image (the second reference image) into the relative positional deviation calculating unit 56.
The relative positional deviation calculating unit 56 calculates parallax (Dx, Dy) by retrieving respective characteristic points (corresponding points) on the second reference image that correspond to characteristic points on the second comparison image. The relative positional deviation calculating unit 56 thereafter calculates the difference between the parallax (Dx, Dy) and ideal parallax (D, 0) as a relative positional deviation and inputs the relative positional deviation into the correction parameter calculating unit 54.
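Numerically, with illustrative measured values (the parallax figures below are not from the disclosure):

```python
D = 3.0            # ideal parallax obtained from the known chart distance
Dx, Dy = 3.4, 0.2  # parallax measured between a corresponding point pair

# relative positional deviation: the difference between the measured
# parallax (Dx, Dy) and the ideal parallax (D, 0)
relative_deviation = (Dx - D, Dy - 0.0)
```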
The correction parameter calculating unit 54
calculates the third correction parameter for calibrating a relative positional deviation between the second comparison image and the second reference image. Calibration using the third correction parameter is performed on the second comparison image (see FIG. 3D). Examples of the third correction parameter include a coefficient used in a correction formula for transforming coordinates on the second comparison image in such a manner that the relative positional deviation is cancelled. The correction
parameter calculating unit 54 modifies the first correction parameter by combining the first correction parameter for calibrating an absolute positional deviation with the third correction parameter and works out a modified first
correction parameter. The correction parameter calculating unit 54 inputs the modified first correction parameter into the memory control unit 55.
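If both parameters are translation-type corrections, as in the earlier coefficient example, combining them reduces to summing the two shifts (an assumption; the disclosure does not fix the parameter form):

```python
def combine(first, third):
    """Compose two translation-type correction parameters; for pure
    translations the combined shift is the sum of the two shifts."""
    return (first[0] + third[0], first[1] + third[1])

# the first parameter cancels the absolute deviation of the first camera,
# the third cancels the remaining relative deviation (values illustrative)
modified_first = combine((-1.0, -2.0), (-0.4, -0.2))
```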
The memory control unit 55 stores the modified first correction parameter in the stereo camera 30, thereby updating the first correction parameter stored in the stereo camera 30.
A calibration method in the second embodiment will now be described. FIG. 10 is a flowchart that illustrates an example of the calibration method in the second embodiment. The information processing device 50 stores the first correction parameter and the second correction parameter calculated by using the calibration method in the first embodiment (see Step S1 to Step S10 in FIG. 8) in the stereo camera 30 (Step S11).
The stereo camera 30 photographs the calibration chart 60 serving as an object through the windshield 15 and acquires the second comparison image and the second
reference image (Step S12). The stereo camera 30
calibrates the second comparison image using the first correction parameter (Step S13). The stereo camera 30 furthermore calibrates the second reference image using the second correction parameter (Step S14).
Based on the difference between the coordinates of the object image on the calibrated second comparison image and the coordinates of the object image on the calibrated second reference image and the ideal parallax D, the information processing device 50 calculates the third correction parameter for calibrating a relative positional deviation indicating a deviation in a parallax between the object image on the calibrated second comparison image and the object image on the calibrated second reference image (Step S15). The information processing device 50 modifies the first correction parameter using the third correction parameter, thereby calculating the modified first correction parameter (Step S16). The stereo camera 30 stores therein the modified first correction parameter (Step S17).
As described above, the calibration method in the second embodiment provides further modifications to the first correction parameter of the stereo camera 30, thereby acquiring three dimensional information indicating a more accurate position of the object from the object image included in the photographic image photographed by the stereo camera 30.
In the above description, the information processing device 50 modifies the first correction parameter using the third correction parameter. In another case, the
information processing device 50 may modify the second correction parameter using the third correction parameter.
Third Embodiment
A third embodiment will now be described. The third embodiment relates to a parallax calculating device storing therein a correction parameter calculated by using the calibration method in the second embodiment. When the parallax calculating device in operation uses a correction parameter, the word "correction" is used instead of
"calibration". FIG. 11 is a drawing that illustrates an example of the configuration of a parallax calculating device 20 in the third embodiment. The parallax
calculating device 20 in the third embodiment includes a receiving unit 21, a first correcting unit 22, a second correcting unit 23, a memory unit 24, a calculating unit 25, and a restoring unit 26. The receiving unit 21 receives an input of the second comparison image (a comparison image photographed through a transparent body) and outputs the second comparison image to the first correcting unit 22. The receiving unit 21 receives an input of the second reference image (a
reference image photographed through a transparent body) and outputs the second reference image to the second correcting unit 23.
The first correcting unit 22 receives the second comparison image from the receiving unit 21, corrects the second comparison image using the modified first correction parameter in the above description, and outputs the
corrected second comparison image to the calculating unit 25 and the restoring unit 26.
The second correcting unit 23 receives the second reference image from the receiving unit 21, corrects the second reference image using the second correction
parameter in the above description, and outputs the
corrected second reference image to the calculating unit 25 and the restoring unit 26.
The memory unit 24 stores therein the modified first correction parameter used by the first correcting unit 22 and the second correction parameter used by the second correcting unit 23.
The calculating unit 25 receives the corrected second comparison image from the first correcting unit 22 and receives the corrected second reference image from the second correcting unit 23. The calculating unit 25
calculates the parallax based on the object image included in the corrected second comparison image and the object image included in the corrected second reference image. The calculating unit 25 calculates the parallax for each pixel and generates a parallax image indicating the parallaxes by density values.
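The disclosure does not detail the per-pixel parallax computation; a minimal sum-of-absolute-differences block-matching sketch conveys the idea (window size and search range are arbitrary choices):

```python
import numpy as np

def disparity_map(ref, cmp_img, max_d=8, block=5):
    """For each reference pixel, search horizontally in the comparison
    image and keep the shift with the smallest sum of absolute
    differences; drawn as density values, the result is a parallax image."""
    h, w = ref.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.uint8)
    for y in range(half, h - half):
        for x in range(half + max_d, w - half):
            patch = ref[y - half:y + half + 1, x - half:x + half + 1]
            costs = [np.abs(patch - cmp_img[y - half:y + half + 1,
                                            x - d - half:x - d + half + 1]).sum()
                     for d in range(max_d + 1)]
            disp[y, x] = int(np.argmin(costs))
    return disp
```

Production systems use optimized matchers, but the principle, search along one axis and keep the best-matching shift, is the same.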
The restoring unit 26 receives the corrected second comparison image from the first correcting unit 22 and receives the corrected second reference image from the second correcting unit 23. The restoring unit 26 restores the modulation transfer function (MTF) characteristics of the second comparison image, which have been degraded by the correction. By restoring the MTF characteristics of the second comparison image, the restoring unit 26 generates a luminance image of the first camera 1 with its resolution improved. Similarly, the restoring unit 26 restores the MTF characteristics of the second reference image, which have been degraded by the correction. By restoring the MTF characteristics of the second reference image, the restoring unit 26 generates a luminance image of the second camera 2 with its resolution improved.
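The restoration filter is likewise not disclosed; an unsharp mask is one conventional way to recover contrast lost to resampling, sketched here under that assumption:

```python
import numpy as np

def unsharp(img, amount=1.0):
    """Add back the difference between the image and a 3x3 box-blurred
    copy, steepening edges softened by interpolation. The filter choice
    and strength are assumptions, not values from the disclosure."""
    h, w = img.shape
    out = img.astype(float).copy()
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            blur = img[y - 1:y + 2, x - 1:x + 2].mean()
            out[y, x] = img[y, x] + amount * (img[y, x] - blur)
    return out
```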
A method for calculating the parallax in the third embodiment will now be described with reference to a flowchart. FIG. 12 is a flowchart that illustrates an example of the method for calculating the parallax in the third embodiment. The receiving unit 21 receives an input of the second comparison image (Step S21) and receives an input of the second reference image (Step S22).
The first correcting unit 22 corrects the second comparison image using the modified first correction parameter (Step S23). The second correcting unit 23 corrects the second reference image using the second correction parameter (Step S24).
The calculating unit 25 calculates parallax based on the object image included in the corrected second
comparison image and the object image included in the corrected second reference image (Step S25). The
calculating unit 25 generates a parallax image indicating the parallaxes by density values of pixels by using the parallaxes (the parallax calculated for each pixel)
calculated at Step S25 (Step S26).
As described above, in the parallax calculating device 20 of the third embodiment, the first correcting unit 22 corrects the second comparison image using the modified first correction parameter, and the second correcting unit 23 corrects the second reference image using the second correction parameter. Furthermore, the calculating unit 25 calculates the parallax based on the object image included in the corrected second comparison image and the object image included in the corrected second reference image.
The parallax calculating device 20 in the third embodiment can correct a deviation in a parallax (a
relative positional deviation) in an object image on image data due to an assembly tolerance and/or the like in addition to a deviation (an absolute positional deviation) in coordinates of an object image on image data due to a transparent body. In other words, the parallax calculating device 20 in the third embodiment can more accurately calculate three dimensional coordinates indicating the position of an object based on the distance to the object calculated from the parallax of the object image and the coordinates of the object image on image data.
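The distance mentioned here follows the standard pinhole stereo relation Z = B·f/d; the baseline and focal-length values below are illustrative only:

```python
def distance_from_parallax(parallax_px, baseline_m, focal_px):
    """Pinhole stereo relation Z = B * f / d:
    the distance grows as the parallax shrinks."""
    return baseline_m * focal_px / parallax_px

# illustrative values: 8-pixel parallax, 10 cm baseline, 1200 px focal length
z = distance_from_parallax(8.0, 0.1, 1200.0)   # about 15 m
```

This is why a small residual parallax deviation translates into a large ranging error at long distances.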
Fourth Embodiment
A fourth embodiment will now be described. FIG. 13 is a drawing that illustrates an example of the configuration of the stereo camera 30 in the fourth embodiment. The stereo camera 30 in the fourth embodiment includes the first camera 1, the second camera 2, and the parallax calculating device 20. The parallax calculating device 20 includes the receiving unit 21, the first correcting unit 22, the second correcting unit 23, the memory unit 24, the calculating unit 25, and the restoring unit 26.
The stereo camera 30 in the fourth embodiment includes the parallax calculating device 20 of the third embodiment. Examples of the application of the stereo camera 30 in the fourth embodiment include an in-vehicle stereo camera. FIG. 14 is a drawing that illustrates an example of using the stereo camera 30 in the fourth embodiment as an in-vehicle stereo camera. The stereo camera 30 is installed inside the windshield 15, and this arrangement makes it possible to correct a deviation (an absolute positional deviation) in the coordinates of the object image on image data, in addition to a deviation in a parallax (a relative positional deviation) in the object image on image data, whether the vehicle is running or stationary.
The stereo camera 30 in the fourth embodiment can correct a deviation (an absolute positional deviation) in coordinates of an object image on image data on a real-time basis in addition to a deviation in a parallax (a relative positional deviation) in an object image on image data. In other words, the stereo camera 30 in the fourth embodiment can accurately calculate, on a real-time basis, three dimensional coordinates indicating the position of the object based on the distance to the object calculated from the parallax of the object image and the coordinates of the object image on image data.
The following is a description about an example of the hardware configuration of the information processing device 50 and the parallax calculating device 20. FIG. 15 is a drawing that illustrates an example of the hardware
configuration of the information processing device 50 and the parallax calculating device 20. The information
processing device 50 and the parallax calculating device 20 include a control device 41, a main memory device 42, an auxiliary memory device 43, an external interface 44, and a communication device 45. The control device 41, the main memory device 42, the auxiliary memory device 43, the external interface 44, and the communication device 45 are connected with one another via a bus 46.
The control device 41 executes a computer program read out from the auxiliary memory device 43 onto the main memory device 42. Examples of the main memory device 42 include a read only memory (ROM) and a random access memory (RAM). Examples of the auxiliary memory device 43 include a hard disk drive (HDD) and a memory card. The external interface 44 is an interface for transmitting data to and receiving data from other devices. The communication device 45 is an interface for communicating with other devices by wireless communication and/or the like.
A computer program executed by the information processing device 50 and the parallax calculating device 20 is stored, as an installable or executable file, in a computer-readable memory medium such as a compact disc read only memory (CD-ROM), a memory card, a compact disc recordable (CD-R), or a digital versatile disc (DVD), and provided as a computer program product.
The program executed by the information processing device 50 and the parallax calculating device 20 may be stored in a computer connected to a network such as the Internet and provided by being downloaded via the network. The program executed by the information processing device 50 and the parallax calculating device 20 may be provided via a network such as the Internet without being downloaded.
The program executed by the information processing device 50 and the parallax calculating device 20 may be preliminarily embedded in a read only memory (ROM) or the like and provided. The program executed by the information processing device 50 consists of modules including the above-described functional blocks (the receiving unit 51, the determining unit 52, the absolute positional deviation calculating unit 53, the correction parameter calculating unit 54, the memory control unit 55, and the relative positional deviation calculating unit 56). As an actual hardware configuration, the control device 41 reads out the program from a memory medium and executes it, whereby each of the functional blocks is loaded onto the main memory device 42. In other words, each of the functional blocks is generated on the main memory device 42.
The program executed by the parallax calculating device 20 consists of modules including the above-described functional blocks (the receiving unit 21, the first correcting unit 22, the second correcting unit 23, the calculating unit 25, and the restoring unit 26). As an actual hardware configuration, the control device 41 reads out the program from a memory medium and executes it, whereby each of the functional blocks is loaded onto the main memory device 42. In other words, each of the functional blocks is generated on the main memory device 42.
Some or all of the above-described functional blocks (the receiving unit 51, the determining unit 52, the absolute positional deviation calculating unit 53, the correction parameter calculating unit 54, the memory control unit 55, and the relative positional deviation calculating unit 56) included in the information processing device 50, and some or all of the above-described functional blocks (the receiving unit 21, the first correcting unit 22, the second correcting unit 23, the calculating unit 25, and the restoring unit 26) included in the parallax calculating device 20, may be implemented in hardware, such as an integrated circuit (IC), instead of being implemented in software.
An embodiment provides the effect that an absolute positional deviation in image data due to a transparent body can be accurately calibrated.
Although the invention has been described with respect to specific embodiments for a complete and clear disclosure, the appended claims are not to be thus limited but are to be construed as embodying all modifications and alternative constructions that may occur to one skilled in the art that fairly fall within the basic teaching herein set forth.
REFERENCE SIGNS LIST
1 first camera
2 second camera
11 lens (optical system)
12 sensor
13 object
14 optical axis
15 windshield
20 parallax calculating device
21 receiving unit
22 first correcting unit
23 second correcting unit
24 memory unit
25 calculating unit
26 restoring unit
30 stereo camera
41 control device
42 main memory device
43 auxiliary memory device
44 external interface
45 communication device
46 bus
50 information processing device (calibration device)
51 receiving unit
52 determining unit
53 absolute positional deviation calculating unit
54 correction parameter calculating unit
55 memory control unit
56 relative positional deviation calculating unit
calibration chart (calibration tool)

Claims

1. A calibration method for a photographic device that photographs an object through a transparent body, the calibration method comprising:
acquiring a first photographic image by photographing the object without interposing the transparent body;
acquiring a second photographic image by photographing the object through the transparent body;
calculating an absolute positional deviation that indicates a deviation in coordinates of an image of the object due to the transparent body based on coordinates of an image of the object on the first photographic image and coordinates of an image of the object on the second photographic image;
calculating a correction parameter for calibrating the absolute positional deviation; and
storing the correction parameter in the photographic device.
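The method of claim 1 can be sketched as follows, under strong simplifying assumptions that are not part of the claim: chart-point coordinates are assumed already matched between the two photographic images, and the correction parameter is reduced to a single mean shift (the claimed parameter would in general model a spatially varying deviation).

```python
def absolute_positional_deviation(first_pts, second_pts):
    # Deviation of each matched chart-point coordinate caused by the
    # transparent body: (through-glass) minus (glass-free).
    return [(x2 - x1, y2 - y1)
            for (x1, y1), (x2, y2) in zip(first_pts, second_pts)]

def correction_parameter(deviations):
    # Simplest possible correction parameter: one mean shift that
    # maps through-glass coordinates back toward glass-free ones.
    # A constant shift is an illustrative stand-in only.
    n = len(deviations)
    mean_dx = sum(dx for dx, _ in deviations) / n
    mean_dy = sum(dy for _, dy in deviations) / n
    return (-mean_dx, -mean_dy)

# Matched chart-point coordinates: first image (no transparent body),
# second image (through the transparent body).
first = [(100.0, 100.0), (200.0, 100.0), (100.0, 200.0)]
second = [(101.5, 100.5), (201.5, 100.5), (101.5, 200.5)]
param = correction_parameter(absolute_positional_deviation(first, second))
# The resulting parameter would then be stored in the photographic device.
```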
2. The calibration method according to claim 1, wherein the photographic device is a stereo camera including a first camera and a second camera;
the calculating of the absolute positional deviation includes:
calculating the absolute positional deviation in the first camera based on coordinates of an image of the object on the first photographic image photographed by the first camera and coordinates of an image of the object on the second photographic image photographed by the first camera, and
calculating the absolute positional deviation in the second camera based on coordinates of an image of the object on the first photographic image photographed by the second camera and coordinates of an image of the object on the second photographic image photographed by the second camera;
at the calculating of the correction parameter, a first correction parameter for calibrating the absolute positional deviation in the first camera and a second correction parameter for calibrating the absolute
positional deviation in the second camera are calculated; and
at the storing, the first correction parameter and the second correction parameter are stored in the stereo camera.
3. The calibration method according to claim 2, further comprising:
calculating a third correction parameter for
calibrating a relative positional deviation indicating a deviation in a parallax between the image of the object in the second photographic image photographed by the first camera and the image of the object in the second
photographic image photographed by the second camera; and updating the first correction parameter based on the third correction parameter.
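The update in claim 3 — folding the relative (parallax) correction into the first correction parameter while the second correction parameter remains unchanged — reduces, when both parameters are modelled as plain (dx, dy) shifts, to a sketch like the following. The shift model is an illustrative simplification, not the claimed parameter form.

```python
def update_first_parameter(first_param, third_param):
    # Fold the third correction parameter (which calibrates the
    # relative positional deviation) into the first camera's
    # correction parameter; the second camera's parameter is not
    # touched, so only one image absorbs the parallax adjustment.
    return (first_param[0] + third_param[0],
            first_param[1] + third_param[1])

# Hypothetical values: an absolute correction of (-1.5, -0.5) for the
# first camera, and a horizontal parallax correction of 0.25 pixels.
modified = update_first_parameter((-1.5, -0.5), (0.25, 0.0))
```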
4. The calibration method according to any one of claims 1 to 3, wherein the transparent body is a windshield of a vehicle .
5. The calibration method according to any one of claims 1 to 4, wherein the object photographed at the acquiring of the first photographic image and the object photographed at the acquiring of the second photographic image are a calibration tool that has a pattern for facilitating detection of coordinates on the second photographic image that correspond to coordinates on the first photographic image .
6. A calibration device that calibrates a photographic device that photographs an object through a transparent body, the calibration device comprising:
a receiving unit that receives a first photographic image obtained by photographing the object without interposing the transparent body and a second photographic image obtained by photographing the object through the transparent body;
an absolute positional deviation calculating unit that calculates an absolute positional deviation indicating a deviation in coordinates of an image of the object due to the transparent body based on coordinates of an image of the object on the first photographic image and coordinates of an image of the object on the second photographic image; a correction parameter calculating unit that
calculates a correction parameter for calibrating the absolute positional deviation; and
a memory control unit that stores the correction parameter in the photographic device.
7. A computer program product comprising a non-transitory computer-readable medium having computer readable program codes, the program codes when executed causing a computer that calibrates a photographic device that photographs an object through a transparent body to perform:
receiving a first photographic image obtained by photographing the object without interposing the
transparent body and a second photographic image obtained by photographing the object through the transparent body; calculating an absolute positional deviation
indicating a deviation in coordinates of an image of the object due to the transparent body based on coordinates of an image of the object on the first photographic image and coordinates of an image of the object on the second
photographic image;
calculating a correction parameter for calibrating the absolute positional deviation; and
storing the correction parameter in the photographic device.
PCT/JP2015/056016 2014-03-07 2015-02-24 Calibrarion method, calibration device, and computer program product WO2015133414A1 (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
CN201580012037.9A CN106104196B (en) 2014-03-07 2015-02-24 Calibration method and calibrator (-ter) unit
EP15759240.3A EP3114430B1 (en) 2014-03-07 2015-02-24 Calibration method, calibration device, and computer program product
US15/123,998 US10218961B2 (en) 2014-03-07 2015-02-24 Calibration method, calibration device, and computer program product
KR1020167024651A KR101787304B1 (en) 2014-03-07 2015-02-24 Calibration method, calibration device, and computer program product
US16/235,131 US10701341B2 (en) 2014-03-07 2018-12-28 Calibration method, calibration device, and computer program product

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2014045730A JP6427900B2 (en) 2014-03-07 2014-03-07 Calibration method, calibration system, program, and moving object
JP2014-045730 2014-03-07

Related Child Applications (2)

Application Number Title Priority Date Filing Date
US15/123,998 A-371-Of-International US10218961B2 (en) 2014-03-07 2015-02-24 Calibration method, calibration device, and computer program product
US16/235,131 Continuation US10701341B2 (en) 2014-03-07 2018-12-28 Calibration method, calibration device, and computer program product

Publications (1)

Publication Number Publication Date
WO2015133414A1 true WO2015133414A1 (en) 2015-09-11

Family

ID=54055220

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2015/056016 WO2015133414A1 (en) 2014-03-07 2015-02-24 Calibrarion method, calibration device, and computer program product

Country Status (6)

Country Link
US (2) US10218961B2 (en)
EP (1) EP3114430B1 (en)
JP (1) JP6427900B2 (en)
KR (1) KR101787304B1 (en)
CN (2) CN106104196B (en)
WO (1) WO2015133414A1 (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106559619A (en) * 2016-11-29 2017-04-05 北京奇虎科技有限公司 3D Camera Calibration Methods, device and 3D video cameras
EP3293701A1 (en) * 2016-09-07 2018-03-14 Conti Temic microelectronic GmbH Method and apparatus for the compensation of static image distortions introduced by a windshield onto an adas camera
CN108235777A (en) * 2017-12-29 2018-06-29 深圳市锐明技术股份有限公司 A kind of scaling method, device, storage medium and the terminal device of ADAS cameras
CN108307178A (en) * 2016-09-16 2018-07-20 艾克松有限责任公司 Calibration system
EP3629053A1 (en) * 2018-09-28 2020-04-01 NEXION S.p.A. System for calibrating a vehicle camera
EP3505865A4 (en) * 2016-08-29 2020-04-29 Hitachi Automotive Systems, Ltd. On-vehicle camera, method for adjusting on-vehicle camera, and on-vehicle camera system
WO2021004642A1 (en) * 2019-07-11 2021-01-14 Toyota Motor Europe A camera calibration method, a computer program, a computer-readable recording medium and a camera calibration system
GB2615145A (en) * 2022-01-21 2023-08-02 Motional Ad Llc Methods and systems for determination of boresight error in an optical system
WO2024018709A1 (en) * 2022-07-22 2024-01-25 日立Astemo株式会社 Stereo camera device and calibration method

Families Citing this family (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3088175B1 (en) 2015-04-27 2023-08-23 Ricoh Company, Ltd. Method for manufacturing laminated glass and laminated glass
WO2017051557A1 (en) * 2015-09-25 2017-03-30 オリンパス株式会社 Image calibration inspection tool and endoscope system
JP6645140B2 (en) * 2015-11-27 2020-02-12 株式会社明電舎 Image calibration apparatus and calibration method
EP3173975A1 (en) * 2015-11-30 2017-05-31 Delphi Technologies, Inc. Method for identification of candidate points as possible characteristic points of a calibration pattern within an image of the calibration pattern
EP3174007A1 (en) 2015-11-30 2017-05-31 Delphi Technologies, Inc. Method for calibrating the orientation of a camera mounted to a vehicle
EP3173979A1 (en) 2015-11-30 2017-05-31 Delphi Technologies, Inc. Method for identification of characteristic points of a calibration pattern within a set of candidate points in an image of the calibration pattern
JP6747176B2 (en) 2016-08-25 2020-08-26 株式会社リコー Image processing device, photographing device, program, device control system and device
WO2018163683A1 (en) * 2017-03-10 2018-09-13 ヤマハ発動機株式会社 Imaging system
DE102017220282A1 (en) * 2017-11-14 2019-05-16 Robert Bosch Gmbh Test method for a camera system, a control unit of the camera system, the camera system and a vehicle with this camera system
CN108364313B (en) * 2018-01-16 2021-08-27 深圳市科视创科技有限公司 Automatic alignment method, system and terminal equipment
EP3534334B1 (en) 2018-02-28 2022-04-13 Aptiv Technologies Limited Method for identification of characteristic points of a calibration pattern within a set of candidate points derived from an image of the calibration pattern
EP3534333A1 (en) 2018-02-28 2019-09-04 Aptiv Technologies Limited Method for calibrating the position and orientation of a camera relative to a calibration pattern
WO2019171984A1 (en) 2018-03-08 2019-09-12 ソニー株式会社 Signal processing device, signal processing method, and program
JP6947112B2 (en) * 2018-04-20 2021-10-13 トヨタ自動車株式会社 Calibration method
JP7427614B2 (en) * 2018-06-29 2024-02-05 ズークス インコーポレイテッド sensor calibration
JP2020034344A (en) * 2018-08-28 2020-03-05 株式会社Screenホールディングス Moving part position detection method, substrate processing method, substrate processing device, and substrate processing system
CN109068121B (en) * 2018-09-04 2019-07-23 珠海康弘医疗科技有限公司 3-D imaging system, 3-D imaging system calibration method and device
CN109981982B (en) * 2019-03-25 2021-02-19 联想(北京)有限公司 Control method, device and system
JP7136737B2 (en) * 2019-04-10 2022-09-13 株式会社神戸製鋼所 Three-dimensional position measuring device, three-dimensional position measuring method and program
JP7251425B2 (en) * 2019-09-20 2023-04-04 株式会社デンソーテン Attached matter detection device and attached matter detection method
JP7151675B2 (en) * 2019-09-20 2022-10-12 株式会社デンソーテン Attached matter detection device and attached matter detection method
DE102020100278A1 (en) * 2020-01-09 2021-07-15 Bayerische Motoren Werke Aktiengesellschaft Test method and test system for an optical sensor unit of a system for automated driving
JP7417859B2 (en) * 2020-03-19 2024-01-19 株式会社リコー Distance correction information calculation method, distance measuring device, moving object and stereo camera device
JP7405710B2 (en) 2020-07-16 2023-12-26 日立Astemo株式会社 Processing equipment and in-vehicle camera equipment
DE102020211154A1 (en) 2020-09-04 2022-03-10 Robert Bosch Gesellschaft mit beschränkter Haftung Method and apparatus for calibrating a vehicle mounted camera
CN112529790B (en) * 2020-11-13 2023-06-27 清华大学 Image light intensity correction method and device for spectrum recovery
JP2022140135A (en) * 2021-03-12 2022-09-26 株式会社リコー Lens unit, stereo camera, and movable body
JP2023055094A (en) 2021-10-05 2023-04-17 日立Astemo株式会社 Imaging apparatus and parallax deviation correction method
JP2024011739A (en) * 2022-07-15 2024-01-25 キヤノン株式会社 Calibration method for distance measuring device, distance measuring device, and computer program

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH06144006A (en) * 1992-11-06 1994-05-24 Nippon Sheet Glass Co Ltd Method of detecting transmissive distortion of plate shaped body
JPH09159442A (en) * 1995-12-04 1997-06-20 Honda Motor Co Ltd Environment recognition device for vehicle
JP2000322565A (en) * 1999-03-10 2000-11-24 Calsonic Kansei Corp Image data correcting circuit
US6381360B1 (en) * 1999-09-22 2002-04-30 Fuji Jukogyo Kabushiki Kaisha Apparatus and method for stereoscopic image processing
US20120206601A1 (en) * 2009-07-08 2012-08-16 Ulrich Seger Distortion correction of video systems
US20130250065A1 (en) * 2012-03-21 2013-09-26 Ricoh Company, Ltd. Range-finding system and vehicle mounting the range-finding system

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2095303A1 (en) * 1992-05-06 1993-11-07 Atsushi Miyake System of detecting optical distortion of a light-transmitting platelike member
US6067147A (en) * 1997-01-09 2000-05-23 Fuji Electric Co., Ltd. Distance-measuring apparatus
JP4109077B2 (en) * 2002-10-11 2008-06-25 敬二 実吉 Stereo camera adjustment device and stereo camera adjustment method
IL167193A (en) * 2004-03-18 2009-11-18 Elbit Systems Ltd Method and system for determining optical distortion in a transparent medium
US7365838B2 (en) * 2004-04-02 2008-04-29 Lockheed Martin Corporation System and method for the measurement of optical distortions
DE102004048400A1 (en) * 2004-10-01 2006-04-06 Robert Bosch Gmbh Method for detecting an optical structure
DE102004056669A1 (en) * 2004-10-13 2006-04-20 Robert Bosch Gmbh Device for the calibration of an image sensor system in a motor vehicle
DE102007040232B4 (en) * 2007-08-25 2015-12-31 Adc Automotive Distance Control Systems Gmbh Method for detecting defects in a windshield
DE102009007840A1 (en) * 2009-02-06 2010-08-12 Adc Automotive Distance Control Systems Gmbh Procedure for calibrating a camera-based system
JP5278819B2 (en) 2009-05-11 2013-09-04 株式会社リコー Stereo camera device and vehicle exterior monitoring device using the same
US9699438B2 (en) * 2010-07-02 2017-07-04 Disney Enterprises, Inc. 3D graphic insertion for live action stereoscopic video
JP6182866B2 (en) * 2012-03-21 2017-08-23 株式会社リコー Calibration device, distance measuring device, and vehicle
JP6520080B2 (en) 2014-01-31 2019-05-29 株式会社リコー Stereo camera calibration method, parallax calculation device, stereo camera and vehicle


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3114430A4 *

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3505865A4 (en) * 2016-08-29 2020-04-29 Hitachi Automotive Systems, Ltd. On-vehicle camera, method for adjusting on-vehicle camera, and on-vehicle camera system
EP3293701A1 (en) * 2016-09-07 2018-03-14 Conti Temic microelectronic GmbH Method and apparatus for the compensation of static image distortions introduced by a windshield onto an adas camera
CN108307178A (en) * 2016-09-16 2018-07-20 艾克松有限责任公司 Calibration system
CN106559619A (en) * 2016-11-29 2017-04-05 北京奇虎科技有限公司 3D Camera Calibration Methods, device and 3D video cameras
CN106559619B (en) * 2016-11-29 2019-07-05 北京奇虎科技有限公司 3D Camera Calibration Methods, device and 3D video camera
CN108235777A (en) * 2017-12-29 2018-06-29 深圳市锐明技术股份有限公司 A kind of scaling method, device, storage medium and the terminal device of ADAS cameras
CN108235777B (en) * 2017-12-29 2021-07-30 深圳市锐明技术股份有限公司 Calibration method and device of ADAS camera, storage medium and terminal equipment
EP3629053A1 (en) * 2018-09-28 2020-04-01 NEXION S.p.A. System for calibrating a vehicle camera
EP3742192A1 (en) * 2018-09-28 2020-11-25 NEXION S.p.A. System for calibrating a vehicle camera
US11836947B2 (en) 2018-09-28 2023-12-05 Nexion S.P.A. System for calibrating a vehicle camera
EP4273004A3 (en) * 2018-09-28 2024-01-24 NEXION S.p.A. System for calibrating a vehicle camera
WO2021004642A1 (en) * 2019-07-11 2021-01-14 Toyota Motor Europe A camera calibration method, a computer program, a computer-readable recording medium and a camera calibration system
GB2615145A (en) * 2022-01-21 2023-08-02 Motional Ad Llc Methods and systems for determination of boresight error in an optical system
US11812128B2 (en) 2022-01-21 2023-11-07 Motional Ad Llc Methods and systems for determination of boresight error in an optical system
WO2024018709A1 (en) * 2022-07-22 2024-01-25 日立Astemo株式会社 Stereo camera device and calibration method

Also Published As

Publication number Publication date
CN106104196A (en) 2016-11-09
EP3114430A1 (en) 2017-01-11
US20190141313A1 (en) 2019-05-09
KR101787304B1 (en) 2017-10-18
CN106104196B (en) 2019-10-29
US10218961B2 (en) 2019-02-26
JP2015169583A (en) 2015-09-28
EP3114430B1 (en) 2020-05-06
EP3114430A4 (en) 2017-04-26
KR20160119444A (en) 2016-10-13
US10701341B2 (en) 2020-06-30
JP6427900B2 (en) 2018-11-28
US20170070725A1 (en) 2017-03-09
CN110490944A (en) 2019-11-22

Similar Documents

Publication Publication Date Title
US10701341B2 (en) Calibration method, calibration device, and computer program product
US10972716B2 (en) Calibration method and measurement tool
JP6520080B2 (en) Stereo camera calibration method, parallax calculation device, stereo camera and vehicle
JP6458439B2 (en) On-vehicle camera calibration device, image generation device, on-vehicle camera calibration method, and image generation method
US10620000B2 (en) Calibration apparatus, calibration method, and calibration program
WO2018196391A1 (en) Method and device for calibrating external parameters of vehicle-mounted camera
JP6970577B2 (en) Peripheral monitoring device and peripheral monitoring method
JP2019132855A (en) Stereo camera calibration method, parallax calculation device, and stereo camera
JP6791341B2 (en) Calibration method, calibration equipment, and program
JPWO2018042954A1 (en) In-vehicle camera, adjustment method of in-vehicle camera, in-vehicle camera system
JP2021025868A (en) Stereo camera
WO2017042998A1 (en) In-vehicle stereo camera device and method for correcting same
JP2014174067A (en) Calibration device of on-vehicle camera
JP2006300890A (en) Device and method for inspecting image processing
CN113474678B (en) System and method for compensating for movement of a vehicle component
KR102611759B1 (en) Apparatus for calibrating of around view image for vehicle and control method thereof
JP6996582B2 (en) Calibration method, calibration equipment and program
JP6680335B2 (en) Stereo camera, vehicle, calculation method and program
JP4539400B2 (en) Stereo camera correction method and stereo camera correction device
WO2017042995A1 (en) In-vehicle stereo camera device and method for correcting same
JP2018033035A (en) Image processing apparatus, imaging device, program, instrument control system, and instrument
JP6985872B2 (en) Vehicle peripheral monitoring device and peripheral monitoring method
JP6674127B2 (en) Image processing device, photographing device, program, device control system and device
JP7492599B2 (en) Vehicle-mounted camera device
WO2023175708A1 (en) External environment recognition device and external environment recognition method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15759240

Country of ref document: EP

Kind code of ref document: A1

REEP Request for entry into the european phase

Ref document number: 2015759240

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 2015759240

Country of ref document: EP

ENP Entry into the national phase

Ref document number: 20167024651

Country of ref document: KR

Kind code of ref document: A

WWE Wipo information: entry into national phase

Ref document number: 15123998

Country of ref document: US

NENP Non-entry into the national phase

Ref country code: DE