WO2023007775A1 - Vehicle-mounted camera and calibration method for vehicle-mounted camera - Google Patents

Vehicle-mounted camera and calibration method for vehicle-mounted camera

Info

Publication number
WO2023007775A1
Authority
WO
WIPO (PCT)
Prior art keywords
vehicle
image
pixel
calibration
camera
Prior art date
Application number
PCT/JP2022/004627
Other languages
English (en)
Japanese (ja)
Inventor
聰 陳
和良 山崎
正幸 小林
Original Assignee
日立Astemo株式会社
Priority date
Filing date
Publication date
Application filed by 日立Astemo株式会社
Priority to DE112022001700.7T (DE112022001700T5)
Publication of WO2023007775A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N17/00Diagnosis, testing or measuring for television systems or their details
    • H04N17/002Diagnosis, testing or measuring for television systems or their details for television cameras
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/80Geometric correction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/16Anti-collision systems
    • G08G1/166Anti-collision systems for active traffic, e.g. moving vehicles, pedestrians, bikes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20021Dividing image into blocks, subimages or windows
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30204Marker
    • G06T2207/30208Marker matrix
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30248Vehicle exterior or interior
    • G06T2207/30252Vehicle exterior; Vicinity of vehicle

Definitions

  • The present invention relates to an in-vehicle camera and a calibration method for an in-vehicle camera.
  • Japanese Patent Application Laid-Open No. 2011-49733 (Patent Document 1) and Japanese Patent Application Laid-Open No. 2013-109416 (Patent Document 2) describe calibration techniques for camera systems such as in-vehicle cameras.
  • Patent Document 1 states: "In the present invention, the distortion of an image captured by a camera is approximated by a function model that differs for each region. By approximating the lens distortion with a function model, correction can be performed with high accuracy, making it possible to improve visibility and improve the accuracy of image recognition."
  • It is also described that "FIG. 1(c) is an example of a diagram illustrating the distortion coefficient when the weighting of a WIDE region (wide angle of view) is increased. By weighting the wide angle of view more heavily than the narrow angle of view, it is possible to obtain a distortion coefficient that accurately corrects the distortion of the wide-angle image, and highly accurate distance measurement becomes possible."
  • Distance measurement with conventional in-vehicle cameras is mainly used as a function that supports collision avoidance against obstacles ahead.
  • However, there are still many accidents at intersections, and in order to further reduce accidents, future in-vehicle cameras will need a wider recognition angle of view.
  • When a wide-angle lens is used, the amount of distortion (pixel shift) also increases, so the demand for calibration accuracy rises.
  • As the recognition angle of view widens, the range to be calibrated also increases, and so does the size of the calibration image (hereinafter referred to as the chart) used to measure the pixel deviation.
  • In addition, the pixel deviation becomes even greater due to refraction by the windshield; furthermore, variations in the mounting of the on-board camera and variations in the shape of the windshield cannot be ignored, so the need to perform calibration for each individual vehicle increases.
  • If the size of the chart is too large, it becomes difficult to set up the chart at a factory or dealer, making calibration difficult.
  • Conventionally, for example, a chart of 3 m (horizontal width) × 3 m (vertical width) may be used.
  • For a widened angle of view, however, a chart of 14 m (horizontal width) × 3 m (vertical width) may be required.
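  • As a rough geometric illustration of why the chart grows with the angle of view (the distance and angles below are illustrative assumptions, not values taken from this disclosure), a flat chart placed at a distance d in front of the camera must have a width of about 2·d·tan(θ/2) to fill a horizontal angle of view θ:

```python
import math

def chart_width_m(distance_m: float, horizontal_fov_deg: float) -> float:
    """Width of a flat chart needed to fill a horizontal angle of view at a given distance."""
    return 2.0 * distance_m * math.tan(math.radians(horizontal_fov_deg) / 2.0)

# Purely illustrative numbers: a chart placed 5 m ahead of the camera.
print(round(chart_width_m(5.0, 34.0), 1))   # ~3.1 m for a narrow (about ±17°) view
print(round(chart_width_m(5.0, 110.0), 1))  # ~14.3 m for a wide (about ±55°) view
```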
  • The object of the present invention is to provide a technique that can achieve high-precision calibration of a wide-angle on-board camera using a small chart.
  • An in-vehicle camera according to the present invention includes: an imaging unit that captures an image of an object; an estimating unit that calculates the pixel deviation of the entire screen based on the image captured by the imaging unit; a region selection unit that selects, in the image captured by the imaging unit, a plurality of predetermined regions determined in advance according to the shape and inclination of the windshield; and a calibration unit that corrects the pixel deviation of the entire screen based on information from the estimating unit.
  • The estimating unit estimates the pixel displacement of the entire image using images detected from the plurality of predetermined regions in the image captured by the imaging unit.
  • FIG. 1 is a block diagram showing a configuration example of an in-vehicle camera in an embodiment.
  • FIG. 2 is a flowchart showing a procedure of calibrating the vehicle-mounted camera in the embodiment.
  • FIG. 3 is an explanatory diagram showing an example of how an in-vehicle camera captures a subject.
  • FIG. 4 is an explanatory diagram showing an example of an image of an object captured by an in-vehicle camera.
  • FIG. 5 is an explanatory diagram showing an example of measuring the pixel deviation of the entire screen using a large chart in the comparative example.
  • FIG. 6 is an explanatory diagram showing an example of measuring only a specific region using a small chart.
  • FIG. 7A is a diagram showing Equation 1.
  • FIG. 7B is a diagram showing Equations 2, 3, and 4.
  • FIG. 8 is a diagram showing examples of variations in the mounting of an in-vehicle camera and variations in the shape of a windshield.
  • FIG. 9 is a diagram showing an example in which correlated variation parameters cannot be distinguished.
  • FIG. 10 is a diagram for explaining the sensitivities of the variation parameters when the inclination angle of the windshield is 30 to 35 degrees.
  • FIG. 11 is a diagram for explaining the sensitivities of the variation parameters when the inclination angle of the windshield is 35 degrees or more (40 to 70 degrees).
  • The contents of this embodiment will be explained with reference to FIGS. 1 to 6.
  • FIG. 3 is an explanatory diagram showing an example of how the vehicle-mounted camera captures subjects.
  • Reference numeral 31 denotes a vehicle equipped with an in-vehicle camera 30.
  • Reference numerals 32, 33, and 34 denote subjects (three subjects in this example).
  • Detection images DPA and DPB showing these three subjects 32 to 34 are shown in FIG. 4.
  • In FIG. 4, the horizontal direction is denoted H and the vertical direction V.
  • The detection image DPA in FIG. 4 shows the positions of the images of the subjects without the windshield (32A, 33A, 34A) and the positions of the images of the subjects with the windshield (32B, 33B, 34B).
  • The positions of the images without the windshield (32A, 33A, 34A) are ideal positions where there is no refraction by the windshield, that is, no pixel shift.
  • The positions of the images with the windshield (32B, 33B, 34B) are assumed to be the positions of the images projected when there are no variations in the mounting of the vehicle-mounted camera and no variations in the shape and thickness of the windshield.
  • The difference between the positions of the images (32A, 33A, 34A) and the positions of the images (32B, 33B, 34B) is called pixel shift PS1. If the in-vehicle cameras of all cars of the same model were mounted without variation, and if there were no variations in the shape and thickness of the windshield, all cars of the same model could be calibrated by measuring only once per model.
  • The detection image DPB shows the positions of the images of the subjects without the windshield (32A, 33A, 34A) and the positions of the images with the windshield when there are variations in the camera mounting and in the shape and thickness of the windshield (32C, 33C, 34C).
  • The pixel shift that is the difference between the positions of the images (32A, 33A, 34A) and the positions of the images (32C, 33C, 34C) is called pixel shift PS2.
  • The amount of pixel shift PS2 (the length of the arrow) is greater than the amount of pixel shift PS1.
  • A pixel shift therefore also occurs between the positions of the images (32B, 33B, 34B) in the detection image DPA and the positions of the images (32C, 33C, 34C) in the detection image DPB.
  • The horizontal pixel shift PSH is the difference between the horizontal component of the pixel shift PS2 of the image 32C with respect to the image 32A and the horizontal component of the pixel shift PS1 of the image 32B with respect to the image 32A.
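  • A minimal numerical sketch of this kind of pixel-shift measurement, assuming the ideal image positions (without the windshield) and the observed positions of the same subjects are available as pixel coordinates; the coordinate values below are made up for illustration:

```python
import numpy as np

# Ideal positions (no windshield) and observed positions (through the windshield) of the
# same subjects, as (H, V) pixel coordinates; the numbers are illustrative only.
ideal_px    = np.array([[320.0, 240.0], [480.0, 250.0], [160.0, 260.0]])
observed_px = np.array([[323.5, 241.2], [485.1, 252.0], [158.2, 261.5]])

pixel_shift  = observed_px - ideal_px                 # shift vectors (the arrows in FIG. 4)
shift_amount = np.linalg.norm(pixel_shift, axis=1)    # amount of shift (arrow length)
psh          = pixel_shift[:, 0]                      # horizontal (H) component, like PSH
print(pixel_shift, shift_amount, psh)
```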
  • FIG. 5 is an explanatory diagram showing an example of measuring the pixel deviation of the entire screen using a large chart in the comparative example.
  • In the comparative example, a large chart (calibration image) 51 that can cover the entire wide-angle region corresponding to the widened viewing angle of the vehicle-mounted camera 52 is prepared.
  • For example, a chart 51 of 14 m (horizontal width) × 3 m (vertical width) is used.
  • The large chart 51 is photographed through the windshield 53 with the wide-angle camera of the in-vehicle camera 52.
  • In this way, the pixel shift (Δ) of the entire screen of the on-vehicle camera 52 can be obtained.
  • In FIG. 5, the horizontal axis indicates the horizontal angle of view (FOVH) and the vertical axis indicates the vertical angle of view (FOVV); the direction of each arrow indicates the direction of pixel displacement and its length indicates the amount of pixel displacement.
  • The pixel displacement (Δ) over the entire screen is caused by distortion due to variations in the mounting of the in-vehicle camera, distortion due to variations in the shape, inclination, and plate thickness of the windshield, and distortion due to variations in the shape of the wide-angle lens of the in-vehicle camera.
  • FIG. 6 is an explanatory diagram showing an example of a calibration method for measuring only a specific area using a small chart.
  • In FIG. 6, instead of the entire screen, only the pixel deviation of specific regions is measured.
  • Three small charts 61 are used to measure the pixel shifts (Δp1, Δp2, Δp3) in three specific regions (64, 65, 66).
  • The size of each of the three specific regions (64, 65, 66) is less than 1/2 of the full-screen image.
  • More generally, when the plurality of predetermined regions consists of n regions, each predetermined region is 1/n or less of the entire screen image.
  • The size of the small chart 61 can be, for example, from about 1 m × 1 m up to about 2 m × 2 m.
  • The wide-angle camera of the in-vehicle camera 62 photographs the plurality of small charts 61 through the windshield 63, and the pixel shifts (Δp1, Δp2, Δp3) are thereby measured.
  • The in-vehicle camera 62 uses the pixel shifts (Δp1, Δp2, Δp3) of the specific regions 64, 65, 66 to calculate variation parameters δφ (e.g., δφ1, δφ2, δφ3, ...). From the variation parameters (δφ), the vehicle-mounted camera 62 then calculates an estimated pixel deviation (Δ′) that estimates the pixel deviation (Δ) of the entire screen.
  • FIG. 1 is a block diagram showing a configuration example of the in-vehicle camera in the embodiment.
  • FIG. 2 is a flowchart showing the procedure for calibrating the vehicle-mounted camera in the embodiment.
  • The vehicle-mounted camera 62 includes an imaging unit 10, a region selection unit 11, an external input unit 12, a parameter calculation unit 13, a full-screen pixel deviation estimation unit 14 for estimating the pixel deviation over the entire screen, and a calibration unit 15.
  • Here, the full screen means the entire horizontal and vertical angle of view that can be acquired by the camera built into the imaging unit 10.
  • The imaging unit 10 is an imaging circuit that images an object and is connected to the region selection unit 11.
  • The imaging unit 10 is, for example, an imaging device capable of acquiring images, such as a visible camera or an infrared camera having an optical lens and an image sensor, and is composed of a single camera (monocular camera) or multiple cameras (stereo camera).
  • The imaging unit 10 is not limited to these cameras, and various imaging circuits may be applied.
  • As described with reference to FIG. 6, the imaging unit 10 acquires an image of one or a plurality of small charts 61 and outputs the acquired image data to the region selection unit 11. In this image, each pixel indicates a luminance value.
  • The region selection unit 11 is connected to the imaging unit 10, the external input unit 12, and the parameter calculation unit 13.
  • The region selection unit 11 selects a plurality of predetermined regions in the image captured by the imaging unit 10, which are determined in advance according to the shape and inclination of the windshield.
  • The region selection unit 11 receives, from the external input unit 12, design data such as design values of the windshield 63 and region selection data for selecting predetermined regions (one or a plurality of specific regions), and selects the corresponding one or more regions (the predetermined (desired) specific regions).
  • The small charts 61 illustrated in FIG. 6 are arranged so as to correspond to each of the selected one or more specific regions.
  • The region selection unit 11 calculates the pixel deviations (Δp1, Δp2, Δp3) of the specific regions from the images of the small charts 61 in the one or more selected specific regions.
  • The region selection unit 11 outputs the calculated pixel deviations (Δp1, Δp2, Δp3) of the specific regions to the parameter calculation unit 13.
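  • A hedged sketch of what this per-region measurement could look like for one specific region, assuming the chart carries a checkerboard pattern and that the ideal (no-deviation) corner positions for that region are known from the chart geometry and the camera design values; the OpenCV detector and all names and sizes are illustrative assumptions, not the claimed implementation:

```python
import cv2
import numpy as np

def region_pixel_deviation(full_image, region, ideal_corners, pattern_size=(7, 5)):
    """Measure the pixel deviation (Δp) of one specific region from a checkerboard chart.

    full_image    : full-screen grayscale image from the imaging unit
    region        : (x, y, w, h) of the specific region, in pixels
    ideal_corners : (N, 2) ideal corner positions (no windshield, no variation), full-image coords
    """
    x, y, w, h = region
    roi = full_image[y:y + h, x:x + w]
    found, corners = cv2.findChessboardCorners(roi, pattern_size)
    if not found:
        raise RuntimeError("chart not detected in this region")
    corners = corners.reshape(-1, 2) + np.array([x, y], dtype=np.float32)  # back to full-image coords
    return corners - ideal_corners.astype(np.float32)  # Δp: per-corner shift vectors
```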
  • The external input unit 12 is connected to the region selection unit 11, the parameter calculation unit 13, and the full-screen pixel deviation estimation unit 14.
  • The external input unit 12 transmits the region selection data to the region selection unit 11 and passes the US matrix to the parameter calculation unit 13 and the full-screen pixel deviation estimation unit 14.
  • The US matrix is a matrix representing the mathematical relationship (mathematical model) between the orthogonalized variation parameters δψ (δψ1, δψ2, δψ3, ...) and the pixel shifts (Δp1, Δp2, Δp3).
  • The parameter calculation unit 13 is connected to the region selection unit 11 and the full-screen pixel deviation estimation unit 14.
  • The parameter calculation unit 13 receives the pixel shifts (Δp1, Δp2, Δp3) of the specific regions from the region selection unit 11 and calculates the variation parameters δψ (δψ1, δψ2, δψ3, ...).
  • The parameter calculation unit 13 outputs the calculated variation parameters δψ (δψ1, δψ2, δψ3, ...) to the full-screen pixel deviation estimation unit 14.
  • The full-screen pixel deviation estimation unit 14 is connected to the parameter calculation unit 13 and the calibration unit 15.
  • The full-screen pixel deviation estimation unit 14 estimates the pixel deviation of the entire image using images detected from the plurality of predetermined regions in the image captured by the imaging unit 10.
  • The plurality of predetermined regions can include three or more predetermined regions.
  • The full-screen pixel deviation estimation unit 14 receives the values of the variation parameters δψ (δψ1, δψ2, δψ3, ...) from the parameter calculation unit 13 and calculates the estimated pixel deviation (Δ′) of the entire screen. The full-screen pixel deviation estimation unit 14 then outputs the estimated pixel deviation (Δ′) of the entire screen to the calibration unit 15.
  • The calibration unit 15 is connected to the full-screen pixel deviation estimation unit 14.
  • The calibration unit 15 corrects the pixel deviation of the entire screen based on the information from the full-screen pixel deviation estimation unit 14.
  • The calibration unit 15 receives the estimated pixel deviation (Δ′) of the entire screen from the full-screen pixel deviation estimation unit 14, cancels the pixel deviation (Δ) of the entire screen, and calibrates the pixel positions of the entire screen.
  • The full-screen image acquired by the imaging unit 10 is calibrated by the calibration unit 15 and then used as image data for distance measurement.
  • With the calibration unit 15, it is possible to provide an in-vehicle camera having a wide recognition angle of view for the purpose of reducing accidents at intersections.
  • The in-vehicle camera in FIG. 1 can be summarized as follows.
  • The in-vehicle camera 62 includes: an imaging unit 10 that captures an image of an object; a full-screen pixel deviation estimation unit 14 that calculates the pixel deviation of the entire screen based on the image captured by the imaging unit 10; a region selection unit 11 that selects, in the image captured by the imaging unit 10, a plurality of predetermined regions determined in advance according to the shape and inclination of the windshield; and a calibration unit 15 that corrects the pixel deviation of the entire screen based on information from the full-screen pixel deviation estimation unit 14.
  • The full-screen pixel deviation estimation unit 14 estimates the pixel deviation of the entire screen using images detected from the plurality of predetermined regions (specific regions 64, 65, 66) in the image captured by the imaging unit 10.
  • The calibration method for the vehicle-mounted camera 62 described with reference to FIG. 1 can be summarized as follows.
  • The calibration method for the in-vehicle camera 62 includes an imaging step of capturing an image of an object with the imaging unit 10, and an estimating step of calculating, with the full-screen pixel deviation estimation unit 14, the pixel deviation of the entire screen based on the image captured in the imaging step.
  • In the estimating step, the pixel deviation of the entire screen is estimated using images detected from a plurality of predetermined regions in the image captured in the imaging step.
  • Each unit (10 to 15) can be configured as a hardware circuit.
  • In that case, the units of the vehicle-mounted camera 62 can be rephrased as, for example, an imaging circuit 10, a region selection circuit 11, an external input circuit 12, a parameter calculation circuit 13, a full-screen pixel deviation estimation circuit 14, and a calibration circuit 15.
  • Alternatively, each unit (11 to 15) other than the imaging unit 10 can be implemented as a software program.
  • In that case, the vehicle-mounted camera 62 is provided with a data processing device that executes the operations of the respective units (11 to 15).
  • The data processing device includes, for example, a program memory device storing a software program describing the operation of each unit (11 to 15), a central processing unit (CPU) that executes the software program stored in the program memory device, and a temporary storage memory device that temporarily holds intermediate data during computation by the CPU.
  • The program memory device can be composed of a rewritable non-volatile storage device such as a flash memory or a non-volatile storage device such as a read-only memory (ROM).
  • The temporary storage memory device may comprise a volatile memory device such as a static random access memory (SRAM).
  • Multiple central processing units (CPUs) may also be used.
  • FIG. 7A is a diagram showing Equation 1.
  • FIG. 7B is a diagram showing Equations 2, 3, and 4.
  • The mathematical model is represented by (Equation 1) in FIG. 7A, which relates the pixel shift to the variation parameters through the Q matrix (Δ = Q · δφ).
  • Δ is the pixel shift.
  • The Q matrix is a matrix expressing the mathematical relationship between the pixel deviation (Δ) and the variation parameters (δφ).
  • δφ is the variation parameter vector.
  • m is the number of variations.
  • The variations shown in FIG. 8 include X shift, Y shift, and Z shift as variations in the mounting position of the vehicle-mounted camera 62, and pitch rotation, yaw rotation, and roll rotation as variations in the camera posture of the vehicle-mounted camera 62.
  • The Q matrix is calculated by inputting each variation into an optical simulation such as ray tracing and computing the resulting pixel deviation. Since all practical variations are small, the Q matrix can be calculated approximately by dividing the pixel deviation at each pixel by the variation. The mathematical relationship expressed by the Q matrix is obtained by trying each of the variations in turn.
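  • A minimal sketch of this finite-difference construction of the Q matrix, assuming an optical simulation function `simulate_pixel_positions(variations)` (for example a ray tracer through the lens and windshield) is available; that function, the step size, and the shapes are assumptions for illustration:

```python
import numpy as np

def build_q_matrix(simulate_pixel_positions, m, eps=1e-3):
    """Approximate the Q matrix (pixel deviation per unit variation) by finite differences.

    simulate_pixel_positions : callable taking a length-m variation vector (X/Y/Z shift,
                               pitch/yaw/roll rotation, windshield variations, ...) and
                               returning the simulated pixel positions as a flat array.
    m                        : number of variations.
    """
    base = simulate_pixel_positions(np.zeros(m))
    columns = []
    for j in range(m):
        dv = np.zeros(m)
        dv[j] = eps                                # perturb one small variation at a time
        shifted = simulate_pixel_positions(dv)
        columns.append((shifted - base) / eps)     # pixel deviation divided by the variation
    return np.column_stack(columns)                # Q, so that Δ ≈ Q @ δφ
```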
  • In principle, the Q matrix can be used directly to calculate the correct variation parameters from the pixel shift.
  • However, the actual variation parameters are correlated with each other and are not completely independent. As a result, some parameters may not be distinguishable from one another, or incorrect parameters may be calculated.
  • For example, the variation in X shift (horizontal shift in the X direction) and the variation in yaw rotation produce similar pixel shifts in the similarity region SMR shown in FIG. 9.
  • If the variation parameters (δφ) are calculated from the partial pixel shift (Δp) of this similarity region SMR alone, the X shift and the yaw rotation cannot be distinguished, so the estimation of the pixel shift (Δ′) of the entire screen becomes erroneous.
  • Let the orthogonalized variation parameters be δψ (δψ1, δψ2, δψ3, ...). If the variation parameter (δφ) is the actual variation parameter, the orthogonalized variation parameter (δψ) can be called the orthogonal variation parameter.
  • The orthogonalized variation parameters δψ (δψ1, δψ2, δψ3, ...) are mathematically orthogonal to each other.
  • Singular value decomposition (SVD) is one example of a method that can generate the US matrix.
  • (Equation 2) in FIG. 7B shows the equation using the US matrix and the orthogonalized variation parameters (δψ), which relates them to the pixel shifts (ΔP = US · δψ). Note that the calculation method is not limited to singular value decomposition, and various calculation methods may be employed.
  • (Equation 3) in FIG. 7B shows the relational expression between the orthogonal variation parameters (δψ) and the actual variation parameters (δφ).
  • V is a transformation matrix that transforms the real variation parameters (δφ) into the orthogonal variation parameters (δψ), that is, δψ = V · δφ.
  • (Equation 4) in FIG. 7B shows the formula obtained by substituting (Equation 3) into (Equation 2).
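  • Under this reading (ΔP ≈ Qp·δφ for the rows of the Q matrix belonging to the selected regions), a singular value decomposition of Qp gives the US matrix and the orthogonalization in one step; whether the patent applies the SVD in exactly this form is an assumption:

```python
import numpy as np

def orthogonalize(q_partial):
    """Build the US matrix from the partial Q matrix of the selected specific regions.

    q_partial : (k, m) rows of Q for the selected regions, so that dp ≈ q_partial @ d_phi.
    Returns (US, Vt) with dp ≈ US @ d_psi and d_psi = Vt @ d_phi, i.e. Vt plays the role
    of the transformation matrix called V in (Equation 3).
    """
    U, s, Vt = np.linalg.svd(q_partial, full_matrices=False)
    US = U * s          # same as U @ np.diag(s)
    return US, Vt
```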
  • In step 201, power is supplied to the in-vehicle camera 62 and calibration starts.
  • Step 202 is a chart image acquisition step. As shown in FIG. 6, the in-vehicle camera 62 is placed behind the windshield 63, and the imaging unit 10 captures images of the plurality of small charts 61 through the windshield 63 from inside the vehicle, thereby acquiring photographed images of the small charts 61. Each chart 61 has a pattern such as a checkerboard or circles. The imaging unit 10 outputs the acquired captured image to the region selection unit 11.
  • Step 203 is a step of inputting the design values of the windshield 63.
  • The external input unit 12 transmits the input design values of the windshield 63 to the region selection unit 11.
  • Step 204 is a step of calculating the pixel deviation of each specific region. Based on the design values of the windshield 63, a pre-calculated mathematical model can be used to determine where on the image each variation is sensitive. A location with good sensitivity is a location where the pixel deviation is large when the corresponding variation changes. An image region that is sensitive to each variation is selected.
  • The chart 61 is arranged (moved into place and left standing) so as to correspond to each selected image region (for example, the specific regions 64, 65, and 66 in FIG. 6). The chart 61 in each selected image region is then photographed, and the pixel shift is calculated from the image.
  • The region selection unit 11 calculates the pixel deviations ΔP (Δp1, Δp2, Δp3) of the specific regions from the images of the charts 61 in the plurality of selected specific regions.
  • The region selection unit 11 outputs the calculated pixel shifts ΔP (Δp1, Δp2, Δp3) of the specific regions to the parameter calculation unit 13.
  • When three charts 61 are available, it is preferable to arrange them so as to correspond to the selected image regions, photograph the entire screen with the imaging unit 10, and calculate the pixel deviation of each specific region from a single measurement of the full-screen image data. This has the advantage that the pixel deviations of the plurality of specific regions can be calculated in a short time.
  • When only one chart 61 is used, the chart 61 is arranged so as to correspond to one image region (specific region 64) and the entire screen is photographed by the imaging unit 10; the chart 61 is then arranged so as to correspond to the next image region (specific region 65) and the entire screen is photographed again; and finally the chart 61 is arranged so as to correspond to the remaining image region (specific region 66) and the entire screen is photographed once more. In this case, it is preferable to calculate the pixel deviation of each specific region from three measurements of the full-screen image data. Since only one chart is used, the pixel shifts of the plurality of specific regions can be calculated at low cost and in a small space.
  • Step 205 is a step of calculating the variation parameters δψ.
  • The parameter calculation unit 13 calculates the values of the variation parameters δψ (δψ1, δψ2, δψ3, ...) from the pixel deviations ΔP (Δp1, Δp2, Δp3) of the specific regions.
  • A least-squares method or the like can be used as the calculation method. Note that the calculation method is not limited to the least-squares method, and various calculation methods can be applied.
  • The parameter calculation unit 13 receives the pixel shifts ΔP (Δp1, Δp2, Δp3) of the specific regions from the region selection unit 11 and calculates the variation parameters δψ (δψ1, δψ2, δψ3, ...).
  • The parameter calculation unit 13 outputs the calculated variation parameters δψ (δψ1, δψ2, δψ3, ...) to the full-screen pixel deviation estimation unit 14.
  • Step 206 is a step of estimating the pixel deviation of the entire screen. Using a pre-stored Q matrix or an optical simulation such as ray tracing, the pixel deviation (Δ) of the entire screen is estimated from the calculated variation parameters δψ (δψ1, δψ2, δψ3, ...). The full-screen pixel deviation estimation unit 14 receives the values of the variation parameters δψ from the parameter calculation unit 13 and calculates the estimated pixel deviation (Δ′) of the entire screen. The full-screen pixel deviation estimation unit 14 then outputs the estimated pixel deviation (Δ′) of the entire screen to the calibration unit 15.
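  • A hedged end-to-end sketch of steps 205 and 206 under the same reading: the orthogonalized parameters are fitted to the measured region shifts by least squares, converted back to physical variation parameters, and a pre-stored full-screen Q matrix then gives the estimated deviation Δ′; the conversion through the transpose of Vt is an assumption consistent with Equations 2 to 4, not a statement of the exact implementation:

```python
import numpy as np

def estimate_full_screen_deviation(dp_regions, US, Vt, q_full):
    """Steps 205-206 (sketch): region pixel shifts -> variation parameters -> full-screen Δ'.

    dp_regions : (k,) measured pixel shifts of the specific regions, stacked into one vector
    US, Vt     : matrices from the decomposition of the partial Q matrix (Equations 2 and 3)
    q_full     : (n, m) pre-stored Q matrix for every pixel of the full screen (Equation 1)
    """
    d_psi, *_ = np.linalg.lstsq(US, dp_regions, rcond=None)   # least-squares fit (step 205)
    d_phi = Vt.T @ d_psi                                      # back to the physical variations
    return q_full @ d_phi                                     # estimated Δ' (step 206)
```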
  • Step 207 is a calibration step. Based on the estimated pixel deviation (Δ′) of the entire screen, the pixel deviation of the entire screen in the captured image is canceled (corrected) at each pixel point, and calibration is performed.
  • The calibration unit 15 receives the estimated pixel deviation (Δ′) of the entire screen from the full-screen pixel deviation estimation unit 14, cancels the pixel deviation (Δ) of the entire screen, and calibrates the pixel positions of the entire screen.
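  • A minimal sketch of this cancellation, assuming the estimated deviation Δ′ is available as a per-pixel displacement field split into horizontal and vertical components; OpenCV's remap is used here only as one possible way to apply the correction, and the sign convention depends on how Δ′ is defined:

```python
import cv2
import numpy as np

def apply_calibration(image, delta_x, delta_y):
    """Cancel the estimated per-pixel deviation Δ' by resampling the captured image.

    delta_x, delta_y : (H, W) estimated pixel deviation of the entire screen (horizontal
                       and vertical components), defined here as observed minus ideal.
    """
    h, w = image.shape[:2]
    xs, ys = np.meshgrid(np.arange(w, dtype=np.float32), np.arange(h, dtype=np.float32))
    # Each corrected pixel is sampled from the position it was displaced to, which cancels Δ'.
    map_x = xs + delta_x.astype(np.float32)
    map_y = ys + delta_y.astype(np.float32)
    return cv2.remap(image, map_x, map_y, interpolation=cv2.INTER_LINEAR)
```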
  • At step 208, the calibration flow of the vehicle-mounted camera 62 ends.
  • After this, the image captured by the vehicle-mounted camera 62 has been calibrated with respect to the pixel positions over the full screen, and there is no pixel shift (or the pixel shift has been corrected).
  • The image is then used for distance measurement (also called ranging) at intersections.
  • As a result, the in-vehicle camera 62 can accurately measure the distance to an obstacle at an intersection and can reliably provide a function that supports collision avoidance at the intersection.
  • FIG. 10 is a diagram for explaining the sensitivity of the variation parameter for the inclination angle of the windshield (30 degrees to 35 degrees).
  • FIG. 11 is a diagram for explaining the sensitivities of the variation parameters when the tilt angle of the windshield is 35 degrees or more, e.g., 40 to 70 degrees.
  • FIG. 10 shows, when the design value of the tilt angle of the windshield 63 is, for example, 30 to 35 degrees, the positions (camera angles of view) of the three specific regions (64, 65, 66) determined in consideration of the sensitive portions of the three variation parameters (δφ1, δφ2, δφ3).
  • FIG. 11 shows, when the design value of the tilt angle of the windshield 63 is, for example, 40 to 70 degrees, the positions (camera angles of view) of the three specific regions (64, 65, 66) determined in consideration of the sensitive portions of the three variation parameters (δφ1, δφ2, δφ3).
  • The variation parameters (actual variation parameters δφ1, δφ2, δφ3) may instead be the orthogonal variation parameters (δψ1, δψ2, δψ3).
  • Legends (0 to 300) for the sensitivities of the variation parameters (δφ1, δφ2, δφ3) are shown on the right side of each sensitivity diagram.
  • In FIG. 10, the variation parameter δφ1 has high sensitivity in a first upper region of 10 to 20 degrees vertically and -15 to -5 degrees horizontally in the angle of view of the vehicle-mounted camera 62. Therefore, the first specific region 64 is placed in this first upper region of the image. That is, the first specific region 64 is set to the region of the image angle of view of the in-vehicle camera 62 that spans 10 to 20 degrees vertically and -15 to -5 degrees horizontally.
  • The variation parameter δφ2 has high sensitivity in a second upper region of the angle of view of the vehicle-mounted camera 62, which is 10 to 20 degrees vertically and 30 to 40 degrees horizontally.
  • Therefore, the second specific region 65 is placed in this second upper region, different from the first upper region, in the image.
  • That is, the second specific region 65 is set to the region of the image angle of view of the vehicle-mounted camera 62 that spans 10 to 20 degrees vertically and 30 to 40 degrees horizontally.
  • The variation parameter δφ3 has high sensitivity in the range of -20 to -10 degrees vertically and -35 to -25 degrees horizontally in the angle of view of the vehicle-mounted camera 62. Therefore, the third specific region 66 is placed in a first lower region of the image. That is, the third specific region 66 is set to the region of the image angle of view of the in-vehicle camera 62 that spans -20 to -10 degrees vertically and -35 to -25 degrees horizontally.
  • Three charts 61 are arranged so as to correspond to the three specific areas (64, 65, 66) thus determined (see FIG. 6).
  • The pixel displacements in these specific regions (64, 65, 66) can be measured with a chart 61 of size 1 m × 1 m.
  • In FIG. 11, the variation parameter δφ1 has high sensitivity in the upper region of the angle of view of the in-vehicle camera 62, which is 10 to 20 degrees vertically and -15 to -5 degrees horizontally (in this case, the same angle-of-view region as for the variation parameter δφ1 in FIG. 10). Therefore, the first specific region 64 is placed in this upper region of the image. That is, the first specific region 64 is set to the region of the image angle of view of the in-vehicle camera 62 that spans 10 to 20 degrees vertically and -15 to -5 degrees horizontally.
  • The variation parameter δφ2 has high sensitivity in the central region of the angle of view of the in-vehicle camera 62, which is -5 to 5 degrees vertically and 30 to 40 degrees horizontally. Therefore, the second specific region 65 is placed in this central region of the image. That is, the second specific region 65 is set to the region of the image angle of view of the in-vehicle camera 62 that spans -5 to 5 degrees vertically and 30 to 40 degrees horizontally.
  • The variation parameter δφ3 has high sensitivity in the lower region of the angle of view of the vehicle-mounted camera 62, from -20 to -10 degrees vertically and -10 to 0 degrees horizontally. Therefore, the third specific region 66 is placed in this lower region of the image. That is, the third specific region 66 is set to the region of the image angle of view of the in-vehicle camera 62 that spans -20 to -10 degrees vertically and -10 to 0 degrees horizontally.
  • Three charts 61 are arranged so as to correspond to the three specific regions (64, 65, 66) thus determined (see FIG. 6). These specific regions (64, 65, 66) can be measured with a chart 61 of size 1 m × 1 m.
  • The size of the specific regions (64, 65, 66) on the image is smaller than the chart 61, which is the calibration image. That is, the area of the chart 61 is made larger than the size of the specific regions (64, 65, 66) on the image so that the specific regions (64, 65, 66) are included inside the chart 61.
  • In this way, the small charts 61 are used to measure pixel deviations only in the specific regions (64, 65, 66). Each variation parameter is then calculated from the amount of pixel displacement in the specific regions (64, 65, 66), and the amount of pixel displacement of the entire screen is calculated from the calculated variation parameters. This makes it possible to cancel the pixel displacement of the entire screen at each pixel point and to calibrate the pixel positions of the entire screen. Since the selection of the specific regions (64, 65, 66) targets the sensitive portions of the variation parameters, the pixel displacements in the specific regions (64, 65, 66) can be measured with a chart 61 of about 1 m × 1 m.
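  • As a hedged sketch of how sensitive regions like those in FIG. 10 and FIG. 11 could be chosen automatically, assuming per-parameter sensitivity maps over the camera's angle of view are available from the pre-calculated mathematical model; the window size and the scoring rule are illustrative assumptions:

```python
import numpy as np

def pick_specific_regions(sensitivity_maps, window=(10, 10)):
    """For each variation parameter, pick the angle-of-view window with the highest sensitivity.

    sensitivity_maps : list of (V, H) arrays, one per variation parameter, indexed by
                       vertical/horizontal angle of view (for example in 1-degree steps).
    window           : window size in the same angular units (vertical, horizontal).
    Returns a list of (v0, h0, v1, h1) windows, one candidate specific region per parameter.
    """
    wv, wh = window
    regions = []
    for smap in sensitivity_maps:
        best, best_score = None, -np.inf
        for v0 in range(smap.shape[0] - wv + 1):
            for h0 in range(smap.shape[1] - wh + 1):
                score = smap[v0:v0 + wv, h0:h0 + wh].sum()   # total sensitivity in the window
                if score > best_score:
                    best, best_score = (v0, h0, v0 + wv, h0 + wh), score
        regions.append(best)
    return regions
```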

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)
  • Traffic Control Systems (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The invention provides a technique capable of achieving high-accuracy calibration of a vehicle-mounted camera with a widened angle of view using a small chart. A vehicle-mounted camera according to the present invention comprises: an imaging unit that captures an image of an object; an estimating unit that calculates the displacement of all pixels on the basis of the image captured by the imaging unit; a region selection unit that selects, in the image captured by the imaging unit, a plurality of prescribed regions predetermined according to the shape and inclination of a windshield; and a calibration unit that corrects the displacement of all pixels on the basis of information from the estimating unit. The estimating unit estimates the pixel displacement of the entire image using images detected from the plurality of prescribed regions in the image captured by the imaging unit.
PCT/JP2022/004627 2021-07-27 2022-02-07 Caméra montée sur véhicule et procédé d'étalonnage pour caméra montée sur véhicule WO2023007775A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
DE112022001700.7T DE112022001700T5 (de) 2021-07-27 2022-02-07 Fahrzeuginterne kamera und kalibrierungsverfahren der fahrzeuginternen kamera

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2021122576A JP7459023B2 (ja) 2021-07-27 2021-07-27 車載カメラおよび車載カメラのキャリブレーション方法
JP2021-122576 2021-07-27

Publications (1)

Publication Number Publication Date
WO2023007775A1 true WO2023007775A1 (fr) 2023-02-02

Family

ID=85086638

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2022/004627 WO2023007775A1 (fr) 2021-07-27 2022-02-07 Caméra montée sur véhicule et procédé d'étalonnage pour caméra montée sur véhicule

Country Status (3)

Country Link
JP (1) JP7459023B2 (fr)
DE (1) DE112022001700T5 (fr)
WO (1) WO2023007775A1 (fr)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2017062198A (ja) * 2015-09-25 2017-03-30 富士重工業株式会社 幾何歪除去再現装置
CN111798521A (zh) * 2019-04-09 2020-10-20 Oppo广东移动通信有限公司 标定方法、装置、存储介质及电子设备
JP2021009113A (ja) * 2019-07-02 2021-01-28 株式会社リコー 計測用具、校正装置、およびプログラム

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011049733A (ja) 2009-08-26 2011-03-10 Clarion Co Ltd カメラキャリブレーション装置および映像歪み補正装置
JP5834810B2 (ja) 2011-11-17 2015-12-24 株式会社リコー カメラキャリブレーション装置、乗員支援装置、カメラキャリブレーション用歪み係数生成方法

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2017062198A (ja) * 2015-09-25 2017-03-30 富士重工業株式会社 幾何歪除去再現装置
CN111798521A (zh) * 2019-04-09 2020-10-20 Oppo广东移动通信有限公司 标定方法、装置、存储介质及电子设备
JP2021009113A (ja) * 2019-07-02 2021-01-28 株式会社リコー 計測用具、校正装置、およびプログラム

Also Published As

Publication number Publication date
DE112022001700T5 (de) 2024-01-11
JP2023018438A (ja) 2023-02-08
JP7459023B2 (ja) 2024-04-01

Similar Documents

Publication Publication Date Title
CN108292439B (zh) 校准安装至车辆的摄像机的取向的方法和存储介质
CN107103625B (zh) 用于成像系统的方法和设备
JP5745178B2 (ja) 3次元測定方法、装置及びシステム、並びに画像処理装置
US8885049B2 (en) Method and device for determining calibration parameters of a camera
US10641617B2 (en) Calibration device and calibration method
US9369695B2 (en) Method and apparatus for three-dimensional measurement and image processing device
JP4803450B2 (ja) 車載カメラの校正装置及び当該装置を用いた車両の生産方法
JP2012253758A (ja) 車両視覚システムの較正方法および車両視覚システム
US20120033087A1 (en) Calibration target detection apparatus, calibration target detecting method for detecting calibration target, and program for calibration target detection apparatus
CN101416022A (zh) 用于测量反射表面形状的方法和系统
CN110211185B (zh) 在一组候选点内识别校准图案的特征点的方法
EP3505865B1 (fr) Caméra embarquée, procédé de réglage de caméra embarquée, et système de caméra embarquée
KR101926258B1 (ko) Avm 시스템의 자동 캘리브레이션 방법
US20180058850A1 (en) Method and apparatus for single camera optical measurements
WO2023007775A1 (fr) Caméra montée sur véhicule et procédé d'étalonnage pour caméra montée sur véhicule
JP2016225719A (ja) ステレオカメラ装置、車両、および較正方法
US8102516B2 (en) Test method for compound-eye distance measuring apparatus, test apparatus, and chart used for the same
US20090087013A1 (en) Ray mapping
RU2592711C1 (ru) Способ и система калибровки комплекса измерения скорости транспортных средств
CN111563936A (zh) 一种相机外部参数自动标定方法及行车记录仪
JP2013239905A (ja) 車載カメラのキャリブレーション装置
JP2013005032A (ja) 車載カメラ姿勢検出装置
JP2018125706A (ja) 撮像装置
CN114299164A (zh) 摄像头标定方法、存储介质和电子设备
KR100831452B1 (ko) Dlt 알고리즘을 이용한 영상왜곡 보정방법

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 112022001700

Country of ref document: DE