WO2016084308A1 - Image transformation apparatus and image transformation method - Google Patents

Image transformation apparatus and image transformation method

Info

Publication number
WO2016084308A1
Authority
WO
WIPO (PCT)
Prior art keywords
vehicle
image
angle
bird
feature points
Prior art date
Application number
PCT/JP2015/005512
Other languages
French (fr)
Japanese (ja)
Inventor
田中 仁
森下 洋司
宗昭 松本
Original Assignee
株式会社デンソー
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 株式会社デンソー
Priority to DE112015005317.4T (DE112015005317B4)
Publication of WO2016084308A1


Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60RVEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R1/00Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/20Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/22Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle
    • B60R1/23Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle with a predetermined field of view
    • B60R1/27Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle with a predetermined field of view providing all-round vision, e.g. using omnidirectional cameras
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/183Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a single remote source
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60RVEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R2300/00Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
    • B60R2300/30Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of image processing
    • B60R2300/302Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of image processing combining image information with GPS information or vehicle data, e.g. vehicle speed, gyro, steering angle data
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60RVEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R2300/00Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
    • B60R2300/60Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by monitoring and displaying vehicle exterior scenes from a transformed perspective
    • B60R2300/607Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by monitoring and displaying vehicle exterior scenes from a transformed perspective from a bird's eye viewpoint

Definitions

  • This disclosure is applied to a vehicle including an in-vehicle camera, and relates to a technique for adjusting a setting value used when a captured image acquired from the in-vehicle camera is converted into a bird's-eye view image.
  • Various techniques that allow the driver to check the situation around the vehicle by photographing the surroundings with an in-vehicle camera and displaying the captured image on an in-vehicle monitor have been put into practical use. It is also common practice not to display the captured image on the in-vehicle monitor as it is, but to convert it into a bird's-eye view image first.
  • Here, the bird's-eye view image refers to an image obtained by converting the photographed image as if the scenery in it had been photographed looking down from above the vehicle. If the bird's-eye view image is correctly converted, the positional relationship between other vehicles, pedestrians, and so on and the host vehicle is displayed as it is in reality, so the situation around the vehicle can be grasped more easily and accurately. To convert the captured image into an accurate bird's-eye view image, the position and angle of the in-vehicle camera must be known accurately.
  • the in-vehicle camera is attached to the vehicle at the designed position and angle, and the position and angle of the in-vehicle camera are individually measured before the vehicle is shipped from the factory in order to reduce the influence of installation errors.
  • the position and angle of the in-vehicle camera can be calculated as follows. First, a plurality of feature points are extracted from a photographed image by the in-vehicle camera, and the coordinates on the photographed image where each feature point is located are associated with the coordinates in the photographed real space. Then, the positions of the respective feature points associated with each other are substituted into a relational expression using the position and angle of the in-vehicle camera as variables. Thereafter, the position and angle of the in-vehicle camera can be calculated by solving the obtained relational expression.
  • However, the in-vehicle camera fixed to the vehicle may become displaced in position and angle after shipment from the factory, for example because a fastening part loosens.
  • In such a case, the position and angle of the in-vehicle camera differ from those at the time of factory shipment. Therefore, a technique has been proposed that makes it possible to extract feature points from an image taken while the vehicle is running and to recalculate the position and angle of the in-vehicle camera (Patent Document 1).
  • However, even with such a technique, sufficient accuracy is not always obtained. One of the objects of the present disclosure is therefore to provide a technique that can accurately calculate the position and angle of the in-vehicle camera and convert the captured image into an accurate bird's-eye view image without bringing the vehicle into a maintenance shop.
  • According to one aspect of the present disclosure, an image conversion apparatus is provided that acquires a captured image from a vehicle-mounted camera that captures the surroundings of the vehicle, converts the captured image into a bird's-eye image as if it had been captured from a viewpoint above the vehicle, and displays it on an in-vehicle monitor. The apparatus comprises: an image conversion unit that converts the captured image into the bird's-eye image according to the position and angle of the in-vehicle camera; a feature point extraction unit that extracts a plurality of feature points from the bird's-eye image or the captured image; a calculation determination unit that determines whether to calculate the position and angle of the in-vehicle camera based on the distribution of the plurality of feature points extracted from the bird's-eye image or the captured image; and a calculation unit that calculates the position and angle of the in-vehicle camera based on the plurality of feature points when the calculation determination unit determines that they are to be calculated.
  • According to another aspect of the present disclosure, an image conversion method is provided that is applied to a vehicle equipped with an in-vehicle camera, acquires a captured image from the in-vehicle camera that captures the periphery of the vehicle, converts the captured image into a bird's-eye image as if it had been captured from a viewpoint above the vehicle, and then displays it on an in-vehicle monitor. The method comprises steps of converting the captured image into the bird's-eye image according to the position and angle of the in-vehicle camera, extracting a plurality of feature points from the bird's-eye image or the captured image, determining whether to calculate the position and angle of the in-vehicle camera based on the distribution of the extracted feature points, and calculating the position and angle of the in-vehicle camera based on the plurality of feature points when it is determined that they are to be calculated.
  • For example, in a maintenance shop the feature points can be arranged artificially, so the position and angle can be calculated using feature points whose distribution is suitable for calculating the position and angle of the in-vehicle camera.
  • In contrast, while the vehicle is running, feature points are extracted from whatever scenery happens to be in the shooting range at that moment, so feature points with all kinds of distributions are obtained. Therefore, when the vehicle is traveling, it is not always possible to extract feature points suitable for calculating the position and angle of the in-vehicle camera.
  • If the position and angle of the in-vehicle camera are calculated only when the distribution of the extracted feature points is suitable for that calculation, the position and angle can be calculated with high accuracy even while the vehicle is running. As a result, the captured image can be accurately converted into a bird's-eye view image.
  • FIG. 1 is an explanatory view showing a vehicle equipped with the image conversion apparatus of this embodiment.
  • FIG. 2 is a schematic explanatory diagram of the principle of converting a captured image into a bird's-eye view image.
  • FIG. 3 is an explanatory diagram showing a state in which a captured image is converted into a bird's-eye image with reference to a conversion table.
  • FIG. 4 is a block diagram showing the internal structure of the image conversion apparatus.
  • FIG. 5 is a flowchart of the first half of the conversion table update process executed by the image conversion apparatus.
  • FIG. 6 is a flowchart of the latter half of the conversion table update process executed by the image conversion apparatus.
  • FIG. 7 is a diagram showing a specific example of the environmental scores given for the score sensors.
  • FIG. 8 is an explanatory diagram showing how feature points are extracted from each of a captured image and a bird's-eye view image.
  • FIG. 9 is an explanatory diagram showing how to check the distribution of feature points.
  • FIG. 10 is a diagram illustrating a bird's-eye view image when the vehicle is tilted in the pitch direction.
  • FIG. 11 is an example of a bird's-eye view image when the vehicle is tilted in the roll direction.
  • FIG. 12 is a diagram illustrating a bird's-eye view image when the vehicle sinks downward.
  • FIG. 13 is a diagram exemplifying selection of which variable of pitch angle, roll angle, and height of the in-vehicle camera is calculated according to the distribution of feature points.
  • FIG. 1 shows a rough structure of a vehicle 1 on which an image conversion device 10 is mounted.
  • the vehicle 1 includes an in-vehicle camera 2 attached to the front portion of the vehicle 1 and an in-vehicle monitor 3 that can be viewed from the driver's seat, in addition to the image conversion device 10.
  • In addition, the vehicle 1 includes a vehicle speed sensor 41 that acquires the traveling speed, a steering sensor 42 that acquires the steering angle, height sensors 43a to 43d that are arranged at the four corners of the vehicle 1 and detect vertical displacement, a solar radiation sensor 44 that detects the intensity of solar radiation, and a rain sensor 45 that detects the amount of raindrops.
  • the in-vehicle camera 2 is a camera with a fish-eye lens and a wide angle of view, and captures a scene including the road surface around the front of the vehicle 1 in the captured image.
  • the image conversion device 10 converts the captured image acquired from the in-vehicle camera 2 into a bird's-eye image and displays the bird's-eye image on the in-vehicle monitor 3. Since the photographed image shows the road surface, the bird's-eye view image obtained by converting the photographed image is an image as if the photograph was taken by looking down the road surface in front of the vehicle 1 in the photographed image from above.
  • an outline of conversion from a captured image to a bird's-eye view image will be described.
  • FIG. 2 schematically shows the principle of converting a captured image into a bird's-eye view image.
  • As shown in FIG. 2A, the captured image 22 corresponds to the road surface 21 in the imaging area of the in-vehicle camera 2 projected onto an image plane that intersects the line of sight of the in-vehicle camera 2.
  • The road surface plane 23 is a plane at the same position as the road surface 21 in FIG. 2A and may be thought of as a screen. If the image projected onto the road surface plane 23 is photographed by a virtual camera 24 arranged so as to look down on the road surface from directly above, a bird's-eye view image 25 is obtained, as if the road surface in front of the in-vehicle camera 2 had been photographed from directly above.
  • The position and shape of the trapezoidal area of the road surface 21 shown as the imaging region in FIG. 2A are determined according to the position and angle of the in-vehicle camera 2. Therefore, to convert the captured image into a bird's-eye view image, the position and angle of the in-vehicle camera 2 must remain the same between the time the image is actually captured and the time the captured image is projected onto the virtual road surface plane 23 (the screen).
  • the image conversion apparatus 10 prepares in advance a conversion table in which the projection relationship as described in FIG. 2 is calculated under a certain position and angle of the in-vehicle camera 2.
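  • For illustration, the projection relationship of FIG. 2 could be turned into such a table along the lines of the following Python sketch, which projects the ground point behind each bird's-eye pixel into the captured image through a pinhole model. The intrinsic matrix K, the rotation/translation convention, the metre-per-pixel scale, and the omission of fish-eye lens distortion are assumptions of the sketch, not details given in the patent.

```python
import numpy as np

def build_conversion_table(K, R, t, be_shape, m_per_px, origin_xy):
    """Build a lookup table mapping each bird's-eye pixel to a pixel of the
    captured image for one assumed camera position and angle.

    K         : 3x3 intrinsic matrix of the in-vehicle camera (assumed known)
    R, t      : rotation / translation from road frame to camera frame,
                i.e. p_cam = R @ p_road + t
    be_shape  : (height, width) of the bird's-eye image in pixels
    m_per_px  : size of one bird's-eye pixel on the road, in metres
    origin_xy : road coordinates of bird's-eye pixel (0, 0)
    Returns an array of shape (H, W, 2) with (row, col) of the captured image
    for every bird's-eye pixel.
    """
    h_be, w_be = be_shape
    vs, us = np.mgrid[0:h_be, 0:w_be]                 # bird's-eye pixel grid
    ground = np.stack([origin_xy[0] + us * m_per_px,   # X on the road plane
                       origin_xy[1] + vs * m_per_px,   # Y on the road plane
                       np.zeros(be_shape)], axis=-1)
    cam = ground @ R.T + t                             # road point in camera frame
    img = cam @ K.T                                    # pinhole projection
    cols = img[..., 0] / img[..., 2]
    rows = img[..., 1] / img[..., 2]
    return np.stack([rows, cols], axis=-1)
```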
  • FIG. 3 shows a state in which a captured image is converted to a bird's-eye image with reference to the conversion table.
  • the conversion table is data representing the correspondence between the pixel position of the bird's-eye view image and the pixel position of the captured image.
  • For example, the pixel at coordinates (Xe, Yf) of the bird's-eye image in FIG. 3B is associated with the pixel at coordinates (Ag, Bh) of the photographed image in FIG. 3A.
  • Accordingly, by applying the image data (luminance, saturation, etc.) of the pixel at coordinates (Ag, Bh) of the captured image to the pixel at coordinates (Xe, Yf) of the bird's-eye view image, and doing the same for every entry in the conversion table, the captured image can be converted into the bird's-eye view image.
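  • The table lookup of FIG. 3 is then a per-pixel copy. A minimal sketch, assuming the (row, col) table layout of the sketch above and nearest-pixel rounding:

```python
import numpy as np

def apply_conversion_table(captured, table):
    """Convert a captured image into a bird's-eye image by copying, for every
    bird's-eye pixel, the image data (luminance, colour, ...) of the
    captured-image pixel associated with it in the conversion table."""
    rows = np.clip(np.rint(table[..., 0]).astype(int), 0, captured.shape[0] - 1)
    cols = np.clip(np.rint(table[..., 1]).astype(int), 0, captured.shape[1] - 1)
    return captured[rows, cols]
```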
  • In addition to the function of converting a captured image into a bird's-eye image, the image conversion apparatus 10 has a function of recalculating the position and angle of the in-vehicle camera 2 and updating the conversion table according to the new position and angle.
  • FIG. 4 shows the internal structure of the image conversion apparatus 10.
  • the image conversion apparatus 10 includes a captured image acquisition unit 11, an image conversion unit 12, and a display unit 13.
  • the captured image acquisition unit 11 acquires a captured image from the in-vehicle camera 2.
  • the image conversion unit 12 converts the captured image into a bird's-eye view image with reference to the conversion table as described above.
  • the display unit 13 displays the bird's-eye view image on the in-vehicle monitor 3.
  • To calculate the position and angle of the in-vehicle camera 2 and update the conversion table, the image conversion apparatus 10 further includes a score evaluation unit 14, a feature point extraction unit 15, a calculation determination unit 16, a calculation unit 17, a fluctuation amount acquisition unit 18, and a conversion table update unit 19.
  • the score evaluation unit 14 acquires detection information detected by the vehicle speed sensor 41, the steering sensor 42, the height sensors 43a to 43d, the solar radiation sensor 44, and the rain sensor 45 provided in the vehicle 1.
  • these various sensors are collectively referred to as a score sensor 4.
  • The score evaluation unit 14 gives a score according to the detection information acquired from each of the score sensors 4 and evaluates the scores comprehensively.
  • the score evaluation will be described later with a specific example.
  • In outline, the score evaluation uses information on the environment around the vehicle 1 and on the state of the vehicle 1 itself to evaluate whether the situation is suitable for extracting feature points.
  • A feature point is a point that can be distinguished from other points because it has a distinctive feature in the photographed scenery, and whose position on the image coordinates can be identified in the captured image or in the bird's-eye view image of that scenery.
  • the feature points are extracted in order to calculate the position and angle of the in-vehicle camera 2.
  • score evaluation unit 14 corresponds to the extraction determination unit because it determines whether or not to extract feature points in this way.
  • the feature point extraction unit 15 extracts a plurality of feature points from the photographed image or the bird's-eye view image in order to calculate the position and angle of the in-vehicle camera 2.
  • the calculation determination unit 16 confirms the distribution status of the plurality of feature points, and determines whether or not to calculate the position and angle of the in-vehicle camera 2 based on the information indicating the distribution width obtained from the confirmation.
  • the calculation unit 17 calculates the position and angle of the in-vehicle camera 2 based on the extracted feature points.
  • For this calculation, the consistency between the positional relationship of the plurality of feature points extracted from the captured image or the bird's-eye view image and the positional relationship of those feature points in the actual captured scene is used. Since this method of calculating the position and angle of the vehicle-mounted camera 2 is well known, a detailed description is omitted.
  • the fluctuation amount obtaining unit 18 obtains the fluctuation amount by comparing the position and angle after the calculation unit 17 newly calculates with the position and angle before the new calculation.
  • the conversion table update unit 19 updates the conversion table based on the newly calculated position and angle of the in-vehicle camera 2 when the fluctuation amount acquired by the fluctuation amount acquisition unit 18 is equal to or less than a predetermined fluctuation threshold.
  • the image conversion unit 12 converts the captured image into a bird's-eye image with reference to the new conversion table.
  • These nine units, from the captured image acquisition unit 11 to the conversion table update unit 19, are concepts obtained by classifying the inside of the image conversion apparatus 10 from the viewpoint of function; they do not mean that the image conversion apparatus 10 is physically divided into nine parts. Therefore, these units can be realized as a computer program executed by a CPU, as an electronic circuit including an LSI and a memory, or as a combination of these.
  • the conversion table update process executed by providing these nine units will be described in detail.
  • Bird's-eye view image generation processing: FIGS. 5 and 6 show flowcharts of the conversion table update process executed by the image conversion apparatus 10 of the present embodiment.
  • This conversion table update process is a process for updating a conversion table used for converting a captured image into a bird's-eye view image, and is executed separately from the conversion process of the bird's-eye view image to be displayed on the in-vehicle monitor 3.
  • The bird's-eye image conversion process is repeatedly executed at a cycle corresponding to the shooting cycle of the in-vehicle camera 2 (for example, 30 Hz), whereas the conversion table update process described here may be executed at the same cycle as the bird's-eye image conversion process or at an appropriate longer interval (for example, every 10 minutes).
  • the environmental score is an index for scoring detection information acquired from various score sensors 4 to evaluate the surrounding environment of the vehicle 1 and the state of the vehicle 1 itself.
  • FIG. 7 shows a specific example of the environmental score.
  • An environmental score is assigned to the detection information obtained from each of the vehicle speed sensor 41, the steering sensor 42, the height sensors 43a to 43d, the solar radiation sensor 44, and the rain sensor 45 that constitute the score sensor 4. Since the detection information handled here is numerical, a score is given according to the magnitude of those values.
  • the environmental score given for each piece of detection information of the score sensor 4 will be referred to with the score symbols S1 to S5 in order from the vehicle speed sensor 41 to the rain sensor 45.
  • FIG. 7B shows an example of giving the environmental score S1.
  • the reason why the environmental score S1 is lowered as the vehicle speed increases is that subject blur tends to increase in the captured image, and the feature point extraction accuracy becomes unstable.
  • environmental scores are individually assigned to the score sensors 4 other than the vehicle speed sensor 41, that is, the steering sensor 42, the height sensors 43a to 43d, the solar radiation sensor 44, and the rain sensor 45.
  • the steering sensor 42 detects the amount of displacement per unit time of the steering angle.
  • the environmental score S2 is lowered as the displacement amount increases. This is because when the amount of displacement of the steering angle is large, the vehicle 1 tilts due to a change in load, so that it is not suitable for extracting feature points.
  • the height sensors 43a to 43d detect the amount of displacement per unit time of the inclination of the vehicle 1 by detecting the amount of displacement in the vertical direction at the front, rear, left and right of the vehicle 1, respectively. Considering the same as S2 because the vehicle 1 is inclined, the environmental score S3 for the height sensors 43a to 43d is set to a lower score as the displacement amount is larger.
  • the environmental score S4 assigned to the amount of solar radiation detected by the solar radiation sensor 44 is set to a lower score as the amount of solar radiation is smaller. This is because, as the amount of solar radiation is smaller, the photographic scene is darker and the difference in luminance value of the photographed image is likely to be small, so the feature point extraction accuracy is considered to be low.
  • the environmental score S5 assigned to the rainfall detected by the rain sensor 45 is set to a lower score as the rainfall increases. This is because there is a high possibility that the photographed image becomes unclear as the amount of rain increases, so that the feature point extraction accuracy is considered to be low.
  • the total environmental score S is the sum of S1 to S5, but it is sufficient if information detected by each of the score sensors 4 can be taken into consideration, and other calculation methods may be used. For example, when it is determined that the influence of the vehicle speed is large among various types of detection information, the environmental score S1 may be evaluated with priority.
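  • As an illustration only, the total score S could be computed along the following lines; the sensor names, the per-sensor scoring functions, and the optional weighting are assumptions, since the text only requires that each sensor's detection information be taken into account.

```python
def total_environment_score(readings, score_funcs, weights=None):
    """Sum the sub-scores S1..S5 obtained from numeric sensor readings.

    readings    : dict mapping sensor name -> detected numeric value
    score_funcs : dict mapping sensor name -> function(value) -> sub-score
    weights     : optional dict used to prioritise some sensors (e.g. speed)
    """
    weights = weights or {}
    return sum(weights.get(name, 1.0) * score_funcs[name](value)
               for name, value in readings.items())

# Hypothetical shape of S1: lower score for higher vehicle speed (cf. FIG. 7B).
score_funcs = {"vehicle_speed": lambda v: max(0.0, 10.0 - 0.1 * v)}
S = total_environment_score({"vehicle_speed": 42.0}, score_funcs)
```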
  • the score sensor 4 may be other sensors in addition to or in place of the various sensors described above.
  • the environmental score can be given by detecting the inclination of the vehicle 1 from the gyro sensor in the same way as in S3.
  • the environmental score may be calculated by predicting the travel speed of the vehicle 1 and the inclination of the vehicle 1 from the traffic information and road information acquired from the car navigation system, and taking these information into consideration.
  • In short, the environmental score is an index for evaluating the surrounding environment in which the vehicle 1 is placed, such as differences in weather and ambient brightness, and the state of the vehicle 1 itself, such as changes in posture and vibration.
  • If the environmental score is high, it can be considered that the influence of disturbances caused by the surrounding environment and the vehicle state is small and that a clear captured image is likely to be obtained.
  • the fact that a clear captured image is obtained means that the situation is suitable for extracting feature points.
  • It is then determined whether the environmental score is equal to or greater than a predetermined score threshold (S102).
  • the object to be determined here is the above-mentioned total environmental score S, and this score threshold may be arbitrarily determined.
  • In this way, the surrounding environment of the vehicle 1 and the state of the vehicle 1 itself can be comprehensively evaluated from the detection information acquired from the various score sensors 4, and it can be determined whether the situation is suitable for feature point extraction.
  • The environmental score evaluation method is not limited to this. For example, it may be determined that the situation is not suitable for feature point extraction when a certain number of the individual environmental scores S1 to S5 do not satisfy a predetermined standard.
  • a captured image is acquired (S103), the captured image is converted into a bird's-eye image (S104), and feature points are extracted (S105).
  • FIG. 8 shows a state in which a plurality of feature points are extracted from each of the photographed image and the bird's-eye view image.
  • a feature point is a point that can be distinguished from other points by having a distinctive feature in the shooting scenery, and the position on the image coordinate in the shooting image or the bird's-eye view image showing the shooting scenery. It is a point that can be identified.
  • A typical feature point is a corner where straight lines intersect, but feature points can also be extracted from the boundaries of lines. For example, for road markings such as lane dividing lines, pedestrian crossings, and stop lines, feature points can be extracted by exploiting the fact that the difference in luminance value becomes large at the boundary between the marking and the asphalt.
  • the black circle points represent the locations where feature points are extracted.
  • the feature points are extracted in order to calculate the position and angle of the in-vehicle camera 2.
  • the position and angle of the in-vehicle camera 2 are calculated using the consistency between the positional relationship between the plurality of feature points extracted on the image and the positional relationship between the plurality of feature points in the actual shooting scene.
  • feature points may be extracted from an object whose shape in an actual shooting scene is known. In the example shown in FIG. 8, feature points are extracted from the lane breaks on the left and right sides of the vehicle 1 and the characters “stop” on the road marking.
  • It is known that the actual lane dividing lines on the left and right are parallel to each other, and that the characters “stop” on the actual road marking contain a plurality of corners where strokes cross at right angles.
  • the shape and dimensions of road markings suitable for feature point extraction may be stored in a database in advance, and the feature point extraction unit 15 may simplify feature point extraction by referring to the database.
  • Although feature points can be extracted from the photographed image as shown in FIG. 8A, in the conversion table update process of this embodiment the feature points are extracted from the bird's-eye view image as shown in FIG. 8B. The difference between extracting feature points from the captured image of FIG. 8A and extracting them from the bird's-eye image of FIG. 8B is therefore described below.
  • the photographed image in FIG. 8A is an image photographed while the vehicle 1 is traveling straight in a straight lane.
  • Although the actual lane dividing lines are straight lines parallel to each other along the lane, the left and right lane dividing lines shown in the photographed image of FIG. 8A do not appear parallel. Similarly, in the letters “stop” on the road marking, parts that are actually parallel do not appear parallel, and parts that actually intersect at right angles do not appear at right angles.
  • In contrast, in the bird's-eye view image of FIG. 8B, the left and right lane dividing lines are straight lines parallel to each other, just as in the real world, and in the characters “stop” on the road marking the parallel parts appear parallel and the right-angle crossings appear at right angles, just as in the real world. For this reason, when feature points are extracted from the bird's-eye view image, the position and angle of the in-vehicle camera 2 can be calculated more easily and accurately.
  • In this way, feature points are extracted from the bird's-eye view image (S105). From the viewpoint of reducing the processing burden, objects from which feature points are to be extracted, such as lane dividing lines and various road markings, may be determined in advance, and only those objects in the scenery shown in the photographed image may be converted into the bird's-eye image.
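  • The patent does not name a particular detector; as one hedged possibility, corner-like feature points of the kind described above (corners and high-contrast boundaries of road markings) could be extracted from the bird's-eye image with a standard corner detector, for example OpenCV's goodFeaturesToTrack. The detector choice and its parameters are assumptions of this sketch.

```python
import cv2
import numpy as np

def extract_feature_points(birdseye_gray, max_points=200):
    """Extract candidate feature points from a grayscale bird's-eye image."""
    corners = cv2.goodFeaturesToTrack(birdseye_gray, maxCorners=max_points,
                                      qualityLevel=0.05, minDistance=10)
    if corners is None:
        return np.empty((0, 2))
    return corners.reshape(-1, 2)   # (x, y) image coordinates of the black circles
```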
  • the distribution of the feature points is confirmed (S106), and based on the distribution of the feature points, it is determined whether or not to calculate the position and angle of the in-vehicle camera 2 (S107).
  • This determination is a determination as to whether or not the distribution of feature points is a distribution suitable for calculating the position and angle of the in-vehicle camera 2 with high accuracy, and indicates, for example, the width of the distribution of feature points. Judgment can be made based on information.
  • the information indicating the distribution width here is information indicating the size of the region in which the feature points are distributed with respect to the size of the image region. As described below, the suitability of the distribution may be comprehensively determined in consideration of the position of the distribution area of the feature points together with the width of the distribution and the degree of dispersion of the feature points in the distribution area.
  • FIG. 9 shows a bird's-eye view when the feature point distribution is good and when the feature point distribution is not good.
  • In FIG. 9, the photographed scenery is omitted, and the positions where feature points were extracted are represented by black circles.
  • the extracted feature points are viewed as a whole, and the feature points located on the outer periphery are connected as vertices, and the region inside the convex polygon formed thereby is used as the feature point distribution region (shaded portion) .
  • When the area of the distribution region is large and its position is close to the center of the image, as shown in FIG. 9A, the distribution of the feature points is determined to be good; when the area of the distribution region is small or the region is biased toward a part of the image, as shown in FIG. 9B, the distribution of the feature points is determined to be poor.
  • the suitability of the distribution of the feature points may be determined simply by examining the size and position of the horizontal and vertical widths of the region in which the feature points are distributed, instead of the region indicated by the shaded area.
  • In the example of FIG. 9A, the horizontal width of the feature point distribution area is Ua and the vertical width is Va. These widths are sufficiently large relative to the image size, and the midpoints of Ua and Va lie close to the center of the image, so it can be determined that the feature point distribution is good.
  • In the example shown in FIG. 9C, the horizontal width Uc and the vertical width Vc of the region where the feature points are distributed are similar in size and position to those in FIG. 9A, yet the distribution itself is biased. Therefore, not only the width (range) over which the feature points are distributed but also their degree of dispersion should be considered.
  • For example, feature points that are far from the average position of the distribution are treated as outliers and are not used to form the feature point distribution region; the region formed by the remaining feature points (indicated by the double diagonal hatching) is taken as the feature point distribution region. As a result, in the example shown in FIG. 9C, the distribution of feature points is determined to be poor.
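  • A sketch of this distribution check (S106-S107), following the idea of FIG. 9: discard outliers far from the mean position, then require a wide and roughly centred spread. The thresholds and the exact outlier rule are assumptions.

```python
import numpy as np

def distribution_is_good(points, img_w, img_h, min_coverage=0.5,
                         max_center_offset=0.25, outlier_sigma=2.0):
    """Return True if the feature points are spread widely enough (and near
    enough to the image centre) to justify recalculating the camera pose."""
    pts = np.asarray(points, dtype=float)
    if len(pts) < 3:
        return False
    dist = np.linalg.norm(pts - pts.mean(axis=0), axis=1)
    keep = pts[dist <= dist.mean() + outlier_sigma * dist.std()]   # drop outliers

    width = np.ptp(keep[:, 0])     # horizontal extent (Ua / Uc in FIG. 9)
    height = np.ptp(keep[:, 1])    # vertical extent (Va / Vc in FIG. 9)
    cx, cy = keep.mean(axis=0)
    wide_enough = width >= min_coverage * img_w and height >= min_coverage * img_h
    centred = (abs(cx - img_w / 2) <= max_center_offset * img_w and
               abs(cy - img_h / 2) <= max_center_offset * img_h)
    return wide_enough and centred
```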
  • The calculation of the position and angle of the in-vehicle camera 2 uses the consistency between the positional relationship of the plurality of feature points extracted from the bird's-eye image and the positional relationship of those feature points in the actual shooting scene. Therefore, the position and angle of the in-vehicle camera 2 can be calculated even when the distribution of feature points is poor, as in FIGS. 9B and 9C. However, in areas where no feature points exist, this consistency cannot be confirmed, so it is difficult to maintain the accuracy of the calculated position and angle of the in-vehicle camera 2.
  • the calculated fluctuation amount of the position and angle is confirmed (S109 in FIG. 6). What is confirmed here is the amount of variation between the newly calculated position and angle of the in-vehicle camera 2 and the position and angle of the in-vehicle camera 2 used for updating the conversion table currently used. If the fluctuation amount is equal to or less than the fluctuation threshold (S110: yes), the conversion table is updated (S111), and the conversion table update process is terminated. On the other hand, if the fluctuation amount is larger than the fluctuation threshold (S110: no), the process ends without updating the conversion table.
  • If the calculation is performed under poor conditions, the calculated position and angle of the in-vehicle camera 2 may show extreme values. Therefore, when the calculated fluctuation amount of the position and angle of the in-vehicle camera 2 is larger than the predetermined fluctuation threshold, the conversion table is not updated; the table is updated only when a change in the position and angle of the in-vehicle camera 2 is considered to have actually occurred, so that a conversion table based on accurate values is retained. Further, as in the case of judging the distribution of feature points, the position and angle of the in-vehicle camera 2 are not calculated when good accuracy cannot be expected, which avoids placing an unnecessary processing load on the CPU.
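  • Steps S109-S111 could be sketched as follows; the pose representation, the distance metric used as the fluctuation amount, and the function names are assumptions.

```python
import numpy as np

def maybe_update_table(new_pose, current_pose, build_table, fluctuation_threshold):
    """Rebuild the conversion table only if the newly calculated camera pose
    stays close to the pose behind the table currently in use."""
    fluctuation = np.max(np.abs(np.asarray(new_pose) - np.asarray(current_pose)))
    if fluctuation <= fluctuation_threshold:     # S110: yes
        return build_table(new_pose)             # S111: update the conversion table
    return None                                  # S110: no -> keep the current table
```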
  • As described above, the evaluation of the environmental score (S102) improves the accuracy of the extracted feature points, and checking the distribution of the feature points (S107) improves the accuracy of the calculated position and angle of the in-vehicle camera 2. The position and angle of the in-vehicle camera 2 are thus calculated with high accuracy, and the conversion table is updated only after the amount of variation in the calculated position and angle has been confirmed (S110). Since the image conversion apparatus 10 refers to the highly accurate and reliable conversion table obtained in this way, it can convert the captured image into an accurate bird's-eye view image.
  • The position and angle of the in-vehicle camera 2 may need to be recalculated for two reasons: because the mounting of the camera itself has become displaced, or because the posture of the vehicle 1 has changed with respect to the road surface. An inclination of the vehicle 1 with respect to the road surface, as in the latter case, is caused by a change in the loaded weight, by acceleration or deceleration of the vehicle 1, or the like, and is a temporary change.
  • In the embodiment described above, the two cases were not particularly distinguished.
  • In the following modified example, the differences from the embodiment when calculating the position and angle of the in-vehicle camera 2 for the latter case, that is, a temporary change in the posture of the vehicle 1 with respect to the road surface, are described.
  • The position of the in-vehicle camera 2 has components in three directions: vertical (the traveling direction of the vehicle 1), lateral (horizontal with respect to the traveling direction of the vehicle 1), and height; its angle has three components: pitch, roll, and yaw. Therefore, in principle, when calculating the position and angle of the in-vehicle camera 2, the components in three directions and the angles in three directions (six values in total) must be calculated for each in-vehicle camera 2. However, when the reason the position and angle need to be recalculated is the latter case, that is, a change in the posture of the vehicle 1 with respect to the road surface, it is sufficient to calculate three values: the height, roll angle, and pitch angle of the in-vehicle camera 2. This is due to the following reason.
  • First, assume a state in which the posture of the vehicle 1 with respect to the road surface is fixed (for example, the vehicle 1 is kept horizontal). Even if this state is maintained, the vertical and lateral positions of the individual in-vehicle cameras 2 change whenever the vehicle 1 moves, and the yaw angle of each in-vehicle camera 2 changes whenever the direction of the vehicle 1 changes. Therefore, the influence of a change in the posture of the vehicle 1 with respect to the road surface appears not in the vertical or lateral position of the in-vehicle camera 2 or in its yaw angle, but in its height, roll angle, and pitch angle. Consequently, when the vehicle 1 is moving while its posture changes with respect to the road surface, it is sufficient to calculate three values: the height, roll angle, and pitch angle of the in-vehicle camera 2.
  • FIG. 10 shows an example of a bird's-eye view image when the vehicle 1 is tilted in the pitch direction.
  • the angle in the pitch direction of the in-vehicle camera 2 changes downward as the front portion of the vehicle 1 tilts downward.
  • FIG. 10B shows a bird's-eye view image in which lane breaks on the left and right sides of the vehicle 1 are displayed in this state.
  • As shown, the left and right lane dividing lines of the vehicle 1 are displayed in a splayed shape, no longer parallel to each other, whereas they should be displayed parallel to each other if the vehicle 1 were not tilted. Therefore, by adjusting the conversion so that the left and right lane dividing lines become parallel again, the pitch angle of the in-vehicle camera 2 corresponding to the inclination of the vehicle 1 can be recalculated.
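  • As a rough illustration of this adjustment, the parallelism of the two lane dividing lines in the bird's-eye image can be measured and used to drive an iterative correction of the assumed pitch angle (re-project with a candidate pitch, re-measure, stop when the lines are parallel). The least-squares line fit and the idea of iterating on the error are assumptions of the sketch.

```python
import numpy as np

def lane_parallelism_error(left_pts, right_pts):
    """Angle (radians) between the left and right lane dividing lines fitted
    to their feature points in the bird's-eye image; 0 when parallel."""
    def direction(pts):
        pts = np.asarray(pts, dtype=float)
        _, _, vt = np.linalg.svd(pts - pts.mean(axis=0))
        return vt[0]                      # principal direction of the point set
    cos = abs(float(np.dot(direction(left_pts), direction(right_pts))))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))
```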
  • The black circles shown on the boundary lines of the lane dividing lines in FIG. 10B are examples of the extracted feature points. Since connecting these feature points simply reproduces the lane dividing lines themselves, the feature points are omitted from the following bird's-eye view examples.
  • FIG. 10 (c) shows a bird's-eye view image displaying a stop line in front of the vehicle 1 in the state of FIG. 10 (a).
  • As shown, the two short edges of the stop line extending in the vertical direction appear splayed rather than parallel, and the stop line as a whole is displayed as a trapezoid instead of the horizontally long rectangle that would appear if the vehicle 1 were not tilted. Therefore, as in the case of FIG. 10B, the pitch angle of the in-vehicle camera 2 corresponding to the inclination of the vehicle 1 can be recalculated by adjusting the two splayed straight lines so that they become parallel.
  • the accuracy of the calculated pitch angle of the in-vehicle camera 2 is compared between the case of FIG. 10B and the case of FIG. 10C.
  • In the case of FIG. 10B, the inclination can be examined from line segments of length L1, whereas in the case of FIG. 10C it must be examined from line segments of length L2, which is shorter than L1. Consequently, the pitch angle of the in-vehicle camera 2 is considered to be calculated with better accuracy in the case of FIG. 10B than in the case of FIG. 10C.
  • From this it is understood that a large width of the feature point distribution in the vertical direction (the traveling direction of the vehicle 1) on the bird's-eye view image is important for calculating the pitch angle accurately. Therefore, when calculating the pitch angle of the in-vehicle camera 2, it is preferable that the width over which the feature points are distributed in the vertical direction on the bird's-eye view image be larger than a predetermined vertical threshold. The vertical threshold here corresponds to the first threshold.
  • FIG. 11 shows an example of a bird's-eye view image when the vehicle 1 is tilted in the roll direction.
  • As shown in FIG. 11A, when the right side of the vehicle 1 inclines downward as seen toward the front of the vehicle, the angle of the in-vehicle camera 2 in the roll direction changes accordingly, and as shown in FIG. 11B the way the left and right lane dividing lines of the vehicle 1 are displayed changes.
  • The left lane dividing line is displayed with a wide width W1 at a position far from the vehicle 1, while the right lane dividing line is displayed with a narrow width W2 at a position close to the vehicle 1.
  • the roll angle of the in-vehicle camera 2 according to the inclination of the vehicle 1 can be recalculated by adjusting the width of the left and right lane divisions to be the same.
  • However, the actual width of a lane dividing line is only about 15 cm to 20 cm, so the width of the lane dividing line on the bird's-eye view image is also small relative to the display area of the image; compared with the case of FIG. 11C, it is therefore difficult to make this adjustment accurately.
  • For this reason, when calculating the roll angle of the in-vehicle camera 2, it is preferable that the width over which the feature points are distributed in the lateral direction on the bird's-eye view image be larger than a predetermined lateral threshold.
  • The lateral threshold here corresponds to the second threshold.
  • FIG. 12 shows an example of a bird's-eye view image when the height of the vehicle 1 changes.
  • the height of the in-vehicle camera 2 is lowered as the vehicle 1 sinks as a whole.
  • The lane dividing lines displayed in this state are as shown in FIG. 12B, and the stop line is as shown in FIG. 12C.
  • the display before the height of the vehicle 1 changes is indicated by a broken line.
  • the left and right lane divisions in FIG. 12B are widened and displayed at positions away from the vehicle 1.
  • the width of the stop line in FIG. 12C is also increased and is displayed at a position away from the vehicle 1.
  • In these cases, the height of the in-vehicle camera 2 can be recalculated as follows. In the case of FIG. 12B, if data on the actual width of the lane dividing lines is prepared in advance, the height of the in-vehicle camera 2 corresponding to the height of the vehicle 1 can be recalculated by adjusting the conversion so that the lane dividing line width displayed in the bird's-eye view image matches the actual width. Likewise, in the case of FIG. 12C, if data on the actual width of the stop line is prepared in advance, the height of the in-vehicle camera 2 corresponding to the height of the vehicle 1 can be recalculated by adjusting the displayed stop line width to match the actual width.
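  • As a hedged sketch of this height adjustment: if only the camera height has changed, ground features in the bird's-eye image are magnified about the camera's ground footprint by the ratio of the assumed height to the actual height, so one known real-world dimension (such as the lane dividing line or stop line width) fixes the new height. The closed-form relation below assumes a pure height change with no additional tilt.

```python
def recalculate_camera_height(assumed_height_m, displayed_width_m, actual_width_m):
    """Estimate the actual camera height from the apparent magnification of a
    road marking of known width in the bird's-eye image.

    With only the height changed, displayed = actual * assumed_h / actual_h,
    hence actual_h = assumed_h * actual / displayed.
    """
    return assumed_height_m * actual_width_m / displayed_width_m

# Illustrative values: a 0.15 m lane line that appears 0.18 m wide under an
# assumed 1.2 m camera height suggests the camera actually sits at about 1.0 m.
new_height = recalculate_camera_height(1.2, 0.18, 0.15)
```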
  • However, the scale of the image can be evaluated accurately only when the situation over the entire image can be grasped. Therefore, if the vertical scale of the bird's-eye image is to be evaluated, the distribution of feature points in at least the vertical direction must be good, and if the horizontal scale is to be evaluated, the distribution of feature points in at least the horizontal direction must be good.
  • When the height of the vehicle 1 changes, the display on the bird's-eye view image changes as if it were enlarged or reduced, and when the vehicle 1 is inclined in the pitch direction, the display on the bird's-eye image may also change as if it were enlarged or reduced. Because the tendency of such display changes is affected by the mounting position of the in-vehicle camera 2 on the vehicle 1 (front, rear, left, or right) and by its installation state (such as differences in the amount of sinking due to the hardness of the bumper), it is possible to mistake which variable should be calculated.
  • In the above description the pitch angle, the roll angle, and the height were treated as changing individually, but they may change in combination. Even in such a case, the relationship between whether the feature point distribution on the bird's-eye view image is wide in the vertical or the horizontal direction and whether the pitch angle or the roll angle of the in-vehicle camera 2 can be calculated with good accuracy remains the same. Therefore, it is preferable to select the variables to be calculated as shown in FIG. 13.
  • FIG. 13 shows an example of selecting variables according to the distribution of feature points.
  • In FIG. 13, the distribution of feature points is considered separately in the vertical and horizontal directions on the bird's-eye view image, and each direction is classified according to whether the distribution width is large or small.
  • When the distribution width is large in the vertical direction but small in the horizontal direction, the pitch angle is calculated. In this case, the pitch angle can be calculated with good accuracy, as described with reference to FIG. 10B, but it is difficult to obtain the roll angle with good accuracy, as described with reference to FIG. 11B.
  • When the distribution width is large in the horizontal direction but small in the vertical direction, the roll angle is calculated. In this case, the roll angle can be calculated with good accuracy, as described with reference to FIG. 11C, but it is difficult to obtain the pitch angle with good accuracy, as described with reference to FIG. 10C.
  • the height of the in-vehicle camera 2 may be calculated if the distribution width of the feature points is large in either the vertical direction or the horizontal direction. Further, the height of the in-vehicle camera 2 may be calculated on the condition that feature points that can achieve scale consistency such as lane separation and stop line width are extracted.
  • In this way, the distribution of the feature points is checked separately for the vertical and horizontal directions, and only those of the pitch angle, roll angle, and height of the in-vehicle camera 2 whose conditions are good are selected and calculated, so the accuracy of the calculated variables can be kept good. Moreover, since selecting the variables to be calculated shortens the calculation time, it becomes easier to follow a temporary change in the vehicle posture. Variables that are not selected may be calculated later, when feature points with good conditions are obtained from a bird's-eye image taken subsequently. Then, as described in the embodiment, the conversion table is updated after the fluctuation amount of the newly calculated variables has been confirmed (S109 and S110 in FIG. 6).
  • Alternatively, each time a variable is obtained, the conversion table may be updated using that variable.
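  • The selection rule of FIG. 13 could be sketched as follows; the exact decision boundaries and the extra condition on the height (availability of a known real-world dimension to fix the scale) are assumptions consistent with the description above.

```python
def select_variables(vertical_width, horizontal_width,
                     vertical_threshold, horizontal_threshold,
                     scale_reference_available):
    """Choose which of the camera's pitch angle, roll angle, and height to
    recalculate, based on how widely the feature points are spread vertically
    and horizontally on the bird's-eye image (cf. FIG. 13)."""
    selected = []
    if vertical_width > vertical_threshold:       # wide along the travel direction
        selected.append("pitch")
    if horizontal_width > horizontal_threshold:   # wide across the travel direction
        selected.append("roll")
    if (vertical_width > vertical_threshold or
            horizontal_width > horizontal_threshold) and scale_reference_available:
        selected.append("height")
    return selected
```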
  • In this way, the image conversion apparatus 10 calculates the pitch angle, roll angle, and height of the in-vehicle camera 2 when the conditions are good, and by referring to the conversion table obtained on that basis it can accurately convert the captured image into a bird's-eye view image.
  • The present disclosure is not restricted to the embodiment and the modification described above, and can be implemented in various forms without departing from the gist of the present disclosure.

Abstract

Provided is an image transformation apparatus (10) wherein an image captured by a vehicle-mounted camera (2) is transformed to a bird's eye view image, which is then displayed on a vehicle-mounted monitor (3). The image transformation apparatus comprises: an image transformation unit (12) that transforms the captured image to the bird's eye view image in accordance with the position and angle of the vehicle-mounted camera; a feature point extraction unit (15) that extracts a plurality of feature points from the bird's eye view image or from the captured image; a calculation determination unit (16) that determines, on the basis of the distribution of the plurality of feature points extracted from the bird's eye view image or from the captured image, whether to calculate the position and angle of the vehicle-mounted camera; and a calculation unit (17) that calculates the position and angle of the vehicle-mounted camera on the basis of the plurality of feature points if the calculation determination unit determines that the position and angle of the vehicle-mounted camera are to be calculated.

Description

Image conversion apparatus and image conversion method
Cross-reference of related applications
 This application is based on Japanese Patent Application No. 2014-239377 filed on November 26, 2014, the disclosure of which is incorporated herein by reference.
 This disclosure is applied to a vehicle including an in-vehicle camera, and relates to a technique for adjusting a setting value used when a captured image acquired from the in-vehicle camera is converted into a bird's-eye view image.
 Various techniques that allow the driver to check the situation around the vehicle by photographing the surroundings with the in-vehicle camera and displaying the captured image on the in-vehicle monitor have been put into practical use. Furthermore, it is widely practiced that a captured image acquired from an in-vehicle camera is not displayed as it is on the in-vehicle monitor but is converted into a bird's-eye view image before being displayed. Here, the bird's-eye view image refers to an image obtained by converting the photographed image as if the scenery in it had been photographed looking down from above the vehicle. If the bird's-eye view image is correctly converted, the positional relationship between other vehicles, pedestrians, and so on and the host vehicle is displayed as it is in reality, so the situation around the vehicle can be grasped more easily and accurately by looking at this image. In order to convert a captured image into an accurate bird's-eye view image, the position and angle of the in-vehicle camera must be acquired accurately.
 Therefore, the in-vehicle camera is attached to the vehicle at the designed position and angle, and the position and angle of the in-vehicle camera are individually measured before the vehicle is shipped from the factory in order to reduce the influence of installation errors. The position and angle of the in-vehicle camera can be calculated as follows. First, a plurality of feature points are extracted from a photographed image by the in-vehicle camera, and the coordinates on the photographed image where each feature point is located are associated with the coordinates in the photographed real space. Then, the positions of the respective feature points associated with each other are substituted into a relational expression using the position and angle of the in-vehicle camera as variables. Thereafter, the position and angle of the in-vehicle camera can be calculated by solving the obtained relational expression.
 However, the in-vehicle camera fixed to the vehicle may become displaced in position and angle after shipment from the factory, for example due to loosening of a fastening part. In such a case, the position and angle of the in-vehicle camera differ from those at the time of factory shipment. Therefore, a technique has been proposed that makes it possible to extract feature points from an image taken while the vehicle is running and to recalculate the position and angle of the in-vehicle camera (Patent Document 1).
JP 2013-222302 A
 However, with the above proposed technique, there were cases where sufficient accuracy could not be obtained, even though feature points were extracted from the captured image and the position and angle of the in-vehicle camera were calculated in the same manner as when the vehicle is brought into a maintenance shop.
 One of the objects of the present disclosure is to provide a technique that can accurately calculate the position and angle of the in-vehicle camera and convert the captured image into an accurate bird's-eye view image without bringing the vehicle into a maintenance shop.
 According to one aspect of the present disclosure, there is provided an image conversion apparatus that acquires a captured image from a vehicle-mounted camera that captures the surroundings of the vehicle, converts the captured image into a bird's-eye image as if it had been captured from a viewpoint above the vehicle, and displays it on an in-vehicle monitor. The apparatus comprises: an image conversion unit that converts the captured image into the bird's-eye image according to the position and angle of the in-vehicle camera; a feature point extraction unit that extracts a plurality of feature points from the bird's-eye image or the captured image; a calculation determination unit that determines whether to calculate the position and angle of the in-vehicle camera based on the distribution of the plurality of feature points extracted from the bird's-eye view image or the captured image; and a calculation unit that calculates the position and angle of the in-vehicle camera based on the plurality of feature points when the calculation determination unit determines to calculate the position and angle of the in-vehicle camera.
 According to another aspect of the present disclosure, there is provided an image conversion method that is applied to a vehicle equipped with an in-vehicle camera, acquires a captured image from the in-vehicle camera that captures the periphery of the vehicle, converts the captured image into a bird's-eye image as if it had been captured from a viewpoint above the vehicle, and then displays it on an in-vehicle monitor. The method comprises: a step of converting the captured image into the bird's-eye image according to the position and angle of the in-vehicle camera; a step of extracting a plurality of feature points from the bird's-eye image or the captured image; a step of determining whether to calculate the position and angle of the in-vehicle camera based on the distribution of the plurality of feature points extracted from the bird's-eye image or the captured image; and a step of calculating the position and angle of the in-vehicle camera based on the plurality of feature points when it is determined that the position and angle of the in-vehicle camera are to be calculated.
 For example, since feature points can be arranged artificially in a maintenance shop or the like, the position and angle can be calculated using feature points having a distribution suitable for calculating the position and angle of the in-vehicle camera. In contrast, when the vehicle is running, feature points are extracted from the scenery in the shooting range at that time, and thus feature points with various distributions are extracted. Therefore, when the vehicle is traveling, it is not always possible to extract feature points suitable for calculating the position and angle of the in-vehicle camera. If the position and angle of the in-vehicle camera are calculated only when the distribution of the extracted feature points is suitable for that calculation, the position and angle can be calculated with high accuracy even while the vehicle is running. As a result, it is possible to accurately convert the captured image into a bird's-eye view image.
 The above and other objects, features and advantages of the present disclosure will become more apparent from the following detailed description with reference to the accompanying drawings. In the accompanying drawings:
FIG. 1 is an explanatory view showing a vehicle equipped with the image conversion apparatus of the present embodiment;
FIG. 2 is a schematic explanatory diagram of the principle of converting a captured image into a bird's-eye image;
FIG. 3 is an explanatory diagram showing how a captured image is converted into a bird's-eye image with reference to a conversion table;
FIG. 4 is a block diagram showing the internal structure of the image conversion apparatus;
FIG. 5 is a flowchart of the first half of the conversion table update process executed by the image conversion apparatus;
FIG. 6 is a flowchart of the latter half of the conversion table update process executed by the image conversion apparatus;
FIG. 7 is a diagram showing specific examples of environmental scores assigned based on the score sensors;
FIG. 8 is an explanatory diagram showing how feature points are extracted from a captured image and from a bird's-eye image;
FIG. 9 is an explanatory diagram showing how the distribution of feature points is checked;
FIG. 10 is a diagram illustrating a bird's-eye image when the vehicle is tilted in the pitch direction;
FIG. 11 is a diagram illustrating a bird's-eye image when the vehicle is tilted in the roll direction;
FIG. 12 is a diagram illustrating a bird's-eye image when the vehicle sinks downward; and
FIG. 13 is a diagram illustrating the selection of which of the pitch angle, roll angle, and height of the in-vehicle camera is calculated according to the distribution of feature points.
 Hereinafter, an embodiment of the image conversion apparatus will be described.
 A-1. Apparatus configuration of the present embodiment:
 FIG. 1 shows the rough structure of a vehicle 1 on which the image conversion apparatus 10 is mounted. As illustrated, in addition to the image conversion apparatus 10, the vehicle 1 includes an in-vehicle camera 2 attached to the front of the vehicle 1 and an in-vehicle monitor 3 visible from the driver's seat. The vehicle 1 further includes a vehicle speed sensor 41 that acquires the traveling speed, a steering sensor 42 that acquires the steering angle, height sensors 43a to 43d arranged at the four corners of the vehicle 1 to detect vertical displacement, a solar radiation sensor 44 that detects the intensity of solar radiation, and a rain sensor 45 that detects the amount of raindrops. These various sensors provide the image conversion apparatus 10 with detection information obtained by detecting the environment around the vehicle 1 or the state of the vehicle 1 itself, and correspond to detectors.
 The in-vehicle camera 2 is a wide-angle camera having a fish-eye lens, and its captured image shows the scenery, including the road surface, around the front of the vehicle 1. The image conversion apparatus 10 converts the captured image acquired from the in-vehicle camera 2 into a bird's-eye image and displays the bird's-eye image on the in-vehicle monitor 3. Since the captured image shows the road surface, the converted bird's-eye image appears as if the road surface in front of the vehicle 1 had been photographed while looking down from above. The conversion from the captured image to the bird's-eye image is outlined below.
 FIG. 2 schematically shows the principle of converting a captured image into a bird's-eye image. As shown in FIG. 2(a), when the in-vehicle camera 2 photographs the road surface 21, the captured image 22 is an image in which the road surface 21 within the shooting area of the in-vehicle camera 2 is projected onto an image plane perpendicular to the line of sight of the in-vehicle camera 2. Viewing this projection relationship in reverse, if the captured image were projected onto the road surface 21 with light from a source placed at the position of the in-vehicle camera 2, the projected image should coincide with the actual state of the road surface 21.
 On the road surface plane 23 in FIG. 2(b), the road surface captured in the photographed image is projected in this way. The road surface plane 23 is a plane at the same position as the road surface 21 in FIG. 2(a), and may be thought of as a screen. If the road surface projected onto this road surface plane 23 is photographed by a virtual camera 24 placed so as to look straight down on it, a bird's-eye image 25 is obtained as if the road surface in front of the in-vehicle camera 2 had been photographed from directly above.
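 For illustration only (not part of the disclosed embodiment), the ground-plane reprojection described above can be sketched as a planar homography. The following Python sketch assumes OpenCV is available and that four road-surface points whose real-world layout is known can be located in the captured image; all coordinates, the output scale, and the file name are hypothetical.

```python
import cv2
import numpy as np

# Hypothetical pixel coordinates of four road-surface points in the captured image
# (e.g., corners of a road marking whose real-world layout is known).
src_pts = np.float32([[420, 510], [860, 505], [300, 700], [980, 695]])

# Where those same points should appear in the bird's-eye image, using an assumed
# scale of 100 pixels per metre on the ground plane.
dst_pts = np.float32([[400, 200], [600, 200], [400, 600], [600, 600]])

# Homography from the captured image plane to the virtual top-down view.
H = cv2.getPerspectiveTransform(src_pts, dst_pts)

captured = cv2.imread("captured.png")                       # hypothetical input image
birds_eye = cv2.warpPerspective(captured, H, (1000, 800))   # bird's-eye image
```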
 Looking at the relationship between the position and angle of the in-vehicle camera 2 and the shooting area, the position and shape of the trapezoidal region of the road surface 21 shown as the shooting area in FIG. 2(a) are determined according to the position and angle of the in-vehicle camera 2. Therefore, in order to convert the captured image into the bird's-eye image, the position and angle of the in-vehicle camera 2 must remain the same between the time the image is actually captured and the time the captured image is projected onto the virtual road surface plane 23 (the screen).
 Since the actual in-vehicle camera 2 is fixedly attached to the vehicle 1, the position and angle of the in-vehicle camera 2 with respect to the road surface 21 can normally be regarded as constant. The image conversion apparatus 10 therefore prepares in advance a conversion table in which the projection relationship described with reference to FIG. 2 is calculated for a fixed position and angle of the in-vehicle camera 2. When converting a captured image into a bird's-eye image, referring to this conversion table eliminates the need to compute the projection relationship for every shot, thereby reducing the processing load.
 FIG. 3 shows how a captured image is converted into a bird's-eye image with reference to the conversion table. The conversion table is data representing the correspondence between pixel positions in the bird's-eye image and pixel positions in the captured image. For example, the pixel at coordinates (Xe, Yf) in the bird's-eye image of FIG. 3(b) is associated by the conversion table with the pixel at coordinates (Ag, Bh) in the captured image of FIG. 3(a). Accordingly, the image data (luminance, saturation, and so on) of the pixel at coordinates (Ag, Bh) in the captured image is applied to the pixel at coordinates (Xe, Yf) in the bird's-eye image. By referring to the conversion table for every pixel of the bird's-eye image, the captured image can be converted into the bird's-eye image.
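 As an informal sketch (not taken from the patent), such a per-pixel lookup table can be applied with plain array indexing. The table layout below, two integer index maps, one per source coordinate, is an assumption for illustration.

```python
import numpy as np

def apply_conversion_table(captured: np.ndarray,
                           map_a: np.ndarray,
                           map_b: np.ndarray) -> np.ndarray:
    """Build the bird's-eye image by copying, for each output pixel,
    the captured-image pixel that the conversion table points to.

    map_a / map_b hold, for every bird's-eye pixel (Xe, Yf), the source
    coordinates (Ag, Bh) in the captured image (assumed integer indices).
    """
    # Fancy indexing: output[y, x] = captured[map_b[y, x], map_a[y, x]]
    return captured[map_b, map_a]

# Hypothetical usage: a 400x600 bird's-eye image built from a 720x1280 capture.
# captured = ...          # shape (720, 1280, 3)
# map_a, map_b = ...      # each of shape (400, 600), precomputed offline
# birds_eye = apply_conversion_table(captured, map_a, map_b)
```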
 As described above, this conversion table is created according to the position and angle of the in-vehicle camera 2. In addition to the function of converting a captured image into a bird's-eye image, the image conversion apparatus 10 therefore has a function of recalculating the position and angle of the in-vehicle camera 2 and updating the conversion table according to the new position and angle.
 FIG. 4 shows the internal structure of the image conversion apparatus 10. As illustrated, the image conversion apparatus 10 includes a captured image acquisition unit 11, an image conversion unit 12, and a display unit 13. The captured image acquisition unit 11 acquires a captured image from the in-vehicle camera 2. The image conversion unit 12 converts the captured image into a bird's-eye image with reference to the conversion table as described above. The display unit 13 displays the bird's-eye image on the in-vehicle monitor 3.
 The image conversion apparatus 10 further includes a score evaluation unit 14, a feature point extraction unit 15, a calculation determination unit 16, a calculation unit 17, a variation amount acquisition unit 18, and a conversion table update unit 19 in order to calculate the position and angle of the in-vehicle camera 2 and update the conversion table.
 The score evaluation unit 14 acquires the detection information detected by the vehicle speed sensor 41, the steering sensor 42, the height sensors 43a to 43d, the solar radiation sensor 44, and the rain sensor 45 provided in the vehicle 1. Hereinafter, these various sensors are collectively referred to as the score sensors 4. The score evaluation unit 14 assigns a score according to the detection information acquired from each of the score sensors 4 and evaluates the scores comprehensively. This score evaluation is described later with specific examples; in outline, information on the environment around the vehicle 1 or on the state of the vehicle 1 itself is converted into scores, which are used to evaluate whether the situation is suitable for extracting feature points. A feature point is a point that can be distinguished from other points because it has a distinctive feature in the photographed scenery, and whose position in image coordinates can be identified in a captured image or a bird's-eye image of that scenery. Feature points are extracted in order to calculate the position and angle of the in-vehicle camera 2.
 Since the score evaluation unit 14 determines in this way whether or not to extract feature points, it corresponds to an extraction determination unit.
 The feature point extraction unit 15 extracts a plurality of feature points from the captured image or the bird's-eye image in order to calculate the position and angle of the in-vehicle camera 2.
 The calculation determination unit 16 checks the distribution of the plurality of feature points and determines, based on information indicating the width of the distribution obtained from that check, whether to calculate the position and angle of the in-vehicle camera 2.
 The calculation unit 17 calculates the position and angle of the in-vehicle camera 2 based on the plurality of extracted feature points. To calculate the position and angle of the in-vehicle camera 2, the consistency between the positional relationship of the plurality of feature points extracted from the captured image or the bird's-eye image and the positional relationship of the plurality of feature points in the actual photographed scenery is used. Since methods for calculating the position and angle of the in-vehicle camera 2 are well known, a detailed description is omitted.
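 Because the patent leaves the pose computation to well-known methods, the following is only one possible sketch of such a method (not the patent's own): solving a perspective-n-point problem with OpenCV from correspondences between real-world feature positions and their extracted image positions. The camera intrinsics and all coordinates below are hypothetical.

```python
import cv2
import numpy as np

# Hypothetical known positions of feature points on the road surface, in metres,
# in a vehicle-fixed frame (z = 0 on the ground plane).
object_pts = np.float32([[3.0, -1.6, 0.0], [3.0, 1.6, 0.0],
                         [6.0, -1.6, 0.0], [6.0, 1.6, 0.0],
                         [4.5,  0.0, 0.0], [7.5,  0.0, 0.0]])

# Corresponding pixel positions extracted from the captured image (hypothetical).
image_pts = np.float32([[410, 620], [870, 615], [480, 520],
                        [800, 515], [640, 560], [640, 470]])

# Assumed camera intrinsics (would normally come from calibration).
K = np.float32([[800, 0, 640], [0, 800, 360], [0, 0, 1]])
dist = np.zeros(5)  # assume fish-eye distortion has already been removed

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist)
if ok:
    # rvec/tvec describe the camera pose relative to the vehicle/road frame;
    # the camera height, roll and pitch angles can be derived from them.
    R, _ = cv2.Rodrigues(rvec)
```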
 The variation amount acquisition unit 18 compares the position and angle newly calculated by the calculation unit 17 with the position and angle before the new calculation, and acquires the amount of variation.
 The conversion table update unit 19 updates the conversion table based on the newly calculated position and angle of the in-vehicle camera 2 when the variation amount acquired by the variation amount acquisition unit 18 is equal to or less than a predetermined variation threshold. When the conversion table has been updated, the image conversion unit 12 converts captured images into bird's-eye images with reference to the new conversion table.
 Note that these nine units, from the captured image acquisition unit 11 to the conversion table update unit 19, are a conceptual classification of the inside of the image conversion apparatus 10 from a functional point of view, and do not mean that the image conversion apparatus 10 is physically divided into nine parts. These units may therefore be realized as a computer program executed by a CPU, as an electronic circuit including an LSI and a memory, or as a combination of these. The conversion table update process executed by these nine units is described in detail below.
 A-2. Bird's-eye image generation process:
 FIGS. 5 and 6 show flowcharts of the conversion table update process executed by the image conversion apparatus 10 of the present embodiment. This conversion table update process updates the conversion table used for converting captured images into bird's-eye images, and is executed separately from the conversion process that produces the bird's-eye image displayed on the in-vehicle monitor 3. The bird's-eye image conversion process is repeatedly executed at a cycle corresponding to the shooting cycle of the in-vehicle camera 2 (for example, 30 Hz), whereas the conversion table update process described here may be executed at the same cycle as the bird's-eye image conversion process or at an appropriate interval (for example, every 10 minutes).
 When the conversion table update process starts, an environmental score is first calculated (S101). The environmental score is an index obtained by scoring the detection information acquired from the various score sensors 4 in order to evaluate the environment around the vehicle 1 and the state of the vehicle 1 itself.
 FIG. 7 shows specific examples of environmental scores. As shown in FIG. 7(a), an environmental score is assigned to the detection information acquired from each of the vehicle speed sensor 41, the steering sensor 42, the height sensors 43a to 43d, the solar radiation sensor 44, and the rain sensor 45 that constitute the score sensors 4. Since the detection information handled here is numerical, a score is assigned according to the magnitude of the value. Hereinafter, the environmental scores assigned to the detection information of these score sensors 4 are referred to with the score symbols S1 to S5, in order from the vehicle speed sensor 41 to the rain sensor 45.
 The method of assigning an environmental score according to detection information is explained here using the vehicle speed sensor 41 as an example. The detection information of the vehicle speed sensor 41 is the traveling speed of the vehicle 1, and the higher the traveling speed, the lower the environmental score S1. FIG. 7(b) shows an example of assigning the environmental score S1. As illustrated, S1 = 100 when the vehicle speed is 0 to 20 km/h, S1 = 80 when the vehicle speed is 20 to 80 km/h, and S1 = 60 when the vehicle speed is 80 km/h or more. The reason the environmental score S1 is lowered as the vehicle speed increases is that subject blur in the captured image tends to become larger and the accuracy of feature point extraction becomes unstable.
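 A minimal sketch of this scoring rule, for illustration only; the thresholds follow the example above and the function name is an assumption:

```python
def speed_score(speed_kmh: float) -> int:
    """Environmental score S1 assigned from the vehicle speed, following the
    example thresholds of FIG. 7(b): slower driving scores higher because
    subject blur is smaller and feature points are extracted more reliably."""
    if speed_kmh < 20:
        return 100
    if speed_kmh < 80:
        return 80
    return 60
```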
 The score sensors 4 other than the vehicle speed sensor 41, namely the steering sensor 42, the height sensors 43a to 43d, the solar radiation sensor 44, and the rain sensor 45, are each assigned an individual environmental score in the same manner as the vehicle speed sensor 41.
 The steering sensor 42 detects the amount of displacement of the steering angle per unit time. The larger this displacement, the lower the environmental score S2. This is because, when the displacement of the steering angle is large, the vehicle 1 tilts due to the change in load, which makes the situation unsuitable for extracting feature points.
 The height sensors 43a to 43d detect the amount of displacement of the inclination of the vehicle 1 per unit time by detecting the vertical displacement at the front, rear, left, and right of the vehicle 1. Since the vehicle 1 tilts, the same reasoning as for S2 applies, and the environmental score S3 for the height sensors 43a to 43d is set lower as the displacement becomes larger.
 The environmental score S4 assigned to the amount of solar radiation detected by the solar radiation sensor 44 is set lower as the amount of solar radiation becomes smaller. This is because the smaller the amount of solar radiation, the darker the photographed scenery and the more likely the differences in luminance values in the captured image are to be small, so the accuracy of feature point extraction is considered to be low.
 The environmental score S5 assigned to the rainfall detected by the rain sensor 45 is set lower as the rainfall increases. This is because the heavier the rain, the more likely the captured image is to be blurred, so the accuracy of feature point extraction is considered to be low.
 After the individual environmental scores S1 to S5 have been assigned to the detection information of the vehicle speed sensor 41, the steering sensor 42, the height sensors 43a to 43d, the solar radiation sensor 44, and the rain sensor 45, the total environmental score S is calculated as shown in FIG. 7(c).
 Here, the total environmental score S is the sum of S1 to S5, but other calculation methods may be used as long as the information detected by each of the score sensors 4 is taken into account. For example, when the influence of the vehicle speed among the various kinds of detection information is judged to be large, the environmental score S1 may be weighted more heavily.
 In addition to, or instead of, the various sensors described above, other sensors may be used as the score sensors 4. For example, the inclination of the vehicle 1 may be detected by a gyro sensor and an environmental score assigned in the same way as for S3, or the rainfall may be detected from a wiper signal and an environmental score assigned in the same way as for S5. Furthermore, the traveling speed of the vehicle 1 and the extent of its inclination may be predicted from traffic information, road information, and the like acquired from a car navigation system, and the environmental score may be calculated taking such information into account.
 As described above, the environmental score is an index for evaluating the surrounding environment in which the vehicle 1 is placed, such as differences in weather or ambient brightness, and the state of the vehicle 1 itself, such as changes in posture or the occurrence of vibration. When the environmental score is high, the influence of disturbances caused by the surrounding environment or the vehicle state is small and a clear captured image is likely to be obtained. A clear captured image, in turn, means that the situation is suitable for extracting feature points.
 After the environmental score has been calculated in this way (S101), it is determined whether the environmental score is equal to or greater than a predetermined score threshold (S102). The value judged here is the total environmental score S described above, and the score threshold may be set arbitrarily. By determining whether the total environmental score S is equal to or greater than the predetermined score threshold, the environment around the vehicle 1 and the state of the vehicle 1 itself can be evaluated comprehensively from the detection information acquired from the various score sensors 4, and it can be determined whether the situation is suitable for extracting feature points. The method of evaluating the environmental score is not limited to this; for example, it may be determined that the situation is not suitable for extracting feature points when a certain number of the individual environmental scores S1 to S5 do not satisfy a predetermined criterion.
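 Continuing the informal sketch above, the total score and the threshold check of S101-S102 might look as follows; the threshold value is hypothetical:

```python
def total_environment_score(s1: int, s2: int, s3: int, s4: int, s5: int) -> int:
    """Total environmental score S as the simple sum of S1-S5 (FIG. 7(c))."""
    return s1 + s2 + s3 + s4 + s5

SCORE_THRESHOLD = 400  # hypothetical threshold for S102

def suitable_for_feature_extraction(s_total: int) -> bool:
    """S102: proceed with feature point extraction only if S is high enough."""
    return s_total >= SCORE_THRESHOLD
```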
 When it is determined that the environmental score is below the predetermined score threshold (S102: no), the conversion table update process ends (FIG. 6).
 On the other hand, when it is determined that the environmental score is equal to or greater than the predetermined score threshold (S102: yes), feature points are extracted. In this embodiment, a captured image is acquired (S103), the captured image is converted into a bird's-eye image (S104), and then feature points are extracted (S105).
 FIG. 8 shows how a plurality of feature points are extracted from a captured image and from a bird's-eye image. As described above, a feature point is a point that can be distinguished from other points because it has a distinctive feature in the photographed scenery, and whose position in image coordinates can be identified in a captured image or a bird's-eye image of that scenery. A typical feature point is a corner where two straight lines intersect, but feature points can also be extracted from the boundary of a straight line. For example, for road markings such as lane lines, pedestrian crossings, and stop lines, feature points can be extracted by using the fact that the difference in luminance value becomes large at the boundary between the marking and the asphalt.
 The black dots in FIGS. 8(a) and 8(b) illustrate locations where feature points are extracted. As described above, feature points are extracted in order to calculate the position and angle of the in-vehicle camera 2, and the position and angle of the in-vehicle camera 2 are calculated using the consistency between the positional relationship of the plurality of feature points extracted from the image and the positional relationship of the plurality of feature points in the actual photographed scenery. To make use of this consistency, feature points may be extracted, for example, from objects whose shape in the actual photographed scenery is known. In the example shown in FIG. 8, feature points are extracted from the lane lines on both the left and right sides of the vehicle 1 and from the characters "stop" (止まれ) of the road marking. In this example, the known shapes are that the actual lane lines on the left and right are parallel while the vehicle 1 travels in a lane extending in a straight line, and that the characters of the actual road marking contain multiple corners that intersect at right angles. The shapes and dimensions of road markings suitable for feature point extraction may be compiled into a database in advance, and the feature point extraction unit 15 may simplify the extraction of feature points by referring to that database.
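 The patent does not prescribe a specific detector; as one hedged illustration, corner-like feature points could be picked up with a standard corner detector applied to the image, relying on the strong luminance edges of road markings against asphalt. The parameters and function name below are assumptions.

```python
import cv2
import numpy as np

def extract_feature_points(birds_eye_bgr: np.ndarray, max_points: int = 100) -> np.ndarray:
    """Return up to max_points corner-like feature points (x, y) from a
    bird's-eye image. Road markings against asphalt give strong luminance
    differences, so a generic corner detector tends to fire on their corners."""
    gray = cv2.cvtColor(birds_eye_bgr, cv2.COLOR_BGR2GRAY)
    corners = cv2.goodFeaturesToTrack(gray,
                                      maxCorners=max_points,
                                      qualityLevel=0.05,
                                      minDistance=10)
    if corners is None:
        return np.empty((0, 2), dtype=np.float32)
    return corners.reshape(-1, 2)
```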
 Although feature points can also be extracted from the captured image as in FIG. 8(a), in the conversion table update process of this embodiment they are extracted from the bird's-eye image as in FIG. 8(b). The difference between extracting feature points from the captured image of FIG. 8(a) and extracting them from the bird's-eye image of FIG. 8(b) is explained below.
 The captured image of FIG. 8(a) was taken while the vehicle 1 was traveling straight in a lane extending in a straight line. The actual lane lines are straight lines parallel to each other along the lane, but in the captured image of FIG. 8(a) the left and right lane lines appear splayed, converging toward the top of the image. In the characters of the road marking, parts that are actually parallel to each other do not appear parallel, and parts that actually intersect at right angles do not appear at right angles. When feature points (the locations indicated by black dots in the figure) are extracted from lane lines and road markings displayed in this way, the positional relationship of those feature points in the image does not preserve the positional relationship of the actual feature points.
 In the bird's-eye image of FIG. 8(b), on the other hand, the left and right lane lines are straight lines parallel to each other, just as in reality, and in the characters of the road marking the lines that are actually parallel appear parallel and the parts that actually intersect at right angles appear at right angles. When feature points (the locations indicated by black dots in the figure) are extracted from lane lines and road markings displayed in this way, the positional relationship of those feature points in the image preserves the positional relationship of the actual feature points.
 Therefore, extracting feature points from the bird's-eye image, which preserves the actual positional relationships, rather than from the captured image, which does not, makes it easier to match the positional relationship of the feature points against the actual positional relationship, and the position and angle of the in-vehicle camera 2 can be calculated more easily and more accurately.
 For this reason, the captured image is first converted into a bird's-eye image (S104 in FIG. 5) and feature points are then extracted (S105). From the viewpoint of reducing the processing load, objects from which feature points are to be extracted, such as lane lines and various road markings, may be determined in advance, and only those objects in the scenery shown in the captured image may be converted into the bird's-eye image.
 Next, the distribution of the feature points is checked (S106), and based on that distribution it is determined whether to calculate the position and angle of the in-vehicle camera 2 (S107). This determination is a judgment as to whether the distribution of the feature points is suitable for calculating the position and angle of the in-vehicle camera 2 with good accuracy, and it can be made, for example, on the basis of information indicating the width of the distribution of the feature points. Information indicating the width of the distribution here means information representing the size of the region over which the feature points are distributed relative to the size of the image region. As explained below, the suitability of the distribution may be judged comprehensively, taking into account not only the width of the distribution but also the position of the distribution region and how the feature points are scattered within that region.
 FIG. 9 shows bird's-eye images in which the distribution of feature points is good and in which it is not. Here the photographed scenery is omitted and the locations where feature points were extracted are represented by black dots. The extracted feature points are viewed as a whole, the feature points located on the outer periphery are connected as vertices, and the region inside the convex polygon thus formed is taken as the distribution region of the feature points (the hatched area). When the area of the distribution region is large and its position is close to the center of the image, as in FIG. 9(a), the distribution of feature points is judged to be good; when the area of the distribution region is small or biased toward part of the image, as in FIG. 9(b), the distribution of feature points is judged to be poor.
 The suitability of the feature point distribution may also be judged in a simplified manner by examining the size and position of the horizontal and vertical extents of the region over which the feature points are distributed, instead of the hatched region itself.
 In FIG. 9(a), the horizontal width of the feature point distribution region is Ua and the vertical width is Va. These are sufficiently large compared with the image size, and the midpoints of Ua and Va are also close to the center of the image, so the distribution of feature points can be judged to be good.
 In FIG. 9(b), on the other hand, both the horizontal width Ub and the vertical width Vb of the feature point distribution region are small compared with the image size, so the distribution of feature points can be judged to be poor. The distribution can also be judged to be poor because the midpoints of Ub and Vb deviate greatly from the center of the image.
 In addition to the size and position of the distribution region of the feature points, the degree of scatter of the feature points may also be taken into account as described above. In the example shown in FIG. 9(c), the size and position of the horizontal width Uc and vertical width Vc of the region over which the feature points are distributed are the same as in the example shown in FIG. 9(a), but the distribution of feature points within the region is biased. It is therefore advisable to consider not only the extent (range) over which the feature points are distributed but also how they are scattered. In the example shown in FIG. 9(c), feature points far from the mean position of the distribution are treated as outliers and are not regarded as forming the feature point distribution region, so the region indicated by double hatching is considered as the feature point distribution region. As a result, in the example shown in FIG. 9(c), the distribution of feature points is judged to be poor.
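 A simplified sketch of the distribution check described above, using the bounding extents and their midpoint relative to the image; all ratio thresholds are assumptions chosen for illustration:

```python
import numpy as np

def distribution_is_good(points: np.ndarray,
                         img_w: int, img_h: int,
                         min_extent_ratio: float = 0.5,
                         max_center_offset_ratio: float = 0.2) -> bool:
    """Simplified S106/S107 check: the feature points must spread over a
    sufficiently wide region whose midpoint lies near the image center.
    points is an (N, 2) array of (x, y) feature point coordinates."""
    if len(points) < 4:
        return False
    x_min, y_min = points.min(axis=0)
    x_max, y_max = points.max(axis=0)
    width, height = x_max - x_min, y_max - y_min

    # Extent of the distribution relative to the image size (Ua, Va in FIG. 9(a)).
    if width < min_extent_ratio * img_w or height < min_extent_ratio * img_h:
        return False

    # Midpoint of the distribution should not deviate far from the image center.
    cx, cy = (x_min + x_max) / 2, (y_min + y_max) / 2
    if abs(cx - img_w / 2) > max_center_offset_ratio * img_w:
        return False
    if abs(cy - img_h / 2) > max_center_offset_ratio * img_h:
        return False
    return True
```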
 As described above, to calculate the position and angle of the in-vehicle camera 2, the consistency between the positional relationship of the plurality of feature points extracted from the bird's-eye image and the positional relationship of the plurality of feature points in the actual photographed scenery is used. Therefore, the position and angle of the in-vehicle camera 2 can in fact be calculated even when the distribution of feature points is poor, as in FIGS. 9(b) and 9(c). However, in regions where no feature points exist, the consistency between the positional relationship of the feature points in the image and their positional relationship in the actual scenery cannot be checked, and it is difficult to maintain the accuracy of the calculated position and angle of the in-vehicle camera 2.
 Therefore, if the distribution of feature points is poor, it is determined that the position and angle of the in-vehicle camera are not to be calculated (S107: no in FIG. 5), and the conversion table update process ends (FIG. 6).
 On the other hand, if the distribution of feature points in the bird's-eye image is good, as in FIG. 9(a), it is determined that the position and angle of the in-vehicle camera 2 are to be calculated (S107: yes), and the calculation of the position and angle of the in-vehicle camera 2 is executed (S108). In this way, the position and angle of the in-vehicle camera 2 are calculated only when good accuracy can be ensured. Moreover, since the position and angle of the in-vehicle camera 2 are not calculated when accuracy cannot be ensured, an unnecessary processing load on the CPU can be avoided.
 After the position and angle of the in-vehicle camera 2 have been calculated (S108), the amount of variation of the calculated position and angle is checked (S109 in FIG. 6). What is checked here is the amount of variation between the newly calculated position and angle of the in-vehicle camera 2 and the position and angle of the in-vehicle camera 2 used for the most recent update of the conversion table currently in use. If the variation is equal to or less than the variation threshold (S110: yes), the conversion table is updated (S111) and the conversion table update process ends. If the variation is larger than the variation threshold (S110: no), the process ends without updating the conversion table.
 When an abnormality occurs in the extraction of feature points, in the calculation of the position and angle of the in-vehicle camera 2, or due to some other unforeseen circumstance, the calculated position and angle of the in-vehicle camera 2 may take extreme values. Therefore, by not updating the conversion table when the calculated variation of the position and angle of the in-vehicle camera 2 exceeds the predetermined variation threshold, the conversion table is updated only when the position and angle of the in-vehicle camera 2 are considered to have been obtained with good accuracy. Also, as when judging the distribution of feature points, processing does not proceed further when good accuracy cannot be expected, so an unnecessary processing load on the CPU can be avoided.
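 A rough illustration of S109-S111 follows; the pose representation and the threshold values are assumptions, not values taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class CameraPose:
    height_m: float      # camera height above the road surface
    pitch_deg: float
    roll_deg: float

# Hypothetical thresholds for what still counts as a plausible change (S110).
MAX_HEIGHT_CHANGE_M = 0.10
MAX_ANGLE_CHANGE_DEG = 2.0

def should_update_table(old: CameraPose, new: CameraPose) -> bool:
    """S109-S110: accept the new pose (and rebuild the conversion table)
    only if it does not jump implausibly far from the previous one."""
    return (abs(new.height_m - old.height_m) <= MAX_HEIGHT_CHANGE_M
            and abs(new.pitch_deg - old.pitch_deg) <= MAX_ANGLE_CHANGE_DEG
            and abs(new.roll_deg - old.roll_deg) <= MAX_ANGLE_CHANGE_DEG)
```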
 As explained above, in the conversion table update process executed by the image conversion apparatus 10 of this embodiment, the evaluation of the environmental score raises the accuracy of the extracted feature points (S102), and the check of the feature point distribution raises the accuracy of the calculated position and angle of the in-vehicle camera 2 (S107). In addition to calculating the position and angle of the in-vehicle camera 2 accurately in this way, the amount of variation of the calculated position and angle is also checked (S110), so the conversion table is ultimately updated only after passing a triple check. Since the image conversion apparatus 10 refers to the highly accurate and reliable conversion table obtained in this way, it can convert captured images into accurate bird's-eye images.
 B. Modifications:
 In the embodiment described above, the method of calculating the changed position and angle was explained on the premise that the position and angle of the in-vehicle camera 2 change with respect to the road surface 21. However, there are two possible reasons why the position and angle of the in-vehicle camera 2 change with respect to the road surface 21: the mounting state (position and angle) of the in-vehicle camera 2 with respect to the vehicle 1 changes and, as a result, its state with respect to the road surface 21 changes; or the posture of the vehicle 1 changes with respect to the road surface. The former is a permanent change caused, for example, by loosening of the fastening of the in-vehicle camera 2. In contrast, the latter case, in which the vehicle 1 tilts with respect to the road surface, is a temporary change caused by changes in loaded weight, acceleration or deceleration of the vehicle 1, and the like. The embodiment above was described without particularly distinguishing the two, but the following modification focuses on the differences from the embodiment when the position and angle of the in-vehicle camera 2 are calculated in response to the latter, temporary change in which the posture of the vehicle 1 changes with respect to the road surface.
 In general, the position of the in-vehicle camera 2 has components in three directions, longitudinal (the traveling direction of the vehicle 1), lateral (left and right with respect to the traveling direction of the vehicle 1), and height, and the in-vehicle camera 2 has angles in three directions, roll, pitch, and yaw. In principle, therefore, when calculating the position and angle of the in-vehicle camera 2, it is necessary to calculate the three positional components and the three angles (six values in total) for each in-vehicle camera 2. However, when the reason the position and angle need to be calculated is the latter one, that is, because the posture of the vehicle 1 has changed with respect to the road surface, it suffices to calculate three values: the height, the roll angle, and the pitch angle of the in-vehicle camera 2. The reason is as follows.
 First, assume a state in which the posture of the vehicle 1 with respect to the road surface is fixed (for example, kept horizontal). Even if such a state is maintained, the longitudinal and lateral positions of each in-vehicle camera 2 change when the vehicle 1 moves, and the yaw angle of each in-vehicle camera 2 changes when the direction of the vehicle 1 changes. The influence of a change in the posture of the vehicle 1 with respect to the road surface can therefore be regarded as appearing not in the longitudinal or lateral position of the in-vehicle camera 2 or in its yaw angle, but in changes in the height, the roll angle, and the pitch angle. For this reason, when the posture of the vehicle 1 changes with respect to the road surface while the vehicle 1 is traveling, it suffices to calculate the three values of the height, roll angle, and pitch angle of the in-vehicle camera 2.
 While the vehicle 1 is traveling, its posture with respect to the road surface changes frequently, so the position and angle of the in-vehicle camera 2 must be calculated frequently and the computational load for doing so tends to become large. From this point of view, being able to calculate only the three values of the height, roll angle, and pitch angle of the in-vehicle camera 2 halves the computational load, which is a great advantage.
 FIG. 10 shows an example of a bird's-eye image when the vehicle 1 is tilted in the pitch direction. As shown in FIG. 10(a), when the front of the vehicle 1 tilts downward, the pitch angle of the in-vehicle camera 2 changes downward accordingly. FIG. 10(b) shows the bird's-eye image displaying the lane lines on the left and right of the vehicle 1 in this state. As illustrated, the lane lines on the left and right of the vehicle 1 are displayed splayed apart rather than parallel. If the vehicle 1 were not tilted, the left and right lane lines would be displayed parallel to each other. Therefore, by extracting feature points on the boundaries of the lane lines and adjusting so that the splayed lane lines are displayed parallel to each other, the pitch angle of the in-vehicle camera 2 corresponding to the tilt of the vehicle 1 can be recalculated. The black dots shown on the boundary lines of the lane lines in FIG. 10(b) are examples of extracted feature points. Since connecting these feature points would give the same lines as the lane lines, the feature points are omitted in the subsequent examples of bird's-eye images.
 FIG. 10(c) shows the bird's-eye image displaying the stop line in front of the vehicle 1 in the state of FIG. 10(a). As illustrated, the stop line is displayed as a trapezoid in which the two lines extending in the vertical direction are splayed and the two lines extending in the horizontal direction are parallel. If the vehicle 1 were not tilted, the stop line would be displayed as a rectangle elongated in the horizontal direction. Therefore, in the same way as in FIG. 10(b), the pitch angle of the in-vehicle camera 2 corresponding to the tilt of the vehicle 1 can be recalculated by adjusting the two splayed lines so that they become parallel to each other.
 Here, the accuracy of the calculated pitch angle of the in-vehicle camera 2 is compared between the case of FIG. 10(b) and the case of FIG. 10(c). In the case of FIG. 10(b), the inclination can be examined from line segments of length L1, whereas in the case of FIG. 10(c) it must be examined from line segments of length L2, which is shorter than L1. Naturally, the error is larger when the inclination is examined from the shorter segments of length L2, so the pitch angle of the in-vehicle camera 2 calculated in the case of FIG. 10(b) is considered to be more accurate than in the case of FIG. 10(c).
 Next, comparing the distribution of feature points between FIG. 10(b) and FIG. 10(c): in the case of FIG. 10(b), the width over which the feature points indicated by black dots are distributed is large in the vertical direction of the bird's-eye image, whereas in the horizontal direction no feature points exist in the left and right end regions or in the central region, so the width over which the feature points are distributed is small.
 In the case of FIG. 10(c), the black dots of the feature points are omitted, but examination of the display region of the stop line shows that the width over which the feature points are distributed is small in the vertical direction and large in the horizontal direction.
 From the above, when the pitch angle of the in-vehicle camera 2 is recalculated as shown in FIG. 10, a large width of the feature point distribution in the vertical direction of the bird's-eye image (the traveling direction of the vehicle 1) is important for accurate calculation. Therefore, when calculating the pitch angle of the in-vehicle camera 2, it is advisable to require that the width over which the feature points are distributed in the vertical direction of the bird's-eye image be larger than a predetermined vertical threshold. The vertical threshold here corresponds to a first threshold.
 FIG. 11 shows an example of a bird's-eye image when the vehicle 1 is tilted in the roll direction. As shown in FIG. 11(a), when the right side of the vehicle 1, viewed from the front, tilts downward and the roll angle of the in-vehicle camera 2 therefore changes to the right, the display of the left and right lane lines of the vehicle 1 changes as shown in FIG. 11(b). The left lane line in FIG. 11(b) is displayed with a wide width W1 at a position farther from the vehicle 1, while the right lane line is displayed with a narrow width W2 at a position closer to the vehicle 1. In such a case, the roll angle of the in-vehicle camera 2 corresponding to the tilt of the vehicle 1 can be recalculated, for example, by adjusting so that the widths of the left and right lane lines become equal. However, the actual width of a lane line is 15 cm to 20 cm, and the width of a lane line in the bird's-eye image is also small relative to the display area of the image, so accurate adjustment is more difficult than in the case of FIG. 11(c) described below.
 In the stop line shown in FIG. 11(c), the two lines extending in the vertical direction are parallel, and the two lines extending in the horizontal direction form a splayed trapezoid. Therefore, the roll angle of the in-vehicle camera 2 corresponding to the tilt of the vehicle 1 can be recalculated by adjusting the two splayed horizontal lines so that they become parallel to each other. Since the length L3 of these two horizontal lines is clearly longer than the widths W1 and W2 shown in FIG. 11(b), the roll angle of the in-vehicle camera 2 calculated in the case of FIG. 11(c) is considered to be more accurate than in the case of FIG. 11(b).
 Therefore, when calculating the roll angle of the in-vehicle camera 2, it is advisable to require that the width over which the feature points are distributed in the horizontal direction of the bird's-eye image be larger than a predetermined horizontal threshold. The horizontal threshold here corresponds to a second threshold.
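 A hedged sketch of the selection suggested above and illustrated in FIG. 13 follows: which of the camera variables to recalculate is chosen from the vertical and horizontal spread of the feature points. The threshold values and the return convention are assumptions.

```python
import numpy as np

# Hypothetical first and second thresholds (in pixels) for the spreads.
VERTICAL_THRESHOLD = 300    # needed to recalculate the pitch angle
HORIZONTAL_THRESHOLD = 300  # needed to recalculate the roll angle

def variables_to_calculate(points: np.ndarray) -> set:
    """Decide which of {'pitch', 'roll'} can be recalculated from the feature
    point spread: a wide vertical spread supports the pitch angle, a wide
    horizontal spread supports the roll angle (see FIGS. 10, 11 and 13)."""
    spread_x = points[:, 0].max() - points[:, 0].min()
    spread_y = points[:, 1].max() - points[:, 1].min()
    selected = set()
    if spread_y > VERTICAL_THRESHOLD:
        selected.add("pitch")
    if spread_x > HORIZONTAL_THRESHOLD:
        selected.add("roll")
    return selected
```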
 FIG. 12 shows an example of a bird's-eye image when the height of the vehicle 1 changes. As shown in FIG. 12(a), when the vehicle 1 sinks as a whole, the height of the in-vehicle camera 2 decreases. The lane markings displayed in this state appear as in FIG. 12(b), and the stop line as in FIG. 12(c). In both FIG. 12(b) and FIG. 12(c), the display before the height of the vehicle 1 changed is indicated by broken lines. Compared with these broken lines, the left and right lane markings in FIG. 12(b) are wider and displayed at positions farther from the vehicle 1. The stop line in FIG. 12(c) is likewise wider and displayed at a position farther from the vehicle 1.
 Thus, when the height of the vehicle 1 changes, the display in the bird's-eye image changes as if it were enlarged or reduced, so the shape-based consistency of feature points described with reference to FIGS. 10 and 11 cannot be used. The height of the vehicle 1 can nevertheless be calculated, for example, as follows. In the case of FIG. 12(b), if data on the actual width of the lane markings is prepared in advance, the height of the in-vehicle camera 2 corresponding to the height of the vehicle 1 can be recalculated by adjusting the image so that the lane-marking width displayed in the bird's-eye image matches the actual lane-marking width. Likewise, in the case of FIG. 12(c), if data on the width of the stop line is prepared in advance, the height of the in-vehicle camera 2 corresponding to the height of the vehicle 1 can be recalculated. In this way, calculating the height of the in-vehicle camera 2 relies on consistency of scale rather than consistency of shape. The scale of an image can be evaluated accurately only when the situation of the entire image can be grasped. Therefore, to evaluate the vertical scale of the bird's-eye image, the feature points must be well distributed at least in the vertical direction, and to evaluate the lateral scale, they must be well distributed at least in the lateral direction.
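 A minimal sketch of the scale-matching idea described above, under the simplifying assumption that the apparent width of a ground feature in the bird's-eye image is proportional to the ratio of the assumed camera height to the true camera height; the variable names and the example widths are assumptions for illustration only:

    def recalculate_camera_height(current_height_m, measured_width_m, actual_width_m):
        """Adjust the assumed camera height so that the lane-marking (or stop-line)
        width measured in the bird's-eye image matches the known actual width.
        Simplifying assumption: apparent width = actual width * assumed height / true height,
        so a feature that appears too wide means the camera is lower than assumed."""
        return current_height_m * actual_width_m / measured_width_m

    # Example: a lane marking known to be 0.15 m wide appears 0.18 m wide in a
    # bird's-eye image generated with an assumed camera height of 1.0 m.
    corrected = recalculate_camera_height(1.0, measured_width_m=0.18, actual_width_m=0.15)
    # corrected is about 0.83 m, i.e. the camera (and the vehicle) has sunk.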
 As described above, when the height of the vehicle 1 changes, the display in the bird's-eye image changes as if it were enlarged or reduced; however, the display may also change as if it were enlarged or reduced when the vehicle 1 tilts in the pitch direction. Moreover, the tendency of such display changes depends on the installation state of the in-vehicle camera 2, such as whether it is mounted at the front, rear, left, or right of the vehicle 1 and how much it sinks depending on the stiffness of the bumper, so there is a risk of calculating the wrong variable. In such a case, data representing the relationship between changes in the posture of the vehicle 1 and changes in the display of the bird's-eye image may be acquired in advance with the in-vehicle camera 2 mounted on the vehicle 1, and that data may be consulted to decide which of the three variables, namely the height, roll angle, and pitch angle of the in-vehicle camera 2, should be calculated.
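 To illustrate how such pre-acquired trend data might be consulted, the sketch below is purely hypothetical; the table contents, the pattern names, and the idea of keying on a display-change category are assumptions rather than anything specified in the disclosure:

    # Hypothetical trend table acquired in advance with the camera mounted on this
    # particular vehicle: it records which display-change pattern each posture
    # change tends to produce, so the correct variable can be chosen when the
    # patterns would otherwise be ambiguous.
    TREND_TABLE = {
        "uniform_enlargement": "height",    # whole ground plane appears scaled
        "vertical_only_stretch": "pitch",   # stretch mainly along the traveling direction
        "left_right_asymmetry": "roll",     # left/right lane markings change differently
    }

    def variable_to_recalculate(observed_pattern):
        return TREND_TABLE.get(observed_pattern)  # None if the pattern is inconclusive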
 When the posture of the vehicle 1 changes, the pitch angle, roll angle, and height described above do not necessarily change individually; they may change in combination. Even in such a case, the relationship still holds that whether the pitch angle or the roll angle of the in-vehicle camera 2 can be calculated with good accuracy depends on whether the feature points are well distributed in the vertical direction or in the lateral direction of the bird's-eye image. It is therefore preferable to select the variables to be calculated as shown in FIG. 13.
 FIG. 13 shows an example of selecting the variables according to the distribution of the feature points. As described above, the distribution of the feature points is considered separately in the vertical and lateral directions of the bird's-eye image, and the cases are divided according to whether the width over which the feature points are distributed is large or small in each direction.
 When the width over which the feature points are distributed is small in both the vertical and lateral directions, none of the pitch angle, roll angle, or height of the in-vehicle camera 2 is calculated, as described above in the present embodiment.
 When the width over which the feature points are distributed is large in the vertical direction and small in the lateral direction, the pitch angle is calculated. In this case, the pitch angle can be calculated with good accuracy as described with reference to FIG. 10(b), whereas the roll angle is difficult to calculate accurately as described with reference to FIG. 11(b).
 When the width over which the feature points are distributed is small in the vertical direction and large in the lateral direction, the roll angle is calculated. In this case, the roll angle can be calculated with good accuracy as described with reference to FIG. 11(c), whereas the pitch angle is difficult to calculate accurately as described with reference to FIG. 10(c).
 When the width over which the feature points are distributed is large in both the vertical and lateral directions, all of the pitch angle, roll angle, and height of the in-vehicle camera 2 may be calculated, in the same manner as in the present embodiment. Besides this case, the height of the in-vehicle camera 2 may also be calculated whenever the width over which the feature points are distributed is large in either the vertical or the lateral direction. Alternatively, the height of the in-vehicle camera 2 may be calculated on the condition that feature points whose scale can be matched, such as the width of a lane marking or a stop line, have been extracted.
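 The four cases of FIG. 13 can be summarized in the following sketch, offered as an illustration only; the function and its variant flag are assumptions, and the thresholds are the ones discussed for FIGS. 10 and 11:

    def select_variables(vertical_width, lateral_width,
                         first_threshold, second_threshold,
                         height_from_either_direction=False):
        """Select which of pitch angle, roll angle, and camera height to
        recalculate, following the case division of FIG. 13 (sketch only)."""
        vertical_large = vertical_width > first_threshold
        lateral_large = lateral_width > second_threshold

        selected = set()
        if vertical_large and lateral_large:
            selected = {"pitch", "roll", "height"}  # both directions well covered
        elif vertical_large:
            selected = {"pitch"}                    # FIG. 10(b): pitch accurate, roll not
        elif lateral_large:
            selected = {"roll"}                     # FIG. 11(c): roll accurate, pitch not
        # both small: nothing is recalculated

        # Variant mentioned in the description: the height may also be
        # recalculated when either direction alone is well covered.
        if height_from_either_direction and (vertical_large or lateral_large):
            selected.add("height")
        return selected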
 As described with reference to FIGS. 10 to 13, if the distribution of the feature points is checked separately in the vertical and lateral directions and only the variables for which the conditions are favorable are selected and calculated from among the pitch angle, roll angle, and height of the in-vehicle camera 2, the accuracy of the calculated variables can be kept high. In addition, because restricting the calculation to the selected variables reduces the time required, it becomes easier to follow temporary changes in the vehicle posture. The variables that were not selected can be calculated later, when feature points satisfying the conditions are obtained from a bird's-eye image of a subsequent capture. Then, as described in the present embodiment, the conversion table is updated after the amount of change in the newly calculated variables has been checked (S109 and S110 in FIG. 6).
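 The update flow referred to above (S109 and S110 in FIG. 6) can be sketched as follows; the parameter container, the threshold value, and the table-rebuild hook are illustrative assumptions rather than the actual implementation:

    def maybe_update_conversion_table(old_params, new_params, variation_threshold,
                                      rebuild_table):
        """Update the conversion table only when the newly calculated camera
        position/angle does not deviate too much from the previous values
        (corresponding to the checks at S109 and S110). Both parameter dicts
        are assumed to contain the same selected variables."""
        variation = max(abs(new_params[k] - old_params[k]) for k in new_params)
        if variation <= variation_threshold:
            return rebuild_table(new_params)  # table recomputed from the new parameters
        return None                           # change too large: keep the current table

    # Hypothetical usage: only the pitch angle was selected and recalculated.
    old = {"pitch_deg": 10.0}
    new = {"pitch_deg": 10.4}
    table = maybe_update_conversion_table(old, new, variation_threshold=1.0,
                                          rebuild_table=lambda p: ("table for", p))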
 Alternatively, when the pitch angle, roll angle, and height of the in-vehicle camera 2 are calculated, all of the variables may be calculated regardless of the feature-point distribution, and when the conversion table is updated, only the variables selected as shown in FIG. 13 may be used for the update.
 The image conversion apparatus 10 according to this modification can calculate each of the pitch angle, roll angle, and height of the in-vehicle camera 2 when the conditions for it are favorable, and refers to the conversion table obtained on that basis, so the captured image can be converted into the bird's-eye image accurately.
 While an embodiment and modifications have been illustrated above, the present disclosure is not limited to the embodiment and modifications described above and can take various forms without departing from the gist of the present disclosure.

Claims (7)

  1.  An image conversion apparatus (10) that acquires a captured image from an in-vehicle camera (2) capturing the surroundings of a vehicle (1), converts the captured image into a bird's-eye image as if captured from a viewpoint above the vehicle, and then displays the bird's-eye image on an in-vehicle monitor (3), the image conversion apparatus comprising:
     an image conversion unit (12) that converts the captured image into the bird's-eye image according to the position and angle of the in-vehicle camera;
     a feature point extraction unit (15) that extracts a plurality of feature points from the bird's-eye image or the captured image;
     a calculation determination unit (16) that determines whether to calculate the position and angle of the in-vehicle camera based on a distribution of the plurality of feature points extracted from the bird's-eye image or the captured image; and
     a calculation unit (17) that calculates the position and angle of the in-vehicle camera based on the plurality of feature points when the calculation determination unit determines that the position and angle of the in-vehicle camera are to be calculated.
  2.  The image conversion apparatus according to claim 1, wherein
     the calculation determination unit determines whether to calculate the position and angle of the in-vehicle camera based on a width of the distribution of the plurality of feature points extracted from the bird's-eye image or the captured image.
  3.  The image conversion apparatus according to claim 1 or 2, further comprising
     an extraction determination unit (14) that determines whether to extract the feature points according to detection information acquired by a detector (4) that detects a surrounding environment of the vehicle or a state of the vehicle.
  4.  The image conversion apparatus according to any one of claims 1 to 3, wherein
     the image conversion unit converts the captured image into the bird's-eye image by referring to a conversion table in which the pixel position in the captured image corresponding to each pixel position in the bird's-eye image has been calculated based on the position and angle of the in-vehicle camera,
     the image conversion apparatus further comprising:
     a variation amount acquisition unit (18) that acquires a variation amount by comparing the position and angle of the in-vehicle camera newly calculated by the calculation unit with the position and angle of the in-vehicle camera before the new calculation; and
     a conversion table update unit (19) that updates the conversion table based on the newly calculated position and angle of the in-vehicle camera when the variation amount is equal to or less than a predetermined variation threshold.
  5.  The image conversion apparatus according to claim 4, wherein
     the calculation determination unit
     determines to calculate an angle in a pitch direction, which moves the optical axis of the in-vehicle camera up and down, when a width of the distribution of the feature points in the vertical direction of the bird's-eye image is larger than a predetermined first threshold, and
     determines to calculate an angle in a roll direction, which rotates the in-vehicle camera about its optical axis, when a width of the distribution of the feature points in the lateral direction of the bird's-eye image is larger than a predetermined second threshold.
  6.  The image conversion apparatus according to any one of claims 1 to 4, wherein
     the feature point extraction unit extracts the plurality of feature points from the bird's-eye image.
  7.  An image conversion method applied to a vehicle equipped with an in-vehicle camera, the method acquiring a captured image from the in-vehicle camera that has captured the surroundings of the vehicle, converting the captured image into a bird's-eye image as if captured from a viewpoint above the vehicle, and then displaying the bird's-eye image on an in-vehicle monitor, the method comprising:
     a step (S104) of converting the captured image into the bird's-eye image according to the position and angle of the in-vehicle camera;
     a step (S105) of extracting a plurality of feature points from the bird's-eye image or the captured image;
     a step (S107) of determining whether to calculate the position and angle of the in-vehicle camera based on a distribution of the plurality of feature points extracted from the bird's-eye image or the captured image; and
     a step (S108) of calculating the position and angle of the in-vehicle camera based on the plurality of feature points when it is determined that the position and angle of the in-vehicle camera are to be calculated.

PCT/JP2015/005512 2014-11-26 2015-11-03 Image transformation apparatus and image transformation method WO2016084308A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
DE112015005317.4T DE112015005317B4 (en) 2014-11-26 2015-11-03 IMAGE CONVERSION DEVICE AND IMAGE CONVERSION METHOD

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2014239377A JP6507590B2 (en) 2014-11-26 2014-11-26 Image conversion apparatus and image conversion method
JP2014-239377 2014-11-26

Publications (1)

Publication Number Publication Date
WO2016084308A1 true WO2016084308A1 (en) 2016-06-02

Family

ID=56073907

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2015/005512 WO2016084308A1 (en) 2014-11-26 2015-11-03 Image transformation apparatus and image transformation method

Country Status (3)

Country Link
JP (1) JP6507590B2 (en)
DE (1) DE112015005317B4 (en)
WO (1) WO2016084308A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6565769B2 (en) * 2016-04-03 2019-08-28 株式会社デンソー In-vehicle camera mounting angle detection device, mounting angle calibration device, mounting angle detection method, mounting angle calibration method, and computer program
CN111246098B (en) * 2020-01-19 2022-02-22 深圳市人工智能与机器人研究院 Robot photographing method and device, computer equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008011174A (en) * 2006-06-29 2008-01-17 Hitachi Ltd Calibration device of on-vehicle camera, program, and car navigation system
JP2008131250A (en) * 2006-11-20 2008-06-05 Aisin Seiki Co Ltd Correcting device for on-board camera and production method for vehicle using same correcting device
JP2008182652A (en) * 2007-01-26 2008-08-07 Sanyo Electric Co Ltd Camera posture estimation device, vehicle, and camera posture estimating method
JP2013115540A (en) * 2011-11-28 2013-06-10 Clarion Co Ltd On-vehicle camera system, and calibration method and program for same

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5991842B2 (en) 2012-04-16 2016-09-14 アルパイン株式会社 Mounting angle correction device and mounting angle correction method for in-vehicle camera

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008011174A (en) * 2006-06-29 2008-01-17 Hitachi Ltd Calibration device of on-vehicle camera, program, and car navigation system
JP2008131250A (en) * 2006-11-20 2008-06-05 Aisin Seiki Co Ltd Correcting device for on-board camera and production method for vehicle using same correcting device
JP2008182652A (en) * 2007-01-26 2008-08-07 Sanyo Electric Co Ltd Camera posture estimation device, vehicle, and camera posture estimating method
JP2013115540A (en) * 2011-11-28 2013-06-10 Clarion Co Ltd On-vehicle camera system, and calibration method and program for same

Also Published As

Publication number Publication date
DE112015005317T5 (en) 2017-08-24
JP6507590B2 (en) 2019-05-08
JP2016100887A (en) 2016-05-30
DE112015005317B4 (en) 2022-01-27

Similar Documents

Publication Publication Date Title
KR102022388B1 (en) Calibration system and method using real-world object information
WO2018225446A1 (en) Map points-of-change detection device
JP4820221B2 (en) Car camera calibration device and program
JP6019646B2 (en) Misalignment detection apparatus, vehicle, and misalignment detection method
US8428362B2 (en) Scene matching reference data generation system and position measurement system
JP5962771B2 (en) Moving object position / posture angle estimation apparatus and moving object position / posture angle estimation method
US8452103B2 (en) Scene matching reference data generation system and position measurement system
JP6560355B2 (en) Landmark recognition apparatus and recognition method
JP2002197469A (en) Device for detecting traffic lane
JP6822427B2 (en) Map change point detector
JP5539250B2 (en) Approaching object detection device and approaching object detection method
CN110023953A (en) Information processing equipment, imaging device, apparatus control system, moving body, information processing method and computer program product
JP4296287B2 (en) Vehicle recognition device
WO2018149539A1 (en) A method and apparatus for estimating a range of a moving object
JP2018085059A (en) Information processing device, imaging apparatus, device control system, information processing method and program
WO2016084308A1 (en) Image transformation apparatus and image transformation method
JP2940366B2 (en) Object tracking recognition device
JP6331629B2 (en) Lane recognition device
JP3304905B2 (en) Object tracking recognition device
KR102368262B1 (en) Method for estimating traffic light arrangement information using multiple observation information
CN112414430A (en) Electronic navigation map quality detection method and device
JP4106163B2 (en) Obstacle detection apparatus and method
EP3389015A1 (en) Roll angle calibration method and roll angle calibration device
JP4629638B2 (en) Vehicle periphery monitoring device
JP6331628B2 (en) Lane recognition device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15863927

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 112015005317

Country of ref document: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15863927

Country of ref document: EP

Kind code of ref document: A1