JP4788518B2 - Image conversion apparatus and image conversion method


Info

Publication number
JP4788518B2
Authority
JP
Japan
Prior art keywords
image
lane
angle
conversion
overhead
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
JP2006214361A
Other languages
Japanese (ja)
Other versions
JP2008040799A (en)
Inventor
洋 及川
和英 足立
Original Assignee
アイシン・エィ・ダブリュ株式会社 (Aisin AW Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by アイシン・エィ・ダブリュ株式会社 (Aisin AW Co., Ltd.)
Priority to JP2006214361A
Publication of JP2008040799A
Application granted
Publication of JP4788518B2
Application status: Active
Anticipated expiration

Description

  The present invention relates to an image conversion apparatus and method for detecting other vehicles existing around a host vehicle.

Conventionally, there is known a technique in which a road is photographed with a camera mounted on the host vehicle, the captured image is converted into a bird's-eye view, features of vehicles are extracted from the converted image, and other vehicles existing around the host vehicle are thereby detected. For example, Patent Literature 1 discloses a technique for extracting edges from the image after overhead conversion and detecting the presence of another vehicle based on those edges.
JP-A-4-163249

In the prior art, other vehicles cannot be detected accurately with a low calculation load. That is, in the conventional technology described above, the overhead conversion is performed without considering the direction in which the lane extends, so the direction of the lane included in the overhead image is not constant. For example, on roads where the angle between the traveling lane of the host vehicle and the approach destination lane into which the host vehicle enters is not constant (such as a junction of a highway), the approach destination lane can point in various directions in the overhead image.

In this case, in order to accurately detect the position of another vehicle in the approach destination lane, the distance to that vehicle, and the like, the calculation must take the inclination of the lane into account, which increases the calculation load and makes accurate calculation difficult.
The present invention has been made in view of the above problems, and an object thereof is to provide a technique for accurately detecting other vehicles with a low calculation load.

  In order to achieve the above object, the present invention specifies, based on the angle between the traveling direction of the host vehicle and the approach destination lane of the host vehicle, a conversion process that directs the approach destination lane in a specific direction in the overhead image, and acquires the overhead image by this conversion process. As a result, the approach destination lane always faces a predetermined direction in the overhead image, so an overhead image is obtained from which other vehicles can be detected accurately with a low calculation load.

  In other words, when the traveling direction of the host vehicle and the approach destination lane are not parallel on the road on which the host vehicle travels, the direction of the approach destination lane as seen from the host vehicle can vary depending on the layout of each road, the position of the host vehicle within its lane, and so on. Therefore, when an image acquired by image acquisition means fixed to the host vehicle is simply converted into an overhead image, the direction of the approach destination lane is not constant in that overhead image.

  On the other hand, in order to detect the position of another vehicle (the relative relationship between the host vehicle and the other vehicle) based on the image of the approach destination lane included in the overhead image, the direction of that lane in the image must be specified. In general, it is difficult to specify the direction of a lane in an image accurately, and dedicated processing is required to do so. In this case, therefore, the calculation load increases and other vehicles cannot be detected accurately.

  According to the present invention, however, the approach destination lane always faces a fixed direction in the overhead image, so the position of another vehicle can be determined very easily from the image of that lane. That is, since positions in the overhead image can be accurately associated with actual positions in advance, the position of another vehicle can be accurately specified from the position of its image within the overhead image. For this reason, other vehicles can be detected accurately with a low calculation load.

  Further, as described above, a conversion process that directs the approach destination lane in a specific direction in the overhead image also yields an overhead image well suited for use in a navigation device or the like mounted on the host vehicle. That is, if such an overhead image is output by a navigation device, the approach destination lane is always in a specific direction and can therefore be grasped intuitively and easily. Furthermore, by placing the approach destination lane at the center of the display device, the most important part of the image can be arranged at the center.

  Note that the image acquisition unit only needs to acquire an image including the detection area of other vehicles, and various configurations can be adopted, such as photographing the approach destination lane with a camera mounted on the host vehicle. The angle acquisition means only needs to acquire the traveling direction of the host vehicle and the direction in which the approach destination lane extends, and to acquire the angle at which these directions cross; various configurations can be adopted here as well.

  For example, the traveling direction of the host vehicle may be obtained from a sensor that acquires the behavior of the host vehicle (its heading, travel locus, or the like), or the lane in which the host vehicle is traveling may be specified based on a map database stored in advance in a storage medium and the direction in which that lane extends taken as the traveling direction of the host vehicle. The same applies to the direction in which the approach destination lane extends: a configuration may be adopted that identifies, based on the map database, the approach destination lane to which the current travel lane is connected and acquires the direction in which it extends.

  Of course, in addition to the above configurations, an image of the lane in which the host vehicle travels and an image of the approach destination lane may be extracted from the image acquired by the image acquisition unit, the directions in which these lanes extend specified from both images, and the angle acquired from them. In the present invention, since the approach destination lane is always directed in a specific direction, it is preferable that the angle acquisition unit acquire the angle every time an image is acquired by the image acquisition unit, so that an appropriate conversion process can be performed for each image.

  The overhead conversion means only needs to convert the image including the approach destination lane into an overhead image as viewed from above; for example, a coordinate conversion that maps each pixel value of the image before overhead conversion to a pixel value of the overhead image can be adopted. The overhead conversion is a viewpoint conversion, and by performing a conversion accompanied by a rotation process, the approach destination lane can be turned in a specific direction in the overhead image.

  Therefore, a conversion process involving rotation at a desired rotation angle can be specified by, in effect, specifying the rotation angle of the rotation process that accompanies the viewpoint conversion. Further, the angle between the traveling direction of the host vehicle and the approach destination lane corresponds one-to-one with the direction of the approach destination lane in the image before overhead conversion. In the present invention, therefore, the rotation angle of the rotation process accompanying the viewpoint conversion is in effect specified based on this angle. As a result, a conversion process that directs the approach destination lane in a specific direction in the overhead image can be specified.
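  As an illustrative sketch (the notation here is ours, not the patent's): if the base viewpoint conversion maps the traveling direction of the host vehicle to the image vertical, and the approach destination lane crosses the traveling direction at an angle θ, composing the viewpoint conversion with an in-plane rotation by -θ sends the lane to the vertical:

$$
\begin{pmatrix} u' \\ v' \end{pmatrix}
= \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix}
\begin{pmatrix} u \\ v \end{pmatrix}
$$

where (u, v) are overhead-image coordinates before the rotation and (u', v') after it.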

  The conversion process may direct the approach destination lane in any specific direction in the overhead image, but the specific direction may in particular be a direction parallel to one of the orthogonal axes that define the coordinates of the overhead image. With this configuration, an overhead image can be acquired in which the approach destination lane is always in the vertical or horizontal direction of the image. The position of another vehicle in the overhead image can then be specified extremely easily and accurately by determining in advance the relative relationship between the host vehicle and an image existing at each vertical and horizontal position in the overhead image. Moreover, if this overhead image is output by a navigation device or the like, the approach destination lane can be grasped intuitively and easily.

  Furthermore, the overhead conversion processing may use a table defined in advance that specifies the correspondence between coordinates before and after conversion, or it may calculate the converted coordinates for each pixel of the image.

  As an example of the former, a configuration may be adopted in which coordinate correspondence data defining the correspondence between coordinates before and after the overhead conversion is stored in a predetermined storage medium for each of a plurality of angles. That is, table data for performing the coordinate conversion that directs the approach destination lane in a specific direction is prepared in advance for each angle that can be acquired by the angle acquisition means. With this configuration, the overhead conversion can be performed simply by referring to the coordinate correspondence data corresponding to the acquired angle, so the conversion is carried out with a very low calculation load.
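  A minimal sketch of such table-based conversion in Python (all names are illustrative assumptions, and the rotation-only mapping is a toy stand-in; real coordinate correspondence data would also encode the ground-plane perspective projection described later):

```python
import numpy as np

def precompute_table(angle_deg, out_h=200, out_w=200, src_h=480, src_w=640):
    """Toy stand-in for one piece of coordinate correspondence data: for a
    given lane angle, map every overhead pixel to a source pixel so that
    the approach destination lane ends up vertical in the overhead image."""
    th = np.deg2rad(angle_deg)
    ys, xs = np.mgrid[0:out_h, 0:out_w].astype(float)
    cy, cx = out_h / 2.0, out_w / 2.0
    # rotate about the image centre by -angle so the lane maps to vertical
    sy = np.cos(th) * (ys - cy) - np.sin(th) * (xs - cx) + src_h / 2.0
    sx = np.sin(th) * (ys - cy) + np.cos(th) * (xs - cx) + src_w / 2.0
    map_y = np.clip(sy, 0, src_h - 1).astype(np.intp)
    map_x = np.clip(sx, 0, src_w - 1).astype(np.intp)
    return map_y, map_x

# one table per quantized angle, prepared in advance and stored
tables = {a: precompute_table(a) for a in range(-30, 31, 2)}

def overhead_convert(src, lane_angle_deg):
    """Online conversion: pick the stored table nearest the acquired angle
    and apply it by pure array indexing (very low calculation load)."""
    key = min(tables, key=lambda a: abs(a - lane_angle_deg))
    map_y, map_x = tables[key]
    return src[map_y, map_x]
```

  The online step involves no trigonometry at all, which is the point of precomputing one table per angle.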

  As an example of the latter, in a configuration in which a correction process that corrects the image orientation based on the angle between the direction parallel to the traveling direction of the host vehicle and the optical axis of the image acquisition unit is performed together with the overhead conversion, it suffices to adjust the correction amount. In other words, the correction process can in effect perform a rotation that rotates the optical axis relative to the traveling direction of the host vehicle. Therefore, by referring to the angle between the traveling direction of the host vehicle and the approach destination lane, the rotation angle (correction amount) needed to direct the approach destination lane in a specific direction in the overhead image can be specified. With this configuration, the approach destination lane can be accurately directed in a specific direction regardless of its direction in the image before overhead conversion.

  Furthermore, a configuration may be adopted that corrects the influence of the behavior of the host vehicle on the image. For example, the apparatus may include sensors that acquire information corresponding to the behavior of the host vehicle and perform the overhead conversion while offsetting image changes caused by that behavior. With this configuration, the approach destination lane can always be pointed in a specific direction in the overhead image regardless of how the host vehicle behaves. The sensor for acquiring the behavior of the host vehicle may be a three-axis acceleration sensor that detects the yaw, roll, pitch, and so on of the host vehicle, a sensor that detects the heading of the host vehicle, or any of various other sensors.

  Further, the technique of the present invention of directing the approach destination lane in a specific direction in the overhead image based on the angle between the traveling direction of the host vehicle and the approach destination lane can also be applied as a program or a method. The apparatus, program, and method described above may be realized as a single image conversion apparatus or may be realized using components shared with other devices provided in the vehicle; various embodiments are included. For example, a navigation device, method, and program including the image conversion apparatus described above can be provided. Appropriate changes may also be made, such as implementing some parts in software and some in hardware. The invention is also established as a recording medium for a program that controls the image conversion apparatus. Of course, the software recording medium may be a magnetic recording medium, a magneto-optical recording medium, or any recording medium developed in the future.

Here, embodiments of the present invention will be described in the following order.
(1) Configuration of navigation device:
(2) Guidance processing:
(3) Creation of coordinate correspondence data:
(4) Other embodiments:

(1) Configuration of navigation device:
FIG. 1 is a block diagram showing the configuration of a navigation apparatus 10 including an image conversion apparatus according to the present invention. The navigation device 10 includes a control unit 20 comprising a CPU, RAM, ROM, and the like, and a storage medium 30; the control unit 20 can execute programs stored in the storage medium 30 or the ROM. In this embodiment, a navigation program 21 can be executed as one of these programs, and among its functions the navigation program 21 provides guidance for causing the host vehicle to enter the approach destination lane. An image conversion process is executed to perform this guidance: the approach destination lane appears in the converted image, and whether another vehicle is present in that lane is detected based on the converted image.

  In order to realize the navigation program 21, the host vehicle (the vehicle equipped with the navigation device 10) includes a camera 40, a GPS receiver 41, a vehicle speed sensor 42, a steering angle sensor 43, a three-axis acceleration sensor 44, a direction sensor 45, a speaker 46, and a display unit 47; signals are exchanged between these units and the control unit 20 through interfaces (not shown).

  The camera 40 is attached to the host vehicle so that the road behind the vehicle is included in its field of view, and outputs image data representing the captured image. The control unit 20 acquires this image data through an interface (not shown), converts the image, and uses it for detecting other vehicles and for various types of guidance. The GPS receiver 41 receives radio waves from GPS satellites and outputs information for calculating the current position of the host vehicle via an interface (not shown); the control unit 20 acquires this information to obtain the current position of the host vehicle.

  The vehicle speed sensor 42 outputs a signal corresponding to the rotational speed of the wheels of the host vehicle. The control unit 20 acquires this signal via an interface (not shown) and obtains the speed of each wheel (the relative speed between the road surface and the wheel). The steering angle sensor 43 outputs a signal corresponding to the rotation angle of the steering wheel of the host vehicle. The control unit 20 acquires this signal via an interface (not shown) and obtains information indicating the steering rotation angle.

  The three-axis acceleration sensor 44 outputs signals corresponding to accelerations in three orthogonal directions. The control unit 20 acquires these signals via an interface (not shown) and obtains information corresponding to the behavior (yaw, roll, pitch) of the host vehicle. The direction sensor 45 outputs a signal corresponding to the traveling direction of the host vehicle. The control unit 20 acquires this signal through an interface (not shown) and can specify the traveling direction of the host vehicle. Of course, these sensors can take various forms, and sensors used for attitude control of the host vehicle may be used.

  By executing the navigation program 21, the control unit 20 outputs information indicating the presence of other vehicles behind the host vehicle, the route to the destination, and the like based on the various information acquired as described above, and thereby guides the host vehicle. That is, the control unit 20 outputs control signals for voice guidance to the speaker 46, which outputs the guidance as sound, and outputs control signals for image-based guidance to the display unit 47, which displays the corresponding images.

  In the present embodiment, as part of the functions of the navigation program 21 described above, the image is converted into an overhead image, other vehicles are detected based on that overhead image, and entry into the approach destination lane is guided; by this processing the navigation device 10 functions as a navigation device according to the present invention. To perform this processing, the navigation program 21 includes an image conversion unit 22 and a guide unit 23, and the image conversion unit 22 further includes an image acquisition unit 22a, an angle acquisition unit 22b, a behavior information acquisition unit 22c, and an overhead conversion unit 22d. The storage medium 30 stores map information 30a for guidance by the navigation program 21 and coordinate correspondence data 30b for image conversion.

  The map information 30a includes node data indicating nodes set on roads, link data indicating connections between nodes, data indicating landmarks, and the like, and is used for specifying the position of the host vehicle and for guidance to the destination. In the present embodiment, the map information 30a is also used when acquiring the angle between the travel lane in which the host vehicle travels and the approach destination lane into which it enters.

  The coordinate correspondence data 30b is data for converting the image acquired by the camera 40 into an overhead image: table data that associates the coordinates of each pixel in the image before conversion with the coordinates of each pixel in the image after overhead conversion. The coordinate correspondence data 30b only needs to associate the coordinates of the images before and after conversion; it may define the correspondence for all coordinate values needed for the image, or define it only for representative coordinate values, with values between the representative coordinates calculated by interpolation.

  Further, in the present embodiment, the coordinate correspondence data 30b includes data corresponding to each of a plurality of camera parameters, set so that the rotation angle with respect to the optical axis of the camera 40 differs for each camera parameter. That is, each camera parameter corresponds to a combination of the angle between the travel lane and the approach destination lane and the behavior of the host vehicle, and the coordinate correspondence data 30b is set so that when an image captured at the angle and behavior corresponding to a given camera parameter is converted, the approach destination lane becomes parallel to the vertical direction of the overhead image. Details of the camera parameters and the creation of the coordinate correspondence data 30b will be described later.
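  Conceptually, the stored data can be thought of as a table keyed by quantized camera parameters; a sketch continuing the earlier illustration (the quantization steps, names, and the particular parameter combination are assumptions):

```python
def quantize(value, step):
    """Snap a measured value to the grid used when the tables were built."""
    return round(value / step) * step

def table_key(lane_angle, d_yaw, d_pitch, d_roll):
    """Camera parameter = lane angle plus the camera's deviation from its
    reference posture; each combination selects one stored table."""
    return (quantize(lane_angle, 2.0),
            quantize(d_yaw, 1.0),
            quantize(d_pitch, 1.0),
            quantize(d_roll, 1.0))

# coordinate_correspondence[table_key(...)] -> (map_y, map_x) index arrays
```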

  Meanwhile, the image acquisition unit 22a is a module that acquires image data from the camera 40 and delivers the acquired data to the overhead conversion unit 22d. The angle acquisition unit 22b is a module that acquires the angle between the traveling direction of the host vehicle and the approach destination lane; in the present embodiment, the current position of the host vehicle is specified with reference to the map information 30a, and the crossing angle of the extension directions of the lanes at this position is taken as the angle.

  That is, the angle acquisition unit 22b specifies the position of the host vehicle based on the signal from the GPS receiver 41 and, referring to the map information 30a, acquires the extension direction of the travel lane in which the host vehicle is traveling. It further acquires the extension direction of the approach destination lane adjacent to the travel lane, and then acquires the angle at which the extension directions of the two lanes cross. This angle can form part of the camera parameters described above.

  The behavior information acquisition unit 22c acquires information indicating the direction in which the host vehicle is facing based on the signals from the steering angle sensor 43 and the direction sensor 45, and acquires information indicating the yaw, pitch, and roll of the host vehicle based on the signal from the three-axis acceleration sensor 44. These pieces of information constitute behavior information indicating the behavior of the host vehicle and can form part of the camera parameters described above.

  The overhead conversion unit 22d is a module that converts an image captured by the camera 40 into an overhead image while directing the approach destination lane in the vertical direction. That is, a camera parameter is specified based on the angle and behavior information described above, and coordinate conversion is performed with reference to the coordinate correspondence data 30b associated with that camera parameter. The converted overhead image is passed to the guide unit 23 and used for guidance.

  The guide unit 23 is a module that detects other vehicles in the approach destination lane based on the overhead image and outputs guidance to the speaker 46 and the display unit 47 so that the host vehicle can enter a position where no other vehicle exists. For detecting other vehicles, various configurations can be adopted, such as extracting edges or feature values (luminance distribution, etc.) from the image and detecting the presence of another vehicle when vehicle-like edges or feature values are extracted.

  When another vehicle is detected, the relative relationship (distance) between the other vehicle and the host vehicle is acquired from its position in the overhead image, and guidance information is output so that the host vehicle enters the approach destination lane where no other vehicle exists. In the present embodiment, the overhead image is displayed on the display unit 47, guidance is given on the acceleration or deceleration the host vehicle needs in order to move to a position where no other vehicle exists, guidance urging a lane change is given when there is no other vehicle next to the host vehicle, and notification is given of other vehicles approaching the host vehicle.

(2) Guidance processing:
Next, guidance processing performed by the navigation device 10 with the above configuration will be described. FIG. 2 is a flowchart showing the processing of the navigation program 21 in the present embodiment. In this embodiment, while the navigation program 21 is providing guidance to the destination, the process shown in FIG. 2 is started when the turn signal is switched on or when it is determined, based on information from the GPS receiver 41, that the host vehicle is about to merge onto the main road of a highway.

When this process is started, the image acquisition unit 22a acquires image data from the camera 40 (step S100). FIG. 3 shows an example: FIG. 3A shows an image acquired by the camera 40, and FIGS. 3B and 3C show images after overhead conversion. In this embodiment, the camera 40 is attached to the rearview mirror of the host vehicle, with its optical axis directed so that the approach destination lane located next to the travel lane is captured at the center of the field of view. In this example the lane on the right is the approach destination lane: the lane located at the center of FIG. 3A is the approach destination lane L2, and the lane located at the lower right is the travel lane L1.

FIGS. 3B and 3C show parts of images after overhead conversion. FIG. 3B shows an overhead conversion in which the travel lane L1 always faces the vertical direction of the image (the direction parallel to the vertical one of the orthogonal axes that define the image coordinates), while FIG. 3C shows an overhead conversion in which the approach destination lane L2 always faces the vertical direction of the image. If the relative relationship between the lane L1 and the host vehicle is almost constant, a conversion as in FIG. 3B can be performed by a fixed, predetermined overhead conversion process. However, as described above, the angle between the travel lane L1 and the approach destination lane L2 is not constant.

Therefore, in this embodiment, in order to keep the approach destination lane L2 constantly oriented in the vertical direction of the image, the angle acquisition unit 22b acquires the angle between the travel lane L1 and the approach destination lane L2 (step S110), and the behavior information acquisition unit 22c acquires the behavior information of the host vehicle (step S120). That is, the angle acquisition unit 22b specifies the position of the host vehicle based on the signal from the GPS receiver 41, specifies the directions in which the travel lane L1 and the approach destination lane L2 extend at this position, and acquires the angle between the two.

The behavior information acquisition unit 22c acquires the angle between the direction in which the host vehicle is facing and the travel lane L1 based on the signals from the steering angle sensor 43 and the direction sensor 45, and acquires the yaw, pitch, and roll angles of the host vehicle based on the signal from the three-axis acceleration sensor 44. The overhead conversion unit 22d then determines the camera parameters based on the information corresponding to these angles (step S130). In the present embodiment, a camera parameter is a parameter corresponding to the deviation between the reference posture of the camera 40 and its actual posture.

In the present embodiment, as described above, coordinate correspondence data 30b corresponding to a plurality of camera parameters is stored in the storage medium 30. By referring to the coordinate correspondence data 30b corresponding to a given camera parameter, an overhead conversion that directs the approach destination lane L2 in the vertical direction of the image can be carried out on an image captured at the lane angle and vehicle behavior corresponding to that camera parameter. Therefore, the overhead conversion unit 22d identifies the coordinate correspondence data 30b corresponding to the camera parameters acquired in step S130 (step S140) and performs the overhead conversion by carrying out the coordinate transformation with reference to that data (step S150).

With the overhead conversion described above, an image in which the approach destination lane L2 faces the vertical direction of the image, as in FIG. 3C, is always obtained regardless of the lane angle and the behavior of the host vehicle. The guide unit 23 therefore detects the presence of other vehicles in the approach destination lane L2 based on the image after overhead conversion and provides guidance so that the host vehicle enters at a position where no other vehicle exists. In this embodiment, a plurality of detection areas is set in the approach destination lane L2, and the presence or absence of another vehicle is determined for each detection area. For this purpose, a detection area R of predetermined size is set repeatedly along the approach destination lane L2 in the overhead image, shifted in a fixed direction at intervals r (step S160). In FIG. 3C, an example of the detection region R is indicated by a solid rectangle, and a detection region set at a different position is indicated by a broken line.

  The guide unit 23 obtains the luminance of each pixel in these detection areas, generates a histogram, determines whether a distribution corresponding to the features of an image of another vehicle appears in the histogram, and thereby detects other vehicles (step S170). For example, if the image of another vehicle produces a peak of a specific width in a specific luminance range, the other vehicle is detected according to whether a similar peak appears in the histogram.
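  A compact sketch of steps S160 and S170 (the histogram criterion and all thresholds below are illustrative assumptions; the lane's left edge lane_x can be a constant precisely because the lane is always vertical):

```python
import numpy as np

def detect_other_vehicles(overhead, lane_x, region_h=40, region_w=60, r=20):
    """Slide a fixed-size detection region down the vertical approach lane
    at interval r and flag regions whose luminance histogram shows a
    vehicle-like peak."""
    hits = []
    for top in range(0, overhead.shape[0] - region_h + 1, r):
        region = overhead[top:top + region_h, lane_x:lane_x + region_w]
        hist, _ = np.histogram(region, bins=32, range=(0, 255))
        peak = int(hist.argmax())
        # toy criterion: a concentrated peak in a mid-luminance band
        if 8 <= peak <= 24 and hist[peak] > 0.3 * region.size:
            hits.append(top)
    # because the lane is vertical, each hit's row can be mapped directly
    # to a distance from the host vehicle fixed in advance
    return hits
```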

With the above detection, the presence or absence of another vehicle can be specified for each detection region, so the acceleration, lane-change timing, and the like required of the host vehicle can be determined from this information. The guide unit 23 therefore displays the overhead image shown in FIG. 3C on the display unit 47 and outputs guidance information indicating the required acceleration, lane-change timing, and the like to the speaker 46 and the display unit 47 (step S180). As a result, the driver can easily grasp the entry timing and the driving operations needed to enter the approach destination lane L2, and can carry out the entry.

In the present embodiment, since the conversion is performed so that the approach destination lane L2 always faces the vertical direction in the overhead image as described above, the detection accuracy for other vehicles can be improved. That is, when another vehicle is detected, its position is specified by the position of the detection area in the overhead image; because the approach destination lane L2 always faces the vertical direction as in FIG. 3C, positions in the overhead image can be associated with positions of other vehicles in advance, and the position can be identified very easily.

On the other hand, when the approach destination lane L2 is inclined and the inclination angle is not constant, as in FIG. 3B, the inclination angle of the lane must be calculated sequentially, and the relationship between positions in the image and actual positions must be calculated for each inclination angle. Compared with the present embodiment, the accuracy of these calculations is lower and the position of another vehicle cannot be detected as precisely. In addition, the lane inclination angle and position must be computed, requiring a higher calculation load than the present embodiment. For these reasons, the present embodiment achieves highly accurate detection with a low calculation load.

  Further, when setting the detection areas, in the present embodiment a detection area can be set very easily by shifting the frame by the fixed interval r in the vertical direction. Setting detection areas at constant intervals along an inclined lane as in FIG. 3B would require complicated processing such as calculating the inclination angle, increasing the calculation load. Moreover, if the detection areas in an inclined lane are divided perpendicular to the vertical direction of the image as in FIG. 3B, the direction perpendicular to the traveling direction of the vehicle differs from the direction of the sides of the detection area, so the positional accuracy of detection tends to decrease. Furthermore, when a display as in FIG. 3C is shown on the display unit 47, the important part is arranged at the center compared with a display as in FIG. 3B, and unnecessary parts such as an image of a zebra zone can be kept out of the center of the image.

(3) Creation of coordinate correspondence data:
Next, the creation of the coordinate correspondence data 30b and an example of the camera parameters will be described in detail. The coordinate correspondence data 30b may be any data that defines the correspondence between a three-dimensional coordinate system and the coordinate system of the image acquired by the camera 40 (hereinafter, the camera image); this correspondence is obtained by the conversions described below.

  FIG. 4 shows an example of the coordinate systems set when the coordinate correspondence data 30b is created. FIG. 4A shows the relationship between the three-dimensional coordinate system and the posture of the camera 40, and FIG. 4B shows the relationship between the optical axis of the camera and the coordinate systems of the overhead image and the camera image. In the example shown in FIG. 4A, a three-dimensional coordinate system is defined by a Z axis pointing vertically upward, an X axis pointing toward the rear of the vehicle, and a Y axis pointing toward the right side of the vehicle; the origin in the Z-axis direction is at ground height.

Further, the optical axis O of the camera 40 is indicated by a broken line, and the lens center of the camera 40 is at coordinates (X0, Y0, Z0) in the three-dimensional coordinate system. In this embodiment, the angle between the straight line obtained by projecting the optical axis O onto the XZ plane and the straight line Y = 0, Z = Z0 is the tilt angle θT; the angle between that projected line and the optical axis O is the pan angle θP; the angle between the straight line obtained by projecting the optical axis O onto the XY plane and the X axis is the angle θX; and the angle between the straight line obtained by projecting the optical axis O onto the YZ plane and the Z axis is the angle θY.

On the other hand, in the example shown in FIG. 4B, an orthogonal coordinate system is set with a W axis parallel to the optical axis O and U and V axes on a plane parallel to the camera image plane; the W axis extends from the camera 40 outward, the V axis points toward the right side of the vehicle, and the U axis points roughly upward with respect to the vehicle. The origin of this UVW coordinate system is the lens center of the camera 40. In FIG. 4B, the camera image plane is schematically shown as a plane P0. In the camera image, the position of each pixel is defined in a coordinate system (hereinafter, the image coordinate system) whose axes are set parallel to the U and V axes on the plane P0. In the coordinate correspondence data 30b, the coordinates in this image coordinate system are associated with the X and Y coordinates of the three-dimensional coordinate system.

Therefore, in the present embodiment, the coordinates of the three-dimensional coordinate system as seen from the coordinates (X0, Y0, Z0) are expressed in the UVW coordinate system by a coordinate transformation, projected onto a virtual plane P1 (a plane perpendicular to the W axis whose origin coincides with the intersection of the optical axis O and the XY plane), and the coordinates are converted to obtain the correspondence with the image coordinate system.

More specifically, when the rotation between the three-dimensional coordinate system (XYZ coordinate system) and the UVW coordinate system is expressed as a rotation angle α around the U axis, a rotation angle β around the V axis, and a rotation angle γ around the W axis, the transformation between the two coordinate systems is expressed by the following equations (1) and (2).
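  The equations themselves appear as images in the original publication; a standard form consistent with the surrounding definitions (the order in which the three rotations are composed is our assumption) is:

$$
\begin{pmatrix} U_p \\ V_p \\ W_p \end{pmatrix}
= R \begin{pmatrix} X - X_0 \\ Y - Y_0 \\ Z - Z_0 \end{pmatrix}
\tag{1}
$$

$$
R =
\begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\alpha & -\sin\alpha \\ 0 & \sin\alpha & \cos\alpha \end{pmatrix}
\begin{pmatrix} \cos\beta & 0 & \sin\beta \\ 0 & 1 & 0 \\ -\sin\beta & 0 & \cos\beta \end{pmatrix}
\begin{pmatrix} \cos\gamma & -\sin\gamma & 0 \\ \sin\gamma & \cos\gamma & 0 \\ 0 & 0 & 1 \end{pmatrix}
\tag{2}
$$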
Here, Up, Vp, and Wp are the coordinates obtained when the coordinates (X, Y, Z) of the XYZ coordinate system are expressed in the UVW coordinate system.

Once the UVW coordinates of a point (X, Y, Z) in the XYZ coordinate system have been determined as described above, the coordinates Uc and Vc obtained when the point is projected onto the virtual plane P1 can be found from the following equations (3) and (4).
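  Again reconstructing the standard perspective-projection form consistent with the text:

$$
U_c = L_c \frac{U_p}{W_p} \tag{3}
\qquad
V_c = L_c \frac{V_p}{W_p} \tag{4}
$$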
Here, Lc is the distance between the origin of the UVW coordinate system and the origin of the virtual plane P1.

By equations (3), (4), and (1), the coordinates Uc, Vc on the virtual plane can thus be associated with coordinates of the XYZ coordinate system. Furthermore, the correspondence between the virtual plane P1 and the plane P0, the camera image plane, can be defined by analyzing in advance how light travels through the lens. If data indicating the correspondence between the coordinates of the camera image plane and the coordinates X and Y of the XYZ coordinate system are created based on these correspondences, the coordinate correspondence data 30b is obtained.

In this embodiment, the overhead conversion is performed so that the approach destination lane L2 always faces the vertical direction regardless of the relationship between the host vehicle and the approach destination lane L2 and regardless of the behavior of the host vehicle; accordingly, the rotation angles α, β, γ and the distance Lc above are defined based on the posture and height of the camera. For example, the rotation angles α and β and the distance Lc are defined by equations (5), (6), and (7) based on the tilt angle θT, the pan angle θP, and the height H of the camera 40, respectively.
Here, the rotation angle γ is set to 0 on the assumption that fluctuation around the optical axis O is negligible; however, the conversion may also be performed taking the fluctuation around the optical axis O into account.

If the tilt angle θT, pan angle θP, and height H of the camera 40 when the host vehicle is stationary are substituted into the above formulas, an overhead conversion is obtained in which the front-rear direction (X axis) of the host vehicle becomes the vertical direction of the image. Therefore, if the tilt angle θT, pan angle θP, and height H are corrected based on the posture and height of the camera 40 while the host vehicle is traveling, an overhead conversion that accurately directs the approach destination lane L2 in the vertical direction of the image can be carried out whatever the posture and height of the camera 40.

  FIG. 5 shows examples of this correction: FIG. 5A shows correction based on rotation of the XYZ coordinate system around the Z axis, FIG. 5B correction based on rotation around the Y axis, and FIG. 5C correction based on rotation around the X axis.

In FIG. 5A, when the angle between the optical axis O of the camera and the X axis is the angle θX shown in FIG. 4A and the direction of the approach destination lane L2 is the arrow L, the angle between the X axis and the approach destination lane L2 is ΔθX. From the angle relationships shown in FIG. 4A, tanθX = tanθP / cosθT; therefore, if the corrected pan angle θP' is defined as in equation (8), then when the traveling direction of the host vehicle is parallel to the X axis, the overhead conversion can be performed so that the angular deviation ΔθX is canceled and the approach destination lane L2 faces the vertical direction of the image.
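  A reconstruction consistent with the stated relation tanθX = tanθP / cosθT (the exact form of equation (8) is our assumption): to cancel ΔθX, shift the in-plane angle θX by ΔθX and convert back to a pan angle,

$$
\theta_P' = \tan^{-1}\bigl(\tan(\theta_X + \Delta\theta_X)\,\cos\theta_T\bigr) \tag{8}
$$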
Of course, when the orientation of the host vehicle obtained as behavior information does not match the direction of the travel lane L1, the angular offset between the two can be corrected in the same manner.

In FIG. 5B, the angle between the optical axis O of the camera and the straight line S parallel to the X axis is the tilt angle θT shown in FIG. 4A. When the host vehicle pitches and the camera 40 rotates around the Y axis by an angle ΔθP, defining the corrected tilt angle θT' by equation (9) and the corrected height H' of the camera 40 by equation (10) cancels this angular deviation ΔθP, so the overhead conversion can be performed with the approach destination lane L2 facing the vertical direction of the image.
Here, the center of rotation when the camera 40 rotates according to the behavior of the host vehicle is taken as the coordinates (X1, Y1, Z1) (see FIG. 6A).

In FIG. 5C, the angle between the optical axis O of the camera and the Z axis is the angle θY shown in FIG. 4A. When the host vehicle rolls and the camera 40 rotates around the X axis by an angle ΔθY, defining the corrected pan angle θP' by equation (11) and the corrected height H' of the camera 40 by equation (12) cancels the angular deviation ΔθY, so the overhead conversion can be performed with the approach destination lane L2 facing the vertical direction of the image.
Here, the center of rotation when the camera 40 rotates according to the behavior of the host vehicle is taken as the coordinates (X2, Y2, Z2) (see FIG. 6B).

Since the direction of the approach destination lane L2 can be adjusted by the overhead conversion as described above, coordinate correspondence data 30b corresponding to given angles can be defined by computing equations (8) to (12) from the angles ΔθX, ΔθP, ΔθY corresponding to the posture of the camera 40, substituting the results into equations (5) to (7), and substituting those results into equations (1), (3), and (4). Therefore, in this embodiment, the angles ΔθX, ΔθP, and ΔθY are used as the camera parameters, and coordinate correspondence data 30b corresponding to each camera parameter is created in advance and stored in the storage medium 30.

These angles ΔθX, ΔθP, ΔθY can be calculated from the angles acquired by the angle acquisition unit 22b and the behavior information acquisition unit 22c. Accordingly, in this embodiment, by calculating the camera parameters from the acquired angles in step S130 described above and referring to the coordinate correspondence data 30b corresponding to those parameters, the overhead conversion can always be performed so that the approach destination lane L2 faces the vertical direction of the image. In the conversion using the coordinate correspondence data 30b, the overhead conversion is accomplished simply by performing coordinate conversion with reference to the data, so it can be performed with a very low calculation load.

(4) Other embodiments:
The above embodiment is one example for carrying out the present invention; as long as the lane can be directed in a specific direction in the overhead image based on the angle between the traveling direction of the host vehicle and the approach destination lane, various other embodiments can be adopted. For example, in the embodiment described above the detection areas are set on the lane so that parts of adjacent detection areas overlap, but such overlap is not essential. Nor must the detection of other vehicles be based on feature values of the image within the detection areas; edge detection or the like may be used. Moreover, the camera 40 only needs to be able to photograph the road on which the presence of other vehicles should be detected; it may be attached at any position on the host vehicle, and the number of cameras is not particularly limited.

  Furthermore, since the angle acquisition unit 22b only needs to acquire the angle between the traveling direction of the host vehicle and the approach destination lane, referring to the map information 30a when specifying the traveling direction of the host vehicle is not essential. For example, the traveling direction may be specified by a sensor mounted on the host vehicle, such as a direction sensor. The specific direction described above is also not limited to the vertical direction of the image; it may, for example, be set so that the approach destination lane is parallel to the horizontal direction of the image.

  Furthermore, in the example described above, the direction of the approach destination lane is adjusted based on both the angle between the travel lane and the approach destination lane and the behavior of the host vehicle; when the influence of the vehicle behavior on the image is small, the overhead conversion may adjust the lane direction based on the lane angle alone, without considering the behavior. In addition, although the camera parameters described above correspond to the deviation between the reference posture and the actual posture of the camera 40, various other parameters, such as the rotation angle α described above, can be adopted as camera parameters.

Furthermore, although the overhead conversion described above performs the conversion process that directs the approach destination lane in a specific direction in the overhead image by means of the coordinate correspondence data 30b, the conversion may instead be calculated directly. That is, once the angles ΔθX, ΔθP, and ΔθY corresponding to the posture of the camera 40 have been acquired by the angle acquisition unit 22b and the behavior information acquisition unit 22c, the coordinates of the camera image corresponding to the three-dimensional coordinates can be obtained by computing equations (8) to (12), substituting the results into equations (5) to (7), and substituting those results into equations (1), (3), and (4).

Therefore, if a function corresponding to the inverse of this transformation, with the angles ΔθX, ΔθP, and ΔθY as parameters, is obtained in advance, the coordinates of the camera image can be converted into the X and Y coordinates of the three-dimensional coordinate system (that is, into an overhead image). This configuration requires a faster control unit than the embodiment described above, but since the coordinate correspondence data 30b need not be stored, memory consumption can be reduced. Moreover, calculation is possible for arbitrary angles, so the direction of the approach destination lane in the overhead image can be adjusted precisely.
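  A sketch of this direct, table-free computation, reusing equations (1), (3), and (4); the camera placement constants, output scale, and loop structure are illustrative assumptions:

```python
import numpy as np

def direct_overhead(src, alpha, beta, Lc, H=1.2, out_h=200, out_w=200,
                    scale=0.05):
    """For each ground point (X, Y, 0) covered by the overhead image, rotate
    it into the camera (UVW) frame per equation (1) with gamma = 0, project
    onto the virtual plane per equations (3) and (4), and sample the source
    pixel. alpha and beta follow from the corrected tilt/pan angles."""
    Ra = np.array([[1, 0, 0],
                   [0, np.cos(alpha), -np.sin(alpha)],
                   [0, np.sin(alpha),  np.cos(alpha)]])
    Rb = np.array([[ np.cos(beta), 0, np.sin(beta)],
                   [0, 1, 0],
                   [-np.sin(beta), 0, np.cos(beta)]])
    R = Ra @ Rb
    cu, cv = src.shape[0] / 2.0, src.shape[1] / 2.0
    out = np.zeros((out_h, out_w) + src.shape[2:], dtype=src.dtype)
    for i in range(out_h):
        for j in range(out_w):
            X = (i - out_h / 2) * scale          # ground coordinates, Z = 0
            Y = (j - out_w / 2) * scale          # lens centre at (0, 0, H)
            Up, Vp, Wp = R @ np.array([X, Y, -H])
            if Wp <= 0:                          # behind the camera
                continue
            u = int(cu + Lc * Up / Wp)           # equations (3), (4)
            v = int(cv + Lc * Vp / Wp)
            if 0 <= u < src.shape[0] and 0 <= v < src.shape[1]:
                out[i, j] = src[u, v]
    return out
```

  The per-pixel trigonometry is what makes this variant need a faster control unit, in exchange for not storing any tables.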

FIG. 1 is a block diagram of the navigation apparatus including the image conversion apparatus.
FIG. 2 is a flowchart showing the guidance processing.
FIG. 3 is a schematic diagram showing examples of images before and after overhead conversion.
FIG. 4 shows examples of the coordinate systems: (4A) the relationship between the three-dimensional coordinate system and the posture of the camera, and (4B) the relationship between the optical axis of the camera and the coordinate systems of the overhead image and the camera image.
FIG. 5 shows examples of correction: (5A) correction based on rotation around the Z axis, (5B) correction based on rotation around the Y axis, and (5C) correction based on rotation around the X axis.
FIG. 6 shows examples of the center of rotation of the camera: (6A) the center of rotation when the host vehicle pitches, and (6B) the center of rotation when the host vehicle rolls.

Explanation of symbols

DESCRIPTION OF SYMBOLS 10 ... Navigation apparatus, 20 ... Control part, 21 ... Navigation program, 22 ... Image conversion part, 22a ... Image acquisition part, 22b ... Angle acquisition part, 22c ... Behavior information acquisition part, 22d ... Overhead conversion part, 23 ... Guide part 30 ... Storage medium, 30a ... Map information, 30b ... Coordinate correspondence data, 40 ... Camera, 41 ... GPS receiver, 42 ... Vehicle speed sensor, 43 ... Rudder angle sensor, 44 ... Triaxial acceleration sensor, 45 ... Direction sensor, 46 ... Speaker, 47 ... Display

Claims (6)

  1. An image conversion apparatus comprising:
    image acquisition means for acquiring an image including an approach destination lane of a host vehicle;
    angle acquisition means for specifying the approach destination lane to which the lane in which the host vehicle is traveling is connected, obtaining the direction in which the approach destination lane extends, and acquiring the angle between the traveling direction of the host vehicle and the approach destination lane; and
    overhead conversion means for converting the image into an overhead image, the overhead conversion means specifying, based on the angle, a conversion process that directs the approach destination lane in a specific direction in the overhead image and acquiring the overhead image by this conversion process.
  2. The image conversion apparatus according to claim 1, wherein the conversion process is a process that directs the approach destination lane in a direction parallel to one of the orthogonal axes that define the coordinates of the overhead image.
  3. The image conversion apparatus according to claim 1, further comprising coordinate correspondence data storage means for storing, for each of a plurality of angles, coordinate correspondence data in which the coordinates of the image before conversion into the overhead image are associated with the coordinates of the overhead image,
    wherein the overhead conversion means performs the conversion process with reference to the coordinate correspondence data corresponding to the angle acquired by the angle acquisition means.
  4. The image conversion apparatus according to any one of claims 1 to 3, wherein the conversion process includes a correction process that corrects the orientation of the image based on the angle between the direction parallel to the traveling direction of the host vehicle and the optical axis of the image acquisition means, and the overhead conversion means directs the approach destination lane in a specific direction in the overhead image by adjusting the correction amount of the correction process based on the angle between the traveling direction and the approach destination lane.
  5. The image conversion apparatus according to any one of claims 1 to 4, further comprising behavior information acquisition means for acquiring information corresponding to the behavior of the host vehicle,
    wherein the overhead conversion means converts the image into the overhead image while offsetting changes in the image caused by the behavior of the host vehicle based on the information corresponding to the behavior.
  6. An image conversion method comprising:
    an image acquisition step of acquiring an image including an approach destination lane of a host vehicle;
    an angle acquisition step of specifying the approach destination lane to which the lane in which the host vehicle is traveling is connected, obtaining the direction in which the approach destination lane extends, and acquiring the angle between the traveling direction of the host vehicle and the approach destination lane; and
    an overhead conversion step of converting the image into an overhead image by specifying, based on the angle, a conversion process that directs the approach destination lane in a specific direction in the overhead image and acquiring the overhead image by this conversion process.
JP2006214361A 2006-08-07 2006-08-07 Image conversion apparatus and image conversion method Active JP4788518B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2006214361A JP4788518B2 (en) 2006-08-07 2006-08-07 Image conversion apparatus and image conversion method


Publications (2)

Publication Number Publication Date
JP2008040799A JP2008040799A (en) 2008-02-21
JP4788518B2 true JP4788518B2 (en) 2011-10-05

Family

ID=39175725

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2006214361A Active JP4788518B2 (en) 2006-08-07 2006-08-07 Image conversion apparatus and image conversion method

Country Status (1)

Country Link
JP (1) JP4788518B2 (en)


Also Published As

Publication number Publication date
JP2008040799A (en) 2008-02-21


Legal Events

A621 (effective 2009-03-12): Written request for application examination
A131 (effective 2011-02-01): Notification of reasons for refusal
RD04 (effective 2011-02-04): Notification of resignation of power of attorney
A521 (effective 2011-04-01): Written amendment
TRDD: Decision of grant or rejection written
A01 (effective 2011-06-21): Written decision to grant a patent or to grant a registration (utility model)
A61 (effective 2011-07-04): First payment of annual fees (during grant procedure)
FPAY: Renewal fee payment (payment until 2014-07-29; year of fee payment: 3)
R150: Certificate of patent or registration of utility model