JP2011215052A - Own-vehicle position detection system using scenic image recognition - Google Patents

Own-vehicle position detection system using scenic image recognition

Info

Publication number
JP2011215052A
Authority
JP
Japan
Prior art keywords
image
reference data
vehicle
matching
position
Prior art date
Legal status
Granted
Application number
JP2010084624A
Other languages
Japanese (ja)
Other versions
JP5333860B2 (en)
Inventor
Takayuki Miyajima
孝幸 宮島
Original Assignee
Aisin Aw Co Ltd
アイシン・エィ・ダブリュ株式会社
Priority date
Filing date
Publication date
Application filed by Aisin AW Co., Ltd. (アイシン・エィ・ダブリュ株式会社)
Priority to JP2010084624A
Publication of JP2011215052A
Application granted
Publication of JP5333860B2
Status: Active

Abstract

An object of the present invention is to provide a vehicle position detection system capable of efficiently detecting the vehicle position without requiring human judgment while using a landscape image recognition technique.
A reference data database stores a reference data group in which image feature point data, generated by extracting image feature points from captured images of landscapes photographed from a vehicle, is associated with the shooting positions of the corresponding captured images. Matching data is generated from the image feature points extracted from an actual captured image of a landscape taken by an in-vehicle camera. An extracted reference data determination unit determines the reference data to be extracted from the reference data database as matching partner candidates for the matching data, based on the distribution state of the image feature points in the matching data and on the shooting position. A vehicle position determination unit determines the vehicle position based on the shooting position associated with the reference data that has been successfully matched against the matching data.
[Selected drawing] FIG. 1

Description

  The present invention relates to a vehicle position detection system that obtains the current position of a running vehicle using a landscape image taken from the vehicle.

  Conventionally, in the technical field of car navigation, methods for calculating the current position of a vehicle include autonomous navigation using information acquired from sensors such as a gyroscope and a geomagnetic sensor, positioning using signals from GPS satellites, and combinations of the two. Furthermore, to calculate the current position with high accuracy, position measurement devices are known in which a provisional current position is obtained using signals from positioning satellites, the coordinates of feature points of a road marking in a coordinate system referenced to that provisional current position (vehicle coordinate system) are calculated from a captured image of the area ahead of the vehicle, and the current position of the vehicle is then calculated using the computed vehicle-coordinate-system feature points and the stored coordinates of the feature points of the road marking (in the world coordinate system) (see, for example, Patent Document 1). With this apparatus, the current position can be calculated with high accuracy even when the positioning based on signals from positioning satellites and from various sensors contains errors. However, in the positioning device of Patent Document 1, the spatial coordinates of the feature points of road markings are obtained from a stereo image, and the vehicle position is calculated using coordinates obtained from the latitude and longitude of the road marking, having those feature points, stored in a road marking information database; the device therefore cannot be used where no road markings exist. Moreover, since the spatial coordinates of the feature points recognized by image processing must be computed, the apparatus requires high computational capability, which increases its cost.

  Also in the field of car navigation, a navigation device is known that searches an image database for images with high similarity using a photographed image of a building or landscape as a search key, presents one or more retrieved images to the user together with the corresponding point-specifying information, and sets the position information associated with the image selected by the user from this display as the destination (see, for example, Patent Document 2). With this navigation device, when a captured image taken at a certain point is input and a search is executed, images matching the input image and the corresponding point-specifying information are retrieved and presented to the user, and the point is set as the destination by the user's selection. This makes it possible to set a destination from a captured image of a point even when no information such as the point's position or facility name is known. However, in this kind of position specification using landscape images, the navigation device presents images that match the input search-key image and the user must confirm them, so the approach cannot be applied to vehicle position detection, in which human judgment must not intervene.

Patent Document 1: JP 2007-108043 A (paragraphs 0009-0013, FIG. 1)
Patent Document 2: JP 2004-333233 A (paragraphs 0010-0043, FIG. 1)

  In view of the above situation, it is desired to realize a vehicle position detection system that can efficiently detect the vehicle position without requiring human judgment while using landscape image recognition technology.

The characteristic configuration of the vehicle position detection system using landscape image recognition according to the present invention comprises: a reference data database storing a reference data group in which image feature point data, generated by extracting image feature points from captured images of landscapes photographed from a vehicle, is associated with the shooting positions of the corresponding captured images; a captured image processing unit that receives an actual captured image of a landscape taken by an in-vehicle camera and outputs matching data generated by extracting image feature points from the actual captured image; a shooting position acquisition unit that acquires the shooting position of the actual captured image; a distribution state calculation unit that calculates the distribution state, within the captured image, of the image feature points constituting the matching data; an extracted reference data determination unit that determines, based on the distribution state and the shooting position, the reference data to be extracted from the reference data database as matching partner candidates for the matching data; a matching execution unit that performs matching between the reference data determined by the extracted reference data determination unit and the matching data; and a vehicle position determination unit that determines the vehicle position based on the shooting position associated with the reference data that has been successfully matched.

  Image feature points in the landscape images (captured images) taken sequentially from a vehicle traveling on a road, particularly the contour points of objects in the landscape, move in a predetermined direction within the image area as the vehicle travels and eventually disappear. As the vehicle travels, an image feature point that initially belongs to a distant region of the landscape image moves from the central region of the image toward the peripheral region, and finally disappears from the image area when the vehicle passes the position indicated by that image feature point. Therefore, from the movement of an image feature point, that is, from the distribution state of the image feature points within the captured image, the range of shooting positions at which a specific image feature point continues to exist can be estimated to some extent. The same applies to the reference data created from such captured images. Using this property, the extraction range of the reference data to be extracted as matching partner candidates around a certain estimated vehicle position can be selected effectively from the distribution state of the image feature points in the matching data generated from the actual captured image. Accordingly, the extracted reference data determination unit, which is a characteristic configuration of the present invention, determines the reference data to be extracted from the reference data database as matching partner candidates for the matching data based on the distribution state, within the captured image, of the image feature points constituting the matching data, which makes it possible to extract reference data effectively and with little waste. As a result, a vehicle position detection system capable of efficiently detecting the vehicle position without requiring human judgment while using landscape image recognition technology is realized.

Assuming that the captured images handled by the vehicle position detection system are taken in a shooting direction facing the vehicle traveling direction, an image feature point first appears in the central area of the captured image, remains in the central area for a while, then moves rapidly to the peripheral area of the captured image and disappears around the time the vehicle passes the position indicated by that image feature point. It follows that an image feature point found in the peripheral area of the actual captured image will disappear soon as the vehicle travels, but is highly likely to be located toward the central area in images captured slightly earlier. To make effective use of this, in one preferred embodiment of the present invention, when the distribution state is such that the image feature points are unevenly distributed in the peripheral region of the captured image, the extracted reference data determination unit determines a plurality of pieces of reference data as the matching partner candidates so that they are biased, with respect to the shooting position of the actual captured image, toward the side of the vehicle traveling direction opposite to the shooting direction of the in-vehicle camera. In other words, when the in-vehicle camera is configured as a front camera capturing the landscape ahead in the traveling direction, a plurality of pieces of reference data are determined as the matching partner candidates so as to be biased rearward in the vehicle traveling direction with respect to the shooting position of the actual captured image. Conversely, when the in-vehicle camera is configured as a rear camera capturing the landscape behind in the traveling direction, a plurality of pieces of reference data are determined as the matching partner candidates so as to be biased forward in the vehicle traveling direction with respect to the shooting position of the actual captured image.

The image feature point is likely to exist in the reference data generated from the captured images taken on the rear side, in the vehicle traveling direction, of the shooting position. Therefore, when a plurality of pieces of reference data are selected as the matching partner candidates in this way, the possibility that the image feature point is found is increased even if the estimated vehicle position contains an error, and the matching success probability improves.

  Similarly, to make effective use of the fact that an image feature point is highly likely to stay for a long time in the central region of the captured images taken sequentially from the traveling vehicle, in one preferred embodiment of the present invention, when the distribution state is such that the image feature points are concentrated in the central area of the captured image, the extracted reference data determination unit determines a plurality of pieces of reference data as the matching partner candidates so that they are biased toward the shooting direction of the in-vehicle camera in the vehicle traveling direction with respect to the shooting position of the actual captured image. With this feature configuration as well, the possibility of finding the image feature point is increased even if the estimated vehicle position contains an error, and the probability of successful matching improves.

  Furthermore, in a preferred embodiment of the present invention, the system further includes an estimated vehicle position acquisition unit that acquires an estimated vehicle position, that is, an estimated position of the vehicle, and the shooting position acquisition unit acquires the shooting position using the estimated vehicle position. For example, an estimated vehicle position obtained by conventional GPS position calculation or dead reckoning navigation has a certain degree of reliability, so appropriate reference data can be extracted based on it and the efficiency of the matching process is improved.

FIG. 1 is a schematic diagram explaining the basic concept of the vehicle position detection technique in the vehicle position detection system using landscape image recognition according to the present invention.
FIG. 2 is a functional block diagram showing the main functions of the image processing system that generates the reference data used in the vehicle position detection system according to the present invention.
FIG. 3 is a schematic diagram schematically showing the adjustment of weighting coefficients using adjustment coefficients.
FIG. 4 is a functional block diagram showing the functions of a car navigation system employing the vehicle position detection system according to the present invention.
FIG. 5 is a schematic diagram schematically explaining the extraction algorithm, executed in the landscape matching unit, for extracting a reference data sequence from the reference data DB.
FIG. 6 is a functional block diagram showing the functions of the landscape matching unit.

  Hereinafter, the present invention will be described in detail with reference to the drawings. FIG. 1 schematically shows the basic concept of a position measurement technique that determines the position at which a landscape image was taken, that is, the vehicle position, by image recognition of the landscape image from an in-vehicle camera arranged to photograph the area ahead in the vehicle traveling direction, through matching processing using reference data created by the image processing system according to the present invention.

  First, the procedure for constructing the reference data database 92 (hereinafter simply referred to as "reference data DB 92") will be described. As shown in FIG. 1, a captured image obtained by photographing a landscape from a traveling vehicle, and shooting attribute information including the shooting position and shooting direction at the time of shooting, are input (#01). A feature point detection process for detecting image feature points, for example an edge detection process, is performed on the input captured image (#02). Here, a set of edge points that corresponds to one pixel or several pixels and forms a single line segment such as a contour is called an edge, and a point at which a plurality of such edges intersect is called a corner; these edges and corners are examples of image feature points. From the edge detection image obtained by the edge detection process, edges including corners are extracted as image feature points (#03).
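  As an illustration of steps #02-#03, the following sketch extracts edge points and flags corner points from a grayscale captured image. The use of OpenCV's Canny and Harris operators, and all thresholds, are assumptions for illustration; the patent only requires an appropriate edge detection process.

    import cv2
    import numpy as np

    def extract_image_feature_points(gray_image):
        """Sketch of steps #02-#03: edge detection, then edges including corners
        are taken as image feature points (operators are illustrative choices)."""
        # Step #02: feature point detection process (here, Canny edge detection).
        edges = cv2.Canny(gray_image, 100, 200)

        # Corner detection: points where multiple edges intersect (Harris response).
        harris = cv2.cornerHarris(np.float32(gray_image), blockSize=2, ksize=3, k=0.04)
        corner_mask = harris > 0.01 * harris.max()

        # Step #03: collect edge points, flagging those that are also corners.
        ys, xs = np.nonzero(edges)
        return [(int(x), int(y), bool(corner_mask[y, x])) for y, x in zip(ys, xs)]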

  Considering their use in landscape image recognition, not all of the extracted image feature points have the same importance. For example, the importance of an image feature point may differ depending on its coordinate position within the captured image. Therefore, it is preferable to determine the importance of each image feature point while applying rules such as lowering the importance of image feature points unsuitable for landscape image recognition and raising the importance of image feature points significant for landscape image recognition (#04). When the importance of each image feature point has been determined, a weighting coefficient matrix is generated that defines the assignment of a weighting coefficient to each image feature point according to its importance (#05).

  Subsequently, using the weighting coefficient matrix, the image feature points are organized to generate image feature point data for each captured image (#07). In this generation process, a selection process is performed in which image feature points whose weighting coefficient is at or below a predetermined threshold are discarded, or in which all image feature points other than those whose weighting coefficient is at or above a predetermined threshold and their surrounding image feature points are discarded. Since the image feature point data generated here is used as a pattern when pattern matching is adopted for landscape image recognition, providing only image feature points that are effective for pattern matching of landscape images is important for the speed and accuracy of the matching. The generated image feature point data is associated with the shooting position and shooting direction of the corresponding captured image and becomes database data searchable by shooting position and shooting direction (#08). That is, the image feature point data is stored in the reference data DB 92 as reference data used for landscape image recognition, that is, as reference data to be extracted as matching partners (#09).
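  The following sketch illustrates the selection process of step #07 under the assumption that each image feature point already carries a weighting coefficient; the threshold and the neighbourhood criterion are illustrative assumptions, not values specified by the patent.

    def select_feature_points(points, weights, threshold, neighbor_radius=10):
        """Sketch of step #07: keep image feature points whose weighting coefficient
        is at or above the threshold, plus points surrounding such high-weight
        points; discard the rest. All parameter values are assumptions."""
        anchors = [p for p, w in zip(points, weights) if w >= threshold]
        selected = []
        for p, w in zip(points, weights):
            near_anchor = any(abs(p[0] - a[0]) <= neighbor_radius and
                              abs(p[1] - a[1]) <= neighbor_radius for a in anchors)
            if w >= threshold or near_anchor:
                selected.append(p)
        return selected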

  Next, the procedure for determining the position of the vehicle (vehicle position) during actual travel, using the reference data DB 92 constructed by the above procedure, will be described. As shown in FIG. 1, an estimated vehicle position estimated using a GPS measurement unit or the like, an actual captured image taken by the in-vehicle camera with the estimated vehicle position as its shooting position, and its shooting direction are input (#11). First, matching data, which is image feature point data, is generated from the input captured image through the processing of steps #02 to #07 (#12). Then, using the input shooting position and shooting direction as search conditions, a reference data extraction process is performed in which the reference data for the corresponding shooting position and a predetermined number of reference data around that shooting position are extracted as matching partner candidates.

  In this reference data extraction process, first, the distribution state, within the captured image, of the image feature points constituting the matching data (image feature point data) is calculated (#13). In extracting reference data, normally the reference data position associated with the shooting position closest to the input shooting position is used as a reference position, and a predetermined number of reference data before and after that reference position are extracted. In the present invention, however, the reference position is offset forward or backward according to the distribution state of the image feature points of the matching data calculated above. For example, when the distribution state is one in which the image feature points are unevenly distributed in (concentrated in) the peripheral region of the captured image, the reference position is offset rearward in the vehicle traveling direction, so that the reference data string extracted with this offset lies further rearward than usual. Conversely, when the distribution state is one of concentration in the central area of the captured image, the reference position is offset forward in the vehicle traveling direction, so that the extracted reference data string lies further forward than usual. The reason for this will be described in detail later; in any case, in this vehicle position detection system, the reference data string extracted from the reference data database 92 as matching partner candidates for the matching data is adjusted according to the distribution state described above (#14).
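  A minimal sketch of the offset logic of steps #13-#14 follows, assuming the reference data are stored in a list ordered by shooting position along the road; the record layout, the offset step count, and the window width are assumptions for illustration.

    def extract_candidate_reference_data(reference_db, shooting_position,
                                         distribution_state, window=2, offset_steps=2):
        """Sketch of steps #13-#14: choose the reference position closest to the
        input shooting position, offset it according to the distribution state of
        the matching-data feature points, and extract reference data around it.
        reference_db: list of (shooting_position, image_feature_point_data) tuples
        sorted by position along the road (an assumed layout)."""
        # Reference position: index of the reference data nearest the shooting position.
        idx = min(range(len(reference_db)),
                  key=lambda i: abs(reference_db[i][0] - shooting_position))

        if distribution_state == "peripheral":   # feature points concentrated in periphery
            idx -= offset_steps                  # offset rearward in the travel direction
        elif distribution_state == "central":    # feature points concentrated in the centre
            idx += offset_steps                  # offset forward in the travel direction
        # widely dispersed feature points: no offset

        lo = max(0, idx - window)
        hi = min(len(reference_db), idx + window + 1)
        return reference_db[lo:hi]               # matching partner candidates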

  Reference data from the extracted matching-candidate reference data string are set one by one as a pattern, and pattern matching processing against the matching data generated above is executed as landscape image recognition (#15). If the matching succeeds, the shooting position associated with the reference data used as the pattern is read out (#16), and this shooting position is determined as the definitive vehicle position that replaces the estimated vehicle position (#17).
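  The sketch below illustrates steps #15-#17, assuming a candidate list as produced by the extraction sketch above and a match function that compares two sets of image feature points; the names and the early-return policy are assumptions.

    def determine_vehicle_position(candidates, matching_data, is_match):
        """Sketch of steps #15-#17: try each candidate reference datum as a pattern;
        on success, its shooting position becomes the definitive vehicle position."""
        for shooting_position, reference_feature_points in candidates:
            if is_match(reference_feature_points, matching_data):  # step #15
                return shooting_position                            # steps #16-#17
        return None  # matching failed; keep the estimated vehicle position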

Next, an example of an image processing system that creates the reference data used in the above-described vehicle position detection system from captured images of landscapes photographed from a vehicle will be described. FIG. 2 is a functional block diagram schematically showing the functions of such an image processing system.
This image processing system includes a data input unit 51, a feature point extraction unit 52, a feature point importance determination unit 53, a weighting unit 55, an adjustment coefficient setting unit 54, an image feature point data generation unit 56, a reference data database creation unit 57, and so on; each of these functions can be realized by hardware, software, or a combination thereof.

  To the data input unit 51 are input captured images obtained by photographing landscapes with a camera mounted on a vehicle traveling for reference data creation, shooting attribute information including the shooting position and shooting direction at the time of shooting, and shooting situation information. That is, the data input unit 51 functions as a captured image acquisition unit, a shooting position acquisition unit, a shooting situation information acquisition unit, and the like. In a form in which the image processing system is mounted on the traveling vehicle, the captured images, the shooting attribute information, and the shooting situation information are input to the data input unit 51 in real time. Alternatively, the captured images, shooting attribute information, and shooting situation information may be temporarily recorded on a recording medium and input later in a batch process. Since methods for generating captured images and shooting attribute information are well known, their description is omitted.

  The shooting situation information is information indicating the possibility that a specific subject is included in the captured image; in this embodiment it includes travel lane data, moving object data, and area attribute data. The travel lane data is data indicating the region of the vehicle's travel lane and the region outside the road in the captured image, obtained from the recognition results for white lines, guard rails, and safety zones obtained by image processing of the captured image. The moving object data is data indicating the region, within the captured image, occupied by moving objects present around the vehicle that are recognized by an in-vehicle sensor detecting obstacles, such as a radar. The area attribute data is data indicating the attribute of the area in which the shooting point lies, such as a mountainous area, a suburban area, an urban area, or a high-rise urban area.

  The feature point extraction unit 52 extracts edge points from the captured image as image feature points using an appropriate operator. The feature point importance determination unit 53 determines the importance of the image feature points extracted by the feature point extraction unit 52 based on the contents of each type of data included in the shooting situation information. For example, when the travel lane data is used, higher importance is assigned to image feature points belonging to regions of the captured image farther from the travel lane, toward the road shoulder. When the moving object data is used, low importance is assigned to image feature points belonging to regions of the captured image where a moving object exists. When the area attribute data is used, the rule for assigning importance according to position within the captured image is changed according to the area attribute. For example, in a captured image of a mountainous area the region above the shooting central optical axis is likely to be sky and the left and right sides are likely to be forest, so high importance is set for the central region around the shooting central optical axis. In a captured image of a suburban area there is little vehicle traffic and structures such as houses spread around the road, so high importance is set for the region below the shooting central optical axis. In a captured image of an urban area there is heavy traffic, so high importance is set for the region above the shooting central optical axis. In a captured image of a high-rise urban area there are many elevated roads and viaducts, so high importance is set for the region above the shooting central optical axis.
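  A sketch of the area-attribute rule described above follows; the vertical coordinate is normalised so that 0.5 corresponds to the height of the shooting central optical axis, and the numeric importance values are illustrative assumptions only.

    def importance_from_area_attribute(area_attribute, y_norm):
        """Sketch: importance of a feature point at normalised image height y_norm
        (0.0 = top of image, 1.0 = bottom, 0.5 = shooting central optical axis)."""
        if area_attribute == "mountain":     # sky above, forest left and right
            return 1.0 if 0.3 <= y_norm <= 0.7 else 0.3   # favour the central region
        if area_attribute == "suburban":     # houses spread near the road
            return 1.0 if y_norm > 0.5 else 0.4           # favour below the optical axis
        if area_attribute in ("urban", "high_rise"):      # traffic / elevated roads
            return 1.0 if y_norm < 0.5 else 0.4           # favour above the optical axis
        return 0.5                                        # unknown area attribute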

  The weighting unit 55 assigns a weighting coefficient to each image feature point according to the importance determined by the feature point importance determination unit 53. Since high importance is set for image feature points considered important for accurate image recognition (pattern matching), a large weighting coefficient is assigned to image feature points with high importance. Considering that image feature points with a low weighting coefficient are likely not to be used in actual image recognition, or to be deleted from the reference data, the weighting coefficients are calculated so that they can be used to decide whether each image feature point is selected or discarded.

  The adjustment coefficient setting unit 54 calculates adjustment coefficients for changing the weighting coefficients assigned by the weighting unit 55, from the viewpoint of the distribution state of the feature points within the corresponding captured image region. The importance determined from the shooting situation information for the image feature points extracted by the feature point extraction unit 52 contains a certain degree of error, and image feature points whose importance is fairly high may nevertheless be generated rather randomly. For this reason, when the image feature points are unevenly distributed, in other words when the weighting coefficients assigned by the weighting unit 55 are unevenly distributed, the adjustment coefficient setting unit 54 is used to mitigate that uneven distribution. When the distribution of the image feature points obtained by computation indicates an uneven distribution, the adjustment coefficients are set so that the weighting coefficients of image feature points belonging to regions where the density of image feature points is low are increased, and the weighting coefficients of image feature points belonging to regions where the density of image feature points is high are decreased.
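  The following sketch shows one way adjustment coefficients could be derived per image section so that sparsely populated regions receive larger coefficients and dense regions smaller ones; the grid size, clipping range, and density measure are assumptions, not the patent's prescription.

    import numpy as np

    def adjustment_coefficient_layer(weight_matrix, cell=16, eps=1e-6):
        """Sketch: compute an adjustment coefficient per cell of the captured image.
        weight_matrix holds the per-pixel weighting coefficients (0 where no
        feature point exists); sparse cells get coefficients > 1, dense cells < 1."""
        h, w = weight_matrix.shape
        coeff = np.ones((h, w), dtype=float)
        density, cells = [], []
        for y in range(0, h, cell):
            for x in range(0, w, cell):
                block = weight_matrix[y:y + cell, x:x + cell]
                density.append((block > 0).mean())
                cells.append((y, x))
        mean_density = float(np.mean(density))
        for (y, x), d in zip(cells, density):
            c = (mean_density + eps) / (d + eps)
            coeff[y:y + cell, x:x + cell] = np.clip(c, 0.5, 2.0)  # mitigate, not invert
        return coeff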

  The image feature point data generation unit 56 organizes the image feature points based on the weighting coefficients assigned by the weighting unit 55 and on the adjustment coefficients assigned in some cases, and generates image feature point data for each captured image. At this time, the image feature points can be narrowed down, so that the matching process is performed efficiently, by deleting image feature points whose weighting coefficient is at or below a predetermined threshold. The weighting coefficients may also be attached to the image feature point data so that they remain associated with the individual image feature points in the reference data, and the reference data with weighting coefficients may be used for weighted similarity calculation during the matching process.

  Here, the process of spreading the image feature points as widely as possible over the entire captured image area using the above-described adjustment coefficients will be described with reference to the schematic diagram shown in FIG. 3. A feature point image (FIG. 3(b)) is generated by extracting image feature points from the captured image (FIG. 3(a)), and an importance is given to each image feature point of the feature point image. In FIG. 3(c), so that the manner in which the importance is assigned can be understood schematically, the importance corresponding to each image feature point is shown in the form of an importance layer corresponding to the feature point image. Using this importance layer, a weighting coefficient is assigned to each image feature point; in FIG. 3(d), the weighting coefficients are shown in the form of a feature point image in which the image feature points are drawn larger as their weighting coefficient becomes larger. If the image feature points were organized by simply removing image feature points whose weighting coefficient is at or below a predetermined threshold, for example by keeping only the large points in FIG. 3(d), the image feature points located in the lower region of the feature point image would be excluded and the distribution of the remaining image feature points would become greatly uneven. To avoid this, the adjustment coefficients are set so as to increase the weighting coefficients of image feature points in regions where the density of the selected image feature points would otherwise become low. As can be seen in FIG. 3(e), the adjustment coefficient group is shown in the form of an adjustment coefficient layer arranged in a matrix (here, in units of areas of several pixels) so as to correspond to the feature point image. The image feature point data generation unit 56 organizes the image feature points using the weighting coefficients finally set based on these weighting coefficients and adjustment coefficients, and generates the image feature point data shown in FIG. 3(f) for each captured image.

  The reference data database creation unit 57 creates reference data used for landscape image recognition by associating the image feature point data generated by the image feature point data generation unit 56 with at least the shooting position included in the shooting attribute information of the corresponding captured image, and builds the reference data into a database. The reference data thus databased is stored in the reference data DB 92.

  In the above description, importance is determined for each individual image feature point and, as a result, a weighting coefficient is set for each image feature point; however, these processes can also be performed in units of groups. In that case, for example, the captured image area is divided into a plurality of image sections, and the feature point importance determination unit 53 groups the image feature points belonging to the same image section into an image feature point group, handles them collectively, and gives the same importance to the image feature points included in that group, and the weighting unit 55 likewise sets the weighting coefficient in units of image feature point groups. The image sections handled here may each consist of one pixel of the captured image or of a plurality of pixels. Therefore, in the present invention, an image section is composed of one or more pixels.

  Next, an in-vehicle car navigation system will be described that corrects the vehicle position by landscape image recognition (image feature point pattern matching) using the basic concept of the vehicle position detection system described above and the reference data DB 92 created by the image processing system described above. FIG. 4 is a functional block diagram showing such a car navigation system incorporated into an in-vehicle LAN. This car navigation system includes an input operation module 21, a navigation control module 3, a vehicle position detection module 4, a shooting situation information generation unit 7, and a database 9 containing the reference data DB 92 and a road map database 91 (hereinafter simply referred to as "road map DB") storing road map data for car navigation.

  The navigation control module 3 includes a route setting unit 31, a route search unit 32, and a route guidance unit 33. The route setting unit 31 sets, for example, a departure point such as the vehicle position, a destination that has been input, passing points, and traveling conditions (such as whether to use expressways). The route search unit 32 is a processing unit that performs arithmetic processing for searching for a guidance route from the departure point to the destination based on the conditions set by the route setting unit 31. The route guidance unit 33 is a processing unit that performs arithmetic processing for providing the driver with appropriate route guidance, by guidance display on the screen of the monitor 12 and voice guidance from the speaker 13, in accordance with the route from the departure point to the destination found by the route search unit 32.

  The vehicle position detection module 4 has the function of correcting the estimated vehicle position, obtained by conventional GPS position calculation and dead reckoning position calculation, with the vehicle position determined by landscape image recognition using that estimated vehicle position. The vehicle position detection module 4 includes a GPS processing unit 41, a dead reckoning processing unit 42, a vehicle position coordinate calculation unit 43, a map matching unit 44, a vehicle position determination unit 45, a captured image processing unit 5, and a landscape matching unit 6. A GPS measurement unit 15 that receives GPS signals from GPS satellites is connected to the GPS processing unit 41. The GPS processing unit 41 analyzes the signals from the GPS satellites received by the GPS measurement unit 15, calculates the current position (latitude and longitude) of the vehicle, and sends it to the vehicle position coordinate calculation unit 43 as GPS position coordinate data. A distance sensor 16 and an azimuth sensor 17 are connected to the dead reckoning processing unit 42. The distance sensor 16 is a sensor that detects the vehicle speed and the moving distance of the vehicle, and is configured, for example, by a vehicle speed pulse sensor that outputs a pulse signal each time the drive shaft, a wheel, or the like of the vehicle rotates by a certain amount, by a sensor that detects the acceleration of the host vehicle C, and by a circuit that integrates the detected acceleration. The distance sensor 16 outputs information on the vehicle speed and the moving distance, as its detection results, to the dead reckoning processing unit 42. The azimuth sensor 17 is configured, for example, by a gyro sensor, a geomagnetic sensor, an optical rotation sensor or rotary variable resistor attached to the rotating part of the steering wheel, an angle sensor attached to a wheel unit, or the like, and outputs heading information, as its detection result, to the dead reckoning processing unit 42. The dead reckoning processing unit 42 calculates dead reckoning position coordinates based on the moving distance information and heading information sent to it moment by moment, and sends them to the vehicle position coordinate calculation unit 43 as dead reckoning position coordinate data. The vehicle position coordinate calculation unit 43 performs computation to determine the vehicle position coordinates from the GPS position coordinate data and the dead reckoning position coordinate data by a known method. The calculated vehicle position information contains measurement errors and the like, and in some cases the calculated position is off the road; therefore the map matching unit 44 corrects the vehicle position so that it lies on a road shown on the road map. The corrected vehicle position coordinates are sent to the vehicle position determination unit 45 as the estimated vehicle position. That is, the vehicle position detection module 4 functions as an estimated vehicle position acquisition unit that acquires the estimated vehicle position, and this estimated vehicle position is used as the shooting position of the actual captured image.
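  As an illustration of the dead reckoning processing unit 42, the minimal sketch below advances the previous coordinates by the moving distance reported by the distance sensor 16 along the heading derived from the azimuth sensor 17; the planar model and all names are assumptions.

    import math

    def dead_reckoning_step(x, y, heading, distance, heading_change):
        """Sketch: update dead reckoning position coordinates from the latest
        moving distance and heading change (angles in radians, planar model)."""
        heading += heading_change           # heading information from the azimuth sensor 17
        x += distance * math.cos(heading)   # moving distance from the distance sensor 16
        y += distance * math.sin(heading)
        return x, y, heading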

  The captured image processing unit 5 substantially includes most of the functional units constituting the image processing system shown in FIG. 2. The captured image processing unit 5 includes a data input unit 51, a feature point extraction unit 52, a feature point importance determination unit 53, a weighting unit 55, an adjustment coefficient setting unit 54, and an image feature point data generation unit 56. When a captured image of the landscape taken by the in-vehicle camera 14 is input to the data input unit 51, image feature point data is output from the image feature point data generation unit 56 through the procedure described above. The shooting situation information used by the feature point importance determination unit 53 is generated by the shooting situation information generation unit 7 mounted on the vehicle and sent to the captured image processing unit 5. The shooting situation information generation unit 7 is connected to the in-vehicle camera 14 in order to generate the travel lane data, and receives the same captured image that is sent to the captured image processing unit 5; the travel lane data is created by processing the received captured image with a known algorithm. The shooting situation information generation unit 7 is also connected to an obstacle detection sensor group 18 in order to create the moving object data, which is created based on sensor information from the sensor group 18. Furthermore, the shooting situation information generation unit 7 is connected to the vehicle position determination unit 45 and to the database 9 in order to create the area attribute data: the database 9 is searched using the vehicle position coordinates from the vehicle position determination unit 45 as a search key to obtain the area attribute (mountainous area, urban area, and so on) of the location currently being traveled, and the area attribute data is created based on that area attribute.

  The landscape matching unit 6 extracts, from the reference data DB 92, a reference data string consisting of a predetermined number of reference data, using the estimated vehicle position sent from the vehicle position determination unit 45 and the extraction algorithm described in detail later. It then sets the reference data from the extracted reference data string one by one as a pattern and performs pattern matching processing against the image feature point data sent from the captured image processing unit 5. When the pattern matching succeeds, the shooting position associated with the reference data that served as the matching pattern is read out, and this shooting position is transferred to the vehicle position determination unit 45 as the vehicle position. The vehicle position determination unit 45 corrects the vehicle position by replacing the estimated vehicle position with the transferred vehicle position.

  This car navigation system further includes, as peripheral devices, an input operation module 21 having an input device 11 such as a touch panel or switches and an operation input evaluation unit 21a that converts operation inputs made through the input device 11 into appropriate operation signals and transfers them internally; a display module 22 that displays on the monitor 12 the image information necessary for car navigation; a voice generation module 23 that outputs the voice information necessary for car navigation from the speaker 13 or a buzzer; and a vehicle behavior detection module 24 that detects various behaviors of the vehicle, such as braking, acceleration, and steering, based on behavior data sent over the in-vehicle LAN.

Next, the extraction algorithm for extracting a reference data string from the reference data DB 92, executed by the landscape matching unit 6, will be described with reference to the schematic diagram of FIG. 5. FIG. 5 schematically shows image feature point data (matching data) G00, G01, ... generated from actual captured images obtained by photographing the landscape ahead at each vehicle position (shooting position) P00, P01, .... Here, the matching data is assumed to be identical to the reference data generated in advance (the matching partners of the matching data). Therefore, if the estimated vehicle position were identical to the true vehicle position, the vehicle position associated with the reference data matched against the matching data from the actual captured image would coincide with the actual vehicle position.
In general, however, the estimated vehicle position and the actual vehicle position differ. Therefore, with the estimated vehicle position (shooting position) as a reference, a plurality of reference data before and after it are extracted from the reference data DB 92 and used sequentially as matching patterns.

In the matching data string shown in FIG. 5, the image feature points indicating a corner OJ of an artificial structure installed along the road are, for simplicity of explanation, indicated by a black circle. Assuming that these matching data are based on images taken at substantially equal intervals, the feature point shown in G01 has moved to the upper-left edge by G06 and has disappeared in the next image G07. While the vehicle is far from the corner OJ of the structure, the feature point moves slowly within the central region; as the vehicle approaches the corner OJ, it moves rapidly within the upper-left region. This means that when the feature point of the corner OJ is captured at equidistant intervals, it stays long in the central region and only briefly in the peripheral region. Therefore, when a predetermined number of reference data are extracted with the estimated vehicle position as a reference and a feature point based on the actual captured image lies in the periphery, the reference data at shooting positions rearward, in the vehicle traveling direction, of the shooting position of that captured image are more likely to contain the feature point than the reference data at shooting positions ahead of it.
This will now be described with an example. As shown in FIG. 5, suppose the actual vehicle position is P06 and the estimated vehicle position is P09; the matching data based on the actual captured image then corresponds to G06. If two reference data are extracted on each side of the estimated vehicle position used as the reference, G07 to G11 are obtained. In this case, the feature point present in G06 does not exist in any of the extracted reference data, so the matching fails. On the other hand, if the reference point is offset-corrected, for example moved rearward by 2 (offset from P09 to P07), the extracted reference data string becomes G05 to G09, and since the feature point exists in G05 and G06, the matching succeeds.
Conversely, if an image feature point exists in the central area of the matching data based on the actual captured image, the same effect is obtained by offsetting the reference point, based on the estimated vehicle position, forward in the vehicle traveling direction. For example, even when the actual vehicle position is P02 and the estimated vehicle position lies further rearward than P00, offsetting the reference point forward in the vehicle traveling direction increases the possibility of a successful matching.
In short, the reference data string can be extracted effectively from the reference data DB 92 by adopting an extraction algorithm in which (1) when the distribution state of the feature points is concentrated in the peripheral area of the captured image, the reference point for extracting reference data is offset rearward in the vehicle traveling direction, (2) when the distribution state of the feature points is concentrated in the central area of the captured image, the reference point for extracting reference data is offset forward in the vehicle traveling direction, and (3) when the feature points are spread broadly and the degree of dispersion is high, the reference point for extracting reference data is not offset.

  FIG. 6 is a functional block diagram explaining the functional units provided in the landscape matching unit 6 for executing the above extraction algorithm. The landscape matching unit 6 includes a feature point distribution state calculation unit 61, an extraction reference point offset calculation unit 62, a reference data extraction/search condition determination unit 63, a matching execution unit 64, and a matching shooting position extraction unit 65. The feature point distribution state calculation unit 61 calculates the distribution state, within the captured image, of the image feature points constituting the matching data; the distribution state of the image feature points in the matching data, which is the image feature point data generated from the actual captured image, is calculated by computing a degree of dispersion. Various statistics expressing the degree of dispersion are known, and no particular one is required here; anything that can compute the spread of the image feature points by a simple calculation may be used. The extraction reference point offset calculation unit 62 determines, based on the degree of dispersion calculated by the feature point distribution state calculation unit 61, whether the image feature points are unevenly distributed and, if so, in which region, and calculates the offset amount of the extraction reference point. The reference data extraction/search condition determination unit 63 determines, based on the shooting direction, the estimated vehicle position, and the offset amount, the reference data extraction/search conditions for extracting a reference data string from the reference data DB 92, and extracts the reference data string. The matching execution unit 64 performs pattern matching against the matching data while sequentially setting the reference data of the extracted reference data string as a pattern. When the matching succeeds, the matching shooting position extraction unit 65 reads out the shooting position (vehicle position) associated with the reference data that was the successful matching partner and sends it to the vehicle position determination unit 45 as the highly accurate vehicle position. It can therefore be understood that the reference data extraction/search condition determination unit 63 functions as the extracted reference data determination unit that determines, based on the distribution state calculated by the feature point distribution state calculation unit 61 and on the shooting position, the reference data to be extracted from the reference data database as matching partner candidates for the matching data.
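  A sketch of a simple degree-of-dispersion computation for the feature point distribution state calculation unit 61, and of the resulting offset decision of the extraction reference point offset calculation unit 62, follows; the statistic and the thresholds are assumptions, since the patent permits any simple measure of spread.

    import numpy as np

    def classify_distribution(points, image_width, image_height):
        """Sketch of unit 61: classify the distribution of matching-data feature
        points as 'central', 'peripheral', or 'dispersed' from a simple statistic."""
        pts = np.asarray(points, dtype=float)
        # Normalised distance of each feature point from the image centre.
        r = np.hypot((pts[:, 0] - image_width / 2.0) / (image_width / 2.0),
                     (pts[:, 1] - image_height / 2.0) / (image_height / 2.0))
        if r.std() > 0.35:          # widely spread: high degree of dispersion
            return "dispersed"
        return "central" if r.mean() < 0.5 else "peripheral"

    def extraction_offset(distribution_state, offset_steps=2):
        """Sketch of unit 62: offset amount of the extraction reference point
        (negative = rearward in the vehicle traveling direction, front camera)."""
        if distribution_state == "peripheral":
            return -offset_steps
        if distribution_state == "central":
            return +offset_steps
        return 0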

  In the above-described embodiment, the image feature points are edge points obtained by edge detection processing; in particular, line segment edges constituting a single line segment, and corner edges, that is, intersection points at which such line segments intersect (preferably approximately orthogonally), are treated as effective image feature points. The matching execution unit 64 employs a general pattern matching algorithm; however, a weighted pattern matching algorithm may be employed that gives a higher matching evaluation to the matching of corner points (intersection edge points where two linear components intersect), which have high importance in landscape images, than to that of other edge points. In that case, it is preferable to construct the reference data DB 92 in a form in which a corner attribute value indicating the coordinate values of the corner points is attached to the reference data; alternatively, the corner points may be detected each time reference data is extracted. The matching data may likewise be given labels at corner points in the process of being generated from the actual captured image.
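  A sketch of the weighted pattern matching mentioned above, in which correspondences on corner points contribute more to the matching evaluation than ordinary edge points, is given below; the tolerance, the weight values, and the (x, y, is_corner) data layout are assumptions.

    def weighted_matching_score(reference_points, matching_points,
                                corner_weight=2.0, tolerance=2):
        """Sketch: fraction of (weighted) reference feature points that find a
        nearby feature point in the matching data; corner points weigh more."""
        matching_set = {(x, y) for x, y, _ in matching_points}
        score = total = 0.0
        for x, y, is_corner in reference_points:
            w = corner_weight if is_corner else 1.0
            total += w
            hit = any((x + dx, y + dy) in matching_set
                      for dx in range(-tolerance, tolerance + 1)
                      for dy in range(-tolerance, tolerance + 1))
            if hit:
                score += w
        return score / total if total else 0.0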

In the above-described embodiment, the image feature points are edge points obtained by edge detection processing, particularly line segment edges constituting a single line segment and corner edges at which such line segments intersect, preferably approximately orthogonally. However, the image feature points of the present invention are not limited to such edge points. For example, representative edge points forming geometric shapes such as circles and rectangles (for a circle, for instance, three points on its circumference), or the center of gravity of a geometric shape and the point representing that center of gravity in the image, may also be used, since they can be effective image feature points depending on the landscape. It is also preferable to adopt edge strength as a factor in calculating importance; for example, if a line segment is a strong edge, its start point and end point can be treated as image feature points of high importance. Likewise, specific points of characteristic geometric shapes, for example the end points of a symmetrical object, can be treated as image feature points of high importance.
Furthermore, in addition to the edge points obtained by edge detection processing, points at which the hue or saturation of the captured image changes greatly may be adopted as image feature points. Similarly, based on color information, the end points of an object with a high color temperature can be treated as image feature points of high importance.
That is, any image feature points are usable in the present invention as long as they are effective for determining the similarity between the reference data and the image feature point data generated from an actual captured image, for example for pattern matching.

In the above-described embodiment, the reference data stored in the reference data DB 92 is associated with the shooting position and the shooting direction (camera optical axis direction). In addition, the date and time of shooting and the weather at the time of shooting may also be associated with it.
The shooting position may be at least two-dimensional data such as latitude/longitude data, or it may be three-dimensional data that also includes height data.
Furthermore, associating the shooting direction with the reference data is not essential. For example, if it is guaranteed that, both when the reference data is created and when landscape image recognition is performed using that reference data, the road is photographed in substantially the same shooting direction, the shooting direction is unnecessary.
Conversely, when reference data whose shooting directions are appropriately shifted from one basic shooting direction can be prepared, it is also possible to set only the reference data suited to the traveling direction of the vehicle, calculated from information such as the azimuth sensor, as the target of landscape image recognition.
The in-vehicle camera handled in the present invention is most suitably one that photographs the landscape ahead in the vehicle traveling direction; however, it may be a camera that photographs the landscape obliquely forward, or a camera that photographs the landscape behind. If a rear camera photographing the landscape behind is used, the direction in which feature points move as the vehicle travels is opposite to that with a front camera; therefore, when the feature points are concentrated in the periphery, the direction in which the reference point is offset is forward rather than rearward, and when the feature points are concentrated in the center, the direction in which the reference point is offset is rearward rather than forward. That is, since either a front camera or a rear camera can be used, the captured images handled in the present invention are not limited to images of the landscape ahead in the vehicle traveling direction.

  The functional units shown in the functional block diagrams used to describe the above embodiments are divided for ease of understanding; the present invention is not limited to the divisions shown here, and functional units may be freely combined or a single functional unit may be further divided.

  The image processing system of the present invention can be applied not only to car navigation but also to a technical field that measures the current position and orientation by landscape image recognition.

3: navigation control module 4: own vehicle position detection module 41: GPS processing unit 42: dead reckoning processing unit 43: own vehicle position coordinate calculation unit 44: map matching unit 45: own vehicle position determination unit 5: captured image processing unit 51 : Data input unit 52: feature point extraction unit 53: feature point importance determination unit 54: weighting unit 55: adjustment coefficient setting unit 56: image feature point data generation unit 57: reference data database creation unit 6: landscape matching unit 61: Feature point distribution state calculation unit 62: extraction reference point offset calculation unit 63: reference data extraction search condition determination unit 64: matching execution unit 65: matching shooting position extraction unit 14: camera 92: reference data DB
91: Road map DB

Claims (4)

  1. A vehicle position detection system using landscape image recognition, comprising:
    a reference data database that stores a reference data group in which image feature point data, generated by extracting image feature points from a captured image obtained by photographing a landscape from a vehicle, is associated with the shooting position of the captured image corresponding to that image feature point data;
    a captured image processing unit that receives an actual captured image of a landscape taken by an in-vehicle camera and outputs matching data generated by extracting image feature points from the actual captured image;
    a shooting position acquisition unit that acquires the shooting position of the actual captured image;
    a distribution state calculation unit that calculates the distribution state, within the captured image, of the image feature points constituting the matching data;
    an extracted reference data determination unit that determines reference data to be extracted from the reference data database as matching partner candidates for the matching data, based on the distribution state and the shooting position;
    a matching execution unit that performs matching between the reference data determined by the extracted reference data determination unit and the matching data; and
    a vehicle position determination unit that determines the vehicle position based on the shooting position associated with the reference data that has been successfully matched.
  2. The vehicle position detection system according to claim 1, wherein, when the distribution state is one in which the image feature points are unevenly distributed in the peripheral region of the captured image, the extracted reference data determination unit determines a plurality of pieces of reference data as the matching partner candidates so that they are biased, with respect to the shooting position of the actual captured image, toward the side of the vehicle traveling direction opposite to the shooting direction of the in-vehicle camera.
  3. The vehicle position detection system according to claim 1, wherein, when the distribution state is one in which the image feature points are concentrated in the central region of the captured image, the extraction reference data determination unit determines a plurality of the reference data as the matching partner candidates such that their photographing positions are biased, relative to the photographing position of the actual captured image, toward the photographing direction of the in-vehicle camera along the vehicle travelling direction.
  4. The vehicle position detection system according to any one of claims 1 to 3, further comprising an estimated vehicle position acquisition unit that acquires an estimated vehicle position, which is an estimated position of the vehicle, wherein the photographing position acquisition unit acquires the photographing position using the estimated vehicle position.
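As a rough illustration of the claimed processing flow, the sketch below chains the distribution-state calculation, the biased extraction of reference data, and the matching step into a single routine. All names, the one-dimensional treatment of photographing positions along the travel path, the 0.35 threshold, and the match_fn callback are illustrative assumptions, not the patented implementation.

    import numpy as np

    def offset_direction(distribution, camera_facing='front'):
        """+1 = bias forward, -1 = bias rearward (see the earlier sketch)."""
        forward = (distribution == 'central')
        return (1 if forward else -1) * (1 if camera_facing == 'front' else -1)

    def compute_distribution(points, image_size):
        """Classify feature points as 'peripheral' or 'central' by comparing
        their mean distance from the image centre with a simple threshold."""
        w, h = image_size
        centre = np.array([w / 2.0, h / 2.0])
        dists = np.linalg.norm(np.asarray(points, dtype=float) - centre, axis=1)
        return 'peripheral' if dists.mean() > 0.35 * min(w, h) else 'central'

    def select_reference_candidates(reference_db, shooting_pos, bias, half_range=30.0):
        """Pick reference data whose photographing positions lie in a window
        shifted from the estimated shooting position in the biased direction."""
        lo = shooting_pos if bias > 0 else shooting_pos - half_range
        hi = shooting_pos + half_range if bias > 0 else shooting_pos
        return [r for r in reference_db if lo <= r['position'] <= hi]

    def determine_vehicle_position(matching_points, shooting_pos, camera_facing,
                                   reference_db, image_size, match_fn):
        """End-to-end sketch: distribution -> candidate window -> matching -> position."""
        distribution = compute_distribution(matching_points, image_size)
        bias = offset_direction(distribution, camera_facing)
        candidates = select_reference_candidates(reference_db, shooting_pos, bias)
        for ref in candidates:
            if match_fn(matching_points, ref['feature_points']):
                return ref['position']   # a successful match fixes the vehicle position
        return None                      # fall back to the GPS / dead-reckoning estimate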

JP2010084624A 2010-03-31 2010-03-31 Vehicle position detection system using landscape image recognition Active JP5333860B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2010084624A JP5333860B2 (en) 2010-03-31 2010-03-31 Vehicle position detection system using landscape image recognition


Publications (2)

Publication Number Publication Date
JP2011215052A (en) 2011-10-27
JP5333860B2 (en) 2013-11-06

Family

ID=44944932

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2010084624A Active JP5333860B2 (en) 2010-03-31 2010-03-31 Vehicle position detection system using landscape image recognition

Country Status (1)

Country Link
JP (1) JP5333860B2 (en)


Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH08247775A (en) * 1995-03-15 1996-09-27 Toshiba Corp Device and method for identification of self position of moving body
WO2005038402A1 (en) * 2003-10-21 2005-04-28 Waro Iwane Navigation device
JP2005265494A (en) * 2004-03-17 2005-09-29 Hitachi Ltd Car location estimation system and drive support device using car location estimation system and drive support device using this
WO2006106694A1 (en) * 2005-03-31 2006-10-12 Pioneer Corporation Route guidance system, route guidance method, route guidance program, and recording medium
JP2007108043A (en) * 2005-10-14 2007-04-26 Xanavi Informatics Corp Location positioning device, location positioning method
JP2009074986A (en) * 2007-09-21 2009-04-09 Xanavi Informatics Corp Device, method, and program for calculating one's-own-vehicle position
JP2009250718A (en) * 2008-04-03 2009-10-29 Nissan Motor Co Ltd Vehicle position detecting apparatus and vehicle position detection method


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104748756A (en) * 2013-12-31 2015-07-01 现代自动车株式会社 Method for measuring position of vehicle using cloud computing
US9465099B2 (en) 2013-12-31 2016-10-11 Hyundai Motor Company Method for measuring position of vehicle using cloud computing
KR20180015961A (en) * 2016-08-04 2018-02-14 영남대학교 산학협력단 Method of estimating the location of object image-based and apparatus therefor
KR101885961B1 (en) * 2016-08-04 2018-08-06 영남대학교 산학협력단 Method of estimating the location of object image-based and apparatus therefor



Legal Events

Date Code Title Description
A621 Written request for application examination

Free format text: JAPANESE INTERMEDIATE CODE: A621

Effective date: 20120229

A977 Report on retrieval

Free format text: JAPANESE INTERMEDIATE CODE: A971007

Effective date: 20130626

TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20130704

A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20130717

R150 Certificate of patent or registration of utility model

Free format text: JAPANESE INTERMEDIATE CODE: R150