JP4450532B2 - Relative position measuring device - Google Patents

Relative position measuring device

Info

Publication number
JP4450532B2
Authority
JP
Japan
Prior art keywords
marker
image
distance
position
preceding vehicle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
JP2001218425A
Other languages
Japanese (ja)
Other versions
JP2003030628K1 (en)
JP2003030628A (en)
Inventor
知信 高島
大輔 上野
寿雄 伊藤
正則 佐藤
忠雄 尾身
康治 田口
幸三 馬場
智行 高橋
Original Assignee
トヨタ自動車株式会社
富士通株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by トヨタ自動車株式会社 and 富士通株式会社
Priority to JP2001218425A
Publication of JP2003030628A
Publication of JP2003030628K1
Application granted
Publication of JP4450532B2


Description

[0001]
BACKGROUND OF THE INVENTION
The present invention relates to a relative position measuring apparatus that measures the position of a preceding vehicle relative to the host vehicle by image recognition, and more particularly to a relative position measuring apparatus that measures the relative position by imaging markers mounted on the preceding vehicle.
[0002]
[Prior art]
To address traffic problems such as congestion caused by growing urban populations, research on intelligent transport systems (ITS) is being conducted by various organizations. ITS are road traffic systems aimed at improving the safety, efficiency, and comfort of road transport.
[0003]
Among recent ITS technologies is an advanced medium-range, medium-volume transport system technology called IMTS (Intelligent Multi-Mode Transit System). IMTS is a system in which transport vehicles such as buses drive autonomously and travel in platoons.
[0004]
To perform platooning in IMTS, the following vehicle must measure its position relative to the preceding vehicle and maintain a fixed inter-vehicle distance. One technique for measuring the relative position is to recognize, in an image, a marker installed on the preceding vehicle and derive the relative position between the vehicles by pattern matching.
[0005]
In the pattern-matching measurement technique, an image (template) of a marker arranged on the rear of the preceding vehicle is stored in the following vehicle. The marker on the rear of the preceding vehicle is then photographed by a camera on the following vehicle, and the relative position of the preceding vehicle is calculated by pattern matching between the photographed image and the stored template.
[0006]
[Problems to be solved by the invention]
However, when a marker pattern is recognized by image processing, a large number of pattern-matching templates must be prepared in advance to extract the correct pattern. That is, since the size and shape of the marker pattern in the image change with the distance to, and rotation of, the preceding vehicle, the following vehicle must store a plurality of templates covering these changes, which increases memory usage.
[0007]
Further, as the number of templates increases, the number of pattern matching processes increases, and a lot of calculation processes are required. As a result, it takes time to measure the relative position.
[0008]
To ensure the reliability and safety of IMTS, the relative position of the preceding vehicle must be recognized accurately at all times, which in turn requires that the relative position be measured in a short time.
[0009]
Note that there is also a method, as in the technique described in Japanese Patent Application Laid-Open No. 2000-337871, of arranging markers in a row on the rear of the preceding vehicle and having the following vehicle detect them to calculate the relative distance. This method can calculate the relative position in a short time, but it is difficult to measure the distance accurately when the preceding vehicle turns sideways at a corner (when a yaw angle occurs). This technique calculates the inter-vehicle distance from the spacing between the markers in the image captured by the camera; when the preceding vehicle turns sideways, that spacing shortens even though the inter-vehicle distance has not changed, and a measurement error results.
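To see the size of this error, consider a simple pinhole-camera sketch (the focal length f and the true marker spacing h_M are symbols introduced here for illustration; they do not appear in the publication). A horizontal marker pair of spacing h_M at distance d, viewed under a yaw angle θ_y, appears in the image with spacing

\[
h_{\mathrm{img}} \approx \frac{f\, h_M \cos\theta_y}{d},
\qquad
\hat{d} = \frac{f\, h_M}{h_{\mathrm{img}}} = \frac{d}{\cos\theta_y},
\]

so a yaw angle alone inflates the distance estimate by a factor of 1/cos θ_y even though the true inter-vehicle distance is unchanged.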
[0010]
Thus, with the conventional image processing technology, it is difficult to calculate the inter-vehicle distance and relative angle with the preceding vehicle with high accuracy in a short time.
The present invention has been made in view of these points, and an object of the present invention is to provide a relative position measuring apparatus capable of accurately measuring a relative position in a short time using image recognition.
[0011]
[Means for Solving the Problems]
In the present invention, in order to solve the above-described problem, a relative position measuring device 2 as shown in FIG. 1 is provided. The relative position measuring device 2 of the present invention comprises: imaging means 2a for imaging markers 1a to 1e arranged along a plurality of axes on the rear of the preceding vehicle 1; extraction means 2c for extracting, from a frame image 2b captured by the imaging means 2a, a marker pattern 2d composed of a plurality of marker images representing the markers 1a to 1e; and calculation means 2e for calculating at least a relative distance or a relative angle with respect to the preceding vehicle based on the marker pattern 2d extracted by the extraction means 2c.
[0012]
According to such a relative position measuring device 2, the markers 1a to 1e arranged in the plurality of axial directions on the rear portion of the preceding vehicle are photographed by the imaging means 2a, and a frame image 2b is obtained. The marker pattern 2d is extracted from the frame image 2b by the extracting means 2c. Then, the calculation means 2e calculates the relative distance or relative angle with the preceding vehicle based on the marker pattern 2d.
[0013]
DETAILED DESCRIPTION OF THE INVENTION
Hereinafter, embodiments of the present invention will be described with reference to the drawings.
FIG. 1 is a principle configuration diagram of the present invention. The relative position measuring device 2 of the present invention comprises: imaging means 2a for imaging the markers 1a to 1e arranged along a plurality of axes on the rear of the preceding vehicle 1; extraction means 2c for extracting, from a frame image 2b captured by the imaging means 2a, a marker pattern 2d composed of a plurality of marker images representing the markers 1a to 1e; and calculation means 2e for calculating a relative distance or a relative angle with respect to the preceding vehicle based on the marker pattern 2d extracted by the extraction means 2c.
[0014]
According to such a relative position measuring apparatus, the markers 1a to 1e arranged in the plurality of axial directions on the rear portion of the preceding vehicle are photographed by the imaging means 2a, and a frame image 2b is obtained. The marker pattern 2d is extracted from the frame image 2b by the extracting means 2c. Then, the calculation means 2e calculates the relative distance or relative angle with the preceding vehicle based on the marker pattern 2d.
[0015]
Thus, in the present invention, the relative position is calculated from the markers 1a to 1e arranged along a plurality of axes (for example, vertical and horizontal), so a large number of templates need not be stored. As a result, memory usage is reduced and processing time is shortened. Moreover, because the markers are arranged along a plurality of axes, even if the preceding vehicle develops a yaw angle, the relative distance can still be measured accurately from the vertically arranged markers, and highly accurate measurement results can be obtained.
[0016]
Such a relative position measuring device can be applied to technologies such as IMTS. For example, in platooning, mounting the relative position measuring device of the present invention on the following vehicle enables highly reliable relative position measurement. Hereinafter, an embodiment of the present invention will be described in detail, taking the application to IMTS as an example.
[0017]
FIG. 2 is a diagram showing an outline of platooning under IMTS. In IMTS, a plurality of vehicles 10, 20, and 30 travel in a platoon. The vehicles 10, 20, and 30 are transport vehicles such as buses. With the vehicle 20 tracking the leading vehicle 10 and the vehicle 30 tracking the vehicle 20, the vehicles 20 and 30 can travel under automatic driving.
[0018]
A marker panel 11 is attached to the rear portion of the vehicle 10. A camera 21 is attached in front of the vehicle 20, and a marker panel 22 is attached behind the vehicle 20. Similarly, a camera 31 is attached in front of the vehicle 30, and a marker panel 32 is attached behind the vehicle 30.
[0019]
On the marker panels 11, 22, and 32, a plurality of markers are distributed in a predetermined pattern in a plurality of axial directions (for example, the horizontal direction and the vertical direction). The relative position of the vehicle 10 with respect to the vehicle 20 can be measured by imaging the marker panel 11 of the preceding vehicle 10 with the camera 21 of the vehicle 20. Moreover, the relative positional relationship of the vehicle 20 with respect to the vehicle 30 can be measured by imaging the marker panel 22 of the preceding vehicle 20 with the camera 31 of the vehicle 30.
[0020]
Based on the positions of the vehicles 20 and 30 relative to their preceding vehicles, the steering wheel, accelerator, brake, and the like are automatically controlled so as to follow the preceding vehicle at a fixed distance.
[0021]
FIG. 3 is a view of the preceding vehicle as viewed from the rear. On the marker panel 11 of the preceding vehicle 10, five markers 11a to 11e are arranged. The markers 11a to 11e are distributed in the vertical direction and the horizontal direction. In the present embodiment, a marker pattern is formed by these five markers 11a to 11e.
[0022]
The markers 11a to 11e are rectangular. Each is, for example, a self-luminous body composed of many LEDs (light-emitting diodes). Note that the markers 11a to 11e need not emit light as long as they are visually distinguishable from their surroundings; for example, the marker panel 11 may be black and the markers 11a to 11e white.
[0023]
In the example of FIG. 3, five markers 11a to 11e are arranged. However, as long as three or more markers are distributed in a plurality of directions, recognition as a marker pattern is possible, and the distance can be calculated even from a single marker.
[0024]
FIG. 4 is a diagram illustrating the arrangement of the markers. FIG. 4A is a front view of the marker panel, and FIG. 4B is a cross-sectional view taken along the line X-X in FIG. 4A.
The marker 11a is disposed at the upper left and the marker 11b at the upper right. The markers 11a and 11b are aligned horizontally, and the spacing between them is h_M1. The marker 11c is disposed at the lower left and the marker 11d at the lower right. The markers 11c and 11d are aligned horizontally, and the spacing between them is h_M2. The difference in height between the upper-row markers 11a, 11b and the lower-row markers 11c, 11d is v_M.
[0025]
The marker 11e is disposed in the central depression of the marker panel 11. The depth of the marker 11e relative to the markers 11a to 11d is d_M.
As described above, the markers 11a to 11e are distributed in the three axial directions (vertical, horizontal, and depth). Thereby, the relative position between the preceding vehicle 10 and the succeeding vehicle 20 can be accurately measured. Hereinafter, the markers 11a to 11d are referred to as peripheral markers, and the marker 11e is referred to as a central marker.
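On the following vehicle, the panel geometry can be held as a small set of parameters. A minimal sketch in Python (the class name and the numeric values are illustrative assumptions, not values from the publication):

from dataclasses import dataclass

@dataclass(frozen=True)
class MarkerPanel:
    """Physical layout of the marker panel (all lengths in meters)."""
    h_m1: float  # horizontal spacing of the upper pair (11a-11b)
    h_m2: float  # horizontal spacing of the lower pair (11c-11d)
    v_m: float   # height difference between the upper and lower rows
    d_m: float   # depth of the recessed center marker (11e)

# Illustrative values only; the publication treats these as adjustable parameters.
PANEL = MarkerPanel(h_m1=0.8, h_m2=1.2, v_m=0.5, d_m=0.2)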
[0026]
By arranging the markers in the depth direction as well (the unevenness of the marker panel), the yaw angle θ_y can be calculated with high accuracy.
If all the markers were arranged in the same plane, a change in the yaw angle of the preceding vehicle 10 would be detected as a change in the aspect ratio of the marker pattern or of each marker image: as the yaw angle increases, the marker pattern and each marker image contract horizontally, so the degree of horizontal contraction can be determined from the aspect ratio, and the yaw angle calculated from that degree of contraction.
[0027]
However, the preceding vehicle 10 also rotates in the pitch direction (about an axis running laterally through the center of the vehicle), and as the pitch angle increases, the marker pattern and each marker contract vertically. It is therefore difficult to calculate the yaw angle accurately from the aspect ratio of the marker pattern or of each marker alone.
[0028]
Therefore, in the present embodiment, depth (unevenness) is added to the marker pattern so that a change in the yaw angle appears as a change in the arrangement of the marker images that constitute the marker pattern in the captured image (hereinafter referred to as a frame image).
[0029]
FIG. 5 is a diagram illustrating an example of a frame image. In FIG. 5, the horizontal direction of the frame image 40 is taken as the x axis and the vertical direction is taken as the y axis.
In the frame image 40, marker images 41 to 45 corresponding to the five markers 11a to 11e are drawn. When the preceding vehicle 10 has a relative yaw angle, a marker that does not lie in the same plane as the others acquires a horizontal azimuth difference with respect to them. This difference in azimuth angle increases in proportion to the magnitude of the yaw angle.
[0030]
If the center marker (marker 11e) lay in the same plane as the peripheral markers (markers 11a to 11d), it would be drawn at the center position P0 of the marker images 41 to 44. In reality, the marker 11e is recessed behind the peripheral markers, so in the frame image of FIG. 5 the marker image 45 is displayed shifted to the left of the position P0.
[0031]
An accurate yaw angle can be calculated from the amount by which the marker image 45 deviates from the position P0. For example, the yaw angle can be calculated from the difference between the x coordinate of the intersection P1 of the diagonals 46 and 47 of the coplanar peripheral markers (markers 11a to 11d) and the x coordinate of the marker image 45 of the actual center marker (marker 11e).
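A minimal sketch of this computation in Python (function names and the coordinate convention are illustrative; inputs are pixel coordinates of the four peripheral marker images and the center-marker image):

import numpy as np

def diagonal_intersection(ul, ur, ll, lr):
    """Intersection P1 of the diagonals ul-lr and ur-ll (a 2x2 linear solve)."""
    ul, ur, ll, lr = map(np.asarray, (ul, ur, ll, lr))
    # ul + t*(lr - ul) == ur + s*(ll - ur): solve for t and s.
    A = np.column_stack((lr - ul, ur - ll))
    t, _ = np.linalg.solve(A, ur - ul)
    return ul + t * (lr - ul)

def center_marker_offset(ul, ur, ll, lr, center):
    """Horizontal deviation (pixels) of the center-marker image from P1.
    Zero when the preceding vehicle has no yaw; grows with the yaw angle."""
    p1 = diagonal_intersection(ul, ur, ll, lr)
    return center[0] - p1[0]

# Square example: P1 is at (5, 5); a center marker drawn at x = 4 gives -1.
print(center_marker_offset((0, 0), (10, 0), (0, 10), (10, 10), (4.0, 5.0)))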
[0032]
FIG. 6 is a diagram illustrating the relative position relationship between the vehicles. The relative position is measured with reference to the following vehicle 20 and is specified by the inter-vehicle distance d, the azimuth angle θ, and the yaw angle θ_y. The distance d is the distance from the rear of the preceding vehicle 10 to the front of the following vehicle 20. The azimuth angle θ is the angle between the axis 101 defined along the longitudinal direction of the following vehicle 20 (an axis parallel to its direction of travel when going straight) and the line segment 102 connecting the rear of the preceding vehicle 10 to the front of the following vehicle 20. The yaw angle θ_y is the angle between an axis 103 parallel to the axis 101 and the axis 104 defined along the longitudinal direction of the preceding vehicle 10 (an axis parallel to its direction of travel when going straight).
[0033]
FIG. 7 is a block diagram showing a configuration of an automatic tracking function installed in the vehicle. The vehicle 20 includes a camera 21, an image processing unit 23, a control unit 24, a memory 25, a sensor 26, and an actuator 27 for tracking the preceding vehicle 10.
[0034]
The camera 21 includes an optical system 21a and a CCD (Charge Coupled Device) 21b. The optical system 21a is composed of a plurality of lenses, and condenses light from the preceding vehicle 10 on the light receiving surface of the CCD 21b. The CCD 21b converts the light collected by the optical system 21a into a corresponding image signal by photoelectric conversion.
[0035]
The image processing unit 23 extracts marker images by performing processing such as contour extraction on the image signal output from the CCD 21b.
The control unit 24 calculates the relative position of the vehicle 10 using the marker image extracted by the image processing unit 23, and controls the actuator 27 according to the calculated relative position.
[0036]
The memory 25 is composed of semiconductor memory or the like, and temporarily stores the marker images extracted by the image processing unit 23, intermediate data produced while the control unit 24 performs its control, and programs. The memory 25 also includes nonvolatile memory, which stores the programs executed by the control unit 24 and data such as the inter-marker distances.
[0037]
The sensor 26 detects information such as the vehicle speed, the accelerator opening, and the transmission state, and supplies it to the control unit 24. The actuator 27 includes an accelerator controller, a brake controller, and the like, and controls the traveling state of the vehicle 20 based on commands from the control unit 24.
[0038]
Note that the image processing unit 23, the control unit 24, and the memory 25 illustrated in FIG. 7 are mounted on the vehicle 20 as one control device.
Next, the relative position calculation process in the vehicle 20 will be described.
[0039]
FIG. 8 is a flowchart showing the overall procedure of the relative position calculation processing. The process shown in FIG. 8 is described below in order of step number.
[Step S11] The control unit 24 performs initial processing: the values of variables such as the inter-marker distances are set to their initial values, and a working storage area is secured in the memory 25. Because the inter-marker distances and the marker size may change, they can be modified at any time via parameters.
[0040]
[Step S12] The controller 24 performs a marker pattern full search process. The full search process is a process for searching for a marker pattern for the entire range in the frame image taken by the camera 21. Details of the full search process will be described later.
[0041]
[Step S13] The control unit 24 calculates a relative position based on the marker pattern searched in step S12.
[Step S14] The control unit 24 determines whether or not a preceding vehicle has been detected. If a preceding vehicle is detected, the process proceeds to step S15. If no preceding vehicle is detected, the process proceeds to step S12. Thus, the full search process is repeatedly executed until a vehicle is detected.
[0042]
[Step S15] The control unit 24 determines whether the detected distance to the vehicle is far. Whether or not the distance is far is determined based on whether or not the detected distance to the vehicle is greater than a preset threshold. If the distance is far, the process proceeds to step S18. If the distance is not far, the process proceeds to step S16.
[0043]
[Step S16] The control unit 24 performs the marker image tracking process. The tracking process tracks each marker image individually, using the vicinity of the position where that marker image was detected in the immediately preceding frame as its search range. Details of the tracking process will be described later.
[0044]
[Step S17] The control unit 24 calculates the relative position of the preceding vehicle based on the marker pattern detected in step S16. Thereafter, the process proceeds to step S20.
[0045]
[Step S18] The control unit 24 performs a small-area full search process. The small-area full search is the same as the full search except that the marker pattern search range is narrowed. The method of determining the search range in the small-area full search is described later; its processing procedure is therefore not shown in a separate flowchart.
[0046]
[Step S19] The control unit 24 calculates the relative position of the preceding vehicle based on the marker pattern searched in step S18. Thereafter, the process proceeds to step S20.
[0047]
[Step S20] The control unit 24 determines whether a vehicle is not detected continuously for a predetermined time. The predetermined time is set in advance. The predetermined time can be defined, for example, by the number of times that the vehicle cannot be detected continuously in the tracking process (step S16) or the small area full search process (step S18). The number of times is set in advance as a parameter.
[0048]
If no vehicle is detected for a predetermined time, the process proceeds to step S12, and a full search process is performed. If a vehicle is detected within the predetermined time, the process proceeds to step S15, and the tracking process (step S16) or the small area full search process (step S18) is repeated.
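The flow of FIG. 8 amounts to a small state machine around the three detection modes. A condensed sketch in Python (the search and tracking routines are passed in as callables; all names and default values are illustrative):

def relative_position_loop(full_search, small_area_search, track,
                           compute_position, far_threshold=10.0, max_misses=5):
    """Mode selection of FIG. 8: full search until a preceding vehicle is
    found (steps S12-S14), then small-area full search when far (step S18)
    or marker tracking when near (step S16), falling back to the full
    search after max_misses consecutive failures (step S20)."""
    while True:
        pattern = full_search()                   # step S12
        if pattern is None:
            continue                              # step S14: keep searching
        misses = 0
        while misses < max_misses:                # step S20
            position = compute_position(pattern)  # steps S13 / S17 / S19
            if position.distance > far_threshold:    # step S15
                found = small_area_search(pattern)   # step S18
            else:
                found = track(pattern)               # step S16
            if found is None:
                misses += 1                       # vehicle missed this frame
            else:
                pattern, misses = found, 0
        # Lost for max_misses frames: restart the full search.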
[0049]
FIG. 9 is a flowchart showing the procedure of the full search process. In the following, the process illustrated in FIG. 9 will be described in order of step number.
[Step S21] The control unit 24 binarizes the frame image acquired from the camera 21. Binarization replaces the input image from the camera 21 with a binary image in which portions brighter than a certain luminance become white and the remaining portions become black.
[0050]
[Step S22] The control unit 24 performs labeling. Labeling assigns an identification number to each of the white regions (candidate marker images) scattered in the binarized frame image; each high-luminance region in the frame image is a region that may turn out to be a marker image.
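Steps S21 and S22 correspond to standard thresholding and connected-component labeling. A sketch using NumPy and SciPy (the threshold value is an illustrative assumption; the publication leaves it unspecified):

import numpy as np
from scipy import ndimage

def binarize_and_label(frame, threshold=200):
    """Step S21: pixels at or above the threshold become white (1), the
    rest black (0). Step S22: assign an identification number to each
    connected white region (a candidate marker image)."""
    binary = (frame >= threshold).astype(np.uint8)
    labels, num_regions = ndimage.label(binary)
    return binary, labels, num_regions

# Example: a synthetic 8-bit frame containing two bright blobs.
frame = np.zeros((120, 160), dtype=np.uint8)
frame[20:30, 40:60] = 255
frame[80:90, 100:120] = 230
_, labels, n = binarize_and_label(frame)
print(n)  # -> 2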
[0051]
[Step S23] The control unit 24 determines whether the marker image satisfies the marker condition, and narrows down the marker image.
The first marker condition is that the area of the marker image is within a predetermined range. The maximum area and the minimum area of the marker image are set in advance, and the marker condition is set to be not less than the minimum area and not more than the maximum area.
[0052]
The second marker condition is that the aspect ratio of the marker image is within a predetermined range. The maximum value and the minimum value of the aspect ratio of the marker image can be specified by parameters. The third marker condition is that there is no abnormality in the filling rate of the marker image. The filling rate is the ratio of the marker image (white portion) in the rectangular area surrounding the marker image. When the filling rate is abnormally small, it is determined that the marker image does not reflect a real marker (a false marker).
[0053]
The fourth marker condition is that there are not too many marker images as marker candidates. If there are too many marker images, it is determined that the image is not reliable, and the process ends with an error without performing the subsequent processing. The number of allowed marker images can be specified in advance by a parameter.
[0054]
[Step S24] The control unit 24 sorts the identification numbers assigned to the marker images in order of marker image area. By sorting by area and performing the marker determination in that order, pattern determination across markers of significantly different sizes is avoided, which improves the accuracy of marker pattern detection and also increases the processing speed.
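The four narrowing conditions of step S23 and the sort of step S24 reduce to per-region checks over the labeled image. A sketch continuing the previous listing (all threshold values are illustrative parameters):

from scipy import ndimage

def narrow_marker_candidates(labels, min_area=4, max_area=2000,
                             min_aspect=0.3, max_aspect=3.0,
                             min_fill=0.5, max_candidates=20):
    """Step S23: keep regions whose area, aspect ratio, and filling rate
    are plausible for a marker; step S24: sort the survivors by area."""
    candidates = []
    for idx, sl in enumerate(ndimage.find_objects(labels), start=1):
        if sl is None:
            continue
        mask = labels[sl] == idx
        area = int(mask.sum())
        height, width = mask.shape
        aspect = width / height
        fill = area / (width * height)   # share of the bounding box lit
        if (min_area <= area <= max_area and
                min_aspect <= aspect <= max_aspect and
                fill >= min_fill):
            candidates.append((idx, area))
    if len(candidates) > max_candidates:
        return None                      # fourth condition: image unreliable
    candidates.sort(key=lambda c: c[1])  # step S24: order by area
    return candidates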
[0055]
[Step S25] The control unit 24 performs marker pattern detection processing. Details of the marker pattern detection process will be described later.
[Step S26] The control unit 24 determines whether a marker pattern has been detected by the marker pattern detection process. If a marker pattern is detected, the process proceeds to step S27. If no marker pattern is detected, the process proceeds to step S28.
[0056]
[Step S27] The control unit 24 outputs the detected marker pattern, and the process proceeds to step S13 in FIG. 8.
[Step S28] The control unit 24 outputs an error message indicating that no preceding vehicle could be detected, and the process proceeds to step S13 in FIG. 8.
[0057]
FIG. 10 is a flowchart illustrating a procedure of marker pattern detection processing. In the following, the process illustrated in FIG. 10 will be described in order of step number.
[Step S31] The control unit 24 sequentially acquires three unprocessed points in the marker image from the front of the sorted array of identification numbers. The selection condition at this time is that the areas of the selected marker images are not significantly different.
[0058]
For example, let MS1 be the area of the largest of the selected marker images and MS2 the area of the smallest; the selection condition is that MS1 / K ≤ MS2 is satisfied. Here, K is a threshold value set in advance as a parameter and is a real number of 1 or more. The value 1/K may be set as the parameter instead.
[0059]
As a result, determination between marker images having significantly different sizes is not performed, the accuracy of relative position measurement is improved, and processing is speeded up because unnecessary determination is not performed.
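The selection of step S31 with this size condition can be written as a generator over the sorted candidate list (a sketch; K is the parameter described above):

from itertools import combinations

def candidate_triples(sorted_candidates, K=2.0):
    """Step S31: yield 3-candidate combinations whose largest area MS1 and
    smallest area MS2 satisfy MS1 / K <= MS2. `sorted_candidates` is the
    (id, area) list produced by the narrowing step; K >= 1 is a parameter."""
    for triple in combinations(sorted_candidates, 3):
        areas = [area for _, area in triple]
        if max(areas) / K <= min(areas):
            yield triple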
[0060]
[Step S32] The control unit 24 determines whether or not the acquired three marker images satisfy the conditions of the peripheral markers.
The first condition for the three marker images to be peripheral markers is that two of the marker images are horizontal. Even if they are not exactly horizontal, the two points are regarded as horizontal if the slope of the straight line passing through them is at most a threshold value. The slope threshold can be set with a parameter.
[0061]
The second condition for the three marker images to be peripheral markers is that the ratio between the distance between the two horizontal points (in pixels on the image) and the x- and y-direction distances on the image from those two points to the remaining point approximates a predetermined ratio. The predetermined ratio is determined by the arrangement of the peripheral markers (markers 11a to 11d) on the marker panel 11. The allowable error range can be specified by a parameter.
[0062]
If the peripheral marker condition is satisfied, the process proceeds to step S33. If the peripheral marker condition is not satisfied, the process proceeds to step S40.
[Step S33] The control unit 24 estimates the positions of other points based on the acquired three marker images. The positions of other points are estimated based on the ratio of the distances between the markers 11a to 11e of the marker panel 11.
[0063]
[Step S34] The control unit 24 adopts as a further marker image any marker image existing near the position estimated in step S33. The allowable range for what counts as near can be set by a parameter.
[0064]
[Step S35] The control unit 24 determines whether the marker image detected as the marker pattern is N points or more. N is a parameter having a value of 3, 4, or 5. If it is N points or more, the process proceeds to step S36, and if it is less than N points, the process proceeds to step S40.
[0065]
[Step S36] The control unit 24 determines whether a false marker image is included. For example, let a be the average height of all the acquired marker images and b the height difference of one acquired marker image from that average; when b × K1 < a < b × K2 (K1 < K2, both positive real numbers), it is determined that a false marker is included. K1 and K2 are parameters set in advance based on the arrangement of the markers 11a to 11e on the marker panel 11.
[0066]
In addition, if the ratio of the known marker size to the known marker pattern size is compared with the ratio of the marker image size to the marker pattern size on the image, and the error is small, it can be judged that no fake marker image is included.
[0067]
If a false marker image is not included, the process proceeds to step S37. If a false marker image is included, the process proceeds to step S40.
[0068]
[Step S37] The control unit 24 recognizes the acquired marker image group as a marker pattern, and adds 1 to the value of the recognized number of vehicles.
[Step S38] The control unit 24 determines whether the marker pattern recognized in step S37 is larger than any already recognized marker pattern (a maximum marker pattern). The size of the marker pattern is determined by the height of the marker pattern, for example. The vehicle indicated by the maximum marker pattern on the frame image photographed by the camera 21 is determined as the closest vehicle.
[0069]
If the marker pattern recognized in step S37 is the maximum marker pattern, the process proceeds to step S39. If the marker pattern recognized in step S37 is not the maximum marker pattern, the process proceeds to step S40.
[0070]
[Step S39] The control unit 24 registers the marker pattern recognized in step S37 as the marker pattern of the nearest vehicle.
[Step S40] The control unit 24 determines whether any combinations of marker images remain. If combinations remain, the process returns to step S31. If none remain, the process proceeds to step S26 shown in FIG. 9.
[0071]
Next, details of the tracking process will be described.
FIG. 11 is a flowchart showing the procedure of the tracking process. In the following, the process illustrated in FIG. 11 will be described in order of step number.
[0072]
[Step S51] The control unit 24 selects one marker image in the marker pattern detected by the previous marker pattern detection.
[Step S52] The control unit 24 acquires a search range around the marker image. The search range is a predetermined range with the marker predicted position as the center. When the marker position is not predicted, the previous marker position is the center.
[0073]
When the marker pattern was recognized in the previous detection, the larger of (the size of each marker image included in the marker pattern × L) and M, where L and M are positive real numbers, is used as the radius of the search range (or as the width and height of the search range).
[0074]
When the marker pattern is not recognized, the larger value of the average size of the marker image × L and M is set as the radius of the search range (or the width and height of the search range).
Note that L and M can be set by parameters.
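The search-range rule of the two preceding paragraphs is a one-line maximum. A sketch (the default values of L and M are illustrative):

def tracking_search_radius(marker_sizes, pattern_recognized, L=3.0, M=8.0):
    """Radius (or width and height) of the per-marker search range.
    If the pattern was recognized last time, each marker gets
    max(its own size x L, M); otherwise the average size is used."""
    if pattern_recognized:
        return [max(size * L, M) for size in marker_sizes]
    avg = sum(marker_sizes) / len(marker_sizes)
    return [max(avg * L, M)] * len(marker_sizes)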
[0075]
[Step S53] The control unit 24 binarizes and labels an image acquired from the camera 21.
[Step S54] The control unit 24 acquires a marker image from the search range in the binarized image. The condition is that the marker image acquired here is not in contact with both the upper and lower boundaries of the search range and is not in contact with both the right and left of the search range boundary. When a plurality of marker images are detected within the search range, the marker image closest to the center of the search range is acquired.
[0076]
[Step S55] The control unit 24 determines whether a marker image is found. If a marker image is found, the process proceeds to step S56. If a marker image is not found, the process proceeds to step S57.
[0077]
[Step S56] The control unit 24 checks the conditions for a genuine marker image. The first condition is that the area of the marker is between the minimum and maximum values. The second condition is that the aspect ratio of the marker is within the threshold. The third condition is that the filling rate is not abnormal. A marker image that fails any one of these conditions is judged to be a fake marker image.
[0078]
[Step S57] The control unit 24 determines whether there is another marker image. If there is another marker image, the process proceeds to step S51. If there is no other marker image, the process proceeds to step S58.
[0079]
[Step S58] The control unit 24 performs pattern determination. As the first condition of the pattern determination, it is evaluated whether two marker images are horizontal. An allowable range for being considered horizontal is set in advance.
[0080]
As the second condition, the positional relationship between the two marker images evaluated as horizontal under the first condition and the other marker images is evaluated. That is, it is evaluated whether the ratio between the distance between the two horizontal points (in pixels on the image) and the x- and y-direction distances on the image from those two points to another point approximates the predetermined ratio determined by the arrangement of the markers 11a to 11e on the marker panel 11. The allowable error range can be specified by a parameter.
[0081]
As the third condition, when the same marker image is detected in the tracking of two or more different markers, it is determined which marker the image actually belongs to. For example, the reliability of the image as each candidate marker is evaluated from its positional relationship with the other marker images. Specifically, if the other marker images (those not detected redundantly) have moved to the left of their positions in the previous frame image, the redundantly detected marker image is judged with high reliability to be the marker that was displayed further to the right in the previous frame image. The marker image is then excluded from the tracking targets of the marker with the lower reliability.
[0082]
Marker images that do not satisfy the conditions for pattern determination are excluded from the marker images to be tracked.
[Step S59] For a marker that could not be tracked (an invalid marker), the control unit 24 predicts its position from the positions of the other marker images.
[0083]
[Step S60] The control unit 24 predicts the next frame position. For example, the control unit 24 predicts the position in the next frame by first-order (linear) extrapolation from the previous and current positions of the marker image. However, if vehicle vibration is strong and the reliability of the prediction cannot be ensured, the position prediction function can be disabled by a parameter setting. Thereafter, the process proceeds to step S17 in FIG. 8.
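The extrapolation of step S60 in a short sketch (positions are (x, y) pixel coordinates; the flag mirrors the parameter mentioned above):

def predict_next_position(prev_pos, curr_pos, prediction_enabled=True):
    """Step S60: first-order (linear) extrapolation of a marker position
    to the next frame; returns the current position unchanged when
    prediction is disabled or no previous position exists."""
    if not prediction_enabled or prev_pos is None:
        return curr_pos
    (px, py), (cx, cy) = prev_pos, curr_pos
    return (2 * cx - px, 2 * cy - py)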
[0084]
By performing the processing as described above, a marker pattern is extracted from the frame image, and the relative position with the preceding vehicle is calculated based on the marker pattern.
[0085]
[Relative position calculation]
Next, details of the relative position calculation method will be described.
FIG. 12 is a diagram illustrating an example of a frame image capturing the rear of a preceding vehicle. In the frame image 50, the rear of the vehicle 10 (shown in FIG. 3) appears, together with marker images 51 to 55. The position of each of the marker images 51 to 55 is defined, for example, at its center of gravity. The marker images 51 to 54 of the peripheral markers form a trapezoid.
[0086]
Here, the distance (number of pixels) on the screen between the marker images 51 and 52 arranged on the upper base 56 of the trapezoid is denoted h_1 (positive toward the right of the image). The distance (number of pixels) between the marker images 53 and 54 arranged on the lower base 57 of the trapezoid is denoted h_2 (positive toward the right of the image). The distance (number of pixels) between the midpoint of the upper base 56 and the midpoint of the lower base 57 is the height (vertical interval) v of the marker pattern. The distance (number of pixels) between the intersection of the two diagonals 58 and 59 of the trapezoid and the center point of the frame image 50 (marked x in FIG. 12) is denoted h_c (positive toward the right of the image). The distance (number of pixels) between the marker image 55 of the center marker and the center point of the frame image 50 is denoted m_c (positive toward the right of the image).
[0087]
In the present embodiment, the distance d to the marker, the azimuth angle θ, and the yaw angle θ_y are obtained from the positions of the marker images 51 to 55 constituting the marker pattern. The distance d is obtained from the vertical interval v of the marker images 51 to 55 (it can also be obtained from the horizontal interval). The azimuth angle θ is obtained from the position on the frame image 50 of the marker reference point (for example, the intersection of the diagonals 58 and 59 of the peripheral marker images 51 to 54). The yaw angle θ_y is obtained from the horizontal displacement of the center-marker image 55 (or, when the center marker is not found, from the vertical interval v and the horizontal interval of the markers).
[0088]
In the case of a marker pattern using the five markers 11a to 11e as in the present embodiment, the horizontal interval is obtained from the lengths of the upper base 56 and the lower base 57 of the trapezoid formed by the four surrounding peripheral markers, and the vertical interval v is obtained from the length of the line connecting the midpoints of the upper base 56 and the lower base 57. Further, the azimuth angle θ is determined from the distance h_c from the center coordinate of the camera 21 to the intersection of the diagonals 58 and 59 of the peripheral markers.
[0089]
It is also possible, by instruction through a parameter, to obtain the distance from the horizontal spacing between the markers. In addition, when the marker pattern was detected last time and the inter-vehicle distance was determined to be short (so short that part of the marker pattern may fall outside the angle of view and fewer than three markers may appear in the image), the distance can also be obtained from the horizontal spacing between two markers or from the size of a single marker.
[0090]
The center marker, which provides depth, is used when calculating the yaw angle θ_y. That is, the yaw angle θ_y is obtained using the distance between the intersection of the diagonals 58 and 59 of the peripheral marker images 51 to 54 and the center-marker image 55 (the quantity m_c − h_c in equation (6) below).
[0091]
When not all five marker images are recognized, the relative position can still be calculated from four, three, two, or one marker image. As a result, even if several markers are lost (cannot be detected) because of disturbances such as weather, the distance, azimuth angle, and so on can still be obtained, and the detection rate of the preceding vehicle's position improves.
[0092]
A specific distance calculation method is shown below.
i) A calculation example when all five marker images are found as shown in FIG. 12 will be described. First, the width h of the marker pattern is
[0093]
[Expression 1]
h = (w_1 h_1 + w_2 h_2) / c … (1)
where w_1 is a weight given according to the actual distance between the markers on the upper base, and w_2 is a weight given according to the actual distance between the markers on the lower base. For example,
[0094]
[Expression 2]
w_1 : w_2 = h_M1 : h_M2 … (2)
[0095]
Here, c is a constant given according to w_1 : w_2 and the marker pattern shape; for example,
[0096]
[Expression 3]
c = (w_1 h_M1 + w_2 h_M2) / v_M … (3)
With this constant c, v = h holds when the yaw angle θ_y is 0.
[0097]
In addition, let C_c be a parameter determined by the camera characteristics, related to the number of pixels occupied on the image by a line of length v_M.
At this time, the azimuth angle θ, the distance d, and the yaw angle θ_y are calculated by the following equations.
[0098]
[Expression 4]
θ = −tan^{-1}(C_c h_c) … (4)
[0099]
[Expression 5]
d = C_c / (v cos θ) … (5)
[0100]
[Expression 6]
θ_y = −tan^{-1}(C_c m_c) − sin^{-1}((m_c − h_c) v_M cos(−tan^{-1}(C_c m_c)) / (v d_M)) + θ … (6)
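Equations (1) through (6) transcribe directly into code. A sketch in Python (argument names follow the symbols above; this is a literal transcription of the published formulas, with no additional assumptions about units):

import math

def relative_position_case_i(h1, h2, v, hc, mc,
                             h_m1, h_m2, v_m, d_m, c_c):
    """Case i): all five marker images found.
    h1, h2: on-screen spacing of the upper/lower marker pairs (pixels)
    v:      vertical interval of the marker pattern (pixels)
    hc, mc: offsets of the diagonal intersection and of the center-marker
            image from the image center (pixels)
    h_m1, h_m2, v_m, d_m: physical panel dimensions; c_c: camera parameter.
    May raise ValueError if the asin argument leaves [-1, 1]."""
    w1, w2 = h_m1, h_m2                          # eq. (2)
    c = (w1 * h_m1 + w2 * h_m2) / v_m            # eq. (3)
    h = (w1 * h1 + w2 * h2) / c                  # eq. (1); h == v at zero yaw
    theta = -math.atan(c_c * hc)                 # eq. (4)
    d = c_c / (v * math.cos(theta))              # eq. (5)
    theta_y = (-math.atan(c_c * mc)              # eq. (6)
               - math.asin((mc - hc) * v_m
                           * math.cos(-math.atan(c_c * mc)) / (v * d_m))
               + theta)
    return theta, d, theta_y, h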
ii) Next, the case where the four marker images excluding the center marker are found will be described. In this case, the azimuth angle θ and the distance d are calculated by equations (4) and (5), respectively. The yaw angle θ_y is obtained from each of the two pairs of parallel marker images by the following equation (7), and a weighted average based on the inter-marker lengths is taken.
[0101]
[Expression 7]
θ_y = ±cos^{-1}(d v_M (p_- + p_+) cos θ / (C_c L √(1 + α^2))) + α … (7)
where p_+ is the x coordinate (in pixels) of the left marker image of the parallel pair, p_- is the x coordinate (in pixels) of the right marker image, and L is the actual width between the parallel pair of markers (h_M1 or h_M2). α is
[0102]
[Expression 8]
α = −(p_+ + p_-) v_M / (2 C_c) … (8)
[0103]
If the argument of cos^{-1} in equation (7) is greater than 1, the yaw angle θ_y is set to 0.
iii) Next, a case where four points excluding one point of the peripheral marker are found will be described.
[0104]
FIG. 13 is a diagram illustrating an example of a frame image when four points excluding one point of the peripheral marker are found. The frame image 60 includes marker images 62 to 65 corresponding to the markers 11b to 11e, but no marker image 61 corresponding to the marker 11a is found. Therefore, when only three of the surrounding four marker images are found, first, the position of the marker image 61 that has not been found is estimated.
[0105]
First, the inclination of the straight line 66 passing through the two marker images 63 and 64 is obtained. Then, through the marker image 62, which is the horizontal partner of the missing marker image 61, a straight line 67 is drawn with the same inclination as the line through the two marker images 63 and 64 on the opposite side.
[0106]
Next, the inclination of the straight line 68 passing through the two marker images 62 and 64 is obtained. Then, through the marker image 63, which is the vertical partner of the missing marker image 61, a straight line 69 is drawn with the same inclination as the line 68 through the remaining two marker images 62 and 64.
[0107]
The intersection of the straight line 67 and the straight line 69 is set as the estimated position of the marker image 61.
With the marker image 61 estimated in this way included, the azimuth angle θ, the distance d, and the yaw angle θ_y are obtained from equations (4), (5), and (6).
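For the FIG. 13 case (upper-left marker image missing), the construction above is the intersection of two lines. A sketch for this one case (the other missing corners are symmetric):

import numpy as np

def estimate_upper_left(ur, ll, lr):
    """Estimate the missing upper-left marker image position.
    Line 67: through ur with the slope of the opposite (lower) side ll-lr.
    Line 69: through ll with the slope of the right side lr-ur.
    Returns the intersection of the two lines."""
    ur, ll, lr = map(np.asarray, (ur, ll, lr))
    d1 = lr - ll                      # direction of lines 66 and 67
    d2 = ur - lr                      # direction of lines 68 and 69
    # ur + t*d1 == ll + s*d2: solve the 2x2 system for t.
    A = np.column_stack((d1, -d2))
    t, _ = np.linalg.solve(A, ll - ur)
    return ur + t * d1

print(estimate_upper_left((10, 0), (0, 10), (10, 10)))  # -> [0. 0.]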
[0108]
When it is known that a roll angle (rotation about the front-rear axis of the vehicle) occurs without a yaw angle θ_y, the vertical interval v can be obtained as follows.
[0109]
FIG. 14 is a diagram illustrating an example of a frame image when a roll angle is generated. The frame image 60a includes four marker images 62a, 63a, 64a, and 65a among the peripheral markers. The marker image 61a corresponding to the marker 11a is not detected.
[0110]
When a roll angle occurs, the shortest distance between the straight line 66a passing through the two horizontally paired marker images 63a and 64a and the other marker image 62a can be used as the vertical interval v. This exploits the fact that the upper side and the lower side remain parallel when a roll angle occurs without a yaw angle.
[0111]
iv) If only three peripheral-marker images are found, the position of the marker image of the undetected peripheral marker is estimated as described in iii). Then, as in i), the azimuth angle θ and the distance d are obtained from equations (4) and (5), and, as in ii), the yaw angle θ_y is obtained from equation (7).
[0112]
v) Next, the case where only two peripheral markers and the center marker can be detected will be described. Two sub-cases are distinguished: either the two detected peripheral marker images form a horizontal pair, or they do not.
[0113]
FIG. 15 is a diagram illustrating an example of a frame image when two marker images that are paired in the horizontal direction among the peripheral markers are detected. The frame image 70 includes two marker images 73 and 74 corresponding to the peripheral markers and a marker image 75 corresponding to the center marker. Marker images 71 and 72 corresponding to other peripheral markers are not detected.
[0114]
In this case, the length of the perpendicular 77 dropped from the center-marker image 75 to the straight line 76 passing through the horizontal marker images 73 and 74 is v/2, so twice that length is taken as the vertical interval v (assuming the vertical position of the center marker lies exactly midway between the upper and lower bases of the trapezoid). For the horizontal interval h, the weight corresponding to the side on which no marker image pair exists (w_1 in the example of FIG. 15) is set to 0, and the distance d, the azimuth angle θ, and the yaw angle θ_y are then obtained in the same manner as in i).
[0115]
FIG. 16 is a first example of a frame image in which the two detected peripheral marker images do not form a horizontal pair. The frame image 80 includes two marker images 82 and 84 corresponding to the peripheral markers and a marker image 85 corresponding to the center marker. Marker images 81 and 83 corresponding to the other peripheral markers are not detected.
[0116]
In the case shown in FIG. 16, a right triangle is formed with the line segment 86 connecting the two peripheral marker images 82 and 84 as its hypotenuse, and the height of the right triangle is taken as the vertical interval v. The width h_3 of the right triangle multiplied by 2v_M / (h_M2 − h_M1) is taken as the horizontal interval h. This yields the equivalent of h for the case where all four peripheral markers can be confirmed. From these values, the distance d, the azimuth angle θ, and the yaw angle θ_y are obtained in the same manner as in i).
[0117]
FIG. 17 is a second example of a frame image in which the two detected peripheral marker images do not form a horizontal pair. The frame image 90 includes two marker images 92 and 93 corresponding to the peripheral markers and a marker image 95 corresponding to the center marker. Marker images 91 and 94 corresponding to the other peripheral markers are not detected.
[0118]
In the case shown in FIG. 17, a right triangle is formed with the line segment 96 connecting the two peripheral marker images 92 and 93 as its hypotenuse, and its height is taken as the vertical interval v. The width h_4 of the right triangle multiplied by 2v_M / (h_M2 + h_M1) is taken as the horizontal interval h. This yields the equivalent of h for the case where all four peripheral markers can be confirmed. From these values, the distance d, the azimuth angle θ, and the yaw angle θ_y are obtained in the same manner as in i).
[0119]
vi) If only two marker images are found, the previous determination found a short distance, and the two marker images are judged to be a horizontal pair, the distance d is calculated from the spacing between the two marker images. The azimuth angle θ is obtained from the midpoint of the two marker images.
[0120]
If the previous determination found that the distance is not short, the distance calculation is not performed.
By using this detection method only when the previous determination found a short distance (for example, 3 m or less) at which fewer than three markers are expected to fall within the angle of view, false detections of sunlight and the like can be minimized. At such short distances, sunlight has almost no influence because it is concealed by the vehicle ahead.
[0121]
vii) When only one marker image is found and the previous determination found a short distance, the distance d is calculated from the size (vertical size) of that one marker. In addition, which marker the detected image corresponds to is determined from the previous detection results, and the azimuth angle θ is calculated. If the previous determination did not find a short distance, the distance calculation is not performed.
[0122]
By using this detection method only when the previous determination found a short distance (for example, within 1 m) at which fewer than two markers are expected to fall within the angle of view, false detections of sunlight and the like can be minimized. At such short distances, sunlight has almost no influence because it is concealed by the vehicle ahead.
[0123]
As described above, by arranging the markers in both the vertical and horizontal directions, the position of the preceding vehicle can be measured with high accuracy, in a short time, and with little memory usage. In addition, if three of the peripheral marker images are detected, the position of the remaining one can be estimated and the relative position still calculated, ensuring reliability that can withstand use in a constantly changing natural environment.
[0124]
[Calculation of center of gravity]
Next, the method of calculating the center of gravity will be described. In the present embodiment, to increase the accuracy of the centroid calculation, the control unit 24 weights the luminance distribution information of the marker images, calculates the centroid of each marker image using the weighted luminance distribution, and uses this centroid as the position of the marker image (the marker reference position).
[0125]
FIG. 18 is an enlarged view of the marker image. As shown in FIG. 18, when a marker of a preceding vehicle is photographed, the contour of the marker image is blurred due to the influence of dust and dirt in the air.
[0126]
FIG. 19 is a diagram illustrating a luminance histogram of a marker image. In FIG. 19, the horizontal axis indicates the X coordinate, and the vertical axis indicates the luminance value. As shown in FIG. 19, the marker image has a high luminance value near the center, and the luminance value decreases as it approaches the periphery.
[0127]
Thus, when the marker is rectangular, the central portion of its image is bright and the periphery is blurred. The effect is amplified by blur caused by disturbances and by rotation of the marker itself (rotation in the roll direction of the vehicle). In particular, the image may bleed in one direction under natural conditions (rain, snow, fog, and so on).
[0128]
Here, as a method for calculating the marker reference position, the following three methods can be considered.
(1) Marker center position of binary image
(Midpoint of maximum value and minimum value in X direction, midpoint of maximum value and minimum value in Y direction)
(2) Calculating the center of gravity of a binary image
(3) Center of gravity calculation with luminance weight
If the marker image of a clean rectangular marker became a clean rectangle when digitized, method (1) would suffice. In practice, however, "bleeding" and rotation due to disturbances occur, and the digitized image is not rectangular.
[0129]
FIG. 20 is a diagram illustrating an example of the binarization processing of the marker image. 20A shows the luminance value for each pixel of the marker image before binarization, and FIG. 20B shows the luminance value for each pixel of the marker image after binarization.
[0130]
In the example of FIG. 20, binarization processing is performed with a threshold value of 20. That is, a pixel whose luminance value before binarization is less than 20 has a luminance value after binarization of 0. A pixel having a luminance value before binarization of 20 or more has a luminance value after binarization of 1. In FIG. 20B, pixels whose luminance value after binarization is 1 (a threshold value or more at the time of binarization) are indicated by white squares, and the luminance value after binarization is 0 (2 Pixels that are less than the threshold at the time of value conversion are indicated by black squares.
[0131]
In the example of FIG. 20A, the marker image 111 before binarization has its highest-luminance pixels near the center, and the luminance decreases toward the periphery. When the marker image 111 is binarized, the binarized marker image 112 is not an exact rectangle, as shown in FIG. 20B: the image is distorted by the pixel 112a, which has a luminance value of 1 at a position protruding from the rectangle.
[0132]
Thus, a binarized image is affected by points that exceed the threshold because of bleeding and the like (bright points produced by irregular reflection or the like at positions where no actual marker exists). In the case of marker reference position calculation method (1), this causes a large error: even a protrusion of a single pixel shifts the reference position in units of 0.5 pixels.
[0133]
In contrast, the centroid calculation of marker reference position calculation method (2) reduces the influence of a single pixel such as that in FIG. 20 compared with method (1), and the accuracy improves in a stochastic sense. That is, by calculating the centroid, the reference position can be detected with sub-pixel accuracy.
[0134]
Bleeding in one direction due to rain or the like is also expected. Even where such bleeding exceeds the binarization threshold, the luminance at the true marker position is high while the luminance of the bleeding portion is low. Exploiting this characteristic, marker reference position calculation method (3) weights each pixel by its pre-binarization luminance value when calculating the centroid, so that the marker reference position can be obtained more accurately.
[0135]
That is, in marker reference position calculation method (3), a weight corresponding to the luminance value in the image sent from the camera 21 is set for each pixel of the marker image. Each pixel is then given a virtual mass according to its weight, and the center of mass of the marker image is computed.
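A sketch of method (3), the luminance-weighted centroid, in NumPy:

import numpy as np

def luminance_weighted_centroid(patch):
    """Method (3): centroid of a marker image patch in which each pixel is
    weighted by its pre-binarization luminance value. Returns (x, y) in
    sub-pixel units relative to the patch origin."""
    patch = np.asarray(patch, dtype=float)
    total = patch.sum()
    ys, xs = np.indices(patch.shape)
    return (xs * patch).sum() / total, (ys * patch).sum() / total

# A patch that "bleeds" toward the left: the bright core still dominates,
# so the centroid stays near the high-luminance column.
patch = np.array([[ 5, 20, 80, 20],
                  [10, 40, 90, 30],
                  [ 5, 20, 80, 20]])
print(luminance_weighted_centroid(patch))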
[0136]
FIG. 21 is a diagram illustrating an example of calculating the center of gravity of the luminance weight. In FIG. 21, the brightness distribution of the marker image is shown as a brightness histogram in which the x-axis is set on the horizontal axis and the brightness value is set on the vertical axis. In the figure, a line 121 indicates the barycentric position of the luminance weight. A line 122 indicates the barycentric position of the binarization result.
[0137]
In the example of FIG. 21, a smear occurs in one direction (the left side in the figure) due to rain or the like, and the high-luminance area spreads. The barycentric position of the binarization result is therefore shifted to the left, as indicated by line 122. When the center of gravity is instead obtained with luminance weighting, it lies at a position of high luminance, as indicated by line 121.
[0138]
In this way, when the histogram looks like the graph of FIG. 21, both the “method of taking the midpoint of the pixels at both ends” and the “method of taking the centroid of the binarized result” place the centroid near the midpoint between the two ends. However, when the marker is far away, part of the image is blurred, and the brightest part lies close to the true center. The “luminance-weighted center of gravity” therefore indicates a position closer to the center of the actual marker.
[0139]
[Marker condition judgment and filling rate]
Next, the determination of the marker condition and the filling rate will be described. In the present embodiment, a predetermined marker is recognized from the marker barycentric coordinate position.
[0140]
When a vehicle runs in a natural environment, a marker may go undetected because raindrops or the like disturb its rectangular shape, or a region other than a marker may be misrecognized as a marker due to sunlight. For this reason, in the present embodiment, the following points are checked when examining the shape of a marker.
・Is the aspect ratio of the high-luminance area close to the aspect ratio of the actual marker?
・Is the shape of the high-luminance area rectangular? Specifically, the ratio of the area of the high-luminance region to the area of the minimum rectangle covering it (the area filling rate) is obtained, and it is checked whether this ratio is close to the expected or specified value.
・Is the size of the high-luminance area too small or too large? The maximum and minimum marker sizes over the assumed inter-vehicle distance range are set in advance, and it is checked that the size of the high-luminance area is not less than the preset minimum and not more than the maximum.
[0141]
Regions satisfying these evaluation criteria are considered to satisfy the “marker condition” and become marker image candidates. It is also possible to weight the individual evaluation criteria and judge whether a candidate should be treated as a marker image by evaluating it comprehensively according to the weights. In this case, the weighting can be set according to the operating conditions (traveling course, surrounding environment, marker shape, and so on).
[0142]
For example, the weight of the filling rate can be increased at short distances, and the maximum-area criterion emphasized at long distances, enabling accurate determination according to distance. This reflects the characteristic that the probability of the marker image being rectangular is higher at short distances, whereas at long distances the marker image is often close to a circle.
[0143]
This determination processing prevents erroneous recognition even for a group of light emitters whose positional relationship is close to that of a true marker pattern. In addition, since the number of marker candidates is reduced, the final marker pattern determination is sped up.
[0144]
FIG. 22 is a diagram illustrating an example of a determination processing result based on aspect ratio. FIG. 22A shows an image input from the camera, FIG. 22B shows a labeling result, and FIG. 22C shows the marker image candidates extracted after shape determination.
[0145]
The frame image 130 input from the camera 21 includes a plurality of high-luminance areas. In the example of FIG. 22A, the frame image 130 includes an elliptical area, a triangular area, and four rectangular areas. The control unit 24 therefore performs labeling and assigns an identification number to each high-luminance area. In the example of FIG. 22B, 1 is assigned to the elliptical area, 2 to the triangular area, and identification numbers 3 to 6 to the four rectangular areas.
[0146]
Here, the control unit 24 determines the aspect ratio. If the actual marker shape is a square, a shape that is too long horizontally, such as the elliptical region shown in FIG. 22B, or too long vertically, such as the triangular region, cannot be a marker image (its aspect ratio is abnormal). As a result, as shown in FIG. 22C, only the rectangular areas are extracted as marker image candidates, and identification numbers 1 to 4 are reassigned to them.
[0147]
FIG. 23 is a diagram illustrating a filling rate determination example. When determining the filling rate, the control unit 24 sets a rectangular area 142 surrounding the marker image 141. When the width of the rectangular area 142 in the x-axis direction is x1, its width in the y-axis direction is y1, and the marker area is S (the number of pixels on the screen), the filling rate F is given by
[0148]
[Equation 9]
F = S / (x1 × y1) (9)
The maximum value of the filling rate F is 1, and F approaches 1 as the marker image 141 approaches a rectangle.
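A sketch of the filling rate check of equation (9), assuming a binarized marker mask as input (the function name and the sample mask are hypothetical):

import numpy as np

def filling_rate(mask: np.ndarray) -> float:
    # Equation (9): F = S / (x1 * y1), where S is the number of
    # high-luminance pixels and x1, y1 are the side lengths of the
    # minimum rectangle 142 surrounding the region.
    ys, xs = np.nonzero(mask)
    x1 = xs.max() - xs.min() + 1
    y1 = ys.max() - ys.min() + 1
    return xs.size / float(x1 * y1)

# A solid 3 x 4 rectangle gives F = 1.0; distorted, cross-shaped, or
# triangular regions as in FIG. 24 give smaller values.
mask = np.zeros((6, 6), dtype=np.uint8)
mask[1:4, 1:5] = 1
print(filling_rate(mask))   # 1.0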
[0149]
The following shows examples of regions that are not actually markers but would be erroneously recognized as markers if judged by the aspect ratio alone.
FIG. 24 is a diagram illustrating examples of images judged abnormal by the filling rate determination. FIG. 24A shows an example of a distorted shape, FIG. 24B shows an example of a cross shape, and FIG. 24C shows an example of a triangle.
[0150]
Each of the images 151, 152, and 153 is judged normal by the aspect ratio but abnormal by the filling rate.
Even if the marker is rectangular, its image cannot generally be regarded as a perfect rectangle. However, the closer the imaging target is to the imaging means, the closer the marker image is to a rectangle (the closer the filling rate is to 1). Judging marker images by the filling rate therefore makes it possible to accurately distinguish markers from other high-luminance regions, which is effective in preventing erroneous recognition.
[0151]
[Tracking process]
Next, the tracking process will be described. In the tracking process, the marker and the marker pattern are extracted when the movement of the target included in the marker pattern is within a predetermined range.
[0152]
FIG. 25 is a diagram showing marker search ranges. In the example of FIG. 25, the search ranges 161b, 162b, 163b, 164b, and 165b of the marker images in the image 160 are determined based on the reference positions of the marker images 161a, 162a, 163a, 164a, and 165a at the previous time. The marker images 161 to 165 at the current time are then detected within the search ranges 161b, 162b, 163b, 164b, and 165b.
[0153]
For example, when the reference position of a marker image at the previous time is (x2, y2), the range from (x2 - p, y2 - q) to (x2 + p, y2 + q) is set as the marker search range. Here, p and q are variables indicating the search width; their values are calculated from information such as the marker width at the previous time.
[0154]
FIG. 26 is a diagram illustrating a search width calculation method. FIG. 26A shows a first candidate for the marker search range, and FIG. 26B shows a second candidate for the marker search range.
[0155]
Let cx be the width of the marker image 161a at the previous time, and cy be the height of the marker image 161a at the previous time. Here, it is assumed that x3 = cx / 2 and y3 = cy / 2.
For the first candidate 161c of the marker search range, as shown in FIG. 26A, the search range is the range within x3 × n (n is a real number of 1 or more) in the horizontal direction and within y3 × r (r is a real number of 1 or more) in the vertical direction from the reference position of the marker image 161a at the previous time.
[0156]
For the second candidate 161d of the marker search range, as shown in FIG. 26B, the search range is the range within x3 + m (m is a positive real number) in the horizontal direction and within y3 + s (s is a positive real number) in the vertical direction from the reference position of the marker image 161a at the previous time.
[0157]
As the search width (p, q) for determining the actual search range, the larger value is adopted in each of the x-axis and y-axis directions.
For example, if a function that returns the maximum value of arguments is defined as max (argument # 1, argument # 2,...), The search width (p, q) can be obtained as follows.
[0158]
[Expression 10]
p = max(x3 × n, x3 + m) (10)
[0159]
[Expression 11]
q = max(y3 × r, y3 + s) (11)
Thus, when the marker image is large (the marker is at a short distance), the search range is the one defined by the first candidate 161c; when the marker image is small (the marker is at a long distance), the search range is the one defined by the second candidate 161d. A search range appropriate to the distance is therefore determined over the whole span from short to long range.
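A sketch of this search width selection per equations (10) and (11) (the parameter values n, r, m, s below are arbitrary placeholders, not values from the embodiment):

def search_width(cx, cy, n=1.5, r=1.5, m=8.0, s=8.0):
    # Equations (10)/(11): half-widths (p, q) of the search range from
    # the previous marker size (cx, cy), with x3 = cx/2 and y3 = cy/2.
    x3, y3 = cx / 2.0, cy / 2.0
    return max(x3 * n, x3 + m), max(y3 * r, y3 + s)

# Large (near) marker: the multiplicative first candidate wins.
print(search_width(40, 40))   # (30.0, 30.0)
# Small (distant) marker: the additive second candidate wins.
print(search_width(6, 6))     # (11.0, 11.0)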
[0160]
The addition processing by “m” or “s” of the second candidate 161d is effective when the marker pattern moves on the image in a fixed direction within the imaging interval of the camera 21. For example:
・When the optical axis of the vehicle's camera vibrates rotationally (vehicle vibration (yaw angle, pitch angle), steering mechanism shake, camera jig shake).
・When the preceding vehicle position moves on the image due to the road alignment (the angle need not be exactly constant).
・When an offset margin should be provided.
[0161]
On the other hand, the multiplication process by “n” or “r” of the first candidate 161c in the marker search range is effective when the movement amount within the photographing interval is inversely proportional to the distance. For example, it is effective in the following cases.
・When considering the amount of change in the position of the marker on the preceding vehicle (lateral position deviation, yaw angle change, pitch angle change).
・When the host vehicle has translational vibration (lateral position deviation, etc.).
・When a coefficient margin should be provided.
[0162]
In practice, the larger of the two values (the side that needs the margin) is adopted (the max value). Considering these factors, the second candidate 161d is effectively used when the marker is far away, and the first candidate 161c when it is near.
[0163]
In addition, n, m, s, and r can be specified externally as parameters, so the method can accommodate various traveling courses. They can also be changed dynamically in consideration of the shape of the course ahead at each moment.
[0164]
In determining the above settings, it is necessary to judge the following comprehensively:
・Road alignment (vehicle movement along the road)
・Blur of the imaging means (camera)
・Vehicle vibration
・Margin
[0165]
For example, the component of the vertical / horizontal change resulting from the above factors is defined as follows.
First, the estimated positional deviations of the marker pattern caused by the various factors are denoted as road alignment (x11, y11), camera shake (x12, y12), vehicle vibration (x13, y13), roll (x14, y14), and margin (x15, y15). The maximum horizontal and vertical change components are then set to the following values.
[0166]
[Expression 12]
Maximum lateral change = x11 + x12 + x13 + x14 + x15 (12)
[0167]
[Formula 13]
Maximum value of longitudinal change = y11 + y12 + y13 + y14 + y15 (13)
In general, the left/right curves of a road alignment are tighter than its vertical changes (x11 > y11), and the camera 21 blurs more laterally (x12 > y12 when the horizontal turntable of the camera 21 is used). A search range that is wide in the horizontal direction is therefore set. This eliminates the need to search an unnecessary vertical range, which is very effective in preventing misrecognition of sunlight and the like, and an improvement in processing speed can also be expected.
[0168]
[Small region full search]
Next, the small area full search will be described. In the small area full search process, the coordinate range in which the marker pattern exists is limited / estimated from the marker pattern coordinate information at the previous time, and the marker and the marker pattern are extracted within the limited range.
[0169]
FIG. 27 is a diagram illustrating a search range of the small region full search process. In the image 170, the search range 176 is determined according to the positions of the marker images 171a, 172a, 173a, 174a, and 175a that form the marker pattern of the previous time. By performing a full search in the search range 176, the marker images 171 to 175 at the current time are detected, and the marker pattern at the current time is recognized.
[0170]
FIG. 28 is a diagram illustrating an example of a screen when a fake marker exists. In the image 180 of FIG. 28, genuine marker images 181 to 185 are shown in the search range 186 of the small region at the current time. Outside the search range 186, fake marker images 187a to 187d are shown.
[0171]
The control unit 24 calculates the position and size of the marker pattern from the coordinates (reference position) of the marker image constituting the marker pattern at the previous time. Then, the control unit 24 determines the marker pattern search range in consideration of the expected movement range. The control unit 24 performs marker detection and pattern detection by performing full search processing on the limited marker pattern search range.
[0172]
Thereby, it is possible to increase the speed by limiting the processing target. In addition, it is possible to prevent false recognition of a false marker image (a high-intensity region other than the marker) generated due to various factors as shown in FIG.
[0173]
Here, a calculation method of the search range in the small area full search will be described.
FIG. 29 is a diagram for explaining the search range calculation method in the small region full search. Based on the marker images 191 to 195 detected in the immediately preceding frame image 190, the control unit 24 determines the minimum area 196 that encloses the marker pattern formed by the marker images 191 to 195. The center of the minimum area 196 in both the x and y directions is set as the center position P0 (x0, y0) of the marker pattern.
[0174]
Here, with the center position P0 (x0, y0) of the marker pattern as the reference, the range from (x0 - p1, y0 - q1) to (x0 + p1, y0 + q1) is set as the marker pattern search range.
[0175]
Here, if the width of the marker pattern at the previous time is dx and its height is dy, let x4 = dx / 2 and y4 = dy / 2.
The search width (p1, q1) is then obtained as follows.
[0176]
[Expression 14]
p1 = max (x4 × n1, x4 + m1) (14)
[0177]
[Expression 15]
q1 = max (y4 × r1, y4 + s1) (15)
n1 and r1 are real numbers of 1 or more, and m1 and s1 are real numbers of 0 or more. The maximum of the two values is taken in order to cover everything from short to long distance.
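The center position P0 and equations (14) and (15) can be combined into the following sketch (the bounding box representation and parameter values are assumptions):

import numpy as np

def pattern_search_range(boxes, n1=1.5, r1=1.5, m1=10.0, s1=10.0):
    # boxes: (xmin, ymin, xmax, ymax) of each previous-time marker image.
    b = np.asarray(boxes, dtype=float)
    xmin, ymin = b[:, 0].min(), b[:, 1].min()
    xmax, ymax = b[:, 2].max(), b[:, 3].max()
    x0, y0 = (xmin + xmax) / 2, (ymin + ymax) / 2   # center P0 (x0, y0)
    x4, y4 = (xmax - xmin) / 2, (ymax - ymin) / 2   # x4 = dx/2, y4 = dy/2
    p1 = max(x4 * n1, x4 + m1)                      # equation (14)
    q1 = max(y4 * r1, y4 + s1)                      # equation (15)
    return (x0 - p1, y0 - q1, x0 + p1, y0 + q1)

print(pattern_search_range([(100, 90, 110, 100), (140, 92, 150, 102)]))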
[0178]
The addition of variables such as “+ m1” and “+ s1” is effective when the marker pattern moves on the image in a fixed direction within the imaging interval as seen from the camera 21. For example, it is effective in the following cases.
・When the optical axis of the vehicle's camera vibrates rotationally (vehicle vibration (yaw angle, pitch angle), steering mechanism shake, camera jig shake).
・When the preceding vehicle position moves on the image due to the road alignment (the angle need not be exactly constant).
・When an offset margin should be provided.
[0179]
On the other hand, multiplying by variables such as “× n1” and “× r1” is effective when the amount of movement within the imaging interval is inversely proportional to the distance. For example, it is effective in the following cases.
・When considering the amount of change in the position of the marker on the preceding vehicle (lateral position deviation, yaw angle change, pitch angle change).
・When the host vehicle has translational vibration (lateral position deviation, etc.).
・When a coefficient margin should be provided.
[0180]
Of these values, the side that requires the margin is adopted (the max value). In effect, the additive form is used when the marker is far away and the multiplicative form when it is near.
Moreover, n1, m1, s1, and r1 can be specified externally as parameters, allowing adaptation to various traveling courses. They can also be changed dynamically in consideration of the shape of the course ahead at each moment.
[0181]
In determining the set value, it is necessary to comprehensively determine road alignment, imaging means (camera) blurring, vehicle vibration, margin, and the like.
For example, the component of the vertical / horizontal change resulting from the above factors is defined as follows.
Road alignment (x21, y21)
Image blur (x22, y22)
Vehicle vibration (x23, y23)
Roll (x24, y24)
Margin (x25, y25)
At this time, the maximum vertical / horizontal change component is set as the following value.
[0182]
[Expression 16]
Maximum lateral change = x21 + x22 + x23 + x24 + x25 (16)
[0183]
[Expression 17]
Maximum longitudinal change = y21 + y22 + y23 + y24 + y25 (17)
In general, the left/right curves of a road alignment are tighter than its vertical changes (x21 > y21), and the imaging means blurs more laterally when the camera's horizontal turntable is used (x22 > y22). A search range that is wide in the horizontal direction is therefore set. This eliminates the need to search an unnecessary vertical range, which is very effective in preventing misrecognition of sunlight and the like, and an improvement in processing speed can also be expected.
[0184]
[Tracking (position prediction)]
Next, the position prediction process will be described. The control unit 24 estimates the coordinate information of the marker and marker pattern at the next time from the past marker coordinate information and marker pattern coordinate information, and limits the coordinate range to be extracted.
[0185]
FIG. 30 is a diagram for explaining the process of predicting the position of the marker image. The amount of movement from the position of the marker image 211 in the previous image to the position of the marker image 212 in the current image is calculated. The amount of movement can be represented by a movement vector 214, for example.
[0186]
The control unit 24 predicts that the next marker image 213 will appear at the position reached by moving again by the same amount as from the previous position to the current position. For example, assuming a movement vector 215 with the same length and direction as the movement vector 214, the position displaced by the movement vector 215 from the current position is set as the next predicted position. That is, in the next search, the position of the marker image is predicted from its positions at the previous two times.
[0187]
The previous and current marker image positions are calculated, for example, as the luminance-weighted center of gravity. Using the predicted position of the next marker image as the reference position of the search range in marker tracking further improves the certainty of finding the marker.
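A minimal sketch of this constant-velocity prediction (the coordinates are illustrative):

def predict_next(prev, curr):
    # Assume the marker image moves by the same vector (215) in the next
    # frame as it did between the previous two frames (vector 214).
    dx, dy = curr[0] - prev[0], curr[1] - prev[1]
    return curr[0] + dx, curr[1] + dy

# The marker moved by (+4, -2); predict the same step again.
print(predict_next((100.0, 80.0), (104.0, 78.0)))   # (108.0, 76.0)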
[0188]
This approach is particularly effective when the assumption of constant-velocity movement of the object (marker and marker pattern) holds. When applying the position prediction processing, it is necessary to consider characteristics such as the traveling course, the imaging means, the host vehicle behavior, and the preceding vehicle behavior.
[0189]
[Handling separation of light emitting elements (pixel fusion processing)]
When a self-luminous marker is composed of a plurality of light emitting elements, the individual elements may be detected as separate regions when the vehicle approaches. In this case, the control unit 24 obtains a marker candidate region by fusing the separated pixels on the image.
[0190]
FIG. 31 is a diagram illustrating the image when a marker composed of a plurality of light emitting elements is viewed at close range. As shown in FIG. 31, when the marker is composed of a plurality of LEDs and the distance between the marker and the camera becomes too small, one marker image separates and appears as a set of several high-luminance areas. If the labeling process is performed as is, each separated high-luminance region becomes a marker candidate.
[0191]
To suppress this separation, the control unit 24 could apply ordinary image processing such as region dilation, image reduction, or blurring, but these take time.
Instead, during labeling the control unit 24 fuses high-luminance pixels that are not adjacent but lie near one another. This prevents erroneous recognition of the marker while keeping the processing fast.
[0192]
In the present embodiment, pixel fusion in the horizontal direction is performed using runs, which appear in the course of labeling. A run is information used when a region is treated as a collection of horizontal lines in order to reduce the storage required for a binarized image: for each horizontal line of a high-luminance region, only the coordinates of its start point (left end) and end point (right end) are stored, and this horizontal line is called a run. A run thus represents the pixels connected in the horizontal direction on one line of the binary image, and its information consists of a start x coordinate, an end x coordinate, and a y coordinate.
[0193]
In the horizontal direction, if two runs have the same y coordinate and the gap between the end x coordinate of one run and the start x coordinate of the next is small (the threshold is specified by a parameter), the runs are treated as connected. Similarly, in the vertical direction, if the x coordinates match and the gap between the end y coordinate of one run and the start y coordinate of the next is small (specified by a parameter), the runs are treated as connected.
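A sketch of the horizontal run fusion under these rules, assuming a run is represented as (y, x_start, x_end) and a hypothetical gap parameter:

def fuse_runs(runs, max_gap=2):
    # Runs on the same line are treated as connected when the number of
    # pixels strictly between the end of one run and the start of the
    # next is at most max_gap, so the separated images of individual
    # light emitting elements merge into one high-luminance region.
    fused = []
    for y, xs, xe in sorted(runs):
        if fused and fused[-1][0] == y and xs - fused[-1][2] - 1 <= max_gap:
            fused[-1][2] = max(fused[-1][2], xe)
        else:
            fused.append([y, xs, xe])
    return [tuple(r) for r in fused]

# Two LED images on line y = 5 separated by a 2-pixel gap become one run.
print(fuse_runs([(5, 10, 14), (5, 17, 21)]))   # [(5, 10, 21)]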
[0194]
FIG. 32 is a diagram illustrating an example of combining light emitting element images. FIG. 32A shows the binary image before the light emitting element images are combined, and FIG. 32B shows the binary image after combining.
[0195]
As described above, since the interval between the images (high luminance regions) for each light emitting element is narrow, the combination processing is performed to form one high luminance region. That is, it is handled as one marker image.
[0196]
In setting the parameter for the specified interval used in pixel fusion, it is necessary to consider the characteristics of the light emitting elements, their arrangement interval, and so on. In general, the light emitting elements separate when the self-luminous marker and the imaging means are close to each other, so it is important to be able to decide, depending on the situation, whether to apply this processing. For example, it may be applied only at close range, when the distance is unknown, when close or unknown, always, or never; this makes it possible to respond to a wide range of operating conditions such as light emitting element characteristics, driving course, and weather.
[0197]
[Distance mode]
Next, the distance mode will be described. The control unit 24 can switch each function based on the distance to the target at the previous time. For example, the control unit 24 divides the state into three patterns of long, medium, and short distance according to the distance of the previous detection result, and switches the processing method. By dynamically switching to the optimal processing method for the distance, erroneous recognition of markers and marker patterns can be prevented in each distance range, and distance ranges that could not be covered with a single setting can be handled.
[0198]
The boundaries between long, medium, and short distance are specified by parameters. The parameter settings can be adapted to a wide variety of usage forms by comprehensively considering the following:
・Marker and marker pattern size
・ Characteristics of light emitting element of marker
・ Road alignment (up / down / left / right curve)
・ Vehicle vibration characteristics
・ Behavior characteristics of imaging devices
a) Long distance
At long distance, the control unit 24 performs a small region full search as the marker pattern search method. The small region full search limits the search range and performs a full search within it. In this mode the search range is limited, the processing speed can be increased, and markers that move quickly on the image can still be followed.
[0199]
In the per-marker tracking process, a search is performed for each marker, so it is not appropriate when all five markers move at high speed. The small region full search, by contrast, can cope with the entire marker pattern moving greatly, and can therefore handle the case where the target is far away and the imaging device shakes.
[0200]
However, since the processing takes time when the target is close and the search range is large, the small region full search is used at long distance, where the whole marker pattern appears small.
[0201]
b) Medium distance
At medium distance, the control unit 24 performs per-marker tracking as the marker pattern search method. Since the distance is moderately close, camera shake has optically less influence than at long distance.
[0202]
In the small region full search, the marker pattern search range is limited; when the distance is not long, however, that range widens until eventually the entire image becomes the search range, which is inefficient. Searching an unnecessarily wide range also invites misrecognition.
[0203]
Therefore, at moderately close distances the markers appear large, so searching for each marker individually is more efficient, the overall search range becomes narrower, and the possibility of erroneous recognition is lower.
[0204]
In addition, since the marker is known to appear large, each marker can still be found even if the image is thinned out (subsampled) during the per-marker search.
c) Short distance
At short distance, the control unit 24 performs per-marker tracking as the marker pattern search method. Since the marker is known to appear large, obtaining the distance from the marker interval and marker size is permitted even when only two markers, or a single marker, are visible. This makes it possible to calculate the distance even when no more than two markers are within the angle of view. At distances other than short range, three or more markers are within the angle of view and the risk of erroneously detecting sunlight or the like increases, so it is desirable not to use this method there.
[0205]
Further, at short distance, the pixel fusion processing is performed in consideration of the separation of the light emitting elements.
Furthermore, the exposure of the imaging means (camera 21) is increased only at short distance. Misrecognition due to separation of the light emitting elements can thereby be prevented while minimizing misrecognition of disturbances such as sunlight.
[0206]
In the present embodiment, the processing is divided into three patterns: long, medium, and short distance. The switching by distance may also be subdivided further, making the processing better matched to each distance range and thereby preventing erroneous recognition and improving the recognition rate.
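A sketch of this mode switching (the distance thresholds are parameter-specified placeholders, not values given in the text):

def select_search_mode(prev_distance_m, far=40.0, near=10.0):
    # Switch the search method on the distance from the previous
    # detection result: small region full search at long range,
    # per-marker tracking otherwise; at short range, pixel fusion and
    # raised exposure are applied in addition.
    if prev_distance_m >= far:
        return "small region full search"
    if prev_distance_m <= near:
        return "per-marker tracking + pixel fusion + raised exposure"
    return "per-marker tracking"

print(select_search_mode(60.0))   # long distance
print(select_search_mode(5.0))    # short distance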
[0207]
[Limited processing range]
The control unit 24 can also change the search range of the marker pattern according to the distance to the target (as a function of the distance).
[0208]
FIG. 33 is a diagram illustrating the limited search range when the target is at long distance. When the distance to the tracked vehicle is long, the image 220 sent from the camera 21 may contain, besides the marker image 221 of the marker installed on the tracked vehicle, a cloud image 222 and a reflected image 223 of the marker in a puddle. In addition to clouds, the sun can appear at the top of the image 220. These images cause misrecognition of the marker.
[0209]
Therefore, the control unit 24 excludes a predetermined range at the top and a predetermined range at the bottom from the marker pattern search range, so the search range 224 is the central portion of the image 220 excluding its top and bottom. Misrecognition factors are thereby removed and false detection is prevented.
[0210]
Note that the range in which the marker pattern can exist may also be limited in the horizontal direction based on the course shape (for example, when there are no sharp curves).
FIG. 34 is a diagram illustrating the limited search range when the target is at short distance. When the distance to the tracked vehicle is short, the marker images 231 constituting the marker pattern appear large on the image 230 sent from the camera 21. In this case, the search range 232 is widened vertically according to the distance of the previous detection result.
[0211]
That is, the search range is wide at a short distance and narrow at a long distance. As a result, the authentic marker image does not deviate from the search range at a short distance.
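This distance-dependent widening can be sketched as follows (the inverse scaling and all constants are assumptions for illustration):

def vertical_search_band(distance_m, img_h=480, k=2000.0, min_half=40):
    # The half-height of the vertically centered search band grows as
    # the distance shrinks, excluding sky (sun, clouds) at the top and
    # road reflections at the bottom when the target is far away.
    half = max(min_half, int(k / max(distance_m, 1.0)))
    center = img_h // 2
    return max(0, center - half), min(img_h, center + half)

print(vertical_search_band(50.0))   # narrow band for a distant target
print(vertical_search_band(5.0))    # nearly the full image when close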
[0212]
[Limit search range according to position]
In IMTS, magnetic lane markers are laid at regular intervals in the traveling lane, and each vehicle can determine its own current position by detecting them. In the present embodiment, therefore, the search range for the markers of the preceding vehicle is limited according to the vehicle's current position.
[0213]
FIG. 35 is a functional block diagram illustrating a processing function for performing a search range restriction according to a position.
The magnetic lane marker detection unit 301 detects a magnetic lane marker laid on the traveling lane. The magnetic lane marker detection unit 301 is a function of the sensor 26 shown in FIG.
[0214]
The position information acquisition unit 302 acquires current position information based on the magnetic lane marker information detected by the magnetic lane marker detection unit 301. The search range extraction unit 303 extracts a search range for marker images or a marker pattern based on the position information acquired by the position information acquisition unit 302. For example, if the current position and the map information of the travel lane indicate that the vehicle is traveling in the middle of a long straight section, a search range limited on the left and right is extracted. The search range extraction unit 303 passes the extracted search range to the marker extraction unit 305 and the marker pattern extraction unit 306.
[0215]
The imaging unit 304 captures a video of the preceding vehicle and generates an image. The imaging unit 304 is a function configured by the camera 21 and the image processing unit 23 illustrated in FIG.
The marker extraction unit 305 receives an image captured from the imaging unit 304 and extracts a marker image from the search range passed from the search range extraction unit 303 in the image.
[0216]
The marker pattern extraction unit 306 analyzes the marker images extracted by the marker extraction unit 305 and extracts a marker pattern. When the marker extraction unit 305 does not limit the search range, the marker pattern extraction unit 306 extracts a marker pattern only from the marker images within the search range passed from the search range extraction unit 303.
[0217]
The relative position calculation unit 307 calculates the relative position of the preceding vehicle based on the marker pattern extracted by the marker pattern extraction unit 306.
As a result, the tracking vehicle detects the magnetic lane markers, determines its current position, and compares that position with map information of the travel lane, so the search range is determined accurately. A marker pattern is then detected within the determined search range.
[0218]
This narrows the search range and increases the processing speed. In addition, since the search range can be narrowed down accurately, it is possible to prevent noise from being mistakenly recognized as a marker.
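A sketch of the search range extraction unit 303 (the map format, position IDs, and pixel widths are hypothetical):

def search_range_from_position(position_id, course_map, img_w=640):
    # Look up the course shape at the current position (known from the
    # magnetic lane markers) and narrow the horizontal search range
    # accordingly; unknown or sharply curved sections get no limit.
    widths = {"straight": 80, "gentle curve": 160}
    shape = course_map.get(position_id, "unknown")
    center = img_w // 2
    if shape in widths:
        return center - widths[shape], center + widths[shape]
    return 0, img_w

course_map = {101: "straight", 102: "gentle curve"}
print(search_range_from_position(101, course_map))   # (240, 400)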
[0219]
Position information can also be acquired by GPS (Global Positioning System). In that case, the position obtained by GPS can be corrected using the distance traveled and the steering angle.
[0220]
[Rotation control of imaging device]
In the above example, the camera 21 serving as the imaging device was described as fixed, but the camera 21 can also be rotated according to the relative positional relationship with the preceding vehicle. The rotation method of the camera 21 is described below.
[0221]
FIG. 36 is a top view of a camera having a rotation mechanism. As shown in FIG. 36, at the head of the vehicle 20, the camera 21 is installed on the turntable 27a. The turntable 27 a has a built-in motor, and the motor rotates according to a signal from the control unit 24 to rotate the camera 21. That is, the turntable 27 a is a part of the actuator 27.
[0222]
The rotation angle of the turntable 27a can be controlled according to the absolute position on the travel lane. A control method for performing such control will be described below.
FIG. 37 is a functional block diagram illustrating a rotation control function of the imaging apparatus. In FIG. 37, the magnetic lane marker detection unit 301 and the position information acquisition unit 302 are the same as the components having the same names shown in FIG. The magnetic lane marker detection unit 301 and the position information acquisition unit 302 acquire lane marker information using a magnetic lane marker provided on the lane. The lane marker information is associated with a two-dimensional or three-dimensional coordinate map by a predetermined road alignment.
[0223]
FIG. 38 is a diagram showing a correspondence table between lane markers and coordinates. The lane markers are numbered sequentially from 001. Each lane marker is associated with an X coordinate, a Y coordinate, and the absolute angle of the road alignment tangent at that point (hereinafter simply called the absolute angle). For example, the k-th (k is a natural number) lane marker has X coordinate X(k), Y coordinate Y(k), and absolute angle φ(k); the u-th (u is a natural number) lane marker has X coordinate X(u), Y coordinate Y(u), and absolute angle φ(u).
[0224]
Returning to FIG. 37, the target marker distance setting storage unit 311 stores the distance to be maintained to the marker installed on the preceding vehicle. The control unit 24 controls the accelerator, the brake, and the like so as to keep the inter-vehicle distance at the set distance.
[0225]
The target marker distance calculation unit 312 calculates the distance d to the marker installed on the preceding vehicle.
The imaging device rotation angle calculation unit 313 calculates the rotation angle of the turntable 27a on which the camera is installed, using the position information from the position information acquisition unit 302 together with either the set distance from the target marker distance setting storage unit 311 or the distance d calculated by the target marker distance calculation unit 312.
[0226]
A method for calculating the rotation angle of the imaging device having such a configuration will be described.
FIG. 39 is a view of a vehicle traveling in a row as viewed from above. In FIG. 39, the horizontal direction in the figure is the X axis, and the vertical direction is the Y axis.
[0227]
The preceding vehicle 10 and the following vehicle 20 are traveling along the traveling lane 320. A number of magnetic lane markers 330 are laid on the traveling lane 320. In the example of FIG. 39, the preceding vehicle 10 is traveling in a place where the u-th lane marker 332 of the traveling lane 320 is laid. The following vehicle 20 is traveling away from the vehicle 10 by a distance d in a place where the kth lane marker 331 of the traveling lane 320 is laid.
[0228]
FIG. 40 is an enlarged view of FIG. In FIG. 40, as an auxiliary line, a tangent line 341 of a road alignment (a line that smoothly connects the marker positions) in the k-th lane marker 331 laid at the position of the vehicle 20, an axis 342 in the front-rear direction of the vehicle 20, 10, a longitudinal axis 343 and a straight line 344 connecting the camera 21 and the marker panel 11 are shown.
[0229]
Here, the angle formed by the tangent line 341 and the axis 342 is φ1, the angle formed by the tangent line 341 and the axis 343 is φ2, and the angle formed by the axis 342 and the straight line 344 is defined as φa.
Assume that the host vehicle 20 detects the k-th lane marker 331 and obtains the distance d to the preceding vehicle 10 from the target marker distance setting storage unit 311 or the target marker distance calculation unit 312. If the vehicle length is d1 and the lane marker spacing is d2, u can be obtained by equation (18).
[0230]
[Formula 18]
u = k + (d + d1) / d2 (18)
When the lane marker spacing is not uniform, u can instead be obtained by accumulating the distances between successive lane markers.
[0231]
In calculating the rotation angle of the imaging device, it is necessary to consider the following.
1. Absolute angles of the road alignment tangents (φ1k, φ1u)
2. Vehicle yaw angles relative to the road alignment tangents due to vehicle behavior (φ2k, φ2u)
3. Lateral displacements of the vehicles with respect to the road (x1k, y1k), (x1u, y1u)
4. Imaging means position (dxc, dyc) relative to the lane marker sensor in the vehicle
5. Marker position (dxm, dym) relative to the lane marker sensor in the vehicle
The procedure for calculating the rotation angle of the imaging apparatus is as follows. General geometric principles are used to calculate the various angles.
[0232]
FIG. 41 is a flowchart illustrating a procedure of imaging rotation angle calculation processing. In the following, the process illustrated in FIG. 41 will be described in order of step number.
[Step S101] The position information acquisition unit 302 acquires the detected lane marker number from the magnetic lane marker. Based on the correspondence table between lane markers and coordinates shown in FIG. 38, it then obtains the road positions (X(k), Y(k)) and (X(u), Y(u)). The position information acquisition unit 302 further acquires from the magnetic lane marker detection unit 301 the lateral position deviations (x1k, y1k) and (x1u, y1u) of the vehicles with respect to the road (longitudinal and lateral positions referenced to the lane marker sensor).
[0233]
[Step S102] The imaging device rotation angle calculation unit 313 determines the vehicle angles from the absolute angles (φ1k, φ1u) of the road alignment tangents and the vehicle yaw angles (φ2k, φ2u) relative to those tangents due to vehicle behavior.
[0234]
[Equation 19]
φ1 = φ1k + φ2k (19)
[0235]
[Expression 20]
φ2 = φ1u + φ2u (20)
[Step S103] The imaging device rotation angle calculation unit 313 calculates the absolute imaging means position (xc, yc) from the imaging means position (dxc, dyc) relative to the lane marker sensor in the vehicle and the results of steps S101 and S102.
[0236]
[Step S104] The imaging device rotation angle calculation unit 313 calculates the absolute marker position (xm, ym) from the marker position (dxm, dym) relative to the lane marker sensor in the vehicle and the results of steps S101 and S102.
[0237]
[Step S105] The imaging device rotation angle calculation unit 313 calculates the absolute angle of the marker from the imaging unit using the following equation.
[0238]
[Expression 21]
φa = tan⁻¹((ym − yc) / (xm − xc)) (21)
[Step S106] The imaging device rotation angle calculation unit 313 calculates the rotation angle of the camera.
[0239]
[Expression 22]
φc = φa − φ1 (22)
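Equations (18) through (22) can be gathered into the following sketch (atan2 stands in for tan⁻¹ so the quadrant is handled correctly; the rounding in the lane marker index and all sample values are assumptions):

import math

def lane_marker_index(k, d, d1, d2):
    # Equation (18): index u of the lane marker under the preceding
    # vehicle, from own index k, inter-vehicle distance d, vehicle
    # length d1, and uniform lane marker spacing d2.
    return k + round((d + d1) / d2)

def camera_rotation_angle(phi1k, phi2k, xc, yc, xm, ym):
    phi1 = phi1k + phi2k                 # equation (19): vehicle angle
    phia = math.atan2(ym - yc, xm - xc)  # equation (21): marker bearing
    return phia - phi1                   # equation (22): rotation angle

print(lane_marker_index(k=100, d=18.0, d1=10.0, d2=2.0))              # 114
# Camera at the origin heading along +x; marker 20 m ahead, 1 m left.
print(math.degrees(camera_rotation_angle(0.0, 0.0, 0.0, 0.0, 20.0, 1.0)))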
[0240]
As described above, the rotation angle of the camera can be calculated. By further interpolating this calculation, which is discrete at the lane marker spacing, with wheel pulses or the like, the imaging apparatus can be rotated smoothly and with fewer errors. For example, by linearly interpolating the road curvature with wheel pulses, the error and discontinuity in transition curve sections (where the tightness of the curve changes with distance) can be reduced.
[0241]
The imaging device rotation angle calculation unit 313 transmits the angle information of the value φc to the turntable 27a. The turntable 27a rotates the camera 21 based on the received angle information. As a result, the target marker can be tracked by the camera 21.
[0242]
In this way, actively keeping the target marker within the field of view makes the system less susceptible to disturbance noise other than the target marker and allows a telephoto lens to be used, so a long-distance-capable system can be realized.
[0243]
Further, the relative angle from the marker to the camera 21 can be obtained from the absolute value of the difference between φa and φ2. By increasing the exposure of the imaging means only when this angle is large, detection on small-radius curves can be ensured while minimizing misrecognition of disturbances such as sunlight.
[0244]
In the above embodiment, the measurement of the relative position with respect to the preceding vehicle is used for automatic driving of the following vehicle; however, the relative position measurement result may also be used for a driving assist function. For example, a warning can be issued to the driver when the inter-vehicle distance becomes short, or the driver can be informed that a corner lies ahead based on the appearance of an azimuth angle and yaw angle of the preceding vehicle.
[0245]
The above processing functions can be realized by a computer. In this case, a program describing the processing contents of the functions that the control device should have is provided. By executing the program on a computer, the above processing functions are realized on the computer. The program describing the processing contents can be recorded on a computer-readable recording medium. Examples of the computer-readable recording medium include a magnetic recording device, an optical disk, a magneto-optical recording medium, and a semiconductor memory. Examples of the magnetic recording device include a hard disk device (HDD), a flexible disk (FD), and a magnetic tape. Examples of the optical disc include a DVD (Digital Versatile Disc), a DVD-RAM (Random Access Memory), a CD-ROM (Compact Disc Read Only Memory), and a CD-R (Recordable) / RW (ReWritable). Magneto-optical recording media include MO (Magneto-Optical disc).
[0246]
When distributing the program, for example, portable recording media such as a DVD and a CD-ROM in which the program is recorded are sold. It is also possible to store the program in a storage device of a server computer and transfer the program from the server computer to another computer via a network.
[0247]
The computer that executes the program stores, for example, the program recorded on the portable recording medium or the program transferred from the server computer in its own storage device. Then, the computer reads the program from its own storage device and executes processing according to the program. The computer can also read the program directly from the portable recording medium and execute processing according to the program.
[0248]
(Supplementary Note 1) In a relative position measurement device that measures a relative position with a preceding vehicle by image analysis,
Imaging means for imaging a plurality of markers arranged in a plurality of axial directions at a rear position of the preceding vehicle, and generating a frame image;
Extracting means for extracting a marker pattern composed of a plurality of marker images representing the plurality of markers from the frame image generated by the imaging means;
Calculation means for calculating a relative distance and a relative angle with the preceding vehicle based on the marker pattern extracted by the extraction means;
A relative position measuring device comprising:
[0249]
(Supplementary Note 2) The relative position measuring device according to Supplementary Note 1, wherein the calculation means calculates the relative distance to the preceding vehicle based on the distance between two marker images in the frame image.
[0250]
(Supplementary Note 3) The relative position measuring device according to Supplementary Note 1, wherein the calculation means obtains the reference position of the marker pattern from the positions of the plurality of marker images in the frame image and calculates the relative angle based on the reference position.
[0251]
(Supplementary Note 4) The relative position measuring device according to Supplementary Note 1, wherein the calculation means obtains the reference position of the marker pattern from the positions of the plurality of marker images in the frame image, and calculates the relative yaw angle with respect to the preceding vehicle based on the distance in the frame image between the reference position of the marker pattern and the position of another marker image having a difference in the depth direction with respect to the plurality of marker images.
[0252]
(Supplementary Note 5) The relative position measuring device according to Supplementary Note 1, wherein the calculation means weights each pixel constituting the marker image based on the luminance distribution information of the marker image, and takes the position of the center of gravity of the marker image constituted by those pixels as the position of the marker image.
[0253]
(Supplementary Note 6) The relative position measuring device according to Supplementary Note 1, wherein the extraction means defines a rectangle enclosing a high-luminance region that is a marker image candidate, calculates the filling rate, which is the proportion of the rectangle occupied by the high-luminance region, and extracts the high-luminance region as the marker image only when the filling rate is equal to or greater than a predetermined value.
[0254]
(Supplementary Note 7) The relative position measuring device according to Supplementary Note 1, wherein the extraction means extracts a high-luminance region that is a marker image candidate as the marker image only when the aspect ratio of the region is within a predetermined range.
[0255]
(Supplementary Note 8) The relative position measuring device according to Supplementary Note 1, wherein the extraction means limits the search range of each marker image at the current time in the frame image based on the position of each marker image at the previous time constituting the marker pattern at the previous time, and searches for each marker image at the current time within the limited search range.
[0256]
(Supplementary Note 9) The relative position measuring device according to Supplementary Note 8, wherein the extraction means estimates the position of each marker image at the current time based on the movement direction and movement amount of each marker image at the previous time constituting the marker pattern at the previous time, and sets a range based on the estimated position as the search range of each marker image.
[0257]
(Supplementary Note 10) The relative position measuring device according to Supplementary Note 1, wherein the extraction means limits the search range of the marker pattern at the current time in the frame image based on the position of the marker pattern at the previous time, and searches for the marker pattern at the current time within the limited search range.
[0258]
(Supplementary Note 11) The relative position measuring device according to Supplementary Note 1, wherein, when the marker is an aggregate of a plurality of self-luminous elements and the relative distance to the preceding vehicle is equal to or less than a predetermined value, the extraction means recognizes neighboring high-luminance regions as the same high-luminance region.
[0259]
(Supplementary Note 12) The relative position measuring device according to Supplementary Note 1, wherein the extraction means has a plurality of functions for limiting the search range of a marker image or a marker pattern, and selects the search range limiting function according to the relative distance to the preceding vehicle.
[0260]
(Supplementary Note 13) The relative position measuring device according to Supplementary Note 1, wherein the extraction means changes the search range of the marker pattern according to the relative distance to the preceding vehicle.
[0261]
(Supplementary Note 14) The relative position measuring device according to Supplementary Note 1, further comprising position acquisition means for acquiring its own absolute position on a predetermined traveling course, wherein the search range of the marker pattern is limited according to the shape of the traveling course near the absolute position.
[0262]
(Supplementary Note 15) The relative position measuring device according to Supplementary Note 1, further comprising imaging direction control means for directing the imaging direction of the imaging means toward the preceding vehicle based on the relative angle calculated by the calculation means.
[0263]
(Supplementary Note 16) In a relative position measurement method for measuring a relative position with a preceding vehicle by image analysis,
A plurality of markers arranged in a plurality of axial directions at the rear of the preceding vehicle are imaged to generate a frame image,
a marker pattern composed of a plurality of marker images representing the plurality of markers is extracted from the frame image, and
a relative distance and a relative angle with the preceding vehicle are calculated based on the marker pattern.
A relative position measuring method characterized by the above.
[0264]
(Supplementary Note 17) In a program for measuring a relative position with a preceding vehicle by image analysis,
The computer,
Imaging means for imaging a plurality of markers arranged in a plurality of axial directions at the position of the rear portion of the preceding vehicle, and generating a frame image;
Extraction means for extracting a marker pattern composed of a plurality of marker images representing the plurality of markers from the frame image generated by the imaging means;
Calculation means for calculating at least a relative distance or a relative angle with the preceding vehicle based on the marker pattern extracted by the extraction means;
A program characterized by causing the computer to function as the above means.
[0265]
(Supplementary note 18) In a computer-readable recording medium in which a program for measuring a relative position with respect to a preceding vehicle is recorded by image analysis,
The computer,
Imaging means for imaging a plurality of markers arranged in a plurality of axial directions at the position of the rear portion of the preceding vehicle, and generating a frame image;
Extraction means for extracting a marker pattern composed of a plurality of marker images representing the plurality of markers from the frame image generated by the imaging means;
Calculation means for calculating at least a relative distance or a relative angle with the preceding vehicle based on the marker pattern extracted by the extraction means;
A recording medium characterized in that the recorded program causes the computer to function as the above means.
[0266]
(Supplementary Note 19) In a transport vehicle that travels in a row while maintaining a relative positional relationship with a preceding vehicle,
A camera that captures a plurality of markers arranged in a plurality of axial directions at a rear position of the preceding vehicle, and generates a frame image;
A marker pattern composed of a plurality of marker images representing the plurality of markers is extracted from the frame image generated by the camera, and at least a relative distance or a relative angle with the preceding vehicle based on the extracted marker pattern A control device for calculating
A transport vehicle characterized by comprising:
[0267]
[Effects of the Invention]
As described above, in the present invention a marker pattern is detected from a frame image obtained by imaging a plurality of markers arranged in a plurality of axial directions at the rear of a preceding vehicle, and the relative position is calculated based on that marker pattern. The relative position can therefore be calculated with high accuracy without storing a large number of templates, and as a result a highly accurate relative position can be computed at high speed.
[Brief description of the drawings]
FIG. 1 is a principle configuration diagram of the present invention.
FIG. 2 is a diagram showing an outline of platooning by IMTS.
FIG. 3 is a view of a preceding vehicle as viewed from the rear.
FIG. 4 is a diagram showing the arrangement of markers. FIG. 4A is a front view of the marker panel. FIG. 4B is a cross-sectional view taken along line X-X in FIG. 4A.
FIG. 5 is a diagram illustrating an example of a frame image.
FIG. 6 is a diagram showing a relationship of relative positions of vehicles.
FIG. 7 is a block diagram showing a configuration of an automatic tracking function installed in a vehicle.
FIG. 8 is a flowchart showing a processing procedure of the entire relative position calculation processing.
FIG. 9 is a flowchart illustrating a procedure of a full search process.
FIG. 10 is a flowchart illustrating a procedure of marker pattern detection processing.
FIG. 11 is a flowchart showing a procedure of tracking processing.
FIG. 12 is a diagram showing an example of a frame image showing a rear part of a preceding vehicle.
FIG. 13 is a diagram illustrating an example of a frame image when four points excluding one point of a peripheral marker are found.
FIG. 14 is a diagram illustrating an example of a frame image when a roll angle occurs.
FIG. 15 is a diagram illustrating an example of a frame image when two marker images that are paired in the horizontal direction among peripheral markers are detected.
FIG. 16 is a first example of a frame image when a peripheral marker other than two marker images paired in the horizontal direction is detected.
FIG. 17 is a second example of a frame image when a peripheral marker other than the two marker images paired in the horizontal direction is detected.
FIG. 18 is an enlarged view of a marker image.
FIG. 19 is a diagram illustrating a luminance histogram of a marker image.
FIG. 20 is a diagram illustrating an example of the binarization processing of a marker image. FIG. 20A shows the luminance value of each pixel before binarization, and FIG. 20B shows the luminance value of each pixel after binarization.
FIG. 21 is a diagram illustrating a calculation example of the center of gravity of the luminance weight.
FIG. 22 is a diagram illustrating an example of a determination processing result based on an aspect ratio. FIG. 22A shows an image input from the camera, FIG. 22B shows a labeling result, and FIG. 22C shows a marker image candidate extracted after shape determination.
FIG. 23 is a diagram illustrating an example of filling rate determination.
FIG. 24 is a diagram illustrating examples of images judged abnormal by the filling-rate determination. FIG. 24A shows an example of a distorted shape, FIG. 24B shows an example of a cross-shaped image, and FIG. 24C shows an example of a triangular image.
FIG. 25 is a diagram illustrating a marker search range.
FIG. 26 is a diagram for explaining a search width calculation method; FIG. 26A shows a first candidate for the marker search range, and FIG. 26B shows a second candidate for the marker search range.
FIG. 27 is a diagram illustrating a search range of the small-area full search process.
FIG. 28 is a diagram illustrating an example of a screen when a fake marker exists.
FIG. 29 is a diagram for explaining a search range calculation method in the small-area full search.
FIG. 30 is a diagram illustrating a process for predicting the position of a marker image.
FIG. 31 is a diagram illustrating an image in which a marker composed of a plurality of light emitting elements is viewed at close range.
FIG. 32 is a diagram illustrating an example of a light emitting element image combining process. FIG. 32A shows the binary image before the light emitting element images are combined, and FIG. 32B shows the binary image after they are combined.
FIG. 33 is a diagram showing the search range as limited when the target is at a long distance.
FIG. 34 is a diagram showing the search range as limited when the target is at a short distance.
FIG. 35 is a functional block diagram showing the processing functions for limiting the search range according to position.
FIG. 36 is a top view of a camera having a rotation mechanism.
FIG. 37 is a functional block diagram illustrating a rotation control function of the imaging apparatus.
FIG. 38 is a diagram showing a correspondence table between lane markers and coordinates.
FIG. 39 is a top view of a vehicle traveling in a row.
FIG. 40 is an enlarged view of FIG. 39.
FIG. 41 is a flowchart illustrating a procedure of imaging rotation angle calculation processing.
[Explanation of symbols]
1 preceding vehicle
1a to 1d marker
2 Relative position measuring device
2a Imaging means
2b frame image
2c Extraction means
2d marker pattern
2e calculation means
10, 20, 30 Vehicle
11, 22, 32 Marker panel
11a-11e Marker
21,31 camera

Claims (9)

  1. In a relative position measuring device that measures a relative position with respect to a preceding vehicle by image analysis,
    Imaging means for imaging a plurality of markers arranged in a plurality of axial directions including an axis in the depth direction at a rear position of the preceding vehicle, and generating a frame image;
    Extracting means for extracting a marker pattern composed of a plurality of marker images representing the plurality of markers from the frame image generated by the imaging means;
    Calculating means for obtaining a reference position of the marker pattern from the positions of the plurality of marker images in the frame image, based on the marker pattern extracted by the extracting means, and for calculating a relative yaw angle with respect to the preceding vehicle based on the distance in the frame image between the reference position of the marker pattern and the position of the marker image that is offset in the depth direction from the other marker images;
    A relative position measuring device comprising:
  2. The relative position measuring apparatus according to claim 1, wherein the calculation means weights each pixel constituting the marker image based on luminance distribution information of the marker image, and takes the center of gravity of the weighted pixels as the position of the marker image (an illustrative sketch of this computation follows the claims).
  3. The relative position measuring apparatus according to claim 1, wherein the extraction means defines a rectangle surrounding a high-luminance region that is a candidate for the marker image, calculates a filling rate that is the ratio of the high-luminance region within the rectangle, and extracts the high-luminance region as the marker image only when the filling rate is equal to or greater than a predetermined value.
  4. The relative position measuring apparatus according to claim 1, wherein the extraction means extracts a high-luminance region that is a candidate for the marker image as the marker image only when the aspect ratio of the high-luminance region is within a predetermined ratio.
  5. The relative position measuring apparatus according to claim 1, wherein, when the marker is an aggregate of a plurality of self-luminous bodies and the relative distance to the preceding vehicle is equal to or less than a predetermined value, the extraction means recognizes adjacent high-luminance regions as a single high-luminance region.
  6. The relative position measuring apparatus according to claim 1, wherein the calculation means further calculates a relative distance from the preceding vehicle based on the marker pattern extracted by the extraction means,
    and the extraction means has a plurality of search range limiting functions for the marker images or the marker pattern and selects a search range limiting function according to the relative distance to the preceding vehicle (an illustrative search-window sketch follows the claims).
  7. The relative position measuring apparatus according to claim 1, wherein the calculation means further calculates a relative distance from the preceding vehicle based on the marker pattern extracted by the extraction means,
    and the extraction means changes the search range of the marker pattern according to the relative distance to the preceding vehicle.
  8. The relative position measuring apparatus according to claim 1, further comprising position acquisition means for acquiring its absolute position on a predetermined traveling course,
    wherein the extraction means limits the search range of the marker pattern according to the shape of the traveling course near the absolute position.
  9. The relative position measuring apparatus according to claim 1, wherein the calculating means further calculates an angle formed by a longitudinal axis of the vehicle on which the imaging means is mounted and a straight line connecting the imaging means and the marker,
    the apparatus further comprising photographing direction control means for directing the imaging direction of the imaging means toward the preceding vehicle based on the calculated angle.
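By way of illustration, the extraction tests of claims 2 to 4 can be sketched in a few lines of Python. This is a minimal sketch under stated assumptions, not the patented implementation: the threshold and limit values are invented, and scipy's connected-component labeling merely stands in for whatever labeling step the device actually performs.

import numpy as np
from scipy import ndimage

def extract_marker_positions(gray, threshold=200, max_aspect=2.0, min_fill=0.6):
    # Binarize the grayscale frame into high-luminance candidate regions.
    binary = gray >= threshold
    labels, _num = ndimage.label(binary)
    positions = []
    for region in ndimage.find_objects(labels):
        h = region[0].stop - region[0].start
        w = region[1].stop - region[1].start
        # Claim 4: reject a candidate whose bounding box is too elongated.
        if max(h, w) / min(h, w) > max_aspect:
            continue
        mask = binary[region]
        # Claim 3: filling rate = share of high-luminance pixels in the box.
        if mask.sum() / float(h * w) < min_fill:
            continue
        # Claim 2: luminance-weighted center of gravity as the marker position.
        patch = np.where(mask, gray[region], 0).astype(float)
        ys, xs = np.mgrid[region[0], region[1]]
        total = patch.sum()
        positions.append(((xs * patch).sum() / total, (ys * patch).sum() / total))
    return positions

The sub-pixel positions returned here would feed the reference-position and yaw computation of claim 1.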
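Claims 6 to 8 then limit where that extraction runs. The following sketch shows one plausible distance-dependent policy; the frame size and all window half-sizes are assumptions made for illustration, not values from the patent.

FRAME_W, FRAME_H = 640, 480   # assumed frame size in pixels

def search_window(last_xy, distance_m):
    # A distant preceding vehicle moves little in the image between frames,
    # so the search window around the last detected pattern can be small;
    # a close vehicle shows large apparent motion, so the window must grow.
    if distance_m > 20.0:
        half_w, half_h = 60, 40
    elif distance_m > 8.0:
        half_w, half_h = 120, 80
    else:
        half_w, half_h = 240, 160
    x, y = last_xy
    return (max(0, int(x - half_w)), max(0, int(y - half_h)),
            min(FRAME_W, int(x + half_w)), min(FRAME_H, int(y + half_h)))

Restricting the search this way is what lets the device avoid a full-frame search on every frame, which is the speed benefit stated in the effect section above.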
JP2001218425A 2001-07-18 2001-07-18 Relative position measuring device Expired - Fee Related JP4450532B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2001218425A JP4450532B2 (en) 2001-07-18 2001-07-18 Relative position measuring device

Publications (3)

Publication Number Publication Date
JP2003030628A JP2003030628A (en) 2003-01-31
JP2003030628K1 JP2003030628K1 (en) 2003-01-31
JP4450532B2 true JP4450532B2 (en) 2010-04-14

Family

ID=19052604

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2001218425A Expired - Fee Related JP4450532B2 (en) 2001-07-18 2001-07-18 Relative position measuring device

Country Status (1)

Country Link
JP (1) JP4450532B2 (en)

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7307736B2 (en) * 2004-03-31 2007-12-11 Mitutoyo Corporation Scale for use with a translation and orientation sensing system
JPWO2005106639A1 (en) * 2004-04-30 2008-03-21 株式会社ディー・ディー・エス Operation input device and operation input program
JP4771399B2 (en) * 2005-02-23 2011-09-14 本田技研工業株式会社 Vehicle recognition device
JP4548725B2 (en) * 2005-03-22 2010-09-22 本田技研工業株式会社 Recognized device for motorcycles
JP2007143748A (en) * 2005-11-25 2007-06-14 Sharp Corp Image recognition device, fitness aid device, fitness aid system, fitness aid method, control program and readable recording medium
JP4817854B2 (en) * 2006-01-27 2011-11-16 アルパイン株式会社 Peripheral object tracking device and peripheral object position prediction method
JP2008176663A (en) * 2007-01-19 2008-07-31 Toyota Central R&D Labs Inc Vehicle detector, light emitting device for vehicle detection, and vehicle detection system
JP2009266155A (en) * 2008-04-30 2009-11-12 Toshiba Corp Apparatus and method for mobile object tracking
JP5294995B2 (en) * 2009-06-03 2013-09-18 パナソニック株式会社 Distance measuring device and distance measuring method
JP2011069797A (en) * 2009-09-28 2011-04-07 Saxa Inc Displacement measuring device and displacement measuring method
SG182021A1 (en) * 2010-12-21 2012-07-30 Singapore Technologies Dynamics Pte Ltd System and method for tracking a lead object
JP5989443B2 (en) * 2012-07-31 2016-09-07 公立大学法人公立はこだて未来大学 Semiconductor integrated circuit and object distance measuring device
JP6488697B2 (en) * 2014-12-25 2019-03-27 株式会社リコー Optical flow calculation device, optical flow calculation method, and program
JP6217696B2 (en) * 2015-06-10 2017-10-25 ソニー株式会社 Information processing apparatus, information processing method, and program
JP6358629B2 (en) * 2016-06-28 2018-07-18 三菱ロジスネクスト株式会社 forklift
JPWO2018173192A1 (en) * 2017-03-23 2019-11-21 株式会社Fuji Parallelism determination method for articulated robot and tilt adjustment device for articulated robot

Also Published As

Publication number Publication date
JP2003030628K1 (en) 2003-01-31
JP2003030628A (en) 2003-01-31

Similar Documents

Publication Publication Date Title
Ferryman et al. Visual surveillance for moving vehicles
KR100201739B1 (en) Method for observing an object, apparatus for observing an object using said method, apparatus for measuring traffic flow and apparatus for observing a parking lot
US10115027B2 (en) Barrier and guardrail detection using a single camera
JP3352655B2 (en) Lane recognition device
US7765027B2 (en) Apparatus and method for estimating a position and an orientation of a mobile robot
KR100257592B1 (en) Lane detection sensor and navigation system employing the same
US5847755A (en) Method and apparatus for detecting object movement within an image sequence
EP0697641B1 (en) Lane image processing system for vehicle
US20040178945A1 (en) Object location system for a road vehicle
EP2383679A1 (en) Detecting and recognizing traffic signs
US8818026B2 (en) Object recognition device and object recognition method
US7266454B2 (en) Obstacle detection apparatus and method for automotive vehicle
Huang et al. Finding multiple lanes in urban road networks with vision and lidar
JP3049603B2 (en) Stereoscopic image - object detection method
JP3619628B2 (en) Driving environment recognition device
CN101966846B (en) Travel&#39;s clear path detection method for motor vehicle involving object deteciting and enhancing
CN101929867B (en) Clear path detection using road model
Jung et al. A lane departure warning system using lateral offset with uncalibrated camera
JP4118452B2 (en) Object recognition device
US7957559B2 (en) Apparatus and system for recognizing environment surrounding vehicle
JP3780922B2 (en) Road white line recognition device
JP2011511281A (en) Map matching method with objects detected by sensors
JP4914234B2 (en) Leading vehicle detection device
CN101950350B (en) Clear path detection using a hierachical approach
US9297641B2 (en) Detection of obstacles at night by analysis of shadows

Legal Events

Date Code Title Description
A621 Written request for application examination

Free format text: JAPANESE INTERMEDIATE CODE: A621

Effective date: 20070726

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20090804

A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20091005

TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20100126

A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20100126

R150 Certificate of patent or registration of utility model

Free format text: JAPANESE INTERMEDIATE CODE: R150

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20130205

Year of fee payment: 3

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20140205

Year of fee payment: 4

LAPS Cancellation because of no payment of annual fees