JP2006350699A - Image processor and image processing method

Info

Publication number: JP2006350699A
Application number: JP2005176247A
Authority: JP (Japan)
Prior art keywords: area, means, region, vehicle, gaze
Legal status: Pending
Other languages: Japanese (ja)
Inventor: Yohei Aragaki (洋平 新垣)
Original assignee: Nissan Motor Co Ltd (日産自動車株式会社)
Application filed by Nissan Motor Co Ltd (日産自動車株式会社)
Priority to JP2005176247A
Publication of JP2006350699A

Abstract

An inexpensive image processing apparatus and method for detecting an obstacle are provided.
An infrared camera 1 acquires a thermal image of the surroundings of the host vehicle. A feature point extraction unit 31 extracts from the acquired thermal image, as feature points, pixels whose luminance differs greatly from that of the surrounding pixels. A gaze region setting unit 32 sets, according to the surrounding environment or the state of the host vehicle, a gaze region to be observed in detail within the thermal image, and calculates the distance from the host vehicle to the gaze region. An image region dividing unit 33 divides the thermal image and sets divided regions according to the distance to the gaze region and the detection target. A divided region scoring unit 34 scores the divided regions based on the feature points, and an obstacle region extracting unit 35 extracts divided regions with high scores as obstacle regions.
[Selection] Figure 1

Description

  The present invention relates to an image processing apparatus and method for detecting an obstacle such as a pedestrian around a host vehicle.

  As a technique for detecting a pedestrian from an image taken by a camera installed in a traveling vehicle, there is Patent Document 1 below. The technique described there binarizes, within a given temperature band, the thermal images obtained from two infrared cameras, and detects pedestrian candidate regions from the shape of the binarization result. The distance to the pedestrian is calculated by triangulation, using the parallax of the pedestrian candidate region between the images of the two infrared cameras.

  Patent Document 2 below describes a technique that detects a pedestrian using a combination of an infrared camera and a visible camera. The technique detects the white lines on the road with the visible camera and sets the road area bounded by them as the pedestrian detection target area. The temperature distribution of this target area is then observed with the infrared camera, and a pedestrian is detected by matching against a pedestrian template prepared in advance. The size of the pedestrian template to be used is determined from the shape of the detected road surface area.

JP 2004-303219 A
JP 2002-99997 A

  In Patent Document 1 above, the use of two infrared cameras increases cost. In addition, since the distance is calculated from the relationship between the two infrared cameras, their output image characteristics and mutual positional relationship must be strictly managed, which also increases cost. Furthermore, recognizing pedestrians by shape matching incurs a high calculation cost.

In Patent Document 2, the processing area is set using a visible camera, so at least two cameras, an infrared camera and a visible camera, are required, which increases cost. Furthermore, since template matching is used, enormous calculation is needed to follow the variously changing shape of a pedestrian, which stands in the way of an inexpensive system configuration.
An object of the present invention is to provide an inexpensive image processing apparatus and method for detecting an obstacle.

  In order to solve the above problem, the present invention is configured to set, within an acquired thermal image of the surroundings of the host vehicle, a gaze region to be observed in detail, according to at least one of the surrounding environment of the host vehicle and the state of the host vehicle.

  According to the present invention, it is possible to provide an inexpensive image processing apparatus and method for detecting an obstacle.

Hereinafter, embodiments of the present invention will be described in detail with reference to the drawings. In the drawings described below, components having the same function are denoted by the same reference numerals, and repeated description thereof is omitted.
Embodiment 1
<Device configuration>
FIG. 1 is a configuration diagram of the image processing apparatus according to the first embodiment of the present invention.
In FIG. 1, 1 is an infrared camera, 2 is a storage unit, 3 is a calculation unit, 21 is an image memory, 22 is a memory, 31 is a feature point extraction unit, 32 is a gaze area setting unit, 33 is an image area dividing unit, 34 is a divided area scoring unit, and 35 is an obstacle area extracting unit.

  The image processing apparatus of the present embodiment comprises the infrared camera 1, which serves as thermal image acquisition means; the storage unit 2, which stores the thermal image acquired by the infrared camera 1 and the calculation results of the calculation unit 3; and the calculation unit 3, which calculates, from the thermal image acquired by the infrared camera 1, the area where an obstacle exists. As described above, the thermal image acquisition means is the infrared camera 1, which detects temperature information around the host vehicle using far infrared rays and acquires a thermal image containing thermal data. The acquired thermal image is stored in the image memory 21 of the storage unit 2. The calculation unit 3 performs calculations based on the information stored in the image memory 21 and the memory 22 of the storage unit 2, and detects the area in front of the host vehicle where an obstacle exists (hereinafter referred to as the "obstacle area").

  The infrared camera 1 is installed in the vehicle at the position shown in FIG. 2. FIGS. 2A and 2B are diagrams showing the installation state of the infrared camera 1: FIG. 2A shows the installation state viewed from the side of the vehicle, and FIG. 2B shows it viewed from above the vehicle.

  As shown in FIG. 2, the infrared camera 1 is installed, for example, in the grille of the vehicle and acquires an image of the area ahead of the vehicle in a format that includes heat data. The installation position and shooting direction of the infrared camera 1 are not limited to those shown in FIG. 2; other installation positions and shooting directions may be used.

  Here, an example of an image acquired by the infrared camera 1 will be described. FIG. 3 is a diagram illustrating an example of an image acquired by the infrared camera 1. In the figure, W is a pedestrian, E is a utility pole, C is a preceding vehicle, and R is a road. As shown in FIG. 3, the infrared camera 1 captures the area ahead of the vehicle and acquires a thermal image including, for example, a pedestrian W, a utility pole E, a preceding vehicle C, and a road R. This thermal image consists of a plurality of pixel data, and each pixel datum includes coordinate data and thermal data.

  Reference is again made to FIG. The storage unit 2 includes an image memory 21 and a memory 22. The calculation unit 3 includes a feature point extraction unit 31, a gaze area setting unit 32, an image area dividing unit (divided area setting unit) 33, a divided area scoring unit (divided area voting unit) 34, and an obstacle area. And an extraction unit 35.

  The image memory 21 stores the thermal image data acquired from the infrared camera 1 in time series. That is, the image memory 21 stores the thermal image as shown in FIG. 3 including the coordinate data and thermal data of each pixel.

FIGS. 4A to 4D are diagrams for explaining the feature points.
The feature point extraction unit 31 in FIG. 1 compares each target pixel with its surrounding pixels in the thermal data of the thermal image acquired by the infrared camera 1, as shown in FIG. 4, and extracts as a feature point F a pixel whose luminance change with respect to the surroundings exceeds a predetermined value and which forms a corner of a region. The intensity of a feature point F may be set according to the luminance difference between the feature point F and its surroundings. For example, comparing FIG. 4C with FIG. 4D, the point in (c), which has the larger luminance difference from its surroundings, is assumed to have a higher intensity than the point in (d).

  As a method for extracting the feature points F, a so-called corner filter may be used in the image processing. Specifically, Jianbo Shi and Carlo Tomasi, "Good Features to Track", IEEE Conference on Computer Vision and Pattern Recognition (CVPR'94), pp. 593-600, describes a method for extracting from an image the pixels that are suitable for tracking. What is extracted there are the corners of imaged object regions, so feature points can be extracted by using the method proposed in that document.
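
For illustration, a minimal sketch of this step in Python, assuming the thermal image is available as an 8-bit grayscale array and using OpenCV's Shi-Tomasi detector; the parameter values and the 3x3 local-contrast measure used as the feature point intensity are assumptions, not values from the patent:

```python
import cv2
import numpy as np

def extract_feature_points(thermal_img: np.ndarray, max_corners: int = 300):
    """Extract corner-like feature points F from an 8-bit thermal image.

    Returns a list of (x, y, strength) tuples, where strength is a simple
    local-contrast value that can later be used to weight the votes (an
    illustrative choice; the patent only says the intensity follows the
    luminance difference with the surroundings).
    """
    corners = cv2.goodFeaturesToTrack(
        thermal_img, maxCorners=max_corners, qualityLevel=0.01, minDistance=5
    )
    points = []
    if corners is None:
        return points
    for cx, cy in corners.reshape(-1, 2):
        x, y = int(cx), int(cy)
        # Local contrast: difference between the pixel and the mean of its
        # 3x3 neighbourhood, used here as the feature point "intensity".
        y0, y1 = max(y - 1, 0), min(y + 2, thermal_img.shape[0])
        x0, x1 = max(x - 1, 0), min(x + 2, thermal_img.shape[1])
        strength = abs(float(thermal_img[y, x]) - float(thermal_img[y0:y1, x0:x1].mean()))
        points.append((x, y, strength))
    return points
```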

  FIG. 5 is a diagram for explaining the feature points extracted from a thermal image, the setting of divided areas, and the extraction of obstacle areas. In FIG. 5, D1, D2, and D3 are rectangular (strip-shaped) divided areas. Extracting the feature points F from the thermal image of FIG. 3 yields the feature points F indicated by the circles in FIG. 5.

  The gaze area setting unit 32 in FIG. 1 extracts an area where an obstacle such as a pedestrian is likely to appear, sets it as the area to be processed in detail (hereinafter referred to as the "gaze area"), and calculates the distance from the host vehicle to the gaze area.

  The distance distribution over the image can be calculated from the position and posture of the camera mounted on the vehicle. Using this fact, the distance from the host vehicle to the gaze area is obtained from the position at which the gaze area appears in the image. If the road shape is known in advance, the distance distribution may instead be calculated according to the road shape.
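
As one concrete way to realize this (a flat-road, pinhole-camera approximation that the patent does not spell out), the distance to a given image row can be derived from the camera height, pitch, and focal length; the default values below are illustrative assumptions:

```python
import math

def row_to_distance(row: int, image_height: int,
                    cam_height_m: float = 0.7,
                    pitch_rad: float = 0.0,
                    focal_px: float = 800.0) -> float:
    """Approximate ground distance (m) to the point imaged at a given pixel row.

    Assumes a flat road and a pinhole camera; cam_height_m, pitch_rad and
    focal_px are illustrative values, not taken from the patent.
    """
    cy = image_height / 2.0                      # optical centre row
    # Angle below the optical axis for this row, plus the camera pitch.
    angle = math.atan2(row - cy, focal_px) + pitch_rad
    if angle <= 0.0:
        return float("inf")                      # row is at or above the horizon
    return cam_height_m / math.tan(angle)
```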

  The image area dividing unit 33 divides the image separately for the gaze area calculated by the gaze area setting unit 32 and for the remaining areas, setting rectangular divided areas of different shapes for each.

The divided area scoring unit 34 votes each feature point F extracted by the feature point extraction unit 31 into the divided area, among the areas D1 to D3 set by the image area dividing unit 33, to which its position belongs, thereby computing a score for each of the divided areas D1 to D3.
The obstacle area extracting unit 35 extracts, from the divided areas D1 to D3 scored by the divided area scoring unit 34, those whose score corresponds to an obstacle as obstacle areas.
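
A minimal sketch of this divide-vote-extract flow, assuming fixed-width vertical strips and a fixed vote threshold (both are illustrative simplifications of the divided areas D1 to D3 and of the score criterion, not values from the patent):

```python
import numpy as np

def score_and_extract(points, image_width: int, strip_width: int = 16, threshold: int = 10):
    """Vote feature points into vertical strip-shaped divided areas and
    extract high-scoring strips as obstacle areas.

    points is a list of (x, y) or (x, y, strength) feature point tuples;
    each point casts one vote into the strip containing its x coordinate.
    """
    n_strips = (image_width + strip_width - 1) // strip_width
    scores = np.zeros(n_strips, dtype=int)
    for x, _y, *_rest in points:
        scores[min(x // strip_width, n_strips - 1)] += 1
    obstacle_strips = [i for i, s in enumerate(scores) if s >= threshold]
    return scores, obstacle_strips
```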

  The memory 22 is connected to the feature point extraction unit 31, the gaze area setting unit 32, the image area dividing unit 33, the divided area scoring unit 34, and the obstacle area extracting unit 35. It stores, so that they can be read out, the feature points (and their intensities) extracted by the feature point extraction unit 31, the gaze area calculated by the gaze area setting unit 32, the divided areas set by the image area dividing unit 33, the score of each divided area computed by the divided area scoring unit 34, and the information on the obstacle areas extracted by the obstacle area extracting unit 35.

<Operation flow>
Next, the operation flow of this embodiment will be described. FIG. 6 is a flowchart showing an operation flow of the present embodiment.
First, in S11, an image is captured by the infrared camera 1 and a thermal image such as that illustrated in FIG. 3 is acquired. Noise in this image can then be reduced by further processing. Examples of such noise reduction are creating a reduced image, halved in both the vertical and horizontal directions, in which each output pixel is the average of four adjacent pixels, and applying a median filter that outputs the median of each pixel and its surrounding pixels. To improve the performance of the subsequent processing, it is even better to use a technique that removes noise while preserving edge information, known in image processing as edge-preserving smoothing.
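
A sketch of the two noise reduction options mentioned here, using OpenCV; the median kernel size is an illustrative choice:

```python
import cv2
import numpy as np

def reduce_noise(thermal_img: np.ndarray, use_median: bool = True) -> np.ndarray:
    """Noise reduction for the thermal image acquired in S11.

    Halves the image in both directions by averaging 2x2 pixel blocks
    (area interpolation), then optionally applies a 3x3 median filter.
    """
    h, w = thermal_img.shape[:2]
    reduced = cv2.resize(thermal_img, (w // 2, h // 2), interpolation=cv2.INTER_AREA)
    if use_median:
        reduced = cv2.medianBlur(reduced, 3)
    return reduced
```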

  Next, in S12, feature points are extracted from the thermal image. As the extraction method, for example, the corner filter described above, which extracts the corners of object regions (or the pixels adjacent to those corners) as feature points as shown in FIG. 4, can be used. As described above, it is also desirable to calculate the intensity of each feature point during this extraction. Note that these feature points do not appear in areas, such as the road surface, where the temperature changes only gradually. They appear in large numbers in obstacle areas, where dense temperature changes, a non-uniform surface structure, non-uniform infrared radiation directions, and the like cause the luminance values of the thermal image to vary.

Next, in S13, a gaze area where an obstacle extraction process is to be performed in detail is set in the thermal image.
FIG. 7A is a diagram illustrating an example of a gaze area in the present embodiment.
In FIG. 7A, G1 is a gaze area and HL is the headlamp irradiation area. Here, the rectangular area in front of the host vehicle beyond the headlamp irradiation area HL shown in FIG. 7A is taken as the area to be observed in detail: G1 is set as the gaze area, and the image of the gaze area G1 is extracted as a gaze area image. At the same time, the distance to the gaze area G1 is determined. Since it lies beyond the headlamp irradiation area HL, it is judged to be, for example, 50 m to 60 m away; of course, it may be even farther.

  Next, in S14, the thermal image is divided into image areas whose aspect ratio corresponds to the target obstacle, and divided areas are set. Here, the width and length of the divided areas are changed according to the gaze area calculated in S13 so that obstacles can be detected in more detail. For example, while the entire acquired thermal image is divided into rectangular divided areas whose vertical length equals that of the screen, as indicated by D3 in FIG. 5, the gaze area is divided into narrower and shorter rectangular divided areas, as indicated by D1 and D2. An obstacle in the gaze area appears small on the screen because it is far away and, with ordinary full-size rectangles, could not be separated from other objects; by setting divided areas of an appropriate size it becomes detectable. In this way, the gaze area to be observed in detail is extracted from the whole image and rectangular divided areas are set for it individually. The actual processing is not limited to this, however: the obstacle extraction processing for the whole image and for the gaze area image set as the gaze area may be handled as independent pieces of information, with the subsequent obstacle detection processing performed on each.

  Next, in S15, the feature points extracted in S12 are voted into the divided areas set in S14, thereby scoring each divided area. The extracted feature points are of two types: those belonging to an object region and those not belonging to the object region but adjacent to it. Since the votes are cast into the larger divided areas, either type of feature point can be handled without problem. Furthermore, when the intensity of the feature points has been calculated in S12, the number of votes cast is weighted according to that intensity. For example, when 300 feature points are extracted, the 100 points in the top third may be given 3 votes each, the 100 points in the middle third 2 votes each, and the 100 points in the bottom third 1 vote each. Of course, the distribution of the weights is not limited to this and may be changed according to the obstacle to be extracted.
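
A sketch of this intensity-weighted voting, splitting the extracted points into thirds by rank as in the example above (interpreting the thirds as rank-based is an assumption):

```python
import numpy as np

def weighted_votes(strengths):
    """Assign vote weights (3/2/1) to feature points by intensity tertile."""
    strengths = np.asarray(strengths)
    order = np.argsort(strengths)             # ascending by intensity
    n = len(strengths)
    weights = np.empty(n, dtype=int)
    weights[order[: n // 3]] = 1              # weakest third
    weights[order[n // 3: 2 * n // 3]] = 2    # middle third
    weights[order[2 * n // 3:]] = 3           # strongest third
    return weights
```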

  When the intensity of the feature points is used, the absolute value of the intensity itself is not important, because each feature point reflects a relative change with respect to its surrounding pixels. For example, when temperatures rise in summer and the temperature difference between structures, pedestrians, and other obstacles on the road becomes small, the absolute intensities tend to be weaker than in winter, when the temperature differences are large. When winter and summer images are compared, however, the feature points with relatively strong intensity appear in almost the same places in both. Therefore, the weighting used when voting into the divided areas should be based not on the absolute values of the feature point intensities but on their relative strength among the feature points extracted from the whole image.

  Next, in S16, divided areas whose score is equal to or greater than a predetermined threshold are extracted from the divided areas scored in S15 as obstacle areas. Note that, by exploiting the characteristics of the thermal image, regions whose temperature clearly differs from the temperature assumed for the targets to be detected can be eliminated at the feature point extraction step in S12 or at the voting step in S15, which further improves performance.

  Furthermore, when extracting feature points from the whole image, the image may be reduced, for example, to a quarter-size image halved both vertically and horizontally. In this case the positional accuracy of the feature points is lower, but the amount of calculation required for feature point extraction can be reduced without changing the detection performance itself, lowering the calculation cost. The order of extracting the feature points in S12 and calculating the gaze area in S13 may also be interchanged.

  As described above, the image processing apparatus and method of the present embodiment acquire a thermal image of the surroundings of the host vehicle and set, according to at least one of the surrounding environment of the host vehicle and the state of the host vehicle, a gaze area within that thermal image to be observed in detail. Specifically, the apparatus comprises: the infrared camera 1, which acquires a thermal image of the surroundings of the host vehicle; the feature point extraction unit 31, which extracts from the thermal image acquired by the infrared camera 1, as feature points, pixels whose luminance differs greatly from that of the surrounding pixels; the gaze area setting unit 32, which sets, according to at least one of the surrounding environment of the host vehicle and the state of the host vehicle, a gaze area to be observed in detail within the thermal image acquired by the infrared camera 1; the image area dividing unit 33, which divides the thermal image acquired by the infrared camera 1 and sets divided areas according to the distance to the gaze area calculated by the gaze area setting unit 32 and the detection target; the divided area scoring unit 34, which scores the divided areas set by the image area dividing unit 33 on the basis of the feature points extracted by the feature point extraction unit 31; and the obstacle area extracting unit 35, which extracts, among the divided areas, those with high scores as obstacle areas.

  The gaze area setting unit 32 calculates the distance from the host vehicle to the gaze area. The image area dividing unit 33 divides the image area according to the distance to the gaze area calculated by the gaze area setting unit 32 and the detection target.

  As described above, Patent Document 1 uses two infrared cameras, which increases cost. Because the distance is calculated from the relationship between the two cameras, their output image characteristics and mutual positional relationship must be strictly managed, which also increases cost. The need to fix the positional relationship strictly also constrains the mounting positions on the vehicle and imposes restrictions on vehicle styling. Furthermore, recognizing pedestrians by shape matching incurs a high calculation cost. In Patent Document 2, the processing area is set using a visible camera, so at least two cameras, an infrared camera and a visible camera, are required, which increases cost. Because the processing area is set from the white lines detected by the visible camera, the road shape cannot be detected accurately on roads where white lines are hard to detect or on roads without white lines; in such cases it is difficult to limit the processing area, the processing speed may fall, or the size of the pedestrian template cannot be set optimally and the detection performance may deteriorate. Because only the inside of the road surface area is processed as a way of limiting the detection processing area, objects outside the road surface area that are about to enter it may not be handled, or may be handled too late. Since a visible camera cannot detect the area outside the headlamp irradiation range, when the road curves beyond that range the road shape cannot be detected, so the area that should be processed may go unprocessed and an appropriate template size cannot be set. As a problem of the pedestrian detection method itself, pedestrian detection is realized by applying template matching to the thermal image obtained from the infrared camera, and the detection performance may deteriorate greatly when the number of candidate regions becomes large. Furthermore, since template matching is used, enormous calculation is needed to follow the variously changing shape of a pedestrian, which stands in the way of an inexpensive system configuration.

  In contrast, according to the image processing apparatus and method of the present embodiment configured as described above, providing a gaze region to be observed in detail within the processing target region allows detection parameters suited to the distance of that region to be set, which expands the detection area and improves the detection performance (detection rate). Since the whole screen need not be processed in detail, the calculation cost can also be reduced. As for the apparatus configuration, a single infrared camera suffices in the minimum configuration, so the apparatus can be provided at low cost, and the freedom of layout when mounting it on a vehicle is greatly improved. Because obstacles are detected using feature points extracted from the luminance difference with the surroundings, even when the ambient temperature in summer becomes almost the same as that of the obstacle to be extracted, the temperature inflections present on the obstacle region in the far-infrared image can still be captured and extracted as feature points, providing robust obstacle detection with an improved detection rate. In addition, the feature point extraction unit 31 extracts from the thermal image acquired by the infrared camera 1, as feature points, pixels that have a large luminance change with respect to the surrounding pixels and form the corners of regions, which makes the feature points easy to extract.

<< Embodiment 2 >>
Next, a second embodiment of the present invention will be described. In the present embodiment, the following procedure is added to the procedure for calculating the gaze area in S13 of FIG. 6 in the first embodiment.
FIG. 7B is a diagram illustrating an example of a gaze area in the present embodiment.
In FIG. 7B, G2 and G3 are gaze areas. For example, assume a scene in which a preceding vehicle C is traveling ahead of the host vehicle. In this case, the preceding vehicle C in the screen is first detected by a predetermined preceding vehicle detection means. As a method for detecting the preceding vehicle C, a high-temperature part corresponding to the muffler of the preceding vehicle C can be detected from the thermal image, and a horizontally long heat region containing that high-temperature part can then be detected as the vehicle. The preceding vehicle C may also be detected using another sensor such as a laser radar.
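
As a rough sketch of how such a detection could look (the thresholds, the single-row horizontal growth, and the largest-blob selection are all illustrative assumptions; the patent only describes finding a high-temperature part and a horizontally long heat region around it):

```python
import cv2
import numpy as np

def detect_preceding_vehicle(thermal_img: np.ndarray,
                             hot_thresh: int = 220,
                             warm_thresh: int = 150):
    """Rough preceding-vehicle detection from a single thermal frame.

    Finds a very hot blob (taken to be the muffler) and grows a horizontally
    long warm region around it. Returns a bounding box (x, y, w, h) or None.
    """
    _, hot = cv2.threshold(thermal_img, hot_thresh, 255, cv2.THRESH_BINARY)
    n, _labels, stats, _cent = cv2.connectedComponentsWithStats(hot.astype(np.uint8))
    if n < 2:
        return None
    # Largest hot blob (excluding the background label 0) -> muffler candidate.
    idx = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))
    x, y, w, h = stats[idx, :4]
    # Expand horizontally over contiguous "warm" columns at the blob's mid row.
    warm = thermal_img > warm_thresh
    row = min(y + h // 2, thermal_img.shape[0] - 1)
    left, right = x, x + w
    while left > 0 and warm[row, left - 1]:
        left -= 1
    while right < thermal_img.shape[1] - 1 and warm[row, right + 1]:
        right += 1
    return (left, y, right - left, h)
```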

  Next, based on the detected area of the preceding vehicle C, the two areas on either side of the preceding vehicle C are calculated and set as the gaze areas G2 and G3, as shown in FIG. 7B. The process then continues with the remaining steps of FIG. 6, and obstacles can be detected as in the first embodiment.

  As described above, in the present embodiment, the gaze area setting unit 32 (FIG. 1) has preceding vehicle detection means for detecting a preceding vehicle, and sets the gaze area based on the presence or absence of the preceding vehicle detected by the preceding vehicle detection means. By detecting the presence or absence of a preceding vehicle in this way, the area subjected to obstacle detection processing can be further limited when a preceding vehicle is present, and more computational resources can be allocated to detection in areas where pedestrians and the like are likely to appear. As a result, the detection performance can be further improved.

<< Embodiment 3 >>
Next, a third embodiment of the present invention will be described. In the present embodiment, in addition to the procedure of the second embodiment, the vehicle speed of the host vehicle is acquired by a predetermined vehicle information acquisition means, and the size (length) of the gaze area is changed based on that vehicle speed. FIG. 7C shows an example of the gaze areas in this embodiment when the host vehicle speed is high, and FIG. 7D shows an example when it is low. G4 and G5 are the gaze areas when the vehicle speed of the host vehicle is high, and G6 and G7 are the gaze areas when it is low.

  When the vehicle speed of the host vehicle increases, the area in which an obstacle that may come into contact with the host vehicle can appear extends farther ahead. For this reason, as shown in FIG. 7C, the gaze areas G4 and G5 are extended toward the upper part of the screen. Conversely, when the vehicle speed decreases, distant areas no longer need to be covered as areas where such an obstacle may appear, so, as shown in FIG. 7D, the gaze areas G6 and G7 can be compressed toward the lower part of the screen.
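
For illustration, a minimal sketch of stretching the gaze area upward with vehicle speed; the linear mapping and the maximum speed are assumptions, not values from the patent:

```python
def gaze_region_top(vehicle_speed_kmh: float,
                    horizon_row: int,
                    base_row: int,
                    max_speed_kmh: float = 100.0) -> int:
    """Top pixel row of the gaze area as a function of vehicle speed.

    Interpolates linearly between base_row (low speed, compressed area) and
    horizon_row (high speed, area extended toward the top of the screen).
    """
    ratio = min(max(vehicle_speed_kmh / max_speed_kmh, 0.0), 1.0)
    return int(round(base_row - ratio * (base_row - horizon_row)))
```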

  Further, in this case, in the division of the image area in S14 of FIG. 6 of the first embodiment, the gaze areas in FIG. 7C, where distant objects must be detected, can be divided into narrower rectangular divided areas than in the case of FIG. 7B. This copes with the fact that a distant obstacle appears small in the image, and improves the detection performance. Conversely, in the case of FIG. 7D, where distant objects need not be detected, the areas can be divided into wider rectangular divided areas than in the case of FIG. 7B. This prevents an object in the region to be detected from being split up and detected separately, and improves the detection performance just as in the distant case.

  In the present embodiment, an example in which the gaze areas are expanded or compressed only in the vertical direction has been described, but the method of changing the gaze area is not limited to this. For example, when the host vehicle is traveling at low speed and a pedestrian steps out in front of it, a relatively long time passes before the pedestrian could come into contact with the host vehicle. Therefore, to help prevent contact with a pedestrian, the gaze area can be extended in the lateral direction so that pedestrians who may come into contact with the host vehicle are detected over a wider area.

  Moreover, since the traveling direction of the host vehicle can be obtained from its steering angle information, it can be determined whether the vehicle is traveling on a straight road or a curved road. FIG. 7E shows an example of the gaze area in this embodiment when the road ahead is curved. When the vehicle is traveling along a curved road, the far end of the curve can be set as the gaze area G8, as shown in FIG. 7E.

  As described above, in the present embodiment, the gaze area setting unit 32 (FIG. 1) has vehicle information acquisition means for acquiring vehicle information of the host vehicle and sets the gaze area based on the vehicle information acquired by that means. By using vehicle information such as the vehicle speed and steering angle of the host vehicle in this way, when there is no need to detect distant areas, for example when the vehicle speed is low, more computational resources can be allocated to detecting obstacles in the region near the vehicle. As a result, the detection performance can be further improved. Moreover, since the parameters of the detection processing can be optimized for the gaze area to be observed in detail, the detection performance can be improved further still.

<< Embodiment 4 >>
Next, a fourth embodiment of the present invention will be described. In the present embodiment, after the feature point extraction procedure of S12 in FIG. 6 of the first embodiment, in the gaze area calculation procedure of S13, the road surface area of the lane in which the host vehicle is traveling is extracted by a predetermined lane area periphery observation means, and the area of the oncoming lane (road surface area) is estimated from that information. A gaze area is then set by determining, with a predetermined traffic condition detection means, whether the oncoming lane is congested. The specific processing will be described with reference to the flowchart of FIG. 8.

FIGS. 9A and 9B are diagrams for explaining the extraction of the road surface area: FIG. 9A explains the mesh-like divided areas, and FIG. 9B explains the representative points of the road surface edge (road edge).
First, assume the scene shown in FIG. 3 and suppose that, as shown in FIG. 9A, a number of feature points F have been extracted by the feature point extraction procedure of S12 in FIG. 6. In this case, in S41 of FIG. 8, the divided areas are set as a mesh-like voting space as shown in FIG. 9A. Next, in S42 of FIG. 8, the feature points extracted in S12 of FIG. 6 are voted into the respective divided areas, and each mesh-shaped divided area is scored. As a result, divided areas into which many feature points have been voted, such as the divided area D4 in FIG. 9A, and divided areas into which almost no feature points have been voted, such as the divided area D5, appear. Based on this result, in S43 of FIG. 8, the mesh-shaped divided areas whose number of votes is equal to or less than a threshold value are selected as road surface candidate areas (from which the representative points N of the road surface edge in FIG. 9B can be extracted).

Next, in S44 of FIG. 8, the road surface area is extracted from the mesh-shaped divided areas that are road surface candidate areas, based on their continuity from the lower part of the screen.
Next, in S45, the road surface edge is output and the oncoming lane is estimated. That is, in FIG. 9B, the center of the upper edge of the uppermost divided areas with low mesh scores is extracted as the representative points N, and by connecting these points a road edge line is obtained and output. Portions of the road surface edge where an obstacle is present on the road are excluded, so when a vehicle is present in the oncoming lane that area is excluded before the road surface area is extracted. When the road being traveled carries left-hand traffic, the area on the right side of the road surface area, or the area adjacent to its right side, can be estimated to be the oncoming lane. It is also possible to estimate the travel lane area of the host vehicle in the procedure from S42 to S45 using the steering angle, which is host vehicle information.
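
A sketch of the mesh voting space of S41 to S43, assuming a fixed cell size and vote threshold (both are illustrative values):

```python
import numpy as np

def road_candidate_mesh(points, image_shape, cell: int = 16, vote_thresh: int = 1):
    """Mesh-like voting space for road surface candidate extraction.

    points is a list of (x, y) or (x, y, strength) feature points; cells with
    at most vote_thresh votes are marked as road surface candidate areas.
    Returns a boolean grid (rows x cols) of candidate cells.
    """
    h, w = image_shape[:2]
    rows, cols = (h + cell - 1) // cell, (w + cell - 1) // cell
    votes = np.zeros((rows, cols), dtype=int)
    for x, y, *_rest in points:
        votes[y // cell, x // cell] += 1
    return votes <= vote_thresh
```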

  Next, in S46, the oncoming lane area estimated in S45 is observed, whether the vehicles there are congested is detected, and the traffic situation in the oncoming lane is estimated. The congestion in the oncoming lane can be detected in S46 by checking whether areas detected as high-temperature parts by the infrared camera 1, such as the grilles, wheels, and mufflers of oncoming vehicles and their reflections on the road, are observed continuously with only small gaps.
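
A rough stand-in for this congestion check, assuming the oncoming lane is given as a binary mask and declaring congestion when hot pixels cover a sufficient fraction of it; both thresholds are assumptions:

```python
import numpy as np

def oncoming_lane_congested(thermal_img, lane_mask,
                            hot_thresh: int = 200,
                            coverage_ratio: float = 0.3) -> bool:
    """Rough congestion check for the oncoming lane.

    Returns True when the fraction of hot pixels inside the estimated
    oncoming-lane mask reaches coverage_ratio, standing in for
    "high-temperature parts are observed continuously".
    """
    lane = lane_mask.astype(bool)
    if lane.sum() == 0:
        return False
    hot = thermal_img > hot_thresh
    return (hot & lane).sum() / lane.sum() >= coverage_ratio
```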

  FIG. 7F shows an example of the gaze area in the present embodiment. As shown in FIG. 7F, the side of the congested area of the oncoming lane facing the host vehicle's lane is set as the gaze area G9, and the process then proceeds to S14 of FIG. 6, where the same operations as in the first embodiment are performed to carry out the obstacle detection processing.

  Note that the method of extracting the road surface area of the host vehicle's lane is not limited to the one described here; other methods may be used. The extraction of the oncoming lane area may also be combined with the preceding vehicle detection of the second embodiment, setting the area to the right of the preceding vehicle as the oncoming lane when the road carries left-hand traffic.

Furthermore, if the horizontal width of the mesh-shaped divided areas set in S41 is matched to the width of the rectangular divided areas used in the image area division of S14 in FIG. 6, or the latter width is made an integral multiple of the mesh width, the voting results for the mesh-shaped divided areas in S42 can be reused as the voting results of S15 in FIG. 6, so that a reduction in the amount of calculation can be achieved.
Furthermore, using only the information on the road shape ahead of the host vehicle, when the road ahead is curved as shown in FIG. 7E, the far end of the curve can also be set as the gaze area.

  As described above, in the present embodiment, the gaze area setting unit 32 (FIG. 1) has lane area periphery observation means for detecting the lane area in which the host vehicle is traveling and the situation around that lane area, and traffic condition detection means for detecting the traffic condition of the surrounding lanes based on the information detected by the lane area periphery observation means, and it sets the gaze area based on the information detected by the traffic condition detection means. By detecting congestion in the oncoming lane, from between whose vehicles pedestrians are likely to step out, and observing that area in detail, computational resources can be allocated in advance to areas that are likely to lead to an unsafe situation. As a result, the detection performance can be further improved.

  Further, the traffic condition detection means detects the traffic condition of the oncoming lane by detecting whether high-temperature parts are observed continuously in the area corresponding to the oncoming lane around the travel lane area of the host vehicle acquired from the lane area periphery observation means. This makes it possible to detect congestion in the oncoming lane with simple processing using only the infrared camera 1, so the configuration can be realized inexpensively and simply.

<< Embodiment 5 >>
Next, a fifth embodiment of the present invention will be described. In the present embodiment, after the feature point extraction of S12 in FIG. 6 of the first embodiment, the road surface area of the host vehicle's lane is first extracted as in the fourth embodiment (S42 to S44 of FIG. 8). Of course, the travel lane may be extracted using a different method. To detect a parked vehicle, the area observed as the parked vehicle area is, for example on a left-hand-traffic road, the leftmost part of the detected road surface area or the area adjacent to its left side. A vehicle parked or stopped in this area is detected by a predetermined parked/stopped vehicle detection means.

  As the parked/stopped vehicle detection means, the same technique as the preceding vehicle detection means of the second embodiment can be used. Specifically, a muffler measurable as a high-temperature part is detected, and a region in a certain temperature band containing it is detected as the parked vehicle area. If that area grows as the host vehicle advances, so that the host vehicle can be judged to be approaching it, the vehicle can be determined to be stationary. The method of detecting a parked vehicle is not limited to this; other methods or other sensors may be used.

  FIG. 7G shows an example of the gaze area in the present embodiment. The area around the parked vehicle detected by the above method is set as the gaze area G10, as shown in FIG. 7G, and obstacle detection is performed.

  As described above, in the present embodiment, the gaze area setting unit 32 (FIG. 1) has lane area periphery observation means for detecting the lane area in which the host vehicle is traveling and the situation around that lane area, and parked/stopped vehicle detection means for detecting a parked or stopped vehicle in at least one of the host vehicle's travel lane area detected by the lane area periphery observation means and the adjacent area outside that lane area; the area around the parked or stopped vehicle detected by the parked/stopped vehicle detection means is set as the gaze area. By detecting parked and stopped vehicles in the host vehicle's lane and the adjacent areas in this way, pedestrians stepping out from in front of or behind a parked vehicle, and people getting out of or into the parked vehicle itself, can be detected efficiently, so the detection performance can be further improved.

<< Embodiment 6 >>
Next, a sixth embodiment of the present invention will be described. In this embodiment, an obstacle, or an area that appears to be an obstacle, is assumed to have been extracted a predetermined time earlier by any of the first to fifth embodiments described above or by another method. The obstacle (or equivalent) extracted that predetermined time earlier, together with its surroundings, is set as the gaze area.

  Specifically, this is realized by expanding the region extracted a predetermined time earlier by a predetermined amount. If the obstacle is moving, the region may be expanded to cover the area where it is expected to be as a result of that movement. Obstacle detection processing is then performed using this gaze area.
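
A minimal sketch of setting the gaze area from the previous result: the earlier obstacle box is shifted by an estimated image-plane motion and expanded by a margin; the margin, the velocity handling, and the box format are assumptions:

```python
def gaze_from_previous(box, margin: int = 20, velocity=(0, 0), image_size=(640, 480)):
    """Gaze area derived from an obstacle box (x, y, w, h) extracted earlier.

    The box is shifted by the estimated per-interval image-plane velocity and
    expanded by `margin` pixels on every side, clipped to the image.
    """
    x, y, w, h = box
    vx, vy = velocity
    x0 = max(x + vx - margin, 0)
    y0 = max(y + vy - margin, 0)
    x1 = min(x + vx + w + margin, image_size[0])
    y1 = min(y + vy + h + margin, image_size[1])
    return (x0, y0, x1 - x0, y1 - y0)
```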

  As described above, in the present embodiment, the gaze area setting unit 32 (FIG. 1) sets as the gaze area the surroundings of the obstacle area that the obstacle area extraction unit 35 extracted a predetermined time earlier as highly likely to contain an obstacle. By using the immediately preceding detection result of the obstacle area extraction unit 35 in this way, the obstacle can be prevented from being lost partway through, and it can also be determined whether the obstacle is approaching or moving away from the host vehicle. As a result, the detection performance can be further improved.

<< Embodiment 7 >>
Next, a seventh embodiment of the present invention will be described. FIG. 10 is a flowchart showing an operation flow of the present embodiment. In the present embodiment, as in the first embodiment, first, a thermal image is acquired in S71 of FIG. 10, and feature points are extracted from the acquired thermal image in S72.
Next, the road surface area is extracted in S73 using the road surface area extraction method described in the fourth embodiment, and the road surface edge is detected from the road shape.

FIG. 11A is a diagram for explaining the relationship between the road surface area and the pedestrian frequent line, and FIG. 11B is a diagram for explaining a method of dividing the divided area using the pedestrian frequent line. In the figure, RS is the shape of the road surface, WL is the pedestrian frequent line, and RF is the farthest position of the road surface ends.
Next, in S74, a predetermined pedestrian frequent line extraction means sets a pedestrian frequent line WL as a gaze line, as shown in FIG. 11A, based on the road surface edge of the road surface area, the distance distribution between the host vehicle and the feature points on the image, and the vehicle speed and steering angle of the host vehicle obtained as vehicle information.

  Specifically, a rough distance distribution over the screen is calculated from the road surface shape RS in the obtained road surface information. Using this distance distribution together with the vehicle speed and steering angle of the host vehicle, it is possible to calculate the area in which a pedestrian stepping out could come into contact with the host vehicle within a certain time. Taking this area as the area in which obstacles are to be computed, the point at the farthest position RF on the road surface edge is calculated, and the straight line passing through that point in the vertical direction of the screen is set as the pedestrian frequent line WL. The number of pedestrian frequent lines WL is not limited to one; two, three, or more may be set.

Next, in S75, the screen area is divided using the pedestrian frequent line WL set in S74. As shown in FIG. 11B, the screen is divided into rectangular divided areas that are narrow around the pedestrian frequent line WL and gradually widen with increasing distance from that line. When the distance distribution obtained from the road surface edge is used, the width of the divided areas is set according to the distance distribution: taking the width of the divided area along the pedestrian frequent line WL, the narrowest part, as 1 and widening the divided areas in inverse proportion to the distance, the width is doubled at the position corresponding to half the distance of the pedestrian frequent line WL and, for example, quadrupled at the position corresponding to a quarter of that distance. The width of the divided areas is not limited to integral multiples; it may also be varied gradually.
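
A sketch of such distance-dependent strip widths around the pedestrian frequent line; using a fixed base width and a linear growth per strip is an illustrative simplification of the distance-proportional widths described above:

```python
def strip_edges(image_width: int, wl_column: int,
                base_width: int = 8, growth: int = 4):
    """Column boundaries of divided areas around a pedestrian frequent line.

    Strips are base_width pixels wide at the WL column and widen by `growth`
    pixels per strip with increasing distance from it.
    """
    edges = [wl_column]
    # Grow strips to the right of the pedestrian frequent line.
    x, w = wl_column, base_width
    while x < image_width:
        x = min(x + w, image_width)
        edges.append(x)
        w += growth
    # Grow strips to the left of the pedestrian frequent line.
    x, w = wl_column, base_width
    while x > 0:
        x = max(x - w, 0)
        edges.insert(0, x)
        w += growth
    return edges
```
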
Next, voting of feature points is performed for each divided area in S76, and an obstacle area is extracted in S77.

  As described above, in the present embodiment, the image area dividing unit 33 (FIG. 1) has road surface area extraction means for extracting the road surface area, and pedestrian frequent line extraction means for extracting, as the pedestrian frequent line WL, a position where pedestrians appear with high frequency, based on the road surface area extracted by the road surface area extraction means; it divides the surroundings of the pedestrian frequent line WL extracted by the pedestrian frequent line extraction means more finely than other regions. By defining the part where a pedestrian is likely to appear as the pedestrian frequent line WL, treating its surroundings as the region to be observed in detail, and setting the screen division parameters for detection accordingly, the detection is applied to the entire screen while the increase in calculation cost is suppressed and the other processing is left unchanged, and at the same time an output that examines in detail the areas where pedestrians are most likely to appear is obtained. The detection performance can therefore be further improved.

  The apparatus also has vehicle information acquisition means for detecting information about the host vehicle, and the pedestrian frequent line WL is set from the vehicle speed and steering angle acquired by this vehicle information acquisition means and the road surface area extracted by the road surface area extraction means. By combining the vehicle speed and steering angle obtained from the host vehicle with the information on the road surface area in this way, the area in which pedestrians need to be detected for the host vehicle can be calculated, and parameters can be set according to the distance of the detection target. The detection performance can thereby be further improved.

<< Embodiment 8 >>
Next, an eighth embodiment of the present invention will be described with reference to the flowchart of FIG. First, as in the first embodiment, a thermal image is acquired in S71 of FIG. 10, and feature points are extracted from the acquired thermal image in S72.

Next, the road surface area is extracted in S73 using the road surface area extraction method described in the fourth embodiment, and a pedestrian frequent line WL is set as a gaze line along the road surface edge of the extracted road surface area. FIG. 12A illustrates an example in which the road surface edge of the road surface area is set as the pedestrian frequent line WL, and FIG. 12B illustrates the relationship between the pedestrian frequent lines WL1 to WL3 and the feature points F.
For example, consider the case where the left edge of the road is detected and set as the pedestrian frequent line WL, as shown in FIG. 12A. Depending on the situation to be detected, the pedestrian frequent line WL may be set along both edges of the road or along only one of them.

Next, in S75, the screen area is divided into equally spaced rectangles as shown in FIG. 5, and in S76 the feature points extracted in S72 are voted into the rectangular divided areas. During this voting, as shown in FIG. 12B, feature points F close to the pedestrian frequent line WL1, which serves as a gaze line, are given a higher voting score than other points. This raises the extraction sensitivity for obstacles below a certain height near the road surface. Specifically, since the road shape has been extracted, the distance distribution over the image is estimated using a predetermined feature point distance calculation means; using the obtained distance distribution, a straight line at a pedestrian-equivalent height of, for example, 1.8 m above the pedestrian frequent line WL1 corresponding to the road surface edge is defined as the pedestrian frequent line WL3, and a straight line at a height of 1 m above the pedestrian frequent line WL1 is calculated as the pedestrian frequent line WL2, for pedestrians' feet and for children. Feature points F lying in the regions closer to the road surface than each of the pedestrian frequent lines WL1 to WL3 are then voted with weights: feature points below the pedestrian frequent line WL2 count 3 points, points below the pedestrian frequent line WL3 count 2 points, and the remaining points count 1 point. From this result, rectangular divided areas with high scores, that is, divided areas with a high feature point occurrence rate, are extracted as obstacle areas in S77.
Here, the pedestrian frequent lines WL1 to WL3 were defined by calculating heights above the road surface at the road surface edge, and the voting points were weighted accordingly; the heights used to define the lines may be chosen differently, or the weighting may be set more finely.
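
A sketch of this height-based vote weighting at the column of a feature point, using the 3/2/1 weights from the example above; treating "below a line" as a larger image row index is an assumption about the geometry:

```python
def height_weighted_vote(point_row: int, wl2_row: int, wl3_row: int) -> int:
    """Vote weight for a feature point based on the height lines WL2 and WL3.

    wl2_row and wl3_row are the image rows of the 1 m line WL2 and the 1.8 m
    line WL3 at the feature point's column (rows decrease with height).
    """
    if point_row >= wl2_row:      # at or below the 1 m line, near the road surface
        return 3
    if point_row >= wl3_row:      # between the 1 m and 1.8 m lines
        return 2
    return 1                      # above pedestrian height
```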

  As described above, the present embodiment has road surface area extraction means for extracting the road surface area, and feature point distance calculation means for calculating the distance between the road surface edge obtained from the road surface area extracted by the road surface area extraction means and the feature points extracted by the feature point extraction unit 31 (FIG. 1); the divided area scoring unit changes the number of votes according to the distance between the road surface edge obtained by the road surface area extraction means and the feature points calculated by the feature point distance calculation means. By increasing the weight of the feature points extracted near the detected road surface edge in this way, obstacles such as pedestrians walking at the side of the road become easier to detect, and the detection performance can be improved.

With the configuration described in the first to eighth embodiments, it is possible to provide an image processing apparatus and method that improve the obstacle detection performance without increasing the amount of calculation.
The first to eighth embodiments described above are presented to facilitate understanding of the present invention, not to limit it. Each element disclosed in the above embodiments therefore encompasses all design changes and equivalents belonging to the technical scope of the present invention. For example, when rectangular divided areas are set to divide the image area, the width of the divided areas is set according to the distance to be detected; this expands the detectable area and contributes to improving pedestrian detection performance. The first to eighth embodiments may also be combined as appropriate. The term "gaze area" includes a gaze line.
The correspondence between the constituent elements in the claims and those in the embodiments is as follows. The infrared camera 1, S11 in FIG. 6, and S71 in FIG. 10 correspond to the thermal image acquisition means in the claims; the feature point extraction unit 31, S12 in FIG. 6, and S72 in FIG. 10 correspond to the feature point extraction means; the gaze area setting unit 32, S13 in FIG. 6, S43 to S45 in FIG. 8, and S73 and S74 in FIG. 10 correspond to the gaze area setting means; the image area dividing unit 33, S14 in FIG. 6, S41 in FIG. 8, and S75 in FIG. 10 correspond to the image area dividing means; the divided area scoring unit 34, S15 in FIG. 6, S42 in FIG. 8, and S76 in FIG. 10 correspond to the divided area scoring means; and the obstacle area extraction unit 35, S16 in FIG. 6, and S77 in FIG. 10 correspond to the obstacle area extraction means. Further, S43 to S45 in FIG. 8 correspond to the lane area periphery observation means, S46 in FIG. 8 to the traffic condition detection means, and S74 in FIG. 10 to the pedestrian frequent line extraction means.

FIG. 1 is a block diagram showing the configuration of the image processing apparatus of the first to eighth embodiments of the present invention.
FIG. 2 shows the installation position of the infrared camera.
FIG. 3 shows an example of an image acquired by the infrared camera.
FIG. 4 explains the feature points.
FIG. 5 explains the feature points extracted from a thermal image, the setting of the divided areas, and the extraction of obstacle areas.
FIG. 6 is a flowchart showing the flow of operation of the first embodiment of the present invention.
FIG. 7 explains examples of the gaze areas in the first to fifth embodiments of the present invention.
FIG. 8 is a flowchart showing the flow of operation of the fourth embodiment of the present invention.
FIG. 9 explains the extraction of the road surface area in the fourth embodiment of the present invention.
FIG. 10 is a flowchart showing the flow of operation of the seventh embodiment of the present invention.
FIG. 11A explains the relationship between the road surface area and the pedestrian frequent line in the seventh embodiment of the present invention, and FIG. 11B explains the method of dividing the divided areas using the pedestrian frequent line.
FIG. 12A explains an example in which the road surface edge of the road surface area in the eighth embodiment of the present invention is set as the pedestrian frequent line, and FIG. 12B explains the relationship between the pedestrian frequent lines and the feature points.

Explanation of symbols

1 ... Infrared camera
2 ... Storage unit
3 ... Calculation unit
21 ... Image memory
22 ... Memory
31 ... Feature point extraction unit
32 ... Gaze area setting unit
33 ... Image area dividing unit
34 ... Divided area scoring unit
35 ... Obstacle area extraction unit
W ... Pedestrian
E ... Utility pole
C ... Preceding vehicle
R ... Road
F ... Feature point
D1-D3, D4, D5 ... Divided areas
G1-G10 ... Gaze areas
HL ... Headlamp irradiation area
N ... Representative point
RS ... Road surface shape
WL, WL1-WL3 ... Pedestrian frequent lines
RF ... Farthest position on the road surface edge

Claims (15)

  1. An image processing apparatus wherein a thermal image of the surroundings of a host vehicle is acquired, and
    a gaze area to be observed in detail within the thermal image is set according to at least one of the surrounding environment of the host vehicle and the state of the host vehicle.
  2. An image processing apparatus comprising:
    thermal image acquisition means for acquiring a thermal image of the surroundings of a host vehicle;
    feature point extraction means for extracting, as feature points from the thermal image acquired by the thermal image acquisition means, pixels having a large luminance change with respect to the surrounding pixels;
    gaze area setting means for setting a gaze area to be observed in detail within the thermal image acquired by the thermal image acquisition means, according to at least one of the surrounding environment of the host vehicle and the state of the host vehicle;
    image area dividing means for dividing the thermal image acquired by the thermal image acquisition means to set divided areas;
    divided area scoring means for scoring, based on the feature points extracted by the feature point extraction means, the divided areas set by the image area dividing means; and
    obstacle area extraction means for extracting, as obstacle areas, divided areas having high scores among the divided areas scored by the divided area scoring means.
  3.   The image processing apparatus according to claim 2, further comprising distance calculation means for calculating a distance from the host vehicle to the gaze area set by the gaze area setting means.
  4.   The image processing apparatus according to claim 3, wherein the image area dividing means sets the divided areas according to the distance to the gaze area calculated by the distance calculation means and the detection target.
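
One way to realise the distance-dependent division of claims 3 and 4 is to size each divided area so that a detection target at the gaze-area distance roughly fills one cell. The pinhole projection, the focal length, and the assumed pedestrian dimensions below are illustrative assumptions; the claims only state that the distance and the detection target govern the division.

    def cell_size_for_target(distance_m, target_w_m=0.6, target_h_m=1.7, focal_px=800.0):
        # Approximate pixel footprint of a pedestrian-sized target at the given distance
        # (simple pinhole model: size_px = focal_px * size_m / distance_m).
        w_px = max(1, int(focal_px * target_w_m / distance_m))
        h_px = max(1, int(focal_px * target_h_m / distance_m))
        return w_px, h_px

    def split_by_distance(shape, distance_m):
        # Grid whose cells match the expected target size at the gaze-area distance.
        h, w = shape
        cw, ch = cell_size_for_target(distance_m)
        return [(y, min(y + ch, h), x, min(x + cw, w))
                for y in range(0, h, ch) for x in range(0, w, cw)]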
  5.   The image processing apparatus according to claim 2 or 3, wherein the feature point extraction means extracts, from the thermal image acquired by the thermal image acquisition means, a pixel whose luminance differs greatly from that of surrounding pixels and which forms a corner of a region, as a feature point.
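
Claim 5 adds a corner condition to the feature points but does not name a detector; a Harris-style corner response is one common choice and is sketched below purely as an illustration (the window size, the k constant, and the threshold are assumptions).

    import numpy as np
    from scipy.ndimage import uniform_filter

    def corner_feature_points(thermal, k=0.04, win=3, thresh=1e6):
        # Harris corner response: large where the local gradient structure
        # changes in two directions, i.e. at corners of warm regions.
        img = thermal.astype(np.float64)
        gy, gx = np.gradient(img)
        sxx = uniform_filter(gx * gx, size=win)
        syy = uniform_filter(gy * gy, size=win)
        sxy = uniform_filter(gx * gy, size=win)
        det = sxx * syy - sxy * sxy
        trace = sxx + syy
        response = det - k * trace * trace
        return response > thresh                   # boolean corner map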
  6. The image processing apparatus according to claim 2, wherein the gaze area setting means has preceding vehicle detection means for detecting a preceding vehicle,
    and sets the gaze area based on the presence or absence of a preceding vehicle detected by the preceding vehicle detection means.
  7. The image processing apparatus according to claim 2, wherein the gaze area setting means has vehicle information acquisition means for acquiring vehicle information of the host vehicle,
    and sets the gaze area based on the vehicle information acquired by the vehicle information acquisition means.
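
Claims 6 and 7 leave the concrete selection rule open; the sketch below illustrates one plausible policy in which the presence of a preceding vehicle, the vehicle speed, and the steering angle steer the gaze area. The box arithmetic, thresholds, and fractions are assumptions made only for illustration.

    def select_gaze_area(img_w, img_h, preceding_box=None, speed_kmh=0.0, steer_deg=0.0):
        # Returns a gaze area as (left, top, right, bottom) in pixels.
        if preceding_box is not None:
            # Watch the flanks of the preceding vehicle, where a pedestrian
            # could emerge from occlusion.
            x0, y0, x1, y1 = preceding_box
            margin = (x1 - x0) // 2
            return (max(0, x0 - margin), y0, min(img_w, x1 + margin), y1)
        # No preceding vehicle: look further up the image at higher speed
        # and shift the area toward the steering direction.
        top = int(img_h * (0.3 if speed_kmh > 60 else 0.5))
        shift = int(steer_deg / 45.0 * img_w * 0.25)
        cx = img_w // 2 + shift
        return (max(0, cx - img_w // 4), top, min(img_w, cx + img_w // 4), int(img_h * 0.8))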
  8. The image processing apparatus according to claim 2, wherein the gaze area setting means includes:
    lane area periphery observation means for detecting a traveling lane area in which the host vehicle is traveling and a situation around the traveling lane area; and
    traffic jam situation detection means for detecting a traffic jam situation of a surrounding lane based on the information detected by the lane area periphery observation means,
    and sets the gaze area based on the information detected by the traffic jam situation detection means.
  9.   The image processing apparatus according to claim 8, wherein the traffic jam situation detection means detects the traffic jam situation by detecting whether a high-temperature portion is continuously observed in an area corresponding to an oncoming lane around the traveling lane area of the host vehicle acquired from the lane area periphery observation means.
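
The check in claim 9 amounts to asking whether high-temperature regions run almost continuously along the area corresponding to the oncoming lane. A minimal sketch follows; the temperature threshold and the coverage ratio are illustrative assumptions.

    import numpy as np

    def oncoming_lane_congested(thermal, oncoming_lane_mask, hot_thresh=120, coverage=0.6):
        # True if hot pixels (e.g. vehicle bodies and exhaust) appear in most
        # image rows that the oncoming-lane mask covers.
        hot = (thermal > hot_thresh) & oncoming_lane_mask
        lane_rows = oncoming_lane_mask.any(axis=1)
        if not lane_rows.any():
            return False
        hot_rows = hot.any(axis=1)
        return hot_rows[lane_rows].mean() >= coverage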
  10. The image processing apparatus according to claim 2, wherein the gaze area setting means includes:
    lane area periphery observation means for detecting a traveling lane area in which the host vehicle is traveling and a situation around the traveling lane area; and
    parked/stopped vehicle detection means for detecting a parked or stopped vehicle in at least one of the traveling lane area of the host vehicle detected by the lane area periphery observation means and an adjacent area outside the traveling lane area,
    and sets the periphery of a parked or stopped vehicle detected by the parked/stopped vehicle detection means as the gaze area.
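
For claim 10, once parked or stopped vehicles have been found in or beside the traveling lane, the gaze areas are simply placed around them. The margin below is an illustrative assumption; how the parked vehicles themselves are detected is left to the lane area periphery observation.

    def gaze_areas_around_parked_vehicles(vehicle_boxes, img_w, img_h, margin=40):
        # One gaze area per parked/stopped vehicle, enlarged by a margin so that
        # a pedestrian stepping out from behind the vehicle falls inside it.
        areas = []
        for (x0, y0, x1, y1) in vehicle_boxes:
            areas.append((max(0, x0 - margin), max(0, y0 - margin),
                          min(img_w, x1 + margin), min(img_h, y1 + margin)))
        return areas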
  11.   The image processing apparatus according to any one of claims 2 to 9, wherein the gaze area setting means sets, as the gaze area, the periphery of the obstacle area extracted by the obstacle area extracting means a predetermined time earlier, on the ground that an obstacle is highly likely to be present there.
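
Claim 11 reuses obstacle areas found a predetermined time earlier as the new gaze areas. The ring-buffer sketch below is one straightforward way to keep that history; the delay of five frames is an illustrative assumption.

    from collections import deque

    class ObstacleHistory:
        # Keeps the obstacle areas of the last few frames so that the areas
        # extracted a predetermined time ago can serve as current gaze areas.
        def __init__(self, delay_frames=5):
            self._buffer = deque(maxlen=delay_frames)

        def push(self, obstacle_areas):
            self._buffer.append(list(obstacle_areas))

        def gaze_areas(self):
            # Areas extracted delay_frames ago, once enough history exists.
            if len(self._buffer) == self._buffer.maxlen:
                return self._buffer[0]
            return []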
  12. The image processing apparatus according to any one of claims 2 to 11, wherein the image area dividing means includes:
    road surface area extraction means for extracting a road surface area; and
    pedestrian frequent line extraction means for extracting, based on the road surface area extracted by the road surface area extraction means, a position where pedestrians appear frequently as a pedestrian frequent line,
    and divides the vicinity of the pedestrian frequent line extracted by the pedestrian frequent line extraction means more finely than other areas.
  13. The image processing apparatus according to claim 12, further comprising vehicle information acquisition means for acquiring information of the host vehicle,
    wherein the pedestrian frequent line is set from a vehicle speed and a steering angle acquired by the vehicle information acquisition means and the road surface area extracted by the road surface area extraction means.
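
Claims 12 and 13 concentrate the division along a pedestrian frequent line derived from the road surface area (and, in claim 13, from vehicle speed and steering angle). The sketch below shows only the finer subdivision of divided areas crossed by such a line; the line itself and the subdivision factor are illustrative assumptions.

    def refine_areas_near_line(areas, line_points, fine_factor=2):
        # Split every divided area (top, bottom, left, right) crossed by the
        # pedestrian frequent line into fine_factor x fine_factor sub-areas.
        refined = []
        for (t, b, l, r) in areas:
            crossed = any(t <= y < b and l <= x < r for (x, y) in line_points)
            if not crossed:
                refined.append((t, b, l, r))
                continue
            sh, sw = max(1, (b - t) // fine_factor), max(1, (r - l) // fine_factor)
            for dy in range(fine_factor):
                for dx in range(fine_factor):
                    refined.append((t + dy * sh, min(b, t + (dy + 1) * sh),
                                    l + dx * sw, min(r, l + (dx + 1) * sw)))
        return refined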
  14. The image processing apparatus according to claim 2, further comprising:
    road surface area extraction means for extracting a road surface area; and
    feature point distance calculation means for calculating a distance between a road edge obtained from the road surface area extracted by the road surface area extraction means and a feature point extracted by the feature point extraction means,
    wherein the divided area scoring means changes the score of a divided area according to the distance, calculated by the feature point distance calculation means, between the road edge obtained by the road surface area extraction means and the feature point.
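
Claim 14 makes a divided area's score depend on how far each feature point is from the road edge. The weighting function below (points closer to the edge count more, on the intuition that pedestrians enter the road from its edge) is only an illustrative assumption.

    def edge_weighted_score(feature_points, area, road_edge_x):
        # Sum distance-weighted feature points inside one divided area
        # (area = (top, bottom, left, right); road_edge_x in pixels).
        t, b, l, r = area
        score = 0.0
        for (x, y) in feature_points:
            if t <= y < b and l <= x < r:
                dist = abs(x - road_edge_x)
                score += 1.0 / (1.0 + dist / 50.0)   # nearer the edge, larger weight
        return score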
  15. An image processing method comprising: acquiring a thermal image around a host vehicle; and
    setting a gaze area to be observed in detail in the thermal image according to at least one of the surrounding environment of the host vehicle and the state of the host vehicle.
JP2005176247A 2005-06-16 2005-06-16 Image processor and image processing method Pending JP2006350699A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2005176247A JP2006350699A (en) 2005-06-16 2005-06-16 Image processor and image processing method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2005176247A JP2006350699A (en) 2005-06-16 2005-06-16 Image processor and image processing method

Publications (1)

Publication Number Publication Date
JP2006350699A true JP2006350699A (en) 2006-12-28

Family

ID=37646467

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2005176247A Pending JP2006350699A (en) 2005-06-16 2005-06-16 Image processor and image processing method

Country Status (1)

Country Link
JP (1) JP2006350699A (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009220798A (en) * 2008-03-19 2009-10-01 Casio Comput Co Ltd Vehicular alarm device and vehicular alarm processing program
WO2011101988A1 (en) * 2010-02-22 2011-08-25 トヨタ自動車株式会社 Risk degree calculation device
CN102763146A (en) * 2010-02-22 2012-10-31 丰田自动车株式会社 Risk degree calculation device
JP5382198B2 (en) * 2010-02-22 2014-01-08 トヨタ自動車株式会社 Risk calculation device
US9135825B2 (en) 2010-02-22 2015-09-15 Toyota Jidosha Kabushiki Kaisha Risk degree calculation device
KR101251776B1 (en) * 2010-11-26 2013-04-05 현대자동차주식회사 A system controlling region of interest for vehicle sensor and method thereof
KR101264282B1 * 2010-12-13 2013-05-22 재단법인대구경북과학기술원 Vehicle detection method on a road using a region of interest
JP2012177675A (en) * 2011-02-25 2012-09-13 Guangzhou Sat Infrared Technology Co Ltd Road surface defect detection system and method
JP2016018494A (en) * 2014-07-10 2016-02-01 公立大学法人岩手県立大学 Track recognition device
WO2020031413A1 (en) * 2018-08-10 2020-02-13 株式会社Jvcケンウッド Recognition processing device, recognition processing method and recognition processing program

Similar Documents

Publication Publication Date Title
US9925939B2 (en) Pedestrian collision warning system
US10274598B2 (en) Navigation based on radar-cued visual imaging
JP6259928B2 (en) Lane data processing method, apparatus, storage medium and equipment
US20190251375A1 (en) Systems and methods for curb detection and pedestrian hazard assessment
US10043082B2 (en) Image processing method for detecting objects using relative motion
Yenikaya et al. Keeping the vehicle on the road: A survey on on-road lane detection systems
EP2728546B1 (en) Method and system for detecting object on a road
US9401028B2 (en) Method and system for video-based road characterization, lane detection and departure prevention
US20160014406A1 (en) Object detection apparatus, object detection method, object detection program, and device control system mountable to moveable apparatus
US8750567B2 (en) Road structure detection and tracking
US20140314279A1 (en) Clear path detection using an example-based approach
US9251708B2 (en) Forward collision warning trap and pedestrian advanced warning system
US10081308B2 (en) Image-based vehicle detection and distance measuring method and apparatus
US9536155B2 (en) Marking line detection system and marking line detection method of a distant road surface area
US8605947B2 (en) Method for detecting a clear path of travel for a vehicle enhanced by object detection
US20150294160A1 (en) Object detection apparatus, object detection method, object detection program and device control system for moveable apparatus
US9076046B2 (en) Lane recognition device
Kastrinaki et al. A survey of video processing techniques for traffic applications
Wu et al. Applying a functional neurofuzzy network to real-time lane detection and front-vehicle distance measurement
US8487991B2 (en) Clear path detection using a vanishing point
DE69736764T2 (en) Local positioning device and method therefor
Khammari et al. Vehicle detection combining gradient analysis and AdaBoost classification
JP3049603B2 (en) Stereoscopic image - object detection method
US7612800B2 (en) Image processing apparatus and method
EP1796043B1 (en) Object detection