WO2011065399A1 - Track recognition device, vehicle, track recognition method, and track recognition program - Google Patents
Track recognition device, vehicle, track recognition method, and track recognition program
- Publication number
- WO2011065399A1 (PCT/JP2010/070983)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- voting
- image
- cumulative
- time
- track
- Prior art date
Classifications
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/16—Anti-collision systems
- G08G1/167—Driving aids for lane monitoring, lane changing, e.g. blind spot detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
- G06T7/74—Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/75—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
- G06V10/753—Transform-based matching, e.g. Hough transform
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/588—Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/16—Anti-collision systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20048—Transform domain processing
- G06T2207/20061—Hough transform
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30204—Marker
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30248—Vehicle exterior or interior
- G06T2207/30252—Vehicle exterior; Vicinity of vehicle
- G06T2207/30256—Lane; Road marking
Definitions
- the present invention relates to the recognition of a track boundary represented by stud-type lane marks installed on a road surface.
- a track recognition device recognizes lane marks indicating lane boundaries provided on a road surface from the information of a visual sensor mounted on a vehicle, and thereby recognizes the relative positional relationship between the vehicle and the track.
- a "linear lane mark" is a continuous or broken line painted on the road surface in a color such as white or yellow.
- a "stud-type lane mark" is one in which reflective elements or the like are embedded along the track boundary.
- stud-type lane marks are also referred to as dot-like lane marks, cat's eyes, Botts' dots, road studs, and the like.
- Botts' dots are also written as Bott's dot(s) or Bott dot(s).
- Botts' dots are used particularly in California, North America.
- the term "stud-type lane mark" is a generic term including the examples above; "stud-type" does not limit the marks to a specific shape or the like.
- Patent Document 1 describes an example of a track recognition device aimed particularly at linear lane marks.
- the track recognition device described in Patent Document 1 includes an image input device, an edge enhancement device, a straight-line extraction processing device, a white-line determination processing device, and a white-line output device. The device selects, as feature points, pixels with large brightness changes in the image of the road surface obtained by the imaging means, extracts an approximate straight line by the Hough transform, and determines this straight line to be the edge of a white line. It then searches, starting from the last pixel in the vicinity of the straight line, in the image direction away from the vehicle, and determines further pixels to be white-line edges.
- a stud-type lane mark yields far fewer edge features than a linear lane mark, so the above method cannot recognize it well. It is therefore common to detect the estimated positions of stud-type lane marks using a filter that responds only to features resembling a stud-type lane mark in the image, and to recognize the track boundary line based on this estimated position information.
- Patent Document 2 uses, as a feature extraction filter, a template matching method that takes an image of a stud-type lane mark acquired in advance as a template and extracts regions of the input image that resemble it.
- Patent Document 3 creates a smooth image by compositing time-series feature images so that the stud-type lane marks become linear on the composite image; it then detects features with an edge-gradient filter, as for linear lane marks, and detects straight lines by applying the Hough transform to the feature coordinates, thereby detecting the lane position.
- the track recognition device described in Patent Document 1 cannot recognize stud-type lane marks well.
- detecting stud-type lane marks by matching against a template image, as in the technique described in Patent Document 2, has the problem that detection performance degrades greatly with noise, soiling of the actual marks, or minor changes in shape. Furthermore, since there are several types of stud-type lane marks, template images corresponding to all of them must be prepared, which increases both the amount of calculation and the memory consumption. In particular, for a stud-type lane mark with a square cross-section, the apparent shape changes with the positional relationship between the installation position and the camera, so detection by template matching is not realistic.
- Patent Document 2 does not disclose any method for distinguishing noise from stud-type lane marks.
- Patent Document 2 also proposes compositing time-series images, shifted in the traveling direction by a fraction of the installation interval of the stud-type lane marks, into a single image before applying the above detection; however, since no method for distinguishing noise from stud-type lane marks is disclosed, the above problem remains unsolved.
- Patent Document 3 needs to store and composite many past time-series input images (or feature images), and requires a large amount of calculation and memory. Further, to make the stud-type lane marks linear on the composite image, the change between the image frames used for compositing must be small. Consequently, as the moving speed of the vehicle increases, a camera with a higher frame rate is required to keep the per-frame image change small. When the vehicle moves at high speed, the stud-type lane marks on the composite image therefore become sparse and may fail to join into lines.
- a track recognition device, a vehicle, a track recognition method, and a track recognition program are provided.
- there is provided a track recognition apparatus comprising: a feature extraction unit that extracts candidate positions of a lane mark from a received input image; a cumulative voting unit that generates a cumulative voting feature image by cumulatively voting into the parameter space of an approximate curve or approximate straight line, with the vote values for the extracted candidate positions weighted according to elapsed time; and a track boundary determination unit that determines the track boundary position by extracting track boundary position candidates based on the generated cumulative voting feature image.
- there is provided a vehicle on which the track recognition device is mounted, wherein the track recognition device comprises an image output device that captures an image and outputs the captured image, and the input image received by the feature extraction unit is the image output by the image output device.
- there is provided a track recognition method in which candidate positions of a lane mark are extracted from a received input image, a cumulative voting feature image is generated by cumulatively voting into the parameter space of an approximate curve or approximate straight line with the vote values for the extracted candidate positions weighted according to elapsed time, and the track boundary position is determined by extracting track boundary position candidates based on the generated cumulative voting feature image.
- there is provided a track recognition program that causes a computer to realize: a feature extraction function of extracting candidate positions of a lane mark from a received input image; a cumulative voting function of generating a cumulative voting feature image by cumulatively voting into the parameter space of an approximate curve or approximate straight line, with the vote values for the extracted candidate positions weighted according to elapsed time; and a track boundary determination function of determining the track boundary position by extracting track boundary position candidates based on the generated cumulative voting feature image.
- since the estimated positions of the discretely placed lane marks are extracted as feature points using a feature image, and the straight line (or curve) that these feature points draw over time is extracted, the track boundary can be recognized with high robustness against noise.
- FIG. 5 is a block diagram schematically showing the configuration essential to the track recognition apparatus according to the first embodiment of the present invention.
- FIGS. 6 and 7 are block diagrams schematically showing the configurations essential to the track recognition apparatus according to the first embodiment of the present invention when recognition is performed on stud-type lane marks and on linear lane marks, respectively.
- FIG. 8 schematically shows an example of an input image to the track recognition device according to the first embodiment of the present invention; FIG. 9 schematically shows an example of the feature image generated from it.
- FIG. 4 is a flowchart schematically showing a specific example of the time-series cumulative voting calculation within the operation of the electronic control device; corresponding flowcharts are also given for the specific examples of the third embodiment (Embodiment 3: Example 1 and Example 2).
- FIG. 1 is a block diagram schematically showing the configuration of a vehicle including a track recognition device 2 according to the first embodiment of the present invention.
- although FIG. 1 depicts an automobile as the object on which the track recognition device 2 is mounted, this is merely an example.
- a transport vehicle traveling in a factory, a mobile robot, a motorcycle, or the like may equally be the object on which the track recognition device 2 is mounted.
- the term "vehicle" in the present specification and the present invention includes these objects.
- the track recognition device 2 is mounted on a vehicle 1.
- the track recognition device 2 has an electronic control device 10 and an image output device 20.
- lane marks are the recognition targets in the present embodiment.
- a lane mark is a mark indicating a lane boundary provided on the road surface.
- lane marks include "linear lane marks", such as continuous or broken lines painted on the road surface in a color such as white or yellow, and "stud-type lane marks", which are three-dimensional structures installed on the road surface and placed along the boundary in a discrete pattern. Examples of cross-sections of the three-dimensional structure of a stud-type lane mark are an ellipse, a square, a rectangle, and a trapezoid.
- stud-type lane marks are also called road studs (raised pavement markers), Bott's dot(s), Bott dot(s), Botts dot(s), Botts' dot(s), cat's eyes, and so on.
- the term "stud-type lane mark" is a generic term including the examples above; "stud-type" does not limit the marks to a specific shape or the like. In the following description they are simply referred to as lane marks; when simply "lane mark" is written, it refers to a linear lane mark, a stud-type lane mark, or any other lane mark.
- the image output device 20 is a device that outputs at least image information to the electronic control device 10.
- in the present embodiment the image output device 20 is exemplified by an imaging device.
- the imaging device captures images in real time and is mounted on the vehicle 1 so as to capture the track on which the vehicle travels.
- a video camera that outputs in the National Television System Committee (NTSC) format can be used as the imaging device; any image format other than NTSC may also be used.
- besides the imaging device, the image output device 20 can be, for example, an image capture device that reads image information stored on a storage medium and converts it into an NTSC output or the like.
- the electronic control device 10 is an information processing device that performs the information processing for recognizing the track.
- FIG. 2 is a block diagram schematically showing the configuration of the track recognition apparatus according to the first embodiment of the present invention.
- each part implemented in the electronic control device 10 is realized in hardware as an individual device, as part of a device, or as an electronic circuit.
- the electronic control device 10 comprises an image input reception unit 10-1, a feature extraction unit 10-2, a time-series cumulative voting unit 10-3, a voting feature image storage unit 10-4, and a track boundary determination unit 10-5.
- the image input reception unit 10-1 acquires an image from the image output device 20. At acquisition time, the image input reception unit 10-1 may adjust the acquired image format, for example by clipping the necessary image area, adjusting resolution and size, or extracting the odd (or even) field from an NTSC-format image.
- the feature extraction unit 10-2 applies, at each position of the image acquired by the image input reception unit 10-1, a filtering process that takes the surrounding pixels as input and outputs an integer or real value for each coordinate according to the probability that a lane mark exists there, producing a feature image.
- any filter can be used here, such as a pattern correlation filter based on a template image acquired in advance, or an annular filter that computes the difference or ratio of luminance statistics between a central region and its surrounding region (a sketch of the latter follows).
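As one concrete illustration of such a filter, the following is a minimal center-surround sketch in Python/NumPy. The window sizes, the use of local means as the "luminance statistics", and the function name are illustrative assumptions, not details taken from the patent.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def stud_feature_image(gray, inner=3, outer=9):
    """Annular (center-surround) filter sketch: responds where a small
    bright blob, such as a stud-type lane mark, sits on a darker road
    surface. The inner/outer window sizes are illustrative assumptions."""
    g = gray.astype(np.float32)
    center = uniform_filter(g, size=inner)    # mean over the central region
    surround = uniform_filter(g, size=outer)  # mean over center + surround
    # Difference of local means: positive where the center outshines its
    # surroundings, near zero on uniform road surface.
    return np.clip(center - surround, 0.0, None)
```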
- the time-series cumulative voting unit 10-3 extracts, from the feature image generated by the feature extraction unit 10-2, feature points that are candidates for lane mark positions, and cumulatively votes into the parameter space of an approximate curve or approximate straight line, weighting each vote by its elapsed time.
- by performing cumulative voting with a weight per elapsed time in this way, a cumulative voting feature image containing the vote values of a plurality of past times is generated.
- feature points may be extracted by selecting each pixel of a suitably thresholded feature image, by selecting the maximum-value pixel of each small region formed by adjacent pixels at or above the threshold, or by selecting the centroid of such a small region.
- for the voting, a method such as the Hough transform (P.V.C. Hough, "Method and Means for Recognizing Complex Patterns," U.S. Patent No. 3069654, 1962) or an extended Hough transform can be used.
- the parameter space (ρ, θ) in which vote values are calculated and to which the vote values of all extracted feature points are added constitutes the voting feature image.
- the calculated vote values may be weighted according to the magnitude of the feature extraction output at each feature point (see the sketch below).
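A minimal sketch of this voting step, assuming the standard straight-line Hough parameterization ρ = x·cosθ + y·sinθ and taking the per-point weights from the feature extraction output; the bin counts and value ranges are illustrative assumptions.

```python
import numpy as np

def hough_vote(points, weights, rho_bins=128, theta_bins=128, rho_max=400.0):
    """Vote weighted feature points (x, y) into a (rho, theta) voting
    feature image: each point votes for every line
    rho = x*cos(theta) + y*sin(theta) passing through it."""
    votes = np.zeros((rho_bins, theta_bins), dtype=np.float32)
    thetas = np.linspace(-np.pi / 2, np.pi / 2, theta_bins, endpoint=False)
    cols = np.arange(theta_bins)
    for (x, y), w in zip(points, weights):
        rho = x * np.cos(thetas) + y * np.sin(thetas)
        # Map rho in [-rho_max, rho_max] onto row indices.
        rows = np.round((rho + rho_max) / (2 * rho_max) * (rho_bins - 1)).astype(int)
        ok = (rows >= 0) & (rows < rho_bins)
        votes[rows[ok], cols[ok]] += w  # weight = feature extraction output
    return votes
```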
- further, the time-series cumulative voting unit 10-3 reads the voting feature images generated at past times from the voting feature image storage unit 10-4, and generates a cumulative voting feature image accumulated in time series by additively voting the vote values of the current time (denoted t) onto the voting feature images generated from the features of a certain past period.
- the image input reception unit 10-1 and the voting feature image storage unit 10-4 need not be provided. That is, the image to be recognized may be input from an external image input reception unit; likewise, the voting feature images generated at past times may be stored in, and read from, an external voting feature image storage unit. Therefore, as shown in FIG. 5, the electronic control device 10 according to the present embodiment may consist of the feature extraction unit 10-2, the time-series cumulative voting unit 10-3, and the track boundary determination unit 10-5. Furthermore, although the recognition target has been described above simply as a lane mark, it may be a stud-type lane mark or a linear lane mark.
- FIGS. 6 and 7 show block diagrams of the electronic control device when the lane marks to be recognized are stud-type lane marks and linear lane marks, respectively.
- in FIG. 6, the feature extraction unit 10-2 of FIG. 5 is replaced with a stud-type lane mark feature extraction unit 10-6 that performs feature extraction of stud-type lane marks.
- in FIG. 7, the feature extraction unit 10-2 of FIG. 5 is replaced with a linear lane mark feature extraction unit 10-7 that performs feature extraction of linear lane marks.
- the weight αi is a coefficient representing the contribution of the voting feature image of each time to the cumulative voting feature image, and can be defined, for example, as a forgetting weight that decreases with time, as follows.
- with k a real number of 1.0 or more, the weight is multiplied by 1/k each time the index i increases by one, i.e. αi = (1/k)^i; the cumulative vote value over the past T times is then s(ρ, θ, t) = Σ_{i=0..T} αi · f(ρ, θ, t−i) (a sketch follows).
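A minimal sketch of this windowed weighted sum (Embodiment 1: Example 1), assuming the per-time voting feature images are kept in a list with the current frame first; the value of k and the list layout are illustrative assumptions.

```python
def cumulative_vote_window(vote_history, k=2.0):
    """Embodiment 1, Example 1 sketch: keep the voting images
    f(.,.,t-i) for i = 0..T and form
    s = sum_i alpha_i * f(t-i) with alpha_i = (1/k)**i."""
    # vote_history[0] is the current frame, vote_history[i] is i steps old.
    return sum((1.0 / k) ** i * f for i, f in enumerate(vote_history))
```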
- in this case, the voting feature image of each time is stored in the voting feature image storage unit 10-4; voting feature images older than time (t−T) may be retained or deleted.
- the data stored here may be, for each time, the array (x, y, z; t) of feature points before the vote values are calculated (where x, y are the coordinates of the feature point in the input image, z is the feature extraction output, and t is the time).
- alternatively, it may be a voting feature image in which the vote values of each time are arranged over (ρ, θ), or array information of only the voting feature points (ρ, θ, f; t) consisting of voting feature image coordinates and vote values. In any case, the data size can be made smaller than storing the feature images themselves extracted by the feature extraction unit.
- because the voting feature image is a space formed by the parameters of the approximate straight line (or approximate curve) passing through the feature points, its domain can be set in advance to the range that the relative position between the track boundary and the vehicle can take; the amount of information can thereby be compressed below that of the input image (or the feature image).
- for the same reason, the array information of only the voting feature points (ρ, θ, f; t) is smaller than the data size of the voting feature image.
- as a second specific example (Embodiment 1: Example 2), the cumulative voting feature image generated at the previous time (t−1) is read out, all values on it are weighted by a forgetting factor (λ, 0 ≤ λ ≤ 1), the vote value into the parameter space (ρ, θ) is calculated for each feature point extracted at the current time (t), the vote values are additively voted onto the read-out image, and the result is stored as the cumulative voting feature image at time (t).
- with f(ρ, θ, t) the vote value calculated at time (t) for a coordinate (ρ, θ) on the voting feature image, and λ (0 ≤ λ ≤ 1) the forgetting factor, the cumulative vote value s(ρ, θ, t), the output on the generated cumulative voting feature image, can be calculated using the following equation: s(ρ, θ, t) = f(ρ, θ, t) + λ · s(ρ, θ, t−1) (a sketch follows).
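The same recurrence as a one-step update (Embodiment 1: Example 2); only the single cumulative image needs to be stored between cycles. The value of λ is an illustrative assumption.

```python
def cumulative_vote_update(s_prev, f_now, lam=0.8):
    """Embodiment 1, Example 2 sketch: decay the stored cumulative image
    by the forgetting factor lambda, then add the current frame's votes:
    s(rho, theta, t) = f(rho, theta, t) + lambda * s(rho, theta, t-1)."""
    return f_now + lam * s_prev
```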
- in this case, the data stored in the voting feature image storage unit 10-4 may be only the single cumulative voting feature image in which the time-series vote values, updated at every time, are accumulated with the weight.
- even the method of retaining the features of a fixed past period as in (Embodiment 1: Example 1) occupies less memory than storing the input image (or feature image) of every time over the time-series interval, as in the technique described in Patent Document 3; the method of (Embodiment 1: Example 2) reduces the stored data size further still.
- moreover, the amount of calculation is reduced compared with computing the weighted sum as in (Embodiment 1: Example 1).
- the cumulative voting feature image stored here may be a cumulative voting feature image whose vote values are arranged over (ρ, θ), or array information of only the voting feature points (ρ, θ, f; t) consisting of voting feature image coordinates and vote values; the latter can be held at a smaller data size than the former.
- the voting feature image storage unit 10-4 stores the cumulative voting feature image generated by the time-series cumulative voting unit 10-3 and provides it to the time-series cumulative voting unit 10-3 in the next cycle.
- the track boundary determination unit 10-5 determines the presence or absence and the position of the track boundary based on the vote values output by the time-series cumulative voting unit 10-3.
- track boundary position candidates can be taken as the points whose output in the cumulative voting feature image is at or above a threshold; if no value reaches the threshold, the result that the track boundary cannot be determined can be output.
- it is also possible to set in advance the range that the relative position between the track boundary and the vehicle can take, and to narrow the candidates by selecting only those within this range. Further, an evaluation value representing the likelihood of being a track boundary position can be set for each candidate, made higher as the output of the cumulative voting feature image increases, and the candidate with the highest evaluation value determined to be the boundary position (see the sketch below).
- the evaluation value can also be varied according to the positional relationship with past track boundary positions; for example, an evaluation value weighted by the distance to the temporally closest past boundary position can be set so that candidates nearer to it are rated higher.
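A minimal sketch of this determination step, assuming a precomputed boolean mask over (ρ, θ) that encodes the geometrically plausible range, and using the raw vote value itself as the evaluation value; the names and defaults are illustrative assumptions.

```python
import numpy as np

def find_boundaries(s, threshold, valid_mask, max_candidates=2):
    """Pick track-boundary candidates as the strongest peaks of the
    cumulative voting image s that exceed a threshold and lie inside
    the plausible (rho, theta) region given by valid_mask."""
    masked = np.where(valid_mask, s, -np.inf)
    order = np.argsort(masked, axis=None)[::-1][:max_candidates]
    peaks = [np.unravel_index(i, s.shape) for i in order
             if masked.flat[i] >= threshold]
    return peaks  # empty list => "track boundary cannot be determined"
```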
- FIG. 3 is a flowchart schematically showing the operation of the electronic control device 10 in the track recognition apparatus 2 according to the first embodiment of the present invention.
- the operations shown in FIGS. 3 and 4 are only specific examples; as described above, this embodiment admits various operations and calculation methods, and operations modified from the specific examples of FIGS. 3 and 4 can also be realized.
- first, the image input reception unit 10-1 of the electronic control device 10 acquires the image to be recognized from the image output device 20 (step S100).
- next, the feature extraction unit 10-2 performs feature extraction (feature image creation) by applying, at each position of the acquired image, a filter process that outputs a response value according to the probability that a lane mark exists there (step S200).
- next, the time-series cumulative voting unit 10-3 extracts feature points from the feature image and calculates the vote value of each feature point into the voting feature image; it then generates a time-series cumulative voting feature image by additively voting these values onto the voting feature image stored in the voting feature image storage unit 10-4, on which the weighted vote values from past cycles have accumulated (step S300).
- finally, the track boundary determination unit 10-5 determines the presence or absence and the position of the track boundary line based on the vote values in the time-series cumulative voting feature image (step S400).
- FIG. 4 is a flowchart schematically showing a specific example of the time-series cumulative voting calculation (step S300 in FIG. 3) within the operation of the electronic control device 10 in the track recognition apparatus according to the first embodiment of the present invention.
- in the time-series cumulative voting calculation (step S300 in FIG. 3), the cumulative voting feature image generated by the time-series cumulative voting unit 10-3 at the previous time (denoted t−1) is first read out from the voting feature image storage unit 10-4 (step S301).
- next, the time-series cumulative voting unit 10-3 weights all values on the voting feature image by the forgetting factor (λ, 0 ≤ λ ≤ 1) (step S302).
- next, the time-series cumulative voting unit 10-3 calculates the vote value into the parameter space (ρ, θ) for each feature point extracted at the current time (t) (step S303).
- next, the time-series cumulative voting unit 10-3 additively votes the calculated vote values of time (t) onto the read-out voting feature image (step S304).
- finally, the time-series cumulative voting unit 10-3 stores the generated cumulative voting feature image of time (t) in the voting feature image storage unit 10-4 (step S305).
- the time-series cumulative voting calculation (step S300 in FIG. 3) is thereby realized.
- FIGS. 8 to 12 schematically show examples of the results obtained by the operation of the first embodiment described above; here the lane marks are stud-type lane marks.
- FIG. 8 schematically shows, as an example of an input image, an image acquired by an imaging device mounted so that the road surface on which stud-type lane marks are installed lies ahead of the vehicle; stud-type lane marks 300 are installed on the road surface 200.
- FIG. 9 schematically shows the feature image generated from the input image of FIG. 8 by the feature extraction unit 10-2 and the feature points output on it.
- FIG. 10 schematically shows the voting feature image generated by voting the feature points of FIG. 9.
- FIG. 11 schematically shows the cumulative voting feature image generated by the time-series cumulative voting unit 10-3 chronologically accumulating voting feature images such as FIG. 10 generated at each time, together with the point indicating the output peak.
- FIG. 12 schematically shows, superimposed on the input image, the track boundary position extracted by the track boundary determination unit 10-5 from the output peak value of the cumulative voting feature image of FIG. 11.
- although the above describes a track recognition device, method, and program for recognizing a track boundary expressed by lane marks, the electronic control device of the track recognition device may consist only of the electronic control device for stud-type lane marks, or may be configured in combination with a general electronic control device for linear lane marks.
- as a general electronic control device for linear lane marks, for example, the one described in Patent Document 1 can be used; since such a device is a technology understood by those skilled in the art, its detailed description is omitted here.
- on an actual road, one of the left and right sides of a lane may carry stud-type lane marks while the other carries linear lane marks; the lane mark types on the left and right of the vehicle may change with a lane change; a single road may switch from linear lane marks to stud-type lane marks; and linear lane marks may be used on the roads of one region while another region uses stud-type lane marks, so that the recognition units shown below may be used selectively depending on the area.
- it is therefore conceivable to realize an electronic control device 30 in which a stud-type lane mark recognition unit 30-6 and a linear lane mark recognition unit 30-7 are arranged in combination between an image input reception unit 30-1 and a track boundary determination unit 30-5.
- the image input reception unit 30-1 has the same function as the image input reception unit 10-1 in FIG. 2.
- the track boundary determination unit 30-5 has the same function as the track boundary determination unit 10-5 in FIG. 2.
- the track boundary determination unit 30-5 extracts track boundary position candidates from the outputs of both the stud-type lane mark recognition unit 30-6 and the linear lane mark recognition unit 30-7, in the same manner as in the first embodiment. It may then judge without distinguishing between the candidates from the two outputs, exactly as the track boundary determination unit of the first embodiment does, or it may weight the evaluation values so that candidates from one of the outputs are preferentially selected.
- the stud-type lane mark recognition unit 30-6 includes a stud-type lane mark feature extraction unit 30-61, a time-series cumulative voting unit 30-62, and a voting feature image storage unit 30-63.
- the stud-type lane mark feature extraction unit 30-61 has the same function as the feature extraction unit 10-2 in FIG. 2; the time-series cumulative voting unit 30-62 has the same function as the time-series cumulative voting unit 10-3 in FIG. 2; and the voting feature image storage unit 30-63 has the same function as the voting feature image storage unit 10-4 in FIG. 2.
- the linear lane mark recognition unit 30-7 has a linear lane mark feature extraction unit 30-71, a time-series cumulative voting unit 30-72, and a voting feature image storage unit 30-73.
- the linear lane mark recognition unit 30-7 is the stud-type lane mark recognition unit 30-6 with only its feature extraction function replaced by the linear lane mark feature extraction unit 30-71, which extracts the features of linear lane marks.
- gradient feature filters such as Prewitt and Sobel can be used as the feature extraction filter of the linear lane mark feature extraction unit 30-71 (a sketch follows).
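A minimal sketch of such a gradient feature image using the Sobel operator named in the text; treating the gradient magnitude as the feature value is an illustrative choice.

```python
import numpy as np
from scipy.ndimage import sobel

def linear_feature_image(gray):
    """Gradient-magnitude features for painted (linear) lane marks,
    using the Sobel filter mentioned in the text."""
    g = gray.astype(np.float32)
    gx = sobel(g, axis=1)  # horizontal gradient
    gy = sobel(g, axis=0)  # vertical gradient
    return np.hypot(gx, gy)
```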
- the time-series cumulative voting unit 30-72 has the same function as the time-series cumulative voting unit 10-3 in FIG. 2, and the voting feature image storage unit 30-73 has the same function as the voting feature image storage unit 10-4 in FIG. 2.
- although the voting feature image storage unit 30-63 and the voting feature image storage unit 30-73 are illustrated separately, the two can also be realized by a single storage device.
- the second embodiment thus makes it possible to determine the track boundary while also taking the features of linear lane marks into account.
- next, Embodiment 3, an embodiment of the present invention, will be described in detail.
- the third embodiment is a configuration (FIG. 16) obtained by adding a track boundary position storage unit 10-7 to the configuration of the first embodiment (FIG. 2).
- the configuration differences between the third embodiment and the first are the presence of the track boundary position storage unit 10-7 and the replacement of the time-series cumulative voting unit 10-3 by a time-series cumulative voting unit 10-6. The devices and functions other than these two are therefore described with the same reference numerals: the image output device 20, the image input reception unit 10-1, the feature extraction unit 10-2, the voting feature image storage unit 10-4, and the track boundary determination unit 10-5 have the same functions as the correspondingly numbered parts in FIG. 2.
- the track boundary position storage unit 10-7 added in the third embodiment stores the track boundary position detected in each frame over a predetermined time-series interval and provides this information to the time-series cumulative voting unit 10-6.
- the time-series cumulative voting unit 10-6 of the third embodiment appropriately estimates, from the time-series change of the past track boundary positions stored in the track boundary position storage unit 10-7, the transition of the track boundary position relative to the vehicle (hereinafter the "lateral position"). It then performs a correction based on the estimated transition amount on the cumulative voting feature image of the previous time (hereinafter "lateral position correction"), and, as in the first and second embodiments, generates a cumulative voting feature image by additively voting the vote values of the current time.
- the third embodiment can thereby detect the track boundary position with high accuracy even when the relative lateral movement speed of the track boundary position with respect to the vehicle (hereinafter the "lateral velocity") cannot be ignored.
- two specific examples of the time-series cumulative voting unit 10-6 of the third embodiment are shown below as (Embodiment 3: Example 1) and (Embodiment 3: Example 2).
- in (Embodiment 3: Example 1), the lateral velocity is first estimated from the time-series change of the past track boundary positions stored in the track boundary position storage unit 10-7.
- as a method of estimating the lateral velocity, it is possible, for example, to approximate the amount of change by linearly fitting the track boundary positions (lateral positions) relative to the vehicle over a predetermined past interval (a sketch follows).
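A minimal sketch of this estimation, fitting a straight line to the recent history of detected lateral positions; using np.polyfit for the linear fit is an illustrative choice.

```python
import numpy as np

def estimate_lateral_velocity(times, lateral_positions):
    """Approximate the lateral velocity VL as the slope of a linear fit
    to the boundary's lateral position over a recent past interval."""
    slope, _intercept = np.polyfit(times, lateral_positions, deg=1)
    return slope  # lateral-position units per unit time
```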
- next, the lateral position deviation between the time steps is corrected when accumulating the voting feature images.
- specifically, given the lateral velocity estimate (VL) and the difference ((t)−(t−1)) between the acquisition time (t−1) of the past voting feature image used to generate the cumulative voting feature image and the current time (t), the pixel corresponding to −(VL) × ((t)−(t−1)) is taken as the new origin of the lateral position coordinates of the past cumulative voting feature image (s(t−1)).
- FIG. 17 is a conceptual diagram schematically showing one implementation example of this lateral position correction.
- in implementation, each pixel value of the past cumulative voting feature image (s(t−1)) may actually be moved in memory, or the correction may be realized by changing the pointer variables used to reference the memory area of each pixel (a sketch follows).
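A minimal sketch of the shift itself, assuming one axis of the stored image maps to the lateral position and that one lateral unit corresponds to px_per_unit pixels; note that np.roll wraps values around at the border, which a real implementation would instead zero-fill.

```python
import numpy as np

def shift_lateral(s_prev, vl, dt, px_per_unit=1.0, axis=0):
    """Shift the previous cumulative voting image along its assumed
    lateral-position axis by -VL * dt so that it aligns with the
    vehicle's position at the current time."""
    shift = int(round(-vl * dt * px_per_unit))
    return np.roll(s_prev, shift, axis=axis)  # wrap-around: illustration only
```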
- the above description, corresponding to the specific example (Embodiment 1: Example 2), performs the lateral position correction only on the cumulative voting feature image (s(t−1)) generated at the previous time; corresponding to (Embodiment 1: Example 1), the same processing can instead be applied individually to the voting feature images obtained at each time of the predetermined past interval ((t−T) to (t−1)).
- FIG. 18 is a conceptual diagram schematically showing one implementation example of the lateral position correction corresponding to (Embodiment 1: Example 1).
- the lateral velocity estimation error can be absorbed by multiplying, in the lateral coordinate direction around the shifted lateral coordinate, a distribution function that models the uncertainty of the lateral velocity estimate.
- as the distribution function, it is possible to use, for example, a Gaussian distribution or a trapezoidal distribution whose dispersion is determined by the change in lateral velocity computed from past lateral positions (hereinafter the "lateral acceleration").
- weighting by the same forgetting factor as in the first and second embodiments may be folded into the coefficient of the distribution function, so that the weighting and the error-distribution processing are performed together (a sketch follows).
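A minimal sketch of this error-absorbing spread, blurring the shifted image along the assumed lateral axis with a Gaussian whose σ stands in for the velocity-estimate uncertainty, and folding the forgetting factor into the overall weight as the text allows.

```python
from scipy.ndimage import gaussian_filter1d

def spread_lateral(s_shifted, sigma, lam=0.8, axis=0):
    """Model the lateral-velocity estimation error by a Gaussian spread
    along the lateral axis; lam applies the forgetting factor at the
    same time."""
    return lam * gaussian_filter1d(s_shifted, sigma=sigma, axis=axis)
```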
- finally, the cumulative voting feature image at the current time (t) is generated by additively voting the vote values of the current time.
- in (Embodiment 3: Example 2), the transitionable range of the lateral position is first calculated from the past track boundary positions stored in the track boundary position storage unit 10-7 and their time-series change.
- as a concrete method, the amount of transition that can occur from the lateral position of the previous time to the current time, assuming the maximum lateral velocity defined by the application's specification, may be taken as the transitionable range; alternatively, as in (Embodiment 3: Example 1), a Gaussian or trapezoidal distribution determined from the estimated lateral velocity and lateral acceleration may be assumed, and a range holding a certain probability taken as the transitionable range.
- next, one value within the range is selected on the assumption that it is the correct one, the origin of the lateral position coordinates of the past cumulative voting feature image (s(t−1)) is moved accordingly, and the vote values of the current time are then additively voted.
- the above, corresponding to (Embodiment 1: Example 2), performs the lateral position correction only on the cumulative voting feature image (s(t−1)) generated at the previous time; corresponding to (Embodiment 1: Example 1), it can also be realized by individually correcting the voting feature images (f(t−1), f(t−2), ..., f(t−T)) obtained at each time of the predetermined past interval ((t−T) to (t−1)).
- for each generated cumulative voting feature image, an evaluation value (Si) indicating how well the expected condition is satisfied is calculated and stored.
- as the evaluation value, the maximum vote value in each generated cumulative voting feature image may be used; alternatively, a higher evaluation value may be given as the average dispersion of the peaks of local maxima exceeding a certain threshold becomes smaller (a sketch follows).
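A minimal sketch of this hypothesis search (Embodiment 3: Example 2), trying every lateral shift in the transitionable range and scoring each resulting cumulative image by its largest vote value; the shift mechanics and scoring follow the simple choices sketched above and are illustrative assumptions.

```python
import numpy as np

def best_shift_update(s_prev, f_now, candidate_shifts, lam=0.8, axis=0):
    """Try each lateral shift in the transitionable range, build the
    corresponding cumulative image, and keep the hypothesis whose
    evaluation value (here: the peak vote) is largest."""
    best = None
    for d in candidate_shifts:
        s = f_now + lam * np.roll(s_prev, d, axis=axis)
        score = s.max()  # evaluation value S_i: maximum vote value
        if best is None or score > best[0]:
            best = (score, d, s)
    return best  # (evaluation value, chosen shift, cumulative image)
```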
- note that (Embodiment 3: Example 2) may be used to detect the lane boundary position over a certain predetermined initial period, and the resulting time series of lane boundary positions may then be used as the initial values for the lateral velocity estimation of (Embodiment 3: Example 1).
- as an example of the operation corresponding to (Embodiment 3: Example 1), the flow is as follows.
- first, the time-series cumulative voting unit 10-6 reads the cumulative voting feature image generated at the previous time (t−1) from the voting feature image storage unit 10-4 (step S311).
- next, the time-series cumulative voting unit 10-6 calculates the lateral velocity estimate at the current time (step S312).
- next, the time-series cumulative voting unit 10-6 corrects the lateral position coordinates of the cumulative voting feature image of the previous time based on the lateral velocity estimate (step S313).
- next, the time-series cumulative voting unit 10-6 performs the weighted distribution processing of the previous time's cumulative voting feature image in the lateral position coordinate direction (step S314).
- next, the time-series cumulative voting unit 10-6 calculates the vote value into the parameter space for each feature point extracted at the current time (t) (step S315).
- next, the time-series cumulative voting unit 10-6 additively votes the calculated vote values of time (t) onto the corrected and weighted-distributed voting feature image (step S316).
- finally, the time-series cumulative voting unit 10-6 stores the generated cumulative voting feature image of time (t) in the voting feature image storage unit 10-4 (step S317).
- the time-series cumulative voting calculation (step S300 in FIG. 3) is thereby realized.
- as an example of the operation corresponding to (Embodiment 3: Example 2), first, the time-series cumulative voting unit 10-6 reads the cumulative voting feature image generated at the previous time from the voting feature image storage unit 10-4 (step S321).
- next, the time-series cumulative voting unit 10-6 calculates the range within which the lateral position can have transitioned from the previous time (t−1) to the current time (t) (step S322).
- next, the time-series cumulative voting unit 10-6 selects one lateral position transition amount from the transitionable range and corrects the lateral position coordinates accordingly (step S323).
- next, the time-series cumulative voting unit 10-6 calculates the vote value into the parameter space for each feature point extracted at the current time (t) and generates a cumulative voting feature image (step S324).
- next, the time-series cumulative voting unit 10-6 calculates the evaluation value of the generated cumulative feature image (step S325).
- next, the time-series cumulative voting unit 10-6 checks whether the generation of the cumulative feature image and the calculation of the evaluation value have been performed over the whole transitionable range (step S326).
- if they have been performed over the whole range (Yes in step S326), the process proceeds to the next step, S327; if not (No in step S326), the operations of steps S323 to S326 are repeated.
- finally, the time-series cumulative voting unit 10-6 selects and stores the cumulative voting feature image with the largest evaluation value (step S327).
- the time-series cumulative voting calculation (step S300 in FIG. 3) is thereby realized.
- an effect obtained by the embodiments of the present invention is that, by extracting the estimated positions of the discretely placed lane marks as feature points from the time-series input images and extracting the straight line (or curve) that these feature points draw over time, a track boundary can be extracted even from lane mark images that cannot be distinguished from noise in the image of any single time.
- further, because the voting feature image is a space formed by the parameters of the approximate straight line (or approximate curve) passing through the feature points, its domain can be set in advance to the range that the relative position between the track boundary and the vehicle can take, so the amount of information can be compressed below that of the input image (or the feature image).
- the embodiments of the present invention can also be configured to obtain their effect while storing only the cumulative voting feature image (or the array of voting feature points) generated at the immediately preceding time; the size of the stored data can therefore be greatly reduced compared with storing the input image (or feature image) of every time over the time-series interval.
- even though features detected from stud-type lane marks are temporally and spatially discrete, and even when the relative lateral velocity between the vehicle and the lane boundary position cannot be ignored, the lane boundary position can be detected with high accuracy.
- in the embodiments described above, the track recognition apparatus and the track recognition method are realized by hardware.
- however, the present invention can also be realized by executing a program on a computer. That is, the present invention also relates to a computer, a program, and a computer-readable recording medium on which the program is recorded; it can be realized by reading, from a computer-readable recording medium, a program that causes a computer to function as the track recognition device or to execute the track recognition method, and executing it.
- when the present invention is executed by a computer using a program, the computer performs, based on the predetermined program and the information stored in the database, the information processing that recognizes the track in response to the image information from the image output device 20. Here, "track" refers to a lane mark or the track boundary it indicates.
- FIG. 21 is a diagram showing a configuration in which the functions of the electronic control device are executed by a computer.
- a computer 100 functioning as the electronic control device includes a central processing unit 11, a first storage device 12, a second storage device 13, and an interface 14.
- in the drawing, the central processing unit 11 is labeled "CPU 11", the first storage device 12 "Mem 12", the second storage device 13 "DB 13", and the interface 14 "I/F 14".
- although FIG. 21 shows the first storage device 12 and the second storage device 13 divided in two for convenience, they may be realized as a single storage device.
- the interface 14 is a device that mediates the exchange of information between the central processing unit 11 and the image output device 20.
- the first storage device 12 stores temporary data and is electrically connected to the central processing unit 11.
- the second storage device 13 mainly stores a database and is electrically connected to the central processing unit 11. Although the first storage device 12 and the second storage device 13 are built into the computer 100 in FIG. 21, they may also be implemented as external storage devices.
- the central processing unit 11 is a device that performs information processing and is electrically connected to the interface 14, the first storage device 12, and the second storage device 13.
- by executing a program, the central processing unit 11 performs information processing based on the image information input from the image output device 20 through the interface 14 and the information stored in the first storage device 12 and the second storage device 13.
- the computer 100 can thus realize the various functions shown in FIG. 2 by executing the software program in the central processing unit 11.
- a track recognition apparatus comprising: a feature extraction unit that extracts candidate positions of a lane mark from a received input image; a cumulative voting unit that generates a cumulative voting feature image by cumulatively voting into the parameter space of an approximate curve or approximate straight line, with the vote values for the extracted candidate positions weighted according to elapsed time; and a track boundary determination unit that determines the track boundary position by extracting track boundary position candidates based on the generated cumulative voting feature image.
- the track recognition apparatus, wherein the cumulative voting unit generates the cumulative voting feature image at the current time by adding the votes of the candidate positions extracted at the current time to a voting feature image obtained by weighting all values on the cumulative voting feature image generated at the previous time with a coefficient of 0 or more and 1 or less that represents a forgetting effect with the passage of time.
- the track recognition apparatus, wherein the feature extraction unit includes a first feature extraction unit that extracts the candidate positions of stud-type lane marks from the received input image; the cumulative voting unit includes a first cumulative voting unit that generates the cumulative voting feature image for the stud-type lane marks from the extracted candidate positions; and the track boundary determination unit determines the track boundary position by extracting the track boundary position candidates based on the cumulative voting feature image for the stud-type lane marks.
- the track recognition apparatus, wherein the feature extraction unit includes a second feature extraction unit that extracts the candidate positions of linear lane marks from the received input image; the cumulative voting unit includes a second cumulative voting unit that generates the cumulative voting feature image for the linear lane marks from the extracted candidate positions; and the track boundary determination unit determines the track boundary position by extracting the track boundary position candidates based on the cumulative voting feature image for the linear lane marks.
- the track recognition apparatus, wherein the cumulative voting unit estimates the lateral velocity at the current time based on information about the time-series change of track boundary positions detected in the past, corrects, based on the estimated lateral velocity, the deviation with respect to the lateral position at the current time for either the voting feature images of a predetermined past interval or the cumulative voting feature image of the time preceding the current time, and generates the cumulative voting feature image of the current time by accumulating the voting feature values of the current time onto the corrected voting feature image.
- the track recognition apparatus, wherein the cumulative voting unit estimates the range in which the lateral position at the current time can exist based on information about the time-series change of track boundary positions detected in the past, generates cumulative voting feature images of the current time based on all estimated lateral positions in the range, and selects the most appropriate cumulative voting feature image of the current time from among those generated.
- a vehicle on which the track recognition device is mounted, wherein the track recognition device includes an image output device that captures an image and outputs the captured image, and the input image received by the feature extraction unit is the image output by the image output device.
- a track recognition method comprising: extracting candidate positions of a lane mark from a received input image; generating a cumulative voting feature image by cumulatively voting into the parameter space of an approximate curve or approximate straight line, with the vote values for the extracted candidate positions weighted according to elapsed time; and determining the track boundary position by extracting track boundary position candidates based on the generated cumulative voting feature image.
- the track recognition method, wherein the range in which the lateral position at the current time can exist is estimated based on information about the time-series change of track boundary positions detected in the past, cumulative voting feature images of the current time are generated based on all estimated lateral positions in the range, and the most appropriate cumulative voting feature image of the current time is selected from among those generated.
- a track recognition program that causes a computer to realize: a feature extraction function of extracting candidate positions of a lane mark from a received input image; a cumulative voting function of generating a cumulative voting feature image by cumulatively voting into the parameter space of an approximate curve or approximate straight line, with the vote values for the candidate positions extracted by the feature extraction function weighted according to elapsed time; and a track boundary determination function of determining the track boundary position by extracting track boundary position candidates based on the generated cumulative voting feature image.
- the track recognition program, wherein the cumulative voting function generates the cumulative voting feature image at the current time by adding the votes of the candidate positions extracted at the current time to a voting feature image obtained by weighting all values on the cumulative voting feature image generated at the previous time with a coefficient of 0 or more and 1 or less that represents a forgetting effect with the passage of time.
- the track recognition program, wherein the feature extraction function includes a first feature extraction function of extracting the candidate positions of stud-type lane marks from the received input image; the cumulative voting function includes a first cumulative voting function of generating the cumulative voting feature image for the stud-type lane marks from the extracted candidate positions; and the track boundary determination function determines the track boundary position by extracting the track boundary position candidates based on the cumulative voting feature image for the stud-type lane marks.
- the track recognition program, wherein the feature extraction function includes a second feature extraction function of extracting the candidate positions of linear lane marks from the received input image; the cumulative voting function includes a second cumulative voting function of generating the cumulative voting feature image for the linear lane marks from the extracted candidate positions; and the track boundary determination function determines the track boundary position by extracting the track boundary position candidates based on the cumulative voting feature image for the linear lane marks.
- the track recognition program, wherein the cumulative voting function estimates the lateral velocity at the current time based on information about the time-series change of track boundary positions detected in the past, corrects, based on the estimated lateral velocity, the deviation with respect to the lateral position at the current time for either the voting feature images of a predetermined past interval or the cumulative voting feature image of the time preceding the current time, and generates the cumulative voting feature image of the current time by accumulating the voting feature values of the current time onto the corrected voting feature image.
- the track recognition program, wherein the cumulative voting function estimates the range in which the lateral position at the current time can exist based on information about the time-series change of track boundary positions detected in the past, generates cumulative voting feature images of the current time based on all estimated lateral positions in the range, and selects the most appropriate cumulative voting feature image of the current time from among those generated.
Landscapes
- Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Multimedia (AREA)
- Health & Medical Sciences (AREA)
- Databases & Information Systems (AREA)
- Evolutionary Computation (AREA)
- General Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Software Systems (AREA)
- Computing Systems (AREA)
- Artificial Intelligence (AREA)
- Image Analysis (AREA)
- Traffic Control Systems (AREA)
- Image Processing (AREA)
Abstract
Description
また、解像度をぼかした共通のテンプレート画像を用いれば、些細な変化を吸収することが可能となるが、鋲状レーンマークでない領域もノイズとして検出してしまう。このような場合にノイズと鋲状レーンマークを識別する方法について、特許文献2では一切開示されていない。また、特許文献2では検出可能性を高めるために、鋲状レーンマークの設置間隔の定数分の一進行方向にずらした時系列画像を合成し、1枚の合成画像とした上で上記の方法で鋲状レーンマークを検出することを提案している。しかし、ノイズと鋲状レーンマークを識別する方法について、開示されていないため、上記の問題は解決されない。
図1は、本発明の実施形態1である走路認識装置2を含む車両の構成を模式的に示したブロック図である。
さらに、上記の説明では、認識対象をレーンマークとして説明したが、認識対象のレーンマークは、鋲状レーンマークであっても、線状レーンマークであってもよい。認識対象のレーンマークを、それぞれ鋲状レーンマーク、線状レーンマークとしたときの電子制御装置のブロック図を図6、図7に示す。図6では、図5の特徴抽出部10-2を鋲状レーンマークの特徴抽出を行う鋲状レーンマーク特徴抽出部10-6に置き換えている。図7では、図5の特徴抽出部10-2を線状レーンマークの特徴抽出を行う線状レーンマーク特徴抽出部10-7に置き換えている。
実施形態1の第1の具体例としては、時刻(t-T)から時刻(t-1)までの過去の、それぞれの時刻で抽出された特徴点のみから生成された投票特徴画像をすべて保存し、時刻(t)における投票特徴画像と合わせて、時刻(t-T)~時刻(t)までのT時間の時系列投票値から累積投票特徴画像を生成してもよい。このとき、ある時刻(t)において計算された投票値をf(ρ,θ,t)とすると、各時刻の投票値に対する重みをαi(i=0,1,…,T)として生成される、累積投票特徴画像上の出力である累積投票値s(ρ,θ,t)は次式で計算できる。
実施形態1の第2の具体例として、次のように忘却係数による重み付けを行いながら時系列累積投票特徴画像を生成する方法を用いることもできる。
Although the above description has dealt in particular with a track recognition device, method, and program for recognizing track boundaries expressed by lane marks, the electronic control device of the track recognition device may consist solely of an electronic control device for stud-shaped lane marks, or may be combined with a general electronic control device for linear lane marks. As a general electronic control device for linear lane marks, for example, the one described in Patent Document 1 can be used. Since such a general electronic control device for linear lane marks is a technique understood by those skilled in the art, its detailed description is omitted in this specification.
Next, Embodiment 3 of the present invention will be described in detail. Embodiment 3 has a configuration (FIG. 16) in which a track boundary position storage unit 10-7 is added to the configuration of Embodiment 1 (FIG. 2).
In the first concrete example of Embodiment 3, the lateral velocity is first estimated from the time-series changes in the past track boundary positions stored in the track boundary position storage unit 10-7.
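One plausible reading of this step, sketched below under stated assumptions: the lateral velocity is taken as the least-squares slope of the recent boundary positions, and the previous cumulative voting image is shifted along the ρ axis by the predicted lateral displacement before the current votes are accumulated. The frame period dt, the ρ-axis resolution, the use of np.roll, and the function names are assumptions of this sketch.

```python
import numpy as np

def estimate_lateral_velocity(positions, dt):
    """Least-squares slope of recent boundary positions sampled every dt seconds."""
    t = np.arange(len(positions)) * dt
    slope, _intercept = np.polyfit(t, positions, 1)
    return slope  # lateral velocity, e.g. in m/s

def shift_corrected_accumulate(s_prev, f_t, v_lat, dt, rho_res, kappa=0.8):
    """Shift s(t-1) along the rho axis by the displacement predicted from v_lat,
    then accumulate the current votes onto the corrected image."""
    cells = int(round(v_lat * dt / rho_res))   # predicted lateral shift in rho cells
    s_corr = np.roll(s_prev, cells, axis=0)
    if cells > 0:                              # cells rolled in carry no evidence
        s_corr[:cells, :] = 0.0
    elif cells < 0:
        s_corr[cells:, :] = 0.0
    return f_t + kappa * s_corr
```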
In the second concrete example of Embodiment 3, the range over which the lateral position can transition is first computed from the past track boundary positions stored in the track boundary position storage unit 10-7 and from their time-series changes. As one concrete method, for example, a maximum lateral velocity defined by the application specification may be assumed, and the amount of transition that can occur from the lateral position at the previous time up to the current time may be taken as the transition-possible range. As another concrete method, for example, as in (Embodiment 3: Example 1), a Gaussian or trapezoidal distribution determined from the estimated lateral velocity and the estimated lateral acceleration may be assumed, and a range carrying a certain probability may be taken as the transition-possible range.
Next, the time-series cumulative voting unit 10-6 computes the evaluation value of each generated cumulative feature image (step S325); a sketch of this candidate-and-select step follows below.
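The following sketch enumerates lateral shifts within the transition-possible range, builds one candidate cumulative image per shift, and keeps the candidate with the highest evaluation value. Using the peak accumulated vote as that evaluation value is an assumption of this sketch, not the evaluation defined for step S325 in the specification.

```python
import numpy as np

def best_cumulative_image(s_prev, f_t, max_shift_cells, kappa=0.8):
    """Try every lateral shift in [-max_shift_cells, +max_shift_cells] (the
    transition-possible range expressed in rho cells) and keep the candidate
    whose peak vote is highest."""
    best, best_score = None, -np.inf
    for cells in range(-max_shift_cells, max_shift_cells + 1):
        # For brevity the wrap-around introduced by np.roll is not masked here.
        candidate = f_t + kappa * np.roll(s_prev, cells, axis=0)
        score = candidate.max()                # assumed stand-in for the S325 evaluation
        if score > best_score:
            best, best_score = candidate, score
    return best
```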
a cumulative voting unit that generates a cumulative voting feature image for the extracted candidate positions by cumulatively voting them into a parameter space of an approximate curve or an approximate straight line, with the voting values weighted according to elapsed time; and
a track boundary determination unit that determines a track boundary line position by extracting candidates of the track boundary line position based on the generated cumulative voting feature image:
a track recognition device comprising the above.
A track recognition device characterized in that the cumulative voting unit uses the Hough transform for the voting.
A track recognition device characterized in that the cumulative voting unit generates the cumulative voting feature image at the current time by adding the votes of the candidate positions extracted at the current time to a voting feature image obtained by weighting all values on the cumulative voting feature image generated at the previous time with a coefficient between 0 and 1 inclusive that represents a forgetting effect with the passage of time.
A track recognition device characterized in that the feature extraction unit includes a first feature extraction unit that extracts the candidate positions of stud-shaped lane marks from the received input image, the cumulative voting unit includes a first cumulative voting unit that generates the cumulative voting feature image for the stud-shaped lane marks from the extracted candidate positions of the stud-shaped lane marks, and the track boundary determination unit determines the track boundary line position by extracting candidates of the track boundary line position based on the cumulative voting feature image for the stud-shaped lane marks.
A track recognition device characterized in that the feature extraction unit includes a second feature extraction unit that extracts the candidate positions of linear lane marks from the received input image, the cumulative voting unit includes a second cumulative voting unit that generates the cumulative voting feature image for the linear lane marks from the extracted candidate positions of the linear lane marks, and the track boundary determination unit determines the track boundary line position by extracting candidates of the track boundary line position based on the cumulative voting feature image for the linear lane marks.
A track recognition device characterized in that the cumulative voting unit estimates the lateral velocity at the current time based on information about time-series changes in track boundary positions detected in the past; corrects, based on the estimated lateral velocity, the deviation with respect to the lateral position at the current time for either the voting feature images of a predetermined past interval or the cumulative voting feature image of the time immediately preceding the current time; and generates the cumulative voting feature image of the current time by accumulating the voting feature values of the current time onto the corrected voting feature image.
A track recognition device characterized in that the cumulative voting unit estimates a region in which the lateral position at the current time can exist, based on information about time-series changes in track boundary positions detected in the past, generates cumulative voting feature images of the current time based on all estimated lateral positions in that region, and selects the most appropriate cumulative voting feature image of the current time from among the generated cumulative voting feature images.
A vehicle characterized in that the track recognition device includes an image output device that captures an image and outputs the captured image, and the input image received by the feature extraction unit is the image output by the image output device.
A track recognition method characterized by generating a cumulative voting feature image for the extracted candidate positions by cumulatively voting them into a parameter space of an approximate curve or an approximate straight line, with the voting values weighted according to elapsed time, and determining a track boundary line position by extracting candidates of the track boundary line position based on the generated cumulative voting feature image.
A track recognition method characterized in that the Hough transform is used for the voting.
A track recognition method characterized in that, when performing the voting, the cumulative voting feature image at the current time is generated by adding the votes of the candidate positions extracted at the current time to a voting feature image obtained by weighting all values on the cumulative voting feature image generated at the previous time with a coefficient between 0 and 1 inclusive that represents a forgetting effect with the passage of time.
A track recognition method characterized by extracting the candidate positions of stud-shaped lane marks from the received input image, generating the cumulative voting feature image for the stud-shaped lane marks from the extracted candidate positions of the stud-shaped lane marks, and, when performing the track boundary determination, determining the track boundary line position by extracting candidates of the track boundary line position based on the cumulative voting feature image for the stud-shaped lane marks.
A track recognition method characterized by extracting the candidate positions of linear lane marks from the received input image, generating the cumulative voting feature image for the linear lane marks from the extracted candidate positions of the linear lane marks, and, when performing the track boundary determination, determining the track boundary line position by extracting candidates of the track boundary line position based on the cumulative voting feature image for the linear lane marks.
A track recognition method characterized by estimating the lateral velocity at the current time based on information about time-series changes in track boundary positions detected in the past; correcting, based on the estimated lateral velocity, the deviation with respect to the lateral position at the current time for either the voting feature images of a predetermined past interval or the cumulative voting feature image of the time immediately preceding the current time; and generating the cumulative voting feature image of the current time by accumulating the voting feature values of the current time onto the corrected voting feature image.
A track recognition method characterized by estimating a region in which the lateral position at the current time can exist, based on information about time-series changes in track boundary positions detected in the past, generating cumulative voting feature images of the current time based on all estimated lateral positions in that region, and selecting the most appropriate cumulative voting feature image of the current time from among the generated cumulative voting feature images.
A track recognition program causing a computer to realize: a cumulative voting function of generating a cumulative voting feature image for the candidate positions extracted by the feature extraction function by cumulatively voting them into a parameter space of an approximate curve or an approximate straight line, with the voting values weighted according to elapsed time; and a track boundary determination function of determining a track boundary line position by extracting candidates of the track boundary line position based on the generated cumulative voting feature image.
A track recognition program characterized in that the time-series cumulative voting function uses the Hough transform for the voting.
A track recognition program characterized in that the cumulative voting function generates the cumulative voting feature image at the current time by adding the votes of the candidate positions extracted at the current time to a voting feature image obtained by weighting all values on the cumulative voting feature image generated at the previous time with a coefficient between 0 and 1 inclusive that represents a forgetting effect with the passage of time.
A track recognition program characterized in that the feature extraction function includes a first feature extraction function of extracting the candidate positions of stud-shaped lane marks from the received input image, the cumulative voting function includes a first cumulative voting function of generating the cumulative voting feature image for the stud-shaped lane marks from the extracted candidate positions of the stud-shaped lane marks, and the track boundary determination function determines the track boundary line position by extracting candidates of the track boundary line position based on the cumulative voting feature image for the stud-shaped lane marks.
A track recognition program characterized in that the feature extraction function includes a second feature extraction function of extracting the candidate positions of linear lane marks from the received input image, the cumulative voting function includes a second cumulative voting function of generating the cumulative voting feature image for the linear lane marks from the extracted candidate positions of the linear lane marks, and the track boundary determination function determines the track boundary line position by extracting candidates of the track boundary line position based on the cumulative voting feature image for the linear lane marks.
A track recognition program characterized in that the cumulative voting function estimates the lateral velocity at the current time based on information about time-series changes in track boundary positions detected in the past; corrects, based on the estimated lateral velocity, the deviation with respect to the lateral position at the current time for either the voting feature images of a predetermined past interval or the cumulative voting feature image of the time immediately preceding the current time; and generates the cumulative voting feature image of the current time by accumulating the voting feature values of the current time onto the corrected voting feature image.
A track recognition program characterized in that the cumulative voting function estimates a region in which the lateral position at the current time can exist, based on information about time-series changes in track boundary positions detected in the past, generates cumulative voting feature images of the current time based on all estimated lateral positions in that region, and selects the most appropriate cumulative voting feature image of the current time from among the generated cumulative voting feature images.
2 Track recognition device
10, 30 Electronic control device
10-1, 30-1 Image input reception unit
10-2 Feature extraction unit
10-3, 10-6, 30-62, 30-72 Time-series cumulative voting unit
10-4, 30-63, 30-72 Voting feature image storage unit
10-5, 30-5 Track boundary determination unit
10-7 Track boundary position storage unit
11 Central processing unit
12 First storage device
13 Second storage device
14 Interface
20 Image output device
30-6 Stud-shaped lane mark recognition unit
30-61 Stud-shaped lane mark feature extraction unit
30-7 Linear lane mark recognition unit
30-71 Linear lane mark feature extraction unit
Claims (10)
- A track recognition device comprising:
a feature extraction unit that extracts candidate positions of lane marks from a received input image;
a cumulative voting unit that generates a cumulative voting feature image for the extracted candidate positions by cumulatively voting them into a parameter space of an approximate curve or an approximate straight line, with the voting values weighted according to elapsed time; and
a track boundary determination unit that determines a track boundary line position by extracting candidates of the track boundary line position based on the generated cumulative voting feature image.
- The track recognition device according to claim 1, wherein the cumulative voting unit uses the Hough transform for the voting.
- The track recognition device according to claim 1 or 2, wherein the cumulative voting unit generates the cumulative voting feature image at the current time by adding the votes of the candidate positions extracted at the current time to a voting feature image obtained by weighting all values on the cumulative voting feature image generated at the previous time with a coefficient between 0 and 1 inclusive that represents a forgetting effect with the passage of time.
- The track recognition device according to any one of claims 1 to 3, wherein the feature extraction unit includes a first feature extraction unit that extracts the candidate positions of stud-shaped lane marks from the received input image, the cumulative voting unit includes a first cumulative voting unit that generates the cumulative voting feature image for the stud-shaped lane marks from the extracted candidate positions of the stud-shaped lane marks, and the track boundary determination unit determines the track boundary line position by extracting candidates of the track boundary line position based on the cumulative voting feature image for the stud-shaped lane marks.
- The track recognition device according to any one of claims 1 to 3, wherein the feature extraction unit includes a second feature extraction unit that extracts the candidate positions of linear lane marks from the received input image, the cumulative voting unit includes a second cumulative voting unit that generates the cumulative voting feature image for the linear lane marks from the extracted candidate positions of the linear lane marks, and the track boundary determination unit determines the track boundary line position by extracting candidates of the track boundary line position based on the cumulative voting feature image for the linear lane marks.
- The track recognition device according to any one of claims 1 to 5, wherein the cumulative voting unit estimates the lateral velocity at the current time based on information about time-series changes in track boundary positions detected in the past; corrects, based on the estimated lateral velocity, the deviation with respect to the lateral position at the current time for either the voting feature images of a predetermined past interval or the cumulative voting feature image of the time immediately preceding the current time; and generates the cumulative voting feature image of the current time by accumulating the voting feature values of the current time onto the corrected voting feature image.
- The track recognition device according to claim 6, wherein the cumulative voting unit estimates a region in which the lateral position at the current time can exist, based on information about time-series changes in track boundary positions detected in the past, generates cumulative voting feature images of the current time based on all estimated lateral positions in that region, and adopts the most appropriate of the generated cumulative voting feature images as the cumulative voting feature image of the current time.
- A vehicle having the track recognition device according to any one of claims 1 to 7, wherein the track recognition device includes an image output device that captures an image and outputs the captured image, and the input image received by the feature extraction unit is the image output by the image output device.
- A track recognition method comprising: extracting candidate positions of lane marks from a received input image; generating a cumulative voting feature image for the extracted candidate positions by cumulatively voting them into a parameter space of an approximate curve or an approximate straight line, with the voting values weighted according to elapsed time; and determining a track boundary line position by extracting candidates of the track boundary line position based on the generated cumulative voting feature image.
- A track recognition program causing a computer to realize: a feature extraction function of extracting candidate positions of lane marks from a received input image; a cumulative voting function of generating a cumulative voting feature image for the extracted candidate positions by cumulatively voting them into a parameter space of an approximate curve or an approximate straight line, with the voting values weighted according to elapsed time; and a track boundary determination function of determining a track boundary line position by extracting candidates of the track boundary line position based on the generated cumulative voting feature image.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2011543282A JP5716671B2 (ja) | 2009-11-25 | 2010-11-25 | 走路認識装置、車両、走路認識方法及び走路認識プログラム |
US13/511,875 US8867845B2 (en) | 2009-11-25 | 2010-11-25 | Path recognition device, vehicle, path recognition method, and path recognition program |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2009-267421 | 2009-11-25 | ||
JP2009267421 | 2009-11-25 | ||
JP2010071931 | 2010-03-26 | ||
JP2010-071931 | 2010-03-26 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2011065399A1 true WO2011065399A1 (ja) | 2011-06-03 |
Family
ID=44066504
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2010/070983 WO2011065399A1 (ja) | 2009-11-25 | 2010-11-25 | 走路認識装置、車両、走路認識方法及び走路認識プログラム |
Country Status (3)
Country | Link |
---|---|
US (1) | US8867845B2 (ja) |
JP (1) | JP5716671B2 (ja) |
WO (1) | WO2011065399A1 (ja) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090157292A1 (en) * | 2007-07-13 | 2009-06-18 | Robert Currie | System and method for providing shared information about traveled road segments |
JP2013050798A (ja) * | 2011-08-30 | 2013-03-14 | Nissan Motor Co Ltd | 動き検出装置 |
JP2013050797A (ja) * | 2011-08-30 | 2013-03-14 | Nissan Motor Co Ltd | 動き検出装置 |
JP2013193545A (ja) * | 2012-03-19 | 2013-09-30 | Nippon Soken Inc | 駐車支援装置 |
KR101584907B1 (ko) * | 2014-07-29 | 2016-01-22 | 울산대학교 산학협력단 | 관심 영역을 이용한 차선 인식 방법 및 그 장치 |
GB2550296A (en) * | 2011-07-12 | 2017-11-15 | Lucasfilm Entertainment Co Ltd | Scale independent tracking system |
WO2019214372A1 (zh) * | 2018-05-07 | 2019-11-14 | 腾讯科技(深圳)有限公司 | 地面标志提取的方法、模型训练的方法、设备及存储介质 |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2574958B1 (en) * | 2011-09-28 | 2017-02-22 | Honda Research Institute Europe GmbH | Road-terrain detection method and system for driver assistance systems |
DE112019000122T5 (de) * | 2018-02-27 | 2020-06-25 | Nvidia Corporation | Echtzeiterfassung von spuren und begrenzungen durch autonome fahrzeuge |
CN113494915A (zh) * | 2020-04-02 | 2021-10-12 | 广州汽车集团股份有限公司 | 一种车辆横向定位方法、装置及系统 |
US12055411B2 (en) * | 2022-07-01 | 2024-08-06 | Here Global B.V. | Method and apparatus for providing a path-based map matcher |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH06149359A (ja) * | 1992-11-02 | 1994-05-27 | Matsushita Electric Ind Co Ltd | 走行レーン検出装置 |
JPH06213660A (ja) * | 1993-01-19 | 1994-08-05 | Aisin Seiki Co Ltd | 像の近似直線の検出方法 |
JP2007141167A (ja) * | 2005-11-22 | 2007-06-07 | Fujitsu Ten Ltd | 車線検出装置、車線検出方法、及び車線検出プログラム |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH07302346A (ja) | 1994-05-06 | 1995-11-14 | Nippon Soken Inc | 路面の白線検出方法 |
JP4671991B2 (ja) | 2001-11-30 | 2011-04-20 | 日立オートモティブシステムズ株式会社 | レーンマーク認識装置 |
JP2007293892A (ja) | 2001-11-30 | 2007-11-08 | Hitachi Ltd | レーンマーク認識方法 |
JP4016735B2 (ja) | 2001-11-30 | 2007-12-05 | 株式会社日立製作所 | レーンマーク認識方法 |
JP2004252827A (ja) | 2003-02-21 | 2004-09-09 | Nissan Motor Co Ltd | 車線認識装置 |
JP3864945B2 (ja) * | 2003-09-24 | 2007-01-10 | アイシン精機株式会社 | 路面走行レーン検出装置 |
JP4637618B2 (ja) | 2005-03-18 | 2011-02-23 | 株式会社ホンダエレシス | 車線認識装置 |
JP4358147B2 (ja) | 2005-04-28 | 2009-11-04 | 本田技研工業株式会社 | 車両及びレーンマーク認識装置 |
WO2007138898A1 (ja) * | 2006-05-25 | 2007-12-06 | Nec Corporation | 認識システム、認識方法および認識プログラム |
- 2010-11-25 WO PCT/JP2010/070983 patent/WO2011065399A1/ja active Application Filing
- 2010-11-25 US US13/511,875 patent/US8867845B2/en active Active
- 2010-11-25 JP JP2011543282A patent/JP5716671B2/ja active Active
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH06149359A (ja) * | 1992-11-02 | 1994-05-27 | Matsushita Electric Ind Co Ltd | 走行レーン検出装置 |
JPH06213660A (ja) * | 1993-01-19 | 1994-08-05 | Aisin Seiki Co Ltd | 像の近似直線の検出方法 |
JP2007141167A (ja) * | 2005-11-22 | 2007-06-07 | Fujitsu Ten Ltd | 車線検出装置、車線検出方法、及び車線検出プログラム |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090157292A1 (en) * | 2007-07-13 | 2009-06-18 | Robert Currie | System and method for providing shared information about traveled road segments |
US8660794B2 (en) * | 2007-07-13 | 2014-02-25 | Dash Navigation, Inc. | System and method for providing shared information about traveled road segments |
GB2550296A (en) * | 2011-07-12 | 2017-11-15 | Lucasfilm Entertainment Co Ltd | Scale independent tracking system |
GB2550296B (en) * | 2011-07-12 | 2018-01-31 | Lucasfilm Entertainment Co Ltd | Scale independent tracking pattern |
JP2013050798A (ja) * | 2011-08-30 | 2013-03-14 | Nissan Motor Co Ltd | 動き検出装置 |
JP2013050797A (ja) * | 2011-08-30 | 2013-03-14 | Nissan Motor Co Ltd | 動き検出装置 |
JP2013193545A (ja) * | 2012-03-19 | 2013-09-30 | Nippon Soken Inc | 駐車支援装置 |
KR101584907B1 (ko) * | 2014-07-29 | 2016-01-22 | 울산대학교 산학협력단 | 관심 영역을 이용한 차선 인식 방법 및 그 장치 |
WO2019214372A1 (zh) * | 2018-05-07 | 2019-11-14 | 腾讯科技(深圳)有限公司 | 地面标志提取的方法、模型训练的方法、设备及存储介质 |
US11410435B2 (en) | 2018-05-07 | 2022-08-09 | Tencent Technology (Shenzhen) Company Limited | Ground mark extraction method, model training METHOD, device and storage medium |
Also Published As
Publication number | Publication date |
---|---|
JP5716671B2 (ja) | 2015-05-13 |
US8867845B2 (en) | 2014-10-21 |
JPWO2011065399A1 (ja) | 2013-04-18 |
US20120288206A1 (en) | 2012-11-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2011065399A1 (ja) | 走路認識装置、車両、走路認識方法及び走路認識プログラム | |
Mithun et al. | Detection and classification of vehicles from video using multiple time-spatial images | |
EP1796043B1 (en) | Object detection | |
JP4429298B2 (ja) | 対象個数検出装置および対象個数検出方法 | |
US9607220B1 (en) | Image-based vehicle speed estimation | |
US20120166080A1 (en) | Method, system and computer-readable medium for reconstructing moving path of vehicle | |
CN107305635A (zh) | 对象识别方法、对象识别装置和分类器训练方法 | |
CN104183127A (zh) | 交通监控视频检测方法和装置 | |
JP5146446B2 (ja) | 移動体検知装置および移動体検知プログラムと移動体検知方法 | |
KR101348680B1 (ko) | 영상추적기를 위한 표적포착방법 및 이를 이용한 표적포착장치 | |
CN112598743B (zh) | 一种单目视觉图像的位姿估算方法及相关装置 | |
CN114419098A (zh) | 基于视觉变换的运动目标轨迹预测方法及装置 | |
US20220245831A1 (en) | Speed estimation systems and methods without camera calibration | |
CN104200492A (zh) | 基于轨迹约束的航拍视频目标自动检测跟踪方法 | |
JP4882577B2 (ja) | 物体追跡装置およびその制御方法、物体追跡システム、物体追跡プログラム、ならびに該プログラムを記録した記録媒体 | |
JP6916975B2 (ja) | 標識位置特定システム及びプログラム | |
CN109492454A (zh) | 对象识别方法及装置 | |
JP3879874B2 (ja) | 物流計測装置 | |
WO2015027649A1 (zh) | 一种多尺度模型车辆检测方法 | |
JP2002367077A (ja) | 交通渋滞判定装置及び交通渋滞判定方法 | |
JP5903901B2 (ja) | 車両位置算出装置 | |
CN114663793A (zh) | 目标行为识别方法及装置、存储介质、终端 | |
CN113516685A (zh) | 目标跟踪方法、装置、设备及存储介质 | |
JP2009205695A (ja) | 対象個数検出装置および対象個数検出方法 | |
JP4479174B2 (ja) | 物体検出装置 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 10833245; Country of ref document: EP; Kind code of ref document: A1 |
WWE | Wipo information: entry into national phase | Ref document number: 2011543282; Country of ref document: JP |
WWE | Wipo information: entry into national phase | Ref document number: 13511875; Country of ref document: US |
NENP | Non-entry into the national phase | Ref country code: DE |
122 | Ep: pct application non-entry in european phase | Ref document number: 10833245; Country of ref document: EP; Kind code of ref document: A1 |