US20230014725A1 - Map generation device, map generation method, and map generation computer program - Google Patents
Map generation device, map generation method, and map generation computer program
- Publication number
- US20230014725A1 (Application US17/935,638)
- Authority
- US
- United States
- Prior art keywords
- road
- lane
- map generation
- image
- map
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/38—Electronic maps specially adapted for navigation; Updating thereof
- G01C21/3804—Creation or updating of map data
- G01C21/3807—Creation or updating of map data characterised by the type of data
- G01C21/3815—Road data
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
- G01C21/34—Route searching; Route guidance
- G01C21/36—Input/output arrangements for on-board computers
- G01C21/3626—Details of the output of route guidance instructions
- G01C21/3658—Lane guidance
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
- G01C21/34—Route searching; Route guidance
- G01C21/36—Input/output arrangements for on-board computers
- G01C21/3667—Display of a road map
- G01C21/367—Details, e.g. road map scale, orientation, zooming, illumination, level of detail, scrolling of road map or positioning of current position marker
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/75—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
- G06V10/751—Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G06V20/182—Network patterns, e.g. roads or rivers
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/588—Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/01—Detecting movement of traffic to be counted or controlled
- G08G1/0104—Measuring and analyzing of parameters relative to traffic conditions
- G08G1/0125—Traffic data processing
- G08G1/0129—Traffic data processing for creating historical data or processing based on historical data
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/01—Detecting movement of traffic to be counted or controlled
- G08G1/0104—Measuring and analyzing of parameters relative to traffic conditions
- G08G1/0137—Measuring and analyzing of parameters relative to traffic conditions for specific applications
- G08G1/0141—Measuring and analyzing of parameters relative to traffic conditions for specific applications for traffic information dissemination
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/01—Detecting movement of traffic to be counted or controlled
- G08G1/0104—Measuring and analyzing of parameters relative to traffic conditions
- G08G1/0137—Measuring and analyzing of parameters relative to traffic conditions for specific applications
- G08G1/0145—Measuring and analyzing of parameters relative to traffic conditions for specific applications for active traffic flow control
Abstract
A map generation device extracts, by inputting an image in which a road is represented to a classifier that outputs, for each pixel of the image, a type of a feature object on the road represented by the pixel, a pixel representing a boundary feature object that represents a boundary of a lane among feature objects on the road, calculates a Voronoi boundary by Voronoi-dividing the image with each pixel representing the boundary feature object as a generating point, detects each of the calculated Voronoi boundaries as one lane, and generates map information representing each of the detected lanes.
Description
- The present invention relates to a map generation device, a map generation method, and a map generation computer program that generate a map from an image.
- A technique for generating a map to be used in a navigation system and the like has been proposed (e.g., see Japanese Unexamined Patent Publication (Kokai) No. 2018-106017). For example, a map information generation device disclosed in Japanese Unexamined Patent Publication (Kokai) No. 2018-106017 combines three-dimensional point group information of latitude/longitude/altitude for each point at a predetermined interval and two-dimensional map information of latitude/longitude for each pixel or several pixels, generates basic map information acquired by adding information about the altitude for each point to the two-dimensional map information, and generates road map information by adding information about a road identified on the basis of a movement path of positional information acquired from a moving object to the basic map information.
- In recent years, a technique of automatic driving control of a vehicle has been developed. In the automatic driving control of a vehicle, it is required to appropriately perform control related to a lane change, merging, passing, or the like. Further, a traffic regulation that varies from lane to lane (e.g., a right lane is a right-turn-only lane, a left lane is a straight-ahead lane or a left-turn-only lane, and the like) may be applied depending on a road. Thus, it is preferable that map information used in the automatic driving control includes information about each lane on an individual road. However, the map information generated by the technique described above does not include information about each lane on an individual road. Alternatively, map information including information about each lane on an individual road can be generated by having a vehicle actually travel on a road and referring to video acquired by a camera mounted on the vehicle. However, generating map information in such a manner requires enormous man-hours and costs.
- Therefore, an object of the present invention is to provide a map generation device that generates map information including information about each lane on a road from an image in which the road is represented.
- As one aspect of the present invention, a map generation device is provided. A map generation device includes: a processor configured to: extract, by inputting an image in which a road is represented to a classifier that outputs, for each pixel of the image, a type of a feature object on the road represented by the pixel, a pixel representing a boundary feature object that represents a boundary of a lane among feature objects on the road; calculate a Voronoi boundary by Voronoi-dividing the image with each pixel representing the boundary feature object as a generating point; detect each of the calculated Voronoi boundaries as one lane; and generate map information representing each of the detected lanes.
- In the map generation device, it is preferable that the processor is further configured to detect a predetermined region including a point where two or more of the Voronoi boundaries intersect each other as an intersection area.
- In addition, in the map generation device, it is preferable that the processor is further configured to further extract a pixel representing a regulation feature object representing a traffic regulation among feature objects on the road, and associate, for each of the detected lanes, a traffic regulation represented by the regulation feature object on the lane with the lane.
- In this case, it is preferable that the processor associates a traffic regulation related to an intersection with the intersection area closest to the regulation feature object representing the traffic regulation related to the intersection.
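The association of a regulation feature object with the closest lane, via the centroid of its pixels, can be sketched as follows. This is a minimal illustration only: the lane coordinates, the pixel set, and the helper name `nearest_lane` are invented for this example and are not from the disclosure.

```python
import numpy as np

# Toy data: two lane centerlines (as point sequences) and the pixels of a
# regulation feature object (e.g. a right-turn arrow on the right lane).
lanes = {
    "lane_right": np.array([[x, 2.0] for x in range(20)]),
    "lane_left": np.array([[x, 8.0] for x in range(20)]),
}
arrow_pixels = np.array([[10, 1.5], [11, 1.5], [10, 2.5], [11, 2.5]])

# Centroid of the set of pixels representing the regulation feature object.
centroid = arrow_pixels.mean(axis=0)

# Associate the regulation with the lane whose centerline comes closest
# to the centroid.
def nearest_lane(lanes, point):
    return min(
        lanes,
        key=lambda name: np.min(np.linalg.norm(lanes[name] - point, axis=1)),
    )

print(nearest_lane(lanes, centroid))
```

The same nearest-distance rule extends naturally to intersection areas: a regulation related to an intersection would be associated with the intersection area closest to the centroid.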
- As another aspect of the present invention, a map generation device is provided. The map generation device includes: a processor configured to: detect a plurality of vehicles located on a road from an image in which the road is represented; detect each line in which two or more of the plurality of detected vehicles are aligned as an individual lane; and generate map information representing each of the detected lanes.
- In the map generation device, it is preferable that the processor detects the road from the image, and detects, as the lane, a line in which two or more of the plurality of detected vehicles are aligned along the road.
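A minimal sketch of this aspect: vehicles in the same lane share roughly the same lateral offset from the road direction, so grouping detected vehicle positions by that offset yields one group per lane. The vehicle positions, tolerance, and helper `detect_lanes` are invented for illustration and are not from the disclosure.

```python
import numpy as np

# Toy data: detected vehicle centroids on a road running along the x-axis.
vehicles = np.array([
    [5.0, 2.1], [20.0, 1.9], [35.0, 2.0],   # three vehicles in one line
    [12.0, 6.0], [28.0, 5.9],               # two vehicles in another line
])

# Project each vehicle onto the axis perpendicular to the road direction
# and group vehicles whose lateral offsets fall within a tolerance; each
# group of two or more aligned vehicles is treated as one lane.
def detect_lanes(vehicles, road_dir=np.array([1.0, 0.0]), tol=1.0):
    normal = np.array([-road_dir[1], road_dir[0]])
    offsets = vehicles @ normal
    order = np.argsort(offsets)
    groups, current = [], [order[0]]
    for i, j in zip(order, order[1:]):
        if offsets[j] - offsets[i] <= tol:
            current.append(j)
        else:
            groups.append(current)
            current = [j]
    groups.append(current)
    return [g for g in groups if len(g) >= 2]  # a lane needs >= 2 vehicles

print(len(detect_lanes(vehicles)))
```

Here the road direction is assumed known (e.g. from detecting the road itself, as the preceding paragraph suggests); with a curved road the projection would be taken along the local road tangent instead.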
- As still another aspect of the present invention, a map generation method is provided. The map generation method includes: extracting, by inputting an image in which a road is represented to a classifier that outputs, for each pixel of the image, a type of a feature object on the road represented by the pixel, a pixel representing a boundary feature object that represents a boundary of a lane among feature objects on the road; calculating a Voronoi boundary by Voronoi-dividing the image with each pixel representing the boundary feature object as a generating point; detecting each of the calculated Voronoi boundaries as one lane; and generating map information representing each of the detected lanes.
- As still another aspect of the present invention, a non-transitory recording medium in which a map generation computer program is recorded is provided. The map generation computer program causes a computer to execute: extracting, by inputting an image in which a road is represented to a classifier that outputs, for each pixel of the image, a type of a feature object on the road represented by the pixel, a pixel representing a boundary feature object that represents a boundary of a lane among feature objects on the road; calculating a Voronoi boundary by Voronoi-dividing the image with each pixel representing the boundary feature object as a generating point; detecting each of the calculated Voronoi boundaries as one lane; and generating map information representing each of the detected lanes.
- As still another aspect of the present invention, a map generation device is provided. The map generation device includes: a processor configured to: extract a road and a stop line from an image in which the road is represented; detect a lane included on the road in response to a ratio of a length of the stop line to a width of the road; and generate map information representing each of the detected lanes.
- A map generation device according to the present invention achieves an advantageous effect of being able to generate map information including information about each lane on a road from an image in which the road is represented.
- FIG. 1 is a hardware configuration diagram of a map generation device according to a first embodiment.
- FIG. 2 is a functional block diagram of a processor of the map generation device according to the first embodiment.
- FIG. 3 is a diagram illustrating one example of a detection result of a boundary feature object, a Voronoi boundary, and an intersection area.
- FIG. 4 is an operation flowchart of map generation processing according to the first embodiment.
- FIG. 5 is a functional block diagram of a processor included in a map generation device according to a second embodiment.
- FIG. 6 is a diagram illustrating one example of lane detection on the basis of an individual vehicle detected from an image.
- FIG. 7 is an operation flowchart of map generation processing according to the second embodiment.
- FIG. 8A is a diagram illustrating one example of a stop line drawn on a road.
- FIG. 8B is a diagram illustrating one example of a stop line drawn on a road.
- A map generation device, and a map generation method and a map generation computer program that are used in the map generation device will be described below with reference to the drawings. The map generation device extracts a pixel representing a boundary feature object representing a boundary line of a lane from an image in which a road is represented, and divides the road into lanes on the basis of the boundary feature object. Alternatively, the map generation device detects an individual vehicle on a road that is captured in an image in which the road is represented, and detects each line in which the detected vehicles are aligned as an individual lane.
- Note that, in each embodiment or a modification example described below, an image in which a road is represented, which is a target of map generation processing, is, for example, a bird’s-eye image which is acquired by capturing the ground from vertically above and in which an individual road marking represented on a road is recognizable (e.g., a high resolution satellite photograph or a high resolution aerial photograph). Further, in the following description, an image in which a road is represented that is a target of the map generation processing may be referred to simply as an image.
- First, a map generation device according to a first embodiment will be described. The map generation device according to the first embodiment detects a boundary feature object represented in an image, and divides a road into lanes on the basis of the boundary feature object.
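Extracting boundary-feature pixels from per-pixel classification result data, and separating same-type regulation feature objects by labeling connected components, can be sketched as follows. This is a toy mask with invented class ids (0 = background, 1 = lane division line, 2 = arrow marking), using `scipy.ndimage.label` as one possible labeling routine; it is not the patented implementation.

```python
import numpy as np
from scipy import ndimage

# Toy classification result data: per-pixel class ids, as described for
# the classifier output (values invented for illustration).
classes = np.zeros((8, 8), dtype=int)
classes[:, 1] = 1        # left lane division line
classes[:, 6] = 1        # right lane division line
classes[3:5, 3:5] = 2    # one arrow road marking

# Pixels of the boundary feature object become the Voronoi generating points.
boundary_pixels = np.argwhere(classes == 1)

# Regulation feature objects of the same type are separated into individual
# objects by labeling connected components of that class.
arrow_labels, n_arrows = ndimage.label(classes == 2)

print(len(boundary_pixels), n_arrows)
```

In the real pipeline, `boundary_pixels` would feed the Voronoi division step, and each labeled regulation object would be associated with a lane or intersection area.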
-
FIG. 1 is a hardware configuration diagram of the map generation device according to the first embodiment. As illustrated in FIG. 1, a map generation device 1 includes a communication interface 2, an input device 3, a display device 4, a memory 5, a storage medium access device 6, and a processor 7. - The
communication interface 2 includes a communication interface for connection to a communication network and a control circuit thereof compliant with a communication standard such as Ethernet (registered trademark). The communication interface 2 receives various types of pieces of information or data from other equipment (not illustrated) connected via the communication network, and passes the received various types of pieces of information or data to the processor 7. The data received by the communication interface 2 may include an image in which a road is represented, which is a target of map generation processing, and information representing a geographical range represented in the image (e.g., a latitude and a longitude of a predetermined position (e.g., an upper left end or the center) of a region represented in the image, a real space size in a horizontal direction and a vertical direction of the region, and a bearing). Further, the communication interface 2 may output a road map that is received from the processor 7 and is one example of map information generated as an execution result of the map generation processing to other equipment via the communication network. - The input device 3 includes, for example, a keyboard and a pointing device such as a mouse. Then, the input device 3 generates an operation signal in response to an operation by a user, such as an operation of selecting an image being a target of the map generation processing, an operation of instructing an execution start of the map generation processing, or an operation of causing the
display device 4 to display a generated road map, for example, and outputs the generated operation signal to the processor 7. - The
display device 4 includes, for example, a liquid crystal display or an organic EL display. Then, the display device 4 displays display data received from the processor 7, such as data representing a candidate for an image on which the map generation processing is executed, a generated road map, or a part of the generated road map, for example. - Note that the input device 3 and the
display device 4 may be a device integrally formed like a touch panel display. - The memory 5 is one example of a storage unit, and is, for example, a readable-writable semiconductor memory and a read-only semiconductor memory. Then, the memory 5 stores, for example, a computer program for the map generation processing executed by the processor 7, various types of pieces of data used in the map generation processing, such as a parameter set that defines a classifier used in the map generation processing, for example, and various types of pieces of data generated during execution of the map generation processing. Furthermore, the memory 5 may store an image being a target of the map generation processing, and information representing a geographical range represented in the image. Furthermore, the memory 5 may store a generated road map.
- The storage medium access device 6 is a device that accesses a
storage medium 8 such as a magnetic disk, a semiconductor memory card, and an optical storage medium, for example. Note that the storage medium access device 6 together with the storage medium 8 constitutes another example of the storage unit. The storage medium access device 6 reads, for example, a computer program for the map generation processing executed on the processor 7 or an image being a target of the map generation processing that is stored in the storage medium 8, and passes the read computer program or the read image to the processor 7. Alternatively, the storage medium access device 6 may receive a generated road map from the processor 7, and write the received road map to the storage medium 8. - The processor 7 is one example of a processing unit, and includes, for example, one or a plurality of CPUs and peripheral circuits thereof. Furthermore, the processor 7 may include an arithmetic circuit for a numerical operation, an arithmetic circuit for graphic processing, and an arithmetic circuit for a logical operation. Then, the processor 7 controls the entirety of the map generation device 1. Further, the processor 7 executes the map generation processing on a target image in which a road is represented.
-
FIG. 2 is a functional block diagram of the processor 7 according to the first embodiment. As illustrated in FIG. 2, the processor 7 includes an extraction unit 11, a lane detection unit 12, an intersection area detection unit 13, a traffic regulation setting unit 14, and a map generation unit 15. Each of these units included in the processor 7 is, for example, a function module achieved by a computer program executed on the processor 7. Alternatively, each of these units included in the processor 7 may be a dedicated arithmetic circuit provided in the processor 7. - The
extraction unit 11 inputs an image in which a road is represented, which is a target of the map generation processing, to a classifier that outputs, for each pixel of an image, a type of a feature object on the road represented in the pixel. In this way, the extraction unit 11 extracts, from the image, a pixel representing a boundary feature object that represents a boundary of a lane among feature objects on the road, a pixel representing a regulation feature object that represents a traffic regulation, and the like. The boundary feature object includes, for example, a lane division line such as a white line or a yellow line, and a road boundary line such as a curb or a median strip. Further, the regulation feature object includes, for example, a road marking that represents an arrow such as a right turn and a left turn, a stop line, a pedestrian crossing, a speed display, no stopping, or a special lane (such as a bus lane and a bicycle lane). - The
extraction unit 11 can use, as a classifier, for example, a convolutional neural network (CNN) including an input layer to which an image is input, an output layer that outputs a classification result of a feature object represented by each of a plurality of pixels included in an image, and a plurality of hidden layers connected between the input layer and the output layer. Specifically, the extraction unit 11 can use, as a classifier, a CNN for semantic segmentation, e.g., a CNN such as a fully convolutional network (FCN) (Long, J., Shelhamer, E., et al., “Fully convolutional networks for semantic segmentation”, in CVPR, 2015), SegNet, DeepLab, RefineNet, or PSPNet. Alternatively, the extraction unit 11 may use, as a classifier, a classifier in accordance with another semantic segmentation technique such as a random forest. - The
extraction unit 11 obtains, for each pixel included in an image, classification result data representing a feature object represented in the pixel by inputting the image to the classifier as described above. The classification result data are represented as two-dimensional data that have the same size as that of an image and include each pixel having a value corresponding to a feature object represented by the pixel, for example (e.g., a white line is 1, an arrow is 2, a pedestrian crossing is 3, etc.). Note that a plurality of regulation feature objects of the same type may be represented in an image. Thus, the extraction unit 11 may classify, by executing labeling processing or clustering processing on a set of pixels representing regulation feature objects of the same type, the set of pixels representing the regulation feature objects into individual regulation feature objects. In this case, pixels representing different regulation feature objects may have values different from each other in the classification result data. The extraction unit 11 passes the classification result data to the lane detection unit 12, the intersection area detection unit 13, the traffic regulation setting unit 14, and the map generation unit 15. - The
lane detection unit 12 detects, for an individual road represented in an image, an individual lane included on the road on the basis of a set of pixels that is included in the classification result data and represents a boundary feature object. In the present embodiment, the lane detection unit 12 calculates a Voronoi boundary by Voronoi-dividing the image with each of the pixels representing the boundary feature object as a generating point, and detects each of the calculated Voronoi boundaries as one lane. Note that the lane detection unit 12 may calculate a Voronoi boundary in accordance with any algorithm that executes Voronoi division. - A Voronoi boundary is provided in a position at equal distances from two closest generating points. Therefore, with each boundary feature object as a generating point, the Voronoi boundary is calculated in such a way as to extend along an individual lane and to be located in the individual lane. Therefore, the
lane detection unit 12 can accurately detect an individual lane by calculating the Voronoi boundary with each of the pixels representing the boundary feature object as a generating point. - The
lane detection unit 12 passes lane detection result data representing an individual Voronoi boundary (i.e., an individual lane) to the intersection area detection unit 13. Note that the lane detection result data can be, for example, a binary image that has the same size as that of an image being a target of the map generation processing and includes a pixel representing a Voronoi boundary and another pixel having values different from each other. - The intersection
area detection unit 13 detects an area including an intersection on an image as an intersection area. In the present embodiment, a Voronoi boundary representing an individual lane is calculated, and thus Voronoi boundaries intersect each other at an intersection. Therefore, a point where the Voronoi boundaries intersect each other, i.e., a Voronoi point appears at an individual intersection. Thus, the intersection area detection unit 13 detects a predetermined region including a Voronoi point as an intersection area. In this way, the intersection area detection unit 13 can accurately detect an individual intersection from an image. - For this purpose, the intersection
area detection unit 13 identifies a Voronoi point represented in the lane detection result data. Then, the intersection area detection unit 13 refers to the classification result data for each identified Voronoi point, and sets, as an intersection area, an area extending from the Voronoi point to the closest stop line or the closest pedestrian crossing along an individual Voronoi boundary. -
FIG. 3 is a diagram illustrating one example of a detection result of a boundary feature object, a Voronoi boundary, and an intersection area. In an image 300 illustrated in FIG. 3, a pixel 301 representing a boundary feature object such as a lane division line and a lane boundary line is extracted. Then, it is clear that each individual Voronoi boundary 302 calculated with the pixel 301 representing the boundary feature object as a generating point represents one lane. Further, it is clear that, for each Voronoi point 303, an intersection area 304 is provided in such a way as to include the Voronoi point. - The intersection
area detection unit 13 divides an individual Voronoi boundary, i.e., an individual lane, into an intersection area and a single road connecting between intersection areas. In other words, the intersection area detection unit 13 sets, as a single road, a section of an individual Voronoi boundary that is not included in any of the intersection areas. Note that, when there are a plurality of Voronoi boundaries connecting between the same two intersection areas, the intersection area detection unit 13 sets the Voronoi boundaries as one single road. Therefore, a single road representing a road on which a plurality of lanes are set includes as many Voronoi boundaries as the number of lanes included on the road. In this way, a lane network representing the individual lanes of the road represented in the image is formed. - The intersection
area detection unit 13 passes, to the traffic regulation setting unit 14, information representing an individual intersection area, an individual single road, and a lane included on the individual single road that are detected, i.e., information representing a lane network. Note that the information representing the lane network includes, for example, information indicating a position and a range of an individual intersection area, a position of an individual single road, a position of an individual lane included on the single road, and an intersection area connected to the single road in an image being a target of the map generation processing. - The traffic
regulation setting unit 14 associates, for each detected lane, a traffic regulation represented by a regulation feature object in the lane with the lane. For example, the traffic regulation setting unit 14 calculates, for each regulation feature object represented in the classification result data, the centroid of a set of pixels representing the regulation feature object. Then, for a regulation feature object representing a traffic regulation set to each individual lane, such as a special lane or right-turn and left-turn arrows, among the regulation feature objects represented in the classification result data, the traffic regulation setting unit 14 associates a traffic regulation represented by the regulation feature object with a lane corresponding to a Voronoi boundary closest to the centroid of the set of pixels representing the regulation feature object. - Further, for a regulation feature object representing a traffic regulation set to a road itself, such as a speed limit, a stop, or no stopping, among the regulation feature objects represented in the classification result data, the traffic
regulation setting unit 14 associates a traffic regulation represented by the regulation feature object with a single road on which the centroid of the set of pixels representing the regulation feature object is located. - Furthermore, for a regulation feature object representing a traffic regulation related to an intersection, such as right-turn and left-turn arrows or a stop, the traffic
regulation setting unit 14 associates a traffic regulation represented by the regulation feature object with both of an intersection area closest to the centroid of the set of pixels representing the regulation feature object and a single road on which the centroid is located. As described above, the lane network with which the traffic regulation is associated is calculated. - The traffic
regulation setting unit 14 passes the information representing the lane network with which the traffic regulation is associated to the map generation unit 15. - The
map generation unit 15 generates, for each single road, a road map including information about a detected individual lane. Specifically, the map generation unit 15 associates information representing that a stop line is located in a position of the set of pixels representing the stop line represented in the classification result data with the information representing the lane network with which the traffic regulation is associated. Similarly, the map generation unit 15 associates information representing that a pedestrian crossing is located in a position of the set of pixels representing the pedestrian crossing with the information representing the lane network with which the traffic regulation is associated. Furthermore, the map generation unit 15 associates information representing that a lane division line is located in a position of the pixels representing the lane division line represented in the classification result data with the information representing the lane network with which the traffic regulation is associated. Similarly, the map generation unit 15 associates information representing that a road boundary line is located in a position of the pixels representing the road boundary line represented in the classification result data with the information representing the lane network with which the traffic regulation is associated. In this way, the road map of a geographical range represented in the image being the target of the map generation processing is generated. - Further, the
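The feature-attachment step above amounts to recording, in the map data, which feature was detected at which pixel set. A minimal, hypothetical data layout is sketched below; the dict structure and field names are illustrative assumptions, not taken from the patent.

```python
# Hypothetical road-map container: detected features (stop lines, pedestrian
# crossings, lane division lines, road boundary lines) are appended together
# with the pixel set at which they were detected.

def new_road_map():
    return {"roads": {}, "intersections": {}, "features": []}

def add_feature(road_map, kind, pixel_positions):
    """Attach a detected feature (e.g. 'stop_line', 'pedestrian_crossing')
    with the pixel positions it was detected at."""
    road_map["features"].append({"kind": kind, "pixels": list(pixel_positions)})
    return road_map
```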
map generation unit 15 may refer to, for each intersection area, information representing the geographical range represented in the image being the target of the map generation processing and a position of the intersection area on the image, calculate a position of an intersection in the intersection area, and associate positional information representing the position of the intersection (e.g., a latitude and a longitude) with the road map. Furthermore, the map generation unit 15 may divide, for each single road, the single road into sections at a predetermined length, refer to, for each of the sections, information representing the geographical range represented in the image being the target of the map generation processing and a position of the section on the image, calculate a position of the section, and associate positional information representing the position of the section with the road map. - Furthermore, the
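Converting a pixel position into a latitude and longitude from the image's geographical range can be sketched as a linear interpolation. This is a simplification under stated assumptions (a north-up image covering an axis-aligned bounding box); real aerial imagery would normally require a proper map projection, and the function name is hypothetical.

```python
# Hedged georeferencing sketch: map pixel (px, py) in a width x height image
# onto the geographic bounding box the image covers.

def pixel_to_latlon(px, py, width, height, bbox):
    """bbox = (lat_top, lon_left, lat_bottom, lon_right); pixel (0, 0) is the
    top-left corner of the image."""
    lat_top, lon_left, lat_bottom, lon_right = bbox
    lon = lon_left + (px / (width - 1)) * (lon_right - lon_left)
    lat = lat_top + (py / (height - 1)) * (lat_bottom - lat_top)
    return lat, lon
```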
map generation unit 15 may generate a road map of a wider range by coupling road maps generated for each image. At this time, the map generation unit 15 may couple road maps generated from individual images in such a way that the same positions of the same roads overlap each other by referring to, for each image, the information representing the geographical range represented in the image. - The
map generation unit 15 stores the generated road map in the memory 5, or writes the generated road map to the storage medium 8 via the storage medium access device 6. Alternatively, the map generation unit 15 may output the generated road map to other equipment via the communication interface 2. -
FIG. 4 is an operation flowchart of the map generation processing according to the first embodiment. The processor 7 may execute the map generation processing according to the following operation flowchart on each image being a target of the map generation processing. - The
extraction unit 11 of the processor 7 extracts a pixel representing a boundary feature object, a pixel representing a regulation feature object, and the like by inputting an image to a classifier (step S101). - The
lane detection unit 12 of the processor 7 calculates a Voronoi boundary by Voronoi-dividing the image with each pixel representing the boundary feature object as a generating point, and detects each of the calculated Voronoi boundaries as one lane (step S102). - The intersection
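Step S102 can be illustrated with a deliberately simplified toy, which is an assumption-laden sketch rather than the patent's implementation: instead of taking every boundary-feature pixel as its own generating point, the sketch groups generators into sets (e.g. the two road edges) and marks grid cells where the nearest set changes. That boundary runs along the road's centerline, which is the "lane" the Voronoi division detects.

```python
# Toy Voronoi-boundary extraction on a small pixel grid. edge_sets is a list
# of generating-point sets (here: the pixels of each road edge). A cell is a
# boundary cell when a 4-neighbor is closer to a different set.

def lane_boundary(width, height, edge_sets):
    def nearest_set(x, y):
        def d(pts):
            return min((px - x) ** 2 + (py - y) ** 2 for px, py in pts)
        return min(range(len(edge_sets)), key=lambda i: d(edge_sets[i]))
    label = {(x, y): nearest_set(x, y) for x in range(width) for y in range(height)}
    boundary = {(x, y) for (x, y), g in label.items()
                if any(label.get(n) not in (None, g)
                       for n in ((x + 1, y), (x, y + 1)))}
    return boundary
```

With generators along both edges of a straight road, the detected boundary is the row of cells midway between them, matching the intuition that the Voronoi boundary of the two road edges traces the lane.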
area detection unit 13 of the processor 7 detects, for each Voronoi point where the Voronoi boundaries intersect each other, a predetermined region including the Voronoi point as an intersection area (step S103). Then, the intersection area detection unit 13 calculates a lane network by classifying each Voronoi boundary into either an individual intersection area or a single road connecting intersection areas (step S104). - The traffic
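The Voronoi-point test of step S103 can be sketched on a list of boundary segments: a point where three or more segments meet is taken as a Voronoi point, and a fixed-radius disc around it stands in for the "predetermined region". The function name, the segment representation, and the disc model are illustrative assumptions.

```python
from collections import Counter

def find_intersection_areas(boundary_segments, radius):
    """boundary_segments: list of ((x1, y1), (x2, y2)) Voronoi-edge segments.
    Returns (point, radius) pairs for every point where 3+ segments meet."""
    degree = Counter()
    for a, b in boundary_segments:
        degree[a] += 1
        degree[b] += 1
    return [(p, radius) for p, d in degree.items() if d >= 3]
```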
regulation setting unit 14 of the processor 7 associates, for each lane of the lane network, a traffic regulation represented by a regulation feature object in the lane with the lane (step S105). Further, the traffic regulation setting unit 14 associates, for each single road of the lane network, a traffic regulation represented by a regulation feature object being located on the single road and set to the road itself with the single road (step S106). Furthermore, the traffic regulation setting unit 14 associates, for each intersection area of the lane network, a traffic regulation related to an intersection and represented by a regulation feature object located in the intersection area or in the vicinity thereof with the intersection area (step S107). - The
map generation unit 15 of the processor 7 generates a road map including information related to the detected individual lane (step S108). At this time, the map generation unit 15 may associate, with a position in which each of a stop line, a pedestrian crossing, a lane division line, and a road boundary line is detected, information representing that each of them is present in the road map. Then, the processor 7 ends the map generation processing. - As described above, the map generation device extracts a pixel representing a feature object that represents a boundary line of a lane from an image in which a road is represented, and detects, as one lane, each Voronoi boundary calculated by Voronoi division with the extracted pixel as a generating point. Thus, the map generation device can generate map information including information about each lane on the road from the image in which the road is represented. Further, the map generation device can include information related to an intersection in the map information by setting an intersection area on the basis of a Voronoi point where the Voronoi boundaries intersect each other. Furthermore, the map generation device can automatically associate, with each lane, a traffic regulation of the lane, and can also automatically associate a traffic regulation related to an intersection with each lane.
- Next, a map generation device according to a second embodiment will be described. The map generation device according to the second embodiment detects a plurality of vehicles captured in an image in which a road is represented, and identifies each line in which two or more of the plurality of detected vehicles are aligned as an individual lane.
- Note that the map generation device according to the first embodiment and the map generation device according to the second embodiment are different from each other in processing executed by a processor 7. Thus, details of map generation processing according to the second embodiment being executed by the processor 7 will be described below.
-
FIG. 5 is a functional block diagram of the processor 7 according to the second embodiment. As illustrated in FIG. 5, the processor 7 includes a vehicle detection unit 21, a lane detection unit 22, and a map generation unit 23. Each of these units included in the processor 7 is, for example, a function module achieved by a computer program executed on the processor 7. Alternatively, each of these units included in the processor 7 may be a dedicated arithmetic circuit provided in the processor 7. - The
vehicle detection unit 21 detects a vehicle represented in an image being a target of road generation processing. For example, the vehicle detection unit 21 detects, by inputting an image to a classifier for vehicle detection, a vehicle represented in the input image. As the classifier, the vehicle detection unit 21 can use, for example, a deep neural network (DNN) which has been trained in advance in such a way as to detect, from an input image, a vehicle represented in the image. The vehicle detection unit 21 can use, as such a DNN, a DNN having a CNN type architecture, such as a Single Shot MultiBox Detector (SSD) (see Wei Liu et al., "SSD: Single Shot MultiBox Detector", ECCV2016, 2016) or a Faster R-CNN (see Shaoqing Ren et al., "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks", NIPS, 2015), for example. - Alternatively, the
vehicle detection unit 21 may use a classifier other than the DNN as the classifier. For example, the vehicle detection unit 21 may use, as the classifier, a support vector machine (SVM) or an adaBoost classifier which has been trained in advance in such a way as to output, with a feature (e.g., Histograms of Oriented Gradients, HOG) calculated with respect to a window set on an image as an input, a determination result of whether or not a vehicle is represented in the window. The vehicle detection unit 21 calculates, while variously changing a position, a size, and an aspect ratio of a window to be set on an image, the feature with respect to the window, and obtains the determination result of whether or not the vehicle is represented in the window by inputting the calculated feature to the SVM or the adaBoost classifier. - Alternatively, the
vehicle detection unit 21 may use, as a classifier, a classifier for semantic segmentation similarly to the classifier used by the extraction unit 11 according to the first embodiment. In this case, a set of pixels representing a vehicle or a part of the vehicle is extracted by inputting an image to the classifier. Thus, the vehicle detection unit 21 may detect, as one vehicle, individual pixel groups connected to each other by executing labeling processing on the extracted set of pixels. - The
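The labeling processing mentioned above, grouping connected "vehicle" pixels into individual vehicles, is standard connected-component labeling. A minimal sketch (function name and the set-of-pixels input format are assumptions) using breadth-first search with 8-connectivity:

```python
from collections import deque

def label_components(mask):
    """mask: set of (x, y) pixels classified as 'vehicle'. Returns a list of
    8-connected pixel groups, each of which is treated as one vehicle."""
    remaining = set(mask)
    groups = []
    while remaining:
        seed = remaining.pop()
        queue, group = deque([seed]), {seed}
        while queue:
            x, y = queue.popleft()
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    n = (x + dx, y + dy)
                    if n in remaining:
                        remaining.remove(n)
                        group.add(n)
                        queue.append(n)
        groups.append(group)
    return groups
```

The centroid of each returned group can then serve as the vehicle position that is passed on to the lane detection unit.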
vehicle detection unit 21 notifies, for each detected vehicle, the lane detection unit 22 of a predetermined position in a region (e.g., the centroid of the region) in which the vehicle is represented as a position of the vehicle. - The
lane detection unit 22 detects a lane on the basis of a plurality of vehicles detected from an image. Herein, an individual vehicle generally travels along a lane. Thus, it is assumed that, for each lane, a plurality of vehicles are aligned along the lane on an image. Thus, the lane detection unit 22 detects a line in which two or more of a plurality of vehicles detected on an image are aligned, and detects the detected individual line as one lane. - For example, the
lane detection unit 22 assumes that there are a plurality of combinations of lines in which vehicles are aligned. Then, the lane detection unit 22 identifies a combination in which a total distance from a position of each vehicle to any of lines included in the assumed combinations is minimum among the plurality of combinations, and detects an individual line included in the identified combination as a lane. - In order to determine an assumed combination of lines in which vehicles are aligned, the
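The combination search described above can be sketched for the simplest case of a straight road aligned with the x-axis, where each candidate lane is a horizontal line at some cross-road offset. This is a simplification under stated assumptions; the function name, the offset representation, and the sample data are hypothetical.

```python
from itertools import combinations

def best_lane_lines(vehicle_ys, candidate_ys, n_lanes):
    """Among all n_lanes-sized combinations of candidate lane-center offsets,
    pick the one minimizing the total distance from each vehicle to its
    nearest chosen line (straight, axis-aligned road assumed)."""
    def cost(lines):
        return sum(min(abs(v - l) for l in lines) for v in vehicle_ys)
    return min(combinations(candidate_ys, n_lanes), key=cost)
```

With vehicles clustered near offsets 1 and 3, the combination (1, 3) wins over (1, 2) or (2, 3) because every vehicle lies close to one of its lines.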
lane detection unit 22 detects a road from an image, for example. For this purpose, the lane detection unit 22 extracts a set of pixels representing the road by inputting an image to the classifier for semantic segmentation which has been trained in advance in such a way as to extract the pixel representing the road, for example. Note that the lane detection unit 22 can use, as such a classifier, a classifier similar to the classifier used by the extraction unit 11 according to the first embodiment. The lane detection unit 22 sets an individual line obtained by executing thinning processing on the set of pixels representing the road as one road, and detects a position in which the lines intersect each other as an intersection. The lane detection unit 22 may set each of roads connecting two intersections as a single road being a target of lane detection. Alternatively, by applying Voronoi division to the set of pixels representing the road, the lane detection unit 22 may divide the set into a single road and an intersection area, similarly to the first embodiment. In this case, the lane detection unit 22 can calculate a Voronoi boundary for each single road by executing Voronoi division with, as a generating point, a pixel adjacent to a pixel representing other than the road among the set of pixels representing the road. Thus, similarly to the first embodiment, the lane detection unit 22 may identify an intersection area on the basis of a Voronoi point where Voronoi boundaries intersect each other, and set, as an individual single road, a section that is not included in the intersection area among a set of pixels representing the Voronoi boundary and a road around the Voronoi boundary. Note that, when a single road being a target of the lane detection is a curved line, the lane detection unit 22 may divide the single road into sections that can be approximated with a straight line, and detect a lane for each of the sections.
Further, for each single road being a target of the lane detection, the lane detection unit 22 calculates a width of the single road by counting the number of extracted pixels aligned in a direction orthogonal to an extending direction of the single road from the extracted set of pixels representing the road. The lane detection unit 22 can determine an assumed combination of lines in which vehicles are aligned by assuming one line for the center of the width of the single road or for each individual section defined by equally dividing the width of the single road by a predetermined number (e.g., two to six). -
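The width counting and the equal-division candidate placement above can be sketched, again for a road extending along the x-axis (an assumption; the names and the pixel-set representation are illustrative):

```python
def road_width_at(road_pixels, x):
    """For a road extending along the x-axis, count the road pixels in the
    column at x, i.e., the pixels aligned orthogonally to the road direction."""
    return sum(1 for (px, py) in road_pixels if px == x)

def lane_center_offsets(width, n_lanes):
    """Candidate lane-center positions: divide the width into n_lanes equal
    sections and take each section's midpoint."""
    section = width / n_lanes
    return [section * (i + 0.5) for i in range(n_lanes)]
```

For a road 6 pixels wide, two assumed lanes get candidate centers at offsets 1.5 and 4.5, which then feed the combination search sketched earlier.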
FIG. 6 is a diagram illustrating one example of the lane detection on the basis of an individual vehicle detected from an image. An individual vehicle 601 is detected in an image 600 illustrated in FIG. 6. Then, each line 602 in which the vehicles 601 are aligned along a road is detected as one lane. - The
lane detection unit 22 passes lane detection result data representing an individual lane to the map generation unit 23. - The
map generation unit 23 refers to the lane detection result data, and generates a road map. At this time, the map generation unit 23 includes, in the road map, information related to an individual lane included in each single road, e.g., information about the number of lanes included in each single road and the like. Furthermore, similarly to the map generation unit 15 according to the first embodiment, the map generation unit 23 may refer to information representing a geographical range represented in an image being a target of the map generation processing and positions of an individual intersection and an individual single road on the image, and associate, for each intersection and each single road, positional information about the intersection or the single road (e.g., a latitude and a longitude) with the road map. Still further, similarly to the map generation unit 15 according to the first embodiment, the map generation unit 23 may generate a road map of a wider range by coupling road maps generated for each image. - The
map generation unit 23 stores the generated road map in the memory 5, or writes the generated road map to the storage medium 8 via the storage medium access device 6. Alternatively, the map generation unit 23 may output the generated road map to other equipment via the communication interface 2. - Note that, in the present embodiment, the processor 7 may extract a pixel representing a regulation feature object from an image by executing processing similar to that of the
extraction unit 11 and the traffic regulation setting unit 14 according to the first embodiment, and associate, on the basis of the set of pixels representing each regulation feature object, the traffic regulation corresponding to the regulation feature object with the related lane, single road, or intersection. -
FIG. 7 is an operation flowchart of the map generation processing according to the second embodiment. The processor 7 may execute the map generation processing according to the following operation flowchart on each image being a target of the map generation processing. - The
vehicle detection unit 21 of the processor 7 detects a plurality of vehicles represented in an image by inputting the image to a classifier (step S201). - The
lane detection unit 22 of the processor 7 detects a line in which two or more of the plurality of vehicles detected on the image are aligned, and detects each detected individual line as one lane (step S202). - The
map generation unit 23 of the processor 7 generates a road map including information related to the detected individual lane (step S203). Then, the processor 7 ends the map generation processing. - As described above, the map generation device detects a vehicle from an image in which a road is represented, and detects each line in which two or more detected vehicles are aligned as one lane. Thus, the map generation device can detect a lane even when a lane division line is not displayed on a road. As a result, the map generation device can generate map information including information about each lane on the road from the image in which the road is represented.
- According to a modification example of each of the embodiments described above, the processor 7 may execute, as preprocessing, super-resolution processing on an image being a target of the map generation processing to improve resolution, execute orthographic projection correction processing to correct a tilt of a building represented in the image, or execute processing of removing a shadow. Note that the processor 7 may use known processing as the processing described above.
- Further, according to another modification example, the processor 7 of the map generation device may execute processing of the
vehicle detection unit 21 and the lane detection unit 22 according to the second embodiment in addition to the processing of each of the units according to the first embodiment. For example, the processor 7 may execute the processing of the vehicle detection unit 21 and the lane detection unit 22 according to the second embodiment on a single road including one lane detected by executing the processing of the extraction unit 11 and the lane detection unit 12 according to the first embodiment, i.e., detected on the basis of a Voronoi boundary, detect a line in which vehicles are aligned, and detect each detected individual line as a lane included on the single road. In this way, the map generation device can detect an individual lane on the basis of a road boundary line including a lane division line for a road on which the lane division line is drawn, and also detect an individual lane for a road on which a lane division line is not drawn. - According to still another modification example of the first embodiment, the
lane detection unit 12 may determine the number of lanes by determining whether or not a stop line is provided across the entire single road on the basis of a set of pixels representing the stop line being extracted by the extraction unit 11. -
FIGS. 8A and 8B are diagrams each illustrating one example of a stop line drawn on a road. In the example illustrated in FIG. 8A, a road 801 is a single-lane road, and thus a stop line 802 is drawn across the entire width of the road 801 and connected to road boundary lines at both ends of the road 801. In contrast, in the example illustrated in FIG. 8B, a road 811 is a two-lane road, and thus a stop line 812 is drawn across only a lane targeted by the stop line, i.e., across substantially half of the width of the road 811. In this way, a ratio of a length of the stop line to a road width varies in response to the number of lanes included on the road. - Thus, according to the modification example, the
lane detection unit 12 calculates, for a single road that includes a set of pixels representing the stop line being extracted among single roads identified by the intersection area detection unit 13, the ratio of the length of the stop line to the width of the single road on the basis of the set of pixels. Note that the width of the single road can be, for example, an interval between pixels representing road boundary lines at both ends of the single road being extracted by the extraction unit 11 for the single road. Alternatively, a classifier used by the extraction unit 11 may be trained in advance in such a way as to be able to extract a pixel representing a road itself. In this case, the width of the single road can be the number of continuous pixels representing the road itself in a direction across the single road. Then, the lane detection unit 12 detects one lane on the single road when a ratio of a length of a stop line to the width of the single road is greater than a predetermined threshold value (e.g., 0.6 to 0.8), and, on the other hand, detects two lanes on the single road when the ratio is equal to or less than the predetermined threshold value. Alternatively, the lane detection unit 12 may detect one lane on the single road when a set of pixels representing the stop line is connected to a set of pixels representing a boundary feature object at each of both ends of the single road, and, on the other hand, may detect two lanes on the single road when the set of pixels representing the stop line is connected to a set of pixels representing a boundary feature object at any one of end portions of the single road. Note that, in the modification example, the map generation unit 15 may generate a road map representing a detected individual lane, similarly to the embodiments described above. - Note that, in the modification example, the
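The stop-line-ratio heuristic in this modification example reduces to a single threshold comparison. A hedged sketch (the function name is hypothetical, and the 0.7 default is one point within the 0.6 to 0.8 range mentioned in the text):

```python
def lanes_from_stop_line(stop_line_len, road_width, threshold=0.7):
    """Heuristic from the modification example: a stop line spanning most of
    the road width implies a single lane; a stop line spanning roughly half
    of it implies two lanes (one per direction)."""
    return 1 if stop_line_len / road_width > threshold else 2
```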
lane detection unit 12 may execute the above-described processing of determining the number of lanes on the basis of a stop line only on a single road including one lane detected on the basis of a Voronoi boundary. - According to the modification example, the lane detection unit can detect an individual lane for a road on which a lane division line is not drawn.
- Furthermore, a computer program that causes a computer to execute a function of each unit included in the processor of the map generation device according to each of the embodiments or the modification examples described above may be provided in form of being stored in a computer-readable recording medium. Note that the computer-readable recording medium can be, for example, a magnetic recording medium, an optical recording medium, or a semiconductor memory.
Claims (5)
1.-4. (canceled)
5. A map generation device comprising:
a processor configured to:
detect a plurality of vehicles located on a road from an image in which the road is represented;
detect each line in which two or more of the plurality of detected vehicles are aligned as an individual lane; and
generate map information representing each of the detected lanes.
6. The map generation device according to claim 5, wherein the processor detects the road from the image, and detects, as the lane, a line in which two or more of the plurality of detected vehicles are aligned along the road.
7.-8. (canceled)
9. A map generation device comprising:
a processor configured to:
extract a road and a stop line from an image in which the road is represented;
detect a lane included on the road in response to a ratio of a length of the stop line to a width of the road; and
generate map information representing each of the detected lanes.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/935,638 US20230014725A1 (en) | 2019-06-07 | 2022-09-27 | Map generation device, map generation method, and map generation computer program |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2019107123A JP2020201649A (en) | 2019-06-07 | 2019-06-07 | Map generation device, map generation method and computer program for map generation |
JP2019-107123 | 2019-06-07 | ||
US16/894,736 US11486727B2 (en) | 2019-06-07 | 2020-06-05 | Map generation device, map generation method, and map generation computer program |
US17/935,638 US20230014725A1 (en) | 2019-06-07 | 2022-09-27 | Map generation device, map generation method, and map generation computer program |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/894,736 Division US11486727B2 (en) | 2019-06-07 | 2020-06-05 | Map generation device, map generation method, and map generation computer program |
Publications (1)
Publication Number | Publication Date |
---|---|
US20230014725A1 true US20230014725A1 (en) | 2023-01-19 |
Family
ID=73650007
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/894,736 Active 2040-08-18 US11486727B2 (en) | 2019-06-07 | 2020-06-05 | Map generation device, map generation method, and map generation computer program |
US17/935,638 Pending US20230014725A1 (en) | 2019-06-07 | 2022-09-27 | Map generation device, map generation method, and map generation computer program |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/894,736 Active 2040-08-18 US11486727B2 (en) | 2019-06-07 | 2020-06-05 | Map generation device, map generation method, and map generation computer program |
Country Status (2)
Country | Link |
---|---|
US (2) | US11486727B2 (en) |
JP (1) | JP2020201649A (en) |
Families Citing this family (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020257366A1 (en) * | 2019-06-17 | 2020-12-24 | DeepMap Inc. | Updating high definition maps based on lane closure and lane opening |
US11549816B2 (en) * | 2019-07-31 | 2023-01-10 | Here Global B.V. | Systems and methods for controlling mapping information inaccuracies |
US20220373354A1 (en) * | 2021-05-18 | 2022-11-24 | Argo AI, LLC | Automatic generation of vector map for vehicle navigation |
US11842429B2 (en) | 2021-11-12 | 2023-12-12 | Rockwell Collins, Inc. | System and method for machine code subroutine creation and execution with indeterminate addresses |
US11887222B2 (en) | 2021-11-12 | 2024-01-30 | Rockwell Collins, Inc. | Conversion of filled areas to run length encoded vectors |
US11915389B2 (en) | 2021-11-12 | 2024-02-27 | Rockwell Collins, Inc. | System and method for recreating image with repeating patterns of graphical image file to reduce storage space |
US11748923B2 (en) | 2021-11-12 | 2023-09-05 | Rockwell Collins, Inc. | System and method for providing more readable font characters in size adjusting avionics charts |
US11954770B2 (en) | 2021-11-12 | 2024-04-09 | Rockwell Collins, Inc. | System and method for recreating graphical image using character recognition to reduce storage space |
US11854110B2 (en) | 2021-11-12 | 2023-12-26 | Rockwell Collins, Inc. | System and method for determining geographic information of airport terminal chart and converting graphical image file to hardware directives for display unit |
US20230222788A1 (en) | 2022-01-12 | 2023-07-13 | Woven Alpha, Inc. | Roadmap generation system and method of using |
CN114419338B (en) * | 2022-03-28 | 2022-07-01 | 腾讯科技(深圳)有限公司 | Image processing method, image processing device, computer equipment and storage medium |
Family Cites Families (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2003132353A (en) * | 2001-10-29 | 2003-05-09 | Pasuko:Kk | Center line generator |
JP5075331B2 (en) * | 2005-09-30 | 2012-11-21 | アイシン・エィ・ダブリュ株式会社 | Map database generation system |
JP2007128141A (en) * | 2005-11-01 | 2007-05-24 | Hitachi Software Eng Co Ltd | System and method for determining road lane number in road image |
JP4459162B2 (en) | 2005-12-06 | 2010-04-28 | 池上通信機株式会社 | Velocity measuring device, method and program |
JP5301361B2 (en) * | 2009-06-01 | 2013-09-25 | 株式会社エヌ・ティ・ティ・ドコモ | Information processing apparatus, communication system, and information processing method |
JP2017041126A (en) | 2015-08-20 | 2017-02-23 | 株式会社デンソー | On-vehicle display control device and on-vehicle display control method |
US9418546B1 (en) * | 2015-11-16 | 2016-08-16 | Iteris, Inc. | Traffic detection with multiple outputs depending on type of object detected |
EP3252658B1 (en) | 2016-05-30 | 2021-08-11 | Kabushiki Kaisha Toshiba | Information processing apparatus and information processing method |
JP6886171B2 (en) | 2016-12-27 | 2021-06-16 | 株式会社オゼットクリエイティブ | Map information creation device, map information creation program and map information creation method |
JP7003512B2 (en) * | 2017-09-12 | 2022-01-20 | 日産自動車株式会社 | Vehicle driving control method and equipment |
KR102421855B1 (en) * | 2017-09-28 | 2022-07-18 | 삼성전자주식회사 | Method and apparatus of identifying driving lane |
DE112018006996B4 (en) | 2018-03-01 | 2022-11-03 | Mitsubishi Electric Corporation | Image processing device and image processing method |
CN111902697B (en) * | 2018-03-23 | 2024-05-07 | 三菱电机株式会社 | Driving support system, driving support method, and computer-readable storage medium |
- 2019-06-07 JP JP2019107123A patent/JP2020201649A/en active Pending
- 2020-06-05 US US16/894,736 patent/US11486727B2/en active Active
- 2022-09-27 US US17/935,638 patent/US20230014725A1/en active Pending
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20220107205A1 (en) * | 2020-10-06 | 2022-04-07 | Toyota Jidosha Kabushiki Kaisha | Apparatus, method and computer program for generating map |
US11835359B2 (en) * | 2020-10-06 | 2023-12-05 | Toyota Jidosha Kabushiki Kaisha | Apparatus, method and computer program for generating map |
Also Published As
Publication number | Publication date |
---|---|
JP2020201649A (en) | 2020-12-17 |
US11486727B2 (en) | 2022-11-01 |
US20200386567A1 (en) | 2020-12-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11486727B2 (en) | Map generation device, map generation method, and map generation computer program | |
US11113544B2 (en) | Method and apparatus providing information for driving vehicle | |
EP3462377B1 (en) | Method and apparatus for identifying driving lane | |
US11670087B2 (en) | Training data generating method for image processing, image processing method, and devices thereof | |
Alvarez et al. | Combining priors, appearance, and context for road detection | |
Jung et al. | An efficient lane detection algorithm for lane departure detection | |
US20180164812A1 (en) | Apparatus and method for generating training data to train neural network determining information associated with road included in image | |
US20210110180A1 (en) | Method and apparatus for traffic sign detection, electronic device and computer storage medium | |
US11280630B2 (en) | Updating map data | |
US11842494B2 (en) | Apparatus, method, and computer program for correcting road region | |
CN111091037A (en) | Method and device for determining driving information | |
JP7380532B2 (en) | Map generation device, map generation method, and map generation computer program | |
US20220219700A1 (en) | Apparatus, method, and computer program for generating map | |
KR102316818B1 (en) | Method and apparatus of updating road network | |
US11835359B2 (en) | Apparatus, method and computer program for generating map | |
Imad et al. | Navigation system for autonomous vehicle: A survey | |
Chen et al. | Integrated vehicle and lane detection with distance estimation | |
JP2017162125A (en) | Image analysis program, image analyzer and image analysis method | |
WO2021199584A1 (en) | Detecting debris in a vehicle path | |
JP2022082569A (en) | Map generation device, map generation method, and computer program for map generation | |
US20240119740A1 (en) | Method and apparatus with lane line determination | |
NL2031097B1 (en) | Ai based change detection system for executing a method to detect changes in geo-tagged videos to update hd maps | |
Yang et al. | Panorama-based multilane recognition for advanced navigation map generation | |
Hoveidar-Sefid et al. | Autonomous Trail Following. | |
Manoharan et al. | Autonomous lane detection on hilly terrain for perception-based navigation applications |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
| STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED |
| STPP | Information on status: patent application and granting procedure in general | Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
| STPP | Information on status: patent application and granting procedure in general | Free format text: FINAL REJECTION MAILED |
| STPP | Information on status: patent application and granting procedure in general | Free format text: ADVISORY ACTION MAILED |