WO2016031229A1 - Road map creation system, data processing device, and on-board device - Google Patents

Road map creation system, data processing device, and on-board device Download PDF

Info

Publication number
WO2016031229A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
road
common
images
vehicle
Prior art date
Application number
PCT/JP2015/004248
Other languages
French (fr)
Japanese (ja)
Inventor
Tomoaki Abe (朋明 阿部)
Original Assignee
Panasonic IP Management Co., Ltd. (パナソニックIPマネジメント株式会社)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Panasonic IP Management Co., Ltd.
Priority to JP2016544958A (published as JPWO2016031229A1)
Publication of WO2016031229A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformation in the plane of the image
    • G06T 3/40 Scaling the whole image or part thereof
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B 29/00 Maps; Plans; Charts; Diagrams, e.g. route diagram

Definitions

  • The present invention relates to a road map creation system, a data processing device, and an in-vehicle device that create a high-definition road image.
  • Systems for the automatic driving of vehicles are being studied.
  • In such a system, the vehicle is automatically steered based on road map data, for example by determining the curvature, slope, and unevenness of the road being traveled and the exact position of the vehicle on the road. For this reason, high-definition road map data is required.
  • High-definition road map data can be created by preparing a special road measurement vehicle and driving it on actual roads to measure them.
  • A system has also been disclosed that creates and updates stereoscopic images of buildings and the like on a map by combining photographed images with two-dimensional map data (see, for example, Patent Document 3).
  • The road map creation system of the present disclosure includes a plurality of in-vehicle devices that are mounted on a plurality of vehicles and transmit captured images of the outside of each vehicle to a server device, and a data processing device that creates a road image using the plurality of captured images collected by the server device. The data processing device includes an image search unit that searches the collected captured images for a plurality of common captured images containing an image of a common spot, a feature extraction unit that extracts features within the collected captured images, and a high-definition processing unit that aligns the image positions of the plurality of common captured images based on the features and generates, from the pixel information of the aligned common captured images, a road image of higher definition than each common captured image.
  • The data processing device of the present disclosure creates a road image using a plurality of captured images that are captured by a plurality of vehicles and collected by a server device. It includes an image search unit that searches the collected captured images for a plurality of common captured images containing an image of a common spot, a feature extraction unit that extracts features within the collected captured images, and a high-definition processing unit that aligns the image positions of the plurality of common captured images based on the features and generates, from the pixel information of the aligned common captured images, a road image of higher definition than each common captured image.
  • The in-vehicle device of the present disclosure is mounted on a vehicle and provides captured images to a data processing device having an image search unit that searches a plurality of captured images collected by a server device for a plurality of common captured images containing an image of a common spot, a feature extraction unit that extracts features within the collected captured images, and a high-definition processing unit that aligns the image positions of the plurality of common captured images based on the features and generates, from the pixel information of the aligned common captured images, a road image of higher definition than each common captured image. The in-vehicle device includes an input unit that inputs captured images of the outside of the vehicle from an imaging device mounted on the vehicle, and a communication unit that transmits those captured images to the server device.
  • FIG. 1A is a block diagram illustrating a road map creation system according to an embodiment.
  • FIG. 1B is a block diagram illustrating a server device and a data processing device of the road map creation system according to the embodiment.
  • FIG. 2 is a diagram showing transmission data sent from the in-vehicle device of the road map creation system of the embodiment to the server device.
  • FIG. 3 is a data chart showing a modification of transmission data sent from the in-vehicle device to the server device of the road map creation system of the embodiment.
  • FIG. 4 is a diagram illustrating a vehicle database of the road map creation system according to the embodiment.
  • FIG. 5 is a flowchart showing the operation of the data processing apparatus of the road map creation system of the embodiment.
  • FIG. 6 is a diagram showing a photographed image of the road map creation system of the embodiment.
  • FIG. 7 is a diagram explaining the process of developing captured images into road surface coordinates in the road map creation system of the embodiment.
  • FIG. 8 is a diagram for explaining the feature extraction process of the road map creation system according to the embodiment.
  • FIG. 9A is a diagram illustrating a process of copying captured image data to a high-definition data series in the road map creation system of the embodiment.
  • FIG. 9B is a diagram illustrating a process of copying photographed image data to a high-definition data series in the road map creation system according to the embodiment.
  • FIG. 9C is a diagram illustrating a process of copying captured image data to a high-definition data series in the road map creation system according to the embodiment.
  • FIG. 10 is a diagram for explaining the principle of generating high-definition image data from a plurality of image data in the road map creation system of the embodiment.
  • Currently, vehicles are often equipped with a camera that photographs the surroundings of the vehicle, such as a drive recorder or a parking guide camera.
  • Furthermore, vehicles often have a function, such as a navigation device, for communicating with a server device. It is therefore conceivable to collect captured images from a plurality of vehicles and generate map data, as in the prior art disclosed in Patent Documents 1 and 2.
  • However, captured images from vehicles have low resolution and cannot be used as-is to create high-definition road map data.
  • FIG. 1A is a block diagram showing a road map creation system according to an embodiment of the present invention.
  • The road map creation system of the present embodiment includes a plurality of in-vehicle devices 10 mounted on a plurality of vehicles 100, a server device 30 communicably connected to the in-vehicle devices 10 via a network, and a data processing device 50 that creates road map data.
  • FIG. 1B is a block diagram showing the server device 30 and the data processing device 50.
  • The server device 30 and the data processing device 50 may be implemented on a single computer, or the processing may be distributed across multiple computers.
  • The in-vehicle device 10 includes one or more cameras 11 as imaging devices, a position acquisition unit 12, a communication unit 13, a display 14, a control unit 15, an interface (I/F) 16, and a vehicle speed sensor 17.
  • The in-vehicle device 10 can be configured by adding a control function to an existing device such as a car navigation device.
  • Note that the in-vehicle device 10 need not itself include the camera 11, the position acquisition unit 12, and the vehicle speed sensor 17; it may instead be configured to receive data from a camera 11, position acquisition unit 12, and vehicle speed sensor 17 mounted on the vehicle 100. The display 14 may also be omitted.
  • The camera 11 is mounted so as to photograph the outside of the passenger compartment, such as the area in front of or behind the vehicle 100.
  • The camera 11 photographs the outside of the passenger compartment in accordance with commands from the control unit 15, and sends the captured image data to the control unit 15.
  • The position acquisition unit 12 calculates the current position using, for example, GPS (Global Positioning System), an acceleration sensor, and a vehicle speed sensor.
  • The acquired current position data is sent to the control unit 15.
  • The vehicle speed sensor 17 is, for example, a wheel speed sensor, and sends current vehicle speed data to the control unit 15.
  • The communication unit 13 can transmit data to the server device 30 via the network.
  • The communication unit 13 may be configured to transmit data via a mobile communication network and the Internet.
  • The interface 16 connects the control unit 15 to each unit so that data can be input and output.
  • The interface 16 functions as an input unit for captured images.
  • The control unit 15 comprehensively controls each unit of the in-vehicle device 10.
  • When a captured image is input from the camera 11, the control unit 15 obtains position information indicating the position of the vehicle 100 (that is, of the in-vehicle device 10) from the position acquisition unit 12 and vehicle speed information indicating the speed of the vehicle 100 from the vehicle speed sensor 17, associates them with the vehicle ID (identification information), and transmits them to the server device 30 via the communication unit 13.
  • FIG. 2 is a data chart showing transmission data sent from the in-vehicle device to the server device.
  • FIG. 3 is a data chart showing a modification of the transmission data of the in-vehicle device.
  • The transmission data 20 of the in-vehicle device 10 includes the data items shown in FIG. 2: the captured image data, the acquisition date and time of the captured image, position information, vehicle speed information, and data format information.
  • The data serial number is a serial number assigned per vehicle 100.
  • The vehicle ID and camera number are information for identifying the vehicle 100 and, when a plurality of cameras 11 are mounted on the vehicle 100, for identifying each camera 11.
  • The vehicle ID and camera number are registered in the vehicle database 33 (see FIG. 4).
  • The vehicle ID and camera number serve as identification information from which the mounting position of the in-vehicle camera on the vehicle 100 can be determined.
  • The transmission data 20 may also include information on the traveling direction of the vehicle.
  • As shown in FIG. 3, the transmission data 20A sent from the in-vehicle device 10 to the server device 30 may include camera position information (front/rear, left/right, height, elevation angle, and so on) instead of the camera number.
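As a rough illustration, the transmission data 20 could be modeled as a record like the following Python sketch. The field names are hypothetical; only the listed items (serial number, vehicle ID, camera number, acquisition date and time, position, vehicle speed, data format, and the image itself) come from the description above.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class TransmissionData:
    """One record sent from the in-vehicle device 10 to the server device 30.

    Field names are illustrative, not taken from the patent."""
    serial_number: int        # serial number assigned per vehicle
    vehicle_id: str           # identifies the vehicle 100
    camera_number: int        # identifies the camera 11 on the vehicle
    captured_at: datetime     # acquisition date and time of the image
    latitude: float           # shooting position
    longitude: float
    vehicle_speed_kmh: float  # can supplement the position information
    image_format: str         # data format, e.g. "jpeg"
    image_data: bytes         # the captured image itself
```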
  • The server device 30 receives and collects captured image data, including images of roads, from the plurality of in-vehicle devices 10.
  • The server device 30 includes a communication unit 31, a control unit 32, a vehicle database 33, and a captured image database 34.
  • The communication unit 31 receives the transmission data of the in-vehicle devices 10, including captured images, via the network.
  • The captured image database 34 has a large-scale storage device and stores the captured images sent from the plurality of in-vehicle devices 10.
  • Each captured image is stored in association with additional information.
  • The additional information includes shooting position (latitude and longitude) information, camera position information, and shooting date and time information.
  • The additional information may also include vehicle speed information at the time of shooting, the image data format, the serial number, and so on. The vehicle speed information can be used to supplement the shooting position information.
  • The control unit 32 uses the information in the vehicle database 33 and the transmission data of the in-vehicle device 10 to register each captured image containing a road in the captured image database 34 in association with the necessary information.
  • FIG. 4 is a data chart showing the vehicle database.
  • In the vehicle database 33, information on the plurality of vehicles 100 on which the in-vehicle devices 10 are mounted is registered.
  • The information on each vehicle 100 includes a vehicle ID (identification information) identifying the vehicle, the number of cameras 11 mounted so as to be able to photograph the road, and camera position information indicating the mounting position of each camera 11 on the vehicle 100.
  • The camera position information includes information indicating the horizontal mounting position on the vehicle 100, the height of the camera 11, the elevation angle of the camera 11, and so on.
  • When the control unit 32 of the server device 30 receives the transmission data 20 from an in-vehicle device 10, it reads the camera position information corresponding to the vehicle ID and camera number from the vehicle database 33, and registers the read camera position information in the captured image database 34 in association with the captured image data.
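A minimal sketch of this registration step, reusing the hypothetical TransmissionData record above and plain dictionaries in place of the vehicle database 33 and captured image database 34:

```python
# Hypothetical in-memory stand-ins for the two databases.
vehicle_db = {
    # (vehicle ID, camera number) -> camera position information
    ("vehicle-001", 1): {"mount": "front-center", "height_m": 1.2, "elevation_deg": -5.0},
}
captured_image_db = []

def register_captured_image(record: TransmissionData) -> None:
    """Look up the camera position by (vehicle ID, camera number) and
    store it in association with the captured image data."""
    camera_position = vehicle_db[(record.vehicle_id, record.camera_number)]
    captured_image_db.append({
        "image": record.image_data,
        "position": (record.latitude, record.longitude),
        "camera_position": camera_position,
        "captured_at": record.captured_at,
        "speed_kmh": record.vehicle_speed_kmh,
    })
```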
  • As shown in FIG. 1B, the data processing device 50 includes an image search unit 51, a feature extraction unit 52, a high-definition processing unit 53, a position designation unit 54, a map data management unit 55, and a road map database 56.
  • The position designation unit 54 designates position information indicating a point for which a high-definition road image is to be created, or a point for which a high-definition road image is to be updated.
  • The position designation unit 54 may designate the position information based on input from the outside. For example, an operator may input a command specifying the district for which road map information is to be created, and the position designation unit 54 may designate position information based on this command.
  • Alternatively, the position designation unit 54 may designate position information using a predetermined algorithm.
  • For example, the position designation unit 54 may sequentially designate position information at predetermined intervals of meters within an externally specified area on the earth.
  • The position designation unit 54 may also be configured to designate points that appear in the captured images received by the server device 30 from the in-vehicle devices 10.
  • The image search unit 51 searches the captured image database 34 for captured images containing an image of the point designated by the position designation unit 54.
  • The image search unit 51 can find the corresponding captured images from the position information associated with each captured image. It may additionally take into account the mounting position of the camera 11 and the traveling direction of the vehicle. For example, if the camera 11 photographs the area ahead of the vehicle, the position of the area imaged by the camera 11 can be calculated by adding a displacement in the traveling direction to the position information, as in the sketch below.
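A minimal sketch of that calculation, assuming a flat-earth approximation valid over short distances (the function name and parameters are illustrative):

```python
import math

def subject_area_position(lat: float, lon: float,
                          heading_deg: float, offset_m: float):
    """Estimate the position imaged by a forward-facing camera by
    displacing the vehicle position offset_m metres along its heading
    (0 degrees = north, 90 = east). Small-area approximation only."""
    meters_per_deg_lat = 111_320.0
    meters_per_deg_lon = meters_per_deg_lat * math.cos(math.radians(lat))
    dlat = offset_m * math.cos(math.radians(heading_deg)) / meters_per_deg_lat
    dlon = offset_m * math.sin(math.radians(heading_deg)) / meters_per_deg_lon
    return lat + dlat, lon + dlon
```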
  • The feature extraction unit 52 extracts features contained in the images in order to accurately detect the positional deviation between a plurality of captured images.
  • A feature is a fixed object attached to the land; for example, artificial or natural marks on the road can be used. Artificial marks on the road include road markings (white lines, orange lines, etc.), cracks in road markings, manholes, and the like. Natural marks include cracks or potholes in the road surface.
  • Other usable features include traffic facilities such as guardrails and signs, and structures such as buildings and signboards.
  • The feature extraction unit 52 extracts, for example, the edges of features.
  • The high-definition processing unit 53 uses a plurality of captured images whose shooting positions are slightly shifted from one another to create a road image of higher definition than each captured image. The method for creating the high-definition image is described later.
  • The map data management unit 55 registers the high-definition road image data of each point created by the high-definition processing unit 53 in the road map database 56 together with its position information. In this way, high-definition road images of a plurality of sections are assembled and road map data is generated.
  • The road map database 56 has a large-scale storage device and stores position information and high-definition road images in association with each other.
  • FIG. 5 is a flowchart showing an example of the operation of the data processing apparatus.
  • In step S1, the position designation unit 54 designates a point for which a high-definition road image is to be created.
  • The method of designating the point is as described above for the position designation unit 54.
  • In step S2, the image search unit 51 searches the captured image database 34 for a plurality of captured images containing images of a common point (hereinafter also called common captured images).
  • The search method is as described above for the image search unit 51.
  • In step S3, the feature extraction unit 52 first reads the captured images selected in step S2 together with their additional information (position information, camera mounting position, etc.), develops each captured image into road surface coordinates, and then normalizes its luminance. This process is described with reference to FIGS. 6 and 7.
  • FIG. 6 is a diagram illustrating an example of a captured image.
  • FIG. 7 is a diagram for explaining the development process to road surface coordinates.
  • As shown in FIG. 6, in a captured image, nearby parts of the road appear large and distant parts appear small.
  • Among the additional information associated with a captured image, the camera position information specifies the height and angle of the shooting position relative to the road. From this information, treating the imaged surface as a plane, a mapping function can be obtained from the captured image to a plan-view image (a sketch of such a mapping follows below).
  • The feature extraction unit 52 obtains this mapping function, performs image recognition to identify the road in the captured image, and develops the road image into planar road surface coordinates as shown in FIG. 7.
  • The developed image has different resolutions on the near and far sides, with lower resolution on the far side.
  • FIG. 7 shows the size of a pixel 81 in range A and the size of a pixel 82 in range B.
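The patent does not give the mapping function explicitly; the following is a minimal sketch of one such plan-view mapping, assuming an ideal pinhole camera at a known height, pitched down toward a flat road, with lens distortion and vehicle attitude changes ignored:

```python
import math

def pixel_to_road(u: float, v: float, img_w: int, img_h: int,
                  f_px: float, cam_height_m: float, pitch_deg: float):
    """Map an image pixel (u, v) to road-plane coordinates in metres,
    x forward and y to the right of the camera. Returns None for pixels
    at or above the horizon, which cannot hit the road plane."""
    cx, cy = img_w / 2.0, img_h / 2.0
    dx = (u - cx) / f_px          # ray direction in camera coords (x right)
    dy = (v - cy) / f_px          # (y down, z forward = 1)
    p = math.radians(pitch_deg)   # downward pitch of the optical axis
    denom = dy * math.cos(p) + math.sin(p)
    if denom <= 0:
        return None
    t = cam_height_m / denom      # distance along the ray to the road plane
    x_fwd = t * (math.cos(p) - dy * math.sin(p))
    y_right = t * dx
    return x_fwd, y_right
```

Consistent with the text above, pixels near the bottom of the image (large v) map to nearby, finely resolved road cells, while pixels near the horizon map to distant cells covering much more road per pixel.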
  • Luminance normalization is performed, for example, by calculating the average luminance of all road pixels and uniformly subtracting from or adding to the luminance of all pixels so that the average matches a predetermined reference value. In this way, the luminance of images taken in dark conditions and images taken in bright conditions can be brought to a standard level.
  • Luminance normalization may also include a correction according to the near-far position along the road.
  • The feature extraction unit 52 performs this normalization processing.
  • The feature extraction unit 52 may also normalize hue or saturation according to the characteristics of the camera 11.
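A minimal sketch of the uniform luminance shift described above (the reference value 128 is an arbitrary choice):

```python
import numpy as np

def normalize_luminance(road_pixels: np.ndarray,
                        reference: float = 128.0) -> np.ndarray:
    """Shift all road pixel luminances uniformly so that their average
    equals the reference value, clipping to the valid 8-bit range."""
    offset = reference - road_pixels.mean()
    return np.clip(road_pixels + offset, 0, 255)
```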
  • In step S4, the feature extraction unit 52 extracts features in the image from the developed image.
  • the feature extraction process will be described with reference to FIG.
  • FIG. 8 is a diagram for explaining an example of feature extraction processing.
  • The feature extraction unit 52 extracts features, for example, by performing edge detection in the image.
  • Feature extraction is preferably performed on marks on the road, in the near-side (high-resolution) region of the captured image.
  • For example, when extracting a white line on the road, the feature extraction unit 52 performs finer extraction on the white line f1 on the near side of the captured image (see FIG. 7).
  • The feature extraction unit 52 also extracts cracks or faded parts of the white line f1. By comparing the features extracted in this way, it can be determined whether the white line f1 is the same across a plurality of captured images.
  • The feature extraction unit 52 performs the development into road surface coordinates and the feature extraction for each common captured image found in step S2.
  • The extracted feature data and the image data developed into road surface coordinates are sent to the high-definition processing unit 53.
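The patent does not prescribe a specific edge operator; as one illustrative choice, a plain Sobel edge magnitude could serve as the first step of extracting marks such as white lines:

```python
import numpy as np

def edge_magnitude(gray: np.ndarray) -> np.ndarray:
    """Sobel gradient magnitude of a 2-D grayscale image; strong values
    mark candidate feature edges (white lines, cracks, manholes)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = gray.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = gray[i - 1:i + 2, j - 1:j + 2]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    return np.hypot(gx, gy)
```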
  • Note that the search for common captured images (step S2), the development from captured images into road surface coordinates (the first part of step S3), and the feature extraction (step S4) need not be performed in this order.
  • For example, feature extraction may be performed sequentially on the captured images as they are received.
  • The extracted feature data may be stored in association with each captured image.
  • The extracted feature data can also be used when searching for common captured images.
  • In step S5, the high-definition processing unit 53 aligns the plurality of common captured images based on the extracted features.
  • The common captured images to be aligned are the images developed into road surface coordinates.
  • The alignment is performed so that the same features overlap between the common captured images.
  • Position information based on GPS contains relatively large errors. Aligning the images so that their features overlap therefore increases the alignment accuracy between the common captured images.
  • If, in step S5, a common feature is not found in some common captured image, that image may be excluded from processing.
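As a crude stand-in for this feature-based alignment, the integer translation that maximizes the overlap of two binary feature maps can be found by exhaustive search (real systems would use sub-pixel registration; np.roll wraps at the borders, which is tolerable for small shifts in a sketch):

```python
import numpy as np

def best_shift(ref_features: np.ndarray, features: np.ndarray,
               max_shift: int = 10):
    """Return the (dy, dx) translation of `features` that best overlaps
    `ref_features`, both given as same-shaped 0/1 feature maps."""
    best, best_score = (0, 0), -1.0
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(features, dy, axis=0), dx, axis=1)
            score = float((ref_features * shifted).sum())
            if score > best_score:
                best_score, best = score, (dy, dx)
    return best
```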
  • In step S6, the high-definition processing unit 53 copies the pixel data of the plurality of common captured images into a high-definition data series, weighting and adding the pixel data of each common captured image.
  • FIG. 9A to FIG. 9C are diagrams for explaining processing for copying photographed image data to a high-definition data series.
  • FIG. 9A shows an enlarged view of some of the pixels 83 of one common captured image developed on road surface coordinates.
  • FIG. 9B shows an enlarged view of part of the arrangement of fine pixels corresponding to the high-definition data series.
  • FIG. 9C shows an example of the pixel data copied into the high-definition data series.
  • As shown in FIG. 9A, a common captured image developed on road surface coordinates consists of large pixels 83.
  • The data series of the high-definition image corresponds to the arrangement of fine pixels 90 shown in FIG. 9B.
  • In FIGS. 9B and 9C, only one of the pixels 90 is labeled, to avoid clutter.
  • In step S6, first, as shown in FIGS. 9A to 9C, the fine pixels 90 of the high-definition image are superimposed on the large pixels 83 of the common captured image. The pixel data of the pixel 83 on which each fine pixel 90 falls is then copied as the data of that pixel 90. In this way, the data of one common captured image is copied into the high-definition data series.
  • In this superposition, a plurality of adjacent pixels 83 may overlap a single pixel 90.
  • In that case, the pixel data of the pixel 83 with the largest overlapping area may be copied as the data of the pixel 90.
  • Alternatively, the pixel data of the overlapping pixels 83 may be averaged and copied as the data of the pixel 90.
  • Next in step S6, the pixel data of the plurality of common captured images are weighted and added to form the data of each pixel 90 of the high-definition image.
  • The weighting is preferably chosen so that the weight increases with the reliability of the pixel data.
  • For example, if the pixel data of each of n common captured images is assumed to be equally reliable, the data value of each pixel may be weighted by 1/n.
  • When, for example, existing pixel data based on a larger number of images is combined with pixel data from one newly added image, the former may be given a large weight (for example, (m - 1)/m) and the latter a small weight (for example, 1/m). A sketch of this accumulation follows below.
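A minimal sketch of this copy-and-accumulate step, assuming aligned images, an integer scale factor between coarse and fine pixels, and nearest-pixel copying (names and the np.kron-based replication are illustrative, not from the patent):

```python
import numpy as np

def accumulate(hd_sum: np.ndarray, hd_weight: np.ndarray,
               image: np.ndarray, scale: int, weight: float) -> None:
    """Replicate each coarse pixel of `image` into a scale x scale block
    of fine pixels and add it into the running weighted sum. After all
    common captured images are accumulated, hd_sum / hd_weight is the
    high-definition estimate."""
    fine = np.kron(image, np.ones((scale, scale)))  # coarse -> fine copy
    hd_sum += weight * fine
    hd_weight += weight
```

For n equally reliable images one would call this with weight = 1/n for each, matching the uniform weighting described above.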
  • FIG. 10 is a diagram for explaining the principle of generating high-definition image data from a plurality of image data.
  • FIG. 10 shows pixels 87 (a 4 × 4 section of the first image), pixels 88 (a 4 × 4 section of the second image), and the superposition of the pixels 87 and 88.
  • The pixels 87 of the first image and the pixels 88 of the second image were obtained by imaging the same subject with a slight shift.
  • The positional shift is shown as a deviation from the origin O on the xy plane. To avoid clutter, only one each of the pixels 87 and 88 is labeled.
  • The data of one pixel is a value obtained by averaging the luminance and color over the subject region corresponding to that pixel. Therefore, when a pixel 87 of the first image and a pixel 88 of the second image are obtained by photographing the same subject region with a slight shift, aligning and superimposing the two makes it possible to estimate the luminance and color of regions finer than either pixel. For example, as shown in FIG. 10C, the range in which one pixel 87 of the first image and one pixel 88 of the second image overlap is smaller than either pixel 87 or pixel 88 alone, and the actual luminance and color of the subject in this overlapping range are reflected in the data of both pixels 87 and 88 that cover it.
  • The luminance and color of regions finer than the pixels 87 and 88 are thus estimated by weighting and adding the luminance and color of the slightly shifted pixels 87 and 88.
  • The more images of the same subject that are available, the more accurately the luminance and color of fine regions can be estimated, yielding a higher-definition image. A toy numeric illustration follows below.
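As a toy numeric illustration of this principle (values invented): suppose one image has a pixel covering the interval [0, 1] with value 10, and a second, half-pixel-shifted image has a pixel covering [0.5, 1.5] with value 20. Averaging the pixels that cover the fine cell [0.5, 1] estimates its value as 15, which is information finer than either source pixel:

```python
# Pixel span (start, end) -> averaged value recorded by that pixel.
image_1 = {(0.0, 1.0): 10.0, (1.0, 2.0): 30.0}
image_2 = {(0.5, 1.5): 20.0}   # shifted by half a pixel

def estimate(cell_start: float, cell_end: float) -> float:
    """Equal-weight (1/n) average of all pixels covering the fine cell."""
    covering = [v for (s, e), v in {**image_1, **image_2}.items()
                if s <= cell_start and cell_end <= e]
    return sum(covering) / len(covering)

print(estimate(0.5, 1.0))   # 15.0
```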
  • In step S6, based on this principle, the pixel data of the plurality of low-resolution common captured images are aligned and weighted-added to estimate the luminance and color of regions smaller than the pixels of the common captured images.
  • In this way, a high-definition road image is obtained.
  • In step S7, the map data management unit 55 registers the high-definition road image obtained in step S6, together with its position information, in the road map database.
  • In this way, road map data consisting of continuous high-definition road images is generated.
  • The data processing device 50 can create road map data covering many districts by repeatedly performing the processing of FIG. 5.
  • With the road map creation system of the present embodiment, a large number of captured images can be collected using in-vehicle devices mounted on ordinary automobiles, and high-definition road map data can be created from them. It is therefore unnecessary to prepare many special road measurement vehicles, and road map data can be created efficiently.
  • The present invention is not limited to the specific contents described in the above embodiment.
  • In the embodiment, captured images containing an image of the same point are searched for using position information and shooting direction information.
  • Alternatively, captured images estimated to contain an image of approximately the same point may be searched for using position information alone.
  • In that case, the proportion of captured images that do not actually contain an image of the same point is higher, but such images can be excluded through the feature extraction process and the feature-based alignment process.
  • The embodiment shows one process for creating a high-definition road image from a plurality of captured images.
  • The processing of the embodiment is merely an example, however.
  • Various known image processing techniques for creating a high-definition image from a plurality of images captured with slight positional shifts may be applied.
  • In the embodiment, road map data is finally created.
  • The road map data need not include road images of all sections; it may be road map data in which road images of some sections are missing.
  • The present invention can be used in a system for creating a road map.

Abstract

A road map creation system is provided with a plurality of on-board devices that are mounted in a plurality of vehicles and each of which transmits a captured image of the outside of each vehicle to a server device, and a data processing device that creates a road image by using a plurality of captured images collected in the server device. The data processing device includes an image search unit that searches for a plurality of common captured images each including an image of a common spot among the collected captured images, a feature extracting unit that extracts a feature within the plurality of collected captured images, and a high-definition processing unit that generates a higher-definition road image than the common captured images by combining the common captured images through association of pixels representing an identical part of a subject in the plurality of common captured images on the basis of the feature.

Description

Road map creation system, data processing device, and in-vehicle device

The present invention relates to a road map creation system, a data processing device, and an in-vehicle device that create a high-definition road image.
Systems for the automatic driving of vehicles are being studied. In such a system, the vehicle is automatically steered based on road map data, for example by determining the curvature, slope, and unevenness of the road being traveled and the exact position of the vehicle on the road. For this reason, high-definition road map data is required.

High-definition road map data can be created by preparing a special road measurement vehicle and driving it on actual roads to measure them.

Conventionally, systems have been proposed that collect captured images from a plurality of vehicles or from a plurality of users, and combine the collected captured images to generate three-dimensional map data (see, for example, Patent Documents 1 and 2).

A system has also been disclosed that creates and updates stereoscopic images of buildings and the like on a map by combining photographed images with two-dimensional map data (see, for example, Patent Document 3).

Patent Document 1: Japanese Patent Laid-Open No. 2002-189412
Patent Document 2: Japanese Patent No. 5472142
Patent Document 3: International Publication No. 2008/062819
The road map creation system of the present disclosure includes a plurality of in-vehicle devices that are mounted on a plurality of vehicles and transmit captured images of the outside of each vehicle to a server device, and a data processing device that creates a road image using the plurality of captured images collected by the server device. The data processing device includes an image search unit that searches the collected captured images for a plurality of common captured images containing an image of a common spot, a feature extraction unit that extracts features within the collected captured images, and a high-definition processing unit that aligns the image positions of the plurality of common captured images based on the features and generates, from the pixel information of the aligned common captured images, a road image of higher definition than each common captured image.

The data processing device of the present disclosure creates a road image using a plurality of captured images that are captured by a plurality of vehicles and collected by a server device. It includes an image search unit that searches the collected captured images for a plurality of common captured images containing an image of a common spot, a feature extraction unit that extracts features within the collected captured images, and a high-definition processing unit that aligns the image positions of the plurality of common captured images based on the features and generates, from the pixel information of the aligned common captured images, a road image of higher definition than each common captured image.

The in-vehicle device of the present disclosure is mounted on a vehicle and provides captured images to a data processing device having an image search unit that searches a plurality of captured images collected by a server device for a plurality of common captured images containing an image of a common spot, a feature extraction unit that extracts features within the collected captured images, and a high-definition processing unit that aligns the image positions of the plurality of common captured images based on the features and generates, from the pixel information of the aligned common captured images, a road image of higher definition than each common captured image. The in-vehicle device includes an input unit that inputs captured images of the outside of the vehicle from an imaging device mounted on the vehicle, and a communication unit that transmits those captured images to the server device.
FIG. 1A is a block diagram illustrating a road map creation system according to an embodiment.
FIG. 1B is a block diagram illustrating a server device and a data processing device of the road map creation system according to the embodiment.
FIG. 2 is a diagram showing transmission data sent from the in-vehicle device of the road map creation system of the embodiment to the server device.
FIG. 3 is a data chart showing a modification of the transmission data sent from the in-vehicle device to the server device in the road map creation system of the embodiment.
FIG. 4 is a diagram illustrating the vehicle database of the road map creation system according to the embodiment.
FIG. 5 is a flowchart showing the operation of the data processing device of the road map creation system of the embodiment.
FIG. 6 is a diagram showing a captured image in the road map creation system of the embodiment.
FIG. 7 is a diagram explaining the process of developing captured images into road surface coordinates in the road map creation system of the embodiment.
FIG. 8 is a diagram explaining the feature extraction process of the road map creation system according to the embodiment.
FIG. 9A is a diagram illustrating the process of copying captured image data into a high-definition data series in the road map creation system of the embodiment.
FIG. 9B is a diagram illustrating the process of copying captured image data into a high-definition data series in the road map creation system of the embodiment.
FIG. 9C is a diagram illustrating the process of copying captured image data into a high-definition data series in the road map creation system of the embodiment.
FIG. 10 is a diagram explaining the principle of generating high-definition image data from a plurality of images in the road map creation system of the embodiment.
With the conventional technology described above, creating road map data for locations across the country using tens or hundreds of road measurement vehicles requires enormous time and cost. Furthermore, since road construction is frequently carried out in various places, there is a limit to how well a fixed number of road measurement vehicles can keep the road map data up to date.

In addition, vehicles are now often equipped with a camera that photographs the surroundings of the vehicle, such as a drive recorder or a parking guide camera. Furthermore, vehicles often have a function, such as a navigation device, for communicating with a server device. It is therefore conceivable to collect captured images from a plurality of vehicles and generate map data, as in the prior art disclosed in Patent Documents 1 and 2. However, captured images from vehicles have low resolution and cannot be used as-is to create high-definition road map data.
Hereinafter, the road map creation system of the embodiment will be described in detail with reference to the drawings.

FIG. 1A is a block diagram showing a road map creation system according to an embodiment of the present invention.

The road map creation system of the present embodiment includes a plurality of in-vehicle devices 10 mounted on a plurality of vehicles 100, a server device 30 communicably connected to the in-vehicle devices 10 via a network, and a data processing device 50 that creates road map data.

FIG. 1B is a block diagram showing the server device 30 and the data processing device 50. The server device 30 and the data processing device 50 may be implemented on a single computer, or the processing may be distributed across multiple computers.

The in-vehicle device 10 includes one or more cameras 11 as imaging devices, a position acquisition unit 12, a communication unit 13, a display 14, a control unit 15, an interface (I/F) 16, and a vehicle speed sensor 17. The in-vehicle device 10 can be configured by adding a control function to an existing device such as a car navigation device.

Note that the in-vehicle device 10 need not itself include the camera 11, the position acquisition unit 12, and the vehicle speed sensor 17; it may instead be configured to receive data from a camera 11, position acquisition unit 12, and vehicle speed sensor 17 mounted on the vehicle 100. The display 14 may also be omitted.
The camera 11 is mounted so as to photograph the outside of the passenger compartment, such as the area in front of or behind the vehicle 100. The camera 11 photographs the outside of the passenger compartment in accordance with commands from the control unit 15, and sends the captured image data to the control unit 15.

The position acquisition unit 12 calculates the current position using, for example, GPS (Global Positioning System), an acceleration sensor, and a vehicle speed sensor. The acquired current position data is sent to the control unit 15.

The vehicle speed sensor 17 is, for example, a wheel speed sensor, and sends current vehicle speed data to the control unit 15.

The communication unit 13 can transmit data to the server device 30 via the network. The communication unit 13 may be configured to transmit data via a mobile communication network and the Internet.

The interface 16 connects the control unit 15 to each unit so that data can be input and output. The interface 16 functions as an input unit for captured images.

The control unit 15 comprehensively controls each unit of the in-vehicle device 10. When a captured image is input from the camera 11, the control unit 15 obtains position information indicating the position of the vehicle 100 (that is, of the in-vehicle device 10) from the position acquisition unit 12 and vehicle speed information indicating the speed of the vehicle 100 from the vehicle speed sensor 17, associates them with the vehicle ID (identification information), and transmits them to the server device 30 via the communication unit 13.
FIG. 2 is a data chart showing the transmission data sent from the in-vehicle device to the server device. FIG. 3 is a data chart showing a modification of the transmission data of the in-vehicle device.

The transmission data 20 of the in-vehicle device 10 includes the data items shown in FIG. 2: the captured image data, the acquisition date and time of the captured image, position information, vehicle speed information, and data format information. The data serial number is a serial number assigned per vehicle 100. The vehicle ID and camera number are information for identifying the vehicle 100 and, when a plurality of cameras 11 are mounted on the vehicle 100, for identifying each camera 11. The vehicle ID and camera number are registered in the vehicle database 33 (see FIG. 4) and serve as identification information from which the mounting position of the in-vehicle camera on the vehicle 100 can be determined. The transmission data 20 may also include information on the traveling direction of the vehicle.

As shown in FIG. 3, the transmission data 20A sent from the in-vehicle device 10 to the server device 30 may include camera position information (front/rear, left/right, height, elevation angle, and so on) instead of the camera number.
The server device 30 receives and collects captured image data, including images of roads, from the plurality of in-vehicle devices 10. The server device 30 includes a communication unit 31, a control unit 32, a vehicle database 33, and a captured image database 34.

The communication unit 31 receives the transmission data of the in-vehicle devices 10, including captured images, via the network.

The captured image database 34 has a large-scale storage device and stores the captured images sent from the plurality of in-vehicle devices 10. Each captured image is stored in association with additional information. The additional information includes shooting position (latitude and longitude) information, camera position information, and shooting date and time information. The additional information may also include vehicle speed information at the time of shooting, the image data format, the serial number, and so on. The vehicle speed information can be used to supplement the shooting position information.

The control unit 32 uses the information in the vehicle database 33 and the transmission data of the in-vehicle device 10 to register each captured image containing a road in the captured image database 34 in association with the necessary information.
FIG. 4 is a data chart showing the vehicle database.

In the vehicle database 33, information on the plurality of vehicles 100 on which the in-vehicle devices 10 are mounted is registered. The information on each vehicle 100 includes a vehicle ID (identification information) identifying the vehicle, the number of cameras 11 mounted so as to be able to photograph the road, and camera position information indicating the mounting position of each camera 11 on the vehicle 100.

The camera position information includes information indicating the horizontal mounting position on the vehicle 100, the height of the camera 11, the elevation angle of the camera 11, and so on.

When the control unit 32 of the server device 30 receives the transmission data 20 from an in-vehicle device 10, it reads the camera position information corresponding to the vehicle ID and camera number from the vehicle database 33, and registers the read camera position information in the captured image database 34 in association with the captured image data.
As shown in FIG. 1B, the data processing device 50 includes an image search unit 51, a feature extraction unit 52, a high-definition processing unit 53, a position designation unit 54, a map data management unit 55, and a road map database 56.

The position designation unit 54 designates position information indicating a point for which a high-definition road image is to be created, or a point for which a high-definition road image is to be updated. The position designation unit 54 may designate the position information based on input from the outside. For example, an operator may input a command specifying the district for which road map information is to be created, and the position designation unit 54 may designate position information based on this command. Alternatively, the position designation unit 54 may designate position information using a predetermined algorithm. For example, it may sequentially designate position information at predetermined intervals of meters within an externally specified area on the earth. The position designation unit 54 may also be configured to designate points that appear in the captured images received by the server device 30 from the in-vehicle devices 10.

The image search unit 51 searches the captured image database 34 for captured images containing an image of the point designated by the position designation unit 54. The image search unit 51 can find the corresponding captured images from the position information associated with each captured image. It may additionally take into account the mounting position of the camera 11 and the traveling direction of the vehicle. For example, if the camera 11 photographs the area ahead of the vehicle, the position of the area imaged by the camera 11 can be calculated by adding a displacement in the traveling direction to the position information.
The feature extraction unit 52 extracts features contained in the images in order to accurately detect the positional deviation between a plurality of captured images. A feature is a fixed object attached to the land; for example, artificial or natural marks on the road can be used. Artificial marks on the road include road markings (white lines, orange lines, etc.), cracks in road markings, manholes, and the like. Natural marks include cracks or potholes in the road surface. Other usable features include traffic facilities such as guardrails and signs, and structures such as buildings and signboards. The feature extraction unit 52 extracts, for example, the edges of features.

The high-definition processing unit 53 uses a plurality of captured images whose shooting positions are slightly shifted from one another to create a road image of higher definition than each captured image. The method for creating the high-definition image is described later.

The map data management unit 55 registers the high-definition road image data of each point created by the high-definition processing unit 53 in the road map database 56 together with its position information. In this way, high-definition road images of a plurality of sections are assembled and road map data is generated.

The road map database 56 has a large-scale storage device and stores position information and high-definition road images in association with each other.
<Data processing>

Next, the processing performed by the data processing device 50 will be described.
FIG. 5 is a flowchart showing an example of the operation of the data processing device.

In step S1, the position designation unit 54 designates a point for which a high-definition road image is to be created. The method of designating the point is as described above for the position designation unit 54.

In step S2, the image search unit 51 searches the captured image database 34 for a plurality of captured images containing images of a common point (hereinafter also called common captured images). The search method is as described above for the image search unit 51.

In step S3, the feature extraction unit 52 first reads the captured images selected in step S2 together with their additional information (position information, camera mounting position, etc.), develops each captured image into road surface coordinates, and then normalizes its luminance. This process is described with reference to FIGS. 6 and 7.
 図6は、撮影画像の一例を示す図である。図7は、道路面座標への展開処理を説明する図である。 FIG. 6 is a diagram illustrating an example of a captured image. FIG. 7 is a diagram for explaining the development process to road surface coordinates.
 図6に示すように、撮影画像において、道路の近くは大きく表示され、道路の遠くは小さく表示されている。撮影画像に関連付けられた付加情報のうち、カメラ位置の情報により、撮影位置の道路上からの高さと角度とが特定できる。これらの情報から、撮影対象を平面と見なしたときに、撮影画像から、平面視したときの画像へのマッピング関数を求めることができる。 As shown in FIG. 6, in the photographed image, the vicinity of the road is displayed large, and the distance from the road is displayed small. Of the additional information associated with the photographed image, the height and angle of the photographing position from the road can be specified by the information on the camera position. From these pieces of information, it is possible to obtain a mapping function from a captured image to an image when viewed in plan when the imaging target is considered as a plane.
 特徴抽出部52は、上記のマッピング関数を求め、更に、撮影画像から道路の画像認識を行って、図7に示すように、道路の画像を平面状の道路面座標へ展開する。展開された画像は、手前側と奥側とで解像度が異なり、奥側の解像度が低くなる。図7には、範囲Aの画素81のサイズと、範囲Bの画素82のサイズとを示す。 The feature extraction unit 52 obtains the above mapping function, further recognizes the road image from the captured image, and develops the road image into planar road surface coordinates as shown in FIG. The developed image has different resolutions on the near side and the far side, and the resolution on the far side is low. FIG. 7 shows the size of the pixel 81 in the range A and the size of the pixel 82 in the range B.
 Luminance normalization is performed, for example, by computing the average luminance of all road pixels and uniformly subtracting from or adding to every pixel so that the average matches a predetermined reference value. This brings images shot in the dark and images shot in bright conditions to a common standard luminance. A correction according to the distance along the road may also be applied during normalization. The feature extraction unit 52 performs this normalization; it may also normalize hue or saturation according to the characteristics of the camera 11.
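 A minimal sketch of this uniform add/subtract normalization, assuming 8-bit luminance values and NumPy (the clipping to [0, 255] is an added assumption, not stated in the embodiment):

```python
import numpy as np

def normalize_luminance(road_pixels, reference_mean=128.0):
    """Shift all road-pixel luminances uniformly so that their mean
    equals the reference value. road_pixels: float array of luminance
    values restricted to the recognized road region."""
    offset = reference_mean - road_pixels.mean()
    return np.clip(road_pixels + offset, 0.0, 255.0)
```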
 In step S4, the feature extraction unit 52 extracts features from the projected image. The feature extraction processing will be described with reference to FIG. 8.
 FIG. 8 is a diagram illustrating an example of the feature extraction processing.
 The feature extraction unit 52 extracts features by, for example, performing edge detection in the image. Features are preferably marks on the road located in the near-side (high-resolution) region of the captured image. For example, when extracting a white line on the road, the feature extraction unit 52 performs finer extraction on the near-side white line f1 (see FIG. 7), including cracks and faded portions of the line. By comparing features extracted in this way, it can be determined whether a plurality of captured images show the same white line f1.
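 As one way such edge-based mark extraction could be realized (an illustrative sketch using OpenCV, which the disclosure does not name; the threshold values are invented):

```python
import cv2

def extract_lane_marks(plan_view_gray):
    """Detect bright road markings in the plan-view image by combining a
    brightness threshold with Canny edges, restricted to the lower
    (near-side, higher-resolution) half of the image.
    plan_view_gray: 8-bit single-channel image in road surface coordinates."""
    h = plan_view_gray.shape[0]
    near = plan_view_gray[h // 2:, :]          # near-side region only
    _, bright = cv2.threshold(near, 180, 255, cv2.THRESH_BINARY)
    edges = cv2.Canny(near, 50, 150)
    return cv2.bitwise_and(bright, edges)      # edges of bright marks
```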
 The feature extraction unit 52 performs the projection onto road surface coordinates and the feature extraction for each common captured image found in step S2. The extracted feature data and the projected image data are sent to the high-definition processing unit 53.
 Note that the search for common captured images (step S2), the projection onto road surface coordinates (first half of step S3), and the feature extraction (step S4) need not be performed in this order. For example, features may be extracted sequentially from the captured images as they are received and stored in association with each captured image; the extracted feature data can then also be used when searching for common captured images.
 In step S5, the high-definition processing unit 53 aligns the plurality of common captured images based on the extracted features. The images to be aligned are the images projected onto road surface coordinates, and alignment is performed so that identical features are superimposed across the images. Position information obtained by GPS contains a comparatively large error, so aligning the images on their shared features yields much higher alignment accuracy than the position information alone.
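 A sketch of such feature-based alignment, with off-the-shelf ORB keypoints and a RANSAC-fitted rigid transform standing in for the road-mark features of the embodiment (illustrative choices, not the disclosed method); returning None when too few shared features exist matches the exclusion rule described next:

```python
import cv2
import numpy as np

def align_to_reference(ref_gray, img_gray):
    """Warp img_gray into the frame of ref_gray so that shared features
    are superimposed. Returns the warped image, or None if too few
    common features are found (such images are excluded in step S5)."""
    orb = cv2.ORB_create(500)
    k1, d1 = orb.detectAndCompute(ref_gray, None)
    k2, d2 = orb.detectAndCompute(img_gray, None)
    if d1 is None or d2 is None:
        return None
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(d2, d1)            # query: img, train: reference
    if len(matches) < 4:
        return None
    src = np.float32([k2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    m, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    if m is None:
        return None
    h, w = ref_gray.shape
    return cv2.warpAffine(img_gray, m, (w, h))
```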
 If, in step S5, a common captured image contains no feature shared with the others, that image is simply excluded from the processing.
 In step S6, the high-definition processing unit 53 copies the pixel data of the common captured images into a high-definition data series and performs a weighted addition of each image's pixel data. This processing will be described with reference to FIGS. 9A to 9C and FIG. 10.
 FIGS. 9A to 9C illustrate the copying of captured image data into the high-definition data series. FIG. 9A is an enlarged view of some pixels 83 of one common captured image projected onto road surface coordinates. FIG. 9B is an enlarged view of part of the array of fine pixels corresponding to the high-definition data series. FIG. 9C shows an example of the pixel data of FIG. 9A copied into the high-definition data series.
 The common captured image projected onto road surface coordinates contains large pixels 83, as shown in FIG. 9A, whereas the high-definition data series corresponds to an array of fine pixels 90, as shown in FIG. 9B. In FIGS. 9B and 9C, only one of the pixels 90 is labeled, to avoid clutter.
 In step S6, the array of fine pixels 90 of the high-definition image is first superimposed on the large pixels 83 of a common captured image, as shown in FIGS. 9A to 9C. The pixel data of the overlapping pixel 83 is then copied as the data of each fine pixel 90. In this way the data of one common captured image is copied into the high-definition data series.
 Depending on the position of a pixel 90, several adjacent pixels 83 may overlap a single pixel 90. In that case, the data of the pixel 83 with the largest overlap may be copied as the data of that pixel 90, or the data of the overlapping pixels 83 may be averaged.
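 For the simple case in which the fine grid exactly subdivides each coarse pixel (an assumed alignment; the boundary cases above would apply the dominant-overlap or averaging rule), the copy operation reduces to block replication:

```python
import numpy as np

def copy_to_fine_grid(coarse, scale):
    """Copy each coarse pixel value into the scale x scale block of fine
    pixels 90 that it covers, yielding the high-definition data series
    for one common captured image."""
    return np.repeat(np.repeat(coarse, scale, axis=0), scale, axis=1)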
 Further, in step S6, the pixel data of the common captured images are added as the data of the pixels 90 of the high-definition image with weights, chosen so that more reliable pixel data receives a larger weight. For example, when adding the pixel data of n common captured images whose reliabilities are regarded as equal, each image's data value may be weighted by 1/n. When the pixel data of one new common captured image is added to pixel data already obtained from (m−1) common captured images, the former result may be given a large weight (for example (m−1)/m) and the new image a small weight (for example 1/m).
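 A sketch of this running update; the recurrence with weights (m−1)/m and 1/m reproduces an equal 1/n weighting once all n equally reliable images have been folded in:

```python
def accumulate(high_def, new_img, m):
    """Fold the m-th aligned common captured image (already copied onto
    the fine grid) into the running high-definition estimate.
    high_def and new_img are arrays of the same shape; m >= 1."""
    return high_def * (m - 1) / m + new_img / m
```

 Starting from high_def equal to the first image with m = 1 and calling accumulate for m = 2, ..., n leaves every image contributing with weight 1/n, consistent with the equal-reliability case above.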
 FIG. 10 illustrates the principle of generating high-definition image data from a plurality of images. It shows pixels 87 (4×4 pixels) of a section of a first image, pixels 88 (4×4 pixels) of a section of a second image, and the superposition of the two. The pixels 87 of the first image and the pixels 88 of the second image were obtained by imaging the same subject with a slight positional shift; in FIG. 10 this shift is shown as a displacement from the origin O of the xy plane. To avoid clutter, only one each of the pixels 87 and 88 is labeled.
 The data of one pixel is the average of the luminance and color of the subject area that the pixel covers. Therefore, given a pixel 87 of the first image and a pixel 88 of the second image obtained by imaging the same subject area with a slight shift, aligning and superimposing the two makes it possible to estimate the luminance and color of regions finer than either pixel. For example, as shown in part (c) of FIG. 10, the range where one pixel 87 of the first image and one pixel 88 of the second image overlap is smaller than either pixel alone, and the actual luminance and color of the subject within that overlap are reflected in the data of both pixels 87 and 88. Accordingly, a weighted addition of the luminance and color of the slightly shifted pixels 87 and 88 yields an estimate for a region finer than the pixels themselves. The more images of the same subject, taken at shifted positions, that are processed, the more accurately the luminance and color of fine regions can be estimated, producing a higher-definition image.
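 As a worked illustration of this principle (the numbers are invented): if a sub-pixel overlap region is covered by a pixel 87 with value $v_1 = 100$ and a pixel 88 with value $v_2 = 140$, and both are judged equally reliable, the weighted addition assigns the overlap

$$\hat{v} = \tfrac{1}{2}\,v_1 + \tfrac{1}{2}\,v_2 = 120,$$

 a luminance attributed to a region smaller than either pixel resolves on its own. More generally, with reliability weights $w_i$ satisfying $\sum_i w_i = 1$, the estimate is $\hat{v} = \sum_i w_i v_i$.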
 Based on this principle, step S6 aligns the pixel data of a plurality of low-resolution common captured images and adds them with weights, thereby estimating the luminance and color of regions finer than the pixels of the common captured images and obtaining a high-definition road image.
 In step S7, the map data management unit 55 registers the high-definition road image obtained in step S6 in the road map database together with its position information. Road map data consisting of contiguous high-definition road images is generated in this way.
 By performing the processing of FIG. 5 for many points, the data processing device 50 can create road map data covering many districts.
 As described above, the road map creation system of the present embodiment can collect a large number of captured images using in-vehicle devices mounted on, for example, ordinary automobiles, and create high-definition road map data from them. There is therefore no need to prepare many special road measurement vehicles, and road map data can be created efficiently.
 The present invention is not limited to the specific details described in the above embodiment. For example, although the captured image search was described as finding images of the same point from position information and shooting direction information, captured images estimated to show roughly the same point may be found from position information alone. In that case, the found common captured images will include a higher proportion of images that do not actually show the same point, but such images can be excluded by the feature extraction processing and the feature-based alignment processing.
 Also, although the above embodiment shows one example of processing for creating a high-definition road image from a plurality of captured images, this processing is merely illustrative, and various known imaging techniques that create a high-definition image from a plurality of images captured at slightly shifted positions may be applied.
 Further, although the above embodiment was described as ultimately creating road map data, the road map data need not contain road images for all sections; it may be road map data in which the road images of some sections are missing.
 Further, although the above embodiment was described taking as an example the case where the present invention is configured by hardware, the present invention can also be realized by software in cooperation with hardware.
 The present invention can be used in a system for creating a road map.
DESCRIPTION OF SYMBOLS
10 In-vehicle device
11 Camera
12 Position acquisition unit
13 Communication unit
14 Display
15 Control unit
16 Interface
20, 20A Transmission data
30 Server device
31 Communication unit
32 Control unit
33 Vehicle database
34 Captured image database
50 Data processing device
51 Image search unit
52 Feature extraction unit
53 High-definition processing unit
54 Position designation unit
55 Map data management unit
56 Road map database

Claims (8)

  1.  A road map creation system comprising:
     a plurality of in-vehicle devices that are respectively mounted on a plurality of vehicles and transmit a plurality of captured images of the outside of the vehicles to a server device; and
     a data processing device that creates a road image using the transmitted plurality of captured images,
     wherein the data processing device comprises:
     an image search unit that searches the transmitted plurality of captured images for a plurality of common captured images that include an image of a common point;
     a feature extraction unit that extracts features in the transmitted plurality of captured images; and
     a high-definition processing unit that aligns the positions of the plurality of common captured images based on the features and generates, from the information of each pixel of the aligned plurality of common captured images, a road image of higher definition than each of the plurality of common captured images.
  2.  The road map creation system according to claim 1, wherein
     the plurality of captured images are respectively captured by a plurality of imaging devices mounted at a plurality of mounting positions on the plurality of vehicles,
     the plurality of in-vehicle devices transmit to the server device the plurality of captured images, a plurality of pieces of position information each indicating the position of the in-vehicle device when the corresponding captured image was taken, and a plurality of pieces of identification information from which the respective mounting positions can be identified, and
     the image search unit searches for the plurality of common captured images using the plurality of pieces of position information and the information on the plurality of mounting positions.
  3.  The road map creation system according to claim 1, wherein
     the plurality of captured images include a road, and
     the feature extraction unit extracts a mark on the road as the feature.
  4.  The road map creation system according to claim 1, wherein the data processing device further comprises a map data management unit that assembles the high-definition road images of a plurality of sections to create high-definition road map data.
  5.  A data processing device that creates a road image using a plurality of captured images captured by a plurality of vehicles and collected by a server device, the data processing device comprising:
     an image search unit that searches the collected plurality of captured images for a plurality of common captured images that include an image of a common point;
     a feature extraction unit that extracts features in the collected plurality of captured images; and
     a high-definition processing unit that aligns the positions of the plurality of common captured images based on the features and generates, from the information of each pixel of the aligned plurality of common captured images, a road image of higher definition than each of the plurality of common captured images.
  6.  An in-vehicle device that is mounted on a vehicle and provides one captured image of a plurality of captured images to a data processing device, the data processing device having: an image search unit that searches a plurality of captured images collected in a server device for a plurality of common captured images that include an image of a common point; a feature extraction unit that extracts features in the collected plurality of captured images; and a high-definition processing unit that aligns the positions of the plurality of common captured images based on the features and generates, from the information of each pixel of the aligned plurality of common captured images, a road image of higher definition than each common captured image,
     the in-vehicle device comprising:
     an input unit that inputs a captured image of the outside of the vehicle from an imaging device mounted on the vehicle; and
     a communication unit that transmits the outside captured image to the server device as the one captured image.
  7.  The in-vehicle device according to claim 6, further comprising a control unit that causes the imaging device to capture a region including a road,
     wherein the communication unit transmits the captured image including the road to the server device as the one captured image.
  8.  The in-vehicle device according to claim 6, further comprising a position acquisition unit that acquires position information indicating the position of the in-vehicle device,
     wherein the communication unit transmits data in which the outside captured image is associated with the position information of the vehicle at the time the outside captured image was taken and with identification information from which the mounting position of the imaging device on the vehicle can be identified.
PCT/JP2015/004248 2014-08-27 2015-08-25 Road map creation system, data processing device, and on-board device WO2016031229A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2016544958A JPWO2016031229A1 (en) 2014-08-27 2015-08-25 Road map creation system, data processing device and in-vehicle device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2014172128 2014-08-27
JP2014-172128 2014-08-27

Publications (1)

Publication Number Publication Date
WO2016031229A1 true WO2016031229A1 (en) 2016-03-03

Family

ID=55399143

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2015/004248 WO2016031229A1 (en) 2014-08-27 2015-08-25 Road map creation system, data processing device, and on-board device

Country Status (2)

Country Link
JP (1) JPWO2016031229A1 (en)
WO (1) WO2016031229A1 (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008243037A (en) * 2007-03-28 2008-10-09 National Univ Corp Shizuoka Univ Image processor, image processing method and image processing program
JP2009124621A (en) * 2007-11-19 2009-06-04 Sanyo Electric Co Ltd Super-resolution processing apparatus and method, and imaging apparatus
JP2012155660A (en) * 2011-01-28 2012-08-16 Denso Corp Map data generation device and travel support device

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2017090239A (en) * 2015-11-10 2017-05-25 パイオニア株式会社 Information processing device, control method, program, and storage media
CN110023712A * 2017-02-28 2019-07-16 松下知识产权经营株式会社 Displacement measuring device and displacement measuring method
WO2019102968A1 (en) * 2017-11-22 2019-05-31 三菱電機株式会社 Map collecting system, map server device, vehicle-mounted device, and map collecting method
JP6541924B1 (en) * 2017-11-22 2019-07-10 三菱電機株式会社 Map collection system, map server device, in-vehicle device, and map collection method
KR20190082071A (en) * 2017-12-29 2019-07-09 바이두 온라인 네트웍 테크놀러지 (베이징) 캄파니 리미티드 Method, apparatus, and computer readable storage medium for updating electronic map
KR102338270B1 (en) 2017-12-29 2021-12-30 아폴로 인텔리전트 드라이빙 테크놀로지(베이징) 컴퍼니 리미티드 Method, apparatus, and computer readable storage medium for updating electronic map
JP2020073931A (en) * 2020-02-05 2020-05-14 パイオニア株式会社 Information processing device, control method, program, and storage media
JP2022066276A (en) * 2020-02-05 2022-04-28 ジオテクノロジーズ株式会社 Information processing device, control method, program, and storage media

Also Published As

Publication number Publication date
JPWO2016031229A1 (en) 2017-06-22

Similar Documents

Publication Publication Date Title
CN108694882B (en) Method, device and equipment for labeling map
US11094112B2 (en) Intelligent capturing of a dynamic physical environment
EP2458336B1 (en) Method and system for reporting errors in a geographic database
WO2016031229A1 (en) Road map creation system, data processing device, and on-board device
US20130010074A1 (en) Measurement apparatus, measurement method, and feature identification apparatus
KR102200299B1 (en) A system implementing management solution of road facility based on 3D-VR multi-sensor system and a method thereof
EP3671623B1 (en) Method, apparatus, and computer program product for generating an overhead view of an environment from a perspective image
JP2007108043A (en) Location positioning device, location positioning method
JP2010530997A (en) Method and apparatus for generating road information
US10872246B2 (en) Vehicle lane detection system
CN110998684B (en) Image collection system, image collection method, image collection device, and recording medium
JP4596566B2 (en) Self-vehicle information recognition device and self-vehicle information recognition method
US20230138487A1 (en) An Environment Model Using Cross-Sensor Feature Point Referencing
US20160169662A1 (en) Location-based facility management system using mobile device
JP2019078700A (en) Information processor and information processing system
JP2015194397A (en) Vehicle location detection device, vehicle location detection method, vehicle location detection computer program and vehicle location detection system
JP2011170599A (en) Outdoor structure measuring instrument and outdoor structure measuring method
KR102195040B1 (en) Method for collecting road signs information using MMS and mono camera
KR100981588B1 (en) A system for generating geographical information of city facilities based on vector transformation which uses magnitude and direction information of feature point
WO2019119358A1 (en) Method, device and system for displaying augmented reality poi information
NL2016718B1 (en) A method for improving position information associated with a collection of images.
CN115917255A (en) Vision-based location and turn sign prediction
WO2024062602A1 (en) Three-dimensionalization system, three-dimensionalization method, and recording medium for recording program
Angelats et al. A Parallax Based Robust Image Matching for Improving Multisensor Navigation in GNSS-denied Environments
JP2022153492A (en) Information processing device, information processing method, information processing program, and computer-readable recording medium having information processing program stored therein

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15835745

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2016544958

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15835745

Country of ref document: EP

Kind code of ref document: A1