WO2011049118A1 - Road surface image capturing/editing device and road surface image capturing/editing program - Google Patents

Road surface image capturing/editing device and road surface image capturing/editing program Download PDF

Info

Publication number
WO2011049118A1
Authority
WO
WIPO (PCT)
Prior art keywords
road surface
surface image
road
camera
image
Prior art date
Application number
PCT/JP2010/068460
Other languages
French (fr)
Japanese (ja)
Inventor
秀明 黒須
近邦 前田
Original Assignee
株式会社パスコ
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 株式会社パスコ filed Critical 株式会社パスコ
Publication of WO2011049118A1 publication Critical patent/WO2011049118A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/90 Dynamic range modification of images or parts thereof
    • G06T5/94 Dynamic range modification of images or parts thereof based on local image properties, e.g. for local contrast enhancement
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30248 Vehicle exterior or interior
    • G06T2207/30252 Vehicle exterior; Vicinity of vehicle
    • G06T2207/30256 Lane; Road marking
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/10 Image acquisition
    • G06V10/16 Image acquisition using multiple overlapping images; Image stitching

Definitions

  • The present invention relates to improvements in a road surface image capturing/editing device and a road surface image capturing/editing program.
  • Patent Document 1 discloses a road surface crack detection device having a continuous road surface image acquisition function for detecting cracks in road surface property measurement.
  • In that device, road surface images are acquired at regular time intervals, the vehicle's travel distance data is associated with each image, the amount of overlap between images is calculated from this distance data, and the images are edited by cutting away the overlapping portions.
  • An object of the present invention is to provide a road surface image capturing/editing device and a road surface image capturing/editing program that capture identifiable road surface images of roads with sunlit and shaded areas.
  • The road surface image capturing/editing device of claim 1 comprises: sunlit imaging means mounted on a vehicle, having a plurality of cameras each of which takes part of the road width as its shooting range, arranged so that their shooting ranges partially overlap and together cover the entire road width, and which simultaneously captures identifiable road surface images of sunlit areas at every predetermined travel distance of the vehicle; shaded imaging means with the same camera arrangement, which simultaneously captures identifiable road surface images of shaded areas at every predetermined travel distance; correction data acquisition means for acquiring correction data for keystone (trapezoidal) correction of the road surface images captured by the plurality of cameras of the sunlit imaging means and of those captured by the plurality of cameras of the shaded imaging means; and editing means which calculates the degree of coincidence of the overlapping regions, in the road width direction and the vehicle traveling direction, of the corrected images from the sunlit imaging means and of the corrected images from the shaded imaging means, and edits the joints of the road surface images based on the degree of coincidence.
  • According to claim 2, the editing means calculates the degree of coincidence based on the luminance of the pixels in the overlapping regions of the road surface images.
  • According to claim 3, the road surface image capturing/editing device further comprises sun/shade joining means which joins the sunlit area of the road surface image captured by a camera of the sunlit imaging means and the shaded area of the road surface image captured by a camera of the shaded imaging means at the boundary between the two areas, and sets the luminance near the boundary to a value between the luminance of the sunlit area and the luminance of the shaded area of the road surface images.
  • According to claim 4, the road surface image capturing/editing device further comprises intermediate imaging means which captures road surface images at a luminance lower than that of the identifiable sunlit area or higher than that of the identifiable shaded area, and sun/shade joining means which joins the image captured by the intermediate imaging means sandwiched between the sunlit area and the shaded area.
  • The road surface image capturing/editing program of claim 5 causes a computer to function as: storage means for storing the road surface images captured by the sunlit imaging means and the shaded imaging means described above (each mounted on a vehicle, having a plurality of cameras whose shooting ranges cover parts of the road width, partially overlap one another, and together cover the entire road width, and each capturing identifiable sunlit or shaded road surface images simultaneously at every predetermined travel distance of the vehicle); correction data acquisition means for acquiring correction data for keystone correction of the road surface images stored in the storage means; and editing means which calculates the degree of coincidence of the overlapping regions, in the road width direction and the vehicle traveling direction, of the stored, corrected images from the plurality of cameras of the sunlit imaging means and of those from the plurality of cameras of the shaded imaging means, and edits the joints of the road surface images based on the degree of coincidence.
  • According to the present invention, identifiable road surface images of sunlit and shaded road surfaces can be captured in the daytime.
  • FIGS. 1A and 1B show a configuration example of a vehicle equipped with a road surface image capturing / editing apparatus according to the embodiment.
  • FIG. 1A is a side view
  • FIG. 1B is a rear view.
  • In FIGS. 1A and 1B, a vehicle 100 is equipped with photographing means 10 and 12, a laser scanner 14, a coordinate acquisition device 16, an odometer 18, and a control device 20.
  • The photographing means 10 is on the left side and the photographing means 12 on the right side with respect to the traveling direction of the vehicle 100 (as viewed from the rear of the vehicle 100).
  • The photographing means 10 and 12 comprise road surface photographing cameras such as CCD cameras. Details are described with reference to FIG. 2.
  • The laser scanner 14 measures the height from the road surface for rut measurement, and is also used when acquiring the correction data for keystone correction of the cameras, as described later.
  • The coordinate acquisition device 16 includes a GPS receiver and the like, and acquires the geographical coordinates of the shooting positions of the photographing means 10 and 12.
  • The odometer 18 is a device that measures the travel distance of the vehicle 100; it may calculate the travel distance from the vehicle speed pulses of the vehicle 100 or use a non-contact distance meter such as one based on the spatial filter method, but is not limited to these configurations.
  • The control device 20 is constituted by a personal computer or the like, and performs keystone correction of the road surface images captured by the cameras, control of the camera gains, and so on.
  • FIG. 2 shows a configuration example of the photographing means 10 and 12.
  • The photographing means 10, disposed on the left side with respect to the traveling direction of the vehicle 100, comprises cameras 10s, 10k, and 10m.
  • The photographing means 12, disposed on the right side with respect to the traveling direction of the vehicle 100, comprises cameras 12s, 12k, and 12m.
  • The cameras 10s and 12s each take part of the road width as their shooting range, and their shooting ranges partially overlap in the road width direction; combined, their shooting ranges cover the entire road width.
  • The cameras 10s and 12s simultaneously capture daytime road surface images at every predetermined travel distance of the vehicle measured by the odometer 18, photographing the road surface in the daytime sun in an identifiable manner.
  • Here, "identifiable" means that the image of the sunlit area of the road surface is not blown out to white, so that road surface properties (such as cracks) can be observed visually.
  • The cameras 10k and 12k likewise each take part of the road width as their shooting range, their shooting ranges partially overlap in the road width direction, and combined they cover the entire road width.
  • The cameras 10k and 12k simultaneously capture daytime road surface images at every predetermined travel distance of the vehicle measured by the odometer 18, photographing the road surface in the daytime shade in an identifiable manner.
  • Here, "identifiable" means that the image of the shaded area of the road surface is not crushed to black, so that the road surface properties can be observed visually.
  • For the cameras 10s, 12s, 10k, and 12k described above, the shutter speed and gain of each camera are set appropriately so that the sunlit and shaded areas of the road surface are photographed in a distinguishable manner.
  • The luminance of the pixels in each sunlit or shaded area is thereby adjusted to a value that can be identified visually.
  • The shutter speed is set, for example, in the range of 1/100 s to 1/20000 s,
  • and the gain, for example, in the range of 0 dB to +12 dB; these ranges are determined appropriately according to the camera.
  • The gain may also be controlled by the control device 20.
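  • As a rough illustration of these per-camera exposure presets, the following sketch uses hypothetical values within the ranges above; the patent does not prescribe specific settings:

```python
from dataclasses import dataclass

@dataclass
class ExposurePreset:
    shutter_s: float  # exposure time in seconds
    gain_db: float    # sensor gain in dB

# Hypothetical presets within the stated ranges; in practice the values
# are tuned per camera so that sunlit pixels (cameras 10s, 12s) are not
# blown out and shaded pixels (cameras 10k, 12k) are not crushed to black.
SUN_PRESET   = ExposurePreset(shutter_s=1/8000, gain_db=0.0)  # cameras 10s, 12s
SHADE_PRESET = ExposurePreset(shutter_s=1/250,  gain_db=9.0)  # cameras 10k, 12k
```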
  • The cameras 10m and 12m each take part of the road width as their shooting range, their shooting ranges partially overlap in the road width direction, and combined they cover the entire road width.
  • The cameras 10m and 12m simultaneously capture daytime road surface images at every predetermined travel distance of the vehicle measured by the odometer 18.
  • The shutter speed and gain of the cameras 10m and 12m are set either to a faster shutter speed and smaller gain than those of the cameras 10s and 12s, or to a slower shutter speed and larger gain than those of the cameras 10k and 12k; a faster shutter speed with smaller gain makes the road surface image darker (lower luminance), while a slower shutter speed with larger gain makes it brighter (higher luminance).
  • With the shutter speed and gain of the cameras 10m and 12m adjusted in this way, a low-luminance sunlit image is sandwiched on the sunlit side of the boundary between the sunlit area of the images captured by the cameras 10s and 12s and the shaded area of the images captured by the cameras 10k and 12k, or a high-luminance shaded image is sandwiched on the shaded side of that boundary.
  • By this sandwiching, the joining line at the boundary between the sunlit area and the shaded area can be made inconspicuous.
  • Alternatively, the sunlit and shaded areas may be joined by a method in which the luminance near the boundary is adjusted by the sun/shade joining unit 48, as described later with reference to FIG. 5 and FIG. 11C.
  • In that case, the cameras 10m and 12m may be omitted.
  • As described above, in this embodiment road surface images are captured with a plurality of cameras arranged in the road width direction. This is because photographing the entire road width with a single camera would require mounting the camera higher and using a higher-resolution camera, which would lengthen the shooting interval.
  • FIGS. 3A and 3B show examples of road surface images captured so that the sunlit and shaded areas of the road surface are identifiable.
  • FIG. 3A is an example in which the sunlit area is photographed identifiably,
  • and FIG. 3B an example in which the shaded area is photographed identifiably. Both are monochrome images.
  • In FIG. 3A, the sunlit area S is photographed so that the state of the road surface can be identified visually, but the shaded area K is crushed to black.
  • In FIG. 3B, conversely, the sunlit area S is blown out to white, but the shaded area K is photographed so that the state of the road surface can be identified visually.
  • FIG. 4 shows an example of a hardware configuration of a computer constituting the control device 20 according to the embodiment.
  • In FIG. 4, the control device 20 includes a central processing unit 22 (a CPU such as a microprocessor can be used), a random access memory (RAM) 24, a read-only memory (ROM) 26, an input device 28, a display device 30, a communication device 32, and a storage device 34; these components are connected to one another by a bus 36. The input device 28, the display device 30, the communication device 32, and the storage device 34 are each connected to the bus 36 via an input/output interface 38.
  • The CPU 22 controls the operation of each unit described later based on a control program stored in the RAM 24 or the ROM 26.
  • The RAM 24 functions mainly as a work area for the CPU 22, and the ROM 26 stores control programs such as the BIOS and other data used by the CPU 22.
  • The input device 28 comprises a keyboard, a pointing device, and the like, and is used by the operator to input operation instructions and so on.
  • The display device 30 is constituted by a liquid crystal display or the like, and displays the processing results of the CPU 22 and the like.
  • The communication device 32 comprises a USB (Universal Serial Bus) port, a network port, and other appropriate interfaces, and is used by the CPU 22 to exchange data with external devices via communication means such as a network.
  • The storage device 34 is a magnetic storage device such as a hard disk, and stores the various data required for the processing described later.
  • A nonvolatile storage device such as an EEPROM may be used in place of the hard disk.
  • FIG. 5 is a functional block diagram of the control device 20 according to the embodiment.
  • The control device 20 comprises a photographing control unit 40, a shooting information adjustment unit 42, a correction data acquisition unit 44, an image editing unit 46, a sun/shade joining unit 48, and a communication unit 50.
  • These functions are realized, for example, by the CPU 22 and a program that controls its processing operations.
  • The photographing control unit 40 obtains travel distance data of the vehicle 100 from the odometer 18 and, at every predetermined travel distance, simultaneously outputs shooting instruction information to the cameras 10s, 10k, 10m, 12s, 12k, and 12m. Based on this instruction information, the cameras capture road surface images simultaneously.
  • The shooting information adjustment unit 42 applies keystone correction to the road surface images captured by the cameras 10s, 10k, 10m, 12s, 12k, and 12m, using the correction data acquired by the correction data acquisition unit 44 described later.
  • The keystone-corrected road surface images are temporarily stored in the storage device 34.
  • The shooting information adjustment unit 42 also adjusts the gain of the road surface images captured by each camera so that the sunlit areas of the images captured by the cameras 10s and 12s, and the shaded areas of the images captured by the cameras 10k and 12k, are identifiable.
  • The shooting information adjustment unit 42 further adjusts the gains of the cameras 10m and 12m. The gain of each camera may instead be adjusted in the circuit that outputs the image information of each camera.
  • The correction data acquisition unit 44 acquires correction data for keystone correction of the road surface images captured by the cameras 10s, 10k, 10m, 12s, 12k, and 12m. The procedure for acquiring the correction data is described later.
  • The image editing unit 46 calculates the degree of coincidence of the overlapping regions, in the road width direction and the traveling direction of the vehicle 100, of the sunlit-identifiable road surface images captured by the cameras 10s and 12s, and edits the joints of the images based on this degree of coincidence. Similarly, it calculates the degree of coincidence of the overlapping regions of the shaded-identifiable road surface images captured by the cameras 10k and 12k, and edits the joints of those images based on that degree of coincidence.
  • The degree of coincidence is calculated, for example, by comparing the luminance of pixels in regions set in the images, and is calculated on the image information after the shooting information adjustment unit 42 has performed the keystone correction.
  • The sun/shade joining unit 48 joins the sunlit area of the road surface image captured by the camera 10s or 12s and the shaded area of the road surface image captured by the camera 10k or 12k at the boundary between the sunlit area and the shaded area.
  • The joining combinations are the road surface image of the camera 10s with that of the camera 10k, and the road surface image of the camera 12s with that of the camera 12k.
  • If the images were simply joined, a joining line could remain in the road surface image and be erroneously judged to be a crack in the road surface.
  • Therefore, the sun/shade joining unit 48 sets the luminance near the boundary to a value between the luminance of the sunlit area and the luminance of the shaded area of the road surface images, so that the joining line at the boundary is not noticeable.
  • Alternatively, this processing may be carried out by joining, in between the sunlit area and the shaded area, the road surface images captured by the cameras 10m and 12m at a luminance between those of the two areas.
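  • A minimal sketch of the luminance treatment near the boundary is shown below, assuming 8-bit grayscale images, a vertical boundary column, and a hypothetical blend half-width (none of which are fixed by the patent):

```python
import numpy as np

def join_sun_shade(sun_img: np.ndarray, shade_img: np.ndarray,
                   boundary_col: int, band: int = 20) -> np.ndarray:
    """Join the sunlit image (left of the boundary) to the shaded image
    (right of it), ramping the luminance across a narrow band so that the
    joining line at the boundary is not noticeable."""
    out = np.empty_like(sun_img)
    out[:, :boundary_col] = sun_img[:, :boundary_col]
    out[:, boundary_col:] = shade_img[:, boundary_col:]
    # Blend a strip centred on the boundary: the weight moves linearly
    # from all-sun to all-shade, giving intermediate luminance values.
    lo = max(boundary_col - band, 0)
    hi = min(boundary_col + band, sun_img.shape[1])
    w = np.linspace(1.0, 0.0, hi - lo)  # sun weight across the strip
    strip = w * sun_img[:, lo:hi] + (1.0 - w) * shade_img[:, lo:hi]
    out[:, lo:hi] = strip.astype(sun_img.dtype)
    return out
```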
  • The communication unit 50 transmits the road surface images and the like to external devices via the communication device 32.
  • Instead of using the communication unit 50, the road surface images may be stored on an appropriate storage medium and provided to external devices.
  • The shooting information adjustment unit 42, the image editing unit 46, and the sun/shade joining unit 48 may also be provided on a computer other than the one mounted on the vehicle 100.
  • In that case, the road surface image data is exchanged with the computer mounted on the vehicle 100 via the communication unit 50, or via a USB memory, DVD, or other appropriate storage medium.
  • FIG. 6 shows an explanatory diagram of a method for acquiring correction data for performing keystone correction of a road surface image.
  • A sheet 52 used for obtaining the correction data is placed on the ground and photographed by the cameras 10s, 10k, 10m, 12s, 12k, and 12m.
  • On the sheet 52, circles are drawn at equal intervals in the vertical and horizontal directions.
  • Since the laser scanner 14 is mounted on the vehicle 100, a laser beam from the laser scanner 14 is projected onto the sheet 52 as a straight line, and the sheet is arranged so that a row of circles coincides with the straight line traced by the laser beam.
  • In FIG. 6, the straight line traced by the laser light is indicated by L.
  • Since the left and right cameras are arranged in the order 10k, 10m, 10s, 12s, 12m, 12k from the left as shown in FIG. 2, it is preferable to display a visually identifiable line along a column of circles on the center line (C) of the sheet.
  • The cameras are adjusted so that the center line C of the sheet 52 is positioned at the left-right center of the overlapping region and the images captured by the camera pairs 10s and 12s, 10k and 12k, and 10m and 12m overlap one another by a certain area.
  • Since the circles on the sheet 52 are arranged in a grid pattern, the amount of distortion of the image can be calculated from the coordinate values of the circles in the image captured by each camera.
  • From this calculation result, the correction data acquisition unit 44 obtains the keystone correction data for the image captured by each camera.
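  • One way such correction data might be derived from the photographed circle grid is sketched below; OpenCV is an assumed toolchain (the patent names no library), and the grid pattern size and pitch are illustrative:

```python
import cv2
import numpy as np

def estimate_keystone_correction(img: np.ndarray,
                                 pattern=(7, 5), pitch=100.0) -> np.ndarray:
    """Fit a perspective (homography) transform mapping the photographed
    circle grid on sheet 52 onto its true, equally spaced layout."""
    found, centers = cv2.findCirclesGrid(img, pattern,
                                         flags=cv2.CALIB_CB_SYMMETRIC_GRID)
    if not found:
        raise RuntimeError("circle grid not detected")
    cols, rows = pattern
    # Ideal grid coordinates: equal spacing in both directions (row-major,
    # matching the order in which findCirclesGrid returns the centers).
    ideal = np.array([[c * pitch, r * pitch]
                      for r in range(rows) for c in range(cols)],
                     dtype=np.float32)
    H, _ = cv2.findHomography(centers.reshape(-1, 2), ideal)
    return H  # the correction data; apply with cv2.warpPerspective

# usage sketch: corrected = cv2.warpPerspective(road_img, H, (width, height))
```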
  • FIG. 7 is an explanatory diagram of road surface photographing by the road surface image photographing / editing device according to the embodiment.
  • Based on the shooting instruction information output by the photographing control unit 40, a road surface image is captured at every predetermined travel distance by each camera of the photographing means 10 and 12.
  • FIG. 7 shows an example of shooting every 1.5 m, but the interval is not limited to this.
  • The shooting ranges of each camera overlap by a predetermined distance in the traveling direction of the vehicle 100 (traveling-direction overlapping region D);
  • the angle of view is adjusted to achieve this. This angle-of-view adjustment is also performed using the sheet 52 described with reference to FIG. 6.
  • FIG. 8 is an explanatory diagram of the shooting range in the road width direction. In FIG. 8, the cameras of the photographing means 10 and 12 take parts of the road width W as their shooting ranges AL and AR, and the shooting ranges AL and AR partially overlap each other.
  • That is, the right region of the left road surface image (shooting range AL) captured by the cameras of the left photographing means 10 partially overlaps the left region of the right road surface image (shooting range AR) captured by the cameras of the right photographing means 12
  • (road-width-direction overlapping region D′), and the entire road width can be photographed.
  • The angle of view is adjusted, using the sheet 52 described with reference to FIG. 6, so that the shooting ranges shown in FIG. 8 are obtained.
  • FIG. 9 shows a flow of an operation example of the road surface image capturing / editing apparatus according to the embodiment.
  • Road surface images in which the sunlit area is identifiable are captured simultaneously by the cameras 10s and 12s at every predetermined travel distance of the vehicle 100, and road surface images in which the shaded area is identifiable
  • are captured simultaneously by the cameras 10k and 12k at every predetermined travel distance of the vehicle 100.
  • In this way, road surface images at every predetermined distance over the desired road section are acquired (S1).
  • The acquired road surface images are numbered in the order of acquisition and stored in the storage device 34 together with identification information of the cameras that captured them.
  • The shooting information adjustment unit 42 reads the Nth road surface image from the storage device 34 (S2) and performs keystone correction on it using the correction data acquired by the correction data acquisition unit 44 (S3).
  • The shooting information adjustment unit 42 then reads the (N+1)th road surface image from the storage device 34 (S4) and, in the same manner as in steps S2 and S3, performs keystone correction on it using the correction data acquired by the correction data acquisition unit 44 (S5).
  • The image editing unit 46 calculates the degree of coincidence of the road-width-direction overlapping region D′ of the keystone-corrected road surface images and edits the joint of the images based on it, i.e., performs the road surface image joining process (S6). The image editing unit 46 likewise calculates the degree of coincidence of the traveling-direction overlapping region D of the keystone-corrected road surface images and performs the joining process based on it (S7). The processing order of steps S6 and S7 may be reversed.
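  • One way to read the S1-S7 loop is sketched below; the object and method names are hypothetical, and the images stored at every predetermined distance (S1) are assumed to be available:

```python
def edit_road_images(storage, adjuster, editor, correction, num_shots):
    """Illustrative paraphrase of the FIG. 9 flow (S1-S7)."""
    mosaic = None
    for n in range(num_shots):
        left, right = storage.read_pair(n)                  # S2 / S4
        left = adjuster.keystone_correct(left, correction)  # S3 / S5
        right = adjuster.keystone_correct(right, correction)
        strip = editor.join_width(left, right)              # S6: region D'
        if mosaic is None:
            mosaic = strip
        else:
            mosaic = editor.join_travel(mosaic, strip)      # S7: region D
    return mosaic
```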
  • FIGS. 10A and 10B are explanatory diagrams of a method by which the image editing unit 46 calculates the degree of coincidence of the overlapping areas of the road surface image.
  • FIG. 10A shows the case of the road width direction,
  • and FIG. 10B the case of the traveling direction of the vehicle 100.
  • As described with reference to FIG. 8, the right region of the left road surface image captured by the camera 10s or 10k of the left photographing means 10 and the left region of the right road surface image captured by the camera 12s or 12k of the right photographing means 12 are photographed so as to partially overlap.
  • The image editing unit 46 therefore takes the pixel column (vertical column of the image) at the PL-th pixel from the left of one road surface image (for example, the left image), which lies within the road-width-direction overlapping region D′, and overlaps it, one pixel column at a time, with the columns of the other road surface image (for example, the right image) within a region of width ±n pixels around the PR-th pixel column from the left,
  • calculating the degree of coincidence of the luminance at each position.
  • The image editing unit 46 then joins the left and right road surface images so that the pixel column with the highest calculated luminance coincidence comes to the position PL.
  • PL can be set, for example, to 950 pixels,
  • PR to 100 pixels,
  • and n to 30 pixels, but the values are not limited to these and can be determined appropriately according to the size of the road surface images and other factors.
  • Alternatively, the degree of coincidence may be calculated for all pixel columns in the road-width-direction overlapping region D′, and the pixel column with the highest value may be taken as PL.
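  • A minimal sketch of this column-matching search follows, assuming 8-bit grayscale images and using the sum of absolute luminance differences as the coincidence measure (the patent says only that pixel luminances are compared, so the exact measure is an assumption):

```python
import numpy as np

def best_join_column(left_img: np.ndarray, right_img: np.ndarray,
                     p_l: int = 950, p_r: int = 100, n: int = 30) -> int:
    """Find the column of the right image that best matches the PL-th
    column of the left image, searching +/- n columns around PR; the
    images are then joined so that this column lines up with PL."""
    ref = left_img[:, p_l].astype(np.int32)  # PL-th pixel column (in D')
    best_col, best_score = p_r, np.inf
    for c in range(p_r - n, p_r + n + 1):
        cand = right_img[:, c].astype(np.int32)
        score = np.abs(ref - cand).sum()     # lower score = higher coincidence
        if score < best_score:
            best_score, best_col = score, c
    return best_col
```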
  • As described with reference to FIG. 7, the road surface images captured by the left photographing means 10 and the right photographing means 12 are photographed so as to overlap by the predetermined distance D in the traveling direction of the vehicle 100. The image editing unit 46 therefore takes the pixel row (horizontal row of the image) at the PB-th pixel from the top of the road surface image captured at a certain point (the Nth image) among the continuously captured images, which lies within the traveling-direction overlapping region D, and overlaps it, one pixel row at a time, with the rows of the next image in the shooting order (the (N+1)th image) within a region of width ±m pixels around the PF-th pixel row from the top, calculating the degree of coincidence of the luminance at each position.
  • The image editing unit 46 then joins the two road surface images so that the pixel row PB comes to the position where the calculated luminance coincidence is highest.
  • In the joined image, the earlier image in the shooting order (the Nth) is used below the pixel row PB,
  • and the later image (the (N+1)th) is used above it.
  • PB can be set, for example, to 3 pixels,
  • PF to 750 pixels,
  • and m to 30 pixels, but the values are not limited to these and can be determined appropriately according to the size of the road surface images and other factors.
  • Alternatively, the degree of coincidence may be calculated for all pixel rows in the traveling-direction overlapping region D, and the pixel row with the highest value may be taken as PB.
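  • The same search applies row-wise in the traveling direction; here is a sketch reusing best_join_column from above on transposed images (parameter values again illustrative):

```python
def best_join_row(img_n: np.ndarray, img_next: np.ndarray,
                  p_b: int = 3, p_f: int = 750, m: int = 30) -> int:
    """Row-wise variant: match the PB-th row of the Nth image against
    the rows around PF of the (N+1)th image."""
    return best_join_column(img_n.T, img_next.T, p_l=p_b, p_r=p_f, n=m)
```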
  • The program for executing the steps of FIG. 9 described above can be stored on a recording medium, or may be provided via communication means.
  • The above-described program may also be regarded as an invention of a "computer-readable recording medium recording a program" or an invention of a "data signal".
  • FIGS. 11A and 11B show examples of road surface images joined by the image editing unit 46.
  • FIG. 11A shows an example of joining road surface images in which the sunlit area captured by the cameras 10s and 12s is identifiable,
  • and FIG. 11B an example of joining road surface images in which the shaded area captured by the cameras 10k and 12k is identifiable.
  • FIG. 11C is an example in which the sunlit area of FIG. 11A and the shaded area of FIG. 11B are joined.
  • The examples of FIGS. 11A and 11B were joined by the process of FIG. 9.
  • In FIG. 11C, the sun/shade joining unit 48 extracts the sunlit area of FIG. 11A and the shaded area of FIG. 11B based on their luminance values, and joins them at the boundary B between the sunlit area and the shaded area.
  • The luminance near the boundary B is set to a value between the luminance of the sunlit area and the luminance of the shaded area of the road surface images, so that the joining line at the boundary B is not noticeable.
  • Alternatively, the road surface images captured by the cameras 10m and 12m, at a luminance lower than that of the sunlit area or higher than that of the shaded area of the images captured by the camera 10s (or 12s) and the camera 10k (or 12k),
  • may be joined in between the sunlit area and the shaded area.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Road Repair (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

Disclosed are a road surface image capturing/editing device and a road surface image capturing/editing program for identifiably capturing images of road surfaces which are sunlit or shaded during the daytime. Capturing means (10, 12) appropriately adjust the shutter speed and gain of the cameras they comprise, and, in accordance with instructions from a controller (20), simultaneously capture, every time a vehicle (100) travels a predetermined distance, a road surface image in which the sunlit region can be identified and a road surface image in which the shaded region can be identified, with the images partially overlapping in the road width direction and the direction of advance of the vehicle (100). The controller (20) then performs keystone correction on the road surface images, calculates the degree of conformity of the overlapping regions in the road width direction and in the direction of advance of the vehicle (100), and joins the road surface images according to these degrees of conformity.

Description

Road surface image capturing/editing device and road surface image capturing/editing program
The present invention relates to improvements in a road surface image capturing/editing device and a road surface image capturing/editing program.
Conventionally, techniques have been proposed for photographing a road surface with a vehicle-mounted camera and observing cracks in the road surface. For example, Patent Document 1 below discloses a road surface crack detection device having a continuous road surface image acquisition function for detecting cracks in road surface property measurement. In this conventional example, road surface images are acquired at regular time intervals, the vehicle's travel distance data is associated with the images, the amount of overlap between images is calculated from this distance data, and the images are edited by cutting away the overlapping portions.
Patent Document 1: JP-A-9-96515
However, in the above conventional technique, when sunlit and shaded areas are photographed, the contrast between them is too strong, so the shooting settings must be adjusted manually and frequently in order to observe road surface properties (cracks and the like). For this reason, road surface photographing work is usually performed at night, when sunlight does not produce sunlit and shaded areas, which places a heavy burden on the operators.
The present invention has been made in view of the above conventional problems, and its object is to provide a road surface image capturing/editing device and a road surface image capturing/editing program that capture identifiable road surface images of roads with sunlit and shaded areas.
In order to achieve the above object, the road surface image capturing/editing device according to claim 1 comprises: sunlit imaging means mounted on a vehicle, having a plurality of cameras each of which takes part of the road width as its shooting range, arranged so that their shooting ranges partially overlap one another and the entire road width can be photographed, and which simultaneously captures identifiable road surface images of sunlit areas at every predetermined travel distance of the vehicle; shaded imaging means mounted on the vehicle, having a plurality of cameras arranged in the same manner, which simultaneously captures identifiable road surface images of shaded areas at every predetermined travel distance of the vehicle; correction data acquisition means for acquiring correction data for keystone correction of the road surface images captured by the plurality of cameras of the sunlit imaging means and of those captured by the plurality of cameras of the shaded imaging means; and editing means which calculates the degree of coincidence of the overlapping regions, in the road width direction and the vehicle traveling direction, of the corrected road surface images captured by the plurality of cameras of the sunlit imaging means and of the corrected road surface images captured by the plurality of cameras of the shaded imaging means, and edits the joints of the road surface images based on the degree of coincidence.
According to the invention of claim 2, in the road surface image capturing/editing device of claim 1, the editing means calculates the degree of coincidence based on the luminance of the pixels in the overlapping regions of the road surface images.
According to the invention of claim 3, the road surface image capturing/editing device of claim 1 or 2 further comprises sun/shade joining means which joins the sunlit area of the road surface image captured by a camera of the sunlit imaging means and the shaded area of the road surface image captured by a camera of the shaded imaging means at the boundary between the sunlit area and the shaded area, and sets the luminance near the boundary to a value between the luminance of the sunlit area and the luminance of the shaded area of the road surface images.
According to the invention of claim 4, the road surface image capturing/editing device of claim 1 or 2 further comprises sun/shade joining means which joins a road surface image, captured by intermediate imaging means at a luminance lower than that of the identifiable sunlit area or higher than that of the identifiable shaded area of the road surface images, sandwiched between the sunlit area and the shaded area.
The road surface image capturing/editing program according to claim 5 causes a computer to function as: storage means for storing the road surface images captured by sunlit imaging means mounted on a vehicle, having a plurality of cameras each of which takes part of the road width as its shooting range, arranged so that their shooting ranges partially overlap one another and the entire road width can be photographed, and which simultaneously captures identifiable road surface images of sunlit areas at every predetermined travel distance of the vehicle, and by shaded imaging means of the same arrangement which simultaneously captures identifiable road surface images of shaded areas at every predetermined travel distance of the vehicle; correction data acquisition means for acquiring correction data for keystone correction of the road surface images stored in the storage means; and editing means which calculates the degree of coincidence of the overlapping regions, in the road width direction and the vehicle traveling direction, of the stored and corrected road surface images captured by the plurality of cameras of the sunlit imaging means and of those captured by the plurality of cameras of the shaded imaging means, and edits the joints of the road surface images based on the degree of coincidence.
According to the present invention, identifiable road surface images of sunlit and shaded road surfaces can be captured in the daytime.
FIG. 1 shows a configuration example of a vehicle equipped with the road surface image capturing/editing device according to the embodiment. FIG. 2 shows a configuration example of the photographing means according to the embodiment. FIG. 3 shows examples of road surface images captured so that the sunlit and shaded areas of the road surface are identifiable. FIG. 4 shows an example of the hardware configuration of the computer constituting the control device according to the embodiment. FIG. 5 is a functional block diagram of the control device according to the embodiment. FIG. 6 is an explanatory diagram of a method for acquiring correction data for keystone correction of road surface images. FIG. 7 is an explanatory diagram of road surface photographing by the road surface image capturing/editing device according to the embodiment. FIG. 8 is an explanatory diagram of the shooting range in the road width direction. FIG. 9 is a flowchart of an operation example of the road surface image capturing/editing device according to the embodiment. FIG. 10 is an explanatory diagram of the method by which the image editing unit according to the embodiment calculates the degree of coincidence of the overlapping regions of road surface images. FIG. 11 shows examples of road surface images joined by the image editing unit according to the embodiment.
Reference numerals: 10, 12 photographing means; 10s, 10k, 10m, 12s, 12k, 12m cameras; 14 laser scanner; 16 coordinate acquisition device; 18 odometer; 20 control device; 22 CPU; 24 RAM; 26 ROM; 28 input device; 30 display device; 32 communication device; 34 storage device; 36 bus; 38 input/output interface; 40 photographing control unit; 42 shooting information adjustment unit; 44 correction data acquisition unit; 46 image editing unit; 48 sun/shade joining unit; 50 communication unit; 52 sheet; 100 vehicle.
Hereinafter, modes for carrying out the present invention (hereinafter referred to as embodiments) will be described.
FIGS. 1A and 1B show a configuration example of a vehicle equipped with the road surface image capturing/editing device according to the embodiment. FIG. 1A is a side view and FIG. 1B is a rear view.
In FIGS. 1A and 1B, a vehicle 100 is equipped with photographing means 10 and 12, a laser scanner 14, a coordinate acquisition device 16, an odometer 18, and a control device 20. The photographing means 10 is on the left side and the photographing means 12 on the right side with respect to the traveling direction of the vehicle 100 (as viewed from the rear of the vehicle 100). The photographing means 10 and 12 comprise road surface photographing cameras such as CCD cameras; details are described with reference to FIG. 2.
The laser scanner 14 measures the height from the road surface for rut measurement, and is also used when acquiring the correction data for keystone correction of the cameras, as described later.
The coordinate acquisition device 16 includes a GPS receiver and the like, and acquires the geographical coordinates of the shooting positions of the photographing means 10 and 12.
The odometer 18 is a device that measures the travel distance of the vehicle 100; it may calculate the travel distance from the vehicle speed pulses of the vehicle 100 or use a non-contact distance meter such as one based on the spatial filter method, but is not limited to these configurations.
The control device 20 is constituted by a personal computer or the like, and performs keystone correction of the road surface images captured by the cameras, control of the camera gains, and so on.
FIG. 2 shows a configuration example of the photographing means 10 and 12. In FIG. 2, the photographing means 10, disposed on the left side with respect to the traveling direction of the vehicle 100, comprises cameras 10s, 10k, and 10m. The photographing means 12, disposed on the right side, comprises cameras 12s, 12k, and 12m.
The cameras 10s and 12s each take part of the road width as their shooting range, and their shooting ranges partially overlap in the road width direction; combined, they cover the entire road width. These cameras 10s and 12s simultaneously capture daytime road surface images at every predetermined travel distance of the vehicle measured by the odometer 18, photographing the road surface in the daytime sun in an identifiable manner. Here, "identifiable" means that the image of the sunlit area of the road surface is not blown out to white and road surface properties (such as cracks) can be observed visually.
The cameras 10k and 12k likewise each take part of the road width as their shooting range, their shooting ranges partially overlap in the road width direction, and combined they cover the entire road width. These cameras 10k and 12k simultaneously capture daytime road surface images at every predetermined travel distance of the vehicle measured by the odometer 18, photographing the road surface in the daytime shade in an identifiable manner. Here, "identifiable" means that the image of the shaded area of the road surface is not crushed to black and the road surface properties can be observed visually.
For the cameras 10s, 12s, 10k, and 12k described above, the shutter speed and gain of each camera are set appropriately so that the sunlit and shaded areas of the road surface are photographed in a distinguishable manner, and the luminance of the pixels in each sunlit or shaded area is adjusted to a value that can be identified visually. The shutter speed is set, for example, in the range of 1/100 s to 1/20000 s, and the gain, for example, in the range of 0 dB to +12 dB; these ranges are determined appropriately according to the camera. The gain may also be controlled by the control device 20.
The cameras 10m and 12m each take part of the road width as their shooting range, their shooting ranges partially overlap in the road width direction, and combined they cover the entire road width. These cameras 10m and 12m simultaneously capture daytime road surface images at every predetermined travel distance of the vehicle measured by the odometer 18. The shutter speed and gain of the cameras 10m and 12m are set either to a faster shutter speed and smaller gain than those of the cameras 10s and 12s, or to a slower shutter speed and larger gain than those of the cameras 10k and 12k. A faster shutter speed with smaller gain makes the road surface image darker (lower luminance), while a slower shutter speed with larger gain makes it brighter (higher luminance). By adjusting the shutter speed and gain of the cameras 10m and 12m in this way, and sandwiching a low-luminance sunlit image on the sunlit side of the boundary between the sunlit area of the images captured by the cameras 10s and 12s and the shaded area of the images captured by the cameras 10k and 12k, or a high-luminance shaded image on the shaded side of that boundary, the joining line at the boundary between the sunlit and shaded areas can be made inconspicuous.
Alternatively, the sunlit and shaded areas may be joined by a method in which the luminance near the boundary is adjusted by the sun/shade joining unit 48, as described later with reference to FIG. 5 and FIG. 11C; in that case the cameras 10m and 12m may be omitted.
As described above, in this embodiment road surface images are captured with a plurality of cameras arranged in the road width direction. This is because photographing the entire road width with a single camera would require mounting the camera higher and using a higher-resolution camera, which would lengthen the shooting interval.
FIGS. 3A and 3B show examples of road surface images captured so that the sunlit and shaded areas of the road surface are identifiable. FIG. 3A is an example in which the sunlit area is photographed identifiably, and FIG. 3B an example in which the shaded area is photographed identifiably. Both are monochrome images.
In FIG. 3A, the sunlit area S is photographed so that the state of the road surface can be identified visually, but the shaded area K is crushed to black. In FIG. 3B, conversely, the sunlit area S is blown out to white, but the shaded area K is photographed so that the state of the road surface can be identified visually.
FIG. 4 shows an example of the hardware configuration of the computer constituting the control device 20 according to the embodiment. In FIG. 4, the control device 20 includes a central processing unit 22 (a CPU such as a microprocessor can be used), a random access memory (RAM) 24, a read-only memory (ROM) 26, an input device 28, a display device 30, a communication device 32, and a storage device 34; these components are connected to one another by a bus 36. The input device 28, the display device 30, the communication device 32, and the storage device 34 are each connected to the bus 36 via an input/output interface 38.
The CPU 22 controls the operation of each unit described later based on a control program stored in the RAM 24 or the ROM 26. The RAM 24 functions mainly as a work area for the CPU 22, and the ROM 26 stores control programs such as the BIOS and other data used by the CPU 22.
The input device 28 comprises a keyboard, a pointing device, and the like, and is used by the operator to input operation instructions and so on.
The display device 30 is constituted by a liquid crystal display or the like, and displays the processing results of the CPU 22 and the like.
The communication device 32 comprises a USB (Universal Serial Bus) port, a network port, and other appropriate interfaces, and is used by the CPU 22 to exchange data with external devices via communication means such as a network.
The storage device 34 is a magnetic storage device such as a hard disk, and stores the various data required for the processing described later. A nonvolatile storage device such as an EEPROM may be used in place of the hard disk.
 図5には、実施形態にかかる制御装置20の機能ブロック図が示される。図5において、制御装置20は、撮影制御部40、撮影情報調整部42、補正データ取得部44、画像編集部46、日向・日陰接合部48及び通信部50を含んで構成されており、これらの機能は例えばCPU22とCPU22の処理動作を制御するプログラムにより実現される。 FIG. 5 is a functional block diagram of the control device 20 according to the embodiment. In FIG. 5, the control device 20 includes an imaging control unit 40, an imaging information adjustment unit 42, a correction data acquisition unit 44, an image editing unit 46, a sun / shade junction unit 48, and a communication unit 50. This function is realized by, for example, the CPU 22 and a program for controlling the processing operation of the CPU 22.
 The imaging control unit 40 acquires the travel distance data of the vehicle 100 from the odometer 18 and, at every predetermined travel distance, simultaneously outputs shooting instruction information to cameras 10s, 10k, 10m and cameras 12s, 12k, 12m. Based on this shooting instruction information, each camera captures a road surface image at the same moment.
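 The patent does not give an implementation, but the trigger logic it describes could look roughly like the following sketch. The odometer and camera interfaces, the storage helper, and the 1.5 m interval are illustrative assumptions, not part of the disclosure:

```python
# Hypothetical sketch: trigger all six cameras each time the vehicle
# advances by a fixed travel distance (1.5 m in the example of FIG. 7).
CAPTURE_INTERVAL_M = 1.5  # assumed interval; the embodiment is not limited to this

def capture_loop(odometer, cameras, storage):
    """odometer.distance(), camera.trigger(), and storage.save() are assumed interfaces."""
    next_trigger = 0.0
    frame_no = 0
    while True:
        if odometer.distance() >= next_trigger:
            frame_no += 1
            # All cameras are triggered simultaneously so the sunlit and
            # shaded images cover the same stretch of road.
            images = {cam.id: cam.trigger() for cam in cameras}
            for cam_id, img in images.items():
                # Images are numbered in acquisition order and stored
                # together with the capturing camera's identification.
                storage.save(img, frame_no=frame_no, camera_id=cam_id)
            next_trigger += CAPTURE_INTERVAL_M
```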
 The imaging information adjustment unit 42 applies keystone correction to the road surface images captured by cameras 10s, 10k, 10m and 12s, 12k, 12m, using the correction data acquired by the correction data acquisition unit 44 described below. The keystone-corrected road surface images are temporarily stored in the storage device 34. The imaging information adjustment unit 42 also adjusts the gain of the road surface images captured by each camera, so that the sunlit areas are identifiable in the images captured by cameras 10s and 12s and the shaded areas are identifiable in the images captured by cameras 10k and 12k. It further adjusts the gains of cameras 10m and 12m. Note that each camera's gain may instead be adjusted in the circuit that outputs that camera's image information.
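 As an illustration only, such a per-camera gain adjustment might be applied in software as below. The gain values and the NumPy-based processing are assumptions; as noted above, the gain may equally well be set in each camera's output circuit:

```python
import numpy as np

def apply_gain(image: np.ndarray, gain: float) -> np.ndarray:
    """Scale 8-bit luminance by a gain factor, clipping to the valid range.

    A low gain keeps sunlit areas identifiable (shadows crush to black);
    a high gain keeps shaded areas identifiable (sunlit areas blow out).
    """
    return np.clip(image.astype(np.float32) * gain, 0, 255).astype(np.uint8)

# Illustrative settings only: cameras 10s/12s expose for the sun,
# 10k/12k for the shade, and 10m/12m sit in between.
GAINS = {"10s": 0.6, "12s": 0.6, "10k": 2.5, "12k": 2.5, "10m": 1.2, "12m": 1.2}
```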
 The correction data acquisition unit 44 acquires the correction data used to apply keystone correction to the road surface images captured by cameras 10s, 10k, 10m and 12s, 12k, 12m. The procedure for acquiring the correction data is described later.
 The image editing unit 46 calculates the degree of coincidence of the overlapping regions, in the road width direction and in the traveling direction of the vehicle 100, of the sunlit-area-identifiable road surface images captured by cameras 10s and 12s, and edits the joints of these road surface images based on that degree of coincidence. Likewise, it calculates the degree of coincidence of the overlapping regions, in the road width direction and the traveling direction, of the shaded-area-identifiable road surface images captured by cameras 10k and 12k, and edits the joints of those images based on that degree of coincidence. The degree of coincidence is calculated, for example, by comparing the luminance of pixels within regions defined in the images, and is calculated on the image information after the imaging information adjustment unit 42 has applied keystone correction.
 The sun/shade joining unit 48 joins the sunlit area of a road surface image captured by camera 10s or 12s to the shaded area of a road surface image captured by camera 10k or 12k at the boundary between the sunlit and shaded areas. The joined pairs are the images of camera 10s with camera 10k, and of camera 12s with camera 12k. If the images were simply butted together, a joining line would remain in the road surface image and could be mistaken for a crack in the road surface. The sun/shade joining unit 48 therefore sets the luminance near the boundary to a value between the luminance of the sunlit area and that of the shaded area, so that the joining line at the boundary is inconspicuous. Alternatively, this processing may be performed by inserting, between the sunlit and shaded areas, a road surface image captured by camera 10m or 12m at a luminance between that of the sunlit area and that of the shaded area.
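 One plausible reading of this intermediate-luminance treatment is a linear cross-fade over a narrow band around the boundary. The sketch below assumes 8-bit grayscale images of identical shape and a straight vertical boundary column; the actual boundary extracted from luminance values need not be straight, and the band width is an arbitrary choice:

```python
import numpy as np

def blend_at_boundary(sun_img: np.ndarray, shade_img: np.ndarray,
                      boundary_col: int, band: int = 20) -> np.ndarray:
    """Join sun_img (left of the boundary) and shade_img (right of it),
    cross-fading luminance over `band` columns on each side so that no
    hard joining line remains that could be mistaken for a crack.
    """
    h, w = sun_img.shape
    out = np.where(np.arange(w) < boundary_col, sun_img, shade_img).astype(np.float32)
    lo, hi = max(boundary_col - band, 0), min(boundary_col + band, w)
    alpha = np.linspace(0.0, 1.0, hi - lo)  # 0 -> sun side, 1 -> shade side
    out[:, lo:hi] = (1 - alpha) * sun_img[:, lo:hi] + alpha * shade_img[:, lo:hi]
    return out.astype(np.uint8)
```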
 The communication unit 50 transmits the road surface images and the like to external devices via the communication device 32. Instead of using the communication unit 50, the road surface images may be stored on a suitable storage medium and provided to external devices.
 Note that the imaging information adjustment unit 42, the image editing unit 46, and the sun/shade joining unit 48 may be provided on a computer separate from the one mounted on the vehicle 100. In that case, the road surface image data is exchanged with the computer mounted on the vehicle 100 via the communication unit 50 or via a USB memory, DVD, or other suitable storage medium.
 FIG. 6 illustrates a method of acquiring the correction data used for keystone correction of the road surface images. In FIG. 6, a sheet 52 used for acquiring the correction data is placed on the ground and photographed by cameras 10s, 10k, 10m and 12s, 12k, 12m. As shown in FIG. 6, circles are drawn on the surface of the sheet 52 at equal intervals vertically and horizontally. Also, as described above, the vehicle 100 carries the laser scanner 14. Laser light is projected from the laser scanner 14 onto the sheet 52 in a straight line, and the sheet is positioned so that a row of the circles coincides with the straight line traced by the laser. In the example of FIG. 6, the line traced by the laser is denoted L. When the left and right cameras are arranged symmetrically (from the left, 10k, 10m, 10s, 12s, 12m, 12k, as in FIG. 2), it is preferable to mark the central column of circles on the sheet 52 (the sheet's center line) C with a visually identifiable line.
 When the surface of the sheet 52 is photographed by each camera in this state, the captured images show the circles, the laser line L, and the sheet's center line C. The angle of view of each camera is then adjusted so that the laser line L coincides between the images captured by each camera pair: 10s and 12s, 10k and 12k, and 10m and 12m. This eliminates any offset, relative to the traveling direction of the vehicle 100, between the images captured by the left and right imaging means 10 and 12.
 When the left and right cameras are arranged symmetrically, the adjustment is also made so that the center line C of the sheet 52 lies at the lateral center of the overlapping region and the images captured by cameras 10s and 12s, 10k and 12k, and 10m and 12m overlap each other by a fixed area.
 Furthermore, since the circles on the sheet 52 are arranged in a grid, the amount of distortion of each image can be calculated from the coordinate values of the circles in the images captured by each camera. From this calculation, the correction data acquisition unit 44 obtains the keystone correction data for the images captured by each camera.
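 One common way to realize such grid-based correction data is to estimate a homography from the detected circle centers and their known positions on the sheet. The OpenCV-based sketch below is an assumed reading of this step, not the patent's prescribed method; the circle-detection step itself is taken as given:

```python
import cv2
import numpy as np

def estimate_keystone_correction(detected_pts, grid_shape, spacing_m):
    """Estimate a perspective (keystone) correction from the calibration sheet.

    detected_pts: Nx2 circle centers detected in the camera image, ordered
                  row-major to match the grid on the sheet.
    grid_shape:   (rows, cols) of circles on the sheet.
    spacing_m:    known circle spacing on the sheet, in meters.
    Returns a 3x3 homography mapping image pixels to the ground plane.
    """
    rows, cols = grid_shape
    ground = np.array([[c * spacing_m, r * spacing_m]
                       for r in range(rows) for c in range(cols)],
                      dtype=np.float32)
    H, _ = cv2.findHomography(np.asarray(detected_pts, np.float32), ground,
                              method=cv2.RANSAC)
    return H

# A corrected image can then be produced with cv2.warpPerspective(img, H, size).
```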
 FIG. 7 illustrates road surface photography by the road surface image capturing/editing device according to the embodiment. In FIG. 7, based on the travel distance of the vehicle 100 measured by the odometer 18, the cameras of the imaging means 10 and 12 capture a road surface image at every predetermined travel distance in response to the shooting instruction information output by the imaging control unit 40. FIG. 7 shows an example of shooting every 1.5 m, but the invention is not limited to this. As shown in FIG. 7, when the road surface is photographed every 1.5 m of travel, the angle of view of each camera is adjusted so that successive shooting ranges overlap by a predetermined distance in the traveling direction of the vehicle 100 (traveling direction overlapping region D). This angle-of-view adjustment is also performed using the sheet 52 described with reference to FIG. 6.
 FIG. 8 illustrates the shooting ranges in the road width direction. In FIG. 8, each camera of the imaging means 10 and 12 covers part of the road width W as its shooting range AL or AR. Of these mutual shooting ranges, the right-hand region of the left road surface image (shooting range AL) captured by the cameras of the left imaging means 10 partially overlaps the left-hand region of the right road surface image (shooting range AR) captured by the cameras of the right imaging means 12 (road width direction overlapping region D'). Together, shooting ranges AL and AR cover the entire road width. The angle-of-view adjustment that produces the shooting ranges shown in FIG. 8 is likewise performed using the sheet 52 described with reference to FIG. 6.
 FIG. 9 shows a flow of an operation example of the road surface image capturing/editing device according to the embodiment. In FIG. 9, under the instruction of the imaging control unit 40, road surface images in which the sunlit area is identifiable are captured simultaneously by cameras 10s and 12s at every predetermined travel distance of the vehicle 100, and road surface images in which the shaded area is identifiable are captured simultaneously by cameras 10k and 12k at every predetermined travel distance. Road surface images are thus acquired at predetermined intervals over the desired road section (S1). The acquired road surface images are numbered in the order of acquisition and stored in the storage device 34 together with the identification information of the camera that captured them.
 Next, the imaging information adjustment unit 42 reads the Nth (N = 1, 2, ...) road surface image among those acquired by the four cameras (10s, 12s, 10k, 12k) from the storage device 34 (S2) and applies keystone correction using the correction data acquired by the correction data acquisition unit 44 (S3).
 Next, in the same manner as steps S2 and S3, the imaging information adjustment unit 42 reads the (N+1)th road surface image from the storage device 34 (S4) and applies keystone correction using the correction data acquired by the correction data acquisition unit 44 (S5).
 The image editing unit 46 calculates the degree of coincidence of the road width direction overlapping region D' of the keystone-corrected road surface images and, based on this degree of coincidence, edits the joint of the road surface images, i.e., performs the joining process (S6). The image editing unit 46 likewise calculates the degree of coincidence of the traveling direction overlapping region D of the keystone-corrected road surface images and performs the joining process based on that degree of coincidence (S7). The order of steps S6 and S7 may be reversed.
 FIGS. 10(a) and 10(b) illustrate the method by which the image editing unit 46 calculates the degree of coincidence of the overlapping regions of the road surface images: FIG. 10(a) for the road width direction and FIG. 10(b) for the traveling direction of the vehicle 100.
 In FIG. 10(a), the right-hand region of the left road surface image captured by camera 10s or 10k of the left imaging means 10 and the left-hand region of the right road surface image captured by camera 12s or 12k of the right imaging means 12 are photographed so as to partially overlap, as described with reference to FIG. 8. The image editing unit 46 therefore takes the pixel column (vertical column of the image) at the PLth pixel from the left of one road surface image (for example, the left image), within the road width direction overlapping region D', and slides it one pixel column at a time across a region n pixels wide on either side of the PRth pixel column from the left of the other image (for example, the right image), calculating the degree of coincidence of luminance at each position. The degree of coincidence can be computed, for example, as the reciprocal of the sum of the luminance differences within the pixel column. The image editing unit 46 then joins the left and right road surface images so that pixel column PL lies at the position where the calculated degree of coincidence is highest. In the joined image, the left road surface image is used to the left of pixel column PL and the right road surface image to its right. PL may be, for example, 950 pixels, PR 100 pixels, and n 30 pixels, but these values are not limiting and can be chosen appropriately according to the size of the road surface images and other factors. For example, the degree of coincidence may be calculated for every pixel column within the road width direction overlapping region D', and the column with the highest value taken as PL.
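 A minimal NumPy-based sketch of this column-matching search follows; the default values of p_l, p_r, and n merely echo the example figures above, and grayscale images of equal height are assumed. The traveling-direction join described next is the same routine applied to pixel rows instead of columns:

```python
import numpy as np

def best_join_offset(left_img, right_img, p_l=950, p_r=100, n=30):
    """Find where column p_l of the left image best matches the right image.

    Slides the reference column over columns p_r - n .. p_r + n of the
    right image and scores each position as the reciprocal of the summed
    absolute luminance difference (higher score = better match).
    """
    ref = left_img[:, p_l].astype(np.float32)
    best_col, best_score = p_r, -1.0
    for c in range(max(p_r - n, 0), min(p_r + n + 1, right_img.shape[1])):
        diff = np.abs(right_img[:, c].astype(np.float32) - ref).sum()
        score = 1.0 / (diff + 1e-6)  # guard against division by zero on a perfect match
        if score > best_score:
            best_score, best_col = score, c
    return best_col

def join_width_direction(left_img, right_img, p_l=950, p_r=100, n=30):
    """Joined image: left image up to column p_l, right image from the best match on."""
    c = best_join_offset(left_img, right_img, p_l, p_r, n)
    return np.hstack([left_img[:, :p_l], right_img[:, c:]])
```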
 In FIG. 10(b), the road surface images captured by the cameras of the left imaging means 10 and the right imaging means 12 are photographed so as to overlap by the predetermined distance D in the traveling direction of the vehicle 100, as described with reference to FIG. 7. Among the continuously captured road surface images, the image editing unit 46 therefore takes the pixel row (horizontal row of the image) at the PBth pixel from the top of the image captured at a given point (the Nth image), within the traveling direction overlapping region D, and slides it one pixel row at a time across a region m pixels wide above and below the PFth pixel row from the top of the next image in the shooting order (the (N+1)th image), calculating the degree of coincidence of luminance at each position. The image editing unit 46 then joins the two road surface images so that pixel row PB lies at the position where the calculated degree of coincidence is highest. In the joined image, the earlier image (the Nth) is used below pixel row PB and the later image (the (N+1)th) above it. PB may be, for example, 3 pixels, PF 750 pixels, and m 30 pixels, but these values are not limiting and can be chosen appropriately according to the size of the road surface images and other factors. For example, the degree of coincidence may be calculated for every pixel row within the traveling direction overlapping region D, and the row with the highest value taken as PB.
 Conventionally, road surface images have been joined based on the length (distance data) over which they overlap; since that overlap length varies with vehicle vibration and the like, the resulting error could not be removed. In the present embodiment, by contrast, the road surface images are joined based on the degree of coincidence of their overlapping regions as described above, so the influence of vehicle vibration and the like can be removed and the road surface images can be joined accurately.
 Next, returning to FIG. 9, the imaging information adjustment unit 42 determines whether any unprocessed road surface images remain among those stored in the storage device 34 (S8). If unprocessed images remain at S8, N is set to N+1 (S9) and the process repeats from S2; if none remain, the process ends.
 The program for executing each step of FIG. 9 described above can be stored on a recording medium, or the program may be provided by communication means. In that case, the program described above may, for example, be regarded as an invention of a "computer-readable recording medium on which a program is recorded" or of a "data signal".
 FIGS. 11(a), 11(b), and 11(c) show examples of road surface images joined by the image editing unit 46. FIG. 11(a) is an example in which road surface images with identifiable sunlit areas, captured by cameras 10s and 12s, are joined; FIG. 11(b) is an example in which road surface images with identifiable shaded areas, captured by cameras 10k and 12k, are joined. FIG. 11(c) is an example in which the sunlit area of FIG. 11(a) and the shaded area of FIG. 11(b) are joined. Of these, the examples of FIGS. 11(a) and 11(b) were joined by the process of FIG. 9.
 In the example of FIG. 11(c), the sun/shade joining unit 48 extracts the sunlit area of FIG. 11(a) and the shaded area of FIG. 11(b) based on their luminance values and joins them at the boundary B between the sunlit and shaded areas. In this example, the luminance near the boundary B is set to a value between the luminance of the sunlit area and that of the shaded area of the road surface image, so that the joining line at the boundary B is inconspicuous. Alternatively, as described with reference to FIG. 2, a road surface image captured by camera 10m or 12m, at a luminance lower than that of the sunlit area or higher than that of the shaded area of the images captured by camera 10s (or 12s) and camera 10k (or 12k), may be inserted between the sunlit and shaded areas and joined.
 While the above embodiment has been described for the case where shaded areas arise from daytime sunlight, the invention is equally applicable where shaded areas arise from strong illumination at night.

Claims (5)

  1.  A road surface image capturing/editing device comprising:
     sunlit imaging means, mounted on a vehicle and having a plurality of cameras each covering part of the road width as its shooting range, the shooting ranges partially overlapping one another while together covering the entire road width, for simultaneously capturing, at every predetermined travel distance of the vehicle, road surface images in which the sunlit areas are identifiable;
     shaded imaging means, mounted on the vehicle and having a plurality of cameras each covering part of the road width as its shooting range, the shooting ranges partially overlapping one another while together covering the entire road width, for simultaneously capturing, at every predetermined travel distance of the vehicle, road surface images in which the shaded areas are identifiable;
     correction data acquisition means for acquiring correction data for applying keystone correction to the road surface images captured by the plurality of cameras of the sunlit imaging means and to the road surface images captured by the plurality of cameras of the shaded imaging means; and
     editing means for calculating the degree of coincidence of the overlapping regions, in the road width direction and in the traveling direction of the vehicle, of the road surface images captured by the plurality of cameras of the sunlit imaging means and corrected with the correction data, and of the road surface images captured by the plurality of cameras of the shaded imaging means and corrected with the correction data, and for editing the joints of the road surface images based on the degree of coincidence.
  2.  The road surface image capturing/editing device according to claim 1, wherein the editing means calculates the degree of coincidence based on the luminance of pixels in the overlapping regions of the road surface images.
  3.  The road surface image capturing/editing device according to claim 1 or 2, further comprising sun/shade joining means for joining the sunlit area of a road surface image captured by a camera of the sunlit imaging means and the shaded area of a road surface image captured by a camera of the shaded imaging means at the boundary between the sunlit area and the shaded area, and for setting the luminance near the boundary to a value between the luminance of the sunlit area and the luminance of the shaded area of the road surface images.
  4.  The road surface image capturing/editing device according to claim 1 or 2, further comprising sun/shade joining means for joining a road surface image captured by intermediate imaging means, which captures road surface images at a luminance lower than the luminance of the identifiable sunlit area or higher than the luminance of the identifiable shaded area of said road surface images, by inserting it between the sunlit area and the shaded area.
  5.  A road surface image capturing/editing program for causing a computer to function as:
     storage means for storing the road surface images respectively captured by sunlit imaging means mounted on a vehicle and having a plurality of cameras each covering part of the road width as its shooting range, the shooting ranges partially overlapping one another while together covering the entire road width, the sunlit imaging means simultaneously capturing, at every predetermined travel distance of the vehicle, road surface images in which the sunlit areas are identifiable, and by shaded imaging means mounted on the vehicle and having a plurality of cameras each covering part of the road width as its shooting range, the shooting ranges partially overlapping one another while together covering the entire road width, the shaded imaging means simultaneously capturing, at every predetermined travel distance of the vehicle, road surface images in which the shaded areas are identifiable;
     correction data acquisition means for acquiring correction data for applying keystone correction to the road surface images stored in the storage means; and
     editing means for calculating the degree of coincidence of the overlapping regions, in the road width direction and in the traveling direction of the vehicle, of the stored road surface images captured by the plurality of cameras of the sunlit imaging means and corrected with the correction data, and of the stored road surface images captured by the plurality of cameras of the shaded imaging means and corrected with the correction data, and for editing the joints of the road surface images based on the degree of coincidence.
PCT/JP2010/068460 2009-10-20 2010-10-20 Road surface image capturing/editing device and road surface image capturing/editing program WO2011049118A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2009241177A JP4709309B2 (en) 2009-10-20 2009-10-20 Road surface image capturing / editing device and road surface image capturing / editing program
JP2009-241177 2009-10-20

Publications (1)

Publication Number Publication Date
WO2011049118A1

Family

ID=43900341

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2010/068460 WO2011049118A1 (en) 2009-10-20 2010-10-20 Road surface image capturing/editing device and road surface image capturing/editing program

Country Status (2)

Country Link
JP (1) JP4709309B2 (en)
WO (1) WO2011049118A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017202284A1 (en) * 2016-05-23 2017-11-30 桂仲成 Pavement autonomous detection intelligent apparatus, robot system and detection method
CN111391754A (en) * 2018-12-28 2020-07-10 丰田自动车株式会社 Electronic mirror system

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010050162A1 (en) * 2008-10-28 2010-05-06 株式会社パスコ Road measurement device and method for measuring road
JP5938327B2 (en) * 2012-10-23 2016-06-22 西日本高速道路エンジニアリング関西株式会社 Road surface photographing system and tunnel wall surface photographing system
US9365217B2 (en) * 2013-06-03 2016-06-14 Booz Allen Hamilton Inc. Mobile pothole detection system and method
JP2015025727A (en) * 2013-07-26 2015-02-05 三菱電機株式会社 Road surface imaging apparatus
CN106770317B (en) * 2016-12-06 2020-06-09 中公高科养护科技股份有限公司 Pavement image acquisition method based on laser projection light supplement mode
JP7119462B2 (en) * 2018-03-19 2022-08-17 株式会社リコー Imaging system, imaging method, moving body equipped with imaging system, imaging device, and moving body equipped with imaging device
EP3830341B1 (en) 2018-07-30 2023-07-05 Ricoh Company, Ltd. Measurement apparatus and vehicle

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH06105224A (en) * 1992-09-17 1994-04-15 Fujitsu Ltd Dynamic range expansion device
JPH0996515A (en) * 1995-09-29 1997-04-08 Mitsubishi Heavy Ind Ltd Detection apparatus for crack on road surface
JPH10187929A (en) * 1996-11-08 1998-07-21 Olympus Optical Co Ltd Image processor
JP2001285678A (en) * 2000-03-30 2001-10-12 Nissan Motor Co Ltd Two-image simultaneous pickup processing apparatus
JP2002324235A (en) * 2001-04-24 2002-11-08 Matsushita Electric Ind Co Ltd Method for compositing and displaying image of on-vehicle camera and device for the same
JP2004096488A (en) * 2002-08-30 2004-03-25 Fujitsu Ltd Object detection apparatus, object detection method and object detection program
JP2008046065A (en) * 2006-08-21 2008-02-28 Nagoya City Road surface image creating method and road surface image producing device
JP2008059104A (en) * 2006-08-30 2008-03-13 Toyota Mapmaster:Kk Road image generation system

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009010566A (en) * 2007-06-27 2009-01-15 Yamaguchi Univ Method for expanding dynamic range of photographic image and imaging apparatus
JP5163031B2 (en) * 2007-09-26 2013-03-13 株式会社ニコン Electronic camera


Also Published As

Publication number Publication date
JP4709309B2 (en) 2011-06-22
JP2011090367A (en) 2011-05-06


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 10824967; Country of ref document: EP; Kind code of ref document: A1)
122 Ep: pct application non-entry in european phase (Ref document number: 10824967; Country of ref document: EP; Kind code of ref document: A1)