WO2021241482A1 - Route detection device - Google Patents

Route detection device

Info

Publication number
WO2021241482A1
WO2021241482A1 (PCT/JP2021/019562)
Authority
WO
WIPO (PCT)
Prior art keywords
image
dimensional
route
line segment
detected
Prior art date
Application number
PCT/JP2021/019562
Other languages
French (fr)
Japanese (ja)
Inventor
祐一 小林 (Yuichi Kobayashi)
Original Assignee
国立大学法人静岡大学 (Shizuoka University)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 国立大学法人静岡大学 (Shizuoka University)
Priority to JP2022527018A (national phase publication JPWO2021241482A1)
Publication of WO2021241482A1 publication Critical patent/WO2021241482A1/en

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 Measuring arrangements characterised by the use of optical techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/60 Analysis of geometric attributes

Definitions

  • The embodiment relates to a route detection device that detects a route.
  • Devices that detect feature points from the output of a sensor such as a camera have long been known, for purposes such as robot control.
  • For example, a device that detects a plane in three-dimensional coordinates from a distance image captured by a distance image sensor is known (see Patent Document 1 below).
  • A device that processes an image taken by a television camera to detect feature points such as corners of contour lines is also known (see Patent Document 2 below).
  • With the conventional detection methods of the devices described in the above documents, it is desired to detect a route accurately in a variety of environments, whether outdoors or indoors.
  • The present embodiment has been made in view of the above problem, and its object is to provide a route detection device capable of detecting a route with high accuracy in a variety of environments.
  • To solve the above problem, the route detection device according to one aspect of the present disclosure comprises an acquisition device that acquires a two-dimensional image reflecting a two-dimensional scene within its field of view, together with depth information corresponding to each pixel of the two-dimensional image.
  • The acquisition device is accompanied by at least one processor that processes the two-dimensional image and the depth information; the at least one processor detects, on the two-dimensional image, the position of a line segment image, that is, an image approximated by a line segment.
  • It then identifies, from the depth information, the depth corresponding to the position of the line segment image, calculates the height and inclination of the line segment image in three-dimensional space from that position and the corresponding depth, and detects route candidates in three-dimensional space based on the height and inclination.
  • the "line segment" of the "line segment image” here is not limited to a straight line, but is a concept that includes a wide range of curves, polygonal lines, and the like.
  • According to this aspect, the position of a line segment image is detected on the two-dimensional image acquired by the acquisition device, and the depth corresponding to that position is identified from the depth information acquired by the acquisition device.
  • The height and inclination of the line segment image in three-dimensional space are then calculated from the position of the line segment image and the corresponding depth, and route candidates in three-dimensional space are identified from them.
  • In this way, the route can be detected with high accuracy in a variety of environments.
  • FIG. 1 shows the schematic structure of a route detection system according to one preferred embodiment of the present invention.
  • The route detection system 1 shown in FIG. 1, a preferred embodiment of the present invention, is a computer system that detects movement routes such as roads, sidewalks, passages, and stairs in the surrounding environment, for purposes such as robot control.
  • The route detection system 1 includes a camera 2, an acquisition device that captures an image of the external environment within its field of view, and a computer 3 that processes the image.
  • The camera 2 contains a ranging camera 2a and a color camera 2b.
  • Under the control of the computer 3, the camera 2 operates the color camera 2b to capture a two-dimensional image of the scene in its field of view and acquire a two-dimensional color image reflecting it, and operates the ranging camera 2a to measure the distance from the camera 2 at each position of that image, acquiring depth information for each pixel.
  • For example, the camera 2 acquires, as the two-dimensional color image, data containing the brightness of each pixel for each RGB color component, and at the same time acquires a depth image giving depth information for each pixel.
  • The ranging function for obtaining the depth image can be realized, for example, by triangulation with an infrared stereo camera, by projecting an infrared pattern and measuring distance from its deformation, by an infrared time-of-flight (TOF) camera, or by radar-based ranging.
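  • As a rough illustration of the data this step yields (not from the patent: the file names and the millimetre depth encoding are assumptions), an aligned RGB-D pair could be loaded with OpenCV as follows; Python is used for all sketches in this document.

```python
import cv2
import numpy as np

# Hypothetical file paths; in practice the pair would come from the camera
# SDK (stereo-IR, structured-light, TOF, or radar based), already aligned so
# that depth[y, x] corresponds to color[y, x].
color = cv2.imread("frame_color.png")                        # H x W x 3, uint8 BGR
depth = cv2.imread("frame_depth.png", cv2.IMREAD_UNCHANGED)  # H x W, uint16 (assumed mm)

assert color.shape[:2] == depth.shape[:2], "depth must be pixel-aligned to color"
depth_m = depth.astype(np.float32) / 1000.0                  # depth in metres
```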
  • The computer 3 is a data processing device that, by processing the two-dimensional color image and the depth image acquired by controlling the camera 2, detects the positions in three-dimensional space of route candidates within the field of view of the camera 2.
  • In this embodiment the computer 3 is a single device, but it may consist of a plurality of devices connected to one another for data communication via a wired or wireless network.
  • FIG. 2 shows the hardware configuration of the computer 3.
  • As shown in FIG. 2, the computer 3 is physically a computer comprising a CPU (Central Processing Unit) 101 as a processor, a RAM (Random Access Memory) 102 and a ROM (Read Only Memory) 103 as recording media, a communication module 104, and an input/output module 106, all electrically connected.
  • The computer 3 may include a display, keyboard, mouse, or touch panel display as the input/output module 106, and may include a data recording device such as a hard disk drive or semiconductor memory. The computer 3 may also consist of a plurality of computers.
  • The computer 3 comprises an image conversion unit 31, an edge detection unit 32, a Hough transform unit 33, a two-dimensional position calculation unit 34, a mapping unit 35, a three-dimensional position calculation unit 36, and a route evaluation unit 37.
  • Each functional unit of the computer 3 shown in FIG. 1 is realized by loading a program onto hardware such as the CPU 101 and RAM 102, operating the communication module 104, the input/output module 106, and so on under the control of the CPU 101, and reading and writing data in the RAM 102.
  • By executing this program, the CPU 101 causes the computer 3 to function as each functional unit in FIG. 1 and sequentially executes the processing corresponding to the route detection method described later.
  • All data needed to execute this program, and all data generated by executing it, are stored in internal memory such as the ROM 103 and RAM 102, or on a storage medium such as a hard disk drive.
  • The image conversion unit 31 acquires the two-dimensional color image and the depth image by controlling the operation of the camera 2, and converts the acquired color image into several resolutions, the resolution being a parameter of image acquisition. For example, the image conversion unit 31 produces two-dimensional color images at resolutions of 640 × 480 pixels and 128 × 96 pixels; the kinds and number of resolutions can be set arbitrarily. The image conversion unit 31 further converts each of these color images to grayscale, yielding two-dimensional grayscale images at each resolution. The grayscale conversion may, for example, extract one of the RGB color components, or take a weighted average of the brightness of the RGB components with predetermined weights.
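  • A minimal sketch of this unit's two conversions, assuming OpenCV (the example resolutions are the ones named above; the function name is hypothetical):

```python
import cv2

def to_grayscale_pyramid(color_bgr, sizes=((640, 480), (128, 96))):
    """Resize the color frame to each working resolution and grayscale it.
    cv2.cvtColor applies a fixed weighted average of the B, G, R channels;
    extracting a single channel (e.g. color_bgr[:, :, 1]) is the other
    grayscale option mentioned in the text."""
    frames = []
    for w, h in sizes:
        resized = cv2.resize(color_bgr, (w, h), interpolation=cv2.INTER_AREA)
        frames.append(cv2.cvtColor(resized, cv2.COLOR_BGR2GRAY))
    return frames
```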
  • The edge detection unit 32 performs edge detection on the two-dimensional grayscale images generated by the image conversion unit 31 and outputs the two-dimensional coordinates of the points detected as edges on the two-dimensional grayscale image.
  • As the edge detection method, the edge detection unit 32 uses, for example, the Canny method.
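  • The patent names only the Canny method, so as an illustrative sketch (threshold values are assumptions), this step could look like:

```python
import cv2
import numpy as np

gray = to_grayscale_pyramid(color, sizes=((640, 480),))[0]  # from the previous sketch
edges = cv2.Canny(gray, threshold1=50, threshold2=150)      # uint8 mask, 255 at edge pixels
edge_points = np.column_stack(np.nonzero(edges))            # (row, col) coordinates of edges
```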
  • The Hough transform unit 33 takes the two-dimensional edge coordinates output by the edge detection unit 32 and detects the positions of multiple line segment images, images approximated by line segments, on the two-dimensional grayscale image.
  • The Hough transform unit 33 detects these positions using, for example, the probabilistic Hough transform, and outputs the two-dimensional coordinates of both endpoints of each detected line segment image.
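  • A hedged sketch of this step with OpenCV's probabilistic Hough transform, applied to the edge mask from the previous sketch (all parameter values are illustrative assumptions):

```python
import cv2
import numpy as np

lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=40,
                        minLineLength=30, maxLineGap=5)
# Each detected segment is returned as its two endpoints (x1, y1, x2, y2).
segments = [] if lines is None else [tuple(l[0]) for l in lines]
```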
  • The two-dimensional position calculation unit 34 derives a formula representing the position of each line segment image from the endpoint coordinates output by the Hough transform unit 33: for endpoints (x1, y1) and (x2, y2) in the xy coordinate system of the image plane, the segment is y = (y2 - y1) {(x - x1) / (x2 - x1)} + y1.
  • When x1 ≠ x2, the two-dimensional position calculation unit 34 varies x from x1 to x2 in this formula to obtain the x and y coordinates of every pixel of the line segment image on the xy coordinates; when x1 = x2, it sets x = x1 and varies y from y1 to y2.
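  • A direct transcription of this interpolation rule as a hypothetical helper, including the vertical special case:

```python
def segment_pixels(x1, y1, x2, y2):
    """Enumerate integer pixel coordinates along a segment using
    y = (y2 - y1)(x - x1)/(x2 - x1) + y1, stepping along y when x1 == x2."""
    if x1 == x2:  # vertical segment: vary y from y1 to y2 instead
        step = 1 if y2 >= y1 else -1
        return [(x1, y) for y in range(y1, y2 + step, step)]
    step = 1 if x2 >= x1 else -1
    return [(x, round((y2 - y1) * (x - x1) / (x2 - x1) + y1))
            for x in range(x1, x2 + step, step)]
```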
  • The mapping unit 35 takes the pixel positions (x coordinate, y coordinate) of the line segment image on the two-dimensional coordinates computed by the two-dimensional position calculation unit 34 and extracts the depth information at the corresponding pixel positions from the depth image.
  • For example, the depth information contained in the depth image may be expressed as the z coordinate of an xyz three-dimensional coordinate system, that is, the distance from the xy plane that is the image plane of the two-dimensional color image.
  • In that case, the z value is extracted and taken as the depth of the pixel of the line segment image.
  • The mapping unit 35 generates and outputs the combination of pixel position (x coordinate, y coordinate) and pixel depth (z coordinate) for every pixel obtained by the two-dimensional position calculation unit 34.
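  • A minimal sketch of the mapping step, assuming the depth image is pixel-aligned with the color image and that a zero reading means "no depth" (both assumptions, common for consumer RGB-D sensors but not stated in the patent):

```python
def attach_depth(pixels, depth_m):
    """Pair each (x, y) segment pixel with its depth z from the depth image.
    depth_m is indexed [row, column] = [y, x]."""
    triples = []
    for x, y in pixels:
        z = float(depth_m[y, x])
        if z > 0.0:                # skip pixels with no valid depth reading
            triples.append((x, y, z))
    return triples
```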
  • The three-dimensional position calculation unit 36 calculates the position of each line segment image in a predetermined three-dimensional space from the pixel positions (x coordinates, y coordinates) output by the mapping unit 35 and the corresponding depths (z coordinates).
  • For example, based on the pixel positions and depths, the three-dimensional position calculation unit 36 computes the coordinates of the line segment image in a predetermined three-dimensional coordinate system, such as one whose xy plane is the horizontal plane or the ground.
  • The three-dimensional position calculation unit 36 then outputs information on the position of the line segment image in three-dimensional space.
  • The output may be the coordinates of the start and end points of the segment in three-dimensional space, an approximating expression for a straight line, curve, or polyline fitted to the segment, or the coordinates of each pixel of the segment in three-dimensional space.
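  • The patent does not specify the camera model, but one plausible sketch is a pinhole back-projection with assumed intrinsics fx, fy, cx, cy; mapping the result into a world frame whose xy plane is the ground would take one further rigid transform, omitted here:

```python
import numpy as np

def to_camera_3d(triples, fx, fy, cx, cy):
    """Back-project image points (x, y) with depth z to camera coordinates
    using a pinhole model (an assumption, not stated in the patent)."""
    return np.array([((x - cx) * z / fx, (y - cy) * z / fy, z)
                     for x, y, z in triples])  # N x 3, metres
```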
  • The route evaluation unit 37 detects route candidates in three-dimensional space from among the multiple line segment images detected by the Hough transform unit 33, based on the positions in three-dimensional space calculated for them by the three-dimensional position calculation unit 36. Specifically, the route evaluation unit 37 calculates the height H [m] above, and the inclination θ [degrees] relative to, a predetermined plane (such as the horizontal plane or the ground) for each line segment image.
  • The height H may be calculated at the center of gravity of the line segment image, or at its start or end point.
  • The inclination θ may be calculated as the slope of the tangent at the center of gravity, start point, or end point of the line segment image. The route evaluation unit 37 then compares the height H and inclination θ of each line segment image against a first threshold and a second threshold, respectively, and based on the result extracts route candidates in three-dimensional space from among the line segment images.
  • For example, with 0 m and 0.6 m preset as the first thresholds and 0 degrees and 10 degrees as the second, the route evaluation unit 37 extracts as route candidate positions those line segment images that satisfy 0 degrees ≤ θ ≤ 10 degrees and 0 m ≤ H ≤ 0.6 m.
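  • A sketch of the height/slope test with the example thresholds above; the world frame with height along z, the centroid for H, and the endpoint direction for θ are assumptions consistent with the options the text lists:

```python
import numpy as np

def is_route_candidate(pts_world, h_range=(0.0, 0.6), theta_range=(0.0, 10.0)):
    """pts_world: N x 3 points of one segment, with z the height above the
    reference plane. H is taken at the centroid, theta from the endpoints."""
    h = float(pts_world[:, 2].mean())                 # height H [m]
    d = pts_world[-1] - pts_world[0]                  # endpoint-to-endpoint direction
    theta = np.degrees(np.arctan2(abs(d[2]), np.hypot(d[0], d[1])))  # slope [deg]
    return h_range[0] <= h <= h_range[1] and theta_range[0] <= theta <= theta_range[1]
```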
  • The route evaluation unit 37 further outputs information on the positions of the extracted route candidates.
  • Output modes include displaying the two-dimensional color image acquired by the camera 2 with the route candidate positions highlighted on a display or the like, adding the route candidate information to map data or the like and writing it to an internal or external recording medium, and outputting the three-dimensional positions of the route candidates to an internal or external control unit that performs robot control or the like.
  • FIG. 3 is a flowchart showing the processing procedure of the route detection method.
  • The route detection process shown in FIG. 3 is started by an operation of the user of the route detection system 1 and is then executed repeatedly, at periodic intervals or whenever a predetermined state is detected by various sensors.
  • First, the camera 2 is controlled by the computer 3 to acquire a two-dimensional color image and a depth image of the field of view (step S1).
  • Next, the image conversion unit 31 of the computer 3 converts the acquired two-dimensional color image to a predetermined resolution and then to a two-dimensional grayscale image (step S2).
  • The edge detection unit 32 of the computer 3 then performs edge detection on the two-dimensional grayscale image and outputs the two-dimensional coordinates of the points detected as edges (step S3).
  • The Hough transform unit 33 of the computer 3 applies the probabilistic Hough transform to these edge coordinates, detecting the positions of multiple line segment images (step S4).
  • The two-dimensional position calculation unit 34 of the computer 3 computes the two-dimensional coordinates of all pixels of the line segment images from their positions (step S5).
  • The mapping unit 35 of the computer 3 then associates depth information from the depth image with the two-dimensional pixel coordinates of the line segment images (step S6), the three-dimensional position calculation unit 36 of the computer 3 computes the positions (three-dimensional coordinates) of the line segment images in the predetermined three-dimensional space (step S7), and the route evaluation unit 37 of the computer 3 calculates the height and inclination of each line segment image relative to the predetermined plane in three-dimensional space, compares the results with the predetermined thresholds to extract route candidate positions from among the line segment image positions, and outputs the extracted positions in a predetermined output mode (step S8).
  • Finally, the computer 3 judges whether the two-dimensional color image should be converted to the next resolution (step S9); if so (step S9; Yes), steps S2 to S8 are repeated for the color image converted to the next resolution, and when conversion to every resolution is complete (step S9; No), the route detection process ends.
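  • Tying the sketches above together, steps S2 to S9 could look roughly like this; all helper names come from the earlier illustrative snippets, not from the patent, and the camera-to-world transform is taken as identity for brevity:

```python
import cv2
import numpy as np

def detect_routes(color_bgr, depth_m, intrinsics, sizes=((640, 480), (128, 96))):
    fx, fy, cx, cy = intrinsics
    candidates = []
    for gray in to_grayscale_pyramid(color_bgr, sizes):          # steps S2, S9
        # NOTE: if gray is not at the depth image's resolution, the pixel
        # coordinates would need rescaling before indexing depth_m (omitted).
        edges = cv2.Canny(gray, 50, 150)                          # step S3
        lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 40,
                                minLineLength=30, maxLineGap=5)   # step S4
        for l in ([] if lines is None else lines):
            pixels = segment_pixels(*l[0])                        # step S5
            triples = attach_depth(pixels, depth_m)               # step S6
            if len(triples) < 2:
                continue
            pts = to_camera_3d(triples, fx, fy, cx, cy)           # step S7
            if is_route_candidate(pts):                           # step S8
                candidates.append(tuple(l[0]))
    return candidates
```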
  • In FIG. 4, part (a) shows an example of a two-dimensional color image acquired by the camera 2, and part (b) shows an example of the positions of the line segment images detected by the computer 3.
  • FIG. 5 is a graph showing the distribution of heights and inclinations of the multiple line segment images calculated by the computer 3, and FIG. 6 shows an output example of route candidates by the computer 3.
  • In the route detection process of this embodiment, after the two-dimensional positions of multiple line segment images are detected on the two-dimensional color image acquired by the camera 2, the heights and inclinations of those line segment images relative to the predetermined plane in three-dimensional space are calculated (FIGS. 4 and 5).
  • In part (b) of FIG. 4, the two-dimensional positions of the detected line segment images are drawn as many white lines; of these, the line segment images that have at least a predetermined length and are treated as route candidates are drawn as thick white lines.
  • At this stage, some line segment images that cannot be routes (for example, vertical ones) are also still displayed as thick white lines.
  • Route candidates are then extracted from among the line segment images based on their calculated heights and inclinations, and their positions are output.
  • For example, from the distribution in FIG. 5, line segment images whose height H [m] and inclination θ [degrees] in three-dimensional space fall within the range satisfying 0 degrees ≤ θ ≤ 10 degrees and 0 m ≤ H ≤ 0.6 m (the circled region) are extracted as route candidates.
  • Reflecting this result in three-dimensional space, the two line segment images CR on both sides of the passage are extracted and output as route candidates, as shown in FIG. 6 (on the actual display they are drawn in a color other than white so that they can be distinguished from the other line segment images).
  • FIG. 7 shows the schematic configuration of a robot 4 as an application example of this embodiment.
  • In addition to the camera 2 and the computer 3 of the route detection system 1, the robot 4 further includes a drive mechanism 5 that enables the whole robot 4 to move autonomously under the control of the computer 3.
  • The computer 3 controls autonomous movement by the drive mechanism 5 based on the route candidate positions output by the route evaluation unit 37.
  • The drive mechanism 5 includes, for example, wheels, a steering unit, drive units for the wheels and steering, and a drive power source such as a battery.
  • In the route detection system 1 described above, the positions of line segment images are detected on the two-dimensional color image acquired by the camera 2, and the depth corresponding to each pixel of a line segment image is identified from the depth information acquired by the camera 2.
  • The height and inclination of each line segment image relative to the predetermined plane in three-dimensional space are calculated from the pixel positions and corresponding depths, and route candidates in three-dimensional space are identified from them.
  • By judging the height and inclination of the line segment images according to the environment in which the subject exists, route candidates suited to that environment can be detected with high accuracy.
  • Furthermore, the computer 3 repeatedly detects route candidates on the two-dimensional color image while changing its resolution among several values, which allows route candidates suited to the environment to be detected with still higher accuracy.
  • For example, in environments where shading varies regularly, the route can be detected with high accuracy using a higher-resolution image, while in an image of a scene such as outdoors, where shading tends to vary randomly, the route can be detected with high accuracy using an image with reduced resolution.
  • FIGS. 8 and 9 show examples of detection results based on two-dimensional color images acquired at multiple resolutions in an outdoor environment.
  • Part (a) of FIG. 8 shows a two-dimensional color image acquired at a resolution of 640 × 480 pixels and a frame rate of about 5 fps, and part (b) of FIG. 8 shows the route candidate detection result using this image.
  • Part (a) of FIG. 9 shows a two-dimensional color image acquired at a resolution of 128 × 96 pixels and a frame rate of about 15 fps, and part (b) of FIG. 9 shows the route candidate detection result using that image.
  • The computer 3 detects the positions of line segment images by grayscale-converting the two-dimensional color image to generate a two-dimensional grayscale image and performing edge detection on it. This reduces the processing load of edge detection from the two-dimensional image and, consequently, the processing load of route detection as a whole.
  • The computer 3 also detects the positions of line segment images by applying the probabilistic Hough transform after edge detection on the two-dimensional grayscale image.
  • Line segment images can thus be detected without omission regardless of the resolution of the two-dimensional color image, and as a result the route can be detected with high accuracy.
  • In addition, the computer 3 detects route candidates in three-dimensional space based on the comparison of the height of each line segment image with the first threshold and of its inclination with the second threshold.
  • By setting these thresholds according to the environment, such as outdoors or indoors, route candidates suited to the environment can be detected with still higher accuracy.
  • The present invention is not limited to the embodiment described above.
  • The configuration of the above embodiment can be modified in various ways.
  • For example, the computer 3 of the above embodiment repeatedly detected route candidates on two-dimensional color images converted to several resolutions, but the processing is not limited to this.
  • The computer 3 may instead use some of the RGB color components of the two-dimensional color image, generating a two-dimensional grayscale image from an extracted color component,
  • and repeatedly detect route candidates on images in which the brightness of each pixel of the extracted color component has been converted to grayscale.
  • This color component is then the parameter of acquisition of the two-dimensional grayscale image to be processed. In this case, by using image data of a color component suited to the environment, such as outdoors or indoors, route candidates suited to the environment can be detected with still higher accuracy.
  • The first and second thresholds that the computer 3 uses for detecting route candidates may also be changed to arbitrary values.
  • For example, in one case the thresholds and judgment criteria are set on the condition that the height of a line segment image is close to the wheel position of the robot and that its inclination is close to horizontal.
  • In another case, the thresholds and judgment criteria are set so that the inclination of a line segment image is close to vertical and its lower end is close to the wheel position of the robot.
  • The computer 3 may also add processing that clusters short line segments, combining groups of segments whose direction vectors or positions are close and detecting them as a single line segment image, as in the sketch below.
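  • One possible (hypothetical) clustering scheme for this, greedily grouping segments whose directions and midpoints are close; the tolerance values are assumptions:

```python
import numpy as np

def cluster_segments(segments, angle_tol_deg=5.0, dist_tol_px=10.0):
    """Group 2-D segments (x1, y1, x2, y2) by direction and midpoint proximity."""
    groups = []
    for x1, y1, x2, y2 in segments:
        ang = np.degrees(np.arctan2(y2 - y1, x2 - x1)) % 180.0
        mid = np.array([(x1 + x2) / 2.0, (y1 + y2) / 2.0])
        for g in groups:
            d_ang = abs(ang - g["ang"])
            d_ang = min(d_ang, 180.0 - d_ang)                # angles wrap at 180 degrees
            if d_ang < angle_tol_deg and np.linalg.norm(mid - g["mid"]) < dist_tol_px:
                g["members"].append((x1, y1, x2, y2))
                break
        else:                                                # no close group found
            groups.append({"ang": ang, "mid": mid, "members": [(x1, y1, x2, y2)]})
    return [g["members"] for g in groups]
```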
  • The computer 3 may also add the following processing to prevent erroneous route detection:
  • comparison between candidates integrating other image features (color region clustering processing, etc.);
  • comparison with the results of three-dimensional point cloud processing (surface detection, etc.), for example using LiDAR (Light Detection and Ranging);
  • use of an image classifier (pattern recognition technology such as deep learning).
  • When detecting the position of a curved line segment image, the computer 3 may represent it by many short straight lines obtained by processing such as LSD (Line Segment Detector), or may use a Hough transform designed to detect curves.
  • The computer 3 may also improve the diversity and accuracy of route feature detection by using deep learning in the process of extracting straight or curved line segment images as local features.
  • The computer 3 may also automatically select features such as data and parameters by clustering, combining detection data from multiple sensors other than the camera 2 or multiple feature extraction methods.
  • As sensors for acquiring depth information, multiple infrared or laser sensors (LiDAR, a TOF camera, etc.) may be used simultaneously, with the detection data selected automatically to improve accuracy (one sensor may be selected depending on the environment, or both sensor outputs may be used). In this way, in differing environments, the sensors can compensate for one another's weaknesses and accuracy can be improved.
  • Processing combined with methods such as artificial intelligence (AI), machine learning, and neural networks may also be executed.
  • The two-dimensional image acquired and processed by the computer 3 is not limited to a color image; it may be a black-and-white image or an infrared image. By acquiring an infrared image, a route can be detected even in an environment such as a dark room at night.
  • It is preferable that at least one processor selects among multiple data or parameters when acquiring the two-dimensional image or the depth information, and detects route candidates on the two-dimensional image or depth information obtained as a result of that selection.
  • In this case, data or parameters suited to the environment, such as outdoors or indoors, are selected, and by using the resulting two-dimensional image or depth information, route candidates suited to the environment can be detected with higher accuracy.
  • It is also preferable that at least one processor changes the resolution of the two-dimensional image and detects route candidates on the two-dimensional image whose resolution has been changed. In this case, by using a resolution suited to the environment, route candidates suited to the environment can be detected with higher accuracy.
  • It is also preferable that the acquisition device acquires the two-dimensional image as a color image containing the brightness of each of a plurality of colors, and that at least one processor changes the color extracted from the two-dimensional image and detects route candidates on the brightness of each pixel of the extracted color. In this case, by using image data of a color component suited to the environment, such as outdoors or indoors, route candidates suited to the environment can be detected with higher accuracy.
  • It is also preferable that at least one processor detects the positions of line segment images by grayscale-converting the two-dimensional image to generate a grayscale image and performing edge detection on the grayscale image. This reduces the processing load of edge detection from the two-dimensional image and, consequently, the processing load of route detection as a whole.
  • It is also preferable that at least one processor detects the positions of line segment images by applying a Hough transform after edge detection on the grayscale image.
  • In this case, line segment images can be detected without omission regardless of the resolution of the two-dimensional image, and as a result the route can be detected with high accuracy.
  • It is also preferable that at least one processor detects route candidates in three-dimensional space based on the comparison of the height with the first threshold and of the inclination with the second threshold.
  • In this case, by setting the thresholds according to the environment, such as outdoors or indoors, route candidates suited to the environment can be detected with higher accuracy.
  • It is also preferable that a drive mechanism enabling autonomous movement is further provided, and that at least one processor controls autonomous movement by the drive mechanism based on the detected route candidates. With such a configuration, highly accurate autonomous movement control is realized using the route detection results.
  • A route detection device that detects a route is thus provided, capable of detecting a route with high accuracy in a variety of environments.
  • 1 ... Route detection system (route detection device)
  • 2 ... Camera (acquisition device)
  • 3 ... Computer
  • 4 ... Robot
  • 5 ... Drive mechanism
  • 31 ... Image conversion unit
  • 32 ... Edge detection unit
  • 33 ... Hough transform unit
  • 34 ... Two-dimensional position calculation unit
  • 35 ... Mapping unit
  • 36 ... Three-dimensional position calculation unit
  • 37 ... Route evaluation unit

Abstract

The purpose of an embodiment of the present invention is to detect routes with high accuracy in various environments. A route detection system 1 is provided with: a camera 2 that acquires a two-dimensional color image reflecting a two-dimensional scene in its field of view, together with depth information corresponding to each pixel of the two-dimensional color image; and a computer 3 that processes the two-dimensional color image and the depth information. The computer 3 detects the positions of the pixels of a line segment image, an image approximated by line segments, on the two-dimensional color image; identifies the depth corresponding to those pixel positions; calculates the height and the slope of the line segment image in a three-dimensional space on the basis of the pixel positions and the corresponding depths; and detects a route candidate in the three-dimensional space on the basis of the height and the slope.

Description

Route detection device
The embodiment relates to a route detection device that detects a route.

Devices that detect feature points from the output of a sensor such as a camera have long been known, for purposes such as robot control. For example, a device that detects a plane in three-dimensional coordinates from a distance image captured by a distance image sensor (see Patent Document 1 below), and a device that processes an image taken by a television camera to detect feature points such as corners of contour lines (see Patent Document 2 below), are known.

[Patent Document 1] Japanese Unexamined Patent Publication No. 2014-85940
[Patent Document 2] Japanese Unexamined Patent Publication No. H6-139357

With the conventional detection methods of the devices described in the above documents, it is desired to detect a route accurately in a variety of environments, whether outdoors or indoors.

The present embodiment has been made in view of the above problem, and its object is to provide a route detection device capable of detecting a route with high accuracy in a variety of environments.

To solve the above problem, a route detection device according to one aspect of the present disclosure comprises an acquisition device that acquires a two-dimensional image reflecting a two-dimensional scene within its field of view, together with depth information corresponding to each pixel of the two-dimensional image, and at least one processor that processes the two-dimensional image and the depth information. The at least one processor detects, on the two-dimensional image, the position of a line segment image, that is, an image approximated by a line segment; identifies, from the depth information, the depth corresponding to the position of the line segment image; calculates the height and inclination of the line segment image in three-dimensional space from that position and the corresponding depth; and detects route candidates in three-dimensional space based on the height and inclination. The "line segment" of a "line segment image" here is not limited to a straight line; it is a concept that broadly includes curves, polygonal lines, and the like.

According to this aspect, the position of a line segment image is detected on the two-dimensional image acquired by the acquisition device, the depth corresponding to that position is identified from the acquired depth information, the height and inclination of the line segment image in three-dimensional space are calculated from the position and corresponding depth, and route candidates in three-dimensional space are identified from them. By judging the height and inclination of the line segment image according to the environment in which the subject exists, route candidates suited to that environment can be detected with high accuracy.

According to the embodiment, a route can be detected with high accuracy in a variety of environments.
FIG. 1 is a diagram showing the schematic configuration of a route detection system according to a preferred embodiment of the present invention.
FIG. 2 is a block diagram showing the hardware configuration of the computer of FIG. 1.
FIG. 3 is a flowchart showing the procedure of a route detection method according to a preferred embodiment of the present invention.
FIG. 4 is a diagram showing a two-dimensional color image acquired by the camera 2 and the positions of line segment images detected on the image by the computer 3.
FIG. 5 is a graph showing the distribution of heights and inclinations of multiple line segment images calculated by the computer 3.
FIG. 6 is a diagram showing an output example of route candidates by the computer 3.
FIG. 7 is an external view showing the schematic configuration of a robot 4 as an application example of this embodiment.
FIG. 8 is a diagram showing a two-dimensional color image acquired by the camera 2 and the positions of route candidates detected on the image by the computer 3.
FIG. 9 is a diagram showing a two-dimensional color image acquired by the camera 2 and the positions of route candidates detected on the image by the computer 3.
Hereinafter, a preferred embodiment of the route detection device according to the present invention will be described in detail with reference to the drawings. In the description of the drawings, the same or corresponding parts are given the same reference numerals, and duplicate description is omitted.
The route detection system 1 shown in FIG. 1, a preferred embodiment of the present invention, is a computer system that detects movement routes such as roads, sidewalks, passages, and stairs in the surrounding environment, for purposes such as robot control. The route detection system 1 includes a camera 2, an acquisition device that captures an image of the external environment within its field of view, and a computer 3 that processes the image.

The camera 2 contains a ranging camera 2a and a color camera 2b. Under the control of the computer 3, the camera 2 operates the color camera 2b to capture a two-dimensional image of the scene in its field of view and acquire a two-dimensional color image reflecting it, and operates the ranging camera 2a to measure the distance from the camera 2 at each position of that image, acquiring depth information for each pixel. For example, the camera 2 acquires, as the two-dimensional color image, data containing the brightness of each pixel for each RGB color component, and at the same time acquires a depth image giving depth information for each pixel. The ranging function for obtaining the depth image can be realized, for example, by triangulation with an infrared stereo camera, by projecting an infrared pattern and measuring distance from its deformation, by an infrared time-of-flight (TOF) camera, or by radar-based ranging.

The computer 3 is a data processing device that, by processing the two-dimensional color image and the depth image acquired by controlling the camera 2, detects the positions in three-dimensional space of route candidates within the field of view of the camera 2. In this embodiment the computer 3 is a single device, but it may consist of a plurality of devices connected to one another for data communication via a wired or wireless network.

FIG. 2 shows the hardware configuration of the computer 3. As shown in FIG. 2, the computer 3 is physically a computer comprising a CPU (Central Processing Unit) 101 as a processor, a RAM (Random Access Memory) 102 and a ROM (Read Only Memory) 103 as recording media, a communication module 104, and an input/output module 106, all electrically connected. The computer 3 may include a display, keyboard, mouse, or touch panel display as the input/output module 106, and may include a data recording device such as a hard disk drive or semiconductor memory. The computer 3 may also consist of a plurality of computers.

Returning to FIG. 1, the functional configuration of the computer 3 will be described. The computer 3 comprises an image conversion unit 31, an edge detection unit 32, a Hough transform unit 33, a two-dimensional position calculation unit 34, a mapping unit 35, a three-dimensional position calculation unit 36, and a route evaluation unit 37. Each functional unit of the computer 3 shown in FIG. 1 is realized by loading a program onto hardware such as the CPU 101 and RAM 102, operating the communication module 104, the input/output module 106, and so on under the control of the CPU 101, and reading and writing data in the RAM 102. By executing this computer program, the CPU 101 causes the computer 3 to function as each functional unit in FIG. 1 and sequentially executes the processing corresponding to the route detection method described later. All data needed to execute this program, and all data generated by executing it, are stored in internal memory such as the ROM 103 and RAM 102, or on a storage medium such as a hard disk drive.
The image conversion unit 31 acquires the two-dimensional color image and the depth image by controlling the operation of the camera 2, and converts the acquired color image into several resolutions, the resolution being a parameter of image acquisition. For example, the image conversion unit 31 produces two-dimensional color images at resolutions of 640 × 480 pixels and 128 × 96 pixels; the kinds and number of resolutions can be set arbitrarily. The image conversion unit 31 further converts each of these color images to grayscale, yielding two-dimensional grayscale images at each resolution. The grayscale conversion may, for example, extract one of the RGB color components, or take a weighted average of the brightness of the RGB components with predetermined weights.

The edge detection unit 32 performs edge detection on the two-dimensional grayscale images generated by the image conversion unit 31 and outputs the two-dimensional coordinates of the points detected as edges on the two-dimensional grayscale image. As the edge detection method, the edge detection unit 32 uses, for example, the Canny method.

The Hough transform unit 33 takes the two-dimensional edge coordinates output by the edge detection unit 32 and detects the positions of multiple line segment images, images approximated by line segments, on the two-dimensional grayscale image. The Hough transform unit 33 detects these positions using, for example, the probabilistic Hough transform, and outputs the two-dimensional coordinates of both endpoints of each detected line segment image.
The two-dimensional position calculation unit 34 derives a formula representing the position of each line segment image from the endpoint coordinates output by the Hough transform unit 33. For example, when the endpoints of a line segment image are detected at (x1, y1) and (x2, y2) in the xy coordinate system referenced to the image plane of the camera 2, the position of the line segment image is expressed by the formula:

y = (y2 - y1) {(x - x1) / (x2 - x1)} + y1

When x1 ≠ x2, the two-dimensional position calculation unit 34 varies x from x1 to x2 in this formula to obtain the x and y coordinates of every pixel of the line segment image on the xy coordinates. When x1 = x2, it sets x = x1 and varies y from y1 to y2 to obtain the x and y coordinates of every pixel of the line segment image.
The mapping unit 35 takes the pixel positions (x coordinate, y coordinate) of the line segment image on the two-dimensional coordinates computed by the two-dimensional position calculation unit 34 and extracts the depth information at the corresponding pixel positions from the depth image. For example, when the depth information contained in the depth image is expressed as the z coordinate of an xyz three-dimensional coordinate system, that is, the distance from the xy plane that is the image plane of the two-dimensional color image, the mapping unit 35 extracts that z value as the depth of the pixel of the line segment image. The mapping unit 35 then generates and outputs the combination of pixel position (x coordinate, y coordinate) and depth (z coordinate) for every pixel obtained by the two-dimensional position calculation unit 34.

The three-dimensional position calculation unit 36 calculates the position of each line segment image in a predetermined three-dimensional space from the pixel positions (x coordinates, y coordinates) output by the mapping unit 35 and the corresponding depths (z coordinates). For example, based on the pixel positions and depths, it computes the coordinates of the line segment image in a predetermined three-dimensional coordinate system, such as one whose xy plane is the horizontal plane or the ground. The three-dimensional position calculation unit 36 then outputs information on the position of the line segment image in three-dimensional space. The output may be the coordinates of the start and end points of the segment, an approximating expression for a straight line, curve, or polyline fitted to the segment, or the coordinates of each pixel of the segment in three-dimensional space.
The route evaluation unit 37 detects route candidates in three-dimensional space from among the multiple line segment images detected by the Hough transform unit 33, based on the positions in three-dimensional space calculated for them by the three-dimensional position calculation unit 36. Specifically, the route evaluation unit 37 calculates the height H [m] above, and the inclination θ [degrees] relative to, a predetermined plane (such as the horizontal plane or the ground) for each line segment image in three-dimensional space. The height H may be calculated at the center of gravity of the line segment image, or at its start or end point; the inclination θ may be calculated as the slope of the tangent at the center of gravity, start point, or end point. The route evaluation unit 37 then compares the height H and inclination θ of each line segment image against a first threshold and a second threshold, respectively, and based on the result extracts route candidates in three-dimensional space from among the line segment images. For example, with 0 m and 0.6 m preset as the first thresholds and 0 degrees and 10 degrees as the second, the route evaluation unit 37 extracts as route candidate positions those line segment images that satisfy:

0 degrees ≤ θ ≤ 10 degrees, and 0 m ≤ H ≤ 0.6 m
The route evaluation unit 37 further outputs information on the positions of the extracted route candidates. Output modes include displaying the two-dimensional color image acquired by the camera 2 with the route candidate positions highlighted on a display or the like, adding the route candidate information to map data or the like and writing it to an internal or external recording medium, and outputting the three-dimensional positions of the route candidates to an internal or external control unit that performs robot control or the like.

Next, the procedure of the route detection method executed by the route detection system 1 described above will be explained. FIG. 3 is a flowchart showing the processing procedure of the route detection method. The route detection process shown in FIG. 3 is started by an operation of the user of the route detection system 1 and is then executed repeatedly, at periodic intervals or whenever a predetermined state is detected by various sensors.

First, referring to FIG. 3, when the route detection process starts, the camera 2 is controlled by the computer 3 to acquire a two-dimensional color image and a depth image of the field of view (step S1). Next, the image conversion unit 31 of the computer 3 converts the acquired two-dimensional color image to a predetermined resolution and then to a two-dimensional grayscale image (step S2).

After that, the edge detection unit 32 of the computer 3 performs edge detection on the two-dimensional grayscale image and outputs the two-dimensional coordinates of the points detected as edges (step S3). The Hough transform unit 33 of the computer 3 then applies the probabilistic Hough transform to these edge coordinates, detecting the positions of multiple line segment images (step S4). The two-dimensional position calculation unit 34 of the computer 3 then computes the two-dimensional coordinates of all pixels of the line segment images from their positions (step S5).

Next, the mapping unit 35 of the computer 3 associates depth information from the depth image with the two-dimensional pixel coordinates of the line segment images (step S6). The three-dimensional position calculation unit 36 of the computer 3 then computes the positions (three-dimensional coordinates) of the line segment images in the predetermined three-dimensional space (step S7). After that, the route evaluation unit 37 of the computer 3 calculates the height and inclination of each line segment image relative to the predetermined plane in three-dimensional space, compares the results with the predetermined thresholds to extract route candidate positions from among the line segment image positions, and outputs the extracted positions in a predetermined output mode (step S8). Finally, the computer 3 judges whether the two-dimensional color image should be converted to the next resolution (step S9); if so (step S9; Yes), steps S2 to S8 are repeated for the color image converted to the next resolution, and when conversion to every resolution is complete (step S9; No), the route detection process ends.
 Next, an example of the results of the route detection process according to the embodiment described above is shown with reference to FIGS. 4 to 6. Part (a) of FIG. 4 shows an example of a two-dimensional color image acquired by the camera 2, and part (b) shows an example of the positions of the line segment images detected by the computer 3. FIG. 5 is a graph showing the distribution of heights and inclinations of the plurality of line segment images calculated by the computer 3, and FIG. 6 shows an example of the route candidates output by the computer 3.
 As these figures show, in the route detection process of the present embodiment, the two-dimensional positions of a plurality of line segment images are first detected in the two-dimensional color image acquired by the camera 2, after which the height and inclination of each line segment image relative to a predetermined plane in the three-dimensional space are calculated (FIGS. 4 and 5). In part (b) of FIG. 4, the two-dimensional positions of the detected line segment images are shown as numerous white lines; those white lines that have at least a predetermined length, and are therefore route candidates, are drawn as thick white lines. At this stage, some line segment images that cannot be routes (for example, vertical line segment images) are also displayed as thick white lines. Route candidates are then extracted from among the plurality of line segment images based on the calculated heights and inclinations, and their positions are output. For example, from the distribution of heights and inclinations shown in FIG. 5, the line segment images whose height H [m] and inclination θ [degrees] in the three-dimensional space satisfy, for example, 0° ≤ θ ≤ 10° and 0 m ≤ H ≤ 0.6 m (the circled range) are extracted as route candidates. The result is reflected in the three-dimensional space, and as shown in FIG. 6, the two line segment images CR on both sides of the passage (shown on the actual display in a color other than white, so that they can be distinguished from the other line segment images) are extracted and output as route candidates.
 FIG. 7 shows a schematic configuration of a robot 4 as an application example of the present embodiment. In addition to the camera 2 and the computer 3 constituting the route detection system 1, the robot 4 further includes a drive mechanism 5 that enables autonomous movement of the robot 4 as a whole under the control of the computer 3. The computer 3 controls autonomous movement by the drive mechanism 5 based on the route candidate positions output by the route evaluation unit 37. The drive mechanism 5 includes, for example, wheels, a steering unit, drive units for the wheels and the steering unit, and a drive power source such as a battery.
 The operation and effects of the route detection system 1 of the embodiment described above will now be explained.
 According to the present embodiment, the positions of line segment images are detected in the two-dimensional color image acquired by the camera 2; the depth corresponding to each pixel position of a line segment image is specified from the depth information acquired by the camera 2; the height and inclination of the line segment image relative to a predetermined plane in the three-dimensional space are calculated from the pixel positions and the corresponding depths; and route candidates in the three-dimensional space are identified on that basis. By judging the height and inclination of line segment images in the three-dimensional space according to the environment in which the subject exists, route candidates suited to that environment can be detected with high accuracy. For example, a route suitable for a robot moving along a corridor inside a building or along an outdoor sidewalk can be detected, and likewise a suitable route when the robot is to be controlled to move along the wall surface or a pillar of an outdoor building.
 Here, the computer 3 repeatedly detects route candidates while changing the resolution of the two-dimensional color image among a plurality of resolutions. By using a two-dimensional image converted to a resolution suited to the environment, such as outdoors or indoors, route candidates suited to that environment can be detected with higher accuracy. For example, where changes in image shading tend to be linear, as indoors, a route can be detected with high accuracy from a higher-resolution image; where changes in shading tend to be random, as outdoors, a route can be detected with high accuracy from a lower-resolution image.
 For example, FIGS. 8 and 9 show examples of detection results based on two-dimensional color images acquired at different resolutions in an outdoor environment. Part (a) of FIG. 8 shows a two-dimensional color image acquired at a resolution of 640 × 480 pixels and a frame rate of about 5 fps, and part (b) of FIG. 8 shows the route candidates detected from that image. Part (a) of FIG. 9 shows a two-dimensional color image acquired at a resolution of 128 × 96 pixels and a frame rate of about 15 fps, and part (b) of FIG. 9 shows the route candidates detected from that image. In parts (b) of FIGS. 8 and 9, the positions extracted as route candidates from among the line segment images drawn as thick white lines are labeled CR. As these results show, in an environment such as outdoors where straight lines are hard to capture in an image, only one route candidate, at the clear boundary of the paved road, is detected in the high-resolution image, whereas three route candidates, including the vague boundaries of the paved road, are detected in the low-resolution image. In this way, highly accurate route detection is realized according to how well routes appear in the image in a given environment.
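 Combining the sketches above, the multi-resolution loop of steps S2 to S9 might look as follows, using the two resolutions quoted for FIGS. 8 and 9. This is a sketch under the same assumptions as before; scale_intrinsics() is a hypothetical helper for rescaling the assumed camera intrinsics to each working resolution.

```python
import cv2

def detect_routes_multiscale(color_bgr, depth_m, intrinsics):
    """Steps S2-S9 sketch: run the single-resolution pipeline (preprocess,
    detect_segments, segment_pixels, evaluate_segment from the sketches
    above) once per working resolution and pool the candidates."""
    candidates = []
    for width, height in [(640, 480), (128, 96)]:   # resolutions of FIGS. 8-9
        gray = preprocess(color_bgr, width, height)
        fx, fy, cx, cy = scale_intrinsics(intrinsics, width, height)  # hypothetical
        depth = cv2.resize(depth_m, (width, height),
                           interpolation=cv2.INTER_NEAREST)
        for x1, y1, x2, y2 in detect_segments(gray):
            result = evaluate_segment(segment_pixels(x1, y1, x2, y2),
                                      depth, fx, fy, cx, cy)
            if result is not None:
                candidates.append(((x1, y1, x2, y2), result))
    return candidates
```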
 The computer 3 also detects the positions of line segment images by grayscale-converting the two-dimensional color image to generate a two-dimensional grayscale image and performing edge detection on it. This reduces the processing load of edge detection on the two-dimensional image and, as a result, the processing load of route detection as a whole.
 The computer 3 also detects the positions of line segment images by performing a probabilistic Hough transform after edge detection on the two-dimensional grayscale image. In this case, line segment images can be detected without omission regardless of the resolution of the two-dimensional color image, and routes can consequently be detected with high accuracy.
 The computer 3 also detects route candidates in the three-dimensional space based on the result of comparing the height of a line segment image with a first threshold and the result of comparing its inclination with a second threshold. In this case, by setting the thresholds according to the environment, such as outdoors or indoors, route candidates suited to that environment can be detected with higher accuracy.
 Further, according to the configuration of the robot 4 equipped with the route detection system 1 described above, highly accurate autonomous movement control is realized using the route detection results.
 The present invention is not limited to the embodiment described above; the configuration of the embodiment may be modified in various ways.
 For example, the computer 3 of the above embodiment repeatedly detected route candidates from two-dimensional color images converted to a plurality of resolutions, but the processing is not limited to this. The computer 3 may use only some of the RGB color components of the two-dimensional color image, grayscale-convert those components to generate a two-dimensional grayscale image, and repeatedly detect route candidates while changing which color component is extracted from the two-dimensional color image, using an image in which the luminance of each pixel of the extracted component is converted to grayscale, as sketched below. The color component is a parameter for obtaining the two-dimensional grayscale image to be processed. In this case, by using image data of a color component suited to the environment, such as outdoors or indoors, route candidates suited to that environment can be detected with higher accuracy.
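 A minimal sketch of this color-component variant, assuming OpenCV's BGR channel ordering; detect_fn stands in for the segment-detection pipeline above and is an assumption of this sketch.

```python
def detect_over_channels(color_bgr, detect_fn):
    """Run route detection once per color component (B, G, R), using the
    extracted channel directly as the grayscale input."""
    results = {}
    for name, channel in zip("BGR", range(3)):
        gray = color_bgr[:, :, channel]   # single-channel image of that component
        results[name] = detect_fn(gray)
    return results
```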
 The first and second thresholds used by the computer 3 to detect route candidates may also be changed to arbitrary values. For example, when detecting a route in a horizontal plane, the thresholds and judgment criteria can be set so that the conditions are that the height of a line segment image is close to the robot's wheel position and that the inclination of the line segment image is close to horizontal. When detecting a route for a climbing robot, the thresholds and judgment criteria are set so that the conditions are that the inclination of a line segment image is close to vertical and that the lower end of the line segment image is close to the robot's wheel position.
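 As an illustration, such threshold presets might be expressed as simple configurations. Only the horizontal-route ranges echo the example of FIG. 5; the climbing-robot numbers are assumptions.

```python
# Illustrative presets for the first (height) and second (inclination) thresholds.
HORIZONTAL_ROUTE = {"tilt_range_deg": (0.0, 10.0),   # near-horizontal segments
                    "height_range_m": (0.0, 0.6)}    # near wheel level
CLIMBING_ROUTE   = {"tilt_range_deg": (80.0, 90.0),  # near-vertical segments
                    "height_range_m": (0.0, 0.3)}    # lower end near the wheels
```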
 To detect ambiguous line segments as line segment images, the computer 3 may also add a process that clusters short segments, combining groups of segments with similar direction vectors or nearby positions and detecting them as a single line segment image, as in the sketch below.
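 One possible (assumed) form of such clustering, grouping segments greedily by direction and midpoint proximity; the publication names only the idea, so the grouping rule and tolerances are illustrative.

```python
import numpy as np

def cluster_segments(segments, angle_tol_deg=5.0, dist_tol_px=15.0):
    """Greedily group short (x1, y1, x2, y2) segments whose direction vectors
    and midpoints are close; each group can then be refit as one longer
    line segment image."""
    groups = []
    for x1, y1, x2, y2 in segments:
        ang = np.degrees(np.arctan2(y2 - y1, x2 - x1)) % 180.0
        mid = np.array([(x1 + x2) / 2.0, (y1 + y2) / 2.0])
        for g in groups:
            dang = abs(g["ang"] - ang)
            if (min(dang, 180.0 - dang) < angle_tol_deg
                    and np.linalg.norm(g["mid"] - mid) < dist_tol_px):
                g["members"].append((x1, y1, x2, y2))
                break
        else:
            groups.append({"ang": ang, "mid": mid,
                           "members": [(x1, y1, x2, y2)]})
    return [g["members"] for g in groups]
```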
 When a shadow is detected in the two-dimensional color image acquired by the camera 2, for example outdoors, the computer 3 may add the following processing to prevent erroneous route detection:
- comparison between candidates integrating other image features (clustering of color regions, etc.), i.e. voting between candidate routes;
- comparison with processing results from three-dimensional point cloud processing (plane detection, etc.) or with routes detected using LiDAR (Light Detection and Ranging);
- use of an image classifier (pattern recognition techniques such as deep learning) that distinguishes shadows from non-shadows.
 When detecting the position of a curved line segment image, the computer 3 may represent it by a large number of short straight lines obtained by processing such as LSD (Line Segment Detector), or may use a Hough transform that detects curves.
 In the process of extracting straight or curved line segment images as local features, the computer 3 may also use deep learning to improve the diversity and accuracy of route feature detection.
 The computer 3 may also combine detection data from a plurality of sensors other than the camera 2, or a plurality of feature extraction methods, and use clustering to automatically select features such as data and parameters. For example, a plurality of sensors using infrared light or lasers (LiDAR, TOF cameras, etc.) may be used simultaneously as sensors for acquiring depth information, with automatic selection of the detection data to improve accuracy (one of the sensors may be selected according to the environment, or the outputs of both sensors may be used). In this way, when the environment differs, accuracy can be improved by the sensors compensating for each other's weaknesses; a minimal selection rule is sketched below.
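 A minimal sketch of one possible selection rule, choosing whichever depth source currently yields the most valid returns; the actual selection criterion is left open by the publication and this rule is an assumption.

```python
import numpy as np

def select_depth_source(depth_maps):
    """Pick the depth map (e.g. projected LiDAR vs. TOF camera) with the
    highest fraction of valid (non-zero) returns for the current frame."""
    return max(depth_maps, key=lambda d: np.count_nonzero(d) / d.size)
```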
 In addition, the various processes in the route detection system 1 may be executed in combination with techniques such as artificial intelligence (AI), machine learning, and neural networks.
 The two-dimensional image acquired and processed by the computer 3 is not limited to a color image, and may be a black-and-white image or an infrared image. Acquiring infrared images makes route detection possible even in environments such as dark rooms or at night.
 In the above embodiment, it is preferable that the at least one processor, when acquiring the two-dimensional image or the depth information, selects from among a plurality of data or parameters and detects route candidates from the two-dimensional image or depth information obtained as a result of the selection. In this case, data or parameters suited to the environment, such as outdoors or indoors, are selected, and by using the resulting two-dimensional image or depth information, route candidates suited to that environment can be detected with higher accuracy.
 It is also preferable that the at least one processor changes the resolution of the two-dimensional image and detects route candidates from the two-dimensional image with the changed resolution. In this case, by using a two-dimensional image converted to a resolution suited to the environment, such as outdoors or indoors, route candidates suited to that environment can be detected with higher accuracy.
 It is also preferable that the acquisition device acquires the two-dimensional image as a color image including a luminance for each of a plurality of colors, and that the at least one processor changes the color extracted from the two-dimensional image and detects route candidates from the luminance of each pixel of the extracted color. In this case, by using image data of a color component suited to the environment, such as outdoors or indoors, route candidates suited to that environment can be detected with higher accuracy.
 It is also preferable that the at least one processor generates a grayscale image by grayscale-converting the two-dimensional image and detects the positions of line segment images by performing edge detection on the grayscale image. This reduces the processing load of edge detection on the two-dimensional image and, as a result, the processing load of route detection as a whole.
 It is also preferable that the at least one processor detects the positions of line segment images by performing a Hough transform after edge detection on the grayscale image. In this case, line segment images can be detected without omission regardless of the resolution of the two-dimensional image, and routes can consequently be detected with high accuracy.
 It is also preferable that the at least one processor detects route candidates in the three-dimensional space based on the result of comparing the height with a first threshold and the result of comparing the inclination with a second threshold. In this case, by setting the thresholds according to the environment, such as outdoors or indoors, route candidates suited to that environment can be detected with higher accuracy.
 It is also preferable that the device further includes a drive mechanism enabling autonomous movement, and that the at least one processor controls autonomous movement by the drive mechanism based on the detected route candidates. With this configuration, highly accurate autonomous movement control is realized using the route detection results.
 One aspect of the present disclosure is intended for use as a route detection device that detects routes, and can detect routes with high accuracy in various environments.
 1 ... route detection system (route detection device), 2 ... camera (acquisition device), 3 ... computer, 4 ... robot, 5 ... drive mechanism, 31 ... image conversion unit, 32 ... edge detection unit, 33 ... Hough transform unit, 34 ... two-dimensional position calculation unit, 35 ... mapping unit, 36 ... three-dimensional position calculation unit, 37 ... route evaluation unit.

Claims (8)

  1.  A route detection device comprising:
     an acquisition device that acquires a two-dimensional image reflecting a two-dimensional image of a field of view, and depth information of the image corresponding to each pixel of the two-dimensional image; and
     at least one processor that processes the two-dimensional image and the depth information,
     wherein the at least one processor:
     detects, on the two-dimensional image, a position of a line segment image that is an image approximating a line segment;
     specifies, from the depth information, a depth corresponding to the position of the line segment image;
     calculates a height and an inclination of the line segment image in a three-dimensional space based on the position of the line segment image and the depth corresponding to the position; and
     detects a route candidate in the three-dimensional space based on the height and the inclination.
  2.  The route detection device according to claim 1, wherein the at least one processor,
     when acquiring the two-dimensional image or the depth information, selects from among a plurality of data or parameters, and detects the route candidate from the two-dimensional image or the depth information obtained as a result of the selection.
  3.  The route detection device according to claim 2, wherein the at least one processor
     changes a resolution of the two-dimensional image and detects the route candidate from the two-dimensional image whose resolution has been changed.
  4.  The route detection device according to claim 2, wherein the acquisition device acquires the two-dimensional image as a color image including a luminance for each of a plurality of colors, and
     the at least one processor changes a color extracted from the two-dimensional image and detects the route candidate from the luminance of each pixel of the extracted color.
  5.  The route detection device according to any one of claims 1 to 4, wherein the at least one processor
     generates a grayscale image by grayscale-converting the two-dimensional image, and detects the position of the line segment image by performing edge detection on the grayscale image.
  6.  The route detection device according to claim 5, wherein the at least one processor
     detects the position of the line segment image by performing a Hough transform after the edge detection on the grayscale image.
  7.  The route detection device according to any one of claims 1 to 6, wherein the at least one processor
     detects the route candidate in the three-dimensional space based on a result of comparing the height with a first threshold and a result of comparing the inclination with a second threshold.
  8.  The route detection device according to any one of claims 1 to 7, further comprising a drive mechanism enabling autonomous movement,
     wherein the at least one processor controls autonomous movement by the drive mechanism based on the detected route candidate.
PCT/JP2021/019562 2020-05-25 2021-05-24 Route detection device WO2021241482A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2022527018A JPWO2021241482A1 (en) 2020-05-25 2021-05-24

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2020090551 2020-05-25
JP2020-090551 2020-05-25

Publications (1)

Publication Number Publication Date
WO2021241482A1 true WO2021241482A1 (en) 2021-12-02

Family

ID=78744371

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/019562 WO2021241482A1 (en) 2020-05-25 2021-05-24 Route detection device

Country Status (2)

Country Link
JP (1) JPWO2021241482A1 (en)
WO (1) WO2021241482A1 (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007265038A (en) * 2006-03-28 2007-10-11 Pasuko:Kk Road image analysis device and road image analysis method
JP2013123221A (en) * 2011-12-09 2013-06-20 Ricoh Co Ltd Method and device for detecting road separator


Also Published As

Publication number Publication date
JPWO2021241482A1 (en) 2021-12-02

Similar Documents

Publication Publication Date Title
US11210806B1 (en) Using satellite imagery to enhance a 3D surface model of a real world cityscape
CN111486855B (en) Indoor two-dimensional semantic grid map construction method with object navigation points
Acharya et al. BIM-PoseNet: Indoor camera localisation using a 3D indoor model and deep learning from synthetic images
US10540777B2 (en) Object recognition device and object recognition system
Haala et al. Extraction of buildings and trees in urban environments
US9117281B2 (en) Surface segmentation from RGB and depth images
Zhang et al. Visual-lidar odometry and mapping: Low-drift, robust, and fast
US10254845B2 (en) Hand gesture recognition for cursor control
JP6560480B2 (en) Image processing system, image processing method, and program
US10477178B2 (en) High-speed and tunable scene reconstruction systems and methods using stereo imagery
Scaramuzza et al. Extrinsic self calibration of a camera and a 3d laser range finder from natural scenes
US9207069B2 (en) Device for generating a three-dimensional model based on point cloud data
US9641755B2 (en) Reimaging based on depthmap information
KR101650799B1 (en) Method for the real-time-capable, computer-assisted analysis of an image sequence containing a variable pose
US20130121564A1 (en) Point cloud data processing device, point cloud data processing system, point cloud data processing method, and point cloud data processing program
US6205242B1 (en) Image monitor apparatus and a method
CN112771539A (en) Using three-dimensional data predicted from two-dimensional images using neural networks for 3D modeling applications
JP2006302284A (en) System and method for transforming 2d image domain data into 3d high density range map
Yue et al. Fast 3D modeling in complex environments using a single Kinect sensor
Park et al. Comparison of plane extraction performance using laser scanner and Kinect
EP2372652B1 (en) Method for estimating a plane in a range image and range image camera
TW202230290A (en) Map construction apparatus and method
US20220148200A1 (en) Estimating the movement of an image position
Ramakrishnan et al. Shadow compensation for outdoor perception
WO2021241482A1 (en) Route detection device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21811908

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2022527018

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21811908

Country of ref document: EP

Kind code of ref document: A1