WO2014064990A1 - Plane detection device, autonomous locomotion device provided with plane detection device, method for detecting road level difference, device for detecting road level difference, and vehicle provided with device for detecting road level difference


Info

Publication number
WO2014064990A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
plane
detection
road surface
height
Application number
PCT/JP2013/071855
Other languages
French (fr)
Japanese (ja)
Inventor
透 花岡
松尾 順向
光平 松尾
岡田 和久
Original Assignee
シャープ株式会社 (Sharp Corporation)
Priority date
Filing date
Publication date
Priority claimed from JP2012235950A (JP6030405B2)
Priority claimed from JP2012238567A (JP2014089548A)
Priority claimed from JP2012238566A (JP6072508B2)
Application filed by シャープ株式会社 (Sharp Corporation)
Publication of WO2014064990A1

Classifications

    • G01B 11/25: Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines or moiré fringes, on the object
    • G01B 11/245: Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures using a plurality of fixed, simultaneously operating transducers
    • G01S 17/06: Systems using the reflection of electromagnetic waves other than radio waves, determining position data of a target
    • G01S 17/48: Active triangulation systems, i.e. using the transmission and reflection of electromagnetic waves other than radio waves
    • G01S 17/87: Combinations of systems using electromagnetic waves other than radio waves
    • G01S 17/89: Lidar systems specially adapted for mapping or imaging
    • G01S 7/4808: Evaluating distance, position or velocity data
    • G05D 1/0246: Control of position or course in two dimensions, specially adapted to land vehicles, using optical position detecting means, using a video camera in combination with image processing means
    • G06T 7/12: Image analysis; edge-based segmentation

Definitions

  • The present invention relates to a flat surface detection device, an autonomous mobile device including the flat surface detection device, a road surface step detection method, a road surface step detection device, and a vehicle including the road surface step detection device.
  • Infrared or ultrasonic proximity sensors have been widely used for detecting such obstacles and steps.
  • An infrared or ultrasonic proximity sensor can determine the presence or absence of an obstacle ahead, but cannot obtain its detailed position or shape. When attached to a robot, it can therefore detect obstacles and steps in the immediate vicinity, but it cannot be used for applications that must find and avoid obstacles and steps over a wide range ahead in the direction of travel. For this reason, distance sensors such as the laser range finder (LRF) have come into use.
  • FIG. 25 is a diagram of an autonomous mobile device 100 including a laser range finder 101 in the prior art.
  • (a) in FIG. 25 is an external view of the prior-art autonomous mobile device 100 including the laser range finder 101, and (b) in FIG. 25 shows an upper obstacle 102, a lower obstacle 103, and a step 104 that the autonomous mobile device 100 cannot detect.
  • The autonomous mobile device 100 is equipped with the laser range finder 101 at a height HLRF.
  • The laser range finder 101 performs an angular scan in the horizontal direction (in a plane perpendicular to the y axis in the drawing) at the height HLRF, and detects obstacles and steps.
  • Although the laser range finder 101 has high measurement accuracy, it has the drawback that it cannot detect obstacles and steps at heights other than the measurement height.
  • In the situation of (b) in FIG. 25, the laser range finder 101 cannot detect the obstacle 102, the obstacle 103, or the step 104. To detect them, a large number of infrared proximity sensors, ultrasonic proximity sensors, and the like must be arranged in addition to the laser range finder 101, which complicates the structure of the apparatus.
  • Meanwhile, methods have been put into practical use in which an object scene outside a vehicle is imaged by a camera mounted on the vehicle, the captured image is processed to obtain the distance from the vehicle to an object, the risk of a collision with a vehicle ahead or a guardrail is predicted, and the vehicle is controlled accordingly, for example by applying the brakes.
  • Distance measurement technology using such images is roughly divided into techniques that estimate the distance to a target object from a monocular image using its relationship with the camera position, and techniques that calculate the distance from stereo images captured by multiple cameras based on the principle of triangulation.
  • The technique that obtains distance from stereo images by triangulation derives the distance from the relative shift of the same object's position in the left and right images, and can therefore obtain an accurate distance.
  • Patent Document 1 discloses an obstacle detection apparatus comprising an image storage unit that stores images input from a plurality of cameras, a feature extraction unit that extracts a plurality of white lines present on a road surface, a parameter calculation unit that obtains, from the extracted white lines, a relational expression holding between the projection positions of an arbitrary point on the road surface in the respective images, and a detection unit that detects objects having height above the road surface using that relational expression.
  • With the obstacle detection device of Patent Document 1, even when the slope of the road surface changes, the road surface is recognized from the movement of the two white lines, and obstacles present on the road surface can be detected at high speed and with high accuracy.
  • In Patent Document 2, a stereo camera is used to detect the road surface from the Hough transform result of a differential image of the left and right images, and other vehicles and pedestrians are detected using the road surface as a clue.
  • There is also a technique in which plane information is extracted by a calculation method called RANSAC (RANdom SAmple Consensus) from distance image data obtained by a TOF (Time Of Flight) distance image sensor, and pedestrians are detected using that information as a clue.
  • The flat surface detection device of the present invention detects a specific detection target plane from distance image data of a subject including that plane, and comprises projection image generation means for generating projection image data in which the points belonging to the specific detection target plane are linearly distributed, straight line detection means for detecting that linear distribution as a straight line from the projection image data, and plane parameter calculation means for calculating, based on the detection result of the straight line detection means, plane parameters including information on the inclination of the specific detection target plane.
  • The autonomous mobile device of the present invention includes the plane detection device, distance image generation means for generating the distance image data, and traveling means, and is characterized by detecting the plane serving as its travel path using the plane detection device.
  • In the first road surface step detection method of the present invention, at least a first image and a second image obtained by photographing a road surface in stereo are projected onto XY plane coordinates, a detection area and a comparison area are set at specific coordinates (X, Y) on the plane coordinates, and the height of the detection area from the road surface is detected by comparing the image of the detection area with the image of the comparison area.
  • The first road surface step detecting device of the present invention includes at least a first camera and a second camera that capture a stereo image of a road surface, and detects height using a first image captured by the first camera and a second image captured by the second camera.
  • In the second road surface step detection method of the present invention, at least a first image and a second image obtained by photographing a road surface in stereo are projected onto XY plane coordinates, the parallax that the image would have at the road surface position is calculated for each row of image data in the specific Y-axis direction, a third image is generated by correcting the second image by shifting it by that parallax for each Y value, and the height from the road surface is detected by comparing the first image and the third image for each step detection region.
  • The second road surface step detecting device includes a stereo camera that captures at least a first image and a second image of a road surface in stereo; a correction unit that projects the first and second images onto XY plane coordinates, calculates for each row of image data in the specific Y-axis direction the parallax the image would have at the road surface position, and generates a third image corrected by shifting by that parallax for each Y value; and a height detection unit that detects the height from the road surface by comparing the first image and the third image for each step detection region.
  • A vehicle according to the present invention includes the first or second road surface step detecting device.
  • (a) is an external view of the cleaning robot using the plane detection device according to Embodiment 1 of the present invention, and (b) is a sectional view of that cleaning robot.
  • A figure showing the attachment positions of the distance image sensors provided in the cleaning robot according to Embodiment 1 of the present invention and their measurement ranges.
  • (a) is an RGB image captured by the long-distance distance image sensor provided in the cleaning robot according to Embodiment 1 of the present invention, (b) is a distance image captured by the same sensor, and (c) is a figure showing the three-dimensional coordinate system based on the cleaning robot.
  • (a) is a projection image onto the yz plane generated from a distance image captured by the long-distance distance image sensor provided in the cleaning robot according to Embodiment 1 of the present invention, (b) is the bottom image of (a), (c) is a projection image onto the yz plane generated from a distance image captured by the short-distance distance image sensor, and (d) is the bottom image of (c).
  • (a) is the projection image onto the yz plane when a step lower than the floor surface exists, and (b) is the bottom image of (a).
  • A flowchart showing the procedure processed by the arithmetic device of the cleaning robot according to Embodiment 2 of the present invention.
  • (a) is a figure showing an example of the three-dimensional coordinate data generated in Embodiment 2, (b) is a projection image obtained by projecting the three-dimensional coordinate data of (a) onto the yz plane, (c) is a projection image obtained by projecting the three-dimensional coordinate data of (a) onto the xy plane, and (d) is another diagram of (a).
  • A layout diagram of the stereo camera provided in the road surface step detection device according to Embodiment 3, and an image captured by that stereo camera.
  • A flowchart of the height detection processing in the calculation unit provided in the road surface step detection device of Embodiment 4.
  • (a) is an external view of an autonomous mobile device provided with a laser range finder in the prior art, and (b) shows an upper obstacle, a lower obstacle, and a step that the prior-art autonomous mobile device provided with the laser range finder cannot detect.
  • In these prior techniques, a stereo camera is essential, and easy-to-understand clues for detecting a plane, such as white lines on the road or the edge of the road surface, are required.
  • The same issue arises when a mobile robot is provided with the flat surface detection apparatus according to Patent Document 2.
  • The first embodiment solves the above-described problems and provides a plane detection device that can detect a plane included in a distance image at high speed and more reliably.
  • A cleaning robot provided with the flat surface detection device according to the present invention is described below as an example, with reference to FIGS. 1 to 10.
  • FIG. 1 is a diagram showing the cleaning robot 1 according to the first embodiment.
  • (a) in FIG. 1 is an external view of the cleaning robot 1, and (b) in FIG. 1 is a cross-sectional view illustrating the internal configuration of the housing 11 of the cleaning robot 1.
  • The cleaning robot 1 according to the first embodiment is an autonomous traveling type cleaning robot that performs cleaning while autonomously traveling on a floor surface.
  • The cleaning robot 1 incorporates the essential configuration of a plane detection device that detects a plane from a distance image acquired by a distance image sensor.
  • The cleaning robot 1 can detect a plane with the plane detection device, identify obstacles and steps in the traveling direction, and travel while avoiding them.
  • The flat surface detection device may include the distance image sensor as a component; alternatively, as in the first embodiment described below, the cleaning robot 1 may be provided with the distance image sensor and the flat surface detection device separately, with the plane detection device acquiring the distance image from the distance image sensor and detecting the plane.
  • In the following, an aspect is described in which the distance image sensor is not a constituent element of the flat surface detection device but is provided as an external configuration of it.
  • As shown in (a) of FIG. 1, the cleaning robot 1 includes a housing 11 provided with a window 21, drive wheels 2 (traveling means), and a protection member 12. Various control systems and drive systems, described later, are mounted inside the housing 11. By driving and controlling the drive wheels 2, the cleaning robot 1 travels or stops on the traveling road surface, and cleans the road surface while traveling.
  • As shown in (b) of FIG. 1, the cleaning robot 1 has, mounted inside the housing 11 provided with the window 21, a battery 4, a waste liquid recovery unit 45, a cleaning liquid discharge unit 46, a motor 10, a distance image sensor 20 (distance image generating means), and an arithmetic device 30. On the outside of the housing 11, more specifically between the housing 11 and the traveling road surface, the cleaning robot 1 is further provided with the cleaning brush 9 and the protection member 12, together with the drive wheels 2 described above.
  • The characteristic configuration of the first embodiment resides in the flat surface detection device 60 provided as part of the arithmetic device 30. The characteristic configuration is therefore described in detail below, while the remaining configuration can be realized by conventionally known means and its detailed description is omitted.
  • In (b) of FIG. 1, the cleaning robot 1 can move forward toward the left of the drawing, move backward toward the right of the drawing, and turn toward the back or the front of the drawing.
  • In the following, movement toward the left of the drawing, which is the main traveling direction, may be referred to simply as the traveling direction.
  • The drive wheels 2 are arranged on the left and right of the bottom of the cleaning robot 1 and are controlled by a drive motor (not shown) to move the cleaning robot 1.
  • The follower wheel 3 is rotatably attached to the bottom of the cleaning robot 1.
  • The drive wheels 2 and the follower wheel 3 enable forward movement, backward movement, turning, and stopping, and the cleaning robot 1 can travel freely by combining these.
  • The battery 4 supplies power to the cleaning robot 1.
  • The battery 4 is charged via a well-known step-down circuit and rectifying/smoothing circuit, and outputs a predetermined voltage.
  • The cleaning liquid discharge unit 46 includes a cleaning liquid tank 5 and a cleaning liquid discharge part 6.
  • The cleaning liquid tank 5 stores the cleaning liquid, and the cleaning liquid discharge part 6, connected to the cleaning liquid tank 5 by a pipe, discharges the stored cleaning liquid.
  • The waste liquid recovery unit 45 has a waste liquid tank 7 and a suction port 8.
  • The waste liquid tank 7 stores the waste liquid (including dust and dirt) sucked into the cleaning robot 1.
  • The cleaning robot 1 sucks waste liquid in through the suction port 8 and discharges it into the waste liquid tank 7, which is connected to the suction port 8 by a pipe.
  • The cleaning brush 9 is installed in the vicinity of the suction port 8 and cleans using the cleaning liquid discharged from the cleaning liquid discharge unit 6.
  • The cleaning brush 9 is driven by a motor 10.
  • The protection member 12 is installed on the front side in the traveling direction at the bottom of the cleaning robot 1 to prevent the cleaning liquid from splashing and foreign matter from being drawn in.
  • The distance image sensor 20 comprises a distance image sensor 20a for short distances and a distance image sensor 20b for long distances. Matters common to both may be described below simply in terms of the distance image sensor 20.
  • The distance image sensor 20 is an infrared-projection type distance image sensor, comprising a projection optical system including an infrared projection element and an imaging optical system including an infrared image sensor. By projecting infrared light with a predetermined pattern onto the surroundings and photographing the light reflected from external objects with the image sensor, it can measure the distance to objects within the field of view of the imaging optical system.
  • The short-distance distance image sensor 20a and the long-distance distance image sensor 20b are disposed inside the housing 11; they project infrared light to the outside through the window 21 of the housing 11, and the reflected light enters from the outside through the same window 21.
  • The distance measurement result of the distance image sensor 20 is output as a distance image (depth image) in which the distance to each object within the field of view is expressed as the grayscale value of a pixel. Details of the distance image sensor 20a and the distance image sensor 20b in the present embodiment are described later.
  • The arithmetic device 30 acquires a distance image from the distance image sensor 20 and performs the plane detection processing. Details of the configuration and functions of the arithmetic device 30 are described later.
  • The cleaning robot 1 includes further components in addition to those described above.
  • An operation panel for selecting manual or automatic travel, a travel switch for determining the travel direction during manual travel, and a control switch 50 (FIG. 4) such as an emergency stop switch for stopping operation in an emergency are provided.
  • The form of the cleaning robot 1 is not limited to the type that cleans using a cleaning liquid as described above; it may also be a robot in the manner of a so-called household vacuum cleaner provided with a fan, a dust collection chamber, a suction port, and the like.
  • The autonomous mobile device, as exemplified by the cleaning robot 1 of the first embodiment, includes the distance image sensor 20 and the plane detection device 60 of the arithmetic device 30 described later. In the following, details of the distance image sensor 20 are described first, followed by details of the arithmetic device 30.
  • FIG. 2 is a diagram illustrating the attachment positions of the distance image sensors 20 and their measurement ranges in the cleaning robot 1 according to the first embodiment of the present invention.
  • The distance image sensor 20 is attached to the front surface of the cleaning robot 1 in the traveling direction, at a height vertically separated from the floor surface (traveling road surface) to be cleaned. More specifically, while the optical axis of the distance image sensor 20 runs along the front-rear direction of travel, it is inclined downward; that is, the distance image sensor 20 is attached facing obliquely downward toward the floor surface.
  • FIG. 3A is an RGB image captured by the long-distance distance image sensor 20b.
  • FIG. 3B is a distance image captured by the long-distance distance image sensor 20b.
  • FIG. 3C is an RGB image captured by the short-distance distance image sensor 20a.
  • FIG. 3D is a distance image captured by the short-distance distance image sensor 20a.
  • The scenes shown as RGB images in FIGS. 3A and 3C are displayed as distance images in FIGS. 3B and 3D, where near distances appear bright and far distances dark.
  • Since the distance image sensor 20a and the distance image sensor 20b differ in attachment position and in angle with respect to the horizontal plane, the apparent angle of the floor, which is a plane, differs between their images.
  • If the optical axis of the distance image sensor 20 were arranged parallel to the floor surface, the vicinity of the main body of the cleaning robot 1 would fall outside the sensor's angle of view, leaving a wide nearby region unmeasurable.
  • By inclining the sensor obliquely downward, the out-of-view area in the vicinity is reduced, making it possible to measure down to a relatively short distance from the main body of the cleaning robot 1.
  • The visual field range of the short-distance distance image sensor 20a projected onto the floor surface is the trapezoidal area A0B0C0D0 shown in FIG. 2.
  • The visual field range of the long-distance distance image sensor 20b is the trapezoidal area A1B1C1D1 in FIG. 2.
  • The arrangement and number of the distance image sensors 20 are not limited to the configuration of the first embodiment; only one distance image sensor 20 may be mounted, or a plurality of distance image sensors 20 may be arranged in the horizontal direction.
  • Since the short-distance distance image sensor 20a and the long-distance distance image sensor 20b use infrared light sources of the same wavelength, a slight gap is provided between the visual field regions A0B0C0D0 and A1B1C1D1, as shown in FIG. 2, to prevent mutual interference. If interference can be prevented, for example by using light sources of different wavelengths, the short-distance and long-distance distance image sensors can instead be installed so that no gap is left between the two visual field regions.
  • FIG. 4 is a block diagram illustrating the configuration related to the travel function of the cleaning robot 1 of the first embodiment.
  • In addition to the distance image sensor 20 and the arithmetic device 30 described above, the cleaning robot 1 includes a travel control unit 41, a cleaning control unit 42, a map information memory unit 43, a status display unit 44, a rotary encoder 47, a drive wheel motor 48, a gyro sensor 49, and a control switch 50.
  • The arithmetic device 30 acquires a distance image from the distance image sensor 20 and extracts the positions, sizes, and shapes of obstacles and steps from the acquired distance image.
  • The extracted obstacle and step information (hereinafter, obstacle/step data) is output to the travel control unit 41. Details of the arithmetic device 30 are described later.
  • The travel control unit 41 grasps the movement distance, current position, and orientation of the cleaning robot 1 based on information from the rotary encoder 47 attached to the drive wheels 2 and from the gyro sensor 49. Based on the map information stored in advance in the map information memory unit 43 and the obstacle/step data output from the arithmetic device 30, it determines a travel route that avoids obstacles and steps and controls the drive wheel motor 48. When a signal is received from the control switch 50, it performs the necessary control, such as an emergency stop or a change of traveling direction. Information on these controls is displayed on the status display unit 44 and updated in real time.
  • The cleaning control unit 42 receives commands from the travel control unit 41 and controls the parts related to cleaning, such as starting and stopping the cleaning brush 9, the waste liquid recovery unit 45, and the cleaning liquid discharge unit 46.
  • The map information memory unit 43 stores information such as obstacles and steps within the range to be cleaned by the cleaning robot 1.
  • The information stored in the map information memory unit 43 is updated by the travel control unit 41.
  • The status display unit 44 displays information on the state of the cleaning robot 1, for example whether travel is manual or automatic, or an emergency stop indication.
  • The rotary encoder 47 is attached to the drive wheels 2 and outputs the rotational displacement to the travel control unit 41 as a digital signal; from this output, the travel control unit 41 can determine the distance traveled.
  • The gyro sensor 49 detects changes in orientation and outputs them to the travel control unit 41; from this output, the travel control unit 41 can determine the traveling direction.
  • The arithmetic device 30 includes the flat surface detection device 60, an obstacle/step detection unit 35, and a data integration unit 36.
  • The plane detection device 60 includes a three-dimensional coordinate calculation unit 31 (three-dimensional coordinate calculation means, second three-dimensional coordinate calculation means), a projection image generation unit 32 (projection image generation means, second projection image generation means), a straight line detection unit 33 (straight line detection means, second straight line detection means), and a plane detection unit 34 (plane parameter calculation means, second plane parameter calculation means).
  • The three-dimensional coordinate calculation unit 31 acquires a distance image from the distance image sensor 20 and converts the acquired distance image into three-dimensional coordinate data.
  • The definition of the coordinate system of the three-dimensional coordinate data is explained with reference to FIG. 5.
  • (a) in FIG. 5 shows the three-dimensional coordinate system based on the short-distance distance image sensor 20a, (b) in FIG. 5 shows the three-dimensional coordinate system based on the long-distance distance image sensor 20b, and (c) in FIG. 5 shows the three-dimensional coordinate system based on the cleaning robot 1.
  • In the coordinate system based on the distance image sensor 20, the vertical direction is the y axis (upward positive) and the front-rear direction, that is, the optical axis direction of the distance image sensor 20, is the z axis (depth positive). Since the distance image sensor 20a and the distance image sensor 20b differ in attachment position and angle, their coordinate systems also differ, as shown in (a) and (b) of FIG. 5. Moreover, a distance expressed in a sensor-based coordinate system differs from the distance measured from the main body of the cleaning robot 1 along the floor surface. To obtain an accurate distance from the cleaning robot 1 to an object, it is therefore necessary to convert to the coordinate system based on the cleaning robot 1 (floor surface reference) and to integrate the data of the two distance image sensors.
  • XYZ coordinates, the coordinate system based on the cleaning robot 1, are defined separately from the xyz coordinates based on the distance image sensor 20: the traveling direction is the Z axis, the normal direction of the floor is the Y axis, and the direction perpendicular to both the Z and Y axes is the X axis (rightward positive).
  • The x-axis direction based on the distance image sensor 20 and the X-axis direction based on the cleaning robot 1 are assumed to be substantially the same. That is, the distance image sensor 20 is not mounted rotated about the z axis, and the inclination between the floor surface and the distance image sensor 20 consists only of the inclination θ in the direction of rotation about the x axis; or, even if there is some rotation about the z axis, it is sufficiently smaller than θ to be ignored.
  • The z coordinate is the distance value itself contained in the distance image.
  • The x and y coordinates can be calculated from z based on the principle of triangulation, provided the focal length f of the optical system of the distance image sensor 20, the pixel pitch p, and the pixel shift amount c between the optical axis and the center of the image sensor are known.
  • The distance image sensor is calibrated in advance to obtain these parameters.
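  • As a minimal sketch of this conversion under a simple pinhole model (the helper name depth_to_xyz and the sign conventions are illustrative assumptions, not taken from the patent):

```python
import numpy as np

def depth_to_xyz(depth_mm, f_mm, p_mm, cx, cy):
    """Convert an HxW distance image [mm] into (N, 3) xyz points [mm].

    f_mm: focal length, p_mm: pixel pitch, (cx, cy): pixel shift between
    the optical axis and the image-sensor center, all from calibration.
    """
    h, w = depth_mm.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_mm.astype(np.float64)   # the z coordinate is the measured distance itself
    x = (u - cx) * p_mm * z / f_mm    # triangulation, horizontal direction
    y = -(v - cy) * p_mm * z / f_mm   # image rows grow downward, y grows upward
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]         # drop invalid (zero-distance) pixels
```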
  • FIG. 6 is a diagram illustrating an example of three-dimensional coordinate data in the first embodiment.
  • In FIG. 6, the left-right direction is the x axis, the up-down direction is the y axis, and the front-rear direction is the z axis.
  • The three-dimensional coordinate calculation unit 31 can also convert a distance image into three-dimensional coordinate data in various other coordinate systems, for example one rotated about at least one of the x, y, and z axes, or one whose origin has been shifted.
  • The projection image generation unit 32 generates a projection image by projecting the three-dimensional coordinate data onto a two-dimensional plane.
  • The projection image onto the xy plane is obtained by extracting the x and y coordinates of all points; the projection image onto the yz plane by extracting the y and z coordinates; and the projection image onto the zx plane by extracting the z and x coordinates.
  • For example, the projection range of the y axis is −1200 mm to +1200 mm (offset 1200 mm), the projection range of the z axis is 0 mm to +3200 mm (offset 0 mm), and the projection scale is 1/10 [pixel/mm].
  • The size of the projection image is then 240 pixels (y) by 320 pixels (z).
  • This projection image size can be changed freely via the scale, independently of the image size of the original distance image.
  • A finer projection resolution increases the accuracy of the computation but lengthens the computation time accordingly; the image size actually used is determined by this trade-off.
  • First, the pixel values of all points in the projection image onto the yz plane are initialized to “0”.
  • Suppose a point B with (x, y, z) = (−400 mm, 900 mm, 1500 mm) exists as another data point; its y and z coordinates are converted into coordinates on the yz plane and the corresponding pixel value is set to “1”.
  • Suppose a point C with (x, y, z) = (−200 mm, −303 mm, 1998 mm) exists as further data; its projected coordinate is the same as that of point A, and since the pixel value there is already “1”, nothing is done.
  • In this way, for all points in the three-dimensional coordinate data, the y and z coordinate values are extracted and converted into coordinates on the yz plane, and the corresponding pixel values are set to “1”.
  • The result is a yz-plane projection image in the form of a binary image in which only locations where points of the three-dimensional coordinate data exist are “1”.
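  • The projection step can be sketched as follows, using the ranges and scale given above (y: −1200 to +1200 mm with offset 1200 mm, z: 0 to +3200 mm, 1/10 pixel/mm, i.e. a 240 × 320 binary image); the row orientation is an assumption:

```python
import numpy as np

Y_OFFSET_MM, SCALE = 1200.0, 0.1   # offset [mm] and scale [pixel/mm] from the text

def project_yz(points_mm):
    """(N, 3) xyz points [mm] -> 240x320 binary yz projection image."""
    img = np.zeros((240, 320), dtype=np.uint8)  # all pixels initialized to 0
    rows = ((Y_OFFSET_MM - points_mm[:, 1]) * SCALE).astype(int)  # +y is up, row 0 at top
    cols = (points_mm[:, 2] * SCALE).astype(int)                  # z offset is 0 mm
    ok = (rows >= 0) & (rows < 240) & (cols >= 0) & (cols < 320)
    img[rows[ok], cols[ok]] = 1   # points mapping to the same pixel simply stay 1
    return img
```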
  • The straight line detection unit 33 detects a straight line from the projection image generated by the projection image generation unit 32. As an example, consider detecting the straight line indicating the floor surface in the projection image shown in FIG. 7.
  • FIG. 7 is the projection image obtained by projecting the three-dimensional coordinate data of FIG. 6 onto the yz plane.
  • In the projection image onto the yz plane, the y and z coordinates are extracted from all pixels.
  • The points representing the floor surface are the lowest ones, that is, those with the smallest y coordinate (hereinafter, bottom points).
  • The plane representing the floor appears as a straight line in the projection image.
  • Since the distance image sensor 20 is not mounted rotated about the z axis (or any such rotation is sufficiently smaller than the inclination θ about the x axis), the three-dimensional coordinate data representing the floor surface, when projected onto the yz plane, is distributed along nearly a single straight line.
  • The straight line detection unit 33 scans each pixel column of the projection image from the bottom to the top, keeps only the first point along the scan direction whose value is “1”, and deletes the rest, obtaining a bottom image.
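  • A sketch of this bottom-image step (with row 0 assumed to be the top of the image, so the largest row index in a column is the lowest point):

```python
import numpy as np

def bottom_image(proj):
    """Keep only the lowest occupied pixel of each column of a binary image."""
    bottom = np.zeros_like(proj)
    for col in range(proj.shape[1]):
        rows = np.nonzero(proj[:, col])[0]
        if rows.size:                      # empty columns stay empty
            bottom[rows.max(), col] = 1    # first "1" seen when scanning bottom-up
    return bottom
```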
  • The straight line detection unit 33 then performs a fitting process on the obtained bottom image to detect a straight line, obtaining parameters such as its slope and intercept. Depending on the result, a plurality of line candidates may be obtained instead of one; in that case, the most likely line is selected based on a predetermined criterion.
  • As the straight line detection method (fitting process), arbitrary techniques can be applied, such as the Hough transform, the probabilistic Hough transform (an improvement of it), the simple least squares method, or the RANSAC method.
  • With fitting methods, the line with the smallest error (residual) can be selected as the most likely line.
  • With the Hough transform, the line supported by the largest number of points can be selected as the most likely line.
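  • A sketch of the fitting step using OpenCV's probabilistic Hough transform, one of the methods named above (the thresholds are illustrative, and the longest segment is used here as a proxy for the best-supported line):

```python
import cv2
import numpy as np

def detect_floor_line(bottom):
    """Fit a line to a binary bottom image; return (slope, intercept) in pixels."""
    segs = cv2.HoughLinesP((bottom * 255).astype(np.uint8), rho=1,
                           theta=np.pi / 180, threshold=30,
                           minLineLength=40, maxLineGap=10)
    if segs is None:
        return None                        # no line candidate found
    x1, y1, x2, y2 = max(segs[:, 0],
                         key=lambda s: np.hypot(s[2] - s[0], s[3] - s[1]))
    m = (y2 - y1) / (x2 - x1 + 1e-9)       # slope of  row = m * col + b
    return m, y1 - m * x1                  # convert back to mm with the projection scale
```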
  • The height and angle of the plane to be detected by the plane detection unit 34 can be estimated from the height and angle at which the distance image sensor 20 is attached. The floor surface can therefore be detected more reliably by setting an allowable range for the detected height and angle in advance and checking whether the plane detected by the plane detection unit 34 falls within it.
  • The plane detection unit 34 holds the detected floor surface as floor plane information.
  • The floor plane information is updated whenever the plane detection unit 34 detects the floor surface. This makes it possible to follow fluctuations of the floor plane that occur as the cleaning robot 1 moves and to always keep track of the floor plane. In addition, even if the floor surface temporarily cannot be detected, for example because a person crosses in front of the distance image sensor, using the previously detected floor plane information prevents the floor plane detection processing from failing.
  • The obstacle/step detection unit 35 converts the three-dimensional coordinate data in the xyz coordinate system into three-dimensional coordinate data in the XYZ coordinate system. Then, in the XYZ coordinate system, it calculates the distance between each point and the plane to determine whether the point is higher or lower than the detected plane.
  • The data integration unit 36 integrates the obstacles and steps detected from a plurality of distance images into one set of obstacle/step data.
  • When obstacles and steps are detected from the distance images acquired from the distance image sensor 20a and the distance image sensor 20b, the information on those obstacles and steps is integrated to create a single set of obstacle/step data.
  • The data can also be integrated so that the data of one of the sensors (for example, the long-distance sensor 20b) takes priority.
  • The format of the obstacle/step data can be converted into any format that the travel control unit 41 can easily process later.
  • The coordinate system of the data can be output as the robot-based XYZ coordinate system or converted into a polar coordinate system (R-θ coordinate system).
  • Methods such as thinning out or interpolating the data, or extracting only the obstacle and step data closest to the main body of the cleaning robot 1, are also conceivable.
  • FIG. 8 is a flowchart illustrating a procedure performed by the arithmetic device 30 of the cleaning robot 1 according to the first embodiment of the present invention.
  • First, the three-dimensional coordinate calculation unit 31 acquires the distance images generated by the distance image sensors 20 (step S101); the short-distance and long-distance distance images are acquired from the respective distance image sensors 20.
  • The three-dimensional coordinate calculation unit 31 converts the acquired distance images into three-dimensional coordinate data in the xyz coordinate system (step S102). From the converted three-dimensional coordinate data, the projection image generation unit 32 generates a projection image onto the yz plane (step S103).
  • As described above, the distance image sensor 20 is not mounted with an inclination about the z axis, so any inclination between the floor surface and the distance image sensor 20 in the direction of rotation about the z axis is sufficiently smaller than the inclination about the x axis. Accordingly, the plane representing the floor surface in the three-dimensional coordinate data becomes a point group on a straight line in the projection image onto the yz plane.
  • An actual projection image is shown in FIG. 9.
  • (a) in FIG. 9 is a projection image onto the yz plane generated from a distance image captured by the long-distance distance image sensor 20b according to Embodiment 1 of the present invention, (b) in FIG. 9 is the bottom image of (a), (c) in FIG. 9 is a projection image onto the yz plane generated from a distance image captured by the short-distance distance image sensor 20a, and (d) in FIG. 9 is the bottom image of (c).
  • In these images, the point group 61 and the point group 62 representing the floor surface lie on straight lines, as described above.
  • The straight line detection unit 33 generates a bottom image from each projection image (step S104). As shown in (b) and (d) of FIG. 9, the straight lines of the bottom images match the point groups 61 and 62 representing the floor surface. From the bottom image, the straight line detection unit 33 detects a straight line (step S105). The plane detection unit 34 then detects the corresponding plane in the three-dimensional coordinate data from the detected straight line, and calculates the angle and height of the plane (step S106).
  • The plane angle is the angle (tilt angle) with respect to the z axis, and the plane height is the separation distance between the floor surface and the distance image sensor 20.
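  • In metric sensor coordinates, a fitted floor line y = a·z + b yields these two plane parameters directly; a sketch (the function name and the prior conversion of the pixel-space line into metric (a, b) are assumptions):

```python
import math

def plane_params(a, b):
    """Line y = a*z + b [mm] -> (tilt angle w.r.t. z axis [deg], sensor height [mm])."""
    angle_deg = math.degrees(math.atan(a))        # inclination with respect to the z axis
    height_mm = abs(b) / math.sqrt(1.0 + a * a)   # perpendicular distance origin -> plane
    return angle_deg, height_mm

# For the mounting described later in the text, the true floor should give
# roughly (22.5 deg, 710 mm).
```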
  • The plane detection unit 34 determines whether the calculated plane angle and height are within the preset angle and height tolerances (step S107).
  • If it is determined in step S107 that the angle and height are within the allowable range (step S107: Yes), the detected plane is the floor surface, and the plane detection unit 34 updates the floor plane information (step S108). If it is determined that they are not within the allowable range (step S107: No), the detected plane is not the floor surface, and the floor plane information is not updated (step S109).
  • Next, the obstacle/step detection unit 35 converts the three-dimensional coordinate data from the xyz coordinate system to the XYZ coordinate system (step S110).
  • The obstacle/step detection unit 35 calculates, from the converted XYZ three-dimensional coordinate data, the distance between each point and the plane, and determines whether the point is higher or lower than the detected plane, thereby detecting obstacles and steps (step S111). A threshold t is used for this determination: if the distance from the floor plane is greater than t, the point belongs to an obstacle or step higher than the floor; if it is smaller than −t, it belongs to a step lower than the floor plane.
  • The threshold t is set in advance in consideration of the unevenness of the floor plane, the measurement error of the distance image sensor, and the like. Every point in the three-dimensional coordinate data is thus classified as belonging to a step, an obstacle, or neither, and a flag F indicating this classification is appended to the coordinates (X, Y, Z) of each point, converting the data into the (X, Y, Z, F) format. The step and obstacle information obtained in this way is passed from the obstacle/step detection unit 35 to the data integration unit 36.
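  • A sketch of this classification (points assumed already converted to the robot-based XYZ system, where the floor plane is Y = 0; the threshold value is illustrative):

```python
import numpy as np

def classify_points(points_XYZ, t=20.0):
    """Append flag F: 1 = obstacle (above floor), -1 = step (below), 0 = other."""
    Y = points_XYZ[:, 1]
    F = np.zeros(len(Y), dtype=np.int8)
    F[Y > t] = 1                    # farther than t above the floor plane
    F[Y < -t] = -1                  # farther than t below the floor plane
    return np.column_stack([points_XYZ, F])   # the (X, Y, Z, F) format
```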
  • The data integration unit 36 integrates the obstacle and step information detected by the obstacle/step detection unit 35 to create the obstacle/step data (step S112). Finally, the data integration unit 36 outputs the obstacle/step data to the travel control unit 41 (step S113).
  • In this way, the arithmetic device 30 creates obstacle/step data from the distance images at high speed and more reliably, and outputs it to the travel control unit 41, so the cleaning robot 1 can move while avoiding obstacles and steps. Furthermore, by performing plane detection independently on the data of the plural distance sensors and integrating the results, a wider range of obstacles and steps can be detected and avoided.
  • By exploiting the condition that the distance image sensor 20 is not mounted rotated about the z axis relative to the floor surface, only the inclinations in the height direction (y-axis direction) and the depth direction (z-axis direction) need to be determined.
  • Since the number of parameters to be determined is limited to two, the plane can be detected faster than when three parameters must be determined, and real-time floor detection is easily realized even on an autonomous mobile device.
  • (Step detection) In the first embodiment described above, it is desirable that no object exist below the floor surface; in practice, however, there may be a step whose surface is lower than the floor. A method for detecting the floor surface in such a case is described below.
  • (a) in FIG. 10 is the projection image onto the yz plane when a step lower than the floor surface exists, and (b) in FIG. 10 is the bottom image of (a) in FIG. 10.
  • In these images, both the point group representing the floor and the point group representing the step form straight lines.
  • An allowable range 64 is therefore set. With the allowable range 64 in place, the straight line detection unit 33 detects the point group 61 representing the floor as the straight line, rather than the point group 63 representing the step.
  • For example, the theoretical floor surface lies, in the distance-image-sensor reference coordinate system, at a distance of 710 [mm] from the origin and at an angle (tilt angle in the depth direction) of 22.5 [deg] to the zx plane.
  • Since the mounting position varies somewhat due to assembly errors and the like, step S106 checks whether the calculated floor height and angle are within ± several millimeters and ± several degrees of these values.
  • If they are, the straight line is detected as the floor surface; if not, another straight line is selected from the plurality of lines detected in step S105 and checked against the range in the same way.
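  • The range check can be sketched as follows, reusing the plane_params() helper sketched earlier (the tolerance values are guesses standing in for the "± several mm / deg" of the text):

```python
def select_floor_line(candidates, h0=710.0, a0=22.5, dh=5.0, da=2.0):
    """Return the first candidate (slope, intercept) whose plane is within tolerance."""
    for a, b in candidates:
        angle, height = plane_params(a, b)
        if abs(height - h0) <= dh and abs(angle - a0) <= da:
            return a, b             # this line is taken to be the floor surface
    return None                     # no candidate within the allowable range
```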
  • Since the floor surface is detected with limits on the height and angle of the detected plane, even if the distance image contains a step lower than the floor, that step is excluded and the floor surface can still be detected.
  • Embodiment 1 was explained under the assumption that the inclination in the direction of rotation about the z axis is sufficiently smaller than the inclination about the x axis and can be ignored.
  • Plane detection in the case where the inclination about the z axis is smaller than the inclination about the x axis but cannot be completely ignored is described below.
  • FIG. 11 is a flowchart illustrating the procedure performed by the arithmetic device 30 of the cleaning robot according to the second embodiment; the description of steps identical to those in the flowchart of FIG. 8 is omitted.
  • (a) in FIG. 12 is a diagram illustrating an example of the three-dimensional coordinate data according to the second embodiment, and the remaining panels of FIG. 12 are projection images of the data of (a), including a projection onto the xy plane.
  • First, as in the process of step S106, the angle θ of the detected plane in the direction of rotation about the x axis and its height are calculated.
  • When the inclination about the z axis cannot be ignored, the point cloud in the projection image onto the yz plane is not perfectly aligned on a straight line but is distributed as a band with some width.
  • Next, xy′z′ coordinates are newly defined by rotating the xyz coordinate system by an angle corresponding to the inclination θ in the direction of rotation about the x axis.
  • In this example, the y′ axis and the z′ axis are obtained by rotating the y axis and the z axis by 22.5 [deg] about the x axis, respectively.
  • The three-dimensional coordinate calculation unit 31 converts the three-dimensional coordinate data in the xyz coordinate system into the newly defined xy′z′ coordinate system (step S121).
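  • A sketch of this conversion (step S121) as a rotation about the x axis; the sign of the rotation depends on the mounting convention and is an assumption here:

```python
import numpy as np

def to_xy_z_prime(points_xyz, theta_deg=22.5):
    """Rotate (N, 3) xyz points about the x axis so z' runs roughly along the floor."""
    t = np.radians(theta_deg)
    R = np.array([[1.0, 0.0, 0.0],
                  [0.0, np.cos(t), -np.sin(t)],
                  [0.0, np.sin(t),  np.cos(t)]])
    return points_xyz @ R.T
```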
  • The projection image generation unit 32 projects the converted three-dimensional coordinate data onto the xy′ plane to generate a projection image (step S122).
  • The straight line detection unit 33 generates a bottom image of the projection image (step S123) and detects a straight line from the bottom image (step S124).
  • From this line, the plane detection unit 34 obtains the angle in the left-right direction (step S125); this angle is combined with the inclination θ about the x axis obtained in step S106, and it is determined whether the combined angle and the height are within the preset ranges (step S107).
  • The subsequent processing is the same as in Embodiment 1.
  • In this way, even when the inclination about the z axis cannot be ignored, the arithmetic device 30 first calculates the inclination θ about the x axis in the xyz coordinate system, then converts the coordinate system to xy′z′ and calculates the left-right angle from the projection image onto the xy′ plane. This two-stage process allows the floor surface to be detected more reliably.
  • If the unrotated data were simply projected onto the xy plane, a projection image like (c) in FIG. 12 would be obtained, in which the point group representing the floor surface is not aligned on a straight line. If, for example, another object were present along the line A-A′ in (c) of FIG. 12 so that the floor is not visible there, extracting a straight line from the set of bottom points could erroneously detect the A-A′ line as the straight line representing the floor.
  • After the rotation, the z′ axis is substantially parallel to the floor surface, so in the projection image onto the xy′ plane the point group representing the floor can be extracted as the bottom points.
  • A flat surface detection device according to one aspect of the present invention (the flat surface detection device 60) detects a specific detection target plane from distance image data of a subject including that plane, and comprises: three-dimensional coordinate calculation means (the three-dimensional coordinate calculation unit 31) for converting the distance image data into three-dimensional coordinate data including a detection target three-dimensional point group representing the specific detection target plane; projection image generation means (the projection image generation unit 32) for generating, by projection onto a two-dimensional plane, projection image data in which the detection target three-dimensional point group is linearly distributed; straight line detection means (the straight line detection unit 33) for detecting that linear distribution as a straight line from the projection image data; and plane parameter calculation means (the plane detection unit 34) for calculating, based on the detection result of the straight line detection means, plane parameters including information on the inclination of the specific detection target plane.
  • With this configuration, distance image data including a plane is converted into three-dimensional coordinate data, the converted data is projected onto a two-dimensional plane, a straight line is detected from the projection image, and the plane parameters are calculated from that line. The distance image therefore requires no special clue for plane detection, and even if it contains much information unrelated to the plane, such as obstacles, the plane can be detected more reliably.
  • the flat surface detection apparatus uses the depth direction of the subject as the z axis in the distance image data, and the x axis and the y axis that are perpendicular to the z axis.
  • the projection image generation means sets the three-dimensional coordinate data on the yz plane. Projected projection image data is generated, and the plane parameter calculation means (plane detection unit 34) calculates the plane parameter including the inclination angle of the specific detection target plane with respect to the z-axis from the projection image data. It is characterized by.
  • the plane parameter calculation means may further determine whether or not the detected straight line falls within a predetermined range.
  • the flat surface detection device may further include: second three-dimensional coordinate calculation means (three-dimensional coordinate calculation unit 31) that converts the xyz coordinate system into an xy′z′ coordinate system by rotation about the x axis and generates second three-dimensional coordinate data including the detection target three-dimensional point group representing the specific detection target plane in the xy′z′ coordinate system; second projection image generation means (projection image generation unit 32) that projects the second three-dimensional coordinate data onto the xy′ plane to generate second projection image data in which the detection target three-dimensional point group is linearly distributed; second straight line detection means for detecting the second straight line from the second projection image data; and second plane parameter calculation means that calculates, based on the detection result of the second straight line detection means, a second plane parameter including information on the inclination of the specific detection target plane.
  • an autonomous mobile device (cleaning robot 1) according to one aspect of the present invention includes the flat surface detection device (flat surface detection device 60), distance image generation means (distance image sensor 20) that generates the distance image data, and travel means (driving wheels 2), and detects a plane serving as a travel path using the flat surface detection device.
  • with this configuration, the autonomous mobile device achieves the same effects as the flat surface detection device.
  • the autonomous mobile device (cleaning robot 1) may include a plurality of the distance image generation means (distance image sensors 20), and the plane parameter may be calculated from the distance image data generated by each of them.
  • in the embodiments above, the distance image sensor 20 uses the infrared projection method, but other types of distance image sensors, such as stereo and TOF sensors, can also be used.
  • in the stereo method, parallax is calculated by techniques such as corresponding-point search on the left and right images obtained from a stereo camera pair. The distance to an object is then obtained from the parallax value by the principle of triangulation, and plane detection can be realized by the same processing as in the embodiments described above.
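As a concrete illustration of the triangulation principle mentioned here (a sketch, not code from the patent): under the standard rectified-stereo model, the depth of a matched point is z = f·b/d, where f is the focal length in pixels, b the baseline, and d the disparity.

```python
# Sketch of depth from disparity under the rectified-stereo model.
def depth_from_disparity(f_px: float, baseline_m: float, disparity_px: float) -> float:
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return f_px * baseline_m / disparity_px

# Example: f = 700 px, baseline = 0.2 m, disparity = 14 px -> z = 10 m.
print(depth_from_disparity(700.0, 0.2, 14.0))
```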
  • in the embodiments above, the floor surface is detected, but the device can also be used to detect other planes such as a road surface, a water surface, a wall surface, or a ceiling.
  • the cleaning robot 1 has been described as an example of an autonomous mobile device, but the invention can also be applied to other autonomous mobile devices.
  • in the embodiments above, the flat surface detection device 60 is incorporated in the cleaning robot 1, but it can also be used as an independent device for industrial, consumer, and other purposes, or incorporated into part of another device such as a general-purpose portable information terminal.
  • each block of the flat surface detection device may be realized in hardware by a logic circuit formed on an integrated circuit (IC chip), or in software using a CPU (Central Processing Unit).
  • in the latter case, the flat surface detection device includes a CPU that executes the instructions of the programs realizing each function, a ROM (Read Only Memory) that stores the programs, a RAM (Random Access Memory) into which the programs are expanded, and a storage device (recording medium) such as a memory that stores the programs and various data.
  • the object of the present invention can also be achieved by supplying to the flat surface detection device a recording medium on which the program code (executable program, intermediate code program, or source program) of the control program for the flat surface detection device, that is, software realizing the functions described above, is recorded in a computer-readable form, and by having the computer (or CPU or MPU) read and execute the program code recorded on the recording medium.
  • examples of the recording medium include non-transitory tangible media: tapes such as magnetic tapes and cassette tapes; discs including magnetic disks such as floppy (registered trademark) disks and hard disks, and optical discs such as CD-ROM, MO, MD, DVD, and CD-R; cards such as IC cards (including memory cards) and optical cards; semiconductor memories such as mask ROM, EPROM, EEPROM (registered trademark), and flash ROM; and logic circuits such as PLDs (Programmable Logic Devices) and FPGAs (Field Programmable Gate Arrays).
  • the flat surface detection device may be configured to be connectable to a communication network, and the program code may be supplied via the communication network.
  • the communication network is not particularly limited as long as it can transmit the program code.
  • for example, the Internet, an intranet, an extranet, a LAN, an ISDN, a VAN, a CATV communication network, a virtual private network, a telephone network, a mobile communication network, or a satellite communication network can be used.
  • the transmission medium constituting the communication network may be any medium that can transmit the program code, and is not limited to a specific configuration or type.
  • for example, wired media such as IEEE 1394, USB, power line carrier, cable TV lines, telephone lines, and ADSL (Asymmetric Digital Subscriber Line) lines can be used, as can wireless media such as IrDA or remote-control infrared, Bluetooth (registered trademark), IEEE 802.11 wireless, HDR (High Data Rate), NFC (Near Field Communication), DLNA (registered trademark) (Digital Living Network Alliance), mobile phone networks, satellite links, and terrestrial digital networks.
  • the present invention can also be realized in the form of a computer data signal embedded in a carrier wave, in which the program code is embodied by electronic transmission.
  • Embodiment 3, which is one form of the road surface level difference detection method according to the present invention, is described below.
  • the obstacle detection device of Patent Document 1 extracts white lines such as lane markings to recognize the road surface; when applied to a vehicle that travels on a road surface without lane markings, such as a senior car or an electric wheelchair, it cannot recognize the road surface correctly, which makes obstacle detection difficult.
  • the road surface level difference detection method of Embodiment 3 (first road surface level difference detection method) projects at least a first image and a second image, obtained by stereo imaging of a road surface, onto XY plane coordinates; sets a detection area centered on specific coordinates (X, Y) on the plane coordinates; calculates the parallax v1 that would arise if the image of the detection area were at the road surface position; sets in the second image a comparison area centered on the coordinates (X - v1, Y) obtained by subtracting the parallax v1; and compares the image of the detection area with the image of the comparison area to detect the height of the detection area from the road surface.
  • the method further obtains the height difference between adjacent detection areas from the heights of a plurality of detection areas, and determines that there is a step between the detection areas when the height difference is equal to or greater than a threshold value.
  • the method also obtains the height difference between adjacent detection areas from the heights of the plurality of detection areas, and determines that there is an inclination between the detection areas when the height difference changes continuously.
  • the road surface level difference detection device of Embodiment 3 (first road surface level difference detection device) includes at least a first camera and a second camera that take a stereo image of the road surface, and a height calculation unit that projects the first image captured by the first camera and the second image captured by the second camera onto XY plane coordinates, sets a detection area centered on specific coordinates (X, Y) on the plane coordinates, calculates the parallax v1 that would arise if the image of the detection area were at the road surface position, sets in the second image a comparison area centered on the coordinates (X - v1, Y) obtained by subtracting the parallax v1, and compares the image of the detection area with the image of the comparison area to detect the height of the detection area from the road surface.
  • the vehicle of the third embodiment includes the road surface level difference detection device described above.
  • FIG. 13 shows the configuration of the road surface level difference detecting device 1001 of the third embodiment.
  • the road surface level difference detection device 1001 of Embodiment 3 includes two cameras 1011 and 1012 that capture a stereo image, and a calculation unit 1020 that processes the stereo image.
  • the calculation unit 1020 includes a height detection unit 1030 that calculates, from the stereo image, the height of each region in which a step is to be detected, and a step detection unit 1040 that determines the presence or absence of a step between detection regions from the height of each region. An output device 1050 such as an audio speaker or a display device is also provided to notify the operator of the presence or absence of a step as necessary.
  • FIG. 14(a) is a top view showing the arrangement of the stereo camera, and FIG. 14(b) is a side view showing its mounting position.
  • the two cameras 1011 and 1012 have the same specifications and a predetermined horizontal angle of view, as shown in FIG. 14(a); they are installed, for example, at the front of a senior car or electric wheelchair, separated left and right by a predetermined distance g.
  • the cameras 1011 and 1012 are installed at a predetermined height hc from the road surface, have a predetermined vertical angle of view and depression angle, and the optical axis of each lens points downward so as to image the road surface.
  • if the angle of view is too large, a step on the imaged road surface appears small and the detection accuracy of the step drops; conversely, if it is too small, the detectable range of steps becomes narrow, so the angle must be set appropriately for the conditions of use.
  • the depression angle is preferably set so that the road surface occupies a large proportion of the image.
  • the mounting height hc of the cameras 1011 and 1012 is preferably set as high as possible to widen the detectable range from small steps to large steps.
  • as an example of the specifications and arrangement, the cameras 1011 and 1012 have horizontal and vertical angles of view corresponding to a 35 mm lens, a mounting interval g of 15 to 25 cm, a mounting height hc of 60 to 80 cm, and a depression angle of 10 to 25°.
  • in the following, the half angles of the horizontal and vertical angles of view are denoted θ1 and θ2, respectively, and the depression angle is denoted θ3.
  • in FIG. 14, an embodiment in which the cameras 1011 and 1012 are installed side by side is described, but they can also be installed vertically or obliquely; the detection method is the same in either case.
  • FIG. 15 shows the two images of the stereo camera, which capture a road surface that includes a sidewalk.
  • FIG. 15(a) is the first image captured by the left camera 1011, and FIG. 15(b) is the second image captured by the right camera 1012. FIG. 15(c) superimposes the two images and extracts only the boundary line between the sidewalk and the road surface; as it shows, the boundary line appears shifted between the left and right images.
  • this left-right shift is the parallax; on a flat road surface, the parallax decreases at a constant rate from the near side toward the far side.
  • the height of a detection region from the road surface is detected by comparing the parallax v1 expected for a flat road surface with the actual parallax v2 obtained by imaging the step detection region.
  • FIG. 16 is a flowchart of the height detection process in the calculation unit 1020.
  • for a detection region centered on arbitrary coordinates (X, Y) of the first image, the calculation unit 1020 assumes that the region lies on the road surface and obtains the parallax v1 from the Y coordinate, the position information of the camera, and so on; it then determines in the second image a comparison region centered on the coordinates (X - v1, Y), shifted by the parallax v1, and determines the height of the detection region from the road surface from the parallax v2 between the detection region and the comparison region. With the processing from step S1 to step S3 illustrated in FIG. 16, the parallax v1 with respect to the second image, under the assumption that the region is on the road surface, is obtained.
  • FIG. 17 is an explanatory diagram illustrating a distance calculation method.
  • FIG. 17(a) shows the first image converted into a coordinate space 13 whose center (0, 0) is the origin P, whose horizontal width spans ±w pixels, and whose vertical extent spans ±h pixels; the coordinate point (X, Y) of a step detection region is shown in this coordinate space.
  • FIG. 17(b) is a side view showing the focal plane A1 of the cameras 1011 and 1012 and the positional information of the cameras.
  • in step S1, the coordinate point (X, Y) of an arbitrary detection region in which a step is to be detected is selected in the coordinate space 13.
  • the coordinate space 13 corresponds to the focal plane, that is, a plane perpendicular to the optical axis of the camera 1011 shown in FIG. 17(b). Since all points on the focal plane have the same parallax, the coordinate point (X, Y) in the coordinate space 13 has the same parallax as the origin P.
  • in step S2, the distance d1 from the camera to the origin P of the focal plane A1 is obtained under the assumption that the coordinate point (X, Y) lies on the road surface. Using the fact that all coordinate points on the focal plane have the same parallax, the coordinate point Q(0, Y), which lies on the same focal plane A1 as (X, Y), is taken as the base point, and the distance d1 to the origin P of the focal plane A1 is calculated.
  • FIG. 18 is an explanatory diagram showing a parallax calculation method.
  • FIG. 18(a) is a side view of the camera 1011 and shows the downward angle θy at which the coordinate point Q is viewed from the camera 1011. Since half of the vertical angle of view of the camera 1011 is θ2 and the height of the coordinate point Q in FIG. 17(a) is Y, θy can be obtained by the following Equation 1.
  • in step S3, the parallax v1 between the first image and the second image, for the case where the road surface lies on the focal plane A1, is obtained.
  • the parallax v1 can be obtained using the distance d1 calculated above (Equation 3). Note that, depending on lens distortion and similar factors, a different value may be used for the distance d1, or a correction may be required.
  • FIG. 18(b) is a top view of the left and right cameras 1011 and 1012. As it shows, the origin P on the focal plane A1 of the left camera 1011 is seen by the right camera 1012 in a direction at an angle θx from the center. With g denoting the distance between the left and right cameras, θx is obtained by the following equation: θx = arctan(g / d1) (Equation 4).
  • the origin P appears at the coordinates (0, 0) in the left first image, as shown in FIG. 19(a), and, with the parallax being v1 pixels, at the point (-v1, 0) in the right second image, as shown in FIG. 19(b).
  • with θ1 denoting half of the horizontal angle of view of the camera, v1 is obtained by the following expression. An illustrative reconstruction of the whole v1 computation is sketched below.
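Because Equations 1 to 3 and 5 are not reproduced in this text, the following Python sketch reconstructs the v1 computation from standard pinhole geometry; the forms assumed for θy, d1, and the final pixel conversion are assumptions, and only θx = arctan(g/d1) is stated explicitly above as Equation 4.

```python
# Hedged reconstruction of the road-surface parallax v1.
import math

def parallax_v1(Y, h, w, theta1, theta2, theta3, hc, g):
    """Parallax (pixels) expected if the point at image height Y lies on a
    flat road surface.
    Y        : vertical image coordinate of the point (pixels, up positive)
    h, w     : half height / half width of the image (pixels)
    theta1/2 : half horizontal / vertical angles of view (radians)
    theta3   : depression angle of the camera (radians)
    hc       : camera height above the road surface (metres)
    g        : baseline between the two cameras (metres)
    """
    # Downward viewing angle of the point (assumed form of Equation 1).
    theta_y = theta3 - math.atan(math.tan(theta2) * Y / h)
    if theta_y <= 0.0:
        return -1.0  # ray at or above the horizon: never meets the road
    # Distance to the road point along the ground (assumed Equations 2-3).
    d1 = hc / math.tan(theta_y)
    # Angle at which the second camera sees the same point (Equation 4).
    theta_x = math.atan(g / d1)
    # Convert that angle to pixels (assumed form of Equation 5).
    return w * math.tan(theta_x) / math.tan(theta1)
```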
  • in step S4, it is determined whether the object shown at the coordinates (X, Y) of the detection area in the left first image is at the same height as the road surface. This is done by checking whether the same object appears at the position of the comparison area (X - v1, Y) in the right second image.
  • for the comparison, the luminance of several pixels around the target point in each image may be extracted and compared. If they match within the range of error factors such as camera noise, the point can be determined to be at the same height as the road surface; if they do not match and are shifted left or right, the position can be determined to be higher or lower than the road surface according to the parallax.
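One possible realization of this patch comparison (an illustrative sketch, not the patent's implementation) is a sum-of-absolute-differences test over a small window with a noise tolerance; the window size and tolerance below are arbitrary assumptions.

```python
# Possible realization of the patch comparison in step S4.
import numpy as np

def same_height_as_road(img1, img2, x, y, v1, half=2, tol=8.0):
    """Compare the detection patch in img1 around (x, y) with the comparison
    patch in img2 around (x - v1, y); True means the patches match within
    the tolerance, i.e. the point behaves like a road-surface point."""
    p1 = img1[y - half:y + half + 1, x - half:x + half + 1].astype(np.float32)
    xr = int(round(x - v1))
    p2 = img2[y - half:y + half + 1, xr - half:xr + half + 1].astype(np.float32)
    sad = np.abs(p1 - p2).mean()  # mean absolute luminance difference
    return sad <= tol
```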
  • from this shift, the height hs of the road surface step can be found. That is, when the parallax v2 of the object is larger than the parallax v1 of the road surface, the distance d2 to the object is smaller than the distance d1 to the road surface, as shown in FIG. 20(a); hs then takes a positive value, and the object can be judged to be higher than the road surface.
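The exact relation between hs and the two parallaxes is not reproduced in this text, but under the assumption that parallax is inversely proportional to distance along the viewing ray, similar triangles give a simple sketch (these forms are assumptions, not the patent's stated equations):

```python
# Illustrative only: the camera at height hc sees the road at distance d1
# along the viewing ray; a point on the same ray at distance d2 has height
# hs = hc * (1 - d2 / d1). With parallax inversely proportional to distance,
# d2 / d1 = v1 / v2, hence hs = hc * (1 - v1 / v2).
def step_height(hc: float, v1: float, v2: float) -> float:
    return hc * (1.0 - v1 / v2)

# v2 > v1 gives hs > 0 (above the road); v2 < v1 gives hs < 0 (below).
print(step_height(0.7, 10.0, 12.0))  # about 0.117 m above the road
```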
  • in this way, the height difference from the road surface at the coordinate point (X, Y) of the first image is obtained (step S5).
  • in step S6, the above procedure is repeated at other coordinate points at appropriate intervals, and the process ends when the height from the road surface has been detected over the necessary range of the image.
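Steps S1 through S6 can thus be combined into a scan over a coarse grid of detection points. The sketch below reuses the illustrative helpers `parallax_v1` and `same_height_as_road` from the earlier sketches; the grid step and the camera-parameter dictionary are assumptions.

```python
# Putting steps S1-S6 together over a grid of the (grayscale) first image.
def scan_heights(img1, img2, cam, step=8, half=2):
    """Return {(x, y): matches_road} over a coarse grid of the first image."""
    h_img, w_img = img1.shape
    results = {}
    for y in range(half, h_img - half, step):           # step S6: repeat
        Y = (h_img // 2) - y                            # pixel row -> Y coordinate
        v1 = parallax_v1(Y, cam["h"], cam["w"], cam["theta1"],
                         cam["theta2"], cam["theta3"], cam["hc"], cam["g"])
        if v1 <= 0:
            continue                                    # row above the horizon
        for x in range(half + int(v1) + 1, w_img - half, step):
            # Steps S4-S5: compare detection and comparison patches.
            results[(x, y)] = same_height_as_road(img1, img2, x, y, v1,
                                                  half=half)
    return results
```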
  • the details of the flowchart of FIG. 16 are as described above, and these processes are executed by the calculation unit 1020 shown in FIG. 13. Specifically, they may be realized as software on a PC or microcomputer, as hardware using an FPGA or ASIC, or as a combination in which part is processed in hardware and the remainder in software.
  • FIG. 21 is an example of the result of applying the above method to the stereo image of FIG. 15.
  • each detection area is displayed in a gradation according to its height: areas at the same height as the road surface are shown in gray, and areas lower than the road surface in black.
  • next, the step detection unit 1040 of the calculation unit 1020 is described.
  • the height from the road surface is compared between detection areas adjacent to each other vertically and horizontally, and when the height difference is equal to or greater than a threshold value, it is judged that there is a step between them.
  • the height-difference threshold for determining a step is set so as to ensure safety, for example against falls of a senior car or wheelchair.
  • the boundaries determined to be steps are indicated by the broken-line portions.
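A sketch of this adjacency test follows (illustrative only; the threshold value is an assumption, not the patent's):

```python
# Sketch of the step decision in the step detection unit 1040: compare the
# heights of vertically and horizontally adjacent detection areas against a
# threshold. `heights` is a 2-D array of heights (m) per detection area;
# NaN marks areas where no height could be measured.
import numpy as np

def find_steps(heights: np.ndarray, threshold_m: float = 0.05):
    """Return a list of ((row, col), (row2, col2)) pairs judged to be steps."""
    steps = []
    rows, cols = heights.shape
    for r in range(rows):
        for c in range(cols):
            for dr, dc in ((0, 1), (1, 0)):   # right and down neighbours
                r2, c2 = r + dr, c + dc
                if r2 < rows and c2 < cols:
                    diff = heights[r, c] - heights[r2, c2]
                    if not np.isnan(diff) and abs(diff) >= threshold_m:
                        steps.append(((r, c), (r2, c2)))
    return steps
```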
  • FIG. 22 shows an application example in which the road surface level difference detection device 1001 of Embodiment 3 is mounted on a senior car 1060, an example of a vehicle.
  • the road surface level difference detection device 1001 is provided in front of the handle 1061 of the senior car 1060 at a height hc from the road surface.
  • the road surface level difference detection device 1001 includes an output device 1050 such as a speaker 1031 and a display device 1032 to notify the driver of the senior car 1060 of the level difference detection result.
  • a buzzer or voice guidance is output from the speaker 1031, or text or graphics are displayed on the display device 1032, to notify the driver of a step on the road surface, making it possible to avoid dangers such as a wheel dropping off the step or the senior car 1060 overturning.
  • the road surface level difference detection device 1001 of Embodiment 3 may be provided not only at the front of the senior car 1060 but also at the rear, making it possible to avoid dangers such as dropped wheels even when reversing with poor visibility.
  • the use of the road surface level difference detection device 1001 of Embodiment 3 is not limited to the senior car 1060; it can suitably be used on any vehicle that needs to detect road surface steps, for example vehicles ranging from wheelchairs to forklifts.
  • on a wheelchair, the road surface level difference detection device 1001 can be provided facing front-rear and left-right, so that even when the wheelchair turns on the spot, the risk of a wheel dropping off or tipping at a step around the wheelchair can be prevented.
  • on a forklift, even if the forward field of view is blocked during cargo transport, a low piece of cargo placed on the road surface can be detected as a step and a collision avoided.
  • Embodiment 3 thus provides a road surface level difference detection device that can detect not only obstacles ahead but also steps in the road surface.
  • Embodiment 4, which is another form of the road surface level difference detection method according to the present invention, is described below.
  • as noted above, the obstacle detection device of Patent Document 1 extracts white lines such as lane markings to recognize the road surface; when applied to a vehicle that travels on a road surface without lane markings, such as a senior car or an electric wheelchair, it cannot recognize the road surface correctly, which makes obstacle detection difficult.
  • in the road surface level difference detection method of Embodiment 4 (second road surface level difference detection method), at least a first image and a second image obtained by stereo imaging of a road surface are projected onto XY plane coordinates; for each row of image data along a specific Y-axis direction, the parallax that would arise if the image were at the road surface position is calculated; a third image is generated by correcting the second image, shifting each row by its parallax; and the height from the road surface is detected by comparing the first image and the third image for each step detection area.
  • the method of Embodiment 4 further obtains the height difference between adjacent detection areas from the heights of a plurality of detection areas, and determines that there is a step between the detection areas when the height difference is equal to or greater than a threshold value.
  • the method of Embodiment 4 also obtains the height difference between adjacent detection areas from the heights of the plurality of detection areas, and determines that there is an inclination between the detection areas when the height difference changes continuously.
  • the road surface level difference detection device of Embodiment 4 (second road surface level difference detection device) includes a stereo camera that captures at least a first image and a second image of the road surface, and a height detection unit that projects the captured first and second images onto XY plane coordinates, calculates for each row of image data along a specific Y-axis direction the parallax that would arise if the image were at the road surface position, generates a third image by correcting the second image with each row shifted by its parallax, and detects the height from the road surface by comparing the first image and the third image for each step detection region.
  • the vehicle of the fourth embodiment includes the road surface level difference detection device described above.
  • since the configuration of the road surface level difference detection device 1001 of Embodiment 4 is the same as that of Embodiment 3, described with reference to FIGS. 13 and 14, its description is omitted; only the differences from Embodiment 3 are described below.
  • as in Embodiment 3, the boundary line appears shifted between the left and right images, and the amount of left-right shift is the parallax. The road surface level difference detection device of Embodiment 4 extracts this parallax v1 and detects the height of a step above or below the road surface.
  • in a stereo image, the bottom surface of a recess parallel to the road surface is imaged with distortion, unlike a three-dimensional object, so the height from the road surface is detected using a third image in which this distortion has been corrected.
  • FIG. 23 illustrates the correction of this image distortion: FIG. 23(a) shows the first image, FIG. 23(b) the second image, and FIG. 23(c) the corrected third image.
  • in the road surface level difference detection method of Embodiment 4, a first image and a second image of the road surface, viewed from different directions, are first captured with the stereo camera. For each row of data, that is, the pixel values sharing the same Y coordinate, the parallax v1 that would arise if the objects in that row were on the road surface is calculated from the Y coordinate value and the camera information. A third image is then generated by correcting the second image with each row shifted by its parallax v1; the corrected third image and the first image are compared for each detection area, and the height of the detection area from the road surface is detected from the shift amount v2 between them.
  • FIG. 24 is a flowchart of the height detection process in the calculation unit 1020 of the road surface level difference detection device 1001 according to the fourth embodiment.
  • the calculation unit 1020 first obtains the parallax v1 for each row of data by the processing from step S1 to step S3.
  • to obtain the parallax v1, it is first necessary to calculate the distance d1 from the camera to the focal plane of the image. The method is described with reference to FIG. 17, used in Embodiment 3.
  • here, the coordinate point (X, Y) shown in the coordinate space 13 of FIG. 17 is assumed to be contained in the row data.
  • in step S1, the row data containing the coordinate point (X, Y) of an arbitrary detection area in which a step is to be detected is selected in the coordinate space 13.
  • as before, the coordinate space 13 corresponds to the focal plane, a plane perpendicular to the optical axis of the camera 1011 shown in FIG. 17(b), and the coordinate point (X, Y) in the coordinate space 13 has the same parallax as the origin P.
  • in step S2, the distance d1 from the camera to the origin P of the focal plane A1, under the assumption that the coordinate point (X, Y) is on the road surface, is obtained; since this is the same as step S2 of Embodiment 3, its description is omitted here.
  • in step S3, the parallax v1 between the first image and the second image, for the case where the road surface lies on the focal plane A1, is obtained; since this is the same as step S3 of Embodiment 3, its description is omitted here.
  • in step S4, after the parallax v1 between the first and second images for the road surface position has been obtained for each row of data, the second image is corrected as shown in FIG. 23(c): each row of the second image is shifted in the X direction by the v1 pixels corresponding to its Y coordinate.
  • when the parallax v1 is fractional, the value is interpolated from the two neighboring pixels. For example, when v1 is 5.5, the correction writes half the sum of the fifth and sixth pixels from the left into the position of the zeroth pixel.
  • correcting the row data with the parallax v1 over the entire range of Y coordinates of the second image yields the third image shown in FIG. 23(c); as a result, objects in the corrected third image have the same shape as in the first image (see the sketch below).
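The row-shift correction, including the fractional-parallax interpolation in the 5.5-pixel example above, might look like the following sketch; `v1_per_row` is assumed to hold the road-surface parallax for each Y row, and images are single-channel NumPy arrays.

```python
# Sketch of the step S4 correction: shift every row of the second image left
# by its road-surface parallax v1, interpolating linearly when v1 is
# fractional (for v1 = 5.5, pixel 0 becomes the mean of pixels 5 and 6).
import numpy as np

def make_third_image(img2: np.ndarray, v1_per_row: np.ndarray) -> np.ndarray:
    h, w = img2.shape
    img3 = np.zeros_like(img2, dtype=np.float32)
    for y in range(h):
        v1 = float(v1_per_row[y])
        lo = int(np.floor(v1))              # e.g. 5 when v1 = 5.5
        frac = v1 - lo                      # e.g. 0.5
        for x in range(w):
            if x + lo + 1 < w:
                img3[y, x] = ((1.0 - frac) * img2[y, x + lo]
                              + frac * img2[y, x + lo + 1])
    return img3
```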
  • in step S5, it is determined whether the object shown at the coordinates (X, Y) of the first image is at the same height as the road surface. This is judged by comparing the detection area centered on (X, Y) in the first image with the comparison area centered on (X, Y) in the third image and checking whether the same object appears at the same position in both.
  • for the comparison, the luminance of several pixels around the target point in each image can be extracted and compared. If they match within the range of error factors such as camera noise, the point can be determined to be at the same height as the road surface; if they do not match and are shifted left or right, the position can be determined to be higher or lower than the road surface according to the shift amount.
  • using (Equation 4) to (Equation 8), the height difference hs of the road surface step, that is, the height difference of the coordinate point (X, Y) from the road surface, can be obtained.
  • in step S6, the above procedure is repeated at other coordinate points at appropriate intervals, and the process ends when the height from the road surface has been detected over the necessary range of the image.
  • FIG. 21 is an example of the result of applying the above method to the stereo image of FIG. 15.
  • each detection area is displayed in a gradation according to its height: areas at the same height as the road surface are shown in gray, and areas lower than the road surface in black.
  • since the step detection unit 1040 of the calculation unit 1020 has been described in Embodiment 3, its description is omitted here.
  • Embodiment 4 thus provides a road surface level difference detection device that can detect not only obstacles ahead but also steps in the road surface.
  • the present invention relates to a plane detection device for detecting a plane, such as a floor, contained in measured image data, and to an autonomous mobile device using the same. The plane detection device itself can be used as an independent device for industrial, consumer, and other purposes, can be incorporated into part of another device, or can be implemented in part or in whole as an integrated circuit (IC chip).

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Electromagnetism (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Measurement Of Optical Distance (AREA)

Abstract

A cleaning robot (1) according to an embodiment of the present invention comprises a distance image sensor (20) and a plane detection device (60). The plane detection device (60) comprises: a three-dimensional coordinate computation unit (31) that converts a distance image into three-dimensional coordinate data; a projected image creation unit (32) that creates an image in which the three-dimensional coordinate data is projected onto a plane; a straight line detection unit (33) that detects straight lines from the projected image; and a plane detection unit (34) that detects planes on the three-dimensional coordinate data from the straight lines.

Description

Plane detection device, autonomous mobile device equipped with plane detection device, road surface step detection method, road surface step detection device, and vehicle equipped with road surface step detection device
The present invention relates to a plane detection device, an autonomous mobile device including the plane detection device, a road surface step detection method, a road surface step detection device, and a vehicle including the road surface step detection device.
In autonomous mobile devices such as robots and automatic guided vehicles, obstacles and steps ahead must be detected during movement to prevent collisions and falls. Infrared and ultrasonic proximity sensors have been widely used for detecting such obstacles and steps.
However, an infrared or ultrasonic proximity sensor can determine the presence or absence of an obstacle ahead but cannot obtain its detailed position or shape. When attached to a robot, it can therefore detect obstacles and steps in the immediate vicinity, but cannot be used for applications in which the robot finds and avoids obstacles and steps over a wide range ahead in the direction of travel. Distance sensors such as the laser range finder (LRF) have therefore come into use.
FIG. 25 shows an autonomous mobile device 100 equipped with a conventional laser range finder 101. FIG. 25(a) is an external view of the device, and FIG. 25(b) shows an upper obstacle 102, a lower obstacle 103, and a step 104 that the device cannot detect. As shown in FIG. 25(a), the autonomous mobile device 100 carries the laser range finder 101 at a height HLRF. At that height, the laser range finder 101 performs an angular scan in the horizontal direction (in the plane perpendicular to the y axis in the figure) to detect obstacles and steps.
However, although the laser range finder 101 has high measurement accuracy, it cannot detect obstacles and steps at heights other than the height at which it measures. In FIG. 25(b), for example, the laser range finder 101 cannot detect the obstacle 102, the obstacle 103, or the step 104. To detect them, many additional sensors, such as infrared and ultrasonic proximity sensors, must be arranged alongside the laser range finder 101, which makes the structure of the autonomous mobile device complicated.
Therefore, to detect obstacles and steps, a technique has been proposed in which an image containing the obstacles and steps is acquired, a plane is detected from the image, and the obstacles and steps on that plane are then detected.
More recently, methods have been put into practical use in which a camera mounted on an automobile images the scene outside the vehicle, the captured image is processed to obtain the distance from the vehicle to objects, the risk of collision with a vehicle ahead or a guardrail is predicted, and control such as braking is applied.
Such image-based distance measurement techniques are broadly divided into techniques that estimate the distance to an object from a monocular image using its relationship to the camera position, and techniques that obtain the distance by the principle of triangulation from stereo images captured by multiple cameras. Of these, the triangulation-based stereo technique obtains the distance from the relative shift of the same object's position in the left and right images, and can therefore determine the distance accurately.
For example, Patent Document 1 discloses an obstacle detection device comprising an image storage unit that stores images input from a plurality of cameras, a feature extraction unit that extracts a plurality of white lines present on the road surface, a parameter calculation unit that obtains, from the extracted white lines, a relational expression holding between the projection positions of an arbitrary point on the road surface in each image, and a detection unit that uses the relational expression to detect objects having height above the road surface.
According to the obstacle detection device of Patent Document 1, even when the slope of the road surface changes, the road surface is recognized from the movement of the two white lines, and obstacles on the road surface can be detected at high speed and with high accuracy.
In Patent Document 2, a stereo camera is used to detect the road surface from the Hough transform of the differential images of the left and right images, and other vehicles, pedestrians, and the like are detected using it as a clue.
In Patent Document 3, plane information is extracted from distance image data obtained by a TOF (Time Of Flight) distance image sensor by a calculation technique called the RANSAC (RANdom SAmple Consensus) method, and pedestrians and the like are detected using it as a clue.
Patent Document 1: Japanese Unexamined Patent Application Publication No. 2001-76128. Patent Document 2: Japanese Unexamined Patent Application Publication No. 2005-24464 (published January 27, 2005). Patent Document 3: Japanese National-Phase Publication (of PCT Application) No. 2011-530706 (published December 22, 2011).
However, it is difficult to say that any of these conventional techniques can reliably detect planes, obstacles, and the like.
For example, the techniques of Patent Documents 1 and 2 recognize the road surface by extracting white lines such as lane markings; when applied to a vehicle that travels on a road surface without lane markings, such as a senior car or an electric wheelchair, they cannot recognize the road surface correctly.
A plane detection device according to the present invention is a plane detection device that detects a specific detection target plane from distance image data of a subject including that plane, and comprises: three-dimensional coordinate calculation means for converting the distance image data into three-dimensional coordinate data including a detection target three-dimensional point group representing the specific detection target plane; projection image generation means for projecting the three-dimensional coordinate data onto a predetermined two-dimensional plane to generate projection image data in which the detection target three-dimensional point group is linearly distributed; straight line detection means for detecting that straight line from the projection image data; and plane parameter calculation means for calculating, based on the detection result of the straight line detection means, a plane parameter including information on the inclination of the specific detection target plane.
An autonomous mobile device according to the present invention comprises the above plane detection device, distance image generation means for generating the distance image data, and travel means, and detects a plane serving as a travel path using the plane detection device.
A first road surface step detection method according to the present invention projects at least a first image and a second image, obtained by stereo imaging of a road surface, onto XY plane coordinates; sets a detection area centered on specific coordinates (X, Y) on the plane coordinates; calculates the parallax v1 when the image of the detection area is at the road surface position; sets in the second image a comparison area centered on the coordinates (X - v1, Y) obtained by subtracting the parallax v1; and compares the image of the detection area with the image of the comparison area to detect the height of the detection area from the road surface.
A first road surface step detection device according to the present invention comprises at least a first camera and a second camera that take a stereo image of a road surface, and a height calculation unit that projects the first image captured by the first camera and the second image captured by the second camera onto XY plane coordinates, sets a detection area centered on specific coordinates (X, Y) on the plane coordinates, calculates the parallax v1 when the image of the detection area is at the road surface position, sets in the second image a comparison area centered on the coordinates (X - v1, Y) obtained by subtracting the parallax v1, and compares the image of the detection area with the image of the comparison area to detect the height of the detection area from the road surface.
A second road surface step detection method according to the present invention projects at least a first image and a second image, obtained by stereo imaging of a road surface, onto XY plane coordinates; calculates, for each row of image data along a specific Y-axis direction, the parallax when the image is at the road surface position; generates a third image by correcting the second image, shifting each row by its parallax; and compares the first image and the third image for each step detection area to detect the height from the road surface.
A second road surface step detection device according to the present invention comprises a stereo camera that captures at least a first image and a second image of a road surface, and a height detection unit that projects the captured first and second images onto XY plane coordinates, calculates for each row of image data along a specific Y-axis direction the parallax when the image is at the road surface position, generates a third image by correcting the second image with each row shifted by its parallax, and compares the first image and the third image for each step detection area to detect the height from the road surface.
A vehicle according to the present invention comprises the first or second road surface step detection device described above.
According to the present invention, planes, obstacles, and the like can be detected reliably.
Brief description of the drawings:
FIG. 1: (a) is an external view of a cleaning robot using the plane detection device according to Embodiment 1 of the present invention; (b) is a cross-sectional view of the same cleaning robot.
FIG. 2: shows the mounting positions of the distance image sensors provided on the cleaning robot according to Embodiment 1 and their measurement ranges.
FIG. 3: (a) is an RGB image captured by the long-distance range image sensor of the cleaning robot according to Embodiment 1; (b) is the corresponding distance image; (c) is an RGB image captured by the short-distance range image sensor; (d) is the corresponding distance image.
FIG. 4: is a block diagram showing the functional configuration of the cleaning robot according to Embodiment 1.
FIG. 5: (a) shows the three-dimensional coordinate system referenced to the short-distance range image sensor of the cleaning robot according to Embodiment 1; (b) shows the coordinate system referenced to the long-distance range image sensor; (c) shows the coordinate system referenced to the cleaning robot itself.
FIG. 6: shows an example of three-dimensional coordinate data according to Embodiment 1.
FIG. 7: is a projection of the three-dimensional coordinate data of FIG. 6 onto the yz plane.
FIG. 8: is a flowchart showing the processing procedure of the arithmetic unit of the cleaning robot according to Embodiment 1.
FIG. 9: (a) is a projection onto the yz plane generated from a distance image captured by the long-distance range image sensor of the cleaning robot according to Embodiment 1; (b) is the bottom image of (a); (c) is the corresponding projection for the short-distance range image sensor; (d) is the bottom image of (c).
FIG. 10: (a) is a projection onto the yz plane, generated from a distance image captured by the range image sensor of the cleaning robot according to Embodiment 1, for the case where a step lower than the floor surface exists; (b) is the bottom image of (a).
FIG. 11: is a flowchart showing the processing procedure of the arithmetic unit of the cleaning robot according to Embodiment 2.
FIG. 12: (a) shows an example of three-dimensional coordinate data generated from a distance image captured by the range image sensor of the cleaning robot according to Embodiment 2; (b) is its projection onto the yz plane; (c) is its projection onto the xy plane; (d) is its projection onto the xy′ plane.
FIG. 13: is a configuration diagram of the road surface step detection device of Embodiment 3.
FIG. 14: is a layout diagram of the stereo camera provided in the road surface step detection device of Embodiment 3.
FIG. 15: is an image captured by the stereo camera of the road surface step detection device of Embodiment 3.
FIG. 16: is a flowchart of the height detection processing in the calculation unit of the road surface step detection device of Embodiment 3.
FIG. 17: is an explanatory diagram showing the distance calculation method.
FIG. 18: is an explanatory diagram showing the parallax calculation method.
FIG. 19: is an explanatory diagram showing the parallax calculation method.
FIG. 20: is an explanatory diagram showing the road surface height calculation method.
FIG. 21: is an example of a road surface height calculation result.
FIG. 22: shows a senior car, a vehicle equipped with the road surface step detection device.
FIG. 23: shows the stereo image correction method performed in the road surface step detection device of Embodiment 4.
FIG. 24: is a flowchart of the height detection processing in the calculation unit of the road surface step detection device of Embodiment 4.
FIG. 25: (a) is an external view of an autonomous mobile device equipped with a conventional laser range finder; (b) shows an upper obstacle, a lower obstacle, and a step that such a device cannot detect.
FIG. 26: shows distortion of a stereo image.
[Embodiment 1]
An embodiment of the plane detection device and autonomous mobile device of the present invention is described below.
In the method of Patent Document 2 described above, a stereo camera is essential, and easy-to-understand clues for detecting a plane, such as white lines on the road or road edges, are required. For example, if the plane detection device of Patent Document 2 were mounted on a mobile robot and the road surface on which the robot travels had no such clues, the correspondence between the left and right images could not be obtained, so the method could not be applied.
In the method of Patent Document 3 described above, plane information can be extracted directly from distance image data, but if the data contains much information unrelated to the plane, that information becomes noise and hinders plane detection. In particular, when most of the distance image data is occupied by information other than the plane, the plane can no longer be detected. Moreover, because many parameters must be calculated to specify the plane, the computation time increases.
Embodiment 1 solves these problems and provides a plane detection device that can detect, from a distance image, a plane contained in that image quickly and more reliably.
In Embodiment 1, a cleaning robot equipped with the plane detection device according to the present invention is described with reference to FIGS. 1 to 10.
FIG. 1 shows the cleaning robot 1 of Embodiment 1: (a) is an external view of the cleaning robot 1, and (b) is a cross-sectional view illustrating the internal configuration of its housing 11. The cleaning robot 1 of Embodiment 1 is an autonomous traveling cleaning robot that cleans while traveling autonomously over a floor surface. Its essential component is a plane detection device that detects a plane from the distance image acquired by a distance image sensor. Using the plane detection device, the cleaning robot 1 detects the plane, identifies obstacles and steps in its direction of travel, and travels while avoiding them.
The plane detection device according to the present invention may include the distance image sensor as a component; alternatively, as described below in Embodiment 1, the cleaning robot 1 may be provided with both a distance image sensor and a plane detection device, with the plane detection device obtaining the distance image acquired by the sensor and detecting the plane from it. In Embodiment 1, the distance image sensor is not a component of the plane detection device but exists as an external element.
 (Configuration of cleaning robot 1)
 As shown in (a) of FIG. 1, the cleaning robot 1 includes a housing 11 provided with a window 21, drive wheels 2 (traveling means), and a protection member 12. Various control systems and drive systems, described later, are mounted inside the housing 11; by drive-controlling the drive wheels 2, the cleaning robot 1 travels on the road surface and cleans it while traveling or while stopped.
 A more specific configuration of the cleaning robot 1 is described with reference to (b) of FIG. 1. As shown in (b) of FIG. 1, the cleaning robot 1 houses, inside the housing 11 provided with the window 21, a battery 4, a waste liquid recovery unit 45, a cleaning liquid discharge unit 46, a motor 10, a distance image sensor 20 (distance image generating means), and an arithmetic device 30. Furthermore, outside the housing 11, more specifically between the housing 11 and the road surface, the cleaning robot 1 includes, in addition to the drive wheels 2 described above, follower wheels 3, a cleaning brush 9, and the protection member 12. Each component is described below; the characteristic feature of Embodiment 1 lies in the plane detection device 60 provided as part of the arithmetic device 30. Accordingly, the description below focuses on this characteristic configuration, while detailed description of the other components is omitted because they can be realized by conventionally known configurations.
 In FIG. 1, the cleaning robot 1 can move forward (to the left of the page), move backward (to the right of the page), and turn toward the back or the front of the page. In the following description, movement to the left of the page, which is the main direction of travel, may simply be referred to as the traveling direction.
 The drive wheels 2 are arranged on the left and right of the bottom of the cleaning robot 1 and are controlled by drive motors (not shown) to move the cleaning robot 1. The follower wheels 3 are rotatably attached to the bottom of the cleaning robot 1. The drive wheels 2 and follower wheels 3 enable forward movement, backward movement, turning, and stopping, and combinations of these allow the cleaning robot 1 to travel freely.
 The battery 4 supplies power to the cleaning robot 1. The battery 4 is charged through a well-known step-down circuit and rectifying/smoothing circuit and outputs a predetermined voltage.
 The cleaning liquid discharge unit 46 includes a cleaning liquid tank 5 and a cleaning liquid discharge section 6. The cleaning liquid tank 5 stores cleaning liquid. The cleaning liquid discharge section 6 is connected to the cleaning liquid tank 5 by a pipe and discharges the cleaning liquid stored in the tank.
 The waste liquid recovery unit 45 includes a waste liquid tank 7 and a suction port 8. The waste liquid tank 7 collects the waste liquid (including dust and dirt) sucked in by the cleaning robot 1. The cleaning robot 1 sucks waste liquid in through the suction port 8 and discharges it into the waste liquid tank 7, which is connected to the suction port 8 by a pipe.
 The cleaning brush 9 is installed near the suction port 8 and cleans the floor using the cleaning liquid discharged from the cleaning liquid discharge section 6. The cleaning brush 9 is driven by the motor 10.
 The protection member 12 is installed on the forward side of the bottom of the cleaning robot 1 to prevent the cleaning liquid from splashing and foreign matter from being drawn in.
 The distance image sensor 20 comprises a short-range distance image sensor 20a and a long-range distance image sensor 20b. When describing a configuration common to the short-range distance image sensor 20a and the long-range distance image sensor 20b, they may simply be referred to collectively as the distance image sensor 20.
 The distance image sensor 20 is an infrared-projection distance image sensor; it contains a projection optical system including an infrared projection element and an imaging optical system including an infrared image sensor. By projecting infrared light having a predetermined pattern onto the surroundings and capturing the light reflected from external objects with the image sensor, it can measure the distance to objects within the field of view of the imaging optical system. The short-range distance image sensor 20a and the long-range distance image sensor 20b are arranged inside the housing 11; they project infrared light to the outside through the window 21 of the housing 11 and receive the reflected light from outside through the same window.
 The distance measurement results of the distance image sensor 20 are output as a distance image (also called a depth image), in which the distance to each object within the field of view is expressed as the grayscale value of the corresponding pixel. Details of the distance image sensors 20a and 20b in this embodiment are described later.
 The arithmetic device 30 acquires the distance image from the distance image sensor 20 and performs processing to detect a plane. Details of the configuration and functions of the arithmetic device 30 are described later.
 In addition to the components described above, the cleaning robot 1 also includes components described later. It may further include control switches 50 (FIG. 4), such as an operation panel for selecting manual or automatic travel, a travel switch for determining the travel direction during manual travel, and an emergency stop switch for stopping operation in an emergency. The form of the cleaning robot 1 is not limited to the type that cleans with cleaning liquid as described above; it may instead take a form like a so-called household vacuum cleaner provided with a fan, a dust collection chamber, a suction port, and so on.
 The autonomous locomotion device according to the present invention includes, as essential components, what in the cleaning robot 1 of Embodiment 1 are the distance image sensor 20 and the plane detection device 60 of the arithmetic device 30 described later. The distance image sensor 20 mentioned above is therefore detailed below, followed by the details of the arithmetic device 30.
 (Details of the distance image sensor 20)
 FIG. 2 shows the mounting positions and measurement ranges of the distance image sensors 20 of the cleaning robot 1 according to Embodiment 1 of the present invention. The distance image sensor 20 is mounted on the front of the cleaning robot 1, in the traveling direction, at a predetermined height above the floor surface (traveling road surface) to be cleaned. More specifically, the sensor is mounted facing the floor obliquely downward, so that its optical axis points diagonally down and forward from the image sensor in the traveling direction. In Embodiment 1, the short-range distance image sensor 20a is mounted at a height above the floor (from the floor to the installation position of the image sensor) of H = 740 [mm], with the optical axis at an angle of θ = 67.5 [deg] to the floor. Similarly, the long-range distance image sensor 20b is mounted at a height of H = 710 [mm], with the optical axis at an angle of θ = 22.5 [deg] to the floor.
 (a) in FIG. 3 is an RGB image captured by the long-range distance image sensor 20b; (b) in FIG. 3 is a distance image captured by the long-range distance image sensor 20b; (c) in FIG. 3 is an RGB image captured by the short-range distance image sensor 20a; and (d) in FIG. 3 is a distance image captured by the short-range distance image sensor 20a. The distance images (b) and (d) in FIG. 3 display the distances to the objects in the fields of view of the RGB images (a) and (c), with near distances shown bright and far distances shown dark. As (b) and (d) in FIG. 3 show, the distance image sensors 20a and 20b differ in mounting position and in angle relative to the horizontal plane, so the floor plane appears at different angles in the two images.
 If the optical axis of the distance image sensor 20 were arranged parallel to the floor, the area near the body of the cleaning robot 1 would fall outside the sensor's angle of view. A wide region at short range from the robot body would then lie outside the field of view and could not be measured. By contrast, mounting the sensor facing obliquely downward and forward reduces this nearby out-of-view region, making measurement possible relatively close to the body of the cleaning robot 1.
 The field of view of the short-range distance image sensor 20a, projected onto the floor, is the trapezoidal region A0B0C0D0 shown in FIG. 2, while the field of view of the long-range distance image sensor 20b is the trapezoidal region A1B1C1D1 in FIG. 2. By using together a plurality of distance image sensors 20 mounted at different angles relative to the horizontal plane in this way, the cleaning robot 1 obtains a wide measurement range extending from near the robot to far ahead in the traveling direction.
 The arrangement and number of distance image sensors 20 are not limited to the configuration of Embodiment 1; for example, only one distance image sensor 20 may be mounted, or several may be arranged side by side horizontally. In Embodiment 1, because the short-range distance image sensor 20a and the long-range distance image sensor 20b use infrared light sources of the same wavelength, a slight gap is left between the field-of-view regions A0B0C0D0 and A1B1C1D1, as shown in FIG. 2, to prevent mutual interference. If interference can be prevented by other means, for example by using light sources of different wavelengths, the short-range distance image sensor 20a and the long-range distance image sensor 20b may instead be installed so that there is no gap between the two field-of-view regions.
 (Travel function of cleaning robot 1)
 FIG. 4 is a block diagram showing the configuration related to the travel function of the cleaning robot 1 of Embodiment 1. As shown in FIG. 4, in addition to the distance image sensor 20 and the arithmetic device 30 described above, the cleaning robot 1 includes a travel control unit 41, a cleaning control unit 42, a map information memory unit 43, a status display unit 44, rotary encoders 47, drive wheel motors 48, a gyro sensor 49, and control switches 50.
 The arithmetic device 30 acquires distance images from the distance image sensor 20 and extracts the positions, sizes, and shapes of obstacles and steps from the acquired images. The extracted obstacle and step information (hereinafter referred to as obstacle/step data) is output to the travel control unit 41. Details of the arithmetic device 30 are described later.
 The travel control unit 41 keeps track of the distance traveled by the cleaning robot 1 and its current position and orientation based on information from the rotary encoders 47 attached to the drive wheels 2 and from the gyro sensor 49. Based on map information stored in advance in the map information memory unit 43 and on the obstacle/step data output from the arithmetic device 30, it determines a travel route that avoids obstacles and steps and controls the drive wheel motors 48. When it receives a signal from a control switch 50, it performs the required control accordingly, such as an emergency stop or a change of travel direction. Information about these controls is shown on the status display unit 44 and updated in real time.
 The cleaning control unit 42 receives commands from the travel control unit 41 and controls the cleaning-related components, such as starting and stopping the cleaning brush 9, the waste liquid recovery unit 45, and the cleaning liquid discharge unit 46.
 The map information memory unit 43 stores information such as the obstacles and steps within the area the cleaning robot 1 is to clean. The information stored in the map information memory unit 43 is updated by the travel control unit 41.
 The status display unit 44 displays information about the state of the cleaning robot 1, for example whether it is in manual or automatic travel mode, or that operation has been stopped in an emergency.
 The rotary encoders 47 are attached to the drive wheels 2 and output the rotational displacement to the travel control unit 41 as digital signals. From the output of the rotary encoders 47, the travel control unit 41 can determine the distance traveled.
 The gyro sensor 49 detects changes in orientation and outputs them to the travel control unit 41. From the output of the gyro sensor 49, the travel control unit 41 can determine the direction of travel.
 (Details of the arithmetic device 30)
 As shown in FIG. 4, the arithmetic device 30 includes a plane detection device 60, an obstacle/step detection unit 35, and a data integration unit 36. As shown in FIG. 4, the plane detection device 60 includes a three-dimensional coordinate calculation unit 31 (three-dimensional coordinate calculation means, second three-dimensional coordinate calculation means), a projection image generation unit 32 (projection image generation means, second projection image generation means), a straight line detection unit 33 (straight line detection means, second straight line detection means), and a plane detection unit 34 (plane parameter calculation means, second plane parameter calculation means).
 ・Three-dimensional coordinate calculation unit 31
 The three-dimensional coordinate calculation unit 31 acquires a distance image from the distance image sensor 20 and converts the acquired distance image into three-dimensional coordinate data. The definition of the coordinate system of the three-dimensional coordinate data is explained with reference to FIG. 5.
 (a) in FIG. 5 shows the three-dimensional coordinate system referenced to the short-range distance image sensor 20a; (b) in FIG. 5 shows the three-dimensional coordinate system referenced to the long-range distance image sensor 20b; and (c) in FIG. 5 shows the three-dimensional coordinate system referenced to the cleaning robot 1. The coordinate system referenced to the short-range or long-range distance image sensor 20a, 20b takes the optical center of the distance image sensor 20 as the origin, the left-right direction when facing the traveling direction as the x-axis (rightward positive), the up-down direction as the y-axis (upward positive), and the front-back direction, i.e., the optical axis direction of the distance image sensor 20, as the z-axis (depth direction positive). Because the distance image sensors 20a and 20b differ in mounting position and angle, their coordinate systems also differ, as shown in (a) and (b) of FIG. 5. Moreover, a distance expressed in the sensor-referenced coordinate system differs from the distance measured from the body of the cleaning robot 1 along the floor. Therefore, to obtain the exact distance from the cleaning robot 1 to an object, the data must be transformed into the coordinate system referenced to the cleaning robot 1 (floor-referenced), and the data of the two distance image sensors must be integrated.
 Accordingly, as shown in (c) of FIG. 5, an XYZ coordinate system referenced to the cleaning robot 1 is defined separately from the sensor-referenced xyz coordinates. The traveling direction is the Z-axis, the normal direction of the floor is the Y-axis, and the direction perpendicular to the Z- and Y-axes is the X-axis (rightward positive).
 In Embodiment 1, the direction of the sensor-referenced x-axis and the direction of the robot-referenced X-axis almost coincide. This means that the distance image sensor 20 is not mounted tilted in the direction of rotation about the z-axis, and the tilt between the floor and the distance image sensor 20 consists only of the tilt θ in the direction of rotation about the x-axis. Alternatively, even if the sensor is tilted in the direction of rotation about the z-axis, that tilt is sufficiently small compared with θ to be negligible.
 Next, the method of calculating the coordinates of the points contained in the three-dimensional coordinate data is described. Of the three-dimensional coordinates (x, y, z), the z-coordinate is the distance itself contained in the distance image. The x- and y-coordinates can be calculated from z by the principle of triangulation, provided that the focal length f of the optical system of the distance image sensor 20, the pixel pitch p, the pixel offset c between the optical axis and the center of the image sensor, and so on are known. In Embodiment 1, the distance image sensor is calibrated in advance to obtain these parameters. Consequently, once a distance image is obtained, sensor-referenced three-dimensional coordinate (x, y, z) data, i.e., three-dimensional point cloud data, can be obtained for every pixel in the image.
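 As a reference for this conversion, the following is a minimal sketch of depth-to-point-cloud conversion under the pinhole camera model outlined above. The function name, the parameter names (f_mm, pitch_mm, cx, cy), and the unit conventions are illustrative assumptions, not values taken from the actual sensor calibration.

    import numpy as np

    def depth_to_point_cloud(depth_mm, f_mm, pitch_mm, cx, cy):
        """Convert a depth image (z in mm per pixel) into sensor-referenced
        (x, y, z) point cloud data by triangulation / the pinhole model:
        x = (u - cx) * p * z / f, y = -(v - cy) * p * z / f, z as measured.
        The image row index v grows downward, so y is negated to keep up positive."""
        h, w = depth_mm.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        z = depth_mm.astype(np.float64)
        x = (u - cx) * pitch_mm * z / f_mm
        y = -(v - cy) * pitch_mm * z / f_mm
        pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
        return pts[pts[:, 2] > 0]  # drop pixels with no valid distance reading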
 FIG. 6 shows an example of three-dimensional coordinate data in Embodiment 1. As described above, facing the traveling direction, the left-right direction is the x-axis, the up-down direction is the y-axis, and the front-back direction is the z-axis.
 Note that, besides the coordinate system above, the three-dimensional coordinate calculation unit 31 can convert a distance image into three-dimensional coordinate data in various other coordinate systems, for example a coordinate system rotated about at least one of the x-, y-, and z-axes, or one with a different origin.
 ・Projection image generation unit 32
 The projection image generation unit 32 generates a projection image by projecting the three-dimensional coordinate data onto a two-dimensional surface (plane). For example, the image projected onto the xy plane is obtained by extracting the x- and y-coordinates of every point. Similarly, the image projected onto the yz plane extracts the y- and z-coordinates of every point, and the image projected onto the zx plane extracts the z- and x-coordinates of every point.
 Here, the projection image is a binary image of white (= "1") / black (= "0"), and its height and width can be chosen freely according to the scale used for projection. For example, suppose the conditions for the image projected onto the yz plane are: a y-axis projection range of -1200 mm to +1200 mm (offset 1200 mm), a z-axis projection range of 0 mm to +3200 mm (offset 0 mm), and a projection scale of 1/10 [pixel/mm]. In this case, the size of the projection image is as follows.
 Height (y-axis): (1200 - (-1200)) / 10 = 240
 Width (z-axis): (3200 - 0) / 10 = 320
 That is, the image is 320 pixels wide by 240 pixels high.
 This projection image size can be changed freely by changing the scale as above, independently of the image size of the original distance image. Enlarging the projection image makes the projection resolution finer and thus improves the accuracy of the calculation, but correspondingly lengthens the computation time. The image size actually used is determined by this trade-off.
 An actual projection image is created as follows. As an example, consider projecting one datum of the three-dimensional coordinate data, point A (x, y, z) = (500 mm, -300 mm, 2000 mm), onto the yz plane.
 First, the pixel values of all points in the image to be projected onto the yz plane are initialized to "0".
 Next, extracting the y- and z-coordinates of point A gives (y, z) = (-300 mm, 2000 mm). Converting these coordinates for projection onto the yz plane, taking the above scale and projection ranges into account:
 Vertical (y-axis): (-300 + 1200) / 10 = 90
 Horizontal (z-axis): (2000 + 0) / 10 = 200
 Point A therefore corresponds to the point (y, z) = (90, 200) in the projection image, so the pixel value at (y, z) = (90, 200) in the yz-plane projection image is changed from "0" to "1".
 Next, suppose another datum, point B (x, y, z) = (-400 mm, 900 mm, 1500 mm), exists. Projecting this datum onto the yz plane in the same way yields the point (y, z) = (210, 150), so the pixel value at that point is likewise changed from "0" to "1".
 Suppose further that yet another datum, point C (x, y, z) = (-200 mm, -303 mm, 1998 mm), exists. Projected onto the yz plane in the same way, it corresponds to the point (y, z) = (90, 200). This is the same as the projected coordinates of point A, and the pixel value there is already "1", so nothing is done here. This means that two distinct points A and C in three-dimensional space are projected onto the same point in the yz plane.
 The same procedure is applied to every point in the three-dimensional coordinate data: its y- and z-coordinate values are extracted, converted into yz-plane coordinates, and the pixel value at the corresponding coordinates is set to "1". The result is a yz-plane projection image in the form of a binary image in which only the locations where points of the three-dimensional coordinate data exist are "1".
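 Below is a minimal sketch of this projection procedure under the example conditions above (y range -1200 to +1200 mm, z range 0 to +3200 mm, scale 1/10 pixel/mm); the function and parameter names are illustrative.

    import numpy as np

    def project_to_yz(points, y_range=(-1200, 1200), z_range=(0, 3200), scale=0.1):
        """Project Nx3 (x, y, z) points [mm] onto the yz plane as a binary image.
        scale is in pixel/mm, so 0.1 means one pixel per 10 mm."""
        h = int((y_range[1] - y_range[0]) * scale)  # 240 rows
        w = int((z_range[1] - z_range[0]) * scale)  # 320 columns
        img = np.zeros((h, w), dtype=np.uint8)      # initialize all pixels to "0"
        v = ((points[:, 1] - y_range[0]) * scale).astype(int)  # row index from y
        u = ((points[:, 2] - z_range[0]) * scale).astype(int)  # column index from z
        ok = (v >= 0) & (v < h) & (u >= 0) & (u < w)  # keep only in-range points
        img[v[ok], u[ok]] = 1   # duplicate hits, like A and C, simply stay "1"
        return img

 Applied to points A, B, and C above, the resulting image has exactly two pixels set to "1", since A and C project to the same pixel.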
 ・Straight line detection unit 33
 The straight line detection unit 33 detects straight lines in the projection image generated by the projection image generation unit 32. As an example, consider detecting the straight line representing the floor in the projection image shown in FIG. 7.
 FIG. 7 is the projection image obtained by projecting the three-dimensional coordinate data of FIG. 6 onto the yz plane. As shown in FIG. 7, the yz-plane projection image extracts the y- and z-coordinates of every point. When no object exists below the floor, comparing points that share the same z-coordinate shows that the point representing the floor is the lowest, i.e., the point with the smallest y-coordinate (hereinafter called the bottom point). Furthermore, it can be seen that the plane representing the floor appears as a straight line in the projection image.
 As described above, the distance image sensor 20 is not mounted tilted in the direction of rotation about the z-axis, so the tilt between the floor and the distance image sensor 20 in that direction is sufficiently small compared with the tilt θ in the direction of rotation about the x-axis. As a result, when the three-dimensional coordinate data representing the floor is projected onto the yz plane, the points are distributed almost exactly along a single straight line.
 Since the bottom points indicate the floor, the straight line detection unit 33 scans each pixel column of the projection image from bottom to top, keeps only the first point found along the scan direction, i.e., the first point whose value is "1", and deletes the other points, thereby obtaining a bottom image.
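 A sketch of this bottom-image operation is shown below, assuming the binary projection image of the previous sketch, where the row index increases with y; the details are illustrative.

    import numpy as np

    def bottom_image(proj):
        """For each pixel column of a binary projection image, keep only the
        first "1" found when scanning from the bottom (smallest y, i.e. the
        smallest row index here) and delete all other points in that column."""
        out = np.zeros_like(proj)
        for u in range(proj.shape[1]):
            rows = np.flatnonzero(proj[:, u])  # row indices of "1" pixels
            if rows.size:
                out[rows[0], u] = 1            # lowest point = bottom point
        return out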
 Next, the straight line detection unit 33 applies a line-detecting fitting process to the obtained bottom image and obtains parameters such as the slope and intercept of the line. Depending on the result of line detection, several candidate lines may be obtained rather than one; in that case, the most plausible line is selected according to a predetermined criterion.
 Any line detection method (fitting process) can be applied, such as the Hough transform, its refinement the probabilistic Hough transform, the simple least-squares method, or the RANSAC method. As for the "predetermined criterion" above: with the least-squares or RANSAC method, for example, the line with the smallest fitting error (residual) can be selected as the most plausible line; with the Hough transform, the line supported by the largest number of points can be selected as the most plausible one. With these processes, when several candidate lines exist, the most plausible line can be extracted by each method.
 In this way, information on the straight line representing the floor in the yz-plane projection image can be obtained.
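 As one concrete example of such a fitting process, the sketch below fits a line to the bottom points with a simple RANSAC loop, one of the methods named above; the iteration count and inlier tolerance are arbitrary illustration values, and a probabilistic Hough transform (e.g., OpenCV's HoughLinesP) could be substituted.

    import numpy as np

    def fit_line_ransac(bottom, iters=200, tol=2.0, seed=0):
        """Fit v = a*u + b (u: z index, v: y index) to the "1" pixels of a
        bottom image by RANSAC: repeatedly fit a line through two random
        points, keep the candidate supported by the most inliers, then
        refine it by least squares over those inliers."""
        v, u = np.nonzero(bottom)
        rng = np.random.default_rng(seed)
        best_mask, best_count = None, 0
        for _ in range(iters):
            i, j = rng.choice(u.size, size=2, replace=False)
            if u[i] == u[j]:
                continue  # the pair defines a vertical line; skip it
            a = (v[j] - v[i]) / (u[j] - u[i])
            b = v[i] - a * u[i]
            mask = np.abs(v - (a * u + b)) < tol
            if mask.sum() > best_count:
                best_count, best_mask = mask.sum(), mask
        a, b = np.polyfit(u[best_mask], v[best_mask], 1)  # least-squares refit
        return a, b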
 ・Plane detection unit 34
 The plane detection unit 34 detects the plane in the three-dimensional coordinate data from the straight line detected by the straight line detection unit 33, and calculates the height and angle of the plane. Besides geometric information such as height and angle, the detected plane may also be expressed through parameters in other forms, for example the plane equation ax + by + cz + d = 0.
 The floor detected as a plane in FIGS. 6 and 7 has a height and angle that can be predicted from the height and angle at which the distance image sensor 20 is mounted. Therefore, by presetting allowable ranges for the detected height and angle and judging whether the plane detected by the plane detection unit 34 lies within those ranges, the floor can be detected more reliably.
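 To make this judgment concrete, the sketch below converts a line fitted in the yz projection image (slope and intercept in pixel units) back into a plane angle and a sensor height and checks them against allowable ranges. The conversion assumes the projection conditions of the earlier sketches, and the reference values and tolerances are merely illustrative.

    import math

    def line_to_plane_params(a, b_px, scale=0.1, y_min=-1200.0):
        """From the fitted line v = a*u + b_px in the yz projection image
        (equal scale on both axes, z offset 0), recover the tilt angle of
        the floor line relative to the z-axis and the perpendicular
        distance from the sensor origin to that line (the sensor height
        above the floor)."""
        b_mm = b_px / scale + y_min              # y-intercept in mm at z = 0
        theta = abs(math.degrees(math.atan(a)))  # depth-direction tilt angle
        height = abs(b_mm) / math.hypot(1.0, a)  # origin-to-line distance
        return theta, height

    def within_floor_range(theta, height, theta_ref=22.5, h_ref=710.0,
                           tol_deg=2.0, tol_mm=5.0):
        """Accept the plane as the floor only if its angle and height fall
        within preset tolerances around the mounting values (example
        figures matching the long-range sensor described in the text)."""
        return abs(theta - theta_ref) <= tol_deg and abs(height - h_ref) <= tol_mm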
 The plane detection unit 34 also retains the detected floor as floor plane information. The floor plane information is updated whenever the plane detection unit 34 detects the floor. In this way, the robot can follow changes in the floor plane that occur as the cleaning robot 1 moves and always keep track of the floor plane. Moreover, even if the floor temporarily cannot be detected, for example because a person crosses in front of the distance image sensor, using the previously detected floor plane information prevents gaps in the floor plane detection process.
 ・Obstacle/step detection unit 35
 The obstacle/step detection unit 35 converts the three-dimensional coordinate data from the xyz coordinate system into the XYZ coordinate system. Then, in the XYZ coordinate system, it calculates the distance between each point and the plane and judges whether the point lies above or below the detected plane.
 ・Data integration unit 36
 The data integration unit 36 integrates the obstacles and steps detected from the plural distance images into a single set of obstacle/step data. In this embodiment, obstacles and steps are detected from the distance images acquired from each of the distance image sensors 20a and 20b, so the obstacle and step information is merged into one set of obstacle/step data. In doing so, if, for example, another obstacle B detected by the short-range distance image sensor 20a lies in front of an obstacle A detected by the long-range distance image sensor 20b, the data can be integrated so that the data of the nearer obstacle B takes priority.
 The format of the obstacle/step data can be converted into any format that is convenient for subsequent processing by the travel control unit 41. The coordinate system of the data can be output as the robot-referenced XYZ coordinate system as-is, or converted into a polar coordinate system (R-θ coordinate system). Besides converting all detected obstacles and steps into data, other approaches are conceivable, such as thinning out or interpolating the data, or extracting only the obstacle and step data closest to the body of the cleaning robot 1.
 (Processing of the arithmetic device 30)
 The processing flow of the arithmetic device 30 is summarized below. FIG. 8 is a flowchart showing the procedure executed by the arithmetic device 30 of the cleaning robot 1 according to Embodiment 1 of the present invention.
 First, the three-dimensional coordinate calculation unit 31 acquires the distance images generated by the distance image sensor 20 (step S101). In Embodiment 1, the two distance image sensors 20, namely the short-range distance image sensor 20a and the long-range distance image sensor 20b, are mounted, so a short-range distance image and a long-range distance image are acquired from the respective sensors.
 Next, the three-dimensional coordinate calculation unit 31 converts the acquired distance images into three-dimensional coordinate data in the xyz coordinate system (step S102). From the converted three-dimensional coordinate data, the projection image generation unit 32 generates a projection image projected onto the yz plane (step S103). As described above, in Embodiment 1 the distance image sensor 20 is not mounted tilted in the direction of rotation about the z-axis, so the tilt between the floor and the sensor in that direction is sufficiently small compared with the tilt in the direction of rotation about the x-axis. Consequently, the plane representing the floor in the three-dimensional coordinate data becomes a group of points lying on a single straight line in the yz-plane projection image. Actual projection images are shown in FIG. 9.
 (a) in FIG. 9 is the yz-plane projection image generated from a distance image captured by the long-range distance image sensor 20b according to Embodiment 1 of the present invention; (b) in FIG. 9 is the bottom image of (a); (c) in FIG. 9 is the yz-plane projection image generated from a distance image captured by the short-range distance image sensor 20a; and (d) in FIG. 9 is the bottom image of (c). In (a) and (c) of FIG. 9, it can be seen that, as described above, the point groups 61 and 62 representing the floor form straight lines.
 Next, the straight line detection unit 33 generates a bottom image from each projection image (step S104). As shown in (b) and (d) of FIG. 9, the straight line in each bottom image coincides with the point group 61 and the point group 62 representing the floor. From the bottom image, the straight line detection unit 33 detects a straight line (step S105). The plane detection unit 34 then detects, from the detected line, the plane in the three-dimensional coordinate data and calculates the angle and height of that plane (step S106). Here, the angle of the plane is its angle relative to the z-axis (tilt angle), and the height of the plane is the separation between the floor and the distance image sensor 20.
 The plane detection unit 34 then judges whether the calculated angle and height of the plane lie within preset allowable ranges (step S107).
 If it is judged in step S107 that the angle and height are within the allowable ranges (step S107: Yes), the detected plane is the floor, so the plane detection unit 34 updates the floor plane information (step S108). If, on the other hand, it is judged in step S107 that the angle and height are not within the allowable ranges (step S107: No), the detected plane is not the floor, so the floor plane information is not updated (step S109).
 After step S108 or step S109, the obstacle/step detection unit 35 converts the three-dimensional coordinate data from the xyz coordinate system into the XYZ coordinate system (step S110). Next, from the converted three-dimensional coordinate data in the XYZ coordinate system, the obstacle/step detection unit 35 calculates the distance between each point and the plane, judges whether the point is above or below the detected plane, and thereby detects obstacles and steps (step S111). For this judgment, a threshold t is set: a point whose distance from the floor plane is greater than t is judged to be an obstacle or step higher than the floor, and a point whose distance is less than -t is judged to be a step lower than the floor plane. The threshold t is set in advance, taking into account the roughness of the floor plane, the measurement error of the distance image sensor, and so on. In this way, every point in the three-dimensional coordinate data is judged to belong to a step, to an obstacle, or to neither. Then, information F indicating whether the point belongs to a step, to an obstacle, or to neither is appended to the coordinates (X, Y, Z) of each point, converting the data into the form (X, Y, Z, F). The information on steps and obstacles obtained in this way is passed from the obstacle/step detection unit 35 to the data integration unit 36.
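 A minimal sketch of this classification step is given below; it assumes the xyz-to-XYZ conversion has already been applied so that the floor plane is Y = 0, and the threshold value and flag encoding are illustrative assumptions.

    import numpy as np

    FLOOR, OBSTACLE, STEP = 0, 1, 2  # illustrative codes for the flag F

    def classify_points(points_XYZ, t=20.0):
        """Label each (X, Y, Z) point [mm] against the floor plane Y = 0:
        Y > t -> obstacle (or a step higher than the floor),
        Y < -t -> step below the floor, otherwise -> floor.
        Returns the data in (X, Y, Z, F) form."""
        Y = points_XYZ[:, 1]
        F = np.full(len(Y), FLOOR)
        F[Y > t] = OBSTACLE
        F[Y < -t] = STEP
        return np.column_stack([points_XYZ, F])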
 The data integration unit 36 then integrates the obstacle and step information detected by the obstacle/step detection unit 35 to create the obstacle/step data (step S112). Finally, the data integration unit 36 outputs the obstacle/step data to the travel control unit 41 (step S113).
 By this procedure, the arithmetic device 30 creates obstacle/step data from the distance images quickly and more reliably and outputs it to the travel control unit 41. The cleaning robot 1 can therefore move while avoiding obstacles and steps. Furthermore, by performing plane detection independently for each of the plural distance sensors and integrating their data, it can detect obstacles and steps over an even wider range and move while avoiding them.
 In general, specifying one plane in three-dimensional space requires specifying three parameters. In this embodiment, however, exploiting the condition that the distance image sensor 20 is not mounted tilted relative to the floor in the direction of rotation about the z-axis, only the height (y-axis direction) and the tilt in the depth direction (z-axis direction) are determined. That is, since the number of parameters to be specified is limited to two, plane detection is faster than when three parameters must be specified, and real-time floor detection can easily be achieved even on an autonomous locomotion device.
 (Detection of steps)
 In Embodiment 1 described above, it is preferable that no object exists below the floor; in practice, however, a step whose surface is lower than the floor may exist. A method of detecting the floor in such a case is described below.
 (a) in FIG. 10 is the yz-plane projection image when a step lower than the floor exists, and (b) in FIG. 10 is the bottom image of (a). As shown in (a) of FIG. 10, in the yz-plane projection image, the point group 61 representing the floor and the point group 63 representing the step each form a straight line. In such a case, as shown in (b) of FIG. 10, the range in which the floor is expected to exist is restricted in advance based on the mounting height and angle of the distance image sensor 20, and that range is set as the allowable range 64 for line detection. By setting the allowable range 64, the straight line detection unit 33 can detect the point group 61 representing the floor, rather than the point group 63 representing the step, as the straight line.
 For example, suppose the long-range distance image sensor 20b is mounted at height H = 710 [mm] and angle θ = 22.5 [deg]. The theoretical floor then lies, in the sensor-referenced coordinate system, at a distance of 710 [mm] from the origin, at an angle to the zx plane (tilt angle in the depth direction) of 22.5 [deg]. However, since the mounting position varies somewhat due to assembly errors and the like, it is checked in step S106 whether the calculated height and angle of the floor are within a few mm and a few deg, plus or minus, of the above values. If they are within range, this line is detected as the floor; if not, another line is selected from the plural lines detected in step S105 and checked in the same way for whether it is within range.
 If the floor is detected in this way, with the height and angle of the detected plane restricted, then even if the distance image contains a step lower than the floor, that step can be excluded and the floor detected.
 [Embodiment 2]
 A second embodiment of the present invention is described with reference to FIGS. 11 and 12. For convenience of description, members having the same functions as those described in the above embodiment are given the same reference numerals, and their description is omitted.
 Embodiment 1 dealt with the case in which, of the tilts between the floor and the distance image sensor 20, the tilt in the direction of rotation about the z-axis is sufficiently small compared with the tilt in the direction of rotation about the x-axis to be ignored. Embodiment 2, by contrast, describes plane detection in the case where the tilt in the direction of rotation about the z-axis is smaller than the tilt about the x-axis but cannot be completely ignored.
 (Processing of the arithmetic device 30)
 FIG. 11 is a flowchart showing the procedure executed by the arithmetic device 30 provided in the cleaning robot according to Embodiment 2. Description of the steps that are the same as in the flowchart of FIG. 8 is omitted. Also, (a) in FIG. 12 shows an example of the three-dimensional coordinate data according to Embodiment 2; (b) in FIG. 12 is the image obtained by projecting the three-dimensional coordinate data of (a) onto the yz plane; (c) in FIG. 12 is the image obtained by projecting the same data onto the xy plane; and (d) in FIG. 12 is the image obtained by projecting the same data onto the xy' plane.
 In FIG. 11, the processing of step S106 calculates the angle θ of the detected plane in the direction of rotation about the x-axis, and its height. However, as shown in (b) of FIG. 12, the point group obtained in the yz-plane projection image does not lie exactly on one straight line but is distributed in a band of finite width.
 Accordingly, an xy'z' coordinate system is newly defined by rotating the xyz coordinate system about the x-axis by an angle corresponding to the tilt θ in the direction of rotation about the x-axis. For example, if the floor exists in the xyz coordinate system at a distance of 710 [mm] from the origin, at an angle to the zx plane (tilt angle in the depth direction) of 22.5 [deg], the y'- and z'-axes are obtained by rotating the y- and z-axes by 22.5 [deg] about the x-axis. The three-dimensional coordinate calculation unit 31 thus converts the three-dimensional coordinate data from the xyz coordinate system into the newly defined xy'z' coordinate system (step S121).
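 A sketch of this axis rotation is shown below; the sign convention of the rotation (which way a positive θ tilts y toward z) is an assumption for illustration and must be matched to the actual sensor mounting.

    import numpy as np

    def rotate_about_x(points, theta_deg):
        """Rotate Nx3 (x, y, z) points about the x-axis by theta_deg,
        yielding (x, y', z') coordinates in which the floor is roughly
        parallel to the z'-axis."""
        t = np.radians(theta_deg)
        R = np.array([[1.0, 0.0, 0.0],
                      [0.0, np.cos(t), -np.sin(t)],
                      [0.0, np.sin(t),  np.cos(t)]])
        return points @ R.T

    # e.g. pts_prime = rotate_about_x(pts, 22.5) for the long-range sensor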
 Next, the projection image generation unit 32 projects the converted three-dimensional coordinate data onto the xy' plane to generate a projection image (step S122). The straight line detection unit 33 then generates the bottom image of the projection image (step S123) and detects a straight line from the bottom image (step S124). From the line detected by the straight line detection unit 33, the plane detection unit 34 can obtain the angle in the left-right direction (step S125) and, combining it with the tilt θ about the x-axis obtained in step S106, judges whether the angle and height are within the set ranges (step S107). The rest is the same as in Embodiment 1.
 By this procedure, even when the tilt in the direction of rotation about the z-axis cannot be ignored, the arithmetic device 30 first calculates, as the first stage, the tilt θ in the direction of rotation about the x-axis in the xyz coordinate system. It then converts the coordinate system to xy'z' and, as the second stage, calculates the angle in the left-right direction from the projection image onto the xy' plane. This two-stage processing enables the floor to be detected more reliably.
 Note that if the projection image generation unit 32 generates the projection onto the xy plane in the original xyz coordinate system, without the coordinate conversion of step S121, a projection image like (c) in FIG. 12 is obtained. As shown in (c) of FIG. 12, the point group representing the floor in this image does not lie on a single straight line. Therefore, if, for example, another object exists on the line A-A' of (c) in FIG. 12 so that the floor there is hidden, extracting a straight line from the set of bottom points would erroneously detect the A-A' line as the straight line representing the floor. In the xy'z' coordinate system, by contrast, obtained by first rotating about the x-axis by the tilt θ, the z'-axis is almost parallel to the floor, so in the projection onto the xy' plane the point group representing the floor forms the bottom points and can be extracted, as shown in (d) of FIG. 12.
 [Summary of Embodiments 1 and 2]
 A plane detection device according to one aspect of the present invention is a plane detection device (plane detection device 60) that detects a specific detection target plane from distance image data of a subject containing that plane, and comprises: three-dimensional coordinate calculation means (three-dimensional coordinate calculation unit 31) for converting the distance image data into three-dimensional coordinate data containing a detection target three-dimensional point group representing the specific detection target plane; projection image generation means (projection image generation unit 32) for projecting the three-dimensional coordinate data onto a predetermined two-dimensional plane to generate projection image data in which the detection target three-dimensional point group is distributed linearly; straight line detection means (straight line detection unit 33) for detecting the resulting straight line from the projection image data; and plane parameter calculation means (plane detection unit 34) for calculating, based on the detection result of the straight line detection means, plane parameters including information on the inclination of the specific detection target plane.
 With the above configuration, distance image data containing a plane is converted into three-dimensional coordinate data, the converted data is projected onto a plane, a straight line is detected from the projected image, and the plane parameters are obtained from that straight line. The distance image therefore needs no special cue for plane detection, and even when it contains much information unrelated to the plane, such as obstacles, the plane can be detected reliably.
 Further, in the plane detection device (plane detection device 60) according to one aspect of the present invention, with an xyz coordinate system in which the depth direction of the subject in the distance image data is the z axis and the x and y axes are perpendicular to the z axis, the left-right direction of the distance image data being the x axis and the up-down direction the y axis, the projection image generation means (projection image generation unit 32) generates projection image data obtained by projecting the three-dimensional coordinate data onto the yz plane, and the plane parameter calculation means (plane detection unit 34) calculates from the projection image data the plane parameters, including the inclination angle of the specific detection target plane with respect to the z axis.
 With this configuration, the plane can be detected from the projection image data projected onto the yz plane, so the plane can be detected faster.
 Further, the plane parameter calculation means according to one aspect of the present invention determines whether the detected straight line lies within a predetermined range.
 With this configuration, setting the range of acceptable straight lines in advance allows the target plane to be detected more reliably.
 Further, the plane detection device (plane detection device 60) according to one aspect of the present invention further comprises: second three-dimensional coordinate calculation means (three-dimensional coordinate calculation unit 31) for converting the xyz coordinate system into an xy'z' coordinate system by rotation about the x axis and generating second three-dimensional coordinate data containing the detection target three-dimensional point group representing the specific detection target plane in the xy'z' coordinate system; second projection image generation means (projection image generation unit 32) for projecting the second three-dimensional coordinate data onto the xy' plane to generate second projection image data in which that point group is distributed linearly; second straight line detection means (straight line detection unit 33) for detecting the linear second straight line from the second projection image data; and second plane parameter calculation means (plane detection unit 34) for calculating, based on the detection result of the second straight line detection means, second plane parameters including information on the inclination of the specific detection target plane.
 With this configuration, the target plane can be detected still more reliably.
 Further, an autonomous mobile device (cleaning robot 1) according to one aspect of the present invention comprises the plane detection device (plane detection device 60), distance image generation means (distance image sensor 20) for generating the distance image data, and traveling means (drive wheels 2), and uses the plane detection device to detect the plane that serves as its travel path.
 With this configuration, the autonomous mobile device achieves the same effects as the plane detection device.
 Further, an autonomous mobile device (cleaning robot 1) according to one aspect of the present invention comprises a plurality of the distance image generation means (distance image sensors 20) and calculates the plane parameters from the distance image data generated by each of them.
 With this configuration, the plane can be detected more reliably over a wider range.
 (About the distance image sensor 20)
 In the embodiments described above, the distance image sensor 20 uses the infrared projection method, but distance image sensors of other types, such as the stereo method or the TOF method, can also be used. In the stereo method, parallax is computed from the left and right images obtained by a stereo-arranged pair of cameras, for example by corresponding-point search, and the distance to an object is obtained from the parallax value by the principle of triangulation. Plane detection can then be realized on the resulting distance image by the same processing as in the embodiments described above.
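 As a brief sketch of that triangulation step (the standard pinhole-stereo relation Z = f·B/d; the symbols and function name are illustrative, not from the patent):

```python
def stereo_distance(disparity_px: float, baseline_m: float, focal_px: float) -> float:
    """Pinhole-stereo triangulation: distance Z = f * B / d, where d is the
    disparity in pixels, B the camera baseline, f the focal length in pixels."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Example: 20 px disparity with a 0.2 m baseline and 600 px focal length -> 6 m.
print(stereo_distance(20.0, 0.20, 600.0))
```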
 (Detectable planes)
 In the embodiments described above, the floor surface was detected, but the device can equally be used to detect other planes such as a road surface, a water surface, a wall surface, or a ceiling surface.
 (Autonomous mobile devices)
 In the embodiments described above, the cleaning robot 1 was described as the autonomous mobile device, but the invention is also applicable to other autonomous mobile devices, for example automated guided vehicles in factories, self-driving vehicles, nursing-care robots, security robots, disaster-rescue robots, and entertainment robots.
 (Plane detection device)
 In the embodiments described above, the plane detection device 60 was incorporated into the cleaning robot 1, but the plane detection device 60 may also be used as an independent device for industrial, consumer, or other applications, or be incorporated into part of a general-purpose portable information terminal or the like.
 (Implementation in software)
 Finally, each block of the plane detection device may be realized in hardware by logic circuits formed on an integrated circuit (IC chip), or in software using a CPU (Central Processing Unit).
 In the latter case, the plane detection device comprises a CPU that executes the instructions of the programs realizing each function, a ROM (Read Only Memory) storing the programs, a RAM (Random Access Memory) into which the programs are expanded, and a storage device (recording medium) such as a memory storing the programs and various data. The object of the present invention can then also be achieved by supplying the plane detection device with a recording medium on which the program code (executable program, intermediate code program, or source program) of its control program, the software realizing the functions described above, is recorded in computer-readable form, and having the computer (or a CPU or MPU) read and execute the program code recorded on the recording medium.
 Examples of the recording medium include non-transitory tangible media: tapes such as magnetic tape and cassette tape; disks including magnetic disks such as floppy (registered trademark) disks and hard disks, and optical discs such as CD-ROM/MO/MD/DVD/CD-R; cards such as IC cards (including memory cards) and optical cards; semiconductor memories such as mask ROM, EPROM, EEPROM (registered trademark), and flash ROM; and logic circuits such as PLDs (Programmable Logic Devices) and FPGAs (Field Programmable Gate Arrays).
 The plane detection device may also be configured to be connectable to a communication network, and the program code may be supplied via that network. The communication network is not particularly limited as long as it can transmit the program code; for example, the Internet, an intranet, an extranet, a LAN, ISDN, a VAN, a CATV communication network, a virtual private network, a telephone network, a mobile communication network, or a satellite communication network can be used. The transmission medium constituting the communication network is likewise not limited to a specific configuration or type, as long as it can transmit the program code: wired media such as IEEE 1394, USB, power-line carrier, cable TV lines, telephone lines, and ADSL (Asymmetric Digital Subscriber Line) lines can be used, as can wireless media such as infrared (IrDA, remote control), Bluetooth (registered trademark), IEEE 802.11 wireless, HDR (High Data Rate), NFC (Near Field Communication), DLNA (registered trademark) (Digital Living Network Alliance), mobile telephone networks, satellite links, and terrestrial digital networks. The present invention can also be realized in the form of a computer data signal embedded in a carrier wave, in which the program code is embodied by electronic transmission.
 [Embodiment 3]
 Embodiment 3, one form of the road surface step detection method according to the present invention, is described below.
 The obstacle detection device of Patent Document 1 mentioned above recognizes the road surface by extracting white lines such as lane markings. When it is applied to a vehicle that travels on roads without lane markings, such as a senior car (mobility scooter) or an electric wheelchair, it cannot recognize the road surface correctly, and detecting obstacles therefore becomes difficult.
 Embodiment 3 therefore solves this problem; it describes a road surface step detection device, a road surface step detection method, and a vehicle that can detect the height difference of a step relative to the road surface from a plurality of image data obtained by imaging the road surface with a stereo camera, even when the road surface itself is difficult to identify.
 In the road surface step detection method of Embodiment 3 (the first road surface step detection method), at least a first image and a second image obtained by stereo imaging of the road surface are projected onto XY plane coordinates; a detection region centered on specific coordinates (X, Y) is set on those plane coordinates; the parallax v1 that the image of the detection region would have if it lay at the road surface position is calculated; a comparison region centered on the coordinates (X - v1, Y), obtained by subtracting the parallax v1, is set in the second image; and the image of the detection region is compared with the image of the comparison region to detect the height of the detection region relative to the road surface.
 In addition to the above, the road surface step detection method of Embodiment 3 obtains, from the heights of a plurality of detection regions, the height difference between neighboring detection regions, and determines that there is a step between detection regions when the difference is equal to or greater than a threshold.
 In addition, the road surface step detection method of Embodiment 3 obtains, from the heights of a plurality of detection regions, the height difference between neighboring detection regions, and determines that there is a slope between detection regions when the height difference changes continuously across a plurality of detection regions.
 The road surface step detection device of Embodiment 3 (the first road surface step detection device) comprises at least a first camera and a second camera that stereo-image the road surface, and a height calculation unit that projects the first image captured by the first camera and the second image captured by the second camera onto XY plane coordinates, sets a detection region centered on specific coordinates (X, Y) on those plane coordinates, calculates the parallax v1 that the image of the detection region would have if it lay at the road surface position, sets a comparison region in the second image centered on the coordinates (X - v1, Y) obtained by subtracting the parallax v1, and compares the image of the detection region with the image of the comparison region to detect the height of the detection region relative to the road surface.
 A vehicle of Embodiment 3 comprises the above road surface step detection device.
 FIG. 13 shows the configuration of the road surface step detection device 1001 of Embodiment 3. The road surface step detection device 1001 comprises two cameras 1011 and 1012 that capture a stereo image and a calculation unit 1020 that processes the stereo image. The calculation unit 1020 consists of a height detection unit 1030, which calculates from the stereo image the height of each region in which a step is to be detected, and a step detection unit 1040, which judges from the height of each detection region whether there is a step between detection regions. An output device 1050, such as a loudspeaker or a display, is further provided to notify the operator of the presence or absence of a step as required.
 FIG. 14(a) is a top view showing the arrangement of the stereo camera, and FIG. 14(b) is a side view showing its mounting position. The two cameras 1011 and 1012 have identical specifications and a given horizontal angle of view, as shown in FIG. 14(a), and are installed a given distance g apart, left and right, for example at the front of a senior car or electric wheelchair. As shown in FIG. 14(b), the cameras 1011 and 1012 are installed at a given height hc above the road surface, have a given vertical angle of view and depression angle, and are tilted downward so that the optical axes of the lenses image the road surface.
 If the horizontal and vertical angles of view of the cameras 1011 and 1012 are too large, steps in the imaged road surface appear small and step detection accuracy drops; conversely, if they are too small, the step detection range narrows. They must therefore be set appropriately for the conditions of use. The depression angle is preferably set so that the road surface occupies a large share of the image. The mounting height hc of the cameras 1011 and 1012 is preferably as high as possible, to widen the range of detectable steps from small to large.
 As to the specifications and arrangement of the cameras 1011 and 1012, for example, they have horizontal and vertical angles of view equivalent to a 35 mm format lens, a mounting interval g of 15 to 25 cm, and a mounting height hc of 60 to 80 cm, and are installed at a depression angle of 10 to 25°. In the remainder of the description of Embodiment 3, half the horizontal angle of view is θ1, half the vertical angle of view is θ2, and the depression angle is θ3. An embodiment with the cameras 1011 and 1012 installed left and right, as shown in FIG. 14, is described, but they can also be installed vertically or diagonally, and the basic detection method is the same in those cases.
 FIG. 15 shows the two images from the stereo camera, capturing a road surface that includes a sidewalk. FIG. 15(a) is the first image, captured by the left camera 1011, and FIG. 15(b) is the second image, captured by the right camera 1012. FIG. 15(c) superimposes the first and second images and extracts only the boundary line between the sidewalk and the road surface. As FIG. 15(c) shows, the boundary line appears at shifted positions in the left and right images.
 This left-right shift is the parallax; on a flat road surface, the parallax decreases at a constant rate from the near side toward the far side. Embodiment 3 detects the height of a detection region relative to the road surface by comparing the parallax v1 for such a flat road surface with the actual parallax v2 observed for the step detection region.
 FIG. 16 is a flowchart of the height detection processing in the calculation unit 1020. For a detection region centered on arbitrary coordinates (X, Y) in the first image, the calculation unit 1020 obtains, from the Y coordinate and the camera position information, the parallax v1 that the region would have if it lay on the road surface, sets a comparison region in the second image centered on the coordinates (X - v1, Y) shifted by the parallax v1, and obtains the height of the detection region relative to the road surface from the parallax v2 between the detection region and the comparison region.
 In the processing of steps S1 to S3 shown in FIG. 16, the parallax v1 relative to the second image is obtained for the detection region centered on arbitrary coordinates (X, Y) of the first image, from its Y coordinate value and the camera position information, under the assumption that the region lies on the road surface.
 To obtain the parallax v1, the distance d1 from the camera to the focal plane of the image must first be calculated. FIG. 17 illustrates the distance calculation. FIG. 17(a) converts the first image into a coordinate space 13 whose origin P is the coordinate center (0, 0), extending ±w pixels horizontally and ±h pixels vertically, and shows the coordinate point (X, Y) of the step detection region in that space. FIG. 17(b) is a side view showing the focal plane A1 of the cameras 1011 and 1012 and the camera position information.
 In step S1, the coordinate point (X, Y) of an arbitrary detection region in which a step is to be detected is selected in the coordinate space 13. The coordinate space 13 corresponds to the focal plane, the plane perpendicular to the optical axis of the camera 1011 shown in FIG. 17(b). If the lenses of the cameras 1011 and 1012 are free of distortion, everything on the focal plane has the same parallax, so the coordinate point (X, Y) in the coordinate space 13 has the same parallax as the origin P.
 In step S2, the distance d1 from the camera to the origin P of the focal plane A1 is obtained under the assumption that the coordinate point (X, Y) lies on the road surface. Calculating d1 directly from the coordinate point (X, Y) would be complicated, so, using the fact that all coordinate points on the focal plane have the same parallax, the distance d1 to the origin P of the focal plane A1 is calculated from the coordinate point Q(0, Y), which lies on the same focal plane A1 as (X, Y).
 FIG. 18 illustrates the parallax calculation. FIG. 18(a) is a side view of the camera 1011 and shows the downward angle θy at which the coordinate point Q is seen from the camera 1011. With half the vertical angle of view of the camera 1011 denoted θ2, and the height of the coordinate point Q in FIG. 17(a) being Y, θy is obtained from Formula 1 below.
 θy = arctan(tanθ2 × Y/h)    (Formula 1)
 Also, as shown in FIG. 17(b), with the camera mounted at height hc and tilted downward at the depression angle θ3, the distance d1' from the camera 1011 to the coordinate point Q is obtained from
 d1' = hc / sin(θ3 + θy)    (Formula 2)
 The distance d1 from the camera 1011 to the origin P of the focal plane A1 is therefore obtained from
 d1 = d1' × cos(θy)    (Formula 3)
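 Formulas 1 to 3 chain together directly; the following Python sketch mirrors them (parameter names are illustrative assumptions):

```python
import math

def distance_to_focal_plane(Y: float, h: float, theta2_deg: float,
                            theta3_deg: float, hc: float) -> float:
    """Distance d1 from the camera to the origin P of the focal plane through
    image row Y, assuming that row lies on the road surface (Formulas 1-3)."""
    theta_y = math.atan(math.tan(math.radians(theta2_deg)) * Y / h)  # Formula 1
    d1_dash = hc / math.sin(math.radians(theta3_deg) + theta_y)      # Formula 2
    return d1_dash * math.cos(theta_y)                               # Formula 3
```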
 Next, in step S3, the parallax v1 between the first and second images, assuming the road surface lies on the focal plane A1, is obtained. For a distortion-free lens, every point on the focal plane A1 can be taken to have the same parallax, so v1 can be obtained using the distance d1 found by Formula 3. Depending on lens distortion and similar factors, another value of d1 may have to be used, or a correction may become necessary.
 FIG. 18(b) is a top view of the left and right cameras 1011 and 1012. As shown there, the origin P on the focal plane A1 of the left camera 1011 is seen by the right camera 1012 in a direction at angle θx from its center. With the distance between the left and right cameras denoted g, θx is obtained from the following formula.
 θx = arctan(g/d1)    (Formula 4)
 At this point, the origin P appears at the coordinates of the origin (0, 0) in the first image on the left, as in FIG. 19(a), and, with the parallax in pixels denoted v1, at the point (-v1, 0) in the second image on the right, as in FIG. 19(b). As shown in FIG. 18(b), with half the horizontal angle of view of the camera denoted θ1, v1 is obtained from the following formula.
 v1 = w × tanθx / tanθ1    (Formula 5)
 Since the parallax in pixels v1 is the same at the point P and the point (X, Y), the point appears at the position (Xr, Y) in the right image, where
 Xr = X - v1    (Formula 6)
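 Continuing the sketch, Formulas 4 to 6 give the road-surface parallax v1 and the expected right-image position Xr (again with illustrative names):

```python
import math

def road_surface_parallax(d1: float, g: float, w: float, theta1_deg: float) -> float:
    """Parallax v1 in pixels of a road-surface point on the focal plane at
    distance d1, with camera spacing g and image half-width w px (Formulas 4-5)."""
    theta_x = math.atan(g / d1)                                         # Formula 4
    return w * math.tan(theta_x) / math.tan(math.radians(theta1_deg))  # Formula 5

def expected_right_x(X: float, v1: float) -> float:
    """Formula 6: where a road-surface point at X in the left image appears."""
    return X - v1
```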
 Next, in step S4, it is judged whether the object appearing at the coordinates of the detection region (X, Y) of the first image on the left is at the same height as the road surface. For this, it suffices to check whether the same object that appears at the coordinates (X, Y) of the first image on the left appears at the position of the comparison region (Xr, Y) of the second image on the right.
 There are various ways to check whether the objects are the same; for example, the luminance of the several pixels surrounding the target point in each of the left and right images can be extracted and compared. If the two agree within the range of error factors such as camera noise, the point can be judged to be at the same height as the road surface. If they do not agree and are judged to be shifted to the left or right, the point can be judged, according to that parallax, to be higher or lower than the road surface.
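 One simple realization of such a check is to compare small luminance patches by their mean absolute difference; a sketch (the patch size and noise tolerance are illustrative assumptions):

```python
import numpy as np

def same_height_as_road(img_left: np.ndarray, img_right: np.ndarray,
                        x: int, y: int, xr: int,
                        half: int = 2, tol: float = 8.0) -> bool:
    """Compare the luminance patch around (x, y) in the left image with the
    patch around (xr, y) in the right image; True when they agree within a
    noise tolerance, i.e. the point appears to lie at road-surface height."""
    patch_l = img_left[y - half:y + half + 1, x - half:x + half + 1].astype(float)
    patch_r = img_right[y - half:y + half + 1, xr - half:xr + half + 1].astype(float)
    return float(np.abs(patch_l - patch_r).mean()) <= tol
```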
 For example, suppose the object at the position (X, Y) in the first image on the left is judged to be at the coordinates (Xr - v2, Y) in the second image on the right. With the actual distance to the object denoted d2, Formulas 4 and 5 give
 v1 = (w × g) / (d1 × tanθ1)
 v1 + v2 = (w × g) / (d2 × tanθ1)
 Eliminating v1,
 d2 = (d1 × w × g) / (w × g + v2 × d1 × tanθ1)    (Formula 7)
 As shown in FIG. 20, the height hs of the object relative to the road surface is
 hs = hc × (d1 - d2) / d1    (Formula 8)
 which gives the height hs of the step in the road surface. That is, when the parallax v2 of the object is positive (larger than the road surface parallax v1), the distance d2 to the object is smaller than the distance d1 to the road surface, as in FIG. 20(a), hs takes a positive value, and the object can be judged to be above the road surface. Conversely, when v2 is negative (smaller than the road surface parallax v1), the distance d2 to the object is larger than the distance d1 to the road surface, as in FIG. 20(b), hs takes a negative value, and the object can be judged to be below the road surface. In this way, the height difference of the coordinate point (X, Y) in the first image relative to the road surface is obtained (step S5).
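 Formulas 7 and 8 translate the measured extra parallax v2 into a signed height; a minimal sketch consistent with the derivation above (names are illustrative):

```python
import math

def step_height(d1: float, v2: float, w: float, g: float,
                theta1_deg: float, hc: float) -> float:
    """Signed height hs of the observed point relative to the road surface:
    v2 > 0 yields hs > 0 (above the road), v2 < 0 yields hs < 0 (below)."""
    t1 = math.tan(math.radians(theta1_deg))
    d2 = (d1 * w * g) / (w * g + v2 * d1 * t1)  # Formula 7
    return hc * (d1 - d2) / d1                  # Formula 8
```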
 In step S6, the above procedure is repeated at other coordinate points at appropriate intervals, and the processing ends once detection of the height relative to the road surface has been completed over the required range of the image.
 This completes the details of the flowchart shown in FIG. 16; this processing is performed by the calculation unit 1020 shown in FIG. 13. Concretely, it may be realized as software on a PC or microcontroller, or as hardware using an FPGA or ASIC. A configuration that handles part of the processing in hardware and the remainder in software is also possible.
 FIG. 21 shows an example of the result of applying the above method to the stereo image of FIG. 15. The result is displayed in gradations according to the height of each detection region: regions at the same height as the road surface appear gray, and regions lower than the road surface appear black.
 Next, the step detection unit 1040 of the calculation unit 1020 is described. In the detection result of the height detection unit 1030, as in FIG. 21, the heights relative to the road surface of vertically and horizontally adjacent detection regions are compared; with a threshold of, say, 5 cm, a height difference at or above the threshold is judged to be a step. The height difference threshold for judging a step is set, for example, so that safety is ensured even if a senior car or wheelchair drops over it. As an example, in the detection result of FIG. 21, boundaries judged to be steps are indicated by broken lines.
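 Over a grid of per-region heights, this neighbor comparison reduces to thresholding finite differences; a sketch (the 2-D height grid and the 5 cm threshold follow the description above, the rest is an illustrative assumption):

```python
import numpy as np

def step_edges(heights: np.ndarray, threshold_m: float = 0.05):
    """heights: 2-D array of per-region heights relative to the road surface.
    Returns boolean maps marking a step between vertically and horizontally
    adjacent regions whose height difference reaches the threshold."""
    vertical = np.abs(np.diff(heights, axis=0)) >= threshold_m
    horizontal = np.abs(np.diff(heights, axis=1)) >= threshold_m
    return vertical, horizontal
```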
 When the height difference between adjacent detection regions is small and they are assumed to be road surface, the slope angle of the road surface can also be obtained from the distance between the detection regions and their height difference. When the road surface slopes uphill, downhill, or to the left or right, it can thereby be judged whether the slope is safe.
 FIG. 22 shows an application to a senior car 1060 as an example of a vehicle provided with the road surface step detection device 1001 of Embodiment 3. The road surface step detection device 1001 is mounted in front of the handlebar 1061 of the senior car 1060 at a height hc above the road surface. To report step detection results to the driver of the senior car 1060, the road surface step detection device 1001 is provided with an output device 1050 such as a speaker 1031 and a display device 1032. When the road surface step detection device 1001 detects a step, it announces the presence of the step in the road surface, for example by emitting a buzzer sound or voice guidance from the speaker 1031 or displaying characters or figures on the display device 1032, so that dangers such as wheel drop-off or overturning of the senior car 1060 can be avoided.
 The road surface step detection device 1001 of Embodiment 3 may also be mounted at the rear of the senior car 1060, not only at the front. Dangers such as wheel drop-off can then be avoided even when reversing with poor visibility. The applications of the road surface step detection device 1001 of Embodiment 3 are moreover not limited to the senior car 1060; it can suitably be used in any vehicle that needs to detect steps in the road surface, ranging, for example, from wheelchairs to forklifts.
 For a wheelchair, for example, mounting the road surface step detection devices 1001 of Embodiment 3 at the front, rear, left, and right prevents the danger of wheel drop-off or overturning at steps around the wheelchair even when turning on the spot. For a forklift, even when the forward view is blocked while carrying a load, a low load placed on the road surface can be detected as a step and a collision avoided.
 (Effects of Embodiment 3)
 According to Embodiment 3, a step can easily be detected in each region of the road surface in an image captured with a stereo camera.
 (Summary of Embodiment 3)
 The road surface step detection method of Embodiment 3 projects at least a first image and a second image obtained by stereo imaging of the road surface onto XY plane coordinates, sets a detection region centered on specific coordinates (X, Y) on those plane coordinates, calculates the parallax v1 for the case where the image of the detection region lies at the road surface position, sets a comparison region in the second image centered on the coordinates (X - v1, Y) obtained by subtracting the parallax v1, and compares the image of the detection region with the image of the comparison region to detect the height of the detection region relative to the road surface.
 This provides a road surface step detection device that can detect not only obstacles ahead but also steps in the road surface.
 [Embodiment 4]
 Embodiment 4, one form of the road surface step detection method according to the present invention, is described below.
 As noted earlier, the obstacle detection device of Patent Document 1 recognizes the road surface by extracting white lines such as lane markings, so when it is applied to a vehicle that travels on roads without lane markings, such as a senior car or an electric wheelchair, it cannot recognize the road surface correctly and detecting obstacles becomes difficult.
 Furthermore, when the distance to an obstacle is measured from triangulation parallax, as in the obstacle detection device of Patent Document 1, there is a further problem. As shown in FIG. 26, in an image that captures both a solid object on the road surface and a depression in it, the solid object is seen front-on, takes the same shape in the left and right images, and its parallax can be obtained; but surfaces parallel to the road surface, such as the top of a protrusion or the bottom of a depression, are distorted in the depth direction of the image, which makes their parallax hard to obtain, so their distances cannot be measured accurately by the same measurement method as for solid objects.
 Embodiment 4 therefore solves this problem; it describes a road surface step detection device, a road surface step detection method, and a vehicle that can accurately detect steps such as depressions parallel to the road surface from a plurality of image data obtained by imaging the road surface with a stereo camera, even when the road surface itself is difficult to identify.
 In the road surface step detection method of Embodiment 4 (the second road surface step detection method), at least a first image and a second image obtained by stereo imaging of the road surface are projected onto XY plane coordinates; for each row of image data along a specific Y-axis direction, the parallax for the case where the image lies at the road surface position is calculated; a third image is generated by correcting the second image, shifting it by that parallax for each Y value; and the first image and the third image are compared for each step detection region to detect the height relative to the road surface.
 In addition to the above, the road surface step detection method of Embodiment 4 obtains, from the heights of a plurality of detection regions, the height difference between neighboring detection regions, and determines that there is a step between detection regions when the difference is equal to or greater than a threshold.
 In addition, the road surface step detection method of Embodiment 4 obtains, from the heights of a plurality of detection regions, the height difference between neighboring detection regions, and determines that there is a slope between detection regions when the height difference changes continuously across a plurality of detection regions.
 The road surface step detection device of Embodiment 4 (the second road surface step detection device) comprises a stereo camera that captures at least a first image and a second image of the road surface, and a height detection unit that projects the stereo-captured first and second images onto XY plane coordinates, calculates, for each row of the image data along a specific Y-axis direction, the parallax for the case where the image lies at the road surface position, generates a third image by correcting the second image, shifting it by that parallax for each Y value, and compares the first image and the third image for each step detection region to detect the height relative to the road surface.
 A vehicle of Embodiment 4 comprises the above road surface step detection device.
 The configuration of the road surface step detection device 1001 of Embodiment 4 is identical to that of Embodiment 3 described with reference to FIGS. 13 and 14, so its description is omitted. Only the differences from Embodiment 3 are described below.
 As shown in FIG. 15(c) for Embodiment 3, when the first and second images are superimposed and only the boundary line between the sidewalk and the road surface is extracted, the boundary line appears at shifted positions in the left and right images. This left-right shift is the parallax, and the road surface step detection device of Embodiment 4 likewise extracts the parallax v1 to detect how high or low a step is relative to the road surface. However, since, as shown in FIG. 26, surfaces such as the bottom of a depression parallel to the road surface are imaged distorted, unlike solid objects, the height relative to the road surface is detected using a third image in which this distortion has been corrected.
 FIG. 23 illustrates the correction of the image distortion: (a) shows the first image, (b) the second image, and (c) the corrected third image. As shown in FIG. 23, the road surface step detection method of Embodiment 4 first captures the first and second images, views of the road surface from different directions, with the stereo camera. Then, as indicated by the broken lines in FIG. 23, row data collecting the pixel values at the same Y coordinate are compared between the first and second images, and the parallax v1 for the case where the objects in the row data lie on the road surface is calculated from the Y coordinate value and the camera information. After the parallax v1 has been calculated for every Y value, a third image is generated, as shown in FIG. 23(c), by correcting the second image by the respective parallax v1 along the Y-axis direction. The corrected third image and the first image are then compared for each detection region, and the height of the detection region relative to the road surface is detected from the shift amount v2 between the third image and the first image.
 FIG. 24 is a flowchart of the height detection processing in the calculation unit 1020 of the road surface step detection device 1001 of Embodiment 4. The calculation unit 1020 first obtains the parallax v1 for each set of row data in the processing of steps S1 to S3.
 To obtain the parallax v1, the distance d1 from the camera to the focal plane of the image must first be calculated. The method of obtaining v1 is described with reference to FIG. 17, used in Embodiment 3. The coordinate point (X, Y) shown in the coordinate space 13 of FIG. 17 is contained in the row data.
 In step S1, the row data containing the coordinate point (X, Y) of an arbitrary detection region in which a step is to be detected is selected in the coordinate space 13. The coordinate space 13 corresponds to the focal plane, the plane perpendicular to the optical axis of the camera 1011 shown in FIG. 17(b). If the lenses of the cameras 1011 and 1012 are free of distortion, everything on the focal plane has the same parallax, so the coordinate point (X, Y) in the coordinate space 13 has the same parallax as the origin P.
 In step S2, the distance d1 from the camera to the origin P of the focal plane A1 is obtained under the assumption that the coordinate point (X, Y) lies on the road surface. Step S2 is the same as step S2 described in Embodiment 3, so its description is omitted here.
 Next, in step S3, the parallax v1 between the first and second images, assuming the road surface lies on the focal plane A1, is obtained. Step S3 is the same as step S3 described in Embodiment 3, so its description is omitted here.
 Next, in step S4, after the parallax v1 between the first and second images has been obtained for each set of row data, assuming the road surface, the second image is corrected as shown in FIG. 23(c). The correction moves each set of row data of the second image in the X-coordinate direction by the v1 pixels corresponding to its Y coordinate. When the parallax v1 is fractional, interpolation between the two neighboring pixels is used: for example, when v1 is 5.5, the correction writes half the sum of the 5th and 6th pixels from the left into the position of the 0th pixel.
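 The row-wise correction, including the fractional-pixel interpolation of the v1 = 5.5 example, can be sketched as follows (the array layout and names are assumptions):

```python
import numpy as np

def build_third_image(img2: np.ndarray, v1_per_row: np.ndarray) -> np.ndarray:
    """Shift each row of the second image left by its road-surface parallax
    v1; fractional shifts blend the two nearest pixels, so v1 = 5.5 writes
    half the sum of pixels 5 and 6 into pixel 0 of the corrected row."""
    height, width = img2.shape
    out = np.zeros((height, width), dtype=float)
    for y in range(height):
        lo = int(np.floor(v1_per_row[y]))
        frac = v1_per_row[y] - lo
        for x in range(width):
            if 0 <= x + lo and x + lo + 1 < width:
                out[y, x] = ((1.0 - frac) * img2[y, x + lo]
                             + frac * img2[y, x + lo + 1])
    return out
```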
 Correcting the row data of the second image by the parallax v1 over the entire Y range yields the third image shown in FIG. 23(c). In the corrected third image, objects take the same shape as in the first image.
 In step S5, it is judged whether the object appearing at the coordinates (X, Y) of the first image is at the same height as the road surface. Here, the detection region centered on (X, Y) in the first image is compared with the comparison region centered on (X, Y) in the third image, and the judgment can be made from whether the object appears at the same position in the detection region and the comparison region.
 There are various ways to check whether the objects appearing at the same position are the same object; for example, the luminance of the several pixels surrounding the target point in each of the left and right images can be extracted and compared. If the two agree within the range of error factors such as camera noise, the point can be judged to be at the same height as the road surface. If they do not agree and are judged to be shifted to the left or right, the point can be judged, according to that shift amount, to be higher or lower than the road surface.
 For example, when the object at the position (X, Y) in the first image on the left is judged to be at the coordinates (Xr - v2, Y) in the third image on the right, the height hs of the step in the road surface can be obtained using Formulas 4 to 8 of Embodiment 3 described above, giving the height difference of the coordinate point (X, Y) relative to the road surface.
 In step S6, the above procedure is repeated at other coordinate points at appropriate intervals, and the processing ends once detection of the height relative to the road surface has been completed over the required range of the image.
 This completes the details of the flowchart shown in FIG. 24; this processing is performed by the calculation unit 1020 shown in FIG. 13.
 FIG. 10 shows an example of the result of applying the above method to the stereo image of FIG. 3. The result is displayed in gradations according to the height of each detection region: regions at the same height as the road surface appear gray, and regions lower than the road surface appear black.
 The step detection unit 1040 of the calculation unit 1020 has already been described in Embodiment 3, so its description is omitted here.
 (Effects of Embodiment 4)
 According to Embodiment 4, a step can easily be detected in each region of the road surface in an image captured with a stereo camera.
 (Summary of Embodiment 4)
 The road surface step detection method of Embodiment 4 projects at least a first image and a second image obtained by stereo imaging of the road surface onto XY plane coordinates, calculates, for each row of the image data along a specific Y-axis direction, the parallax for the case where the image lies at the road surface position, generates a third image by correcting the second image, shifting it by that parallax for each Y value, and compares the first image and the third image for each step detection region to detect the height relative to the road surface.
 This provides a road surface step detection device that can detect not only obstacles ahead but also steps in the road surface.
 [Supplementary explanation]
 The present invention is not limited to the embodiments described above; various modifications are possible within the scope of the claims, and embodiments obtained by appropriately combining technical means disclosed in different embodiments are also included in the technical scope of the present invention. Furthermore, new technical features can be formed by combining the technical means disclosed in the respective embodiments.
 The present invention relates to a plane detection device for detecting a plane, such as a floor, contained in image data to be measured, and to an autonomous mobile device using it. The plane detection device itself can be used as an independent device for industrial, consumer, or other applications, be incorporated into part of another device, or have part or all of the device implemented as an integrated circuit (IC chip).
[Reference Signs List]
1 Cleaning robot (autonomous mobile device)
2 Drive wheel (traveling means)
3 Trailing wheel
4 Battery
5 Cleaning liquid tank
6 Cleaning liquid discharge section
7 Waste liquid tank
8 Suction port
9 Cleaning brush
10 Motor
11 Housing
12 Protection member
20 Distance image sensor (distance image generation means)
20a Distance image sensor for short range
20b Distance image sensor for long range
30 Arithmetic device
31 Three-dimensional coordinate calculation unit (three-dimensional coordinate calculation means, second three-dimensional coordinate calculation means)
32 Projection image generation unit (projection image generation means, second projection image generation means)
33 Straight line detection unit (straight line detection means, second straight line detection means)
34 Plane detection unit (plane parameter calculation means, second plane parameter calculation means)
35 Obstacle/step detection unit
36 Data integration unit
41 Travel control unit
42 Cleaning control unit
43 Map information memory unit
44 Status display unit
45 Waste liquid recovery unit
46 Cleaning liquid discharge unit
47 Rotary encoder
48 Drive wheel motor
49 Gyro sensor
50 Control switch
60 Plane detection device
1001 Road surface step detection device
1011 Camera
1012 Camera
1020 Arithmetic unit
1030 Detection unit
1031 Speaker
1032 Display device
1040 Step detection unit
1050 Output device
1060 Mobility scooter
1061 Handlebar

Claims (15)

  1.  A plane detection device for detecting a specific detection target plane from distance image data of a subject that includes the specific detection target plane, comprising:
     three-dimensional coordinate calculation means for converting the distance image data into three-dimensional coordinate data including a detection target three-dimensional point group representing the specific detection target plane;
     projection image generation means for projecting the three-dimensional coordinate data onto a predetermined two-dimensional plane to generate projection image data in which the detection target three-dimensional point group is distributed linearly;
     straight line detection means for detecting the straight line of this linear distribution from the projection image data; and
     plane parameter calculation means for calculating, based on the detection result of the straight line detection means, plane parameters including information on the inclination of the specific detection target plane.
  2.  The plane detection device according to claim 1, wherein, in an xyz coordinate system in which the depth direction of the subject in the distance image data is the z axis and the x axis and y axis are perpendicular to the z axis, the left-right direction of the distance image data being the x axis and the up-down direction the y axis,
     the projection image generation means generates projection image data obtained by projecting the three-dimensional coordinate data onto the yz plane, and
     the plane parameter calculation means calculates, from the projection image data, the plane parameters including the inclination angle of the specific detection target plane with respect to the z axis.
  3.  The plane detection device according to claim 1 or 2, wherein the plane parameter calculation means determines whether the straight line lies within a predetermined range.
  4.  The plane detection device according to claim 2, further comprising:
     second three-dimensional coordinate calculation means for converting the xyz coordinate system into an xy′z′ coordinate system by rotation about the x axis to generate second three-dimensional coordinate data including a detection target three-dimensional point group representing the specific detection target plane in the xy′z′ coordinate system;
     second projection image generation means for projecting the second three-dimensional coordinate data onto the xy′ plane to generate second projection image data in which the detection target three-dimensional point group included in the second three-dimensional coordinate data is distributed linearly;
     second straight line detection means for detecting, from the second projection image data, the second straight line of the linear distribution in the second projection image data; and
     second plane parameter calculation means for calculating, based on the detection result of the second straight line detection means, second plane parameters including information on the inclination of the specific detection target plane.
  5.  An autonomous mobile device comprising:
     the plane detection device according to any one of claims 1 to 4;
     distance image generation means for generating the distance image data; and
     traveling means,
     wherein the plane detection device is used to detect a plane serving as a traveling path.
  6.  A road surface step detection method comprising:
     projecting at least a first image and a second image obtained by stereo photography of a road surface onto XY plane coordinates;
     setting, on the plane coordinates, a detection region centered on specific coordinates (X, Y);
     calculating the parallax v1 that would be observed if the image in the detection region lay at the road surface position;
     setting, in the second image, a comparison region centered on the coordinates (X - v1, Y) obtained by subtracting the parallax v1; and
     comparing the image of the detection region with the image of the comparison region to detect the height of the detection region from the road surface.
  7.  The road surface step detection method according to claim 6, wherein a height difference is obtained between neighboring detection regions from the heights of a plurality of the detection regions, and it is determined that there is a step between the detection regions if the height difference is equal to or greater than a threshold value.
  8.  The road surface step detection method according to claim 6, wherein a height difference is obtained between neighboring detection regions from the heights of a plurality of the detection regions, and it is determined that there is a slope between the detection regions when the height difference changes continuously across a plurality of the detection regions.
  9.  A road surface step detection device comprising:
     at least a first camera and a second camera that photograph a road surface in stereo; and
     a height calculation unit that projects a first image captured by the first camera and a second image captured by the second camera onto XY plane coordinates, sets a detection region centered on specific coordinates (X, Y) on the plane coordinates, calculates the parallax v1 that would be observed if the image in the detection region lay at the road surface position, sets in the second image a comparison region centered on the coordinates (X - v1, Y) obtained by subtracting the parallax v1, and compares the image of the detection region with the image of the comparison region to detect the height of the detection region from the road surface.
  10.  A vehicle comprising the road surface step detection device according to claim 9.
  11.  A road surface step detection method comprising:
     projecting at least a first image and a second image obtained by stereo photography of a road surface onto XY plane coordinates;
     calculating, for each row of image data along a given Y coordinate, the parallax that would be observed if the image content lay at the road surface position;
     generating a third image by correcting the second image, shifting it by this parallax for each Y coordinate; and
     comparing the first image and the third image for each step detection region to detect the height from the road surface.
  12.  The road surface step detection method according to claim 11, wherein a height difference is obtained between neighboring detection regions from the heights of a plurality of the detection regions, and it is determined that there is a step between the detection regions if the height difference is equal to or greater than a threshold value.
  13.  The road surface step detection method according to claim 11, wherein a height difference is obtained between neighboring detection regions from the heights of a plurality of the detection regions, and it is determined that there is a slope between the detection regions when the height difference changes continuously across a plurality of the detection regions.
  14.  A road surface step detection device comprising:
     a stereo camera that captures at least a first image and a second image of a road surface in stereo; and
     a height detection unit that projects the first image and the second image captured in stereo onto XY plane coordinates, calculates for each row of image data along a given Y coordinate the parallax that would be observed if the image content lay at the road surface position, generates a third image by shifting the second image by this parallax for each Y coordinate, and compares the first image and the third image for each step detection region to detect the height from the road surface.
  15.  A vehicle comprising the road surface step detection device according to claim 14.
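
To make the geometry of claims 6, 7, and 9 above concrete, the following is a minimal sketch in Python/NumPy, offered as an editorial illustration rather than part of the claims. It assumes rectified stereo images supplied as 2D arrays and an integer road-surface parallax v1 computed by the caller; the region half-size and the 30 mm step threshold are assumed values.

```python
import numpy as np

def region_height_score(first, second, X, Y, v1, half=8):
    """Claim 6 geometry: compare the detection region centered on (X, Y)
    in the first image with the comparison region centered on (X - v1, Y)
    in the second image. Content lying on the road surface matches almost
    exactly; a large residual means it lies above or below the road, and
    the patent converts that into an actual height via camera geometry."""
    det = first[Y - half:Y + half, X - half:X + half].astype(float)
    cmp_ = second[Y - half:Y + half, X - v1 - half:X - v1 + half].astype(float)
    return np.abs(det - cmp_).mean()

def has_step(height_a_mm, height_b_mm, threshold_mm=30.0):
    """Claim 7: a step exists between neighboring detection regions when
    their height difference reaches a threshold (30 mm is an assumed value)."""
    return abs(height_a_mm - height_b_mm) >= threshold_mm
```

The slope determination of claims 8 and 13 would instead check that the height difference changes gradually and consistently across several consecutive detection regions rather than jumping at a single boundary.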
PCT/JP2013/071855 2012-10-25 2013-08-13 Plane detection device, autonomous locomotion device provided with plane detection device, method for detecting road level difference, device for detecting road level difference, and vehicle provided with device for detecting road level difference WO2014064990A1 (en)

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
JP2012-235950 2012-10-25
JP2012235950A JP6030405B2 (en) 2012-10-25 2012-10-25 Planar detection device and autonomous mobile device including the same
JP2012-238567 2012-10-30
JP2012-238566 2012-10-30
JP2012238567A JP2014089548A (en) 2012-10-30 2012-10-30 Road surface level difference detection method, road surface level difference detection device and vehicle equipped with the road surface level difference detection device
JP2012238566A JP6072508B2 (en) 2012-10-30 2012-10-30 Road surface step detection method, road surface step detection device, and vehicle equipped with road surface step detection device

Publications (1)

Publication Number Publication Date
WO2014064990A1 true

Family

ID=50544371

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2013/071855 WO2014064990A1 (en) 2012-10-25 2013-08-13 Plane detection device, autonomous locomotion device provided with plane detection device, method for detecting road level difference, device for detecting road level difference, and vehicle provided with device for detecting road level difference

Country Status (1)

Country Link
WO (1) WO2014064990A1 (en)

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2017117205A (en) * 2015-12-24 2017-06-29 アイシン精機株式会社 Mobile vehicle
CN107963120A (en) * 2016-10-19 2018-04-27 中车株洲电力机车研究所有限公司 A kind of rubber tire low-floor intelligent track train automatic steering control method
CN108136934A (en) * 2015-11-19 2018-06-08 爱信精机株式会社 Moving body
JP2018096798A (en) * 2016-12-12 2018-06-21 株式会社Soken Object detector
EP3333828A4 (en) * 2015-08-04 2018-08-15 Nissan Motor Co., Ltd. Step detection device and step detection method
JP2018156617A (en) * 2017-03-15 2018-10-04 株式会社東芝 Processor and processing system
US10245730B2 (en) * 2016-05-24 2019-04-02 Asustek Computer Inc. Autonomous mobile robot and control method thereof
EP3363342A4 (en) * 2015-10-14 2019-05-22 Toshiba Lifestyle Products & Services Corporation Electric vacuum cleaner
JP2020144023A (en) * 2019-03-07 2020-09-10 株式会社Subaru Road surface measurement device, method for measuring road surface, and road surface measurement system
CN112740284A (en) * 2018-11-30 2021-04-30 多玩国株式会社 Moving picture composition device, moving picture composition method, and recording medium
CN112947449A (en) * 2021-02-20 2021-06-11 大陆智源科技(北京)有限公司 Anti-falling device, robot and anti-falling method
CN113050103A (en) * 2021-02-05 2021-06-29 上海擎朗智能科技有限公司 Ground detection method, device, electronic equipment, system and medium
WO2021215688A1 (en) 2020-04-22 2021-10-28 Samsung Electronics Co., Ltd. Robot cleaner and controlling method thereof
EP3889720A4 (en) * 2018-11-29 2021-11-24 Honda Motor Co., Ltd. Work machine, work machine control method, and program
US11373532B2 (en) 2019-02-01 2022-06-28 Hitachi Astemo, Ltd. Pothole detection system
WO2022179270A1 (en) * 2021-02-23 2022-09-01 京东科技信息技术有限公司 Robot traveling method and apparatus, and electronic device, storage medium and program product
CN115198605A (en) * 2022-07-20 2022-10-18 成都宁顺智能设备有限公司 Remote detection method for micro deformation of highway pavement

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005217883A (en) * 2004-01-30 2005-08-11 Rikogaku Shinkokai Method for detecting flat road area and obstacle by using stereo image
JP2011027724A (en) * 2009-06-24 2011-02-10 Canon Inc Three-dimensional measurement apparatus, measurement method therefor, and program

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
TOM DRUMMOND ET AL.: "Real-Time Visual Tracking of Complex Structures", IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, vol. 24, no. 7, July 2002 (2002-07-01), pages 932 - 946 *

Cited By (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3333828A4 (en) * 2015-08-04 2018-08-15 Nissan Motor Co., Ltd. Step detection device and step detection method
US10339394B2 (en) 2015-08-04 2019-07-02 Nissan Motor Co., Ltd. Step detection device and step detection method
US10932635B2 (en) 2015-10-14 2021-03-02 Toshiba Lifestyle Products & Services Corporation Vacuum cleaner
EP3363342A4 (en) * 2015-10-14 2019-05-22 Toshiba Lifestyle Products & Services Corporation Electric vacuum cleaner
CN108136934A (en) * 2015-11-19 2018-06-08 爱信精机株式会社 Moving body
EP3378695A4 (en) * 2015-11-19 2018-09-26 Aisin Seiki Kabushiki Kaisha Moving body
CN108136934B (en) * 2015-11-19 2021-01-05 爱信精机株式会社 Moving body
JP2017117205A (en) * 2015-12-24 2017-06-29 アイシン精機株式会社 Mobile vehicle
US10245730B2 (en) * 2016-05-24 2019-04-02 Asustek Computer Inc. Autonomous mobile robot and control method thereof
CN107963120B (en) * 2016-10-19 2020-11-10 中车株洲电力机车研究所有限公司 Automatic steering control method for rubber-tyred low-floor intelligent rail train
CN107963120A (en) * 2016-10-19 2018-04-27 中车株洲电力机车研究所有限公司 A kind of rubber tire low-floor intelligent track train automatic steering control method
JP2018096798A (en) * 2016-12-12 2018-06-21 株式会社Soken Object detector
JP2018155726A (en) * 2017-03-15 2018-10-04 株式会社東芝 On-vehicle processing system
JP2021152543A (en) * 2017-03-15 2021-09-30 株式会社東芝 Vehicular processing system
JP2018156617A (en) * 2017-03-15 2018-10-04 株式会社東芝 Processor and processing system
JP2021170385A (en) * 2017-03-15 2021-10-28 株式会社東芝 Processing device and processing system
EP3889720A4 (en) * 2018-11-29 2021-11-24 Honda Motor Co., Ltd. Work machine, work machine control method, and program
CN112740284A (en) * 2018-11-30 2021-04-30 多玩国株式会社 Moving picture composition device, moving picture composition method, and recording medium
US11373532B2 (en) 2019-02-01 2022-06-28 Hitachi Astemo, Ltd. Pothole detection system
JP2020144023A (en) * 2019-03-07 2020-09-10 株式会社Subaru Road surface measurement device, method for measuring road surface, and road surface measurement system
JP7256659B2 (en) 2019-03-07 2023-04-12 株式会社Subaru Road surface measurement device, road surface measurement method, and road surface measurement system
WO2021215688A1 (en) 2020-04-22 2021-10-28 Samsung Electronics Co., Ltd. Robot cleaner and controlling method thereof
EP4057880A4 (en) * 2020-04-22 2023-01-11 Samsung Electronics Co., Ltd. Robot cleaner and controlling method thereof
US11653808B2 (en) 2020-04-22 2023-05-23 Samsung Electronics Co., Ltd. Robot cleaner and controlling method thereof
CN113050103A (en) * 2021-02-05 2021-06-29 上海擎朗智能科技有限公司 Ground detection method, device, electronic equipment, system and medium
CN112947449A (en) * 2021-02-20 2021-06-11 大陆智源科技(北京)有限公司 Anti-falling device, robot and anti-falling method
WO2022179270A1 (en) * 2021-02-23 2022-09-01 京东科技信息技术有限公司 Robot traveling method and apparatus, and electronic device, storage medium and program product
CN115198605A (en) * 2022-07-20 2022-10-18 成都宁顺智能设备有限公司 Remote detection method for micro deformation of highway pavement

Similar Documents

Publication Publication Date Title
WO2014064990A1 (en) Plane detection device, autonomous locomotion device provided with plane detection device, method for detecting road level difference, device for detecting road level difference, and vehicle provided with device for detecting road level difference
JP6030405B2 (en) Planar detection device and autonomous mobile device including the same
JP6132659B2 (en) Ambient environment recognition device, autonomous mobile system using the same, and ambient environment recognition method
US11433880B2 (en) In-vehicle processing apparatus
EP3415281B1 (en) Robot cleaner and method for controlling the same
EP3104194B1 (en) Robot positioning system
US8467902B2 (en) Method and apparatus for estimating pose of mobile robot using particle filter
JP5124351B2 (en) Vehicle operation system
TWI401175B (en) Dual vision front vehicle safety warning device and method thereof
US20180165833A1 (en) Calculation device, camera device, vehicle, and calibration method
KR20190131402A (en) Moving Object and Hybrid Sensor with Camera and Lidar
JP6565188B2 (en) Parallax value deriving apparatus, device control system, moving body, robot, parallax value deriving method, and program
JP2007235642A (en) Obstruction detecting system
JP2007334859A (en) Object detector
JP4539388B2 (en) Obstacle detection device
CN113110451A (en) Mobile robot obstacle avoidance method with depth camera and single line laser radar fused
JP2014089548A (en) Road surface level difference detection method, road surface level difference detection device and vehicle equipped with the road surface level difference detection device
JP2023083305A (en) Cleaning map display device
JP6781535B2 (en) Obstacle determination device and obstacle determination method
JP6543935B2 (en) PARALLEL VALUE DERIVING DEVICE, DEVICE CONTROL SYSTEM, MOBILE OBJECT, ROBOT, PARALLEL VALUE DERIVING METHOD, AND PROGRAM
JP2014106638A (en) Moving device and control method
KR101965739B1 (en) Mobile robot and method for controlling the same
JP2010250743A (en) Automatic running vehicle and road shape recognition system
KR20190119231A (en) Driving control device improved position correcting function and robot cleaner using the same
Poomarin et al. Automatic docking with obstacle avoidance of a differential wheel mobile robot

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13848787

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 13848787

Country of ref document: EP

Kind code of ref document: A1