WO2014064990A1 - Plane detection device, autonomous mobile device provided with plane detection device, road surface step detection method, road surface step detection device, and vehicle provided with road surface step detection device - Google Patents

Plane detection device, autonomous mobile device provided with plane detection device, road surface step detection method, road surface step detection device, and vehicle provided with road surface step detection device

Info

Publication number
WO2014064990A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
plane
detection
road surface
height
Prior art date
Application number
PCT/JP2013/071855
Other languages
English (en)
Japanese (ja)
Inventor
透 花岡
松尾 順向
光平 松尾
岡田 和久
Original Assignee
シャープ株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from JP2012235950A
Priority claimed from JP2012238567A
Priority claimed from JP2012238566A
Application filed by シャープ株式会社 (Sharp Corporation)
Publication of WO2014064990A1

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01B - MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B 11/00 - Measuring arrangements characterised by the use of optical techniques
    • G01B 11/24 - Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01B 11/25 - Measuring contours or curvatures by projecting a pattern, e.g. one or more lines or moiré fringes, on the object
    • G01B 11/245 - Measuring contours or curvatures using a plurality of fixed, simultaneously operating transducers
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 17/00 - Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S 17/02 - Systems using the reflection of electromagnetic waves other than radio waves
    • G01S 17/06 - Systems determining position data of a target
    • G01S 17/46 - Indirect determination of position data
    • G01S 17/48 - Active triangulation systems, i.e. using the transmission and reflection of electromagnetic waves other than radio waves
    • G01S 17/87 - Combinations of systems using electromagnetic waves other than radio waves
    • G01S 17/88 - Lidar systems specially adapted for specific applications
    • G01S 17/89 - Lidar systems specially adapted for mapping or imaging
    • G01S 7/00 - Details of systems according to groups G01S 13/00, G01S 15/00, G01S 17/00
    • G01S 7/48 - Details of systems according to group G01S 17/00
    • G01S 7/4808 - Evaluating distance, position or velocity data
    • G05D - SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D 1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D 1/02 - Control of position or course in two dimensions
    • G05D 1/021 - Control of position or course in two dimensions specially adapted to land vehicles
    • G05D 1/0231 - Control of position or course in two dimensions using optical position detecting means
    • G05D 1/0246 - Control of position or course in two dimensions using a video camera in combination with image processing means
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/10 - Segmentation; Edge detection
    • G06T 7/12 - Edge-based segmentation

Definitions

  • the present invention relates to a flat surface detection device, an autonomous mobile device including the flat surface detection device, a road surface step detection method, a road surface step detection device, and a vehicle including the road surface step detection device.
  • Infrared or ultrasonic proximity sensors have been widely used for detecting such obstacles and steps.
  • An infrared or ultrasonic proximity sensor can determine the presence or absence of an obstacle ahead, but cannot obtain its detailed position and shape. When attached to a robot, therefore, it can detect obstacles and steps in the immediate vicinity, but it cannot be used for applications in which the robot moves while finding and avoiding obstacles and steps over a wide range ahead in the direction of travel. For this reason, distance sensors such as a laser range finder (LRF) have come into use.
  • FIG. 25 shows an autonomous mobile device 100 including a laser range finder 101 according to the prior art.
  • (a) of FIG. 25 is an external view of the autonomous mobile device 100 including the laser range finder 101 of the prior art, and (b) of FIG. 25 shows an upper obstacle 102, a lower obstacle 103, and a step 104 that such an autonomous mobile device 100 cannot detect.
  • In (b) of FIG. 25, the autonomous mobile device 100 is equipped with the laser range finder 101 at a height H_LRF. The laser range finder 101 performs an angle scan in the horizontal direction (in a plane perpendicular to the y axis in the drawing) at the height H_LRF to detect obstacles and steps.
  • Although the laser range finder 101 has high measurement accuracy, it has the drawback that it cannot detect obstacles and steps at positions different from the measurement height.
  • In the example of (b) of FIG. 25, the laser range finder 101 cannot detect the obstacle 102, the obstacle 103, or the step 104. To detect them, a large number of infrared proximity sensors, ultrasonic proximity sensors, and the like must be arranged in addition to the laser range finder 101, which complicates the structure of the apparatus.
  • Meanwhile, a method has been put into practical use in which a scene outside a vehicle is imaged by a vehicle-mounted camera, the captured image is processed to obtain the distance from the vehicle to an object, the risk of a collision with a vehicle ahead or a guardrail is predicted, and the vehicle is controlled accordingly, for example by applying the brakes.
  • Distance measurement technology using such images is roughly divided into techniques that estimate the distance to a target object from a monocular image using its relationship to the camera position, and techniques that calculate the distance on the principle of triangulation from stereo images captured by multiple cameras.
  • Among these, the technique that obtains distance from a stereo image on the principle of triangulation derives the distance from the relative shift in position of the same object between the left and right images, and can therefore obtain an accurate distance.
  • Patent Literature 1 discloses an obstacle detection apparatus comprising an image storage unit that stores images input from a plurality of cameras, a feature extraction unit that extracts a plurality of white lines on a road surface, a parameter calculation unit that, from the extracted white lines, obtains a relational expression holding between the projection positions of an arbitrary point on the road surface in the respective images, and a detection unit that uses the relational expression to detect objects having a height above the road surface.
  • According to the obstacle detection device of Patent Literature 1, even when the slope of the road surface changes, the road surface is recognized from the movement of the two white lines, and obstacles on the road surface can be detected at high speed and with high accuracy.
  • In Patent Document 2, a stereo camera is used to detect the road surface from the Hough transform result of a differential image of the left and right images, and other vehicles and pedestrians are detected using the road surface as a clue.
  • In another approach, plane information is extracted by a calculation method called RANSAC (RANdom SAmple Consensus) from distance image data obtained by a distance image sensor of the TOF (Time Of Flight) type, and pedestrians are detected using that information as a clue.
  • The flat surface detection device of the present invention is a flat surface detection device that detects a specific detection target plane from distance image data of a subject including the specific detection target plane, and comprises projection image generation means for generating projection image data in which the points of the distance image data representing the specific detection target plane are linearly distributed, straight line detection means for detecting the resulting straight line from the projection image data, and plane parameter calculation means for calculating, based on the detection result of the straight line detection means, plane parameters including information on the inclination of the specific detection target plane.
  • An autonomous mobile device of the present invention includes the plane detection device, a distance image generation unit that generates the distance image data, and a travel unit, and is characterized by detecting the plane serving as the travel path using the plane detection device.
  • In the first road surface step detection method of the present invention, at least a first image and a second image obtained by photographing a road surface in stereo are projected onto X-Y plane coordinates; for specific coordinates (X, Y) on the plane coordinates, the height of a detection area from the road surface is detected by comparing the image of the detection area with the image of a comparison area.
  • The first road surface step detection device of the present invention includes at least a first camera and a second camera that photograph a road surface in stereo, and detects the height from the road surface using a first image captured by the first camera and a second image captured by the second camera.
  • In the second road surface step detection method of the present invention, at least a first image and a second image obtained by photographing a road surface in stereo are projected onto X-Y plane coordinates; for each row of image data in the specific Y-axis direction, the parallax that would arise if the imaged content lay at the road surface position is calculated; a third image is generated by correcting the second image by shifting it by this parallax for each Y value; and the height from the road surface is detected by comparing the first image and the third image for each step detection region.
  • The second road surface step detection device of the present invention includes a stereo camera that captures at least a first image and a second image of a road surface in stereo; a correction unit that projects the first and second images onto X-Y plane coordinates, calculates, for each row of image data in the specific Y-axis direction, the parallax that would arise at the road surface position, and generates a third image corrected by shifting by this parallax for each Y value; and a height detection unit that detects the height from the road surface by comparing the first image and the third image for each step detection region.
  • A vehicle according to the present invention includes the first or second road surface step detection device. A rough sketch of this second method is given below.
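As a rough illustration of the second method, and not part of the original disclosure, the following Python sketch shifts each row of the second image by the parallax it would have if the scene lay on the road surface, then compares the result with the first image per detection region. The names road_disparity_per_row, region_slices, and the residual threshold are hypothetical:

```python
import numpy as np

def road_reference_image(second_img, road_disparity_per_row):
    """Shift every row of the second image by the parallax it would have if
    the scene lay on the road surface, producing the corrected third image.
    road_disparity_per_row[y] is that road-surface parallax in pixels,
    assumed precomputed from the stereo geometry for each Y row."""
    h, w = second_img.shape
    third = np.zeros_like(second_img)
    for y in range(h):
        d = int(round(road_disparity_per_row[y]))
        if 0 <= d < w:
            third[y, d:] = second_img[y, :w - d]   # shift row by road parallax
    return third

def height_flags(first_img, third_img, region_slices, residual_thresh=10.0):
    """Compare first and third images per step detection region: where the
    scene really is road surface the two match, so a large mean residual
    flags a region higher or lower than the road (threshold illustrative)."""
    return [np.abs(first_img[s].astype(float) - third_img[s].astype(float)).mean()
            > residual_thresh for s in region_slices]
```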
  • (a) is an external view of a cleaning robot using the plane detection device according to Embodiment 1 of the present invention, and (b) is a cross-sectional view of that cleaning robot.
  • A diagram showing the attachment positions of the distance image sensors mounted on the cleaning robot according to Embodiment 1 of the present invention, and the measurement ranges of the distance image sensors.
  • (a) is an RGB image photographed by the long-distance distance image sensor provided in the cleaning robot according to Embodiment 1 of the present invention, (b) is a distance image photographed by the same sensor, and (c) is a diagram showing the three-dimensional coordinate system based on the cleaning robot.
  • (a) is a projection image onto the yz plane generated from a distance image photographed by the long-distance distance image sensor provided in the cleaning robot according to Embodiment 1 of the present invention, (b) is the bottom image of (a), (c) is a projection image onto the yz plane generated from a distance image photographed by the short-distance distance image sensor, and (d) is the bottom image of (c).
  • (a) is a projection image onto the yz plane when a step lower than the floor surface exists, and (b) is the bottom image of (a).
  • A flowchart showing the procedure processed by the arithmetic device of the cleaning robot according to Embodiment 2 of the present invention.
  • (a) is a diagram showing an example of the three-dimensional coordinate data generated in Embodiment 2 of the present invention, (b) is a projection image obtained by projecting the three-dimensional coordinate data of (a) onto the yz plane, (c) is a projection image obtained by projecting the three-dimensional coordinate data of (a) onto the xy plane, and (d) is a further diagram of the data of (a).
  • FIG. 10 is a layout diagram of the stereo camera provided in the road surface step detection device according to Embodiment 3. Also shown is an image captured by the stereo camera provided in the road surface step detection device.
  • FIG. 10 is a flowchart of the height detection processing in the calculation unit provided in the road surface step detection device of Embodiment 4.
  • (a) is an external view of an autonomous mobile device provided with a laser range finder according to the prior art, and (b) is a diagram of an upper obstacle, a lower obstacle, and a step that the autonomous mobile device provided with the laser range finder of the prior art cannot detect.
  • In the technique of Patent Literature 1, however, a stereo camera is essential, and easy-to-understand clues for detecting the plane, such as white lines on the road or the road surface edge, are required.
  • The same applies when a mobile robot is provided with the flat surface detection apparatus according to Patent Document 2.
  • The first embodiment solves the above-described problems and provides a plane detection device that can detect, from a distance image, a plane included in the image at high speed and more reliably.
  • Hereinafter, a cleaning robot provided with the flat surface detection apparatus according to the present invention will be described as an example with reference to FIGS. 1 to 10.
  • FIG. 1 is a diagram showing a cleaning robot 1 according to the first embodiment.
  • (a) of FIG. 1 is an external view of the cleaning robot 1, and (b) of FIG. 1 is a cross-sectional view illustrating the internal configuration of a housing 11 of the cleaning robot 1.
  • The cleaning robot 1 according to the first embodiment is an autonomous traveling type cleaning robot that performs cleaning while autonomously traveling on a floor surface.
  • The cleaning robot 1 includes, as an essential part of its configuration, a plane detection device that detects a plane from the distance image acquired by a distance image sensor.
  • The cleaning robot 1 can detect a plane with this plane detection device, identify obstacles and steps in the traveling direction, and travel while avoiding them.
  • The flat surface detection apparatus may include the distance image sensor as a component; alternatively, as described below for the first embodiment, the distance image sensor and the flat surface detection apparatus may be provided separately, with the plane detection device acquiring the distance image from the distance image sensor and then detecting the plane.
  • In the first embodiment, an aspect is described in which the distance image sensor is not a constituent element of the flat surface detection device but is provided externally to it.
  • The cleaning robot 1 includes a housing 11 provided with a window 21, drive wheels 2 (traveling means), and a protection member 12. Various control systems and drive systems, described later, are mounted inside the housing 11. By driving and controlling the drive wheels 2, the cleaning robot 1 travels on the traveling road surface, and cleans the road surface while traveling or stopped.
  • As shown in (b) of FIG. 1, the cleaning robot 1 has, inside the housing 11 provided with the window 21, a battery 4, a waste liquid recovery unit 45, a cleaning liquid discharge unit 46, a motor 10, a distance image sensor 20 (distance image generating means), and an arithmetic device 30. Further, outside the housing 11, more specifically between the housing 11 and the traveling road surface, the cleaning robot 1 includes, together with the drive wheels 2 described above, a cleaning brush 9 and the protection member 12.
  • The characteristic configuration of the first embodiment resides in the flat surface detection device 60 provided as part of the arithmetic device 30. This characteristic configuration is described in detail below; the remaining configuration can be realized by conventionally known means, so its detailed description is omitted.
  • In (b) of FIG. 1, the cleaning robot 1 can move forward in the left direction of the drawing, move backward in the right direction of the drawing, and turn toward the back or the front of the drawing. Hereinafter, movement in the left direction of the drawing, which is the main traveling direction, may be referred to simply as the traveling direction.
  • The drive wheels 2 are arranged on the left and right of the bottom of the cleaning robot 1 and are controlled by a drive motor (not shown) to move the cleaning robot 1.
  • A follower wheel 3 is rotatably attached to the bottom of the cleaning robot 1.
  • With the drive wheels 2 and the follower wheel 3, the robot can move forward, move backward, turn, and stop, and by combining these operations the cleaning robot 1 can travel freely.
  • the battery 4 supplies power to the cleaning robot 1.
  • the battery 4 is charged by a well-known step-down circuit and a rectifying / smoothing circuit, and outputs a predetermined voltage.
  • the cleaning liquid discharge unit 46 includes a cleaning liquid tank 5 and a cleaning liquid discharge unit 6.
  • the cleaning liquid tank 5 stores the cleaning liquid. Further, the cleaning liquid discharge unit 6 is connected to the cleaning liquid tank 5 by a pipe, and discharges the cleaning liquid stored in the cleaning liquid tank 5.
  • the waste liquid recovery unit 45 has a waste liquid tank 7 and a suction port 8.
  • the waste liquid tank 7 stores the waste liquid (including dust and dirt) sucked into the cleaning robot 1.
  • the cleaning robot 1 sucks the waste liquid from the suction port 8 and discharges the waste liquid to the waste liquid tank 7 connected to the suction port 8 by a pipe.
  • The cleaning brush 9 is installed in the vicinity of the suction port 8 and cleans using the cleaning liquid discharged from the cleaning liquid discharge unit 6.
  • the cleaning brush 9 is driven by a motor 10.
  • the protection member 12 is installed on the front side in the traveling direction at the bottom of the cleaning robot 1 in order to prevent the cleaning liquid from splashing and foreign matter from getting involved.
  • the distance image sensor 20 includes a distance image sensor 20a for a short distance and a distance image sensor 20b for a long distance. It should be noted that the configuration common to the distance image sensor 20a for short distance and the distance image sensor 20b for long distance may be described simply as the distance image sensor 20.
  • The distance image sensor 20 is an infrared-projection distance image sensor, comprising a projection optical system including an infrared projection element and an imaging optical system including an infrared image sensor. By projecting infrared light with a predetermined pattern onto the surroundings and photographing the light reflected from external objects with the image sensor, the distance to objects within the field of view of the imaging optical system can be measured.
  • the distance image sensor 20a for short distance and the distance image sensor 20b for long distance are disposed inside the housing 11, projecting infrared light to the outside through the window 21 of the housing 11, and Reflected light is incident from the outside through the window 21.
  • The distance measurement result of the distance image sensor 20 is output as a distance image (depth image) in which the distance to each object in the field of view is expressed as a grayscale pixel value. Details of the distance image sensor 20a and the distance image sensor 20b in the present embodiment are described later.
  • The arithmetic device 30 acquires a distance image from the distance image sensor 20 and performs plane detection processing. Details of the configuration and functions of the arithmetic device 30 are described later.
  • In addition to the above, the cleaning robot 1 includes an operation panel for selecting manual or automatic travel, travel switches for determining the travel direction during manual travel, and a control switch 50 (FIG. 4) such as an emergency stop switch for stopping operation in an emergency.
  • The form of the cleaning robot 1 is not limited to the type that cleans using a cleaning liquid as described above; it may also be a robot of the so-called household vacuum cleaner type provided with a fan, a dust collection chamber, a suction port, and the like.
  • The autonomous mobile device referred to here comprises the distance image sensor 20 of the cleaning robot 1 of the first embodiment and the plane detection device 60 of the arithmetic device 30 described later. In the following, the distance image sensor 20 is described in detail first, followed by the arithmetic device 30.
  • FIG. 2 is a diagram illustrating the attachment position of the distance image sensor 20 and the measurement range of the distance image sensor 20 in the cleaning robot 1 according to the first embodiment of the present invention.
  • The distance image sensor 20 is attached to the front of the cleaning robot 1 in the traveling direction, at a height vertically separated from the floor surface (traveling road surface) to be cleaned. More specifically, the optical axis of the distance image sensor 20 is inclined downward, so that the distance image sensor 20 faces the floor surface obliquely from above.
  • (a) of FIG. 3 is an RGB image photographed by the long-distance distance image sensor 20b, (b) of FIG. 3 is a distance image photographed by the long-distance distance image sensor 20b, (c) of FIG. 3 is an RGB image photographed by the short-distance distance image sensor 20a, and (d) of FIG. 3 is a distance image photographed by the short-distance distance image sensor 20a.
  • The scenes within the fields of view shown as RGB images in (a) and (c) of FIG. 3 are displayed as distance images in (b) and (d) of FIG. 3, bright at near distances and dark at far distances.
  • Since the distance image sensor 20a and the distance image sensor 20b differ in attachment position and in angle with respect to the horizontal plane, the floor, which is a plane, appears at different angles in the two images.
  • If the optical axis of the distance image sensor 20 were arranged parallel to the floor surface, the vicinity of the main body of the cleaning robot 1 would fall outside the angle of view of the sensor, leaving a wide blind area close to the main body that cannot be measured.
  • By tilting the optical axis downward, the out-of-view area in the vicinity is reduced, so that measurement is possible up to a relatively short distance from the main body of the cleaning robot 1.
  • The visual field range of the short-distance distance image sensor 20a projected on the floor surface is the trapezoidal area A0B0C0D0 shown in FIG. 2, and the visual field range of the long-distance distance image sensor 20b is the trapezoidal area A1B1C1D1 in FIG. 2.
  • The arrangement and number of distance image sensors 20 are not limited to the configuration of the first embodiment; only one distance image sensor 20 may be mounted, or a plurality of distance image sensors 20 may be arranged in the horizontal direction.
  • In the first embodiment, the short-distance distance image sensor 20a and the long-distance distance image sensor 20b use infrared light sources of the same wavelength; therefore, to prevent mutual interference, a slight gap is provided between the visual field regions A0B0C0D0 and A1B1C1D1, as shown in FIG. 2. If interference can be prevented, for example by using light sources of different wavelengths, the two sensors can also be installed so that no gap is provided between the two visual field regions.
  • FIG. 4 is a block diagram illustrating a configuration related to the travel function in the cleaning robot 1 of the first embodiment.
  • In addition to the distance image sensor 20 and the arithmetic device 30 described above, the cleaning robot 1 includes a travel control unit 41, a cleaning control unit 42, a map information memory unit 43, a status display unit 44, a rotary encoder 47, a drive wheel motor 48, a gyro sensor 49, and a control switch 50.
  • The arithmetic device 30 acquires a distance image from the distance image sensor 20 and extracts the position, size, and shape of obstacles and steps from the acquired distance image.
  • The extracted obstacle and step information (hereinafter, obstacle/step data) is output to the travel control unit 41. Details of the arithmetic device 30 are described later.
  • The travel control unit 41 grasps the distance traveled by the cleaning robot 1 and its current position and orientation based on information from the rotary encoder 47 attached to the drive wheels 2 and the gyro sensor 49. Based on the map information stored in advance in the map information memory unit 43 and the obstacle/step data output from the arithmetic device 30, it determines the travel route so as to avoid obstacles and steps, and controls the drive wheel motor 48. Further, when a signal is received from the control switch 50, necessary control such as an emergency stop or a change of traveling direction is performed accordingly. Information regarding these controls is displayed on the status display unit 44 and updated in real time.
  • the cleaning control unit 42 receives a command from the traveling control unit 41 and controls parts related to cleaning, such as operation start and stop switching of the cleaning brush 9, the waste liquid recovery unit 45, and the cleaning liquid discharge unit 46.
  • the map information memory unit 43 stores information such as obstacles and steps in a range to be cleaned by the cleaning robot 1.
  • the information stored in the map information memory unit 43 is updated by the travel control unit 41.
  • The status display unit 44 displays information related to the state of the cleaning robot 1, for example whether travel is manual or automatic, or whether an emergency stop is in effect.
  • the rotary encoder 47 is attached to the driving wheel 2 and outputs a rotational displacement to the travel control unit 41 as a digital signal. Based on the output of the rotary encoder 47, the travel control unit 41 can grasp the distance traveled.
  • the gyro sensor 49 detects a change in direction and outputs it to the traveling control unit 41. From the output of the gyro sensor 49, the traveling control unit 41 can grasp the traveling direction.
  • the arithmetic device 30 includes a flat surface detection device 60, an obstacle / step detection unit 35, and a data integration unit 36.
  • The plane detection device 60 includes a three-dimensional coordinate calculation unit 31 (three-dimensional coordinate calculation means, second three-dimensional coordinate calculation means), a projection image generation unit 32 (projection image generation means, second projection image generation means), a straight line detection unit 33 (straight line detection means, second straight line detection means), and a plane detection unit 34 (plane parameter calculation means, second plane parameter calculation means).
  • the three-dimensional coordinate calculation unit 31 acquires a distance image from the distance image sensor 20 and converts the acquired distance image into three-dimensional coordinate data.
  • the definition of the coordinate system of the three-dimensional coordinate data will be described with reference to FIG.
  • (a) of FIG. 5 is a diagram showing the three-dimensional coordinate system based on the short-distance distance image sensor 20a, (b) of FIG. 5 is a diagram showing the three-dimensional coordinate system based on the long-distance distance image sensor 20b, and (c) of FIG. 5 is a diagram showing the three-dimensional coordinate system based on the cleaning robot 1.
  • In the coordinate system based on the distance image sensor 20, the vertical direction is the y axis (upward positive), and the front-rear direction, that is, the optical axis direction of the distance image sensor 20, is the z axis (depth direction positive). Since the distance image sensor 20a and the distance image sensor 20b differ in attachment position and angle, their coordinate systems also differ, as shown in (a) and (b) of FIG. 5. Furthermore, a distance expressed in the sensor-based coordinate system differs from the distance measured from the main body of the cleaning robot 1 along the floor surface. Therefore, to obtain an accurate distance from the cleaning robot 1 to an object, it is necessary to convert to the coordinate system based on the cleaning robot 1 (floor surface reference) and to integrate the data of the two distance image sensors.
  • XYZ coordinates, the coordinate system based on the cleaning robot 1, are defined separately from the xyz coordinates based on the distance image sensor 20: the traveling direction is the Z axis, the normal direction of the floor is the Y axis, and the direction perpendicular to the Z axis and the Y axis is the X axis (rightward positive).
  • In the first embodiment, the x-axis direction based on the distance image sensor 20 and the X-axis direction based on the cleaning robot 1 are substantially the same. That is, the distance image sensor 20 is not mounted rotated about the z axis, and the inclination between the floor surface and the distance image sensor 20 consists only of the inclination θ in the direction of rotation about the x axis; or, even if there is some inclination about the z axis, it is sufficiently smaller than θ to be ignored.
  • The z coordinate is the distance itself contained in the distance image.
  • The x and y coordinates can be calculated from z on the principle of triangulation if the focal length f of the optical system of the distance image sensor 20, the pixel pitch p, and the pixel shift amount c between the optical axis and the center of the image sensor are known.
  • The distance image sensor is calibrated in advance to obtain these parameters.
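A minimal Python sketch of this triangulation, assuming a pinhole model and the calibration parameters named above (focal length f, pixel pitch p, and the pixel shift of the optical axis); function name and sign conventions are illustrative:

```python
import numpy as np

def depth_to_points(depth_mm, f_mm, pitch_mm, cx_px, cy_px):
    """Convert a distance image (z value in mm per pixel) into sensor-based
    xyz coordinates by triangulation. f_mm: focal length, pitch_mm: pixel
    pitch, (cx_px, cy_px): shift of the optical axis from the image centre
    (hypothetical calibration values)."""
    h, w = depth_mm.shape
    v, u = np.mgrid[0:h, 0:w]                   # pixel row/column indices
    z = depth_mm.astype(np.float64)
    x = (u - (w - 1) / 2 - cx_px) * pitch_mm * z / f_mm
    y = -(v - (h - 1) / 2 - cy_px) * pitch_mm * z / f_mm  # rows grow downward
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]                   # discard invalid pixels
```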
  • FIG. 6 is a diagram illustrating an example of three-dimensional coordinate data in the first embodiment.
  • In FIG. 6, the left-right direction is the x axis, the up-down direction is the y axis, and the front-rear direction is the z axis.
  • The three-dimensional coordinate calculation unit 31 can also convert a distance image into three-dimensional coordinate data in various coordinate systems, such as a coordinate system rotated around at least one of the x, y, and z axes, or one whose origin has been changed.
  • the projection image generation unit 32 generates a projection image obtained by projecting the three-dimensional coordinate data onto a two-dimensional surface (plane).
  • The projection image onto the xy plane is obtained by extracting the x and y coordinates of all points; the projection image onto the yz plane, by extracting the y and z coordinates of all points; and the projection image onto the zx plane, by extracting the z and x coordinates of all points.
  • For example, let the projection range of the y axis be -1200 mm to +1200 mm (offset 1200 mm), the projection range of the z axis be 0 mm to +3200 mm (offset 0 mm), and the scale at the time of projection be 1/10 [pixel/mm]. The size of the projection image is then 240 pixels (y direction) by 320 pixels (z direction).
  • This projection image size can be changed freely by changing the scale, independently of the image size of the original distance image. With a larger image, the resolution of the projection becomes finer and the accuracy of the calculation increases, but the calculation time becomes correspondingly longer; this trade-off determines the image size actually used.
  • First, the pixel values of all points in the projection image onto the yz plane are initialized to "0".
  • Suppose, for example, that point B (x, y, z) = (-400 mm, 900 mm, 1500 mm) exists as another data point; its y and z coordinates are converted into coordinates on the yz plane, and the pixel value there is set to "1".
  • Suppose further that point C (x, y, z) = (-200 mm, -303 mm, 1998 mm) exists as another data point. Its projected coordinate is the same as that of point A, and since the pixel value of this point is already "1", nothing is done here.
  • In this manner, the y and z coordinate values of all points in the three-dimensional coordinate data are extracted, converted into coordinate values on the yz plane, and the corresponding pixel values are changed to "1".
  • As a result, a yz-plane projection image is obtained as a binary image in which only the portions where points of the three-dimensional coordinate data exist are "1".
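Continuing the sketch, the binary projection image described above could be built as follows, using the stated ranges (y: -1200 to +1200 mm, z: 0 to 3200 mm) and the 1/10 pixel/mm scale; details such as rounding are assumptions:

```python
import numpy as np

def project_yz(points, y_min=-1200.0, y_max=1200.0, z_min=0.0, z_max=3200.0,
               scale=0.1):
    """Binary yz-plane projection: offset each coordinate, multiply by the
    scale (1/10 pixel/mm), and set the hit pixel to 1; repeated hits
    (e.g. points A and C above) simply stay 1."""
    h = int(round((y_max - y_min) * scale))      # 240 px for +/-1200 mm
    w = int(round((z_max - z_min) * scale))      # 320 px for 0..3200 mm
    img = np.zeros((h, w), dtype=np.uint8)
    rows = np.floor((points[:, 1] - y_min) * scale).astype(int)
    cols = np.floor((points[:, 2] - z_min) * scale).astype(int)
    ok = (rows >= 0) & (rows < h) & (cols >= 0) & (cols < w)
    img[rows[ok], cols[ok]] = 1
    return img
```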
  • the straight line detection unit 33 detects a straight line from the projection image generated by the projection image generation unit 32. For example, a case where a straight line indicating a floor surface is detected in the projection image shown in FIG. 7 will be described.
  • FIG. 7 is a projection image obtained by projecting the three-dimensional coordinate data of FIG. 6 onto the yz plane.
  • In the projection onto the yz plane, the y and z coordinates of all pixels are extracted. Since the floor is the lowest surface, the points representing the floor surface have the smallest y coordinates (hereinafter, these lowest points are called bottom points).
  • The plane representing the floor becomes a straight line in the projection image: because the distance image sensor 20 is not mounted rotated about the z axis, any inclination about the z axis between the floor surface and the distance image sensor 20 is sufficiently smaller than the inclination θ about the x axis. As a result, the three-dimensional coordinate data representing the floor surface, when projected onto the yz plane, is distributed on almost a single straight line.
  • Using this property, the straight line detection unit 33 scans each pixel column of the projection image from bottom to top, leaves only the first point along the scan direction, that is, the first pixel whose value is "1", and deletes the other points, thereby obtaining a bottom image.
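A minimal sketch of this bottom-image step, assuming the projection image convention from the sketch above (row index increasing with y):

```python
import numpy as np

def bottom_image(proj):
    """Keep only the bottom point (smallest y, i.e. lowest row index here)
    of each pixel column; all other pixels are cleared."""
    out = np.zeros_like(proj)
    for col in range(proj.shape[1]):
        rows = np.flatnonzero(proj[:, col])
        if rows.size:
            out[rows[0], col] = 1    # row 0 corresponds to the smallest y
    return out
```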
  • Next, the straight line detection unit 33 performs a fitting process for detecting a straight line on the obtained bottom image, and obtains parameters such as the slope and intercept of the line. Depending on the result, a plurality of straight line candidates may be obtained instead of one; in that case, the most likely straight line is selected based on a predetermined criterion.
  • As the straight line detection method (fitting process), arbitrary processing can be applied, such as the Hough transform, the probabilistic Hough transform (an improved version of it), the simple least squares method, or the RANSAC method.
  • With the least squares method, the straight line with the smallest fitting error (residual) is selected as the most likely line; with the Hough transform, the line with the largest number of supporting points can be selected as the most likely line.
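The disclosure leaves the fitting method open; as one concrete possibility, a hand-rolled RANSAC line fit on the bottom points might look like this (iteration count and tolerance are illustrative):

```python
import numpy as np

def fit_line_ransac(bottom, n_iter=200, tol_px=2.0, seed=0):
    """RANSAC line fit on the bottom points: the candidate supported by the
    most points wins, analogous to picking the largest Hough vote.
    Returns (slope, intercept) in pixel units, or None if no line found."""
    rng = np.random.default_rng(seed)
    rows, cols = np.nonzero(bottom)
    pts = np.stack([cols, rows], axis=1).astype(float)   # (z_px, y_px) pairs
    if len(pts) < 2:
        return None
    best, best_support = None, 0
    for _ in range(n_iter):
        a, b = pts[rng.choice(len(pts), size=2, replace=False)]
        dx, dy = b - a
        norm = np.hypot(dx, dy)
        if norm < 1e-9 or abs(dx) < 1e-9:
            continue                      # skip degenerate or vertical pairs
        # perpendicular distance of every bottom point to the candidate line
        dist = np.abs((pts[:, 0] - a[0]) * dy - (pts[:, 1] - a[1]) * dx) / norm
        support = int((dist < tol_px).sum())
        if support > best_support:
            slope = dy / dx
            best, best_support = (slope, a[1] - slope * a[0]), support
    return best
```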
  • Moreover, the approximate height and angle of the plane to be detected by the plane detection unit 34 can be estimated in advance from the height and angle at which the distance image sensor 20 is attached. Therefore, the floor surface can be detected more reliably by presetting an allowable range for the detected height and angle and determining whether the plane detected by the plane detection unit 34 falls within that range.
  • the plane detection unit 34 holds the detected floor surface as floor plane information.
  • The floor plane information is updated whenever the plane detection unit 34 detects the floor surface. In this way, fluctuations of the floor plane that occur as the cleaning robot 1 moves can be followed, and the floor plane is always known. In addition, even if the floor surface temporarily cannot be detected, for example because a person crosses in front of the distance image sensor, using the previously detected floor plane information prevents the floor plane detection processing from failing.
  • Next, the obstacle/step detection unit 35 converts the three-dimensional coordinate data from the xyz coordinate system into the XYZ coordinate system. Then, in the XYZ coordinate system, the distance between each point and the plane is calculated to determine whether the point is higher or lower than the detected plane.
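A sketch of this xyz-to-XYZ conversion under the Embodiment 1 assumption that the only tilt is the angle θ about the x axis; the sign conventions follow the axes defined above and should be treated as illustrative:

```python
import numpy as np

def sensor_to_robot(points, theta_deg, sensor_height_mm):
    """Convert sensor-based xyz into robot-based XYZ (floor reference):
    rotate by the detected tilt theta about the x axis and shift y so that
    Y = 0 lies on the floor plane. theta and the height come from the
    detected floor line (or the nominal mounting values)."""
    t = np.radians(theta_deg)
    X = points[:, 0]
    Y = points[:, 1] * np.cos(t) - points[:, 2] * np.sin(t) + sensor_height_mm
    Z = points[:, 1] * np.sin(t) + points[:, 2] * np.cos(t)
    return np.stack([X, Y, Z], axis=1)
```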
  • the data integration unit 36 integrates a plurality of obstacles and a plurality of steps detected from a plurality of distance images as one obstacle / step data.
  • That is, when obstacles and steps have been detected from the distance images acquired from the distance image sensor 20a and the distance image sensor 20b, the information on those obstacles and steps is integrated to create a single set of obstacle/step data.
  • the data can be integrated so that the data of B has priority.
  • the format of the obstacle / step data can be converted into an arbitrary format so that the traveling control unit 41 can easily process it later.
  • The coordinate system of the data can be output as the cleaning-robot-based XYZ coordinate system, or converted into a polar coordinate system (R-θ coordinate system).
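For the polar output option, a minimal conversion might be (bearing convention assumed):

```python
import numpy as np

def to_polar(XYZ):
    """Optional R-theta output format: range in the floor plane and bearing
    from the traveling direction (Z axis), in degrees."""
    R = np.hypot(XYZ[:, 0], XYZ[:, 2])
    theta = np.degrees(np.arctan2(XYZ[:, 0], XYZ[:, 2]))  # 0 deg = straight ahead
    return R, theta
```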
  • To reduce the amount of data, methods such as thinning out or interpolating the data, or extracting only the obstacle and step data closest to the main body of the cleaning robot 1, are conceivable.
  • FIG. 8 is a flowchart illustrating a procedure performed by the arithmetic device 30 of the cleaning robot 1 according to the first embodiment of the present invention.
  • First, the three-dimensional coordinate calculation unit 31 acquires the distance images generated by the distance image sensor 20 (step S101); the short-distance distance image and the long-distance distance image are acquired from the respective distance image sensors 20.
  • the three-dimensional coordinate calculation unit 31 converts the acquired distance image into three-dimensional coordinate data in the xyz coordinate system (step S102). From the converted three-dimensional coordinate data, the projection image generation unit 32 generates a projection image projected on the yz plane (step S103).
  • As described above, the distance image sensor 20 is not mounted rotated about the z axis, so the inclination about the z axis between the floor surface and the distance image sensor 20 is sufficiently smaller than the inclination about the x axis. Accordingly, the points representing the floor surface in the three-dimensional coordinate data form a point group on a straight line in the projection image onto the yz plane.
  • an actual projection image is shown in FIG.
  • (a) of FIG. 9 is a projection image onto the yz plane generated from the distance image photographed by the long-distance distance image sensor 20b according to Embodiment 1 of the present invention; (b) of FIG. 9 is the bottom image of (a); (c) of FIG. 9 is a projection image onto the yz plane generated from the distance image photographed by the short-distance distance image sensor 20a; and (d) of FIG. 9 is the bottom image of (c).
  • In (a) and (c) of FIG. 9, the point group 61 and the point group 62 representing the floor surface form straight lines, as described above.
  • Next, the straight line detection unit 33 generates a bottom image from each projection image (step S104). As shown in (b) and (d) of FIG. 9, the straight lines in the bottom images match the point group 61 and the point group 62 representing the floor surface. From the bottom image, the straight line detection unit 33 detects a straight line (step S105). The plane detection unit 34 then detects the plane in the three-dimensional coordinate data from the detected straight line, and calculates the angle and height of the plane (step S106).
  • the plane angle is an angle (tilt angle) with respect to the z-axis.
  • the plane height is a separation distance between the floor surface and the distance image sensor 20.
  • The plane detection unit 34 then determines whether the calculated plane angle and height are within the preset angle and height tolerances (step S107).
  • If it is determined in step S107 that the angle and height are within the allowable range (step S107: Yes), the detected plane is the floor surface, and the plane detection unit 34 updates the floor plane information (step S108). If it is determined that they are not within the allowable range (step S107: No), the detected plane is not the floor surface, and the floor plane information is not updated (step S109).
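Steps S106 and S107 together could be sketched as follows; the nominal values 22.5 deg and 710 mm are taken from the text, while the tolerance figures are placeholders for the "plus or minus several millimetres and several degrees" mentioned later:

```python
import numpy as np

def plane_from_line(slope, intercept_px, scale=0.1, y_offset=1200.0):
    """Step S106: recover the plane tilt angle [deg] w.r.t. the z axis and
    the perpendicular distance [mm] of the plane from the sensor origin,
    from the line fitted in the yz projection image. Because y and z use
    the same scale, the pixel slope equals the slope in mm."""
    b_mm = intercept_px / scale - y_offset        # intercept back in mm
    angle = np.degrees(np.arctan(slope))
    height = abs(b_mm) / np.hypot(1.0, slope)     # |b| / sqrt(1 + m^2)
    return angle, height

def is_floor(angle, height, angle_nom=22.5, height_nom=710.0,
             angle_tol=2.0, height_tol=5.0):
    """Step S107: accept the plane as the floor only if angle and height lie
    within preset tolerances of the nominal values (tolerances illustrative)."""
    return (abs(angle - angle_nom) <= angle_tol
            and abs(height - height_nom) <= height_tol)
```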
  • the obstacle / step detection unit 35 converts the three-dimensional coordinate data from the xyz coordinate system to the XYZ coordinate system (step S110).
  • Next, the obstacle/step detection unit 35 calculates, from the converted three-dimensional coordinate data in the XYZ coordinate system, the distance between each point and the plane, determines whether each point is higher or lower than the detected plane, and thereby detects obstacles and steps (step S111). In this determination, a threshold value t is set: if the distance from the floor plane is greater than t, the point belongs to an obstacle or step higher than the floor; if it is smaller than -t, the point belongs to a step lower than the floor plane.
  • The threshold t is set in advance in consideration of the unevenness of the floor plane, the measurement error of the distance image sensor, and the like. In this way it is determined, for every point in the three-dimensional coordinate data, whether the point belongs to a step, an obstacle, or neither. Information F indicating this classification is added to the coordinates (X, Y, Z) of each point, which is converted into the (X, Y, Z, F) format. The step and obstacle information obtained in this way is passed from the obstacle/step detection unit 35 to the data integration unit 36.
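A sketch of the classification in step S111, assuming the XYZ data from the conversion sketch above (Y = 0 on the floor plane); the threshold value is illustrative:

```python
import numpy as np

def classify_points(XYZ, t_mm=20.0):
    """Label each point by its signed height above the detected floor plane.
    F = 1: obstacle or step higher than the floor, F = -1: step lower than
    the floor, F = 0: floor or other. The threshold t (20 mm here is
    illustrative) absorbs floor unevenness and sensor error."""
    F = np.zeros(len(XYZ))
    F[XYZ[:, 1] > t_mm] = 1
    F[XYZ[:, 1] < -t_mm] = -1
    return np.column_stack([XYZ, F])              # (X, Y, Z, F) rows
```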
  • the data integration unit 36 integrates the obstacle and step information detected by the obstacle / step detection unit 35 to create obstacle / step data (step S112). Finally, the data integration unit 36 outputs the obstacle / step data to the travel control unit 41 (step S113).
  • As described above, the arithmetic device 30 creates obstacle/step data from the distance images at high speed and reliably, and outputs it to the travel control unit 41; the cleaning robot 1 can therefore move while avoiding obstacles and steps. Further, by performing plane detection independently on the data from the multiple distance sensors and integrating the results, obstacles and steps over a wider range can be detected and avoided.
  • In addition, by exploiting the condition that the distance image sensor 20 is not mounted rotated about the z axis with respect to the floor surface, only the inclinations in the height direction (y-axis direction) and the depth direction (z-axis direction) need to be obtained.
  • Since the number of parameters to be determined is thus limited to two, the plane can be detected faster than when three parameters must be determined, and real-time floor surface detection is easily realized even on an autonomous mobile device.
  • Step detection: in the first embodiment described above, it was assumed that no object exists below the floor surface; in practice, however, there may be a step whose surface is lower than the floor. A method for detecting the floor surface in such a case is described below.
  • (a) of FIG. 10 is a projection image onto the yz plane when a step lower than the floor surface exists, and (b) of FIG. 10 is the bottom image of (a).
  • In the bottom image, both the point group 61 representing the floor and the point group 63 representing the step form straight lines.
  • Here, the allowable range 64 is set; by setting the allowable range 64, the straight line detection unit 33 can detect the point group 61 representing the floor, rather than the point group 63 representing the step, as the straight line.
  • For example, the theoretical floor surface lies, in the distance-image-sensor reference coordinate system, at a distance of 710 [mm] from the origin and at an angle (tilt angle in the depth direction) of 22.5 [deg] with respect to the zx plane.
  • Since the mounting position varies somewhat due to assembly errors and the like, in step S106 it is checked whether the calculated floor height and angle are within a range of plus or minus several millimetres and plus or minus several degrees of the above values.
  • If they are, the straight line is detected as the floor surface; if not, another straight line is selected from the plurality of straight lines detected in step S105 and checked in the same way.
  • Since the floor surface is thus detected by restricting the height and angle of the detected plane, even if the distance image includes a step lower than the floor surface, the step is excluded and the floor surface can be detected.
  • In the first embodiment, it was explained that the inclination in the direction of rotation about the z axis is sufficiently smaller than the inclination in the direction of rotation about the x axis and can be ignored.
  • Plane detection in the case where the inclination about the z axis is smaller than the inclination about the x axis but cannot be completely ignored is described below.
  • FIG. 11 is a flowchart illustrating a procedure performed by the arithmetic device 30 included in the cleaning robot according to the second embodiment. The description of the same steps as those in the flowchart of FIG. 8 is omitted.
  • (a) of FIG. 12 is a diagram illustrating an example of the three-dimensional coordinate data according to the second embodiment, (b) of FIG. 12 is a projection image obtained by projecting the three-dimensional coordinate data of (a) onto the yz plane, (c) of FIG. 12 is a projection image obtained by projecting the three-dimensional coordinate data of (a) onto the xy plane, and (d) of FIG. 12 is a further diagram of the data of (a).
  • First, as in the processing of step S106, the angle θ of rotation about the x axis and the height of the detected plane are calculated. When the inclination about the z axis cannot be ignored, however, the point cloud in the projection image onto the yz plane is not perfectly aligned on a straight line but is distributed in a band of finite width.
  • Next, an xy′z′ coordinate system is newly defined by rotating the xyz coordinate system about the x axis by an angle corresponding to the inclination θ: the y′ axis and the z′ axis are obtained by rotating the y axis and the z axis by 22.5 [deg] about the x axis, respectively.
  • the three-dimensional coordinate calculation unit 31 converts the three-dimensional coordinate data in the xyz coordinate system into the newly defined xy′z ′ coordinate system (step S121).
  • the projection image generation unit 32 projects the converted three-dimensional coordinate data onto the xy ′ plane to generate a projection image (step S122).
  • the straight line detection unit 33 generates a bottom image of the projection image (step S123), and detects a straight line from the bottom image (step S124).
  • The plane detection unit 34 can thereby obtain the angle in the left-right direction (step S125); this angle, together with the inclination θ about the x axis obtained in step S106 and the height, is checked against the set ranges (step S107).
  • The rest of the processing is the same as in Embodiment 1.
  • As described above, even when the inclination about the z axis cannot be ignored, the arithmetic device 30 first calculates the inclination θ about the x axis in the xyz coordinate system. Next, the coordinate system is converted to xy′z′, and the angle in the left-right direction is calculated from the projection image onto the xy′ plane as the second step. By this two-stage process, the floor surface can be detected more reliably.
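A sketch of this second stage, reusing bottom_image() and fit_line_ransac() from the earlier sketches; projection ranges and image size are illustrative:

```python
import numpy as np

def detect_roll_angle(points, theta_deg, scale=0.1, x_off=1200.0, y_off=1200.0,
                      size=240):
    """Embodiment 2, second stage: with the depth tilt theta already found
    (step S106), rotate the point cloud into the x-y'-z' frame and repeat
    the bottom-line fit on the x-y' projection to obtain the remaining
    left-right angle (steps S121 to S125)."""
    t = np.radians(theta_deg)
    y2 = points[:, 1] * np.cos(t) - points[:, 2] * np.sin(t)   # y' coordinate
    img = np.zeros((size, size), dtype=np.uint8)               # binary x-y' image
    rows = np.floor((y2 + y_off) * scale).astype(int)
    cols = np.floor((points[:, 0] + x_off) * scale).astype(int)
    ok = (rows >= 0) & (rows < size) & (cols >= 0) & (cols < size)
    img[rows[ok], cols[ok]] = 1
    line = fit_line_ransac(bottom_image(img))
    return None if line is None else np.degrees(np.arctan(line[0]))
```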
  • If the three-dimensional coordinate data were instead projected directly onto the xy plane, a projection image like (c) of FIG. 12 would be obtained, in which the point group representing the floor surface is not aligned on a straight line. In that case, if another object is present at the line A-A′ in (c) of FIG. 12 and the floor is not visible there, extracting a straight line from the set of bottom points would erroneously detect the A-A′ line as the straight line representing the floor.
  • After conversion to the xy′z′ coordinate system, in contrast, the z′ axis is substantially parallel to the floor surface, so in the projection image onto the xy′ plane the point group representing the floor can be extracted as the bottom points.
  • A flat surface detection device according to one aspect of the present invention (the flat surface detection device 60) is a flat surface detection device that detects a specific detection target plane from distance image data of a subject including the specific detection target plane, and comprises: three-dimensional coordinate calculation means (the three-dimensional coordinate calculation unit 31) for converting the distance image data into three-dimensional coordinate data including a detection target three-dimensional point group representing the specific detection target plane; projection image generation means (the projection image generation unit 32) for generating projection image data in which the detection target three-dimensional point group is linearly distributed, by projecting it onto a two-dimensional plane; straight line detection means (the straight line detection unit 33) for detecting the resulting straight line from the projection image data; and plane parameter calculation means (the plane detection unit 34) for calculating, based on the detection result of the straight line detection means, plane parameters including information on the inclination of the specific detection target plane.
  • With this configuration, distance image data including a plane is converted into three-dimensional coordinate data, and the converted data is projected onto a plane; a straight line is detected from the projection image, and the plane parameters are calculated from the straight line. Therefore, the distance image does not need to contain special clues for plane detection, and even if it includes much information unrelated to the plane, such as obstacles, the plane can be detected reliably.
  • Further, in the flat surface detection device according to one aspect of the present invention, the depth direction of the subject in the distance image data is taken as the z axis, with the x axis and the y axis perpendicular to it; the projection image generation means generates projection image data in which the three-dimensional coordinate data is projected onto the yz plane, and the plane parameter calculation means (the plane detection unit 34) calculates, from the projection image data, the plane parameters including the inclination angle of the specific detection target plane with respect to the z axis.
  • the plane parameter calculation means may further determine whether or not the detected straight line lies within a predetermined range.
  • the plane detection device according to one aspect of the present invention further comprises: second three-dimensional coordinate calculation means (three-dimensional coordinate calculation unit 31) for converting the xyz coordinate system into an xy′z′ coordinate system by rotation about the x axis and generating second three-dimensional coordinate data including a detection target three-dimensional point group representing the specific detection target plane in the xy′z′ coordinate system;
  • second projection image generation means (projection image generation unit 32) for projecting the second three-dimensional coordinate data onto the xy′ plane to generate second projection image data in which the detection target three-dimensional point group included in the second three-dimensional coordinate data is linearly distributed;
  • second straight line detection means for detecting the linearly distributed second straight line from the second projection image data; and
  • second plane parameter calculation means for calculating a second plane parameter including information on the inclination of the specific detection target plane based on the detection result of the second straight line detection means.
  • an autonomous mobile device (cleaning robot 1) according to one aspect of the present invention includes the plane detection device (plane detection device 60), distance image generation means (distance image sensor 20) that generates the distance image data, and traveling means (driving wheels 2), and detects a plane as a travel path using the plane detection device.
  • such an autonomous mobile device achieves the same effects as the plane detection device.
  • the autonomous mobile device (cleaning robot 1) according to one aspect of the present invention may include a plurality of the distance image generation means (distance image sensors 20) and calculate the plane parameter from the distance image data generated by each of them.
  • in the embodiments above the distance image sensor 20 uses the infrared projection method, but other types of distance image sensors, such as stereo or TOF (time-of-flight) sensors, can also be used.
  • in the stereo method, parallax is calculated by techniques such as corresponding-point search between the left and right images obtained from a stereo camera pair.
  • the distance to the object can be obtained from the parallax value by the principle of triangulation.
  • plane detection can be realized by the same processing as in the above-described embodiment.
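  • as a hedged illustration of this triangulation step (the focal length and baseline below are assumed example values, not taken from the embodiment):

```python
def disparity_to_distance(disparity_px, focal_px, baseline_m):
    """Classic stereo triangulation: distance = f * B / d.
    Returns None for non-positive disparity (point at infinity or invalid)."""
    if disparity_px <= 0:
        return None
    return focal_px * baseline_m / disparity_px

# Example: a disparity of 10 px with f = 700 px and B = 0.2 m gives 14 m.
```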
  • the floor surface is detected, but it can also be used to detect other planes such as a road surface, a water surface, a wall surface, and a ceiling surface.
  • the cleaning robot 1 has been described as the autonomous mobile device, but the invention can also be applied to other autonomous mobile devices.
  • in the embodiments above the plane detection device 60 is incorporated in the cleaning robot 1, but it can also be used as an independent device for industrial, consumer, and other purposes, or incorporated into part of a general-purpose portable information terminal or the like.
  • each block of the plane detection device may be realized in hardware by a logic circuit formed on an integrated circuit (IC chip), or in software using a CPU (Central Processing Unit).
  • in the latter case, the plane detection device includes a CPU that executes the instructions of the programs realizing each function, a ROM (Read Only Memory) that stores the programs, a RAM (Random Access Memory) into which the programs are expanded, and a storage device (recording medium) such as a memory that stores the programs and various data.
  • An object of the present invention can also be achieved by supplying to the plane detection device a recording medium on which the program code (executable program, intermediate code program, source program) of the control program for the plane detection device, which is software realizing the functions described above, is recorded in a computer-readable manner, and by having the computer (or a CPU or MPU) read and execute the program code recorded on the recording medium.
  • Examples of the recording medium include non-transitory tangible media: tapes such as magnetic tapes and cassette tapes; magnetic disks such as floppy (registered trademark) disks and hard disks; discs including optical disks such as CD-ROM/MO/MD/DVD/CD-R; cards such as IC cards (including memory cards) and optical cards; and semiconductor memories such as mask ROM/EPROM/EEPROM (registered trademark)/flash ROM.
  • logic circuits such as PLD (Programmable logic device) and FPGA (Field Programmable Gate array) can be used.
  • the flat surface detection device may be configured to be connectable to a communication network, and the program code may be supplied via the communication network.
  • the communication network is not particularly limited as long as it can transmit the program code.
  • For example, the Internet, an intranet, an extranet, a LAN, an ISDN, a VAN, a CATV communication network, a virtual private network (Virtual Private Network), a telephone line network, a mobile communication network, a satellite communication network, and the like can be used.
  • the transmission medium constituting the communication network may be any medium that can transmit the program code, and is not limited to a specific configuration or type.
  • For example, wired lines such as IEEE 1394, USB, power line carrier, cable TV lines, telephone lines, and ADSL (Asymmetric Digital Subscriber Line) lines can be used, and wireless connections such as infrared (IrDA, remote control), Bluetooth (registered trademark), IEEE 802.11 wireless, HDR (High Data Rate), NFC (Near Field Communication), DLNA (registered trademark) (Digital Living Network Alliance), mobile phone networks, satellite lines, and digital terrestrial networks can also be used.
  • the present invention can also be realized in the form of a computer data signal embedded in a carrier wave, in which the program code is embodied by electronic transmission.
  • Embodiment 3, which is one form of the road surface level difference detection method according to the present invention, will be described below.
  • the obstacle detection device of Patent Document 1 extracts a white line such as a lane marking in order to recognize the road surface; when it is applied to a vehicle traveling on a road surface without lane markings, such as a senior car (mobility scooter) or an electric wheelchair, the road surface cannot be recognized correctly, which makes it difficult to detect obstacles.
  • in the road surface level difference detection method (first road surface level difference detection method) of the third embodiment, at least a first image and a second image obtained by stereo imaging of a road surface are projected onto X-Y plane coordinates; a detection area centered on specific coordinates (X, Y) is set on the plane coordinates; the parallax v1 that the image of the detection area would have if it were at the road surface position is calculated; a comparison area centered on the coordinates (X−v1, Y), shifted by the parallax v1, is set in the second image; and the image of the detection area is compared with the image of the comparison area to detect the height of the detection area from the road surface.
  • furthermore, the road surface level difference detection method obtains the height difference between adjacent detection areas from the heights of a plurality of detection areas, and determines that there is a step between the detection areas when the height difference is equal to or greater than a threshold value.
  • the road surface level difference detection method also obtains the height difference between adjacent detection areas from the heights of the plurality of detection areas, and determines that there is an inclination between the detection areas when the height difference changes continuously.
  • the road surface level difference detection device (first road surface level difference detection device) according to the third embodiment includes at least a first camera and a second camera that capture a stereo image of a road surface, and a height calculation unit that projects a first image captured by the first camera and a second image captured by the second camera onto X-Y plane coordinates, sets a detection area centered on specific coordinates (X, Y) on the plane coordinates, calculates the parallax v1 that the image of the detection area would have at the road surface position, sets in the second image a comparison area centered on the coordinates (X−v1, Y) shifted by the parallax v1, and compares the image of the detection area with the image of the comparison area to detect the height of the detection area from the road surface.
  • the vehicle of the third embodiment includes the road surface level difference detection device described above.
  • FIG. 13 shows the configuration of the road surface level difference detecting device 1001 of the third embodiment.
  • the road surface level difference detection apparatus 1001 according to the third embodiment includes two cameras 1011 and 1012 that capture a stereo image, and a calculation unit 1020 that processes the stereo image.
  • the calculation unit 1020 includes a height detection unit 1030 that calculates, from the stereo image, the height of each region in which a step is to be detected, and a step detection unit 1040 that determines the presence or absence of a step between detection regions from the height of each detection region. An output device 1050 such as an audio speaker or a display device is further provided to notify the operator of the presence or absence of a step as necessary.
  • FIG. 14 (a) is a top view showing the arrangement of the stereo camera
  • FIG. 14 (b) is a side view showing the mounting position of the stereo camera.
  • the two cameras 1011 and 1012 have the same specifications and a predetermined horizontal angle of view, as shown in FIG. 14A; they are installed, for example, at the front of a senior car or an electric wheelchair, separated left and right by a predetermined distance g.
  • the cameras 1011 and 1012 are installed at a predetermined height hc from the road surface, have a predetermined vertical angle of view and depression angle, and the optical axes of their lenses point downward so as to image the road surface.
  • if this angle is too large, the road surface step being imaged appears small and the detection accuracy of the step is lowered; conversely, if it is too small, the detection range for steps becomes narrow, so the angle must be set appropriately according to the usage conditions.
  • the depression angle is preferably set so that the proportion of the road surface in the image is increased.
  • the mounting height hc of the cameras 1011 and 1012 is preferably set as high as possible in order to widen a detectable range from a small step to a large step.
  • as an example of the specifications and arrangement of the cameras 1011 and 1012, the horizontal and vertical angles of view correspond to a 35 mm size lens, the mounting interval g is 15 to 25 cm, the mounting height hc is 60 to 80 cm, and the depression angle is 10 to 25°.
  • the half angles of the horizontal and vertical angles of view are denoted θ1 and θ2, respectively, and the depression angle is denoted θ3.
  • FIG. 14 shows an embodiment in which the cameras 1011 and 1012 are installed side by side on the left and right, but they can also be installed vertically or obliquely; the detection method is the same.
  • FIG. 15 shows the two images of the stereo camera, which capture a road surface that includes a sidewalk.
  • FIG. 15A is a first image captured by the left camera 1011
  • FIG. 15B is a second image captured by the right camera 1012.
  • FIG. 15C is a diagram in which only the boundary line between the sidewalk and the road surface is extracted by superimposing the first image and the second image. As shown in FIG. 15C, the boundary line appears at shifted positions in the left and right images.
  • the amount of deviation in the left-right direction is the parallax; on a flat road surface, the parallax decreases at a constant rate from the near side toward the far side.
  • the height of the detection region from the road surface is detected by comparing the parallax v1 expected for a flat road surface with the actual parallax v2 obtained by imaging the step detection region.
  • FIG. 16 is a flowchart of the height detection process in the calculation unit 1020.
  • for a detection region centered on arbitrary coordinates (X, Y) of the first image, the calculation unit 1020 obtains the parallax v1 under the assumption that the detection region lies on the road surface, using the Y coordinate, the position information of the camera, and so on; it then determines in the second image a comparison area centered on the coordinates (X−v1, Y) shifted by the parallax v1, and detects the height of the detection area from the road surface from the parallax v2 between the detection area and the comparison area.
  • with the processing from step S1 to step S3 illustrated in FIG. 16, the parallax v1 with respect to the second image, under the assumption that the detection region is on the road surface, is obtained.
  • FIG. 17 is an explanatory diagram illustrating a distance calculation method.
  • FIG. 17A shows the coordinate space 13 into which the first image is converted, with the origin P at the center coordinates (0, 0), a horizontal extent of ±w pixels, and a vertical extent of ±h pixels; the coordinate point (X, Y) of a step detection region is shown in this coordinate space 13.
  • FIG. 17B is a side view showing the focal plane A1 of the cameras 1011 and 1012 and the positional information of the cameras.
  • in step S1, a coordinate point (X, Y) of an arbitrary detection region for detecting a step in the coordinate space 13 is selected.
  • the coordinate space 13 corresponds to a focal plane, that is, a plane perpendicular to the optical axis of the camera 1011 shown in FIG. 17B.
  • the coordinate point (X, Y) in the coordinate space 13 and the origin P also have the same parallax.
  • in step S2, the distance d1 from the camera to the origin P of the focal plane A1, assuming that the coordinate point (X, Y) is on the road surface, is obtained.
  • utilizing the fact that all coordinate points on a focal plane have the same parallax, the distance d1 to the origin P of the focal plane A1 is calculated using as a base point the coordinate point Q(0, Y), which lies on the same focal plane A1 as the coordinate point (X, Y).
  • FIG. 18 is an explanatory diagram showing a parallax calculation method.
  • FIG. 18A is a side view of the camera 1011 and shows the downward angle Δy at which the coordinate point Q is viewed from the camera 1011. Assuming that half of the vertical angle of view of the camera 1011 is θ2, and since the height of the coordinate point Q in FIG. 17A is Y, Δy can be obtained by Equation 1.
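  • Equations 1 to 3 are not reproduced legibly in this text; the following sketch reconstructs the likely geometry under pinhole assumptions (pixel rows ±h map to angles ±θ2, Y is taken positive below the optical axis; these relations are our inference, not verbatim formulas from the specification):

```python
import math

def delta_y(Y, h, theta2):
    """Downward viewing angle of image row Y relative to the optical axis
    (plausible reconstruction of Equation 1)."""
    return math.atan((Y / h) * math.tan(theta2))

def distance_d1(Y, h, theta2, theta3, hc):
    """Slant distance d1 to the focal plane A1 through Q, assuming row Y
    shows the road: the ray leaves the camera hc above the ground at a
    total depression angle theta3 + delta_y (reconstruction of Eqs. 2-3)."""
    return hc / math.sin(theta3 + delta_y(Y, h, theta2))
```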
  • in step S3, the parallax v1 between the first image and the second image, assuming the road surface lies on the focal plane A1, is obtained.
  • the parallax v1 may be obtained using the distance d1 obtained in the above (Equation 3).
  • depending on lens distortion and the like, another value may have to be used for the distance d1, or a correction may be required.
  • FIG. 18B is a top view of the left and right cameras 1011 and 1012. As shown in FIG. 18B, the origin P located on the focal plane A1 of the left camera 1011 is seen by the right camera 1012 in the direction of the angle Δx from the center. This Δx is obtained by the following equation, where g is the distance between the left and right cameras:
  • Δx = arctan(g / d1)   (Formula 4)
  • the origin P appears at the origin coordinates (0, 0) in the left first image, as shown in FIG. 19(a), and, as shown in FIG. 19(b), appears at the point (−v1, 0) in the right second image, where v1 is the parallax in pixels.
  • v1 is obtained by the following expression, where θ1 is half of the horizontal angle of view of the camera.
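  • the expression for v1 itself is not reproduced here; under the same pinhole assumptions (pixel columns ±w map to angles ±θ1), converting the angle Δx of Formula 4 into a pixel count would plausibly read:

```python
import math

def parallax_v1(g, d1, w, theta1):
    """Road-surface parallax in pixels for a point at slant distance d1,
    camera baseline g, and half angle theta1 over a half width of w pixels
    (plausible reconstruction of the unreproduced expression)."""
    delta_x = math.atan(g / d1)                 # Formula 4
    return w * math.tan(delta_x) / math.tan(theta1)

# Example: g = 0.2 m, d1 = 2 m, w = 320 px, theta1 = 30 deg gives
# tan(delta_x) = 0.1, so v1 = 320 * 0.1 / 0.577 ~ 55 px.
```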
  • in step S4, it is determined whether or not the object shown at the coordinates (X, Y) of the detection area in the left first image is at the same height as the road surface. To do this, it suffices to check whether the same object as that shown at the coordinates (X, Y) of the left first image appears at the position of the comparison area (Xr, Y), where Xr = X − v1, in the right second image.
  • the luminance of several pixels around the target point of the left and right images may be taken out and compared. If they match within the range of error factors such as camera noise, the point can be determined to be the same height as the road surface. If it is determined that the two do not match and are shifted to the left or right, it can be determined that the position is higher or lower than the road surface according to the parallax.
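  • a minimal sketch of this neighborhood comparison using a sum of absolute differences (the window size and noise threshold are illustrative values, not taken from the specification):

```python
import numpy as np

def same_height_as_road(img1, img2, X, Y, v1, win=3, thresh=8.0):
    """Compare a small patch around (X, Y) in the first image with the
    patch around (X - v1, Y) in the second image; a match within the
    threshold suggests the point lies at road-surface height."""
    Xr = int(round(X - v1))
    p1 = img1[Y - win:Y + win + 1, X - win:X + win + 1].astype(float)
    p2 = img2[Y - win:Y + win + 1, Xr - win:Xr + win + 1].astype(float)
    return float(np.abs(p1 - p2).mean()) < thresh
```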
  • from this, the height hs of the road surface level difference can be obtained. That is, when the parallax v2 of the object is larger than the parallax v1 of the road surface, the distance d2 to the object is smaller than the distance d1 to the road surface, as shown in FIG. 20A; hs then takes a positive value, and the object can be judged to be higher than the road surface.
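  • the formulas for hs are likewise not reproduced here; a plausible reconstruction by similar triangles, assuming the object lies on the same viewing ray as the expected road point (hc, g, w, θ1, d1 as defined above; assumes v2 > 0):

```python
import math

def step_height(hc, d1, v2, g, w, theta1):
    """Height hs of the observed point above the road surface. The
    measured parallax v2 is inverted back to a slant distance d2, then
    similar triangles along the viewing ray give hs = hc * (1 - d2/d1).
    (Reconstruction; not a verbatim formula from the specification.)"""
    delta_x2 = math.atan((v2 / w) * math.tan(theta1))  # inverse of the v1 formula
    d2 = g / math.tan(delta_x2)
    return hc * (1.0 - d2 / d1)

# v2 > v1 gives d2 < d1 and a positive hs (point above the road),
# matching the sign convention described above.
```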
  • in this way, the height difference of the coordinate point (X, Y) in the first image from the road surface is obtained (step S5).
  • in step S6, the above procedure is repeated at other coordinate points at appropriate intervals, and the process ends when the height from the road surface has been detected over the necessary range of the image.
  • the details of the flowchart shown in FIG. 16 are as described above, and these processes are executed by the calculation unit 1020 shown in FIG. 13. Specifically, they may be realized as software on a PC or a microcomputer, as hardware using an FPGA or an ASIC, or as a combination in which part is processed in hardware and the rest in software.
  • FIG. 21 is an example of a result of applying the above method to the stereo image of FIG. 15.
  • the gradation is displayed according to the height of each detection area, the area having the same height as the road surface is displayed in gray, and the area lower than the road surface is displayed in black.
  • the level difference detection unit 1040 of the calculation unit 1020 will be described.
  • the height from the road surface is compared between detection areas adjacent to each other in the vertical and horizontal directions, and when the height difference is equal to or greater than a threshold value, it is judged that there is a step between them.
  • the height difference threshold for determining a step is set, for example, so as to ensure safety against events such as the tipping over of a senior car or a wheelchair.
  • the boundary determined as a step is indicated by a broken line portion.
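  • a hedged sketch of this neighbor comparison over the grid of detection-area heights (the threshold is an illustrative value; the specification only requires that it be chosen with vehicle safety in mind):

```python
import numpy as np

def detect_steps(heights, thresh=0.05):
    """heights: 2-D array of per-area heights (m) relative to the road.
    Marks every area that differs from a vertical or horizontal
    neighbor by at least the threshold as adjoining a step."""
    steps = np.zeros(heights.shape, dtype=bool)
    dx = np.abs(np.diff(heights, axis=1)) >= thresh
    dy = np.abs(np.diff(heights, axis=0)) >= thresh
    steps[:, 1:] |= dx
    steps[:, :-1] |= dx
    steps[1:, :] |= dy
    steps[:-1, :] |= dy
    return steps
```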
  • FIG. 22 shows an application example of a senior car 1060 as an example of a vehicle provided with the road surface level difference detection device 1001 of the third embodiment.
  • the road surface level difference detection device 1001 is provided in front of the handle 1061 of the senior car 1060 at a height hc from the road surface.
  • the road surface level difference detection device 1001 includes an output device 1050 such as a speaker 1031 and a display device 1032 to notify the driver of the senior car 1060 of the level difference detection result.
  • when a level difference is detected, a buzzer sound or voice guidance is output from the speaker 1031, or characters or figures are displayed on the display device 1032, to notify the driver of the presence of a level difference on the road surface; this makes it possible to avoid dangers such as wheel drop-off or the tipping over of the senior car 1060.
  • the road surface level difference detection device 1001 of the third embodiment may be provided not only in front of the senior car 1060 but also in the rear. As a result, it is possible to avoid danger such as falling wheels even when reversing with poor visibility.
  • the use of the road surface level difference detection device 1001 of the third embodiment is not limited to the senior car 1060; it can be suitably used for any vehicle that needs to detect road surface level differences, for example various vehicles ranging from wheelchairs to forklifts.
  • in a wheelchair, the road surface level difference detection device 1001 is provided in the front-rear and left-right directions, so that even if the wheelchair turns on the spot, the danger of wheel drop-off or tipping over at a level difference around the wheelchair can be prevented.
  • in a forklift, even when the forward field of view is blocked during cargo transport, a low cargo item placed on the road surface can be detected as a step and a collision can be avoided.
  • This provides a road surface level difference detection device that can detect not only front obstacles but also road level differences.
  • Embodiment 4, which is one form of the road surface level difference detection method according to the present invention, will be described below.
  • as noted above, the obstacle detection apparatus of Patent Document 1 extracts a white line such as a lane marking in order to recognize the road surface; when it is applied to a vehicle traveling on a road surface without lane markings, such as a senior car or an electric wheelchair, the road surface cannot be recognized correctly, which makes it difficult to detect obstacles.
  • in the road surface step detection method (second road surface step detection method) of the fourth embodiment, at least a first image and a second image obtained by stereo imaging of a road surface are projected onto X-Y plane coordinates; for each row of image data in a specific Y-axis direction, the parallax that the image would have at the road surface position is calculated; a third image is generated by correcting the second image by shifting it by this parallax for each Y coordinate; and the height from the road surface is detected by comparing the first image and the third image for each step detection area.
  • furthermore, the road surface level difference detection method of the fourth embodiment obtains the height difference between adjacent detection areas from the heights of a plurality of detection areas, and determines that there is a step between the detection areas when the height difference is equal to or greater than a threshold value.
  • the road surface level difference detection method of the fourth embodiment also obtains the height difference between adjacent detection areas from the heights of the plurality of detection areas, and determines that there is an inclination between the detection areas when the height difference changes continuously.
  • the road surface level difference detection device (second road surface level difference detection device) according to the fourth embodiment includes a stereo camera that captures at least a first image and a second image of a road surface, and a height detection unit that projects the captured first and second images onto X-Y plane coordinates, calculates, for each row of image data in a specific Y-axis direction, the parallax that the image would have at the road surface position, generates a third image by correcting the second image by shifting it by this parallax for each Y coordinate, and detects the height from the road surface by comparing the first image and the third image for each step detection region.
  • the vehicle of the fourth embodiment includes the road surface level difference detection device described above.
  • since the configuration of the road surface level difference detection device 1001 of the fourth embodiment is the same as that of the third embodiment described with reference to FIGS. 13 and 14, its description is omitted; only the differences from the third embodiment are described below.
  • as described in the third embodiment, the boundary line appears at shifted positions in the left and right images; the amount of deviation in the left-right direction is the parallax, and the road surface level difference detection apparatus of the fourth embodiment extracts this parallax v1 and detects the height of the level difference above or below the road surface.
  • in a stereo image, the bottom surface of a recess parallel to the road surface is imaged with a distortion different from that of a three-dimensional object, so the height from the road surface is detected using a third image in which this distortion has been corrected.
  • FIGS. 23A to 23C are diagrams for explaining the correction of image distortion: FIG. 23A shows the first image, FIG. 23B shows the second image, and FIG. 23C shows the third image after correction.
  • in the road surface level difference detection method according to the fourth embodiment, after the first image and the second image, which view the road surface from different directions, are captured with the stereo camera, row data obtained by collecting the pixel values at the same Y coordinate are compared between the two images, and the parallax v1 that the objects in the row data would have if they were on the road surface is calculated from the Y coordinate value and the camera information.
  • a third image is then generated by correcting each row of the second image by its parallax v1. The corrected third image and the first image are compared for each detection area, and the height of the detection area from the road surface is detected from the shift amount v2 between the third image and the first image.
  • FIG. 24 is a flowchart of the height detection process in the calculation unit 1020 of the road surface level difference detection device 1001 according to the fourth embodiment.
  • the calculation unit 1020 first obtains the parallax v1 for each row of data by the processing from step S1 to step S3.
  • in order to obtain the parallax v1, it is first necessary to calculate the distance d1 from the camera to the focal plane of the image; the method for obtaining the parallax v1 is described with reference to FIG. 17, which was also used in the third embodiment.
  • the coordinate point (X, Y) indicated in the coordinate space 13 of FIG. 17 is included in the row data.
  • in step S1, row data including the coordinate point (X, Y) of an arbitrary detection area for detecting a step in the coordinate space 13 is selected.
  • the coordinate space 13 corresponds to a focal plane, that is, a plane perpendicular to the optical axis of the camera 1011 shown in FIG. 17B.
  • the coordinate point (X, Y) in the coordinate space 13 and the origin P also have the same parallax.
  • in step S2, the distance d1 from the camera to the origin P of the focal plane A1, assuming that the coordinate point (X, Y) is on the road surface, is obtained. Since this step S2 is the same as step S2 described in the third embodiment, its description is omitted here.
  • in step S3, the parallax v1 between the first image and the second image, assuming the road surface lies on the focal plane A1, is obtained. Since this step S3 is the same as step S3 described in the third embodiment, its description is omitted here.
  • in step S4, after the parallax v1 between the first image and the second image, assuming the road surface, has been obtained for each row of data, the second image is corrected as shown in FIG. 23; specifically, each row of the second image is moved in the X coordinate direction by the v1 pixels corresponding to its Y coordinate.
  • when the parallax v1 is a decimal, interpolation between the two neighboring pixels is used; for example, when v1 is 5.5, the correction writes half of the sum of the fifth and sixth pixels from the left into the position of the zeroth pixel.
  • the third image shown in FIG. 23C is obtained by correcting the row data with the parallax v1 over the entire Y range of the second image; as a result, objects lying on the road surface in the corrected third image have the same shape as in the first image.
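  • a hedged sketch of this per-row correction with fractional-pixel interpolation (v1_of_y stands for the per-row parallax from steps S1 to S3; the linear interpolation mirrors the 5.5-pixel example above):

```python
import numpy as np

def build_third_image(img2, v1_of_y):
    """Shift each row of the second image by its road-surface parallax
    v1 (possibly fractional), interpolating between the two neighboring
    source pixels, to produce the corrected third image."""
    h, w = img2.shape
    img3 = np.zeros((h, w), dtype=float)
    for y in range(h):
        lo = int(np.floor(v1_of_y[y]))
        frac = v1_of_y[y] - lo
        for x in range(w):
            xs = x + lo                      # source column for this pixel
            if 0 <= xs and xs + 1 < w:
                img3[y, x] = (1 - frac) * img2[y, xs] + frac * img2[y, xs + 1]
    return img3

# For v1 = 5.5 the pixel written at x = 0 is half the sum of source
# pixels 5 and 6, matching the example in the text.
```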
  • in step S5, it is determined whether or not the object shown at the (X, Y) coordinates of the first image is at the same height as the road surface.
  • this can be judged by comparing the detection area centered on (X, Y) of the first image with the comparison area centered on (X, Y) of the third image, and checking whether the same object appears at the same position in both areas.
  • the luminance of several pixels around the target point of the left and right images can be extracted and compared. If they match within the range of error factors such as camera noise, the point can be determined to be the same height as the road surface. If it is determined that the two do not match and are shifted to the left or right, it can be determined that the position is higher or lower than the road surface according to the shift amount.
  • in that case, the height difference hs of the road surface step, that is, the height difference of the coordinate point (X, Y) from the road surface, can be obtained by using (Equation 4) to (Equation 8).
  • in step S6, the above procedure is repeated at other coordinate points at appropriate intervals, and the process ends when the height from the road surface has been detected over the necessary range of the image.
  • FIG. 10 is an example of a result of applying the above method to the stereo image of FIG. 15.
  • the gradation is displayed according to the height of each detection area, the area having the same height as the road surface is displayed in gray, and the area lower than the road surface is displayed in black.
  • since the step detection unit 1040 of the calculation unit 1020 has been described in Embodiment 3, its description here is omitted.
  • This provides a road surface level difference detection device that can detect not only front obstacles but also road level differences.
  • the present invention relates to a plane detection device for detecting a plane, such as a floor, included in image data to be measured, and to an autonomous mobile device using the same; the plane detection device itself can be used as an independent device for industrial, consumer, and other purposes, can be incorporated into part of another device, and part or all of it can be realized as an integrated circuit (IC chip).

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Electromagnetism (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Measurement Of Optical Distance (AREA)

Abstract

According to one embodiment, the present invention concerns a cleaning robot (1) comprising a distance image sensor (20) and a plane detection device (60). The plane detection device (60) comprises: a three-dimensional coordinate calculation unit (31) that converts the distance image into three-dimensional coordinate data; a projection image generation unit (32) that creates an image in which the three-dimensional coordinate data are projected onto a plane; a straight line detection unit (33) that detects straight lines in the projected image; and a plane detection unit (34) that detects planes in the three-dimensional coordinate data using the straight lines.
PCT/JP2013/071855 2012-10-25 2013-08-13 Dispositif de détection de plan, dispositif de locomotion autonome doté d'un dispositif de détection de plan, procédé de détection de différence de nivellement de chaussée, dispositif de détection de différence de nivellement de la chaussée et véhicule doté d'un dispositif de détection d'une différence de nivellement de la chaussée WO2014064990A1 (fr)

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
JP2012-235950 2012-10-25
JP2012235950A JP6030405B2 (ja) 2012-10-25 2012-10-25 平面検出装置およびそれを備えた自律移動装置
JP2012238567A JP2014089548A (ja) 2012-10-30 2012-10-30 路面段差検出方法、路面段差検出装置、路面段差検出装置を備えた車両
JP2012238566A JP6072508B2 (ja) 2012-10-30 2012-10-30 路面段差検出方法、路面段差検出装置、路面段差検出装置を備えた車両
JP2012-238567 2012-10-30
JP2012-238566 2012-10-30

Publications (1)

Publication Number Publication Date
WO2014064990A1 true WO2014064990A1 (fr) 2014-05-01

Family

ID=50544371

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2013/071855 WO2014064990A1 (fr) 2012-10-25 2013-08-13 Dispositif de détection de plan, dispositif de locomotion autonome doté d'un dispositif de détection de plan, procédé de détection de différence de nivellement de chaussée, dispositif de détection de différence de nivellement de la chaussée et véhicule doté d'un dispositif de détection d'une différence de nivellement de la chaussée

Country Status (1)

Country Link
WO (1) WO2014064990A1 (fr)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005217883A (ja) * 2004-01-30 2005-08-11 Rikogaku Shinkokai ステレオ画像を用いた道路平面領域並びに障害物検出方法
JP2011027724A (ja) * 2009-06-24 2011-02-10 Canon Inc 3次元計測装置、その計測方法及びプログラム

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
TOM DRUMMOND ET AL.: "Real-Time Visual Tracking of Complex Structures", IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, vol. 24, no. 7, July 2002 (2002-07-01), pages 932 - 946 *

Cited By (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3333828A4 (fr) * 2015-08-04 2018-08-15 Nissan Motor Co., Ltd. Dispositif et procédé de détection de dénivelé
US10339394B2 (en) 2015-08-04 2019-07-02 Nissan Motor Co., Ltd. Step detection device and step detection method
US10932635B2 (en) 2015-10-14 2021-03-02 Toshiba Lifestyle Products & Services Corporation Vacuum cleaner
EP3363342A4 (fr) * 2015-10-14 2019-05-22 Toshiba Lifestyle Products & Services Corporation Aspirateur automatique
CN108136934A (zh) * 2015-11-19 2018-06-08 爱信精机株式会社 移动体
EP3378695A4 (fr) * 2015-11-19 2018-09-26 Aisin Seiki Kabushiki Kaisha Corps mobile
CN108136934B (zh) * 2015-11-19 2021-01-05 爱信精机株式会社 移动体
JP2017117205A (ja) * 2015-12-24 2017-06-29 アイシン精機株式会社 移動体
US10245730B2 (en) * 2016-05-24 2019-04-02 Asustek Computer Inc. Autonomous mobile robot and control method thereof
CN107963120B (zh) * 2016-10-19 2020-11-10 中车株洲电力机车研究所有限公司 一种胶轮低地板智能轨道列车自动转向控制方法
CN107963120A (zh) * 2016-10-19 2018-04-27 中车株洲电力机车研究所有限公司 一种胶轮低地板智能轨道列车自动转向控制方法
JP2018096798A (ja) * 2016-12-12 2018-06-21 株式会社Soken 物標検出装置
JP2018155726A (ja) * 2017-03-15 2018-10-04 株式会社東芝 車両用処理システム
JP2018156617A (ja) * 2017-03-15 2018-10-04 株式会社東芝 処理装置および処理システム
JP2021152543A (ja) * 2017-03-15 2021-09-30 株式会社東芝 車両用処理システム
JP2021170385A (ja) * 2017-03-15 2021-10-28 株式会社東芝 処理装置および処理システム
EP3889720A4 (fr) * 2018-11-29 2021-11-24 Honda Motor Co., Ltd. Machine de travail, méthode de contrôle de machine de travail, et programme
CN112740284A (zh) * 2018-11-30 2021-04-30 多玩国株式会社 动画合成装置、动画合成方法以及记录介质
US11373532B2 (en) 2019-02-01 2022-06-28 Hitachi Astemo, Ltd. Pothole detection system
JP7256659B2 (ja) 2019-03-07 2023-04-12 株式会社Subaru 路面計測装置、路面計測方法、及び路面計測システム
JP2020144023A (ja) * 2019-03-07 2020-09-10 株式会社Subaru 路面計測装置、路面計測方法、及び路面計測システム
EP4057880A4 (fr) * 2020-04-22 2023-01-11 Samsung Electronics Co., Ltd. Robot nettoyeur et son procédé de commande
WO2021215688A1 (fr) 2020-04-22 2021-10-28 Samsung Electronics Co., Ltd. Robot nettoyeur et son procédé de commande
US11653808B2 (en) 2020-04-22 2023-05-23 Samsung Electronics Co., Ltd. Robot cleaner and controlling method thereof
CN113050103A (zh) * 2021-02-05 2021-06-29 上海擎朗智能科技有限公司 一种地面检测方法、装置、电子设备、系统及介质
CN112947449A (zh) * 2021-02-20 2021-06-11 大陆智源科技(北京)有限公司 防跌落装置、机器人和防跌落方法
WO2022179270A1 (fr) * 2021-02-23 2022-09-01 京东科技信息技术有限公司 Procédé et appareil de déplacement de robot, dispositif électronique, support de stockage et produit programme
CN115198605A (zh) * 2022-07-20 2022-10-18 成都宁顺智能设备有限公司 一种针对高速公路路面微小形变的远距离检测方法
WO2024195097A1 (fr) * 2023-03-23 2024-09-26 日本電気株式会社 Dispositif de traitement d'informations, procédé de traitement d'informations et support de stockage

Similar Documents

Publication Publication Date Title
WO2014064990A1 (fr) Dispositif de détection de plan, dispositif de locomotion autonome doté d'un dispositif de détection de plan, procédé de détection de différence de nivellement de chaussée, dispositif de détection de différence de nivellement de la chaussée et véhicule doté d'un dispositif de détection d'une différence de nivellement de la chaussée
JP6030405B2 (ja) 平面検出装置およびそれを備えた自律移動装置
JP6132659B2 (ja) 周囲環境認識装置、それを用いた自律移動システムおよび周囲環境認識方法
US11433880B2 (en) In-vehicle processing apparatus
EP3415281B1 (fr) Robot de nettoyage et procédé pour le commander
EP3104194B1 (fr) Système de positionnement de robot
US8467902B2 (en) Method and apparatus for estimating pose of mobile robot using particle filter
EP3447532B1 (fr) Capteur hybride avec caméra et lidar et objet mobile
JP5124351B2 (ja) 車両操作システム
TWI401175B (zh) Dual vision front vehicle safety warning device and method thereof
US20180165833A1 (en) Calculation device, camera device, vehicle, and calibration method
KR20190131402A (ko) 카메라와 라이다를 이용한 융합 센서 및 이동체
US12033400B2 (en) Overhead-view image generation device, overhead-view image generation system, and automatic parking device
JP6565188B2 (ja) 視差値導出装置、機器制御システム、移動体、ロボット、視差値導出方法、およびプログラム
CN113110451B (zh) 一种深度相机与单线激光雷达融合的移动机器人避障方法
JP2007235642A (ja) 障害物検知システム
JP2007334859A (ja) 物体検出装置
JP4539388B2 (ja) 障害物検出装置
JP2014089548A (ja) 路面段差検出方法、路面段差検出装置、路面段差検出装置を備えた車両
JP2023083305A (ja) 掃除地図表示装置
JP6543935B2 (ja) 視差値導出装置、機器制御システム、移動体、ロボット、視差値導出方法、およびプログラム
JP2014106638A (ja) 移動装置および制御方法
KR101965739B1 (ko) 이동 로봇 및 이의 제어 방법
JP6781535B2 (ja) 障害物判定装置及び障害物判定方法
JP2010250743A (ja) 自動走行車両及び道路形状認識装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13848787

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 13848787

Country of ref document: EP

Kind code of ref document: A1