WO2005087452A1 - Robot device, behavior control method for the robot device, and moving device - Google Patents


Info

Publication number
WO2005087452A1
Authority
WO
WIPO (PCT)
Prior art keywords
plane
tread
information
stair
robot device
Prior art date
Application number
PCT/JP2005/004838
Other languages
French (fr)
Japanese (ja)
Other versions
WO2005087452A9 (en)
Inventor
Steffen Gutmann
Masaki Fukuchi
Original Assignee
Sony Corporation
Priority date
Filing date
Publication date
Application filed by Sony Corporation filed Critical Sony Corporation
Priority to JP2006511065A priority Critical patent/JP4618247B2/en
Publication of WO2005087452A1 publication Critical patent/WO2005087452A1/en
Publication of WO2005087452A9 publication Critical patent/WO2005087452A9/en

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B62LAND VEHICLES FOR TRAVELLING OTHERWISE THAN ON RAILS
    • B62DMOTOR VEHICLES; TRAILERS
    • B62D57/00Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless track, alone or in addition to wheels or endless track
    • B62D57/02Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless track, alone or in addition to wheels or endless track with ground-engaging propulsion means, e.g. walking members
    • B62D57/032Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless track, alone or in addition to wheels or endless track with ground-engaging propulsion means, e.g. walking members with alternately or sequentially lifted supporting base and legs; with alternately or sequentially lifted feet or skid
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/97Determining parameters from multiple pictures

Definitions

  • Robot device, operation control method thereof, and moving device
  • The present invention relates to a robot apparatus, a moving apparatus, and an operation control method therefor that have moving means such as legs and can perform a climbing and descending operation on stairs comprising a plurality of steps.
  • As a conventional technique, as shown in FIG. 3, there is a method in which infrared sensors are provided on the sides of the soles and a stair climbing operation is performed by applying landmark tape to the stairs (Japanese Patent No. 3330710).
  • In this method, a plurality of optical sensors 682 are provided on the left and right sides of the feet 622R/L of a bipedal walking robot device.
  • A landmark 680, a linear surface area of predetermined width painted with a paint such as black that absorbs light well, is applied to the stairs.
  • The relative direction with respect to the landmark 680 is detected by comparing the outputs of paired sensors.
  • The present invention has been proposed in view of such conventional circumstances, and an object of the present invention is to provide a robot apparatus, a moving apparatus, and an operation control method for a robot apparatus that allow the moving body itself to acquire information on stairs and perform an autonomous stair climbing and descending operation.
  • A robot device according to the present invention is a robot device movable by moving means, and is characterized by comprising: plane detection means for detecting one or more planes included in the environment from three-dimensional distance data and outputting them as plane information; stair recognition means for recognizing, from the plane information, a stair having a plane on which the device can move, and outputting stair information including tread information and rise (kick-up) information relating to the treads of the stair; and stair climbing control means for determining, based on the stair information, whether the stairs can be climbed or descended and, if it is determined that they can, controlling the climbing operation by autonomously positioning the device with respect to the tread.
  • That is, in a robot device movable with, for example, legs as the moving means, it is determined from the tread information (for example, the size and position of the treads of the stairs) whether the sole can be placed on a tread, and from the rise information (the height of each step) whether the device can move to a tread at that height; if movement is judged possible, the device positions itself autonomously with respect to the tread, making it possible to climb and descend stairs.
  • The stair recognition means may comprise stair detection means that detects, from the given plane information, a stair having a movable plane and outputs pre-integration stair information, and stair integration means that statistically integrates a plurality of pieces of pre-integration stair information output from the stair detection means at different times and outputs the result as the integrated stair information mentioned above. For example, for a robot device with a narrow field of view, or when stairs cannot be recognized well in a single detection, accurate and frequent recognition results can still be obtained by using stair information integrated statistically over time.
  • The stair detection means recognizes the size and spatial position of each tread based on the plane information and outputs the recognition result as the pre-integration stair information. When a group of two or more treads whose overlapping area is larger than a predetermined threshold and whose relative difference in height is equal to or less than a predetermined threshold is found in pre-integration stair information obtained at different times, those treads can be integrated into a single tread; by integrating all treads selected for integration, recognition results can be obtained over a wide range.
  • The stair recognition means can recognize the size and spatial position of each tread based on the plane information and output the result as the tread information.
  • The tread information can include at least information on the front edge, indicating the near-side boundary of the tread with respect to the moving direction, and the back edge, indicating the far-side boundary. By recognizing the front edge and back edge of each tread, even stairs such as spiral stairs can be recognized and climbed or descended.
  • The tread information can also include right margin information and left margin information indicating margin areas adjacent to the left and right of the safety area, which is the area sandwiched between the front edge and the back edge and estimated with high probability to be traversable,
  • reference point information indicating the center of gravity of the area estimated to be a tread based on the plane information,
  • and three-dimensional coordinate information of the point group forming the tread plane.
  • The stair recognition means can extract the boundary of a plane based on the plane information to obtain a polygon, and calculate the tread information from that polygon.
  • The polygon can be a convex polygonal area circumscribing the boundary of the plane extracted from the plane information, i.e., an area that includes the actually detected plane.
  • Alternatively, the polygon may be a convex polygonal area inscribed in the boundary of the plane extracted from the plane information, i.e., an area contained within the actually detected plane; in this way the tread can be detected accurately by cutting off noisy portions.
  • The stair climbing control means can control the device to move to a predetermined position relative to the back edge of the surface on which it currently stands and then perform the climbing operation.
  • When the back edge of the current surface and the front edge of the next tread overlap, for example on stairs with a small rise,
  • the climbing operation may instead be performed with the front edge as the target.
  • The stair climbing control means can also move the device to a predetermined position facing the front edge of the next tread to be climbed and then perform the climbing operation. For example, if stairs are detected while the device is moving on the floor, the back edge of the floor may not coincide with the front edge of the first step of the stairs; in that case the device can climb by moving to the front edge of the tread of the first step, that is, the next surface to be climbed.
  • The stair climbing control means can perform the climbing operation as a series of operations in which it detects the tread to be moved to next and moves to a predetermined position facing that tread; each time the device moves onto a new tread, a search, align, and approach operation is performed on the next tread, enabling the climbing operation.
  • The stair climbing control means can also search for the next tread to be moved to using stair information obtained in the past. By acquiring information on several steps above or below in advance, the robot device can climb and descend even when its field of view is so narrow that it cannot obtain up-to-date information at every step.
  • The stair climbing control means can detect the next tread after moving to a predetermined position on the current surface relative to the back edge, and then perform the climbing operation onto that tread from a predetermined position relative to the front edge. By using both the front edge and the back edge in this way, even a spiral staircase can be climbed and descended.
  • The climbing control means can control the climbing operation using parameters that define the position of the moving means with respect to the tread, and these parameters can be determined based on, for example, the height to which the leg is raised or the height to which the foot is lowered. Parameter switching means can further be provided to change the parameter values between the operation of climbing the stairs and the operation of descending them, so that both operations can be controlled in the same manner.
  • The plane detection means can comprise line segment extraction means for extracting a line segment from each group of distance data points estimated to lie on the same plane in three-dimensional space,
  • and plane area extension means for selecting, from the extracted line segments, a plurality of line segments estimated to belong to the same plane and calculating a plane from those line segments.
  • The line segment extraction means can extract line segments adaptively according to the distribution of the distance data points. Even when three-dimensional distance data points lie on the same plane and should therefore fall on the same straight line,
  • their distribution varies due to the effects of noise and other factors. By adaptively extracting line segments according to the distribution of the distance data (adaptive line fitting), line segments can be extracted accurately and robustly against noise, and planes can then be obtained from the large number of extracted line segments by a line segment extension method. This makes it possible to extract planes accurately, without merging several planes into one or detecting several planes where only one exists.
  • The line segment extraction means can first extract a group of distance data points estimated to lie on the same plane based on the distance between the distance data points, and then re-estimate, from the distribution of the points within that group, whether they really lie on the same plane. By first grouping the points by their distance in three-dimensional space and then re-estimating from their distribution whether they lie on the same plane, line segments can be extracted accurately.
  • The line segment extraction means can extract a first line segment from the group of distance data points estimated to lie on the same plane, and take as the point of interest the data point whose distance from the first line segment is largest. When that distance is equal to or smaller than a predetermined threshold, a second line segment is extracted from the point group and it is determined whether a predetermined number or more of data points lie consecutively on one side of the second line segment; if so, the point group can be divided at the point of interest.
  • That is, a line segment connecting the end points of the point group is taken as the first line segment, and when there is a point at a large distance from it, a second line segment is generated, for example, by the least squares method. If many data points lie consecutively on one side of the second line segment, the point group can be assumed to have a zigzag shape with respect to the line segment, i.e., to be biased, and the point group is therefore divided at the point of interest or the like (a sketch of this check is shown below).
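The following is a minimal sketch of the adaptive line-segment splitting described above: the first line joins the end points of the point group, the farthest point becomes the point of interest, and a run of points on one side of a least-squares second line is taken as the signal for splitting. The point format, the helper names, and the thresholds DIST_MAX and RUN_MIN are illustrative assumptions, not values from the patent.

```python
import numpy as np

DIST_MAX = 0.01   # hypothetical max point-to-line distance [m]
RUN_MIN = 5       # hypothetical run length that signals a biased (non-zigzag) fit

def point_line_dist(p, a, b):
    """Distance from 2-D point p to the line through a and b."""
    ab = b - a
    ap = p - a
    return abs(ab[0] * ap[1] - ab[1] * ap[0]) / (np.linalg.norm(ab) + 1e-12)

def split_index(points):
    """Return the index at which to split the point group, or None to keep it whole."""
    pts = np.asarray(points, dtype=float)
    a, b = pts[0], pts[-1]                      # first line: segment joining the end points
    d = np.array([point_line_dist(p, a, b) for p in pts])
    i_far = int(np.argmax(d))                   # point of interest
    if d[i_far] > DIST_MAX:                     # assumption: a large deviation always splits
        return i_far
    # second line: least-squares fit, then look for a long run of points on one side
    A = np.column_stack([pts[:, 0], np.ones(len(pts))])
    k, c = np.linalg.lstsq(A, pts[:, 1], rcond=None)[0]
    side = np.sign(pts[:, 1] - (k * pts[:, 0] + c))
    run, best = 1, 1
    for s_prev, s in zip(side, side[1:]):
        run = run + 1 if (s == s_prev and s != 0) else 1
        best = max(best, run)
    return i_far if best >= RUN_MIN else None
```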
  • The plane area extension means selects one or more line segments estimated to belong to the same plane and calculates a reference plane from them;
  • it then repeatedly retrieves from the line segment group a line segment estimated to belong to the same plane as the reference plane (an extension line segment), updates the reference plane with that line segment, and thereby extends the area of the reference plane; the updated plane can be output as the result.
  • In this way the plane area extension processing and the plane update processing are performed only with line segments that belong to the same plane.
  • Plane recalculation means can further be provided which, if any distance data point belonging to the updated plane lies at a distance from that plane exceeding a predetermined threshold, removes such points from the point group and recalculates the plane. Since the updated plane is obtained as an average plane of all line segments belonging to it, recalculating it in this way gives a detection result in which the influence of noise and the like is further reduced.
  • The plane area extension means can estimate whether a line segment belongs to the same plane as the reference plane based on the error between the plane determined by the line segment and the reference plane.
  • For example, by judging from the root mean square error whether a deviation is due to noise or to a different plane, the plane can be detected more accurately (a region-growing sketch follows).
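As a rough illustration of the plane-area extension and recalculation just described, the sketch below grows a reference plane by absorbing line segments (given as lists of 3-D points) whose RMS distance to the plane is small, then refits the plane after discarding outlier points. The plane fit, the segment representation, and the threshold RMS_MAX are assumptions made for the example, not the patent's implementation.

```python
import numpy as np

RMS_MAX = 0.005  # hypothetical threshold on point-to-plane RMS error [m]

def fit_plane(points):
    """Least-squares plane through 3-D points, returned as (unit normal n, offset d) with n.p = d."""
    pts = np.asarray(points, dtype=float)
    c = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - c)
    n = vt[-1]                                  # direction of least variance = plane normal
    return n, float(n @ c)

def rms_to_plane(points, plane):
    n, d = plane
    pts = np.asarray(points, dtype=float)
    return float(np.sqrt(np.mean((pts @ n - d) ** 2)))

def grow_plane(seed_segments, other_segments):
    """Extend a reference plane with segments that fit it, then refit without outliers."""
    region = [p for seg in seed_segments for p in seg]
    plane = fit_plane(region)
    remaining = list(other_segments)
    changed = True
    while changed:
        changed = False
        keep = []
        for seg in remaining:
            if rms_to_plane(seg, plane) < RMS_MAX:   # extension segment: same plane as reference
                region.extend(seg)
                plane = fit_plane(region)            # update the (average) reference plane
                changed = True
            else:
                keep.append(seg)
        remaining = keep
    # plane recalculation: drop points far from the updated plane and refit
    n, d = plane
    inliers = [p for p in region if abs(np.asarray(p, dtype=float) @ n - d) < 3 * RMS_MAX]
    return fit_plane(inliers), remaining
```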
  • An operation control method for a robot device according to the present invention is a method for controlling a robot device movable by moving means, comprising: a plane detection step of detecting one or more planes included in the environment from three-dimensional distance data and outputting them as plane information;
  • a stair recognition step of recognizing, from the plane information, a stair having a movable plane and outputting stair information including tread information and rise (kick-up) information relating to the treads of the stair; and a stair climbing control step of determining, based on the stair information, whether the stairs can be climbed or descended and, if it is determined that they can, controlling the stair climbing operation by autonomously positioning the device with respect to the tread.
  • A moving device according to the present invention is a device movable by moving means, comprising: plane detection means for detecting one or more planes included in the environment from three-dimensional distance data and outputting them as plane information;
  • stair recognition means for recognizing, from the plane information, a stair having a movable plane and outputting stair information including tread information and rise (kick-up) information relating to the treads of the stair; and stair climbing control means for determining, based on the stair information, whether stair climbing is possible and, if so, controlling the stair climbing operation by autonomously positioning the device with respect to the tread.
  • In a robot device or moving device that moves with, for example, legs as the moving means, it can thus be determined from the tread information (for example, the size and position of the treads of the stairs) whether the sole can be placed on a tread, and from the rise (kick-up) information indicating the step height whether the device can move to a tread at that height; if movement is judged possible, the device positions itself autonomously with respect to the tread and can thereby climb and descend stairs.
  • FIG. 1 is a diagram illustrating a conventional elevating operation.
  • FIG. 2 is a diagram illustrating a conventional elevating operation.
  • FIG. 3A and FIG. 3B are diagrams illustrating a conventional elevating operation.
  • FIG. 4 is a perspective view showing an overview of a robot device according to an embodiment of the present invention.
  • FIG. 5 is a view schematically showing a configuration of a degree of freedom of a joint included in the robot apparatus.
  • FIG. 6 is a schematic diagram showing a control system configuration of a robot device.
  • FIG. 7 is a functional block diagram showing the system that executes processing from acquisition of stereo data by the robot apparatus to the stair climbing operation.
  • FIG. 8A is a schematic diagram showing a state in which the robot apparatus photographs the outside world
  • FIG. 8B is a view showing the size of the sole of the robot apparatus.
  • FIG. 9 is a diagram for explaining staircase detection.
  • FIG. 9A is a diagram of the stairs viewed from the front
  • FIG. 9B is a diagram of the stairs viewed from the side
  • FIG. 10 is a diagram illustrating another example of staircase detection.
  • FIG. 10A is a diagram of the stairs viewed from the front
  • FIG. 10B is a diagram of the stairs viewed from the side
  • FIG. 11 is a diagram showing an example of a result of detecting the stairs in FIG. 9;
  • FIG. 11A is a schematic diagram showing an image obtained by photographing the stairs in FIG. 9;
  • FIG. 11B is a diagram showing three-dimensional distance data acquired from the image shown in FIG. 11A.
  • FIG. 12 is a diagram showing an example of a result of detecting the stairs in FIG. 10;
  • FIG. 12A is a schematic diagram showing an image obtained by photographing the stairs in FIG. 10;
  • FIG. 12B is a diagram showing three-dimensional distance data acquired from the image shown in FIG. 12A.
  • FIG. 13A is a schematic diagram showing an image of a staircase
  • FIG. 13B is a diagram showing the result of detecting four plane areas A, B, C, and D from the three-dimensional distance data obtained from the image of FIG. 13A.
  • FIG. 14 is a functional block diagram showing a staircase recognizer.
  • FIG. 15 is a flowchart showing a procedure of a staircase detection process.
  • FIG. 16A and FIG. 16B are schematic diagrams showing polygons.
  • FIG. 17 is a schematic diagram for explaining the algorithm of Melkman.
  • FIG. 18A and FIG. 18B are schematic diagrams for explaining a method for obtaining a polygon by Sklansky's algorithm.
  • FIG. 19 is a schematic diagram for explaining a problem that occurs with a non-convex polygonal staircase.
  • FIG. 19A is a diagram showing an input plane, and FIG. 19B is a diagram showing the result of representing a non-convex polygonal staircase by a convex polygon.
  • FIG. 20 is a schematic diagram showing a method for obtaining a polygon including an input plane by smoothing.
  • FIG. 20A is a diagram illustrating an input plane,
  • FIG. 20B is a diagram illustrating the polygon obtained by removing discontinuous gaps from the polygon indicating the input plane, and
  • FIG. 20C is a diagram illustrating the polygon obtained by further smoothing the polygon of FIG. 20B by line fitting.
  • FIG. 21 is a diagram showing a program example of a process of obtaining a polygon including an input plane by smoothing by gap removal and line fitting.
  • FIG. 22A and FIG. 22B are schematic diagrams for explaining a method of calculating staircase parameters.
  • FIG. 23 is a schematic diagram for explaining tread and staircase parameters finally recognized.
  • FIG. 24A and FIG. 24B are schematic diagrams showing stairs.
  • FIG. 25 is a flowchart showing a method of staircase integration processing.
  • FIG. 26 is a schematic diagram for explaining a process of integrating overlapping staircase data.
  • FIG. 27 is a diagram for explaining an alignment operation.
  • FIG. 28 is a schematic diagram for explaining an approach operation.
  • FIG. 29 is a flowchart showing a procedure of a stair climbing operation.
  • FIG. 30 is a flowchart showing a search-alignment-approach processing method.
  • FIG. 31 is a flowchart showing a method of a lifting operation process.
  • FIG. 32 is a schematic diagram showing a staircase surface recognized or scheduled to be recognized by the robot device.
  • FIG. 33 is a flowchart showing a method of a lifting operation process.
  • FIG. 34 is a flowchart showing a method of a lifting operation process.
  • FIG. 35A is a diagram for explaining a relationship between a tread and a sole recognized by a robot device
  • FIG. 35B is a diagram illustrating dimensions of respective parts.
  • FIG. 36 is a traced image of a state in which the robot device has performed a vertical movement.
  • FIG. 37 is a traced image of a state in which the robot apparatus performs a vertical movement.
  • FIG. 38 is a diagram showing the relationship between a single step and the sole of the robot apparatus.
  • FIG. 39 is a view showing the relationship between a single recess and the sole of the robot apparatus.
  • FIG. 40 is a functional block diagram showing a plane detection apparatus in this modification.
  • FIG. 41 is a diagram for explaining a robot apparatus having a means for giving a texture.
  • FIG. 42 is a diagram illustrating a plane detection method by a line segment extension method in this modification.
  • FIG. 43 is a flowchart showing plane detection processing by the line segment extension method.
  • FIG. 44 is a flowchart showing details of processing in a line segment extraction unit in this modification.
  • FIG. 45 is a diagram showing a distribution of distance data points.
  • FIG. 45A shows a case where the distribution of data is zigzag with respect to a line segment
  • FIG. 45B is a schematic diagram showing a case where the data are distributed uniformly in the vicinity of the line segment.
  • FIG. 46 is a flowchart showing a Zig-Zag-Shape discrimination method according to the present modification.
  • FIG. 47 is a diagram showing a program example of the Zig-Zag-Shape discrimination processing.
  • FIG. 48 is a block diagram illustrating a processing unit that performs a Zig-Zag-Shape determination process.
  • FIG. 49 is a schematic diagram for explaining an area extension process in the present modification.
  • FIG. 50 is a flowchart showing a procedure of a process of searching for an area type and an area expanding process in an area expanding unit in the present modification.
  • FIG. 51 is a diagram showing an example in which the root-mean-square error rms of the plane equation is different even when the distance between the end point and the straight line is equal.
  • FIG. 51A shows a case where the line segment deviates from the plane due to the influence of noise or the like, and
  • FIG. 51B is a schematic diagram showing a case where there is another plane to which the line segment belongs.
  • FIG. 52 is a diagram showing an area type selection process.
  • FIG. 53 is a diagram showing an area extension process.
  • FIG. 54A is a schematic view showing a floor surface when the robot apparatus is standing and looking down on the floor surface.
  • FIG. 54B is a diagram showing the straight lines detected, by the line segment extraction processing applied row by row to the three-dimensional distance data, from the data point groups assumed to lie on a plane, where the vertical axis is x, the horizontal axis is y, and the z value is expressed by the shading of each data point, and
  • FIG. 54C is a diagram showing the plane region obtained from the group of straight lines shown in FIG. 54B by the region extension process.
  • FIG. 55 is a diagram for explaining the difference between the result of the plane detection method according to the present modification and the result of the conventional plane detection method when one step is placed on the floor surface.
  • FIG. 55A is a schematic diagram showing the observed image
  • FIG. 55B is a diagram showing the experimental conditions
  • FIG. 55C is a diagram showing the result of plane detection by the plane detection method in the present modified example
  • FIG. 55D is a diagram showing the result of plane detection by a conventional plane detection method.
  • FIG. 56A is a schematic diagram showing an image of the floor, and FIGS. 56B and 56C are diagrams showing, for horizontal and vertical rows of the three-dimensional distance data obtained by capturing the floor shown in FIG. 56A, the line segments detected by the line segment detection of this modification and those detected by conventional line segment detection.
  • In the embodiment described below, the present invention is applied to an autonomously operable robot device equipped with a stair recognition device for recognizing steps such as stairs present in the surrounding environment.
  • The robot device uses distance information obtained by stereo vision or the like.
  • A bipedal walking robot device will be described as an example of such a robot device.
  • This robot device is a practical robot that supports human activities in various situations of everyday life in the living environment and elsewhere; it is an entertainment robot device that can act according to its internal state (anger, sadness, joy, enjoyment, etc.) and can also display the basic actions performed by humans.
  • Although a bipedal walking robot device is described here as an example, the stair recognition device is not limited to bipedal devices; it can be mounted on any legged mobile robot device, which can then perform the stair climbing operation.
  • FIG. 4 is a perspective view showing an overview of the robot device according to the present embodiment.
  • In the robot device 201, a head unit 203 is connected to a predetermined position of a trunk unit 202, and left and right arm units 204R/L and left and right leg units 205R/L are also connected.
  • R and L are suffixes indicating right and left; the same applies hereinafter.
  • FIG. 5 schematically shows the configuration of the degrees of freedom of the joints provided in the robot apparatus 201.
  • The neck joint supporting the head unit 203 has three degrees of freedom: a neck joint yaw axis 101, a neck joint pitch axis 102, and a neck joint roll axis 103.
  • Each arm unit 204R/L constituting the upper limbs comprises a shoulder joint pitch axis 107, a shoulder joint roll axis 108, an upper arm yaw axis 109, an elbow joint pitch axis 110, a forearm yaw axis 111, a wrist joint pitch axis 112, a wrist joint roll axis 113, and a hand 114.
  • The hand 114 is actually a multi-joint, multi-degree-of-freedom structure including a plurality of fingers. However, since the movement of the hand 114 contributes little to, and has little influence on, the posture control and walking control of the robot device 201, it is assumed in this specification to have zero degrees of freedom for simplicity. Each arm therefore has seven degrees of freedom.
  • The trunk unit 202 has three degrees of freedom: a trunk pitch axis 104, a trunk roll axis 105, and a trunk yaw axis 106.
  • Each leg unit 205R/L constituting the lower limbs comprises a hip joint yaw axis 115, a hip joint pitch axis 116, a hip joint roll axis 117, a knee joint pitch axis 118, an ankle joint pitch axis 119, an ankle joint roll axis 120, and a sole 121.
  • the intersection of the hip joint pitch axis 116 and the hip joint roll axis 117 defines the hip joint position of the robot device 201.
  • Although the human sole is actually a multi-joint, multi-degree-of-freedom structure, the sole 121 of the robot device 201 is assumed in this specification to have zero degrees of freedom for simplicity. Each leg therefore has six degrees of freedom.
  • The robot device 201 as a whole thus has 32 degrees of freedom, although a robot device 201 for entertainment is not necessarily limited to 32 degrees of freedom.
  • Needless to say, the number of degrees of freedom, that is, the number of joints, can be increased or decreased as appropriate according to design and production constraints, required specifications, and so on.
  • Each degree of freedom of the robot device 201 described above is actually implemented using an actuator. Because of demands such as eliminating excess bulges in appearance to approximate the human body shape and controlling the posture of an unstable structure that walks on two legs, the actuators are preferably small and lightweight.
  • Such a robot device includes a control system that controls the operation of the entire robot device, for example, in the trunk unit 202 or the like.
  • FIG. 6 is a schematic diagram illustrating the control system configuration of the robot device 201. As shown in FIG. 6, the control system comprises a thought control module 200, which dynamically responds to user input and the like and performs emotion judgment and emotional expression, and a motion control module 300, which controls the whole-body cooperative motion of the robot device 201, such as driving the actuators 350.
  • The thought control module 200 comprises a CPU (Central Processing Unit) 211 that executes arithmetic processing related to emotion judgment and emotional expression, a RAM (Random Access Memory) 212, a ROM (Read Only Memory) 213, an external storage device (hard disk drive or the like) 214,
  • and so on; it is an independently driven information processing device that can perform self-contained processing within the module.
  • The thought control module 200 determines the current emotion and intention of the robot device 201 according to external stimuli, such as image data input from the image input device 251 and audio data input from the audio input device 252. That is, by recognizing the user's facial expression from the input image data and reflecting that information in the emotion and intention of the robot device 201, the device can express behavior corresponding to the user's facial expression.
  • Here, the image input device 251 includes, for example, a plurality of CCD (Charge Coupled Device) cameras, and a distance image can be obtained from the images captured by these cameras.
  • the audio input device 252 includes, for example, a plurality of microphones.
  • The thought control module 200 issues commands to the motion control module 300 to execute motion or action sequences based on its decisions, that is, movements of the limbs.
  • The motion control module 300, which controls the whole-body cooperative motion of the robot device 201,
  • is an independently driven information processing device comprising a CPU 311, a RAM 312, a ROM 313, an external storage device (hard disk drive or the like) 314, and so on, and can perform self-contained processing within the module. The external storage device 314 can store, for example, walking patterns calculated offline, target ZMP trajectories, and other action plans.
  • To the motion control module 300, various devices are connected via a bus interface (I/F) 310: the actuators 350 that realize the joint degrees of freedom distributed over the whole body of the robot device 201 shown in FIG. 5, a distance measurement sensor (not shown) for measuring the distance to an object,
  • a posture sensor 351 for measuring the posture and inclination of the trunk unit 202,
  • ground contact confirmation sensors 352 and 353 for detecting the leaving and landing of the left and right soles,
  • load sensors provided on the soles 121,
  • and a power supply control device 354 that manages power sources such as the battery.
  • The posture sensor 351 is configured by, for example, a combination of an acceleration sensor and a gyro sensor,
  • and the ground contact confirmation sensors 352 and 353 are configured by proximity sensors, micro switches, or the like.
  • The thought control module 200 and the motion control module 300 are built on a common platform and are interconnected via bus interfaces 210 and 310.
  • The motion control module 300 controls whole-body cooperative motion through the actuators 350 so as to embody the behavior instructed by the thought control module 200. That is, the CPU 311 retrieves from the external storage device 314 an operation pattern corresponding to the instructed action, or generates an operation pattern internally. The CPU 311 then sets the foot motion, ZMP trajectory, trunk motion, upper limb motion, horizontal position and height of the waist, and so on according to the specified motion pattern, and transfers command values directing the corresponding motion to each actuator 350.
  • The CPU 311 also detects the posture and inclination of the trunk unit 202 of the robot device 201 based on the output signal of the posture sensor 351, and detects from the output signals of the ground contact confirmation sensors 352 and 353 whether each leg unit 205R/L is a swing leg or a stance leg, thereby adaptively controlling the whole-body cooperative motion.
  • Furthermore, the CPU 311 controls the posture and motion of the robot device 201 so that the ZMP position always moves toward the center of the ZMP stable region.
  • The motion control module 300 also returns to the thought control module 200 the state of its processing, that is, to what degree the action decided by the thought control module 200 has been carried out. In this way, the robot device 201 can judge its own state and the surrounding situation based on the control program and act autonomously.
  • a stereo vision system is mounted on the head unit 203, and three-dimensional distance information of the outside world can be acquired.
  • In the present embodiment, planes are detected using three-dimensional distance data of the surrounding environment acquired by the stereo vision system preferably mounted on such a robot device.
  • FIG. 7 is a functional block diagram showing the system that executes the processing from acquisition of stereo data by the robot device to the stair climbing operation.
  • The robot device comprises a stereo vision system (Stereo Vision System) 1 as distance data measuring means for acquiring three-dimensional distance data,
  • a plane detector 2 that receives the stereo data D1 from the stereo vision system 1 and detects planes,
  • and a stair recognizer 3 and a stair climbing controller 4 that operate on the detected planes, as described below.
  • The robot device first observes the outside world by stereo vision and outputs stereo data D1, which is three-dimensional distance information calculated from the parallax between the two cameras, in the form of an image. That is, it compares the image inputs of two cameras, corresponding to the left and right human eyes, for each pixel neighborhood, estimates the distance to the target from the resulting parallax map, and outputs the three-dimensional distance information as an image (a distance image). A sketch of this disparity-to-depth relation is shown after this paragraph.
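The patent does not give the conversion formulas, but the standard pinhole-stereo relation behind such a distance image is sketched below for reference: depth is recovered from disparity as Z = f·B/d and back-projected to 3-D. The focal length, baseline, and principal point used here are hypothetical calibration values, not figures from the patent.

```python
import numpy as np

def disparity_to_points(disparity, f=400.0, B=0.09, cx=160.0, cy=120.0):
    """Convert a disparity map [pixels] into 3-D points in the camera frame [m]."""
    h, w = disparity.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = disparity > 0
    Z = np.zeros(disparity.shape, dtype=float)
    Z[valid] = f * B / disparity[valid]          # depth from parallax
    X = (u - cx) * Z / f                         # back-projection through the pinhole model
    Y = (v - cy) * Z / f
    return np.dstack([X, Y, Z]), valid           # (h, w, 3) point image plus validity mask
```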
  • By detecting planes from this distance image with the plane detector 2, a plurality of planes existing in the environment can be recognized.
  • The stair recognizer 3 extracts, from these planes, planes that the robot can step onto, recognizes stairs from them, and outputs stair data.
  • FIG. 8A is a schematic diagram illustrating the robot device 201 capturing an image of the outside world. Taking the floor as the x-y plane and the height direction as the z direction, as shown in FIG. 8A, the field of view of the robot device 201, whose image input unit (stereo camera) is in the head unit 203, is a predetermined range in front of the robot device 201.
  • The robot device 201 realizes various kinds of processing in software: the CPU 211 described above receives a color image and a parallax image from the image input device 251, together with sensor data such as the joint angles of the actuators 350, and executes the processing.
  • The software in the robot device 201 is configured object by object; it recognizes the position and movement amount of the robot device, surrounding obstacles, an environment map, and so on, and can perform various kinds of recognition processing to output a sequence of actions that the robot device should finally take.
  • As coordinates indicating the position of the robot device, two coordinate systems are used: a camera coordinate system of a world reference system (hereinafter also referred to as absolute coordinates) whose origin is a predetermined position based on a specific object such as a landmark, and a robot-centered coordinate system (hereinafter also referred to as relative coordinates) whose origin is the robot device itself.
  • In the robot-centered coordinate system, which is fixed to the robot device 201, the joint angles determined from the sensor data are used
  • to derive the homogeneous transformation matrix of the camera coordinate system with respect to the robot-centered coordinate system, and a distance image consisting of this homogeneous transformation matrix and the corresponding three-dimensional distance data is output.
  • The robot device can recognize a staircase included in its own field of view and perform the stair climbing operation using the recognition result (hereinafter referred to as stair data). For the stair climbing operation, the robot device must therefore make various judgments about the size of the stairs, such as whether the tread is large enough for its sole and whether the rise is a height it can climb or descend.
  • The size of the sole of the robot device is defined as shown in FIG. 8B. That is, as shown in FIG. 8B,
  • the forward direction of the robot device 201 is taken as the x-axis direction, the direction parallel to the floor surface and orthogonal to the x direction as the y direction, and the width of both feet in the y direction when the robot device 201 stands upright
  • is taken as the feet base width; with the ankle (the joint between the leg and the sole) as the reference, the length of the sole in front of the ankle is foot_front_size,
  • and the length of the sole behind the ankle is foot_back_size (a feasibility sketch using these dimensions follows).
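A minimal sketch of the sole-versus-tread judgment implied above is given below. The dimension names follow the text (foot_front_size, foot_back_size, feet base width); the Tread fields, the comparison rules, and the max_rise limit are illustrative assumptions rather than the patent's actual decision logic.

```python
from dataclasses import dataclass

@dataclass
class SoleSize:
    foot_front_size: float   # sole length in front of the ankle [m]
    foot_back_size: float    # sole length behind the ankle [m]
    feet_base_width: float   # width of both feet in the y direction [m]

@dataclass
class Tread:
    depth: float             # tread size in the walking (x) direction [m]
    width: float             # tread size in the lateral (y) direction [m]
    rise: float              # kick-up from the previous tread to this one [m]

def can_step_onto(sole: SoleSize, tread: Tread, max_rise: float = 0.04) -> bool:
    """True if the sole fits on the tread and the rise is within the climbable height."""
    fits_depth = tread.depth >= sole.foot_front_size + sole.foot_back_size
    fits_width = tread.width >= sole.feet_base_width
    return fits_depth and fits_width and abs(tread.rise) <= max_rise
```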
  • Stairs detected by the robot device 201 from the environment include, for example, those shown in FIGS. 9 and 10. FIGS. 9A and 10A show the stairs viewed from the front, FIGS. 9B and 10B show them viewed from the side, and FIGS. 9C and 10C show them viewed obliquely.
  • Here, the surface used by humans or robot devices when going up and down stairs (the surface on which the foot or movable leg is placed) is called the tread, and the height from one tread to the next
  • (the height of one step of the stairs) is called the rise (kick-up).
  • The steps are counted as the first step, second step, and so on from the side closer to the ground.
  • The staircase ST1 shown in FIG. 9 is a three-step staircase with a rise of 4 cm; the treads of the first and second steps are 30 cm wide and 10 cm deep, and only the third (top) step is 30 cm wide and 21 cm deep.
  • The staircase ST2 shown in FIG. 10 is also a three-step staircase, with a rise of 3 cm; the treads of the first and second steps are 33 cm wide and 12 cm deep, and only the third (top) step is 33 cm wide and 32 cm deep. The results of the robot device recognizing these stairs are described later.
  • The plane detector 2 detects a plurality of planes existing in the environment from the distance data D1 and outputs plane data D2.
  • As the plane detection method, a known plane detection technique using the Hough transform can be applied, in addition to the line segment extension method described later.
  • In the present embodiment, planes can be detected accurately by performing plane detection by the line segment extension method described later.
  • FIG. 11 and FIG. 12 are diagrams illustrating an example of a result of detecting a staircase.
  • FIGS. 11 and 12 show examples in which three-dimensional distance data are acquired from images obtained by photographing the stairs shown in FIGS. 9 and 10, respectively, and plane detection is performed by the plane detection method described later. That is, FIG. 11A is a schematic diagram showing an image of the stairs of FIG. 9, and FIGS. 11B to 11D are diagrams showing the three-dimensional distance data acquired from the image shown in FIG. 11A.
  • FIG. 12A is a schematic diagram showing an image when the stage of FIG. 10 is photographed, and FIGS. 12B to 12D are diagrams showing three-dimensional distance data acquired from the image shown in FIG. 12A.
  • In all cases, every tread could be detected as a plane.
  • FIG. 11B shows an example in which the first, second, and third steps from the bottom are detected as flat surfaces.
  • FIG. 12B shows that a part of the floor surface is successfully detected as another plane.
  • In FIG. 13B, the areas A to D are detected as planes indicating the floor surface and the treads of the first, second, and third steps, respectively.
  • the point cloud in the same area included in each of the areas A to D indicates a distance data point group estimated to constitute the same plane.
  • The plane data D2 detected by the plane detector 2 are input to the stair recognizer 3, which recognizes the shape of the stairs, that is, the size of the treads, the height of the steps (the rise), and so on.
  • As will be described later in detail, the stair recognizer 3 in the present embodiment obtains, for the area (polygon) of the tread recognized by the robot device 201, the boundary on the near side (the side closer to the robot device),
  • hereinafter referred to as the front edge FE,
  • and the boundary on the far side, hereinafter referred to as the back edge BE.
  • the stair climbing controller 4 controls the stair climbing operation using the stair data.
  • Hereinafter, the stair climbing control method of the robot device will be described specifically.
  • The stair recognition method of the robot device, the stair climbing operation using the recognized stairs, and the plane detection method by the line segment extension method as a specific example of plane detection are explained below in this order.
  • FIG. 14 is a functional block diagram showing the staircase recognizer shown in FIG.
  • The stair recognizer 3 comprises a stair detector (Stair Extraction) 5 that detects stairs from the plane data D2 output by the plane detector 2,
  • and a stair integrator (Stair Merging) 6 that recognizes the stairs more accurately by integrating the time series of stair data D3 detected by the stair detector 5, that is, a plurality of stair data D3 detected at different times.
  • The stair data D4 integrated by the stair integrator 6 are the output of the stair recognizer 3.
  • The stair detector 5 detects stairs from the plane data D2 input from the plane detector 2.
  • The plane data D2 input from the plane detector 2 contains, for each of the plurality of planes detected from the image captured by the stereo vision system 1, several pieces of information, including the following.
  • The plane data D2 include:
  • plane parameters: the normal vector and the distance from the origin.
  • From these planes, the robot device selects planes that are substantially horizontal to the surface on which it stands, such as the floor surface or a tread, and calculates the following information (hereinafter referred to as stair parameters).
  • As described above, the front edge FE and the back edge BE recognized by the robot device indicate the boundaries (lines) of the stair tread:
  • the boundary on the near side of the robot device (front boundary) is the front edge FE,
  • and the boundary on the far side of the robot device (back boundary) is the back edge BE.
  • For example, a minimum polygon including all the points constituting a plane can be obtained, and its near-side and far-side boundaries can be used as these edges.
  • The information on the front edge FE and the back edge BE can be information on their end points. Further, information such as the width W (width) of the stairs and the length L (length) of the stairs can be obtained from the polygon.
  • The rise of the stairs can be calculated, for example, as the difference in height between the center points of the planes in the given plane data D2, or as the difference in height between the centers of gravity of the polygons. Alternatively, the rise may be based on the difference in height between the back edge BE of the nearer step and the front edge FE of the step behind it.
  • In addition to the front edge FE and the back edge BE, regions that are adjacent to the left and right of the safety area (the area sandwiched between the front edge FE and the back edge BE) and are
  • estimated with high probability to be traversable are recognized as margins (margin regions). How to obtain these is described later.
  • Together with these margin regions,
  • stair parameters such as the number of data points forming the tread and information on a reference point, defined for example as one of the centers of gravity mentioned above, can be included in the stair data D3.
  • Then, planes (stairs) satisfying the following conditions are extracted from the above stair data (a sketch of this filtering is shown after the list):
  • the length of the front edge FE and the back edge BE is greater than or equal to a predetermined threshold
  • the height of the stairs is less than a predetermined threshold
  • Stair width W (width) is greater than or equal to a predetermined threshold
  • Stair length L (length) is more than a predetermined threshold
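A sketch of the filtering implied by the four conditions above is shown here. The field names and the numeric thresholds are illustrative; the text only states that each quantity is compared with a predetermined threshold.

```python
from dataclasses import dataclass

@dataclass
class StairCandidate:
    front_edge_len: float   # length of the front edge FE [m]
    back_edge_len: float    # length of the back edge BE [m]
    rise: float             # step height (kick-up) [m]
    width: float            # tread width W [m]
    length: float           # tread length L [m]

def is_stair_plane(c: StairCandidate,
                   min_edge=0.10, max_rise=0.05, min_width=0.05, min_length=0.10) -> bool:
    """Apply the four extraction conditions listed above to one detected plane."""
    return (c.front_edge_len >= min_edge and c.back_edge_len >= min_edge
            and c.rise < max_rise
            and c.width >= min_width
            and c.length >= min_length)
```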
  • FIG. 15 is a flowchart showing the procedure of the staircase detection process of the staircase detector 5.
  • First, it is judged whether the input plane is a plane on which the device can walk or move, for example, whether the input plane is horizontal with respect to the ground contact surface
  • (step S1).
  • The condition for judging which planes count as horizontal or movable may be set according to the capabilities of the robot device. For example, if the unit normal vector of the input plane is n = (nx, ny, nz), the plane can be judged horizontal when the angle between n and the vertical axis is within a threshold θmin (a sketch of this test follows),
  • where θmin is the threshold value for judging a horizontal plane.
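The horizontality test of step S1 can be sketched as below, under the assumption that the plane is described by a unit normal vector and that θmin bounds the angle between the normal and the vertical axis; the 10° default is purely illustrative.

```python
import numpy as np

def is_horizontal(normal, theta_min_deg=10.0):
    """True if the plane normal lies within theta_min of the vertical (z) axis."""
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)
    return abs(n[2]) >= np.cos(np.radians(theta_min_deg))   # |n . z_hat| close to 1
```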
  • If it is determined in step S1 that the plane is not horizontal (step S1: No), a detection failure is output and processing for that plane is terminated; the same processing is then performed on the next plane data.
  • If the plane is horizontal (step S1: Yes), processing for recognizing the boundary (shape) of the plane is performed.
  • A convex hull is obtained by, for example, Sklansky's algorithm (J. Sklansky, "Measuring concavity on a rectangular mosaic," IEEE Trans. Comput. 21, 1974, pp. 1355-1364)
  • or Melkman's algorithm (A. Melkman, "On-line Construction of the Convex Hull of a Simple Polygon," Information Processing Letters 25, 1987, p. 11), or a polygon encompassing the input plane is obtained by smoothing that removes noise (step S2).
  • The near and far boundary lines of this polygon are determined as stair parameters, namely the front edge and the back edge (step S3). In the present embodiment, from both boundary lines, the front edge and the back edge, a plane indicating the stair tread is obtained:
  • the width W (width) and the length L (length) of the tread are determined, and it is judged whether these values are larger than predetermined thresholds (step S4).
  • If they are not (step S4: No), it is determined that the plane is not a plane on which the robot device can move, and the processing from step S1 is repeated for the next plane data.
  • If the width and length of the plane are equal to or larger than the predetermined thresholds (step S4: Yes), the tread is determined to be movable, the left and right margins (Left Margin, Right Margin) are calculated (step S5), and the result is output as stair data D3.
  • FIG. 16 is a schematic diagram showing a convex polygon
  • FIG. 16A shows all supporting points determined to belong to one input plane (i.e., contained in a continuous area on the same plane), and
  • FIG. 16B shows a convex polygon obtained from the figure shown in FIG. 16A.
  • As the convex polygon shown here, a convex hull, that is, the minimum convex set including the given plane figure (the area including the supporting points), can be used.
  • the point indicated by G is used when calculating the width W of the tread, and indicates, for example, a point (reference point) such as the center of gravity of the area including the supporting point.
  • FIG. 17 is a schematic diagram for explaining Melkman's algorithm. As shown in FIG. 17, three points P1, P2, and P3 are taken from the points included in the given figure; a line segment connecting P1 and P2 is drawn, and straight lines passing through P1 and P3 and through P2 and P3 are drawn. As a result, the plane is divided into five areas AR1 to AR5, including the triangle AR4 formed by the three points P1, P2, and P3.
  • The process of determining which area the next selected point P4 falls in and re-forming the polygon is then repeated to update the convex polygon. For example, if P4 lies in area AR1, the area surrounded by segments connecting P1, P2, P4, and P3 in that order becomes the updated convex polygon. If P4 lies in area AR3 or AR5, the convex polygon is updated as the area surrounded by segments connecting P1, P2, P3, P4 or P1, P4, P2, P3, respectively, in that order.
  • If P4 lies inside the triangle AR4, the convex polygon is not updated. If P4 lies in area AR2, the point P3 is excluded,
  • and the convex polygon is updated as the area surrounded by the line segments connecting the remaining points in order.
  • By repeating this for all the supporting points, taking into account which region each point falls in, a convex polygon covering all of them can be generated (a convex hull sketch is shown after this paragraph).
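For reference, a compact convex-hull sketch is given below. It uses the monotone-chain method rather than Melkman's deque-based update described above (Melkman's algorithm assumes the input is already a simple polygon), but it produces the same convex polygon for a set of supporting points.

```python
def convex_hull(points):
    """Return the convex hull of 2-D points in counter-clockwise order."""
    pts = sorted(set(map(tuple, points)))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:                               # build the lower hull
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):                     # build the upper hull
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]              # concatenate, dropping duplicated endpoints
```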
  • FIG. 18 is a schematic diagram for explaining a method of obtaining a polygon by the Sklansky algorithm.
  • the polygons extracted by Sklansky's algorithm are called Weakly Externally Visible Polygons.
  • Its computational complexity is smaller than that of the algorithm described above, so high-speed operation is possible.
  • As shown in FIG. 18A, if a half line can be drawn from an arbitrary point x on the boundary of the given figure 131 to the circle 132 enclosing the figure 131 without crossing the figure,
  • this point is taken as a point that forms the boundary of the convex polygon.
  • On the other hand, as shown in FIG. 18B, from another point y on the boundary of the given figure 133, no half line can be drawn to the enclosing circle 134 without crossing the figure 133.
  • In that case, the point y does not form part of the boundary of the convex polygon.
  • In this way, from a figure such as that shown in FIG. 16A,
  • the convex polygon shown in FIG. 16B can be obtained.
  • When obtaining the convex polygon from FIG. 16A, a convex polygon circumscribing the figure of FIG. 16A, as shown in FIG. 16B, may be obtained in consideration of the accuracy, characteristics, and so on of the stereo vision system 1;
  • alternatively, a convex polygon inscribed in the figure of FIG. 16A may be obtained in consideration of the accuracy and characteristics of the camera. These methods may also be used selectively according to the inclination of the plane and the surrounding situation.
  • However, when the tread is not convex, representing it with a convex hull causes a problem. FIG. 19 is a schematic diagram showing this problem:
  • FIG. 19A shows an input plane,
  • where step0 is a non-convex polygonal step, and
  • FIG. 19B shows the polygonal representation of step0 using a convex hull, which deviates significantly from the desired result in the non-convex portions.
  • FIG. 20 is a schematic diagram showing the smoothing method.
  • FIG. 20A shows all supporting points determined to belong to one input plane (i.e., contained in a continuous area on the same plane),
  • together with the input polygon, which is the region enclosing the distance data point group.
  • FIG. 20B shows the polygon smoothed by removing discontinuous gaps from the polygon indicating the input plane (close gaps);
  • the gap-removed polygon is shown as a closed polygon.
  • FIG. 20C shows the polygon obtained by further smoothing the polygon of FIG. 20B by line fitting (fit line segments), i.e., the smoothed polygon.
  • FIG. 21 is a diagram showing an example of a program for the processing that obtains a polygon including the input plane: close-gaps processing that removes gaps, and fit-line-segments processing that further smooths the resulting polygon by line fitting.
  • First, the gap removal method is described. Three consecutive vertices are selected from the vertices representing the polygon; if the middle vertex is far from the straight line connecting the two end vertices, the middle vertex is removed. This process is continued on the remaining vertices until there are no more points to remove.
  • Next, the line fitting method is described. Three consecutive vertices are selected from the vertices of the polygon, and a straight line approximating these three points, together with the error between the line and the points, is obtained by the least squares method; when the error is small, the vertices can be merged onto the fitted line (a sketch of both smoothing steps is shown below).
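The two smoothing steps can be sketched as follows; the vertex representation and the thresholds gap_max and err_max are assumptions, since the text only says "far from" the connecting line and refers to the least-squares error.

```python
import numpy as np

def _dist_to_chord(a, b, p):
    """Distance from vertex p to the straight line through vertices a and b."""
    a, b, p = (np.asarray(v, dtype=float) for v in (a, b, p))
    ab = b - a
    ap = p - a
    return abs(ab[0] * ap[1] - ab[1] * ap[0]) / (np.linalg.norm(ab) + 1e-12)

def close_gaps(vertices, gap_max=0.05):
    """Remove a middle vertex that is far from the line joining its neighbours."""
    v = [tuple(p) for p in vertices]
    changed = True
    while changed and len(v) > 3:
        changed = False
        for i in range(1, len(v) - 1):
            if _dist_to_chord(v[i - 1], v[i + 1], v[i]) > gap_max:
                del v[i]
                changed = True
                break
    return v

def fit_line_segments(vertices, err_max=0.01):
    """Drop the middle vertex when three consecutive vertices fit one line well."""
    v = [tuple(p) for p in vertices]
    changed = True
    while changed and len(v) > 3:
        changed = False
        for i in range(1, len(v) - 1):
            pts = np.asarray(v[i - 1:i + 2], dtype=float)
            c = pts.mean(axis=0)
            _, _, vt = np.linalg.svd(pts - c)            # least-squares line through 3 vertices
            d = vt[0]                                    # line direction
            resid = (pts - c) - np.outer((pts - c) @ d, d)
            if np.sqrt(np.mean(np.sum(resid ** 2, axis=1))) < err_max:
                del v[i]
                changed = True
                break
    return v
```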
  • FIG. 22 is a schematic diagram for explaining the method of calculating the stair parameters. As shown in FIG. 22A, the obtained polygon 140 is assumed to be the region surrounded by the points 141 to 147. Here, the line segment forming the front boundary of the polygon 140 as seen from the robot device 201 is the front edge FE, and the line segment forming the rear boundary is the back edge BE.
  • the width W of the stair tread is the length of the line connecting the center point C of the front edge FE and the reference point G.
  • The reference point G can be set at the approximate center of the plane that is to become the tread.
  • For example, the center of all the supporting points, the center of gravity of the polygon 140, a point determined from the end points of the front edge FE and the back edge BE,
  • or the center of gravity of the safety area 152 shown in FIG. 22B can be used.
  • The length L of the stairs is, for example, the shorter of the lengths of the front edge FE and the back edge BE, or alternatively the longer of the front edge FE and the back edge BE each extended by the left and right margins described below.
  • FIG. 23 is a schematic diagram for explaining the tread and stair parameters finally recognized.
  • Margins M_L and M_R are provided at the left and right ends of the safety area 152, and the area 151 including the left and right margins M_L and M_R is finally recognized as the tread.
  • The left and right margins are obtained from polygon vertices lying outside the safety area 152 defined by the front edge FE and the back edge BE; such points are considered first. In FIG. 22A, for example, to find the right margin M_R,
  • the point 142 farthest from the safety area 152 is selected, and a perpendicular is drawn from this point 142 to the lines of the front edge FE and the back edge BE. The area 151 bounded by this perpendicular, the front edge FE, and the back edge BE is then taken to be recognized as the tread.
  • the margin may be obtained by simply drawing a line that passes through the point 142 and intersects the front edge FE or the back edge BE.
  • Let the length of the left margin M measured along the front edge FE be lfm, the length of the left margin M measured along the back edge BE be lbm, and the lengths of the right margin M along the front edge FE and the back edge BE be rfm and rbm, respectively. A rough sketch of the parameter computation follows.
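  • As a concrete illustration, the sketch below computes the tread width W (distance from the center C of the front edge FE to the reference point G) and the stair length L (here, the shorter of FE and BE). It is a hedged example in Python; the edge end points and the reference point G are assumed inputs rather than the detector's actual data structures.

```python
# Hedged sketch of the stair-parameter computation: given the front edge FE and
# back edge BE of a tread polygon and a reference point G (e.g. the centroid of
# the safety area), compute the tread width W and stair length L as described.
import math
from typing import Tuple

Point = Tuple[float, float]

def midpoint(a: Point, b: Point) -> Point:
    return ((a[0] + b[0]) / 2.0, (a[1] + b[1]) / 2.0)

def dist(a: Point, b: Point) -> float:
    return math.hypot(a[0] - b[0], a[1] - b[1])

def stair_parameters(fe: Tuple[Point, Point],
                     be: Tuple[Point, Point],
                     g: Point) -> Tuple[float, float]:
    """Return (W, L): W is the distance from the centre C of the front edge FE
    to the reference point G; L is taken here as the shorter of FE and BE."""
    c = midpoint(*fe)
    w = dist(c, g)
    l = min(dist(*fe), dist(*be))
    return w, l

# Example tread: front edge along y = 0, back edge along y = 0.3 (30 cm deep)
fe = ((-0.4, 0.0), (0.4, 0.0))
be = ((-0.35, 0.3), (0.35, 0.3))
g = (0.0, 0.15)                      # reference point near the tread centre
print(stair_parameters(fe, be, g))   # approx. (0.15, 0.7)
```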
  • FIGS. 24A and 24B are schematic diagrams showing two types of stairs.
  • FIG. 24A shows a step having a rectangular tread as shown in FIGS. 9 and 10
  • FIG. 24B shows a step having a spiral shape.
  • In the spiral staircase of FIG. 24B, the back edge BE is not parallel to the front edge FE. Therefore, an algorithm that simply extracts a rectangular area from the detected plane may not be applicable. By obtaining a polygon from the detected plane and then obtaining the front edge FE and the back edge BE, as in the present embodiment, the robot apparatus can perform the climbing operation even on such a spiral staircase.
  • The staircase integrator 6 receives the stair data (stair parameters) D3 detected by the stair detector 5 as input, and temporally integrates the stair data D3 to estimate more accurate stair information. For example, when the field of view of the robot apparatus is narrow, the entire staircase may not be recognized at once. In such a case, spatially overlapping stairs are searched for between old stair data, such as that of the previous frame, and new stair data, such as that of the current frame, and the overlapping stairs are integrated to define a new virtual staircase. By continuing this operation until no overlapping stairs remain, the stairs can be recognized accurately.
  • FIG. 25 is a flowchart showing the staircase integration processing in the staircase integrator 6.
  • First, the current stair data (New Stairs) and the old stair data (Old Stairs) are input (step S11), and all of the new and old stair data are combined into one set (union) (step S12). In this combined staircase data set, spatially overlapping stair data are searched for (step S13). If overlapping stairs exist (step S14: Yes), those stair data are integrated and registered in the staircase data set (step S15). The processing of steps S13 and S14 is continued until no spatially overlapping stairs remain (step S14: No), and the finally updated stair data set is output as stair data D4.
  • FIG. 26 is a schematic diagram for explaining the processing in step S13 for integrating the overlapping staircase data.
  • FIG. 26 shows stair data ST11 and ST12 that overlap spatially. To judge whether they overlap spatially, for example, the difference in height (distance) at the reference points G of the two stair data ST11 and ST12 and the size of the area in which their tread regions, including the left and right margins, overlap can be used. That is, when the difference in height between the reference points G of the two steps is equal to or less than a threshold (max_dz) and the overlap area is sufficiently large, the stair data ST11 and ST12 are integrated into new stair data ST13 and its reference point G is calculated.
  • The area of the outer frame containing both stair data ST11 and ST12 is defined as the integrated stair data ST13. A new safety area 165 is defined from the safety areas (excluding the left and right margins) of the stair data ST11 and ST12 before integration, and the areas obtained by removing the safety area 165 from the stair data ST13 are defined as the left and right margins M, M. The integrated front edge FE and back edge BE can be obtained as follows.
  • That is, the end points of the front edge FE of the integrated stair data ST13 are obtained by comparing the left and right end points of the front edge FE of the stair data ST11 with those of the stair data ST12, and selecting the end point lying farther to the right (for example, the right end point 163) and the end point lying farther to the left. The line of the front edge FE is set at the position closer to the robot apparatus (front side) of the front edge FE of the stair data ST11 and the front edge FE of the stair data ST12, whereas for the back edge BE the position on the farther side is selected, and the left and right end points 161 and 162 are chosen so as to spread outward to the left and right.
  • the integration method is not limited to this.
  • For example, in consideration of the field of view of the robot apparatus, the integration may be performed so that the rectangular area determined by the front edge FE and the back edge BE of the integrated data ST13 becomes as large as possible. If the field of view is wide or the accuracy of the distance data is sufficiently high, the union of the areas of the two stair data may be used as the integrated stair data. Further, the reference point G after integration can be obtained as a weighted average according to the ratio of the numbers of supporting points included in the stair data ST11 and ST12. A rough sketch of such an integration step is shown below.
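  • The following is a hedged sketch of the temporal integration in Python. The stair representation (an axis-aligned box for the tread area, a reference height, and a supporting point count), the thresholds, and the merge rule are illustrative assumptions, not the embodiment's actual data structures.

```python
# Hedged sketch of temporal stair integration: stair data from the previous and
# current frames are merged when their reference heights differ by at most
# max_dz and their tread areas (simplified here to axis-aligned boxes) overlap.
from dataclasses import dataclass
from typing import List

@dataclass
class Stair:
    x0: float; y0: float; x1: float; y1: float   # tread box incl. margins
    z: float                                     # height at reference point G
    n_points: int                                # number of supporting points

def overlaps(a: Stair, b: Stair, max_dz: float, min_area: float) -> bool:
    if abs(a.z - b.z) > max_dz:
        return False
    w = min(a.x1, b.x1) - max(a.x0, b.x0)
    h = min(a.y1, b.y1) - max(a.y0, b.y0)
    return w > 0 and h > 0 and w * h >= min_area

def merge(a: Stair, b: Stair) -> Stair:
    n = a.n_points + b.n_points
    return Stair(min(a.x0, b.x0), min(a.y0, b.y0),
                 max(a.x1, b.x1), max(a.y1, b.y1),
                 (a.z * a.n_points + b.z * b.n_points) / n,  # weighted average
                 n)

def integrate(old: List[Stair], new: List[Stair],
              max_dz: float = 0.01, min_area: float = 1e-4) -> List[Stair]:
    stairs = old + new                            # union of both sets
    merged = True
    while merged:                                 # repeat until no overlap left
        merged = False
        for i in range(len(stairs)):
            for j in range(i + 1, len(stairs)):
                if overlaps(stairs[i], stairs[j], max_dz, min_area):
                    s = merge(stairs[i], stairs[j])
                    stairs = [t for k, t in enumerate(stairs) if k not in (i, j)]
                    stairs.append(s)
                    merged = True
                    break
            if merged:
                break
    return stairs

print(integrate([Stair(0.0, 0.0, 0.5, 0.3, 0.040, 200)],
                [Stair(0.3, 0.0, 0.9, 0.3, 0.045, 150)]))
```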
  • The stair climbing controller 4 uses the stair data D4 detected by the stair detector 5 and integrated by the stair integrator 6 to control the robot apparatus so that it actually performs the stair climbing operation.
  • This ascent/descent control includes an operation of searching for stairs.
  • The stair climbing operation realized in this embodiment can be constructed as a state machine consisting of the following five operations.
  • FIG. 27 is a diagram for explaining the alignment operation.
  • The area 170 is the tread of the first step of the stairs recognized by the robot apparatus. In the alignment operation, the robot apparatus moves to a target position (hereinafter referred to as the align position) 172 separated from the center point C of the front edge FE of the tread 170 by a predetermined distance ad (align_distance) in the direction orthogonal to the front edge FE. The alignment operation is regarded as complete when the distance between the robot apparatus and the align position 172 is within a predetermined threshold max_d and the angle difference between the direction the robot apparatus is facing and the direction perpendicular to the front edge FE is within a predetermined threshold max_a.
  • FIG. 28 is a schematic diagram for explaining the approach operation. As shown in FIG. 28, the robot apparatus 201 that has moved to the align position 172 separated by align_distance and has completed the alignment operation then moves, while facing the stairs, to a target position (hereinafter referred to as the approach position) at which the distance between the robot apparatus 201 and the center point C of the front edge FE of the tread 170 becomes a predetermined value ax (approach_x).
  • Then, a stair climbing operation is performed based on the stair data obtained by the stair recognition. After moving to the next step, if a further step is observed, the ascent or descent is continued; by continuing this operation until there is no next step, the stair climbing operation is realized.
  • FIG. 29 is a flowchart showing the procedure of the stair climbing operation.
  • First, the stairs are searched for and approached by the search, align, and approach operations: the stairs are searched (Search), the robot apparatus is moved to a predetermined position with respect to the found stairs (Align), and an approach operation toward the first step of the stairs is executed (Approach) (step S21). If this search-align-approach operation succeeds (step S22: Yes), the robot climbs or descends the stairs (step S23) and outputs success. If it fails (step S22: No), failure is output and the processing ends; alternatively, the processing from step S21 may be repeated.
  • FIG. 30 is a flowchart showing the search-align-approach processing method.
  • When the search-align-approach processing is started, the search operation (1) is first executed (step S31). In the search operation (1), the head is swung so as to collect information over as wide a range as possible. Next, it is determined whether or not there are stairs that can be climbed (step S32): using the height n of the plane that forms the first tread among the detected stairs, it is determined that climbing is possible if the height satisfies step_min_z <= n <= step_max_z. If there is a stair that can be climbed (step S32: Yes), an alignment operation is performed to move to the predetermined distance (align_distance) from the stairs in order to recognize them at close range (step S33), and the stairs about to be climbed are recognized again (step S34).
  • The operation in step S34 is the search operation (2).
  • Whether there is a stair that can be climbed is then checked again (step S35), and it is checked whether the robot apparatus has successfully moved to the align position at the predetermined distance from the re-recognized stairs, that is, whether the alignment operation of step S33 has succeeded (step S36). If there is a stair that can be climbed and the alignment has succeeded (steps S35 and S36: Yes), an approach operation of advancing toward the front edge of the first step is performed (step S37). On the other hand, if there is no stair that can be climbed in step S35, the process returns to step S31, and if the alignment operation has not succeeded in step S36, the processing from step S33 is repeated. A rough sketch of this sequence is given below.
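  • The following is a hedged sketch, in Python, of the search-align-approach sequence (steps S31 to S37). The robot primitives (search_wide, detect_first_tread_height, align_to_stairs, re_recognize, approach) are hypothetical placeholders, not an actual API, and the thresholds are illustrative.

```python
# Hedged sketch of the search-align-approach sequence (steps S31-S37).
STEP_MIN_Z = 0.03   # assumed example thresholds [m]
STEP_MAX_Z = 0.08

def climbable(height_z: float) -> bool:
    """Step S32 / S35: the first tread height must lie in the climbable range
    (the absolute value is used here so the same test covers descent)."""
    return STEP_MIN_Z <= abs(height_z) <= STEP_MAX_Z

def search_align_approach(robot) -> bool:
    while True:
        robot.search_wide()                          # search operation (1), S31
        z = robot.detect_first_tread_height()
        if z is None or not climbable(z):            # S32
            return False
        aligned = robot.align_to_stairs()            # S33: move to align_distance
        robot.re_recognize()                         # search operation (2), S34
        z = robot.detect_first_tread_height()
        if z is None or not climbable(z):            # S35: back to the search
            continue
        if not aligned:                              # S36 (flowchart resumes at
            continue                                 #      S33; simplified here)
        return robot.approach()                      # S37: advance to front edge
```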
  • The stair climbing operation consists of ascending/descending operation processing 1, used when the robot can recognize the next upper or lower step (hereinafter referred to as the next step) from the current moving plane, and ascending/descending operation processings 2 and 3, used when only steps two or more steps ahead (upper or lower) can be recognized.
  • FIG. 31, FIG. 33, and FIG. 34 are flowcharts showing the processing methods of the ascending/descending operation processings 1 to 3, respectively.
  • In the following, the step on which the robot is currently moving is referred to as step-0, the next step as step-1, the step after next as step-2, and the step m steps ahead as step-m.
  • First, an operation of ascending or descending one step of the stairs (climb operation (1)) is executed (step S41).
  • In the climb operation (1), since the height n of the stairs has already been recognized in step S32 described above, whether the operation is an ascent or a descent is determined from the sign of the height n, and the control parameter values used for the operation differ accordingly. That is, it is possible to switch between the ascending operation and the descending operation simply by switching the control parameters.
  • Next, it is determined whether or not the climb operation (1) has succeeded (step S42). If it has succeeded (step S42: Yes), the search operation (3) is executed (step S43).
  • This search operation (3) is a process in which the head unit equipped with the stereo vision system is moved, the surrounding distance data are acquired, and the next step is detected.
  • FIG. 32 is a schematic diagram showing the stair surfaces recognized, or expected to be recognized, by the robot apparatus. As shown in FIG. 32, it is assumed that the soles 121L/R of the robot apparatus are currently on the tread 181. In FIG. 32, the safety area sandwiched between the front edge FE and the back edge BE, together with the left and right margins M, M adjacent to it, is recognized as a tread. In this state the robot apparatus can recognize the tread 182 of the next step (step-1), and in some cases the tread 183 of the step after next (step-2). A gap 184 may exist between the tread 181 and the tread 182 due to a riser (kick-up) or the like.
  • In the ascending/descending operation processing 1, it is determined whether it is possible to move (climb operation) from the tread 181 of the current step (step-0) to the tread 182 of the next step (step-1). A tread that satisfies the following criteria is judged to be movable (a rough sketch of such a test is given after this list):
  • The angular deviation with respect to the front edge FE of the tread 182 of the next step (step-1) is below a predetermined threshold (the robot is sufficiently square to the edge).
  • The tread 182 of the next step (step-1) is sufficiently large. Specifically:
  • The distance front_x from the front edge FE to the rear end of the sole 121L/R is larger than the control parameter front_x_limit for the specified climbing mode.
  • The distance back_x from the back edge BE to the front end of the sole 121L/R is larger than the control parameter back_x_limit for the climbing mode.
  • Further, from the difference (z2 - z1) between the height z1 at the reference point of the tread 182 of the next step (step-1) and the height z2 at the reference point of the tread 183 of the step after next (step-2), it can be determined whether the movement from the tread 182 of the next step (step-1) to the tread 183 of the step after next (step-2) will be an ascent or a descent. If the tread 183 of the step two steps ahead (step-2) cannot be recognized, the current ascending/descending mode may be maintained.
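  • The following is a hedged sketch of such a movability test and of the ascend/descend decision, in Python. The geometric inputs (angle_dev, front_x, back_x, z1, z2) are assumed to come from the stair recognizer; only the parameter names follow the text, and the numeric values are illustrative.

```python
# Hedged sketch of the movability test for the next tread (step-1) and of the
# ascend/descend decision from the tread heights.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ClimbParams:
    max_angle_dev: float      # allowed deviation from facing the front edge
    front_x_limit: float      # limit for front_x (FE to rear end of the sole)
    back_x_limit: float       # limit for back_x (BE to front end of the sole)

def tread_movable(angle_dev: float, front_x: float, back_x: float,
                  p: ClimbParams) -> bool:
    return (abs(angle_dev) <= p.max_angle_dev
            and front_x > p.front_x_limit
            and back_x > p.back_x_limit)

def next_mode(z1: float, z2: Optional[float], current: str) -> str:
    """Decide the mode of the move from step-1 to step-2 from z2 - z1; if the
    tread of step-2 is not recognized, keep the current mode."""
    if z2 is None:
        return current
    return "descend" if (z2 - z1) < 0 else "ascend"

ascend = ClimbParams(max_angle_dev=0.1, front_x_limit=-0.02, back_x_limit=-0.02)
print(tread_movable(angle_dev=0.05, front_x=0.01, back_x=0.00, p=ascend))
print(next_mode(z1=0.04, z2=0.08, current="ascend"))
```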
  • In the climb operation, the robot apparatus first aligns with the back edge BE of the tread 181 on the current step (step-0). If the gap 184 between the tread 181 and the tread 182 is large, the robot aligns with the front edge FE of the tread 182 of the next step (step-1), moves onto it, and then aligns with its back edge BE; in the next climb operation it likewise aligns with the front edge FE of the tread 183 of the step after next (step-2), moves onto the tread 183, and aligns with its back edge BE. That is, the climbing operation is performed by aligning with the front edge FE of the tread of the next step, moving up or down onto it, and then aligning with the back edge BE of the tread onto which the robot has moved.
  • The alignment operation may also be performed on only one of the edges. That is, the robot may align with the back edge BE of the current tread 181, move to the tread 182 of the next step while omitting the alignment with the front edge FE of the next step, and perform the alignment operation only on the back edge BE of the tread onto which it has moved.
  • the ascending / descending operation processing 1 described above can be applied when the robot device can observe the tread of the next movable step (step-1) during the ascent / descent operation of the stairs.
  • a biped robot device needs to be equipped with a stereo vision system 1 that can look down on its feet.
  • Next, ascending/descending operation processings 2 and 3 will be described, for the cases where, due to restrictions such as the movable angle of the joint connecting the head unit and the trunk unit of the robot apparatus, the tread of the next step (step-1) cannot be observed from the tread of the current step (step-0), and only the tread of the step two steps ahead (step-2) or of steps further ahead (step-m) can be recognized.
  • First, ascending/descending operation processing 2, for the case where the tread of the step two steps ahead (step-2) can be recognized, will be described.
  • First, the search operation (4) is executed (step S51). This search operation (4) is the same processing as step S43 and the like described above, except that it recognizes the tread of the step two steps ahead (step-2).
  • Next, the climb operation (2) is executed (step S52).
  • The climb operation (2) is the same operation as the climb operation (1); in this case as well, the switching between ascending and descending is determined by the tread height n of the next step (step-1).
  • Note that the information on the tread to which the robot moves in the climb operation (2) is data that was observed before reaching the current step (step-0), not data observed from the current step.
  • If the climb operation (2) has succeeded (step S53: Yes) and the tread of the next step is observed (step S54: Yes), the processing is repeated for the next step (step S56). If the tread of the next step is not observed (step S54: No), the finish operation is executed (step S55), and the process ends.
  • Next, ascending/descending operation processing 3, for the case where the treads up to a plurality of steps ahead (hereinafter referred to as m steps ahead) can be observed and recognized, will be described.
  • The search operation (5) is basically the same as step S51, except that the treads up to the recognizable step m steps ahead (step-m) are the observation target.
  • Then, the climb operation (3) is performed for k steps (step S62).
  • In this case, the climbing mode can be determined from the differences between the heights of the plurality of treads observed so far. That is, if the height difference between the i-th tread and the (i-1)-th tread, z_i - z_(i-1), is negative, the operation descends the stairs; if z_i - z_(i-1) is zero or positive, the operation ascends the stairs.
  • Note that the information on the tread to which the robot moves in this climb operation (3) is data that was observed up to m steps before the current step.
  • If the climb operation (3) has succeeded (step S63: Yes) and a tread to move to next is observed (step S64: Yes), the processing is repeated; if there is no tread to be moved to next (step S64: No), the finish operation is performed (step S65), and the process ends.
  • As described above, ascending and descending the stairs can be executed by the same procedure, simply by changing the control parameters used for the ascending operation and the descending operation in the climb operation. The control parameters used for the stair climbing operation regulate the position of the sole of the robot apparatus with respect to the current tread.
  • FIG.35A is a diagram for explaining the relationship between the tread and the sole recognized by the robot device.
  • FIG. 35B is a diagram showing an example of control parameters used for the climb operation.
  • Each control parameter shown in FIG. 35A indicates the following.
  • step_min_z: the minimum value of the height difference (kick-up) between the current step and the next step that can be climbed
  • step_max_z: the maximum value of the height difference (kick-up) between the current step and the next step that can be climbed
  • ax (approach_x): the distance between the front edge FE and the robot at the approach position
  • front_x_limit: the limit value of the distance between the front edge FE of the tread and the rear end of the sole 121 (minimal x-value)
  • back_x_limit: the limit value of the distance between the back edge BE of the tread and the front end of the sole 121 (maximal x-value)
  • back_x_desired: the desired value of the distance between the back edge BE and the front end of the sole 121 (desired x-value)
  • align_distance is a parameter used only for the alignment operation, and is used when starting the stair climbing processing, that is, when performing the ascent or descent of the first step. approach_x is likewise a parameter used only for the approach operation, and is used when initiating the stair climbing processing.
  • front_x_limit and back_x_limit specify the required relationship between the tread recognized by the robot apparatus and the sole of the robot apparatus. That is, if the distances between the front edge FE or back edge BE of the tread and the corresponding ends of the sole when moving onto the tread are smaller than these values, it is determined that the robot cannot move onto that tread, or that even if it can move onto it, it cannot continue to the next step.
  • If both front_x_limit and back_x_limit are negative, a tread smaller than the sole is acceptable. That is, in the ascending operation, even if the tread is smaller than the sole, the tread can be judged to be movable.
  • back_x_desired indicates the desired distance between the back edge BE and the front end of the sole when the robot apparatus aligns with the back edge BE on the current tread. As shown in FIG. 35, in the ascending operation the sole is positioned before the back edge BE, in this embodiment 15 mm before it; in the descending operation, on the other hand, the sole protrudes beyond the back edge BE, in this embodiment by 5 mm. This is because ascending requires a certain run-up distance before moving up to the next step, whereas descending does not, and a position extending slightly beyond the tread makes it easier to observe and recognize the next and subsequent treads. A sketch of such a parameter set is shown below.
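  • The following is a hedged sketch of holding and switching such parameter sets in Python. The numeric values are illustrative placeholders, not the embodiment's actual table; only the back_x_desired offsets (15 mm before / 5 mm beyond the back edge) follow the text, and their sign convention here is an assumption.

```python
# Hedged sketch of switching the climb control parameters between the ascending
# and descending modes (cf. FIG. 35B).  Values are illustrative only.
from dataclasses import dataclass

@dataclass(frozen=True)
class StairParams:
    step_min_z: float        # minimum climbable riser height
    step_max_z: float        # maximum climbable riser height
    approach_x: float        # FE-to-robot distance at the approach position
    front_x_limit: float     # minimal x-value for front_x
    back_x_limit: float      # maximal x-value for back_x
    back_x_desired: float    # desired BE-to-sole-front distance after a move

ASCEND = StairParams(step_min_z=0.03, step_max_z=0.08, approach_x=0.10,
                     front_x_limit=-0.02, back_x_limit=-0.02,
                     back_x_desired=+0.015)   # sole stops 15 mm before BE
DESCEND = StairParams(step_min_z=0.03, step_max_z=0.08, approach_x=0.10,
                      front_x_limit=0.01, back_x_limit=0.01,
                      back_x_desired=-0.005)  # sole protrudes 5 mm beyond BE

def params_for(step_height: float) -> StairParams:
    """Only the parameter set changes between ascending and descending; the
    climb procedure itself is shared (sign of the step height selects the set)."""
    return ASCEND if step_height >= 0 else DESCEND

print(params_for(+0.05))
print(params_for(-0.05))
```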
  • FIG. 36 and FIG. 37 are traces obtained by photographing the robot apparatus actually performing the ascent and descent operations using the control parameters shown in FIG. 35B. FIG. 36 shows the robot apparatus climbing the stairs. When the search operation (4) was performed and the next tread was not observed (No. 17), the stair climbing operation was finished (finish), as shown in frame No. 18.
  • FIG. 37 shows the descending operation: the search operations (No. 1, No. 4, No. 7, No. 10, No. 13, No. 16) and the climb operations including the alignment operations (No. 5, No. 6, No. 8, No. 9, No. 11, No. 14, No. 15) are repeated, and when the tread of the next step is no longer observed, the finish operation (No. 18) ends the ascent/descent operation.
  • FIG. 38 is a diagram showing the relationship between a single step and the sole of the robot apparatus. Consider the case where the single step 191 is the next step (step-1) for the robot apparatus and the robot moves onto it from the lower side. The subsequent movement from the step 191 to the next area can be determined to be a descending operation, and the values of the control parameters described above may be changed in the climb operation in accordance with this determination.
  • front_x_limit and back_x_limit for the ascending operation are both negative values, which indicates that, as shown in FIG. 38, the robot apparatus is judged able to move onto the step 191 even if the sole 121 protrudes beyond it.
  • FIG. 39 is a diagram showing the relationship between a single recess and the sole of the robot apparatus. Consider the case where the robot apparatus is located at the bottom of the recess 192, the next step (step-1) lies above it, and the robot moves from the lower side to the upper side. This movement can be determined to be an ascending operation, and, as with the step 191, the values of the control parameters may be changed in the climb operation in accordance with this determination.
  • front_x_limit and back_x_limit for the descending operation are both positive values, which indicates that, as shown in FIG. 39, the robot apparatus is judged able to move into the recess 192 only when the sole 121 is smaller than the recess.
  • As described above, in the present embodiment a plane that can be judged movable is extracted from the detected planes, and the polygonal stair tread containing that area is recognized. The stair climbing operation is then performed using the tread information, such as the front edge FE and back edge BE of the polygon, and the stair information including the height from the floor.
  • In the climbing operation, a search operation is performed for the tread to be moved onto, an align operation is performed with respect to the front edge FE of the found tread or the back edge BE of the current moving surface, and the height of the next moving surface is compared with that of the current moving surface. By judging from these heights whether to ascend or descend and switching the control parameters accordingly, climbing operations can be performed not only on stairs with ordinary rectangular treads but also on spiral stairs and the like, and the ascending and descending operations can be executed by the same procedure simply by changing the control parameters. Moreover, not only stairs but also movement onto a single step or into a single recess can be handled by the same control method. Furthermore, even a robot apparatus with a limited field of view, for example because the stairs are large relative to the robot apparatus or because of restrictions on the position of the stereo vision system mounted on it, can recognize the stairs across multiple steps by integrating observations.
  • Next, a modification of the plane detection device will be described. This plane detection device can reliably detect a plurality of planes by the line segment extension method even when the visual field contains not only a dominant plane but a plurality of planes such as stairs, and in the line segment extraction performed when detecting planes it can obtain plane detection results that are robust against measurement noise by fitting line segments adaptively according to the distribution of the distance data points.
  • FIG. 40 is a functional block diagram showing the plane detection device according to this modification. As shown in FIG. 40, the plane detection device 100 comprises a stereo vision system (Stereo Vision System) 1 as distance data measuring means for acquiring three-dimensional distance data, and a plane detection unit 2 for detecting the planes present in the distance image composed of the three-dimensional distance data by the line segment extension method.
  • The plane detection unit 2 has a line segment extraction section 2a that selects, from the distance data points forming the image, groups of distance data points estimated to lie on the same plane and extracts a line segment for each group, and an area extension section 2b that detects one or more planar areas present in the image from the group of all line segments extracted by the line segment extraction section 2a.
  • The area extension section 2b selects, from the group of line segments, any three line segments estimated to lie on the same plane and obtains a reference plane from them. It then determines whether a line segment adjacent to the selected three belongs to the same plane as the reference plane; if so, the reference plane is updated to include that line segment as a region-extending line segment, and the area of the reference plane is extended.
  • The line segment extraction section 2a extracts, from each data string (each column or row) of the distance image, a group of distance data points estimated to lie on the same plane in three-dimensional space, and generates one or more line segments from this group according to the distribution of the points. In other words, if the distribution is judged to be biased, the data points are judged not to lie on the same plane and are divided, and the determination of whether the distribution is biased is repeated for each of the divided point groups; when the distribution is not biased, a line segment is generated from the data point group. This processing is performed for all data strings, and the generated line segment group D11 is output to the area extension section 2b.
  • The area extension section 2b selects three line segments estimated to belong to the same plane from the line segment group D11, and obtains from them a reference plane that serves as a seed. The region of this seed plane (seed region) is then extended by integrating into it the line segments belonging to the same plane, and the resulting plane group D2 is output.
  • By performing these processes periodically, the robot apparatus 201 can obtain plane information on important surfaces such as stairs, floors, and walls, and can use it for information processing such as avoiding obstacles and climbing stairs.
  • In order to obtain accurate distance data with the stereo vision system, a pattern (texture) is required on the surface of the staircase ST2. Since parallax is obtained by matching the images of two cameras, parallax cannot be calculated for objects without a pattern, and distance cannot be measured accurately. That is, the measurement accuracy of the distance data in the stereo vision system depends on the texture of the object being measured. Note that parallax is the difference between the positions at which a point in space is mapped in the left and right eyes, and it varies with the distance from the cameras.
  • Therefore, the head unit of the robot apparatus is provided with a stereo camera 11R/L constituting the stereo vision system, and is also provided with a light source 12 that outputs, for example, infrared light as projection means. The light source 12 projects (irradiates) onto objects with little or no texture, such as a staircase ST3 having no pattern or a plain wall, and operates as pattern applying means for giving them a random pattern PT.
  • The means for applying the random pattern PT is not limited to a light source that projects infrared light; the pattern may, for example, be drawn on the object. If infrared light is used, however, a pattern invisible to human eyes yet observable by the CCD cameras mounted on the robot apparatus can be applied.
  • FIG. 42 is a diagram illustrating a plane detection method using the line segment extension method.
  • In the line segment extension method, processing is performed on data strings in the row direction or column direction of the image 11 captured from the focal point F. For example, in a row of pixels (image row), distance data points belonging to the same plane lie on a straight line, so a line segment is generated from the distance data points assumed to belong to the same plane. Then, from the obtained group of line segments, a plane is estimated and detected on the basis of the line segments considered to constitute the same plane.
  • FIG. 43 is a flowchart showing the plane detection process by the line segment extension method. As shown in FIG. 43, first a distance image is input (step S71), and line segments are obtained from the data points estimated to belong to the same plane in each pixel column in the row direction (or column direction) of the distance image (step S72). Then, line segments presumed to belong to the same plane are extracted from the group of line segments, and a plane composed of these line segments is obtained (step S73). In step S73, a region serving as the seed of a plane (hereinafter referred to as a "seed region") is first selected. The condition for the seed region is that three line segments, one from each of vertically adjacent rows (or horizontally adjacent columns), lie on the same plane.
  • The plane to which the three selected line segments of the seed region belong is set as the reference plane, obtained by averaging the three line segments, and the area composed of the three line segments is defined as the reference plane area. It is then determined, by comparing spatial distances, whether a line segment in the row-direction (or column-direction) pixel column adjacent to the selected seed region lies on the same plane as the reference plane. If it does, the adjacent line segment is added to the reference plane area (area extension processing), and the reference plane is updated so as to include the added line segment (plane update processing).
  • This operation is repeated until no line segment on the same plane exists in the adjacent data strings. The search for a seed region and the plane update and area extension processing are then repeated until no region that can serve as a seed (three line segments) remains, and finally, among the plurality of obtained regions, those that form the same plane are connected.
  • In addition, a plane recalculation process (step S74) is provided which re-computes the plane after excluding, from the line segments belonging to the obtained plane, any line segment that deviates from the plane by a predetermined threshold or more; details will be described later.
  • The process of detecting line segments from the three-dimensional distance data and combining regions on the same plane into a single plane is the plane detection process of the conventional line segment extension method. In this modification, however, the line segment extraction method in step S72 differs from the conventional one. That is, as described above, even if a line segment is generated so as to fit the distance data points as well as possible, problems such as over-segmentation and under-segmentation occur unless the threshold is changed according to the accuracy of the distance data. Therefore, in this modification, a method is introduced that analyzes the distribution of the distance data and adaptively changes the threshold according to the accuracy of the distance data and the noise.
  • The line segment extraction section (Line Extraction) 2a receives the three-dimensional distance image from the stereo vision system 1 and, for each column or row of the distance image, detects line segments estimated to lie on the same plane in three-dimensional space.
  • In the line segment extraction, over-segmentation and under-segmentation problems occur due to measurement noise and the like, that is, what is originally one plane is recognized as multiple planes, or multiple planes are recognized as one plane. To avoid this, an algorithm (Adaptive Line Fitting) that adaptively fits line segments according to the distribution of the data points is used.
  • Specifically, the line segment extraction section 2a first roughly extracts a line segment (first line segment) using a relatively large threshold, and then analyzes the distribution of the data point group belonging to the extracted first line segment with respect to a second line segment obtained from that point group by the least squares method described later. In other words, the data points are first extracted by roughly estimating whether they lie on the same plane, and then whether there is a bias in their distribution is analyzed in order to estimate again whether they really lie on the same plane.
  • If the analysis shows that the data point group forms the zigzag shape (Zig-Zag-Shape) described later, the distribution is judged to be biased and the data point group is divided; by repeating this, line segments are extracted adaptively with respect to the noise contained in the data point group.
  • FIG. 44 is a flowchart showing details of the processing in the line segment extracting unit 2a, that is, the processing in step S72 in FIG.
  • First, distance data are input to the line segment extraction section 2a, and a data point group presumed to lie on the same plane in three-dimensional space is extracted. Data points estimated to lie on the same plane are those whose mutual distance in three-dimensional space is less than a predetermined threshold, for example adjacent data points whose spacing is 6 cm or less; such a set of data points is extracted as a data point group P[0..n-1] (step S81). It is then checked whether the number n of samples in this data point group P[0..n-1] is greater than the required minimum number of samples min_n (step S82); if n is less than the required minimum value min_n, an empty set is output as the detection result and the process ends.
  • Next, a line segment L1 connecting the end points of the data point group P[0..n-1] is generated (step S83), and the point of interest (division point) brk having the largest distance dist to this line segment L1 is obtained (step S84). If the distance dist is larger than the data point group division threshold max_d (step S84: YES), the data point group is divided at the point of interest brk into two data point groups P[0..brk] and P[brk..n-1] (step S88).
  • If the distance dist is not larger than max_d (step S84: NO), a line segment L2 that best fits the data point group is obtained by the least squares method described later (step S85), and it is checked whether the data point group P[0..n-1] forms the Zig-Zag-Shape described later with respect to this line segment L2 (step S86). If it is not a Zig-Zag-Shape (step S86: NO), the equation of the obtained line segment is added to the line segment extraction result list (step S87), and the process ends. If it is determined to be a Zig-Zag-Shape (step S86: YES), the process proceeds to step S88 as in the case described above, and the data point group is divided at the point of interest brk into the two data point groups P[0..brk] and P[brk..n-1].
  • For each of the divided data point groups, the processing from step S81 is performed again recursively. This processing is repeated until no data point group is divided any further, that is, until all data point groups have passed through step S87, whereby the line segment extraction result list is obtained.
  • In this way, the influence of noise in the data point group P[0..n-1] is eliminated, and a line segment group consisting of a plurality of line segments can be detected with high accuracy.
  • In step S83, the line segment L1 connecting the end points of the data point group P[0..n-1] is generated, and the point of interest brk is the point having the largest distance to this line segment L1. Alternatively, the point having the largest distance to the line segment obtained by the least squares method described above may be used, and if there are multiple points whose distance is equal to or greater than the data point group division threshold max_d, the data point group P[0..n-1] may be divided at all of these points or at one or more selected points. A compact sketch of this adaptive extraction follows.
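  • The following is a hedged Python sketch of the adaptive line segment extraction of FIG. 44. For compactness the fitted line is approximated by the chord between the end points (the least-squares fit of step S85 is sketched separately below), and the Zig-Zag-Shape test is reduced to counting consecutive points on the same side of the line. The thresholds are illustrative.

```python
# Hedged sketch of adaptive line segment extraction: split at the farthest
# point when it exceeds max_d, or when the residuals show a systematic bend.
import math
from typing import List, Tuple

Point = Tuple[float, float]

def signed_dist(p: Point, a: Point, b: Point) -> float:
    ax, ay = b[0] - a[0], b[1] - a[1]
    n = math.hypot(ax, ay) or 1e-12
    return (ax * (p[1] - a[1]) - ay * (p[0] - a[0])) / n

def is_zigzag(pts: List[Point], min_c: int, eps: float = 1e-9) -> bool:
    """True if more than min_c consecutive points lie on the same side of the
    chord, which indicates a systematic bend rather than random noise."""
    a, b = pts[0], pts[-1]
    run, prev = 0, None
    for p in pts[1:-1]:
        d = signed_dist(p, a, b)
        if abs(d) < eps:              # on the line: no deviation, reset the run
            run, prev = 0, None
            continue
        side = d > 0
        run = run + 1 if side == prev else 1
        prev = side
        if run > min_c:
            return True
    return False

def extract_segments(pts: List[Point], max_d: float = 0.02, min_n: int = 4,
                     min_c: int = 3) -> List[Tuple[Point, Point]]:
    if len(pts) < min_n:
        return []
    a, b = pts[0], pts[-1]
    dists = [abs(signed_dist(p, a, b)) for p in pts]
    brk = max(range(len(pts)), key=lambda i: dists[i])
    if dists[brk] > max_d or is_zigzag(pts, min_c):   # divide at the break point
        return extract_segments(pts[:brk + 1], max_d, min_n, min_c) + \
               extract_segments(pts[brk:], max_d, min_n, min_c)
    return [(a, b)]                                   # accept one line segment

# Two treads seen edge-on: the scan bends at x = 0.5
scan = [(i * 0.1, 0.0 if i < 5 else (i - 5) * 0.05) for i in range(11)]
print(extract_segments(scan))
```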
  • Next, the method of generating a line segment by least squares (Least-Squares Line Fitting) in step S85 will be described. Given the n data points P[0..n-1], the equation of the straight line that best fits the data point group is obtained as follows. The model of the straight line is expressed by the following equation (1):
  •   x cos(α) + y sin(α) + d = 0   ... (1)
  • The fitting error is the sum of the squared distances of the data points from this line, as in equation (2):
  •   E_fit = Σ_i ( x_i cos(α) + y_i sin(α) + d )^2   ... (2)
  • The best-fit straight line is therefore the line that minimizes the total error of equation (2). The α and d that minimize equation (2) can be obtained as in equation (3), using the mean and the variance-covariance matrix of the data point group P.
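  • The following is a hedged sketch of minimizing equation (2) in Python. It uses the standard closed-form solution from the mean and variance-covariance of the points; the embodiment's own equation (3) is not reproduced here, but any correct minimizer of equation (2) yields the same α and d.

```python
# Hedged sketch of the least-squares line fit of equation (2).
import math
from typing import List, Tuple

def fit_line(points: List[Tuple[float, float]]) -> Tuple[float, float]:
    """Return (alpha, d) of the line x*cos(alpha) + y*sin(alpha) + d = 0 that
    minimizes E_fit = sum_i (x_i cos(alpha) + y_i sin(alpha) + d)^2."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((p[0] - mx) ** 2 for p in points)
    syy = sum((p[1] - my) ** 2 for p in points)
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points)
    alpha = 0.5 * math.atan2(-2.0 * sxy, syy - sxx)   # minimizing direction
    d = -(mx * math.cos(alpha) + my * math.sin(alpha))
    return alpha, d

def fit_error(points, alpha, d) -> float:
    return sum((x * math.cos(alpha) + y * math.sin(alpha) + d) ** 2
               for x, y in points)

pts = [(0.0, 0.01), (0.1, -0.01), (0.2, 0.02), (0.3, 0.0), (0.4, -0.02)]
a, d = fit_line(pts)
print(a, d, fit_error(pts, a, d))
```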
  • a method of determining the zigzag shape (Zig-Zag-Shape) in step S86 will be described.
  • FIG. 46 is a flowchart showing a Zig-Zag-Shape discrimination method.
  • First, the data point group P[0..n-1] and the straight line Line(α, d, σ) are input (step S90), where σ indicates the standard deviation of the point sequence.
  • Next, val = sign(sdist(0)) is obtained for the first data point P[0], and the count value count of a counter that counts the number of consecutive data points lying on the same side of the line (hereinafter referred to as the continuous point counter) is set to 1 (steps S91, S92). Here, sign(x) is a function that returns the sign (+ or -) of the value x, and sdist(i) = P[i].x cos α + P[i].y sin α + d is the signed distance of the i-th data point from the straight line Line; in other words, val indicates on which side of the straight line Line the data point P[0] lies.
  • Next, the count value i of a counter for counting the data points (hereinafter referred to as the data point counter) is set to 1 (step S93).
  • While the count value i of the data point counter is smaller than the number n of data points (step S94: YES), the side of the straight line on which the i-th data point P[i] lies is determined using sign(sdist(i)), and the result is assigned to val_i (step S95). Then val is compared with val_i (step S96). If they differ (step S96: NO), val_i is substituted for val, 1 is substituted for the count value count of the continuous point counter (step S98), the count value i of the data point counter is incremented (step S100), and the processing returns to step S94.
  • If val and val_i are the same (step S96: YES), the count value count of the continuous point counter is incremented (step S97), and it is determined whether count is larger than the minimum number of data points min_c for judging a Zig-Zag-Shape (step S99). If it is larger (step S99: YES), the data point group is judged to be a Zig-Zag-Shape, TRUE is output, and the process ends. If count is not larger than min_c (step S99: NO), the process proceeds to step S100, the count value i of the data point counter is incremented, and the processing from step S94 is repeated. When i reaches n without count exceeding min_c, the data point group is judged not to be a Zig-Zag-Shape and FALSE is output.
  • the processing from step S91 to step S100 can be expressed as shown in FIG.
  • FIG. 48 is a block diagram illustrating a processing unit that performs the Zig-Zag-Shape determination. As shown in FIG. 48, the Zig-Zag-Shape discrimination processing unit 20 receives the n data points P[0..n-1] and comprises a direction discriminator 21 that sequentially determines on which side of the straight line Line each data point P[i] lies and outputs the discrimination result val, a delay unit 22 that holds the result of the direction discriminator 21 so that it can be compared with that of the immediately following data point, a comparison unit 23 that compares the direction discrimination result val at the data point P[i] with that at the data point P[i-1], a counter 24, and a comparison unit 25 that compares the count value count of the counter 24 with the minimum number of data points min_c read from the minimum data point storage unit 26.
  • The operation of the Zig-Zag-Shape discrimination processing unit 20 is as follows. The direction discriminator 21 obtains the straight line Line for the data point group P[0..n-1] by the least squares method, calculates the signed distance between each data point P[i] and the straight line Line, and outputs its sign. When the sign of the distance of the data point P[i-1] to the straight line Line is input, the delay unit 22 stores it until the sign for the next data point P[i] is input.
  • The comparison unit 23 compares the signs for the data point P[i] and the data point P[i-1], and outputs a signal that increments the count value count of the counter 24 if the signs are the same; if the signs differ, it outputs a signal that substitutes 1 for the count value count. The comparison unit 25 compares the count value count with the minimum number of data points min_c, and if count is larger than min_c, it outputs a signal indicating that the data point group P[0..n-1] is a Zig-Zag-Shape.
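  • The steps S90 to S100 could be expressed compactly as follows. This is a hedged Python sketch; the threshold min_c is illustrative, and the example line is simply the chord between the first and last points rather than the embodiment's fitted line.

```python
# Hedged sketch of the Zig-Zag-Shape discrimination (steps S90-S100): if more
# than min_c consecutive points lie on the same side of the line Line(alpha, d),
# the point sequence is judged to bend systematically (TRUE).
import math
from typing import List, Tuple

def sdist(p: Tuple[float, float], alpha: float, d: float) -> float:
    """Signed distance of p from the line x*cos(alpha) + y*sin(alpha) + d = 0."""
    return p[0] * math.cos(alpha) + p[1] * math.sin(alpha) + d

def is_zigzag(points: List[Tuple[float, float]],
              alpha: float, d: float, min_c: int = 4) -> bool:
    val = math.copysign(1.0, sdist(points[0], alpha, d))   # side of P[0] (S91)
    count = 1                                              # S92
    for i in range(1, len(points)):                        # S93-S100
        val_i = math.copysign(1.0, sdist(points[i], alpha, d))
        if val_i != val:                                   # side changed (S96: NO)
            val, count = val_i, 1                          # S98
        else:
            count += 1                                     # S97
            if count > min_c:                              # S99
                return True
    return False

# Example: two treads seen edge-on, tested against the chord P[0]-P[-1]
bent = [(i * 0.1, 0.0) for i in range(6)] + [(0.6 + i * 0.1, 0.08 * (i + 1)) for i in range(5)]
theta = math.atan2(bent[-1][1] - bent[0][1], bent[-1][0] - bent[0][0])
alpha = theta + math.pi / 2
d = -(bent[0][0] * math.cos(alpha) + bent[0][1] * math.sin(alpha))
print(is_zigzag(bent, alpha, d))   # -> True
```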
  • Next, the area extension processing will be described. The area extension section 2b receives the line segment group obtained by the line segment extraction section 2a as input, determines to which plane each line segment belongs by fitting the point sequences to planes (Plane Fitting), and thereby divides the given distance image into a plurality of planes (plane areas). The planes are separated by the following method. First, three line segments estimated to lie on the same plane are selected; the plane (reference plane) obtained from these three line segments is the seed of a plane, and the region containing the three line segments is called the seed region.
  • FIG. 49 is a schematic diagram for explaining the area expansion processing.
  • In FIG. 49, for example, the three line segments 32a to 32c indicated by thick lines are selected as the seed region, and one plane (reference plane) P is obtained from these three line segments 32a to 32c. Next, in the data strings 33 and 34 adjacent to the outermost line segments 32a and 32c of the seed region, line segments lying on the same plane as the plane P are searched for; for example, the line segment 33a is selected. A plane P' consisting of these four line segments is then obtained, and the reference plane P is updated. When a further line segment is added in the same way, a plane P'' composed of the five line segments is obtained, and the plane P' is updated.
  • By repeating this area extension processing until there is no line segment to be added to the selected seed region, for example the second tread of the stairs 31 is obtained as the plane 45 surrounded by the broken line. The process of finding three line segments to serve as a seed region in the image 30 and executing the area extension processing is then repeated until no further set of three line segments that can serve as a seed region remains, that is, the processing of step S73 in FIG. 43 is repeated.
  • a and b can be updated as shown in (7) below, and can be extended to a plane update process for n data point groups.
  • FIG. 50 is a flowchart showing the procedure of the seed region search processing and the area extension processing.
  • The seed region is selected by first taking three adjacent line segments (l1, l2, l3) from the row-direction or column-direction data strings used in the line segment extraction, and searching for pixel positions at which the line segments (l1, l2) and (l2, l3) overlap in the direction orthogonal to the data strings (step S101). Each data point has an index indicating its pixel position in the image; for example, if the line segments come from row-direction data strings, these indices are compared to determine whether the line segments overlap in the column direction. If the search succeeds (step S102: YES), the above equation (6-1) is calculated using equation (7), whereby the plane parameters n and d can be determined, and these are used to calculate the root mean square error rms(l1, l2, l3) of the plane equation shown in equation (8).
  • Next, the root mean square error rms(l1, l2, l3) of this plane equation is compared with a predetermined threshold th1, for example 1 cm (steps S103, S104). If the value is larger than the threshold th1, the process returns to step S101.
  • Next, the region is extended from the seed region thus selected. That is, a line segment that is a candidate for addition to the seed region's area is searched for (step S105); when the area has already been updated, this area includes the updated area. The candidate line segments are line segments adjacent to those already included in the area.
  • If a candidate line segment is found (step S106: YES), the root mean square error rms of the plane equation including that line segment is calculated and it is determined whether it is smaller than a predetermined threshold th2 (step S107). If it is smaller, the plane parameters are updated to include the line segment (step S108), and the processing from step S105 is repeated; this continues until there are no more candidate line segments. When no candidate line segment is found (step S106: NO), the process returns to step S101 and a new seed region is searched for. When no seed region remains in the line segment group (step S102: NO), the plane parameters obtained so far are output and the processing ends.
  • In this modification, equation (8) is used both when searching for a seed region, to determine whether the three line segments belong to the same plane, and during the area extension processing, to determine whether a line segment belongs to the reference plane or the updated plane. That is, only when the root mean square error rms of the plane equation is less than a predetermined threshold (th_rms) is the line segment (or group of line segments) estimated to belong to the same plane, and the plane including that line segment is then recalculated. By using the root mean square error rms of the plane equation to determine whether line segments belong to the same plane in this way, the method becomes more robust against noise, and planes can be extracted accurately even when the scene contains fine steps. The reason is described below, and a sketch of the region growing with this criterion follows.
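  • The following is a hedged Python sketch of seed selection and area extension using the rms of the plane equation as the same-plane criterion. Line segments are represented simply as arrays of their 3D points, the plane is fitted by SVD, and the thresholds th1/th2 and the greedy growing order are illustrative assumptions.

```python
# Hedged sketch of seed-region search and area extension with an rms criterion.
import numpy as np
from typing import List

def fit_plane(points: np.ndarray):
    """Return (n, d, rms) of the best-fit plane n.p + d = 0 for Nx3 points."""
    c = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - c)
    n = vt[-1]                          # normal = direction of least variance
    d = -n.dot(c)
    rms = np.sqrt(np.mean((points.dot(n) + d) ** 2))
    return n, d, rms

def grow_planes(segments: List[np.ndarray], th1: float = 0.01,
                th2: float = 0.01) -> List[List[int]]:
    """Greedy sketch: take three consecutive segments as a seed if their joint
    rms is below th1, then extend with following segments while the rms stays
    below th2.  Returns lists of segment indices, one list per plane."""
    planes, used, i = [], set(), 0
    while i + 2 < len(segments):
        if i in used:
            i += 1
            continue
        region = [i, i + 1, i + 2]
        _, _, rms = fit_plane(np.vstack([segments[k] for k in region]))
        if rms > th1:
            i += 1
            continue
        j = i + 3
        while j < len(segments):
            cand = region + [j]
            _, _, rms = fit_plane(np.vstack([segments[k] for k in cand]))
            if rms > th2:
                break
            region, j = cand, j + 1
        used.update(region)
        planes.append(region)
        i = j
    return planes

# Two stacked treads: rows 0-3 at z = 0, rows 4-7 at z = 0.08
rows = [np.array([[x * 0.05, y * 0.05, 0.0 if y < 4 else 0.08] for x in range(10)])
        for y in range(8)]
print(grow_planes(rows))   # -> [[0, 1, 2, 3], [4, 5, 6, 7]]
```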
  • FIG. 51 is a schematic diagram showing this effect, illustrating an example in which the root mean square error rms of the plane equation differs even though the distances between the end points and the plane are equal. FIG. 51A shows a straight line La intersecting the plane P, and FIG. 51B shows a straight line Lb parallel to the plane P and shifted from it by a predetermined distance. Comparing the root mean square error rms(Lb) of the plane equation obtained from the straight line Lb in FIG. 51B with the root mean square error rms(La) obtained from the straight line La in FIG. 51A, rms(Lb) is larger. That is, when the straight line La intersects the plane P as in FIG. 51A, the root mean square error of the plane equation is relatively small and the deviation is often merely the effect of noise, whereas the straight line Lb, for which the root mean square error of the plane equation is large, has a high probability of belonging not to the plane P but to a different plane P'.
  • For this reason, it is preferable to calculate the root mean square error rms of the plane equation as in this modification and to judge line segments as belonging to the same plane only when this value is less than a predetermined threshold. Depending on the environment and the properties of the distance data, however, the conventional criterion of including a line segment in the plane when the distance between its end points and the plane is equal to or smaller than a predetermined threshold, or a combination of both criteria, may also be used. Note that once the plane parameters n and d have been calculated, the root mean square error rms of the plane equation can easily be calculated by equation (8) from the values of the moments obtained for the data point group during line segment extraction.
  • Here, rms(l1, l2, l3) is calculated by obtaining the plane equation from all three line segments using equation (6) above, and Neighbor(index) is a function that returns the indices adjacent to the given index, for example {index-1, index+1}.
  • After the area extension processing of step S73 in FIG. 43 has been performed and the plane equation has been updated, the plane equation is recalculated in step S74 (post-processing). That is, among the distance data points or line segments deemed to belong to the plane represented by the finally obtained, updated plane equation, those that deviate from the plane by a predetermined value or more are excluded and the plane equation is updated again; this further reduces the effect of noise.
  • Step S74 will now be described in detail. Here, a method of recalculating the plane equation in two stages is described. First, data points that do not belong to any plane but whose distance to a plane is equal to or smaller than a relatively large threshold, for example 1.5 cm, are detected, and processing for including those data points in that plane is performed. These processes can be carried out by searching for data points near the boundary of each plane area. After this processing is completed, the plane equation is calculated again. A sketch of such a recalculation is shown below.
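  • The following is a hedged Python sketch of this post-processing: points deviating from their plane by more than a small threshold are dropped, unassigned points closer than a larger threshold are absorbed, and the plane equation is then recomputed. The thresholds and the ordering of the two stages are assumptions for illustration.

```python
# Hedged sketch of the post-processing (step S74): drop outliers, absorb nearby
# unassigned points, and refit the plane equation.
import numpy as np

def refit(points: np.ndarray):
    """Refit the plane n.p + d = 0 to Nx3 points by SVD."""
    c = points.mean(axis=0)
    n = np.linalg.svd(points - c)[2][-1]
    return n, -n.dot(c)

def recalc_plane(plane_pts: np.ndarray, free_pts: np.ndarray,
                 drop_th: float = 0.005, absorb_th: float = 0.015):
    n, d = refit(plane_pts)
    dist = np.abs(plane_pts.dot(n) + d)
    kept = plane_pts[dist <= drop_th]              # stage 1: drop outliers
    if len(free_pts):
        fdist = np.abs(free_pts.dot(n) + d)
        kept = np.vstack([kept, free_pts[fdist <= absorb_th]])  # stage 2: absorb
    return refit(kept)                             # recompute the plane equation

plane = np.array([[x * 0.05, y * 0.05, 0.001 * ((x + y) % 2)]
                  for x in range(6) for y in range(6)])
stray = np.array([[0.12, 0.31, 0.004], [0.4, 0.4, 0.2]])   # one near, one far
print(recalc_plane(plane, stray))
```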
  • FIG. 54A is a schematic diagram showing the floor surface as seen when the standing robot apparatus looks down at the floor. FIG. 54B shows the three-dimensional distance data, with the x-axis on the vertical axis, the y-axis on the horizontal axis, and the z-value represented by the shading of each data point; it also shows the straight lines detected by the row-direction line segment extraction from the data point groups assumed to lie on the same plane in each pixel column. FIG. 54C shows the plane region obtained by the area extension processing from the group of straight lines shown in FIG. 54B.
  • FIG. 55 shows the result when one step is placed on the floor. In FIG. 55A, one step ST3 is placed on the floor F. FIG. 55B shows the experimental conditions: if the distance between the point of interest and the straight line (line segment) exceeds max_d, the data point group is divided. Correct extraction (horizontal) indicates the number of successful plane detections by line segment extraction performed a total of 10 times on the row-direction data strings, and correct extraction (vertical) indicates the success or failure of extraction for the column-direction data strings.
  • FIGS. 55C and 55D are diagrams showing the results of plane detection by the line segment extension method: FIG. 55C shows the result of plane detection by the method of this modification, and FIG. 55D shows, as a comparative example, the result of plane detection by the conventional line segment extension method.
  • FIGS. 56B and 56C show cases where three-dimensional distance data are obtained from a captured image; in each, the left diagram shows an example in which line segments are extracted from the row-direction pixel columns (distance data strings) and the right diagram shows an example in which line segments are extracted from the column-direction pixel columns (distance data strings).
  • In this way, three-dimensional distance data can be acquired from images obtained by photographing various stairs, and plane detection can be performed. For example, as shown in FIGS. 11 and 12, all treads can be detected as planes in every case, and in FIG. 12B a part of the floor surface is also successfully detected as a separate plane.
  • In this modification, a large threshold is set initially, and even when a data point group has no point exceeding the threshold, if it has a zigzag shape the Zig-Zag-Shape discrimination processing judges that the deviation is caused not by noise but by multiple planes and divides the line segment; therefore, a plurality of planes can be detected with high accuracy even from distance information containing noise.
  • As a result, an uneven floor surface composed of a plurality of planes is not erroneously recognized as a single walkable plane, which further facilitates the movement of the robot apparatus.
  • One or more of the plane detection processing, stair recognition processing, and stair climbing control processing described above may be configured by hardware, or may be realized by causing a computing unit (CPU) to execute a computer program. When realized as a computer program, it can be provided by recording it on a recording medium, or by transmitting it via the Internet or other transmission media.


Abstract

A robot device observes the external world by a stereo vision system (1) and outputs stereo data (D1) as a distance image, the stereo data (D1) being three-dimensional distance information calculated from the parallax of both eyes. A plane detector (2) detects planes from the distance image to recognize the planes present in the environment. From plane data (D2), a stair recognizer (3) extracts a plane which the robot can climb up and down, recognizes a stair from that plane, and outputs stair data (D4). A stair climb-up/-down controller (4) outputs a behavior control command (D5) for realizing the movement of climbing up and down the stair by using the stair data (D4). This enables the mobile body itself to obtain information on a stair and autonomously perform the stair climbing movement.

本発明は、このような従来の実情に鑑みて提案されたものであり、移動体自身が階 段に係る情報を取得して自律的に階段昇降動作を可能とするロボット装置及び移動 装置、並びにロボット装置の動作制御方法を提供することを目的とする。  The present invention has been proposed in view of such a conventional situation, and a robot apparatus and a moving apparatus that allow a moving body itself to acquire information on a stair and perform an autonomous stair climbing operation, and An object of the present invention is to provide an operation control method of a robot device.
上述した目的を達成するために、本発明に係るロボット装置は、移動手段により移 動可能なロボット装置において、 3次元の距離データから環境内に含まれる 1又は複 数の平面を検出し、平面情報として出力する平面検出手段と、上記平面情報から移 動可能な平面を有する階段を認識し、該階段の踏面に関する踏面情報及び蹴り上 げ情報を有する階段情報を出力する階段認識手段と、上記階段情報に基づき、階 段昇降可能か否かを判断し、昇降動作が可能であると判断した場合には、その踏面 に対して自律的に位置決めして階段昇降動作を制御する階段昇降制御手段を有す ることを特徴とする。  In order to achieve the above object, a robot device according to the present invention is a robot device that can be moved by moving means, detects one or a plurality of planes included in an environment from three-dimensional distance data, Plane detection means for outputting information as information, stair recognition means for recognizing a stair having a movable plane from the plane information, and outputting tread information and tread information relating to the tread of the stair, Based on the stair information, it is determined whether the stairs can be moved up or down. It is characterized by having.
本発明においては、移動手段として例えば脚部などを備えて移動可能なロボット装 置において、階段の踏面の例えば大きさや位置などに関する踏面情報から、その足 底がその踏面に載せることができる大きさか否かを判断したり、階段の段差を示す蹴 り上げの情報力 その高さの踏面への移動が可能力否かを判断し、移動可能である と判断した場合には自律的に位置決めすることで階段を登ったり降りたりすることが 可能となる。  According to the present invention, in a robot device that can be moved with, for example, legs as moving means, it is determined whether the sole can be placed on the tread based on tread information on, for example, the size and position of the tread of the stairs. Information ability to judge whether or not it is possible to move to the tread of that height, and if it is judged that it is possible to move, it is positioned autonomously. This makes it possible to climb and descend stairs.
また、上記 3次元の距離データを取得する距離計測手段を有することができ、階段 を検出した際には自律的に昇降動作が可能となる。  In addition, it is possible to have a distance measuring means for acquiring the above three-dimensional distance data, and when the stairs are detected, it is possible to autonomously move up and down.
また、上記階段認識手段は、与えられた平面情報力 移動可能な平面を有する階 段を検出して統合前階段情報を出力する階段検出手段と、上記階段検出手段から 出力される時間的に異なる複数の統合前階段情報を統計的に処理することにより統 合した統合済階段情報を上記階段情報として出力する階段統合手段とを有すること ができ、例えば視野が狭いロボット装置であったり、 1度の検出処理ではうまく階段認 識できないような場合であっても、時間的に統計的に統合した統合済階段情報とす ることで、正確かつ高域な認識結果を得ることができる。 Further, the stair recognition means is provided with a given plane information force. The stair detection means which detects a stair having a movable plane and outputs stair information before integration is different in time outputted from the stair detection means. Statistical processing of multiple pre-integration staircase information Stair integration means for outputting combined integrated stair information as the above-mentioned stair information.For example, in the case of a robot device having a narrow field of view, or a case where stairs cannot be recognized well by a single detection process, However, accurate and high-frequency recognition results can be obtained by using integrated staircase information that is statistically integrated over time.
更に、上記階段検出手段は、上記平面情報に基づき踏面の大きさ及び空間的な 位置を認識し、この認識結果である踏面情報を上記統合前階段情報として出力し、 上記階段統合手段は、時間的に前後する踏面情報から、所定の閾値より大きい重複 領域を有しかつ相対的な高さの違いが所定の閾値以下である 2以上の踏面力 なる 踏面群を検出した場合、当該踏面群を何れをも含む一の踏面となるよう統合すること ができ、統合の際には統合すべきとして選択された踏面を全て含むように統合するこ とで、広 、範囲にわたって認識結果を得ることができる。  Further, the stair detecting means recognizes the size and spatial position of the tread based on the plane information, and outputs tread information as a result of the recognition as the pre-integration stair information. If the tread group having two or more tread forces having an overlapping area larger than a predetermined threshold and having a relative difference in height equal to or less than a predetermined threshold is detected from tread information before and after the tread, the tread group is determined. Both treads can be integrated so that they can be combined into a single tread, and at the time of unification, by integrating all treads selected to be integrated, recognition results can be obtained over a wide range. it can.
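
As a rough illustration of the merging rule just described, the sketch below (Python) merges two treads, represented here as axis-aligned rectangles with a height, when their overlap area exceeds one threshold and their height difference stays below another. The rectangle representation and the numeric thresholds are simplifying assumptions made only for illustration.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TreadRect:
    # Axis-aligned bounding rectangle of a tread in the floor plane, plus its height.
    x_min: float
    x_max: float
    y_min: float
    y_max: float
    z: float

def overlap_area(a: TreadRect, b: TreadRect) -> float:
    dx = min(a.x_max, b.x_max) - max(a.x_min, b.x_min)
    dy = min(a.y_max, b.y_max) - max(a.y_min, b.y_min)
    return dx * dy if dx > 0.0 and dy > 0.0 else 0.0

def merge_if_same_tread(a: TreadRect, b: TreadRect,
                        min_overlap: float = 0.002,  # [m^2], assumed threshold
                        max_dz: float = 0.01         # [m], assumed threshold
                        ) -> Optional[TreadRect]:
    """Return one tread containing both inputs when they overlap enough and lie at
    (almost) the same height; otherwise return None and keep them separate."""
    if overlap_area(a, b) <= min_overlap or abs(a.z - b.z) > max_dz:
        return None
    return TreadRect(min(a.x_min, b.x_min), max(a.x_max, b.x_max),
                     min(a.y_min, b.y_min), max(a.y_max, b.y_max),
                     (a.z + b.z) / 2.0)
```
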
更にまた、上記階段認識手段は、上記平面情報に基づき踏面の大きさ及び空間的 な位置を認識し上記踏面情報とすることができ、踏面情報は、少なくとも移動方向に 対して該踏面の手前側の境界を示すフロントエッジ及び奥側の境界を示すバックェ ッジの情報を含むものとすることができ、踏面のフロントエッジ及びバックエッジを認識 するため、例えばスパイラル形状の階段などであっても正確に踏面を認識して階段 昇降動作を可能にする。  Further, the stair recognizing means can recognize the size and spatial position of the tread based on the plane information and use the tread information as the tread information. The tread information is at least in front of the tread with respect to the moving direction. It can include information on the front edge indicating the boundary of the tread and the backedge indicating the boundary on the back side.In order to recognize the front edge and the back edge of the tread, for example, even if it is a spiral-shaped staircase, etc. Recognition of the stairs enables the stairs to move up and down.
また、上記フロントエッジ及びバックエッジに挟まれた領域である安全領域の左右 両側に隣接した領域であって移動可能である確率が高いと推定されるマージン領域 を示す右側マージン情報及び左側マージン情報、上記平面情報に基づき踏面と推 定された領域の重心を示す参照点情報、踏面となる平面を構成する点群の 3次元座 標情報などを有することができる。これらの踏面情報を使用することで、階段昇降動 作をより正確に制御することができる。  Further, right margin information and left margin information indicating a margin area which is adjacent to the left and right sides of the safety area which is an area sandwiched between the front edge and the back edge and is estimated to have a high probability of being movable, Reference point information indicating the center of gravity of the area estimated to be a tread based on the above-described plane information, and three-dimensional coordinate information of a point group forming a tread plane can be provided. By using these tread information, the stair climbing operation can be controlled more accurately.
更に、上記階段認識手段は、上記平面情報に基づき平面の境界を抽出して多角 形を算出し、該多角形に基づき上記踏面情報を算出することができ、例えば、視野 が狭い場合、 3次元距離データの信頼性が高いような場合には、上記多角形は、上 記平面情報に基づき抽出された平面の境界に外接する凸多角形領域とすることがで き、実際に検出されている平面を含んだ領域とすることができる。一方、ノイズが多い 距離データなどの場合には、上記多角形は、上記平面情報に基づき抽出された平 面の境界に内接する凸多角形領域とすることができ、実際に検出されている平面に 内包される領域とすることで、ノイズ部分をカットして正確に踏面を検出することができ る。 Further, the stair recognition means can extract a boundary of a plane based on the plane information to calculate a polygon, and calculate the tread information based on the polygon. For example, when the visual field is narrow, three-dimensional If the reliability of the distance data is high, the polygon can be a convex polygon area circumscribing the boundary of the plane extracted based on the plane information, and is actually detected. It can be a region including a plane. On the other hand, there is much noise In the case of distance data or the like, the polygon may be a convex polygon area inscribed on the boundary of the plane extracted based on the plane information, and may be an area included in the plane actually detected. By doing so, the tread can be accurately detected by cutting the noise portion.
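
The circumscribing convex polygon mentioned here can be computed with any planar convex hull routine once the boundary points of the detected plane are projected into the plane. The sketch below uses Andrew's monotone chain algorithm purely as a stand-in (the embodiment itself refers to Melkman's and Sklansky's algorithms in connection with FIGS. 17 and 18); the sample boundary points are invented for the example.

```python
def convex_hull(points):
    """Convex hull (counter-clockwise) of 2-D points by Andrew's monotone chain."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]   # circumscribing convex polygon of the boundary

# Example: boundary points of a roughly rectangular tread, projected onto the x-y plane.
boundary = [(0.0, 0.0), (0.30, 0.0), (0.30, 0.10), (0.0, 0.10), (0.15, 0.05)]
print(convex_hull(boundary))
```
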
更にまた、現在移動中の移動面におけるバックエッジに対畤した所定位置に移動 した後、昇降動作を実行するよう制御することができ、例えば蹴り上げが小さい階段 など、フロントエッジとバックエッジが重なるような場合には、ノ ックエッジを目標に移 動して昇降動作することができる。同様に、フロントエッジを目標に移動して昇降動作 を行ってもよい。  Furthermore, after moving to a predetermined position on the moving surface that is currently moving, the device can be controlled to perform an ascending / descending operation.For example, the front edge and the back edge overlap, for example, a stair with a small rise. In such a case, it is possible to move the knock edge to the target and move up and down. Similarly, the user may move the front edge to the target to perform the elevating operation.
また、上記階段昇降制御手段は、現在移動中の移動面におけるバックエッジが確 認できない場合は、次に昇降動作の対象となる次段の踏面におけるフロントエッジに 対畤した所定位置に移動した後、昇降動作を実行するよう制御することができ、例え ば床面を移動していて階段を検出した場合、床面のノックエッジが階段初段のフロ ントエッジに重ならない場合があり、そのような場合は階段の初段、すなわち昇降動 作の対象となる次段の踏面のフロントエッジを目標に移動して昇降動作をすることが できる。  If the back edge on the moving surface that is currently moving cannot be confirmed, the stair climbing control means moves to a predetermined position that is opposite to the front edge of the next tread surface to be subjected to the next climbing operation. It can be controlled to perform a vertical movement.For example, if a stair is detected while moving on the floor, the knock edge of the floor may not overlap with the front edge of the first step of the stair. Can move up and down by moving the front edge of the tread of the first step of the stairs, that is, the next step to be moved up and down.
また、上記階段昇降制御手段は、次に移動対象となる踏面を検出し、当該移動対 象となる踏面に対畤した所定位置に移動する一連の動作を行って昇降動作を実行 するよう制御することができ、新たな踏面に移動する毎に、踏面に対してサーチ'ァラ イン'アプローチ動作を実行することで昇降動作を可能にする。  Further, the stair climbing control means detects a tread to be moved next, and performs a series of operations to move to a predetermined position opposite to the tread to be moved, thereby performing a climbing operation. Each time the user moves to a new tread, a search 'align' approach operation is performed on the tread to enable a vertical movement.
更に、上記階段昇降制御手段は、現在位置から次に移動対象となる次段又は次段 以降の踏面が検出できない場合、過去に取得した階段情報力 当該移動対象となる 次段の踏面を検索することができ、予め数段上又は下の階段情報を取得しておくこと で、ロボット装置が自身の直近の情報が得られないような構成で視野が狭い場合で あっても昇降動作を可能とする。  Further, when the next step or the next step or the next step to be moved from the current position cannot be detected from the current position, the stair climbing control means searches for the next step to be moved and the stair information obtained in the past. By acquiring information on several steps up or down in advance, it is possible for the robot device to move up and down even if the field of view is narrow due to a configuration in which the robot device cannot obtain the latest information. I do.
また、上記階段昇降制御手段は、現在の移動面におけるバックエッジに対畤した所 定位置に移動した後、次の移動対象となる踏面を検出し、当該踏面におけるフロント エッジに対畤した所定位置に移動し、当該踏面に移動する昇降動作を実行するよう 制御することができ、フロントエッジ及びバックエッジを使用することで、両エッジが平 行して 、な 、螺旋状の階段であっても昇降動作を可能とする。 In addition, the stair climbing control means detects the next tread to be moved after moving to the predetermined position on the current moving surface, which is opposite to the back edge, and determines the predetermined position of the current moving surface, which is opposite to the front edge. To perform the elevating operation to move to the tread. By using the front edge and the back edge, both edges can be controlled in parallel, and even a spiral staircase can be moved up and down.
更に、上記昇降制御手段は、踏面に対する上記移動手段の位置を規定したパラメ ータを使用して昇降動作を制御することができ、このパラメータは、例えば上記脚部 の足上げ高さ又は足下げ高さに基づき決定されることができる。そして、階段を登る 動作と降りる動作とで上記パラメータの数値を変更するパラメータ切り替え手段を有 することができ、階段を登る動作であっても、降りる動作であってもノ メータ変更す るのみで同様に制御することができる。  Furthermore, the elevation control means can control the elevation operation using a parameter that defines the position of the movement means with respect to the tread surface, and this parameter can be, for example, the height at which the leg is raised or the height at which the foot is lowered. It can be determined based on height. Further, it is possible to have a parameter switching means for changing the numerical values of the above parameters between the operation of climbing the stairs and the operation of descending the stairs. It can be controlled similarly.
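
A minimal sketch of such parameter switching is given below; the field names and numeric values are placeholders chosen for illustration and are not taken from the embodiment, which would derive them from the measured rise of the stairs.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class StepParams:
    foot_lift: float        # how high the swing foot is raised above the stance level [m]
    foot_lower: float       # how far the swing foot is lowered below the stance level [m]
    approach_offset: float  # target distance from the tread edge before stepping [m]

# Assumed example values for climbing up and climbing down.
CLIMB_UP_PARAMS = StepParams(foot_lift=0.06, foot_lower=0.00, approach_offset=0.03)
CLIMB_DOWN_PARAMS = StepParams(foot_lift=0.02, foot_lower=0.05, approach_offset=0.02)

def select_params(next_tread_height: float, current_height: float) -> StepParams:
    """Switch the step parameters depending on whether the next tread lies above or below."""
    return CLIMB_UP_PARAMS if next_tread_height > current_height else CLIMB_DOWN_PARAMS
```
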
また、上記平面検出手段は、 3次元空間で同一平面上にあると推定される距離デ 一タ点群毎に線分を抽出する線分抽出手段と、上記線分抽出手段によって抽出さ れた線分群から同一平面に属すると推定される複数の線分を抽出し該複数の線分 から平面を算出する平面領域拡張手段とを有し、  Further, the plane detecting means is a line segment extracting means for extracting a line segment for each distance data point group estimated to be on the same plane in a three-dimensional space, and is extracted by the line segment extracting means. Plane area extending means for extracting a plurality of line segments estimated to belong to the same plane from the line segment group and calculating a plane from the plurality of line segments,
上記線分抽出手段は、距離データ点の分布に応じて適応的に線分を抽出すること ができ、線分抽出手段は、 3次元の距離データが同一平面上にある場合同一直線上 に並ぶことを利用して線分を抽出するが、この際、ノイズなどの影響により距離データ 点の分布に違いが生じるため、この距離データの分布に応じて適応的に線分を抽出 する(Adaptive Line Fitting)ことにより、ノイズに対してロバストに、精確な線分抽出を 可能とし、抽出された多数の線分力 線分拡張法により平面を求めるため、ノイズの 影響などにより、本来複数平面が存在するのに 1つの平面としたり、 1つの平面しか 存在しないのに複数平面としたりすることなく精確に平面抽出することができる。 更に、上記線分抽出手段は、上記距離データ点間の距離に基づき同一平面上に あると推定される距離データ点群を抽出し、該距離データ点群における距離データ 点の分布に基づき、当該距離データ点群が同一平面上にある力否かを再度推定す ることができ、距離データ点の 3次元空間における距離に基づき一旦距離データ点 群を抽出しておき、データ点の分布に基づき再度同一平面上にある力否かを推定す ることにより精確に線分抽出することができる。  The line segment extraction means can adaptively extract a line segment according to the distribution of distance data points, and the line segment extraction means is arranged on the same straight line when the three-dimensional distance data is on the same plane. In this case, the distribution of distance data points differs due to the effects of noise and other factors. Therefore, the line segments are adaptively extracted according to the distribution of distance data (Adaptive Line). Fitting) makes it possible to extract line segments accurately and robustly against noise, and to obtain planes by a large number of extracted line segment force line segment expansion methods. It is possible to accurately extract planes without using a single plane to perform, or multiple planes when only one plane exists. Further, the line segment extracting means extracts a distance data point group which is estimated to be on the same plane based on the distance between the distance data points, and extracts the distance data point group based on the distribution of the distance data points in the distance data point group. It is possible to re-estimate whether the distance data point group is on the same plane or not, and once extract the distance data point group based on the distance of the distance data points in the three-dimensional space, and based on the distribution of the data points. By estimating the force on the same plane again, line segments can be extracted accurately.
更にまた、上記線分抽出手段は、上記同一平面上にあると推定される距離データ 点群から第 1の線分を抽出し、該距離データ点群のうち該第 1の線分との距離が最も 大きい距離データ点を着目点とし、当該距離が所定の閾値以下である場合に該距 離データ点群から第 2の線分を抽出し、該第 2の線分の一方側に距離データ点が所 定の数以上連続して存在するか否かを判定し、所定の数以上連続して存在する場 合に該距離データ点群を該着目点にて分割することができ、例えば抽出したデータ 点群の端点を結ぶ線分を第 1の線分とし、上記距離が大きい点が存在する場合には 、例えば最小二乗法により第 2の線分を生成し、この第 2の線分において一方側に連 続して複数のデータ点が存在する場合には、データ点群は例えば線分に対してジグ ザグな形などをとつていることが想定でき、したがって抽出したデータ点群には偏りが あると判断して、上記着目点などにてデータ点群を分割することができる。 Furthermore, the line segment extracting means extracts a first line segment from the distance data point group estimated to be on the same plane, and calculates a distance from the distance data point group to the first line segment. Is the most A large distance data point is set as a point of interest, and when the distance is equal to or smaller than a predetermined threshold, a second line segment is extracted from the distance data point group, and a distance data point is located on one side of the second line segment. A determination is made as to whether or not there is a predetermined number or more of continuous data, and if there is a predetermined number or more of data, the distance data point group can be divided at the target point. A line segment connecting the end points of the point group is defined as a first line segment, and if there is a point having a large distance, a second line segment is generated by, for example, the least squares method. If multiple data points exist consecutively on the side, it can be assumed that the data point group has, for example, a zigzag shape with respect to the line segment, and thus the extracted data point group is biased. Is determined, and the data point group can be divided based on the noted point or the like.
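
A compact sketch of this splitting decision is shown below. The point sequence is assumed to be ordered along the scan direction and treated in 2-D (e.g., one image row projected into a vertical plane), and the distance threshold and run length used in the one-side (zigzag) test are illustrative values only; the actual processing of the modified example (FIGS. 44 to 47) carries more bookkeeping than this.

```python
import numpy as np

def point_line_distance(p, a, b):
    """Signed distance from 2-D point p to the straight line through a and b."""
    a, b, p = (np.asarray(v, dtype=float) for v in (a, b, p))
    d = b - a
    n = np.array([-d[1], d[0]])
    n /= np.linalg.norm(n)
    return float(np.dot(p - a, n))

def should_split(points, dist_thresh=0.02, run_thresh=5):
    """Return the index at which to split the ordered point sequence, or None.

    1. First line segment: the straight line through the two end points; the point
       farthest from it is the point of interest.
    2. If that distance exceeds dist_thresh, split there (ordinary split step).
    3. Otherwise fit a second, least-squares line and split at the same point of
       interest when the residuals stay on one side of it for more than run_thresh
       consecutive points (Zig-Zag-Shape discrimination): the points then most
       likely span several planes rather than one noisy line.
    """
    dists = [abs(point_line_distance(p, points[0], points[-1])) for p in points]
    i_max = int(np.argmax(dists))
    if dists[i_max] > dist_thresh:
        return i_max
    pts = np.asarray(points, dtype=float)
    m, c = np.polyfit(pts[:, 0], pts[:, 1], 1)          # second line by least squares
    residuals = pts[:, 1] - (m * pts[:, 0] + c)
    run, prev_sign = 0, 0
    for r in residuals:
        if abs(r) < 1e-9:                               # point lies on the line: reset the run
            run, prev_sign = 0, 0
            continue
        sign = 1 if r > 0 else -1
        run = run + 1 if sign == prev_sign else 1
        prev_sign = sign
        if run > run_thresh:
            return i_max
    return None
```
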
また、上記平面領域拡張手段は、同一の平面に属すると推定される 1以上の線分 を選択して基準平面を算出し、該基準平面と同一平面に属すると推定される線分を 該線分群から拡張用線分として検索し、該拡張用線分により該基準平面を更新する と共に該基準平面の領域を拡張する処理を繰り返し、更新が終了した平面を更新済 平面として出力することができ、同一平面に属するとされる線分により平面領域拡張 処理及び平面更新処理を行うことができる。  Further, the plane area extending means selects one or more line segments estimated to belong to the same plane, calculates a reference plane, and calculates a line segment estimated to belong to the same plane as the reference plane. A process for retrieving a segment from the group of segments as an extension line segment, updating the reference plane with the extension line segment, and expanding the area of the reference plane is repeated, and the updated plane can be output as an updated plane. In addition, the plane area extension processing and the plane update processing can be performed by line segments that belong to the same plane.
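
The grow-and-update loop described here can be sketched as follows. Line segments are represented simply by their 3-D end points, the reference plane is refitted by a least-squares (SVD) fit through all points absorbed so far, and a plain point-to-plane distance with an assumed tolerance stands in for the mean-square-error criterion discussed further below.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through 3-D points: returns (unit normal n, offset d) with n.x = d."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    n = vt[-1]                                   # direction of smallest spread
    return n, float(np.dot(n, centroid))

def segment_fits_plane(segment, n, d, tol):
    """A segment is taken to belong to the plane when both end points lie within tol of it."""
    return all(abs(np.dot(n, np.asarray(p, dtype=float)) - d) <= tol for p in segment)

def grow_plane(seed_segments, segments, tol=0.01):
    """Region growing: start from one or more seed segments, repeatedly absorb segments
    that fit the current reference plane, and refit the plane from all absorbed points."""
    region = list(seed_segments)
    remaining = [s for s in segments if s not in region]
    n, d = fit_plane([p for s in region for p in s])
    changed = True
    while changed:
        changed = False
        for s in list(remaining):
            if segment_fits_plane(s, n, d, tol):
                region.append(s)
                remaining.remove(s)
                n, d = fit_plane([p for seg in region for p in seg])   # update the reference plane
                changed = True
    return region, (n, d)

# Example: two segments near z = 0 and one at z = 0.04 (a different plane).
segs = [[(0.0, 0.00, 0.000), (0.1, 0.00, 0.001)],
        [(0.0, 0.05, 0.001), (0.1, 0.05, 0.000)],
        [(0.0, 0.20, 0.040), (0.1, 0.20, 0.040)]]
region, plane = grow_plane(segs[:2], segs, tol=0.01)
print(len(region), plane)   # the third segment is rejected and left for another plane
```
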
更に、上記更新済平面に属する距離データ点群において、当該更新済平面との距 離が所定の閾値を超える距離データ点が存在する場合、これを除!ヽた距離データ点 群力も再度平面を算出する平面再算出手段を更に有することができ、更新済平面は それに属する全線分の平均した平面として得られているため、これ力 大きく外れた 距離データ点を除いたデータ点群力 再度平面を求めることで、よりノイズなどの影 響を低減した検出結果を得ることができる。  Further, in the distance data point group belonging to the updated plane, if there is a distance data point whose distance from the updated plane exceeds a predetermined threshold, the distance data point group force excluding this is removed again from the plane. It is possible to further have a plane recalculating means for calculating, and the updated plane is obtained as an average plane of all line segments belonging to the updated plane. By obtaining it, a detection result in which the influence of noise and the like is further reduced can be obtained.
更にまた、上記平面領域拡張手段は、線分により定まる平面と上記基準平面との 誤差に基づき当該線分が該基準平面と同一平面に属するか否かを推定することが でき、例えば平面方程式の 2乗平均誤差などに基づきノイズの影響であるのか、異な る平面なのかを判別して更に正確に平面検出することができる。  Furthermore, the plane area expanding means can estimate whether or not the line segment belongs to the same plane as the reference plane based on an error between the plane determined by the line segment and the reference plane. The plane can be detected more accurately by determining whether the plane is due to noise or a different plane based on the root mean square error or the like.
本発明に係るロボット装置の動作制御方法は、移動手段により移動可能なロボット 装置の動作制御方法において、 3次元の距離データから環境内に含まれる 1又は複 数の平面を検出し、平面情報として出力する平面検出工程と、上記平面情報から移 動可能な平面を有する階段を認識し、該階段の踏面に関する踏面情報及び蹴り上 げ情報を有する階段情報を出力する階段認識工程と、上記階段情報に基づき、階 段昇降可能か否かを判断し、昇降動作が可能であると判断した場合には、その踏面 に対して自律的に位置決めして階段昇降動作を制御する階段昇降制御工程とを有 することを特徴とする。 An operation control method for a robot device according to the present invention is the operation control method for a robot device movable by a moving means, wherein one or a plurality of planes included in an environment is detected from three-dimensional distance data, and the plane information is obtained. The plane detection step to be output and transfer from the plane information A stair recognition step of recognizing a stair having a movable plane and outputting tread information and tread information relating to the tread of the stair, and determining whether or not the stairs can be raised or lowered based on the stair information. If it is determined that the ascending / descending operation is possible, a stairs ascending / descending control step of controlling the stairs ascending / descending operation by autonomously positioning with respect to the tread surface is provided.
本発明に係る移動装置は、移動手段により移動可能な移動装置において、 3次元 の距離データから環境内に含まれる 1又は複数の平面を検出し、平面情報として出 力する平面検出手段と、上記平面情報から移動可能な平面を有する階段を認識し、 該階段の踏面に関する踏面情報及び蹴り上げ情報を有する階段情報を出力する階 段認識手段と、上記階段情報に基づき、階段昇降可能か否かを判断し、昇降動作が 可能であると判断した場合には、その踏面に対して自律的に位置決めして階段昇降 動作を制御する階段昇降制御手段を有することを特徴とする。  A mobile device according to the present invention is a mobile device movable by the mobile device, wherein the mobile device detects one or more planes included in the environment from the three-dimensional distance data and outputs the plane information as plane information; A step recognition means for recognizing a stair having a movable plane from the plane information and outputting stair information including tread information and kick-up information relating to the tread of the stair, and whether or not stair climbing is possible based on the stair information When it is determined that the ascending / descending operation is possible, stairs ascending / descending control means for controlling the stairs ascending / descending operation by autonomously positioning with respect to the tread surface is provided.
本発明によれば、移動手段として例えば脚部などを備えて移動可能なロボット装置 及び移動装置において、階段の踏面の例えば大きさや位置などに関する踏面情報 から、その足底がその踏面に載せることができる大きさか否かを判断したり、階段の段 差を示す蹴り上げの情報力 その高さの踏面への移動が可能力否かを判断し、移動 可能であると判断した場合には自律的に位置決めすることで階段を登ったり降りたり することが可能となる。  ADVANTAGE OF THE INVENTION According to the present invention, in a robot device and a moving device that are movable with, for example, legs as moving means, the sole can be placed on the tread from the tread information on, for example, the size and position of the tread of the stairs. The ability to judge whether it is the size that can be carried out and the information power of kicking up indicating the steps of the stairs It is judged whether it is possible to move to the tread of that height or not, and if it is judged that it can be moved, it will be autonomous Positioning at a location makes it possible to climb and descend stairs.
本発明の更に他の目的、本発明によって得られる利点は、以下において図面を参 照して説明される実施に形態から一層明らかにされるであろう。  Still other objects of the present invention and advantages obtained by the present invention will become more apparent from the embodiments described below with reference to the drawings.
図面の簡単な説明 Brief Description of Drawings
[図 1]図 1は、従来の昇降動作を説明する図である。 FIG. 1 is a diagram illustrating a conventional elevating operation.
[図 2]図 2は、従来の昇降動作を説明する図である。 FIG. 2 is a diagram illustrating a conventional elevating operation.
[図 3]図 3A、図 3Bは、従来の昇降動作を説明する図である。 FIG. 3A and FIG. 3B are diagrams illustrating a conventional elevating operation.
[図 4]図 4は、本発明の実施の形態におけるロボット装置の概観を示す斜視図である  FIG. 4 is a perspective view showing an overview of a robot device according to an embodiment of the present invention.
[図 5]図 5は、ロボット装置が具備する関節自由度構成を模式的に示す図である。 [FIG. 5] FIG. 5 is a view schematically showing a configuration of a degree of freedom of a joint included in the robot apparatus.
[図 6]図 6は、ロボット装置の制御システム構成を示す模式図である。 [図 7]図 7は、ロボット装置力ステレオデータから階段昇降動作を発現するまでの処理 を実行するシステムを示す機能ブロック図である。 FIG. 6 is a schematic diagram showing a control system configuration of a robot device. [FIG. 7] FIG. 7 is a functional block diagram showing a system that executes processing from the robot apparatus force stereo data to the step of performing a stair climbing operation.
[図 8]図 8Aは、ロボット装置が外界を撮影している様子を示す模式図、図 8Bは、ロボ ット装置の足底の大きさを示す図である。  [FIG. 8] FIG. 8A is a schematic diagram showing a state in which the robot apparatus photographs the outside world, and FIG. 8B is a view showing the size of the sole of the robot apparatus.
[図 9]図 9は、階段検出を説明する図であって、図 9Aは、階段を正面から見た図、図 9Bは、階段を側面から見た図、図 9Cは、階段を斜め力も見た図である。  [FIG. 9] FIG. 9 is a diagram for explaining staircase detection. FIG. 9A is a diagram of the stairs viewed from the front, FIG. 9B is a diagram of the stairs viewed from the side, and FIG. FIG.
[図 10]図 10は、階段検出の他の例を示す説明する図であって、図 10Aは、階段を正 面から見た図、図 10Bは、階段を側面から見た図、図 10Cは、階段を斜めから見た 図である。 [FIG. 10] FIG. 10 is a diagram illustrating another example of staircase detection. FIG. 10A is a diagram of the stairs viewed from the front, FIG. 10B is a diagram of the stairs viewed from the side, and FIG. Is a view of the stairs viewed diagonally.
[図 11]図 11は、図 9の階段を検出した結果の一例を示す図であって、図 11Aは、図 9の階段を撮影した場合の画像を示す模式図、図 11B乃至図 11Dは、図 11Aに示 す画像から取得した 3次元の距離データを示す図である。  FIG. 11 is a diagram showing an example of a result of detecting the stairs in FIG. 9; FIG. 11A is a schematic diagram showing an image obtained by photographing the stairs in FIG. 9; FIG. 11B to FIG. FIG. 11B is a diagram showing three-dimensional distance data acquired from the image shown in FIG. 11A.
[図 12]図 12は、図 10の階段を検出した結果の一例を示す図であって、図 12Aは、 図 10の階段を撮影した場合の画像を示す模式図、図 12B乃至図 12Dは、図 12Aに 示す画像から取得した 3次元の距離データを示す図である。  FIG. 12 is a diagram showing an example of a result of detecting the stairs in FIG. 10; FIG. 12A is a schematic diagram showing an image obtained by photographing the stairs in FIG. 10; FIG. 12B to FIG. FIG. 12B is a diagram showing three-dimensional distance data acquired from the image shown in FIG. 12A.
[図 13]図 13Aは、階段を撮影した画像を示す模式図、図 13Bは、図 13Aから取得し た 3次元距離データ力 4つの平面領域 A、 B、 C、 Dを検出した結果を示す図である [FIG. 13] FIG. 13A is a schematic diagram showing an image of a staircase, and FIG. 13B shows a result of detecting four plane areas A, B, C, and D obtained from the three-dimensional distance data obtained from FIG. 13A. It is a figure
。 階段を検出した結果の一例を示す図である。 . It is a figure showing an example of the result of having detected a staircase.
[図 14]図 14は、階段認識器を示す機能ブロック図である。  FIG. 14 is a functional block diagram showing a staircase recognizer.
[図 15]図 15は、階段検出処理の手順を示すフローチャートである。  FIG. 15 is a flowchart showing a procedure of a staircase detection process.
[図 16]図 16A、図 16Bは、多角形を示す模式図である。  FIG. 16A and FIG. 16B are schematic diagrams showing polygons.
[図 17]図 17は、 Melkmanのアルゴリズムを説明するための模式図である。  FIG. 17 is a schematic diagram for explaining the algorithm of Melkman.
[図 18]図 18A、図 18Bは、 Sklanskyのアルゴリズムにより多角形を求める方法を説明 するための模式図である。  FIG. 18A and FIG. 18B are schematic diagrams for explaining a method for obtaining a polygon by Sklansky's algorithm.
圆 19]図 19は、非凸多角形形状の階段について発生する問題を説明するための模 式図であって、図 19Aは、入力される平面を示す図、図 19Bは、凸包による非凸多 角形形状の階段の多角形表現結果を示す図である。 [19] FIG. 19 is a schematic diagram for explaining a problem that occurs with a non-convex polygonal staircase. FIG. 19A is a diagram showing an input plane, and FIG. It is a figure which shows the polygon expression result of the convex polygon-shaped staircase.
[図 20]図 20は、平滑ィ匕によって入力平面を包含する多角形を求める方法を示す模 式図であって、図 20Aは、入力された平面を示す図、図 20Bは、入力平面を示す多 角形力も不連続なギャップを除去し平滑ィ匕した多角形を示す図、図 20Cは、図 20B で得られた多角形に対してラインフィッティングにより更に平滑ィ匕した多角形を示す 図である。 [FIG. 20] FIG. 20 is a schematic diagram showing a method for obtaining a polygon including an input plane by smoothing. 20A is a diagram illustrating an input plane, FIG. 20B is a diagram illustrating an input plane, and FIG. 20C is a diagram illustrating a polygon in which a polygonal force in which a discontinuous gap is also removed is removed and smoothed. FIG. 21B is a diagram showing a polygon obtained by further smoothing the polygon obtained in FIG. 20B by line fitting.
[図 21]図 21は、ギャップ除去とラインフィットによる平滑ィ匕によって入力平面を包含す る多角形を求める処理のプログラム例を示す図である。  FIG. 21 is a diagram showing a program example of a process of obtaining a polygon including an input plane by smoothing by gap removal and line fitting.
[図 22]図 22A、図 22Bは、階段パラメータの算出方法を説明するための模式図であ る。  FIG. 22A and FIG. 22B are schematic diagrams for explaining a method of calculating staircase parameters.
[図 23]図 23は、最終的に認識される踏面及び階段パラメータを説明するための模式 図である。  FIG. 23 is a schematic diagram for explaining tread and staircase parameters finally recognized.
[図 24]図 24A、図 24Bは、階段を示す模式図である。  FIG. 24A and FIG. 24B are schematic diagrams showing stairs.
[図 25]図 25は、階段統合処理の方法を示すフローチャートである。  FIG. 25 is a flowchart showing a method of staircase integration processing.
[図 26]図 26は、オーバーラップしている階段データを統合する処理を説明するため の模式図である。  FIG. 26 is a schematic diagram for explaining a process of integrating overlapping staircase data.
[図 27]図 27は、ァライン動作を説明するための図である。  FIG. 27 is a diagram for explaining an alignment operation.
[図 28]図 28は、アプローチ動作を説明するための模式図である。  FIG. 28 is a schematic diagram for explaining an approach operation.
[図 29]図 29は、階段昇降動作の手順を示すフローチャートである。  FIG. 29 is a flowchart showing a procedure of a stair climbing operation.
[図 30]図 30は、サーチ ·ァライン ·アプローチ処理方法を示すフローチャートである。  FIG. 30 is a flowchart showing a search-alignment-approach processing method.
[図 31]図 31は、昇降動作処理の方法を示すフローチャートである。  FIG. 31 is a flowchart showing a method of a lifting operation process.
[図 32]図 32は、ロボット装置が認識しているか又は認識する予定の階段面を示す模 式図である。  [FIG. 32] FIG. 32 is a schematic diagram showing a staircase surface recognized or scheduled to be recognized by the robot device.
[図 33]図 33は、昇降動作処理の方法を示すフローチャートである。  FIG. 33 is a flowchart showing a method of a lifting operation process.
[図 34]図 34は、昇降動作処理の方法を示すフローチャートである。  FIG. 34 is a flowchart showing a method of a lifting operation process.
[図 35]図 35Aは、ロボット装置により認識されている踏面と足底の関係を説明するた めの図であり、図 35Bは、各部の寸法を示す図である。  FIG. 35A is a diagram for explaining a relationship between a tread and a sole recognized by a robot device, and FIG. 35B is a diagram illustrating dimensions of respective parts.
[図 36]図 36は、ロボット装置が昇降動作を行った様子を撮影したものをトレースした 図である。  [FIG. 36] FIG. 36 is a traced image of a state in which the robot device has performed a vertical movement.
[図 37]図 37は、ロボット装置が昇降動作を行った様子を撮影したものをトレースした 図である。 [FIG. 37] FIG. 37 is a traced image of a state in which the robot apparatus performs a vertical movement. FIG.
[図 38]図 38は、単一の段部とロボット装置の足底の関係を示す図である。  FIG. 38 is a diagram showing the relationship between a single step and the sole of the robot apparatus.
[図 39]図 39は、単一の凹部とロボット装置の足底の関係を示す図である。  FIG. 39 is a view showing the relationship between a single recess and the sole of the robot apparatus.
[図 40]図 40は、本変形例における平面検出装置を示す機能ブロック図である。  FIG. 40 is a functional block diagram showing a flat panel detection device in this modification.
[図 41]図 41は、テクスチャを付与する手段を有しているロボット装置を説明するため の図である。  FIG. 41 is a diagram for explaining a robot apparatus having a means for giving a texture.
[図 42]図 42は、本変形例における線分拡張法による平面検出方法を説明する図で ある。  [FIG. 42] FIG. 42 is a diagram illustrating a plane detection method by a line segment extension method in this modification.
[図 43]図 43は、線分拡張法による平面検出処理を示すフローチャートである。  FIG. 43 is a flowchart showing plane detection processing by the line segment extension method.
[図 44]図 44は、本変形例における線分抽出部における処理の詳細を示すフローチ ヤートである。  [FIG. 44] FIG. 44 is a flowchart showing details of processing in a line segment extraction unit in this modification.
[図 45]図 45は、距離データ点の分布の様子を示す図であって、図 45Aは、データの 分布が線分に対してジグザグ形である場合、図 45Bは、ノイズなどにより線分近傍に 一様に分布して 、る場合を示す模式図である。  [FIG. 45] FIG. 45 is a diagram showing a distribution of distance data points. FIG. 45A shows a case where the distribution of data is zigzag with respect to a line segment, and FIG. FIG. 4 is a schematic diagram showing a case where the data is uniformly distributed in the vicinity.
[図 46]図 46は、本変形例における Zig-Zag-Shape判別方法を示すフローチャートで ある。  FIG. 46 is a flowchart showing a Zig-Zag-Shape discrimination method according to the present modification.
[図 47]図 47は、上記 Zig-Zag-Shape判別処理のプログラム例を示す図である。  FIG. 47 is a diagram showing a program example of the Zig-Zag-Shape discrimination processing.
[図 48]図 48は、 Zig-Zag-Shape判別処理を行う処理部を示すブロック図である。 FIG. 48 is a block diagram illustrating a processing unit that performs a Zig-Zag-Shape determination process.
[図 49]図 49は、本変形例における領域拡張処理を説明するための模式図である。 [FIG. 49] FIG. 49 is a schematic diagram for explaining an area extension process in the present modification.
[図 50]図 50は、本変形例における領域拡張部における領域種を検索する処理及び 領域拡張処理の手順を示すフローチャートである。 [FIG. 50] FIG. 50 is a flowchart showing a procedure of a process of searching for an area type and an area expanding process in an area expanding unit in the present modification.
[図 51]図 51は、端点と直線との距離が等しくても平面方程式の 2乗平均誤差 rmsが 異なる例を示す図であって、図 51Aは、ノイズなどの影響により線分が平面力もずれ ている場合、図 51Bは、線分が属する他の平面が存在する場合を示す模式図である  [FIG. 51] FIG. 51 is a diagram showing an example in which the root-mean-square error rms of the plane equation is different even when the distance between the end point and the straight line is equal. FIG. 51A shows that the line segment has a flat force due to the influence of noise or the like. If misaligned, FIG. 51B is a schematic diagram showing a case where there is another plane to which the line segment belongs.
[図 52]図 52は、領域種の選択処理を示す図である。 FIG. 52 is a diagram showing an area type selection process.
[図 53]図 53は、領域拡張処理を示す図である。 FIG. 53 is a diagram showing an area extension process.
[図 54]図 54Aは、ロボット装置が立った状態で床面を見下ろした際の床面を示す模式図、図 54Bは、縦軸を x、横軸を y、各データ点の濃淡で z軸を表現した 3次元距離データ、及び行方向の画素列から線分抽出処理にて同一平面に存在するとされるデータ点群から直線を検出したものを示す図、図 54Cは、図 54Bに示す直線群から領域拡張処理により得られた平面領域を示す図である。  [FIG. 54] FIG. 54A is a schematic diagram showing the floor surface as seen when the robot apparatus stands and looks down at it; FIG. 54B is a diagram showing the three-dimensional distance data, with x on the vertical axis, y on the horizontal axis and the z-axis expressed by the shading of each data point, together with straight lines detected from the groups of data points judged by the line segment extraction processing on the row-direction pixel sequences to lie on the same plane; FIG. 54C is a diagram showing the plane regions obtained from the group of straight lines shown in FIG. 54B by the region expansion processing.
[図 55]図 55は、床面に段差を一段置いたときの本変形例における平面検出方法と従来の平面検出方法との結果の違いを説明するための図であって、図 55Aは、観察された画像を示す模式図、図 55Bは、実験条件を示す図、図 55Cは、本変形例における平面検出方法により平面検出された結果を示す図、図 55Dは、従来の平面検出方法により平面検出された結果を示す図である。  [FIG. 55] FIG. 55 is a diagram for explaining the difference between the results of the plane detection method of the present modified example and those of a conventional plane detection method when a single step is placed on the floor surface; FIG. 55A is a schematic diagram showing the observed image, FIG. 55B is a diagram showing the experimental conditions, FIG. 55C is a diagram showing the result of plane detection by the plane detection method of the present modified example, and FIG. 55D is a diagram showing the result of plane detection by the conventional plane detection method.
[図 56]図 56Aは、床面を撮影した画像を示す模式図、図 56B及び図 56Cは、図 56Aに示す床面を撮影して取得した 3次元距離データの水平方向及び垂直方向の距離データ点列から、それぞれ本変形例の線分検出により検出した線分及び従来の線分検出により検出した線分を示す図である。  [FIG. 56] FIG. 56A is a schematic diagram showing a captured image of the floor surface; FIGS. 56B and 56C are diagrams showing, from the horizontal and vertical distance data point sequences of the three-dimensional distance data acquired by photographing the floor surface shown in FIG. 56A, the line segments detected by the line segment detection of the present modified example and those detected by the conventional line segment detection, respectively.
発明を実施するための最良の形態 BEST MODE FOR CARRYING OUT THE INVENTION
以下、本発明を適用した具体的な実施の形態について、図面を参照しながら詳細に説明する。この実施の形態は、本発明を、周囲の環境に存在する階段などの段差を認識する段差認識装置を搭載した自律的に動作可能なロボット装置に適用したものである。  Hereinafter, specific embodiments to which the present invention is applied will be described in detail with reference to the drawings. In this embodiment, the present invention is applied to an autonomously operable robot device equipped with a step recognition device for recognizing steps such as stairs existing in the surrounding environment.
本実施の形態におけるロボット装置は、ステレオビジョンなどにより得られた距離情報  The robot device according to the present embodiment uses distance information obtained by stereo vision or the like.
(距離データ) から抽出した複数平面から階段を認識し、 この階段認識結果を利用して階 段昇降動作を可能とするものである。 It recognizes stairs from multiple planes extracted from (distance data), and makes it possible to move up and down the stairs using the results of this stair recognition.
(1) ロボット装置  (1) Robot device
ここでは、まず、このようなロボット装置の一例として 2足歩行タイプのロボット装置を例にとって説明する。このロボット装置は、住環境その他の日常生活上の様々な場面における人的活動を支援する実用ロボットであり、内部状態(怒り、悲しみ、喜び、楽しみ等)に応じて行動できるほか、人間が行う基本的な動作を表出できるエンターテインメントロボット装置である。なお、ここでは、2足歩行型のロボット装置を例にとって説明するが、階段認識装置は、2足歩行のロボット装置に限らず、脚式移動型のロボット装置に搭載すればロボット装置に階段昇降動作を実行させることができる。  Here, first, a bipedal walking type robot device will be described as an example of such a robot device. This robot device is a practical robot that supports human activities in various situations in the living environment and other scenes of everyday life; it is an entertainment robot device that can act according to its internal state (anger, sadness, joy, enjoyment, etc.) and can express basic actions performed by humans. Although a bipedal walking robot device is described here as an example, the staircase recognition device is not limited to bipedal robot devices; if it is mounted on any legged mobile robot device, it can make that robot device perform the stair climbing operation.
図 4は、本実施の形態におけるロボット装置の概観を示す斜視図である。図 4に示すように、ロボット装置 201は、体幹部ユニット 202の所定の位置に頭部ユニット 203が連結されると共に、左右 2つの腕部ユニット 204RZLと、左右 2つの脚部ユニット 205RZLが連結されて構成されている(ただし、R及び Lの各々は、右及び左の各々を示す接尾辞である。以下において同じ。)。  FIG. 4 is a perspective view showing an overview of the robot device according to the present embodiment. As shown in FIG. 4, the robot apparatus 201 is configured such that a head unit 203 is connected to a predetermined position of a trunk unit 202, and two left and right arm units 204RZL and two left and right leg units 205RZL are connected thereto (where R and L are suffixes indicating right and left, respectively; the same applies hereinafter).
このロボット装置 201が具備する関節自由度構成を図 5に模式的に示す。頭部ュ- ット 203を支持する首関節は、首関節ョー軸 101と、首関節ピッチ軸 102と、首関節口 一ノレ軸 103と!ヽぅ 3自由度を有して!/ヽる。  FIG. 5 schematically shows the configuration of the degrees of freedom of the joints provided in the robot apparatus 201. The neck joint supporting the head unit 203 includes a neck joint axis 101, a neck joint pitch axis 102, and a neck joint one-piece axis 103! With three degrees of freedom! / Puru.
また、上肢を構成する各々の腕部ユニット 204RZLは、肩関節ピッチ軸 107と、肩 関節ローノレ軸 108と、上腕ョー軸 109と、月寸関節ピッチ軸 110と、前腕ョー軸 111と、 手首関節ピッチ軸 112と、手首関節ロール輪 113と、手部 114とで構成される。手部 114は、実際には、複数本の指を含む多関節 *多自由度構造体である。ただし、手部 114の動作は、ロボット装置 201の姿勢制御や歩行制御に対する寄与や影響が少な いので、本明細書では簡単のため、ゼロ自由度と仮定する。したがって、各腕部は 7 自由度を有するとする。  Further, each arm unit 204RZL constituting the upper limb includes a shoulder joint pitch axis 107, a shoulder joint Lorenole axis 108, an upper arm joint axis 109, a lunar joint pitch axis 110, a forearm joint axis 111, and a wrist joint. It comprises a pitch axis 112, a wrist joint roll wheel 113, and a hand 114. The hand 114 is actually a multi-joint * multi-degree-of-freedom structure including a plurality of fingers. However, the movement of the hand 114 has little contribution or influence to the posture control and the walking control of the robot apparatus 201, and therefore is assumed to have zero degrees of freedom for simplicity in this specification. Therefore, each arm has seven degrees of freedom.
また、体幹部ユニット 202は、体幹ピッチ軸 104と、体幹ロール軸 105と、体幹ョー 軸 106という 3自由度を有する。  The trunk unit 202 has three degrees of freedom, namely, a trunk pitch axis 104, a trunk roll axis 105, and a trunk axis 106.
また、下肢を構成する各々の脚部ユニット 205RZLは、股関節ョー軸 115と、股関 節ピッチ軸 116と、股関節ロール軸 117と、膝関節ピッチ軸 118と、足首関節ピッチ 軸 119と、足首関節ロール軸 120と、足底 121とで構成される。本明細書中では、股 関節ピッチ軸 116と股関節ロール軸 117の交点は、ロボット装置 201の股関節位置 を定義する。人体の足底 121は、実際には多関節,多自由度の足底を含んだ構造体 であるが、本明細書においては、簡単のためロボット装置 201の足底は、ゼロ自由度 とする。したがって、各脚部は、 6自由度で構成される。  Each of the leg units 205RZL constituting the lower limb has a hip joint axis 115, a hip joint pitch axis 116, a hip joint roll axis 117, a knee joint pitch axis 118, an ankle joint pitch axis 119, and an ankle joint axis. It is composed of a roll shaft 120 and a sole 121. In this specification, the intersection of the hip joint pitch axis 116 and the hip joint roll axis 117 defines the hip joint position of the robot device 201. Although the sole 121 of the human body is actually a structure including a multi-joint, multi-degree-of-freedom sole, in this specification, the sole of the robot apparatus 201 is assumed to have zero degrees of freedom for simplicity. . Thus, each leg has six degrees of freedom.
以上を総括すれば、ロボット装置 201全体としては、合計で 3 + 7 X 2 + 3 + 6 X 2 = 32自由度を有することになる。ただし、エンターテインメント向けのロボット装置 201が 必ずしも 32自由度に限定されるわけではない。設計'制作上の制約条件や要求仕 様等に応じて、自由度すなわち関節数を適宜増減することができることはいうまでも ない。 Summarizing the above, the robot apparatus 201 as a whole has a total of 3 + 7 × 2 + 3 + 6 × 2 = 32 degrees of freedom. However, the robot 201 for entertainment is not necessarily limited to 32 degrees of freedom. Needless to say, the degree of freedom, that is, the number of joints, can be appropriately increased or decreased according to design constraints, production constraints, required specifications, etc. Absent.
上述したようなロボット装置 201がもつ各自由度は、実際にはァクチユエータを用い て実装される。外観上で余分な膨らみを排してヒトの自然体形状に近似させること、 2 足歩行と!/、う不安定構造体に対して姿勢制御を行うこと等の要請から、ァクチユエ一 タは小型かつ軽量であることが好まし!/、。  Each degree of freedom of the robot apparatus 201 as described above is actually implemented using an actuator. Eliminating extra bulges on the appearance to approximate the human body shape, bipedal walking! Due to demands such as controlling the posture of unstable structures, it is preferable that the actuator is small and lightweight!
このようなロボット装置は、ロボット装置全体の動作を制御する制御システムを例え ば体幹部ユニット 202等に備える。図 6は、ロボット装置 201の制御システム構成を示 す模式図である。図 6に示すように、制御システムは、ユーザ入力等に動的に反応し て情緒判断や感情表現を司る思考制御モジュール 200と、ァクチユエータ 350の駆 動等、ロボット装置 201の全身協調運動を制御する運動制御モジュール 300とで構 成される。  Such a robot device includes a control system that controls the operation of the entire robot device, for example, in the trunk unit 202 or the like. FIG. 6 is a schematic diagram illustrating a control system configuration of the robot device 201. As shown in Fig. 6, the control system controls the whole-body cooperative movement of the robot device 201, such as the drive of the thought control module 200, which dynamically responds to user input, etc., and performs emotion judgment and emotional expression, and the actuator 350. And a motion control module 300 to be operated.
思考制御モジュール 200は、情緒判断や感情表現に関する演算処理を実行する C PU (Central Processing Unit) 211や、 RAM (Random Access Memory) 212、 ROM (Read Only Memory) 213及び外部記憶装置(ノヽード ·ディスク ·ドライブ等) 214等で 構成され、モジュール内で自己完結した処理を行うことができる、独立駆動型の情報 処理装置である。  The thinking control module 200 includes a CPU (Central Processing Unit) 211 that executes arithmetic processing related to emotion determination and emotional expression, a RAM (Random Access Memory) 212, a ROM (Read Only Memory) 213, and an external storage device (node). This is an independent drive type information processing device composed of 214, etc., which can perform self-contained processing in the module.
この思考制御モジュール 200は、画像入力装置 251から入力される画像データや 音声入力装置 252から入力される音声データ等、外界からの刺激等に従って、ロボ ット装置 201の現在の感情や意思を決定する。すなわち、上述したように、入力され る画像データ力 ユーザの表情を認識し、その情報をロボット装置 201の感情や意 思に反映させることで、ユーザの表情に応じた行動を発現することができる。ここで、 画像入力装置 251は、例えば CCD (Charge Coupled Device)カメラを複数備えてお り、これらのカメラにより撮像した画像力も距離画像を得ることができる。また、音声入 力装置 252は、例えばマイクロホンを複数備えている。  The thinking control module 200 determines the current emotion and intention of the robot device 201 according to external stimuli, such as image data input from the image input device 251 and audio data input from the voice input device 252. I do. That is, as described above, the input image data is recognized. By recognizing the user's facial expression and reflecting the information on the emotions and intentions of the robot device 201, it is possible to express an action according to the user's facial expression. . Here, the image input device 251 includes, for example, a plurality of CCD (Charge Coupled Device) cameras, and can obtain a distance image with an image captured by these cameras. The audio input device 252 includes, for example, a plurality of microphones.
思考制御モジュール 200は、意思決定に基づいた動作又は行動シーケンス、すな わち四肢の運動を実行するように、運動制御モジュール 300に対して指令を発行す る。  The thinking control module 200 issues a command to the motion control module 300 to execute a motion or action sequence based on a decision, that is, a motion of a limb.
一方の運動制御モジュール 300は、ロボット装置 201の全身協調運動を制御する CPU311や、 RAM312、 ROM313及び外部記憶装置(ノヽード'ディスク'ドライブ等 ) 314等で構成され、モジュール内で自己完結した処理を行うことができる独立駆動 型の情報処理装置である。また、外部記憶装置 314には、例えば、オフラインで算出 された歩行パターンや目標とする ZMP軌道、その他の行動計画を蓄積することがで きる。 One motion control module 300 controls the whole body cooperative motion of the robot device 201. This is an independent drive type information processing device which includes a CPU 311, a RAM 312, a ROM 313, an external storage device (a node 'disk' drive, etc.) 314, and can perform self-contained processing in a module. Further, in the external storage device 314, for example, a walking pattern calculated offline, a target ZMP trajectory, and other action plans can be stored.
この運動制御モジュール 300には、図 5に示したロボット装置 201の全身に分散す るそれぞれの関節自由度を実現するァクチユエータ 350、対象物との距離を測定す る距離計測センサ(図示せず)、体幹部ユニット 202の姿勢や傾斜を計測する姿勢セ ンサ 351、左右の足底の離床又は着床を検出する接地確認センサ 352, 353、足底 121の足底 121に設けられる荷重センサ、バッテリ等の電源を管理する電源制御装 置 354等の各種の装置力 バス'インターフェース(IZF) 310経由で接続されている 。ここで、姿勢センサ 351は、例えば加速度センサとジャイロ 'センサの組み合わせに よって構成され、接地確認センサ 352, 353は、近接センサ又はマイクロ 'スィッチ等 で構成される。  The motion control module 300 includes an actuator 350 for realizing the degrees of freedom of the joints distributed over the whole body of the robot apparatus 201 shown in FIG. 5, and a distance measurement sensor (not shown) for measuring a distance to an object. , A posture sensor 351 for measuring the posture and inclination of the trunk unit 202, ground contact confirmation sensors 352, 353 for detecting leaving or landing on the left and right soles, a load sensor provided on the sole 121 of the sole 121, and a battery. Various devices such as a power supply control device 354 that manages power supplies such as the power supply are connected via a bus interface (IZF) 310. Here, the attitude sensor 351 is configured by, for example, a combination of an acceleration sensor and a gyro 'sensor, and the grounding confirmation sensors 352, 353 are configured by a proximity sensor or a micro' switch.
思考制御モジュール 200と運動制御モジュール 300は、共通のプラットフォーム上 で構築され、両者間はバス.インターフェース 210, 310を介して相互接続されている 運動制御モジュール 300では、思考制御モジュール 200から指示された行動を体 現すベぐ各ァクチユエータ 350による全身協調運動を制御する。すなわち、 CPU3 11は、思考制御モジュール 200から指示された行動に応じた動作パターンを外部記 憶装置 314から取り出し、又は、内部的に動作パターンを生成する。そして、 CPU3 11は、指定された動作パターンに従って、足部運動、 ZMP軌道、体幹運動、上肢運 動、腰部水平位置及び高さ等を設定するとともに、これらの設定内容に従った動作を 指示する指令値を各ァクチユエータ 350に転送する。  The thought control module 200 and the motion control module 300 are built on a common platform, and are interconnected via a bus; interfaces 210 and 310.The motion control module 300 is instructed by the thought control module 200. It controls the whole-body coordination by each actuator 350 that embodies the behavior. That is, the CPU 311 retrieves an operation pattern corresponding to the action instructed from the thought control module 200 from the external storage device 314, or internally generates an operation pattern. Then, the CPU 311 sets the foot motion, the ZMP trajectory, the trunk motion, the upper limb motion, the waist horizontal position and the height, etc., according to the specified motion pattern, and instructs the motion according to the set contents. Command value to be transferred to each actuator 350.
また、 CPU311は、姿勢センサ 351の出力信号によりロボット装置 201の体幹部ュ ニット 202の姿勢や傾きを検出するとともに、各接地確認センサ 352, 353の出力信 号により各脚部ユニット 205RZLが遊脚又は立脚の何れの状態であるかを検出する ことによって、ロボット装置 201の全身協調運動を適応的に制御することができる。更 に、 CPU311は、 ZMP位置が常に ZMP安定領域の中心に向力うように、ロボット装 置 201の姿勢や動作を制御する。 In addition, the CPU 311 detects the posture and inclination of the trunk unit 202 of the robot apparatus 201 based on the output signal of the posture sensor 351, and the leg unit 205RZL detects the swing leg based on the output signal of each of the grounding confirmation sensors 352 and 353. Alternatively, by detecting whether the robot is in the standing state, the whole body cooperative movement of the robot apparatus 201 can be adaptively controlled. Change In addition, the CPU 311 controls the posture and operation of the robot device 201 so that the ZMP position always faces the center of the ZMP stable region.
また、運動制御モジュール 300は、思考制御モジュール 200において決定された 意思通りの行動がどの程度発現された力 すなわち処理の状況を、思考制御モジュ ール 200〖こ返すよう〖こなっている。このようにしてロボット装置 201は、制御プログラム に基づいて自己及び周囲の状況を判断し、自律的に行動することができる。  In addition, the motion control module 300 is designed to return the force, ie, the state of processing, to which degree the action determined by the thought control module 200 has been performed, as described in the thought control module 200. In this way, the robot device 201 can determine its own and surrounding conditions based on the control program, and can act autonomously.
(2) ロボット装置の動作制御方法  (2) Operation control method of the robot device
上述のロボット装置においては、頭部ユニット 203にステレオビジョンシステムを搭 載し、外界の 3次元距離情報を取得することができる。次に、このようなロボット装置な どに好適に搭載されるものであって、ロボット装置力 ステレオビジョンシステムにより 周囲の環境力 獲得した 3次元距離データを使用して平面を検出し、この平面検出 結果に基づき階段を認識し、この階段認識結果を使用して階段昇降動作を行う一連 の処理について説明する。  In the robot device described above, a stereo vision system is mounted on the head unit 203, and three-dimensional distance information of the outside world can be acquired. Next, a flat surface is detected using the three-dimensional distance data obtained by acquiring the surrounding environmental power by a stereo vision system, which is preferably mounted on such a robot device. A series of processes for recognizing a stair based on the result and performing a stair climbing operation using the result of the stair recognition will be described.
図 7は、ロボット装置力ステレオデータから階段昇降動作を発現するまでの処理を 実行するシステムを示す機能ブロック図である。図 7に示すように、ロボット装置は、 3 次元の距離データを取得する距離データ計測手段としてのステレオビジョンシステム (Stereo Vision System) 1と、ステレオビジョンシステム 1からステレオデータ D1が入力 され、このステレオデータ D1から環境内の平面を検出する平面検出器 (Plane Segmentation/Extractor) 2と、平面検出器 2から出力される平面データ D2から階段 を認識する階段認識器 (Stair Recognition) 3と、階段認識部 2により認識された認識 結果である階段データ D4を使用して階段昇降動作をするための動作制御指令 D5 を出力する階段昇降制御器 (Stair Climber) 4とを備える。  FIG. 7 is a functional block diagram showing a system for executing processing from the stereoscopic data of the robot apparatus to the start of the stair climbing operation. As shown in FIG. 7, the robot apparatus receives a stereo vision system (Stereo Vision System) 1 as a distance data measuring means for acquiring three-dimensional distance data, and stereo data D1 from the stereo vision system 1, and receives the stereo data. A plane detector (Plane Segmentation / Extractor) 2 that detects planes in the environment from the data D1, a stair recognizer (Stair Recognition) 3 that recognizes stairs from plane data D2 output from the plane detector 2, and stair recognition A stair climbing controller (Stair Climber) 4 that outputs an operation control command D5 for performing a stair climbing operation using the stair data D4, which is a recognition result recognized by the unit 2.
そして、ロボット装置は、先ず、ステレオビジョンによって外界を観測し、両眼の視差 によって算出される 3次元距離情報であるステレオデータ D1を画像として出力する。 すなわち、人間の両眼に相当する左右 2つのカメラ力もの画像入力を各画素近傍毎 に比較し、その視差カゝら対象までの距離を推定し、 3次元距離情報を画像として出力 (距離画像)する。この距離画像カゝら平面検出器 2によって平面を検出することで、環 境内に存在する複数の平面を認識することができる。更に、階段認識器 3によって、 これら平面力 ロボット装置が昇降可能な平面を抽出し、その平面から階段を認識し 階段データ D4を出力する。そして階段昇降制御器 4が階段データ D4を用いて階段 の昇降動作を実現する動作制御指令 D5を出力する。 Then, the robot apparatus first observes the outside world by stereo vision, and outputs stereo data D1, which is three-dimensional distance information calculated by parallax between the eyes, as an image. That is, it compares the image input of two cameras with the right and left equivalent to both eyes of the human for each pixel neighborhood, estimates the distance to the parallax map target, and outputs 3D distance information as an image (distance image ). By detecting planes with the plane detector 2 using the distance image camera, a plurality of planes existing in the environment can be recognized. Furthermore, by the staircase recognizer 3, These plane force robots extract a plane that can be raised and lowered, recognize stairs from the plane, and output stair data D4. Then, the stair climbing controller 4 outputs an operation control command D5 for realizing the stair climbing operation using the stair data D4.
図 8Aは、ロボット装置 201が外界を撮影している様子を示す模式図である。床面を X— y平面とし、高さ方向を z方向としたとき、図 8Aに示すように、画像入力部 (ステレ ォカメラ)を頭部ユニット 203に有するロボット装置 201の視野範囲は、ロボット装置 2 01の前方の所定範囲となる。  FIG. 8A is a schematic diagram illustrating a state where the robot apparatus 201 is capturing an image of the outside world. Assuming that the floor is an XY plane and the height direction is the z direction, as shown in FIG. 8A, the visual field range of the robot device 201 having the image input unit (stereo camera) in the head unit 203 is as follows. It is a predetermined range in front of 201.
ロボット装置 201は、上述した CPU211において、画像入力装置 251からのカラー 画像及び視差画像と、各ァクチユエータ 350の全ての関節角度等のセンサデータと などが入力されて各種の処理を実行するソフトウェア構成を実現する。  The robot device 201 has a software configuration in which the CPU 211 described above receives a color image and a parallax image from the image input device 251 and sensor data such as all joint angles of each actuator 350, and executes various processes. Realize.
本実施の形態のロボット装置 201におけるソフトウェアは、オブジェクト単位で構成 され、ロボット装置の位置、移動量、周囲の障害物、及び環境地図等を認識し、ロボ ット装置が最終的に取るべき行動についての行動列を出力する各種認識処理等を 行うことができる。なお、ロボット装置の位置を示す座標として、例えば、ランドマーク 等の特定の物体等に基づく所定位置を座標の原点としたワールド基準系のカメラ座 標系(以下、絶対座標ともいう。)と、ロボット装置自身を中心 (座標の原点)としたロボ ット中心座標系(以下、相対座標ともいう。)との 2つの座標を使用する。  The software in the robot device 201 according to the present embodiment is configured for each object, recognizes the position, the movement amount, the surrounding obstacles, the environment map, and the like of the robot device, and performs an action that the robot device should ultimately take. It can perform various kinds of recognition processing to output an action sequence for. Note that, as coordinates indicating the position of the robot apparatus, for example, a camera coordinate system of a world reference system (hereinafter, also referred to as absolute coordinates) having a predetermined position based on a specific object such as a landmark as an origin of the coordinates, Two coordinates are used: the robot center coordinate system (hereinafter also referred to as relative coordinates) with the robot itself as the center (origin of coordinates).
ステレオビジョンシステム 1では、カラー画像及びステレオカメラによる視差画像などの画像データが撮像された時間において、センサデータから割り出した関節角を使用して、ロボット装置 201の中心に固定されたロボット中心座標系を頭部ユニット 203に設けられた画像入力装置 251の座標系へ変換する。この場合、本実施の形態においては、ロボット中心座標系からカメラ座標系への同次変換行列等を導出し、この同次変換行列とこれに対応する 3次元距離データからなる距離画像を出力する。  In the stereo vision system 1, at the time when image data such as a color image and a parallax image from the stereo camera are captured, the robot-centered coordinate system fixed to the center of the robot apparatus 201 is converted into the coordinate system of the image input device 251 provided in the head unit 203, using the joint angles determined from the sensor data. In this case, in the present embodiment, a homogeneous transformation matrix from the robot-centered coordinate system to the camera coordinate system is derived, and a distance image consisting of this homogeneous transformation matrix and the corresponding three-dimensional distance data is output.
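
As a small illustration of the coordinate conversion described above, the sketch below applies a 4×4 homogeneous transformation matrix to 3-D distance data points. The particular matrix (a pitch rotation plus a translation toward the head) and the sample points are arbitrary values chosen for the example, not the transform actually derived from the joint angles.

```python
import numpy as np

def homogeneous_transform(pitch_rad: float, translation) -> np.ndarray:
    """Build a 4x4 homogeneous matrix: rotation about the y (pitch) axis, then translation."""
    c, s = np.cos(pitch_rad), np.sin(pitch_rad)
    T = np.eye(4)
    T[:3, :3] = np.array([[  c, 0.0,   s],
                          [0.0, 1.0, 0.0],
                          [ -s, 0.0,   c]])
    T[:3, 3] = translation
    return T

def transform_points(T: np.ndarray, points: np.ndarray) -> np.ndarray:
    """Apply the homogeneous transform to an (N, 3) array of distance data points."""
    homog = np.hstack([points, np.ones((len(points), 1))])
    return (homog @ T.T)[:, :3]

# Example: camera tilted 30 degrees about the pitch axis, mounted 1.0 m above the robot-centered origin.
T_robot_from_camera = homogeneous_transform(np.deg2rad(-30.0), [0.0, 0.0, 1.0])
camera_points = np.array([[0.0, 0.0, 1.5], [0.1, 0.2, 1.4]])   # points seen by the camera
print(transform_points(T_robot_from_camera, camera_points))
```
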
The robot apparatus of this embodiment can recognize stairs contained in its visual field, and can perform a stair climbing operation using the recognition result (hereinafter referred to as stair data). For the stair climbing operation, the robot apparatus therefore needs to make various judgments about the size of the stairs, such as whether a step is smaller than its own sole and whether the step height is one that it can climb up or down. Here, the case where the sole size of the robot apparatus is as shown in FIG. 8B will be described. That is, as shown in FIG. 8B, taking the forward direction of the robot apparatus 201 as the x-axis direction and the direction parallel to the floor and orthogonal to the x direction as the y direction, the width of both feet in the y direction when the robot apparatus 201 stands upright is feet_base_width, the portion of the sole in front of the ankle (the joint between the leg and the sole) is the sole front width foot_front_size, and the portion behind the ankle is the sole back width foot_back_size.
Stairs that the robot apparatus 201 detects in its environment include, for example, those shown in FIG. 9 and FIG. 10. FIGS. 9A and 10A show the stairs viewed from the front, FIGS. 9B and 10B show the stairs viewed from the side, and FIGS. 9C and 10C show the stairs viewed obliquely.
Here, the surface used by a human or a robot apparatus to climb stairs (the surface on which a foot or movable leg is placed) is called a tread, and the height from one tread to the next tread (the height of one step of the stairs) is called the riser. In the following, the steps of a staircase are counted as the first step, the second step, and so on, starting from the side closest to the ground.
The staircase ST1 shown in FIG. 9 has three steps and a riser of 4 cm; the treads of the first and second steps are 30 cm wide and 10 cm deep, and only the tread of the third and topmost step is 30 cm wide and 21 cm deep. The staircase ST2 shown in FIG. 10 also has three steps and a riser of 3 cm; the treads of the first and second steps are 33 cm wide and 12 cm deep, and only the tread of the third and topmost step is 33 cm wide and 32 cm deep. The results of the robot apparatus recognizing these stairs will be described later.
The plane detector 2 detects a plurality of planes existing in the environment from the distance information (stereo data D1) output by a distance measuring device such as stereo vision, and outputs plane data D2. As the plane detection method, a known plane detection technique using the Hough transform can be applied in addition to the line segment expansion method described later. However, to detect multiple planes, as in a staircase, from distance data containing noise, the planes can be detected more accurately by performing plane detection with, for example, the line segment expansion method described later.
FIGS. 11 and 12 show examples of stair detection results. FIGS. 11 and 12 are examples in which three-dimensional distance data were acquired from images of the stairs shown in FIGS. 9 and 10, respectively, and planes were detected by the plane detection method described later. That is, FIG. 11A is a schematic diagram showing an image obtained by photographing the stairs of FIG. 9, and FIGS. 11B to 11D show the three-dimensional distance data acquired from the image of FIG. 11A. FIG. 12A is a schematic diagram showing an image obtained by photographing the stairs of FIG. 10, and FIGS. 12B to 12D show the three-dimensional distance data acquired from the image of FIG. 12A. As shown in FIGS. 11 and 12, all treads are detected as planes in both cases. FIG. 11B shows an example in which the treads of the first, second, and third steps from the bottom are detected as planes. FIG. 12B also shows that part of the floor surface is successfully detected as another plane.
That is, as shown in FIG. 13A, when planes are detected from a distance image of, for example, the staircase ST2, the regions A to D are detected as planes corresponding to the floor surface and the treads of the first, second, and third steps, as shown in FIG. 13B. The point groups shown within each of the regions A to D indicate the distance data points estimated to constitute the same plane. The plane data D2 detected by the plane detector 2 in this way is input to the stair recognizer 3, which recognizes the shape of the stairs, that is, the size of the treads, the height of the steps (the size of the riser), and so on. As described later in detail, the stair recognizer 3 in this embodiment recognizes as stair data the near-side boundary of the region (polygon) contained in a recognized tread (the side closer to the robot apparatus; Front Edge, hereinafter front edge FE) and the far-side boundary of the tread (the side farther from the robot apparatus; Back Edge, hereinafter back edge BE). The stair climbing controller 4 then controls the stair climbing operation using the stair data.
Next, the stair climbing control method of the robot apparatus will be described in detail. In the following, the stair recognition method of the robot apparatus is described first, the stair climbing operation performed using the recognized stairs second, and finally the plane detection method by the line segment expansion method as a concrete example of plane detection.
FIG. 14 is a functional block diagram showing the stair recognizer shown in FIG. 7. As shown in FIG. 14, the stair recognizer 3 has a stair detector (Stair Extraction) 5 that detects stairs from the plane data D2 output by the plane detector 2, and a stair merger (Stair Merging) 6 that recognizes stairs more accurately by integrating the time-series data of the stair data D3 detected by the stair detector 5, that is, a plurality of stair data D3 detected at different times. The stair data D4 merged by the stair merger is the output of the stair recognizer 3.
The stair detector 5 detects stairs from the plane data D2 input from the plane detector 2. The plane data D2 contains the following pieces of information for each plane, and plane data are input for each of the plurality of planes detected from the images captured by the stereo vision system 1. That is, for each plane the plane data D2 consists of:
1-1: the number of supporting points constituting the plane
1-2: the point at the center of the plane
1-3: the plane parameters (the normal vector and the distance from the origin)
1-4: the boundary of the polygon constituting the plane
Based on this plane data, the robot apparatus selects planes that are substantially horizontal to the surface on which it stands, such as the floor or a tread, and calculates the following information (hereinafter referred to as stair parameters):
2-1: the front edge FE and the back edge BE
2-2: the height of the step
The front edge FE and back edge BE recognized by the robot apparatus indicate the boundaries (lines) of a stair tread as described above: when the robot apparatus faces the polygon, the boundary on the side closer to the robot apparatus (the near-side boundary) is the front edge FE, and the boundary on the side farther from the robot apparatus (the far-side boundary) is the back edge BE. As described later, these can be obtained, for example, by computing the smallest polygon containing all the points constituting the plane and taking its near-side or far-side boundary. The information on the front edge FE and the back edge BE can be, for example, the information on their end points. Information such as the stair width W and the stair length L can also be obtained from the polygon. The step height (riser) can be taken as the height difference between the center points of two planes given in the plane data D2, or as the height difference between the two centroids obtained when the polygons are computed. The riser may also be taken as the height difference between the back edge BE of the preceding step and the front edge FE of the following step.
In this embodiment, in addition to the front edge FE and the back edge BE, the regions adjacent to the left and right of the region sandwiched between the front edge FE and the back edge BE (the safety region) that are estimated with high probability to be traversable are also recognized as margins (regions). How these are obtained is described later. By obtaining these margins, a wider area of the tread can be recognized as presumably traversable. Further, a set of information (stair parameters) such as the number of data points constituting the tread and the information on a single reference point, such as the above-mentioned centroid, can be used as the stair data D3.
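As an illustration, the plane data D2 (items 1-1 to 1-4) and the stair parameters described above might be summarized by data structures such as the following minimal Python sketch. The field names are assumptions introduced here for illustration and are not taken from the specification itself.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

Point3D = Tuple[float, float, float]


@dataclass
class PlaneData:
    """One entry of plane data D2 output by the plane detector (sketch)."""
    num_supporting_points: int                   # 1-1: number of supporting points
    center: Point3D                              # 1-2: point at the center of the plane
    normal: Point3D                              # 1-3: plane parameter (normal vector)
    distance_from_origin: float                  # 1-3: plane parameter (distance from origin)
    boundary_polygon: List[Point3D] = field(default_factory=list)  # 1-4: polygon boundary


@dataclass
class StairData:
    """Stair parameters D3 computed for one tread candidate (sketch)."""
    front_edge: Tuple[Point3D, Point3D]          # 2-1: end points of the front edge FE
    back_edge: Tuple[Point3D, Point3D]           # 2-1: end points of the back edge BE
    height: float                                # 2-2: step height (riser)
    width: float                                 # tread width W
    length: float                                # tread length L
    left_margin: float = 0.0                     # margins adjacent to the safety region
    right_margin: float = 0.0
    num_supporting_points: int = 0
    reference_point: Point3D = (0.0, 0.0, 0.0)   # e.g. centroid used as reference point G
```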
From the above stair data, planes (steps) satisfying the following conditions are extracted:
3-1: the lengths of the front edge FE and the back edge BE are equal to or greater than a predetermined threshold
3-2: the height of the step is equal to or less than a predetermined threshold
In addition, it is preferable to extract planes that simultaneously satisfy further conditions such as:
3-3: the stair width W is equal to or greater than a predetermined threshold
3-4: the stair length L is equal to or greater than a predetermined threshold
A sketch of such a filtering check is given after this list.
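A minimal sketch of this filtering, assuming the StairData structure sketched above and illustrative threshold names (min_edge_len, max_height, min_width, min_length):

```python
import math


def edge_length(edge):
    """Euclidean length of an edge given as a pair of points."""
    return math.dist(edge[0], edge[1])


def is_stair_candidate(stair, min_edge_len, max_height, min_width, min_length):
    """Return True when a tread satisfies conditions 3-1 to 3-4."""
    return (edge_length(stair.front_edge) >= min_edge_len      # 3-1 (front edge)
            and edge_length(stair.back_edge) >= min_edge_len   # 3-1 (back edge)
            and abs(stair.height) <= max_height                # 3-2
            and stair.width >= min_width                       # 3-3
            and stair.length >= min_length)                    # 3-4
```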
FIG. 15 is a flowchart showing the procedure of the stair detection process of the stair detector 5. As shown in FIG. 15, first, based on the plane parameters of the input plane data, it is judged whether the input plane is one on which walking or movement is possible, for example whether it is horizontal with respect to the surface on which the robot stands (step S1). The condition for deciding which planes are horizontal or traversable may be set according to the capabilities of the robot apparatus. For example, when the normal vector of the input plane is n = (n_x, n_y, n_z), the plane can be judged to be horizontal if |sin^-1(n_z)| > min_th. Here min_th is a threshold for judging a horizontal plane; for example, taking the accuracy of the distance data and of the plane detection into account, min_th may be set to 80 degrees, so that a plane inclined by up to plus or minus 10 degrees from the horizontal is judged horizontal and detected. Alternatively, if walking or movement is possible even with an inclination of about plus or minus 30 degrees, planes within that angular range may be extracted. If the plane is judged not to be horizontal in step S1 (step S1: No), a detection failure is output, the process is terminated, and the processing is executed for the next plane data.
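A minimal sketch of the horizontality test of step S1, assuming the normal vector is normalized; min_th = 80 degrees corresponds to accepting planes tilted by at most 10 degrees from the horizontal:

```python
import math


def is_horizontal(normal, min_th_deg=80.0):
    """Step S1: judge a plane horizontal when |asin(n_z)| exceeds min_th.

    `normal` is the unit normal vector (n_x, n_y, n_z) of the plane.
    """
    n_z = max(-1.0, min(1.0, normal[2]))   # clamp for numerical safety
    return abs(math.degrees(math.asin(n_z))) > min_th_deg
```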
Next, when the plane is horizontal (step S1: Yes), processing for recognizing the boundary (shape) of the plane is performed. Here, a polygon containing the input plane is obtained (step S2) by a convex hull, for example using Sklansky's algorithm (J. Sklansky, "Measuring concavity on a rectangular mosaic", IEEE Trans. Comput. 21, 1974, pp. 1355-1364) or Melkman's algorithm (Melkman A., "On-line Construction of the Convex Hull of a Simple Polygon", Information Processing Letters 25, 1987, p. 11), or by smoothing with noise removal. Stair parameters such as the front edge and the back edge are then obtained from the front and rear boundary lines of this polygon (step S3). From the front edge and back edge boundary lines, in this embodiment, the width W and the length L of the plane representing the stair tread are obtained, and it is judged whether these values are larger than predetermined thresholds (step S4). If the width and length of the tread are not equal to or greater than the predetermined thresholds (step S4: No), the plane is judged not to be one on which the robot apparatus can move, and the processing from step S1 is repeated for the next plane data.
If the width and length of the plane are equal to or greater than the predetermined thresholds (step S4: Yes), the plane is judged to be a traversable tread, the left and right margins (Left Margin, Right Margin) are calculated (step S5), and these pieces of information are output as the stair data D3.
Next, the method of obtaining a polygon containing the input plane by the convex hull in step S2 will be described. FIG. 16 is a schematic diagram showing convex polygons: FIG. 16A shows the region containing all the supporting points judged to belong to one input plane (the distance data points considered to lie in a continuous region on the same plane), and FIG. 16B shows the convex polygon obtained from the figure shown in FIG. 16A. The convex polygon shown here can be obtained using the convex hull, the smallest convex set containing a given plane figure (the region containing the supporting points). The point denoted G, as described later, is used when obtaining the tread width W and indicates a point (reference point) such as the centroid of the region containing the supporting points.
Examples of algorithms for obtaining a convex polygon using such a convex hull include Melkman's algorithm and Sklansky's algorithm, as mentioned above. FIG. 17 is a schematic diagram for explaining Melkman's algorithm. As shown in FIG. 17, three points P1, P2, P3 contained in the given figure are taken, a line segment connecting points P1 and P2 is drawn, and straight lines passing through points P1 and P3 and through points P2 and P3 are drawn. This partitions the plane into five regions AR1 to AR5, including the triangle AR4 formed by the three points P1, P2, P3. Then the process of judging in which region the next selected point P4 lies and re-forming the polygon is repeated to update the convex polygon. For example, if P4 lies in region AR1, the region enclosed by segments connecting P1, P2, P4, P3 in that order becomes the updated convex polygon. If point P4 lies in region AR3 or AR5, the convex polygon is updated as the region enclosed by segments connecting P1, P2, P3, P4 in that order, or P1, P4, P2, P3 in that order, respectively. On the other hand, if point P4 lies inside region AR4, that is, inside the convex polygon, the convex polygon is not updated; and if point P4 lies in region AR2, point P3 is removed and the convex polygon is updated as the region enclosed by segments connecting P1, P2, P4 in that order. In this embodiment, a convex polygon can be generated over all the supporting points by considering the region in which each point lies.
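As an illustration of step S2, the following sketch computes the enclosing convex polygon of 2-D points (the supporting points projected onto the tread plane) with the standard monotone-chain method; this is a stand-in for the Melkman/Sklansky processing named in the text, not a reproduction of it.

```python
def convex_hull(points):
    """Return the convex hull of 2-D points (as tuples) in counter-clockwise order.

    Andrew's monotone chain, used here only as a stand-in for the
    Melkman/Sklansky convex-polygon step described in the text.
    """
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z component of (a - o) x (b - o); positive for a left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    # concatenate, dropping the duplicated end points of each chain
    return lower[:-1] + upper[:-1]
```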
FIG. 18 is a schematic diagram for explaining the method of obtaining a polygon by Sklansky's algorithm. The polygon extracted by Sklansky's algorithm is what is called a weakly externally visible polygon; it requires less computation than the Melkman algorithm described above and therefore allows faster calculation.
When obtaining a convex polygon containing a given figure, as shown in FIG. 18A, half lines are drawn from an arbitrary point x on the boundary of the given figure 131 toward a circle 132 containing the figure 131. If, among the half lines drawn from point x to the circle 132, one can be drawn that does not cross the figure 131, this point is taken to be a point constituting the boundary of the convex polygon. On the other hand, as shown in FIG. 18B, when half lines are drawn from some other arbitrary point y on the boundary of the given figure 133 to a circle 134 containing the figure 133, no half line that does not cross the figure 133 can be drawn. In this case, this other point y is not taken to constitute the boundary of the convex polygon. By judging in this way, point by point, whether each point constitutes the boundary of the convex polygon and forming a figure from only the selected points, a figure such as that shown in FIG. 16A is obtained.
By obtaining the convex polygon that contains this figure, the convex polygon of FIG. 16B can be obtained. In this embodiment, taking the accuracy and characteristics of the stereo vision system 1 into account, the description assumes that when a convex polygon is obtained from FIG. 16A, a convex polygon circumscribing the figure of FIG. 16A is obtained, as shown in FIG. 16B; however, in consideration of the accuracy and characteristics of the camera, a convex polygon inscribed in the figure of FIG. 16A may of course be obtained instead. These methods may also be used selectively depending on the degree of inclination of the plane and the surrounding conditions.
When a polygon containing the input plane is obtained by the convex hull in step S2 of FIG. 15, a problem arises for stairs with non-convex polygonal shapes. FIG. 19 is a schematic diagram showing this problem: FIG. 19A shows the input planes, where step0 is a step of non-convex polygonal shape, and FIG. 19B shows the polygonal representation of step0 by the convex hull, which deviates greatly from the desired result in the non-convex portion. As a method for handling such non-convex polygons, a method of obtaining the polygon by gap removal and smoothing by line fitting can be considered. The method of obtaining a polygon containing the input plane by gap removal and smoothing by line fitting will now be described. FIG. 20 is a schematic diagram showing the smoothing: FIG. 20A shows the input polygon, the region containing all the supporting points judged to belong to one input plane (the distance data points considered to lie in a continuous region on the same plane); FIG. 20B shows the polygon obtained by removing discontinuous gaps from the polygon representing the input plane (close gaps) and smoothing it (gaps closed polygon); and FIG. 20C shows the polygon obtained by further smoothing the polygon of FIG. 20B by line fitting (fit line segments) (smoothed polygon). FIG. 21 shows an example program for obtaining a polygon containing the input plane by gap removal and smoothing by line fitting, comprising the close-gaps process that removes discontinuous gaps from the polygon and the fit-line-segments process that further smooths the obtained polygon by line fitting.
First, the gap removal method will be described. Three consecutive vertices of the polygon are selected, and if the middle point lies far from the straight line connecting the two end points, the middle point is removed. This process is continued on the remaining vertices until there are no more points to remove. Next, the line fitting method will be described. Three consecutive vertices of the polygon are selected, and a straight line approximating these three points, together with the error between this line and the three points, is obtained by the least squares method. All the obtained approximating lines and errors are sorted in ascending order of error; when an error is smaller than a certain threshold, the middle point is removed and the positions of the end points are recalculated from the approximating line. This process is continued until there are no more points to remove.
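A minimal sketch of this smoothing, roughly corresponding to the close-gaps and fit-line-segments processing of FIG. 21. The threshold names are illustrative assumptions; for brevity, the fit error of three points is approximated by the distance of the middle point to the chord of its neighbours instead of a full least-squares fit, and the re-projection of the end points onto the fitted line is omitted.

```python
import math


def point_line_distance(p, a, b):
    """Perpendicular distance from point p to the line through a and b (2-D)."""
    dx, dy = b[0] - a[0], b[1] - a[1]
    return abs(dx * (p[1] - a[1]) - dy * (p[0] - a[0])) / (math.hypot(dx, dy) + 1e-12)


def close_gaps(vertices, gap_threshold):
    """Remove a middle vertex that lies far from the chord joining its neighbours."""
    verts = list(vertices)
    removed = True
    while removed and len(verts) > 3:
        removed = False
        for i in range(1, len(verts) - 1):
            if point_line_distance(verts[i], verts[i - 1], verts[i + 1]) > gap_threshold:
                del verts[i]
                removed = True
                break
    return verts


def fit_line_segments(vertices, fit_threshold):
    """Remove the middle vertex of the best-fitting triple while its error is small."""
    verts = list(vertices)
    removed = True
    while removed and len(verts) > 3:
        removed = False
        errors = sorted(
            (point_line_distance(verts[i], verts[i - 1], verts[i + 1]), i)
            for i in range(1, len(verts) - 1)
        )
        if errors and errors[0][0] < fit_threshold:
            del verts[errors[0][1]]   # remove the middle point of the best fit
            removed = True
    return verts
```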
Next, the processing in step S3 of FIG. 15 will be described. First, the stair parameters described above are calculated from the obtained polygon (FIG. 16B, FIG. 20C). FIG. 22 is a schematic diagram for explaining the method of calculating the stair parameters. As shown in FIG. 22A, assume that the obtained polygon 140 is the region enclosed by points 141 to 147. Here, seen from the robot apparatus 201, the line segment constituting the near-side boundary of the polygon 140 is the front edge FE, and the line segment constituting the far-side boundary is the back edge BE.
The width W of the stair tread can be taken as W = d1 + d2, where d1 is the length of the line segment connecting the center point C_FE of the front edge FE and the reference point G, and d2 is the length of the line segment connecting the center point C_BE of the back edge BE and the reference point G.
Here, the reference point G need only be approximately at the center of the plane forming the tread; for example, it can be the center point of all the supporting points, the centroid of the polygon 140, or the centroid of the safety region 152 shown in FIG. 22B, which is formed by connecting the end points of the front edge FE and the back edge BE.
Based on the front edge FE, the back edge BE, and the reference point G obtained in this way, the length L and the width W of the step are obtained and the processing of step S4 is executed. The length L of the step can be, for example, the shorter of the lengths of the front edge FE and the back edge BE, or the longer of the front edge FE including the left and right margins described below and the back edge BE including the left and right margins.
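A minimal sketch of this step-S3/S4 parameter computation, assuming the front edge FE and back edge BE are given as pairs of 2-D end points and G as a 2-D reference point:

```python
import math


def midpoint(a, b):
    return ((a[0] + b[0]) / 2.0, (a[1] + b[1]) / 2.0)


def tread_width_and_length(front_edge, back_edge, ref_point):
    """Width W = d1 + d2 and length L (here: shorter edge) of a tread.

    front_edge / back_edge are (end_point, end_point) pairs; ref_point is G.
    """
    c_fe = midpoint(*front_edge)
    c_be = midpoint(*back_edge)
    d1 = math.dist(c_fe, ref_point)       # center of FE to reference point G
    d2 = math.dist(c_be, ref_point)       # center of BE to reference point G
    width = d1 + d2
    fe_len = math.dist(*front_edge)
    be_len = math.dist(*back_edge)
    length = min(fe_len, be_len)          # one of the options described above
    return width, length
```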
Next, the margin calculation method in step S5 will be described. FIG. 23 is a schematic diagram for explaining the finally recognized tread and stair parameters. In this embodiment, as shown in FIGS. 22B and 23, margins M1 and M2 are provided at the left and right ends of the safety region 152, and the region 151 including these left and right margins M1 and M2 is finally recognized as the tread. For the left and right margins M1 and M2, when the polygon protrudes outside the safety region 152 defined by the front edge FE and the back edge BE, those protruding points are first selected. In FIG. 22A, for example, when obtaining the right margin M2, these are points 142, 143, and 146. When obtaining the right margin M2 adjacent to the right side of the safety region 152, the point 142, which is the farthest from the safety region 152, is selected, and perpendiculars are dropped from this point 142 to the front edge FE and the back edge BE. The region 151 enclosed by these perpendiculars, the front edge FE, and the back edge BE is then recognized as the tread. As a way of obtaining this margin, it is also possible simply to draw a line segment passing through the point 142 and intersecting the front edge FE or the back edge BE.
Here, as shown in FIG. 22B, the length of the left margin M1 along the same line as the front edge FE is lmf, and the length of the left margin M1 along the same line as the back edge BE is lbm. Similarly, the lengths of the right margin M2 along the same lines as the front edge FE and the back edge BE are rfm and rbm, respectively.
The effect of recognizing stairs by obtaining the front edge FE and the back edge BE from the polygon in this way will now be explained. FIGS. 24A and 24B are schematic diagrams showing two types of stairs. FIG. 24A shows stairs whose treads are rectangular, as in FIGS. 9 and 10, whereas FIG. 24B shows spiral stairs. In the case of spiral stairs as in FIG. 24B, the back edge BE is not parallel to the front edge FE. Therefore, an algorithm that, for example, simply extracts a rectangular region from the detected plane may not be applicable. By obtaining a polygon from the detected plane and then obtaining the front edge FE and the back edge BE, as in this embodiment, the robot apparatus can perform the climbing operation even on such spiral stairs.
Next, the stair merger 6 shown in FIG. 14 will be described. The stair merger 6 takes as input the stair data (stair parameters) D3 detected by the stair detector 5, and estimates more accurate and broader stair information by integrating these stair data D3 over time. For example, when the visual field of the robot apparatus is narrow, the entire staircase may not be recognizable at once. In such a case, a pair of spatially overlapping steps is searched for among old stair data, such as those from the previous frame, and new stair data, such as those from the current frame, and the overlapping steps are merged to define a new virtual step. By continuing this operation until there are no more overlapping steps, accurate stairs can be recognized.
FIG. 25 is a flowchart showing the stair merging process in the stair merger 6. First, the current stair data (New Stairs) and the old stair data (Old Stairs) are taken as input (step S11), and all the new stair data and old stair data are combined into one set (union) (step S12). In this combined stair data set, spatially overlapping stair data are searched for (step S13); if there is an overlapping pair (step S14: Yes), those steps are merged and registered in the stair data set (step S15). The processing of steps S13 and S14 is continued until there are no more spatially overlapping pairs of steps (step S14: No), and the finally updated stair data set is output as the stair data D4.
FIG. 26 is a schematic diagram for explaining the process of merging overlapping stair data in step S13. FIG. 26 shows spatially overlapping stair data ST11 and ST12. To judge whether they overlap spatially, for example, the height difference (distance) between the reference points G of the two stair data ST11 and ST12 and the size of the overlapping area of the tread regions including the left and right margins can be used. That is, if the height difference between the centroids G11 and G12 of the two steps is equal to or less than a threshold (maxdz), and the size of the overlapping area is equal to or greater than a threshold (minarea), the treads indicated by these two stair data can be judged to overlap. In that case, as shown in the lower part of FIG. 26, the stair data ST11 and ST12 are merged into stair data ST13 with centroid G13.
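A minimal sketch of this overlap test, assuming each stair datum carries a reference point (with a z height) and a 2-D tread polygon including the margins, and that a polygon intersection-area routine is supplied by the caller (a standard polygon-clipping helper, not shown here):

```python
def treads_overlap(stair_a, stair_b, maxdz, minarea, intersection_area):
    """Step S13: return True when two treads are judged to be the same step.

    maxdz  : maximum allowed height difference between the reference points G
    minarea: minimum required overlapping area of the tread regions (with margins)
    intersection_area: callable(poly_a, poly_b) -> overlapping area
    """
    dz = abs(stair_a.reference_point[2] - stair_b.reference_point[2])
    if dz > maxdz:
        return False
    overlap = intersection_area(stair_a.tread_polygon, stair_b.tread_polygon)
    return overlap >= minarea
```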
In the merging, the region of the outer frame containing the stair data ST11 and ST12 is taken as the merged step ST13, the region containing the safety regions of the stair data ST11 and ST12 before merging (excluding their left and right margins) is taken as the new merged safety region 165, and the regions obtained by removing this safety region 165 from the stair data ST13 are taken as the margins M1 and M2. The merged front edge FE and back edge BE can be obtained from the merged safety region 165.
That is, the end points of the front edge FE of the merged stair data ST13 are determined by comparing the left and right end points of the front edge FE of the stair data ST11 and the front edge FE of the stair data ST12: the right end point 163 is the one lying further to the right, and the left end point is the one lying further to the left. The position of the front edge FE line is the line position closer to the robot apparatus (the near side) of the front edge FE of the stair data ST11 and the front edge FE of the stair data ST12. Similarly, for the back edge BE, the position of the line lying further away is selected, and its left and right end points 161 and 162 are selected so that it extends further to the left and right.
The merging method is not limited to this. In this embodiment, the merging is performed so that the rectangular region determined by the front edge FE and the back edge BE and the merged data ST13 both become as large as possible, taking the visual field of the robot apparatus and the like into account. However, when the visual field is sufficiently wide or the accuracy of the distance data is sufficiently high, for example, the region obtained simply by combining the two stair data may be used as the merged stair data. The merged reference point G can be obtained by taking a weighted average according to the ratio of the numbers of supporting points contained in the stair data ST11 and the stair data ST12.
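A minimal sketch of how two overlapping stair data might be merged along these lines, for the simplified case where both edges are roughly parallel to the robot's y axis (so "closer to the robot" means a smaller x value and "wider" means the extreme y values); the weighted reference point follows the supporting-point ratio. This is an illustrative simplification, not the full merging rule.

```python
def merge_edges(edge_a, edge_b, pick_near):
    """Merge two roughly parallel edges given as ((x, y), (x, y)) pairs.

    pick_near=True keeps the line closer to the robot (front edge FE);
    pick_near=False keeps the farther line (back edge BE). The end points
    are spread to the extreme left/right (y) values of both edges.
    """
    xs = [p[0] for p in edge_a + edge_b]
    ys = [p[1] for p in edge_a + edge_b]
    x = min(xs) if pick_near else max(xs)
    return ((x, min(ys)), (x, max(ys)))


def merge_reference_points(g_a, n_a, g_b, n_b):
    """Weighted average of the reference points by supporting-point counts."""
    w_a, w_b = n_a / (n_a + n_b), n_b / (n_a + n_b)
    return tuple(w_a * a + w_b * b for a, b in zip(g_a, g_b))
```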
Next, the stair climbing controller 4 shown in FIG. 7 will be described. The stair climbing controller 4 uses the stair data D4 detected and merged by the stair detector and the stair merger 6 to control the robot apparatus so that it actually climbs up and down the stairs. This climbing control also includes the operation of searching for stairs.
The stair climbing operation realized in this embodiment can be constructed as the following five state machines.
4-1: Search operation
To acquire distance images with the stereo vision system mounted on the head of the robot apparatus, the robot swings its head and looks around to gather information about the environment.
4-2: Align operation
The robot faces the stairs squarely and moves to a position at a fixed, predetermined distance from them. FIG. 27 is a diagram for explaining the align operation. In FIG. 27, the region 170 is assumed to be the first tread of the stairs recognized by the robot apparatus. In the align operation, the robot moves to a target position (hereinafter referred to as the align position) separated from the center point of the front edge FE of the tread 170, in the direction orthogonal to it, by a predetermined distance ad (align_distance). When the current position of the robot apparatus is point 171 and the target align position is point 172, the robot apparatus starts moving toward the target align position 172 if the distance between them is equal to or greater than a predetermined threshold max_d, or if the angular difference between the direction orthogonal to the front edge FE and the direction the robot apparatus faces is equal to or greater than a predetermined threshold max_a; when these conditions are satisfied, the align operation is judged to be complete. These thresholds can be, for example, max_d = 3 cm and max_a = 2 degrees.
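A minimal sketch of the align-completion test, with max_d = 3 cm and max_a = 2 degrees as in the text; the 2-D position and heading variables are illustrative assumptions.

```python
import math


def align_complete(robot_xy, robot_heading_deg, target_xy, target_heading_deg,
                   max_d=0.03, max_a=2.0):
    """Return True when the robot is at the align position and facing the stairs.

    robot_xy / target_xy are 2-D positions; target_heading_deg is the direction
    orthogonal to the front edge FE. max_d is in metres (3 cm), max_a in degrees.
    """
    dist = math.dist(robot_xy, target_xy)
    # wrap the heading error into the range [-180, 180) before taking its magnitude
    dang = abs((robot_heading_deg - target_heading_deg + 180.0) % 360.0 - 180.0)
    return dist < max_d and dang < max_a
```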
4-3: Approach operation
In the approach operation, the robot apparatus moves to a point just before the stairs. FIG. 28 is a schematic diagram for explaining the approach operation. As shown in FIG. 28, the robot apparatus 201, which has finished the align operation by moving to the align position 172, the target position facing the stairs squarely at the distance align_distance, then moves, in order to climb the stairs, to a target position (hereinafter referred to as the approach position) at which it squarely faces the center point C_FE of the front edge FE of the tread 170 and the distance to it is a predetermined value ax (approach_x).
4-4: Climb operation
The robot climbs up or down the stairs based on the stair data obtained by stair recognition. When it has moved to the next tread (step) and the tread of the following step has been observed, it continues the climbing or descending motion. By repeating this operation until there is no next step, the stair climbing operation is realized.
4-5: Finish operation
When the climb operation has been used to climb the stairs, the robot confirms that it is on the top step and moves to the center of the top step. When the climb operation has been used to descend, or, in the case of stairs such as ST1 shown in FIG. 9, when the robot has descended by, for example, changing its direction, the stair climbing operation is terminated by recognizing that the robot has reached the floor.
The above stair climbing process will now be described in more detail. FIG. 29 is a flowchart showing the procedure of the stair climbing operation. As shown in FIG. 29, when the stair climbing control process is started, the stairs are searched for by the Search, Align, and Approach operations: the robot moves (aligns) to a predetermined position facing the found stairs and then executes an approach operation of moving close to the first step of the stairs (step S21). If this search-align-approach operation succeeds (step S22: Yes), the stairs are climbed by the method described later (step S23) and success is output. If the approach fails (step S22: No), failure is output and the process ends. In this case, the processing of step S21 may be repeated once more, for example.
The search-align-approach sequence in step S21 is the same as the control normally used by the robot apparatus to reach an object or a destination. A concrete example is the following method. FIG. 30 is a flowchart showing the search-align-approach processing method.
As shown in FIG. 30, when the search-align-approach is started, the search operation (1) is executed (step S31). In the search operation (1), the robot swings its head to gather information over as wide a range as possible. From the result, it is judged whether there are climbable stairs in the surroundings (step S32). Here, using the height n_z of the plane constituting the first tread among the detected stairs, the stairs are judged to be climbable when the height satisfies step_min_z < n_z < step_max_z. If there are climbable stairs (step S32: Yes), an align operation of moving to the predetermined distance (align_distance) from the stairs is executed in order to re-recognize the stairs from close by (step S33), and the stairs to be climbed are recognized again (step S34). The operation of this step S34 is the search operation (2).
It is then confirmed again whether the stairs are climbable (step S35). If the search operation (2) has succeeded, it is confirmed whether the robot has finished moving to the align position facing the re-recognized stairs at the predetermined distance, that is, whether the align operation of step S33 has succeeded (step S36). If there are climbable stairs and the robot is aligned (steps S35, S36: Yes), an approach operation of advancing to the front edge of the first step is executed (step S37). On the other hand, if there are no climbable stairs in step S35, the process returns to step S31, and if the align operation has not succeeded in step S36, the processing from step S33 is repeated.
Next, the stair climbing operation in step S23 will be described. The stair climbing operation includes climbing operation process 1, used when the robot apparatus can recognize from the current tread the step one above or below (hereinafter referred to as the next step); climbing operation process 2, used when the next step cannot be recognized from the current surface but a step two or more steps above or below (hereinafter referred to as a step two or more steps ahead) can be recognized; and climbing operation process 3, used when the treads of a plurality of steps can be recognized. FIGS. 31, 33, and 34 are flowcharts showing the processing methods of climbing operation processes 1 to 3, respectively. In the following description, the step currently being traversed (including the floor) is denoted step-0, the next step step-1, the step after that step-2, and the step m steps ahead step-m.
First, the climbing operation process for the case where the tread of the next step (step-1) can be observed and recognized will be described. As shown in FIG. 31, when climbing operation process 1 is started, the operation of climbing up or down the stairs (climb operation (1)) is executed (step S41). In this climb operation (1), since the step height n_z has already been recognized in step S32 described above, the climbing mode is switched by judging the sign of this height n_z: if n_z < 0 the operation is a descent, and if n_z > 0 it is an ascent. As described later, the values of the control parameters used for the climb operation differ between climbing and descending; that is, the ascending and descending operations can be switched simply by switching the control parameters.
It is then judged whether the climb operation (1) has succeeded (step S42); if it has succeeded (step S42: Yes), the search operation (3) is executed (step S43). The search operation (3) is a process of moving the head unit on which the stereo vision system is mounted, acquiring surrounding distance data, and detecting the next step, and is almost the same as the search operation (2) and the like.
Then, as in step S35, if the search operation (3) succeeds (step S44: Yes), the processing of step S41 is repeated. If no stairs are found (step S44: No), it is judged that the robot has finished climbing up or down the stairs, a finish operation such as moving to a predetermined position, for example the center of the stair surface, is executed (step S45), and the process ends. Next, the climb operation (1) of step S41 will be described in detail. FIG. 32 is a schematic diagram showing the stair surfaces that the robot apparatus recognizes or is expected to recognize. As shown in FIG. 32, assume, for example, that the soles 121L/R of the robot apparatus currently moving are on the tread 181. In FIG. 32, the safety region sandwiched between the front edge FE and the back edge BE and the left and right margins M1 and M2 adjacent to it are recognized as the tread. At this time, the robot apparatus can recognize the tread 182 of the next step (step-1). Here, assume that a gap 184 exists between the tread 181 and the tread 182 because of the riser or the like. In the climbing operation (1), it is judged whether it is possible to move (climb operation) from the tread 181 of the current step (step-0) to the tread 182 of the next step (step-1); for example, a move is judged possible when the following criteria are satisfied.
5-1: the robot is sufficiently close to the front edge FE of the tread 182 of the next step (step-1), and the angular deviation is equal to or less than a predetermined threshold
5-2: the tread 182 of the next step (step-1) is sufficiently large
5-3: the distance front_x from the front edge FE to the rear end of the sole 121L/R is larger than the control parameter front_x_limit of the specified climbing mode
5-4: the distance back_x from the back edge BE to the front end of the sole 121L/R is larger than the control parameter back_x_limit of the climbing mode
Here, when the tread 183 of the step after the next (step-2) can also be observed, it can be judged from the difference (z2 - z1) between the height z1 at the reference point of the tread 182 of the next step (step-1) and the height z2 at the reference point of the tread 183 of the following step (step-2) whether the climb operation from the tread 182 of step-1 to the tread 183 of step-2 is an ascent or a descent. If the tread 183 of the step two steps ahead (step-2) cannot be recognized, the current ascending or descending state may be maintained. A sketch of this movability check is given below.
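A minimal sketch of the movability check of criteria 5-1 to 5-4 and of the ascend/descend decision from the reference-point heights. The parameter names front_x_limit and back_x_limit follow the text; the remaining names are illustrative assumptions.

```python
def can_climb_to_next(dist_to_fe, angle_error_deg, tread_width, tread_length,
                      front_x, back_x, params):
    """Criteria 5-1 to 5-4 for moving from the current tread to the next one.

    front_x: distance from the next tread's front edge FE to the rear end of the sole
    back_x : distance from its back edge BE to the front end of the sole
    params : dict with max_dist_to_fe, max_angle_deg, min_width, min_length,
             front_x_limit and back_x_limit for the current climbing mode
    """
    return (dist_to_fe <= params["max_dist_to_fe"]          # 5-1: close to FE
            and angle_error_deg <= params["max_angle_deg"]   # 5-1: small angle error
            and tread_width >= params["min_width"]           # 5-2: tread large enough
            and tread_length >= params["min_length"]         # 5-2
            and front_x > params["front_x_limit"]             # 5-3
            and back_x > params["back_x_limit"])              # 5-4


def next_move_is_ascent(z1, z2, currently_ascending):
    """Decide ascend/descend for the move from step-1 to step-2.

    z1, z2 are the reference-point heights of step-1 and step-2; when step-2
    has not been observed (z2 is None) the current state is kept.
    """
    if z2 is None:
        return currently_ascending
    return (z2 - z1) > 0.0
```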
When it is judged from criteria 5-1 to 5-4 above that the climb operation is possible, the climb operation of ascending or descending the step is executed. On the tread 181 of the current step (step-0), the robot apparatus is aligned with the back edge BE0 of the tread 181. When the gap 184 between the tread 181 and the tread 182 is large, the robot performs an operation of aligning with the front edge FE1 of the tread 182 of the next step (step-1), then moves onto the next step and aligns with its back edge BE1. In the next climb operation it aligns with the front edge FE2 of the tread 183 of the following step (step-2), moves onto the tread 183, and aligns with its back edge BE2. That is, for example, one climb operation consists of aligning with the front edge FE of the tread of the next step, performing the ascending or descending motion, and aligning with the back edge BE of the tread onto which the robot has moved.
When the gap 184 is small, the back edge BE0 of the tread 181 of the current step (step-0) and the front edge FE1 of the next tread 182 can be regarded as nearly coincident, and the align operation may be performed with respect to only one of the edges. That is, the robot may align with the back edge BE0 of the current tread 181, move onto the next tread 182, and align with its back edge BE1; or it may align with the front edge FE1 of the next tread 182, move onto the next tread, and then align with the front edge FE2 of the tread 183 of the following step. In this case, the climb operation is a process in which aligning with the front edge FE of the next step is omitted, and the robot moves to the next step and performs the align operation with respect to the back edge BE of the tread onto which it has moved.
The climbing operation processing 1 described above can be applied when the robot apparatus can observe the tread of the next step (step-1) to which it can move while it is going up or down the stairs. A bipedal walking robot apparatus, for example, must therefore carry a stereo vision system 1 that allows it to look down at its own feet.
Next, climbing operation processing 2 and 3 will be described. Unlike climbing operation processing 1, in which the tread of the next step can be observed and recognized, these apply when the tread of the next step (step-1) cannot be observed or recognized from the tread of the current step (step-0), for example because of restrictions on the movable angle of the joint between the head unit and the trunk unit of the robot apparatus, while the tread of the step two steps ahead (step-2) or of a step further ahead (step-m) can be recognized. First, the case where the tread of the step two steps ahead (step-2) can be recognized will be described. As shown in FIG. 32, when climbing operation processing 2 is started, the search operation (4) is executed in the same way as described above (step S51). In this search operation (4), the tread of the step two steps ahead (step-2) is observed and recognized by the stereo vision system 1. The search operation (4) is the same processing as step S43 and the like described above, except that the tread of the step two steps ahead (step-2) is recognized.
Then, the climb operation (2) is executed (step S52). The climb operation (2) is the same operation as the climb operation (1). In this case as well, the switching between going up and going down the stairs in the climb operation is determined from the height z1 of the tread of the next step (step-1). The tread of the next step (step-1) here is a tread that was observed from a tread on which the robot was located earlier, before reaching the tread of the current step (step-0).
If the climb operation succeeds (step S53: Yes) and the stair two steps ahead (step-2) has been detected by the search in step S51 (step S54: Yes), it is set as the new next step (step-1 = step-2) (step S56) and the processing from step S51 is repeated. If the tread of the step two steps ahead (step-2) has not been detected in the search operation (4) of step S51 (step S54: No), the finish operation is executed (step S55) and the processing ends.
The description here assumes that the tread of the step two steps ahead can be observed, but the same processing can be applied when the tread of a step three or more steps ahead can be observed.
Next, the climbing operation for the case where the treads up to a plurality of steps ahead (hereinafter, m steps ahead) can be observed and recognized will be described as climbing operation 3.
As shown in FIG. 34, when the climbing operation is performed in a state where the treads of steps 1 to n ahead have been observed and recognized, the search operation (5) is executed first (step S61). The search operation (5) is basically the same as step S51, except that the treads up to the recognizable step m steps ahead (step-m) are the observation targets.
Then, the climb operation (3) is executed for k steps (step S62). In the climb operation (3) as well, the climbing mode can be determined from the differences between the heights of the plurality of treads observed so far. That is, if the difference z_i - z_(i-1) between the height of the i-th tread and that of the (i-1)-th tread is negative, the motion is a descent of the stairs, and if it is zero or positive, the operation mode is an ascent of the stairs. The information on the treads to which the robot moves in this climb operation (3) is data that was observed m steps before the current step.
Then, if the climbing operation succeeds (step S63: Yes), it is determined whether stair data beyond the current tread has been observed, that is, whether m + n > k holds (step S64). If m + n > k (step S64: Yes), step-(k+1) to step-(n+m) are relabelled as step-1 to step-(m+n-k) (n = m), the data are updated accordingly (step S66), and the processing from step S61 is repeated. Otherwise (step S64: No), there is no tread to move to next, so the finish operation is executed (step S65) and the processing ends.
As described above, in the stair climbing operation processing 1 to 3, the stairs can be climbed and descended by the same procedure merely by changing the control parameters used for the ascending motion and for the descending motion of the climb operation. The control parameters used for the stair climbing operation serve to regulate the position of the sole of the robot apparatus relative to the current tread.
FIG. 35A is a diagram for explaining the relationship between the tread and the sole as recognized by the robot apparatus, and FIG. 35B is a diagram showing an example of the control parameters used for the climb operation.
The control parameters shown in FIG. 35A are as follows.

step_min_z: minimum height difference (riser) between the current step and the next step that can be negotiated
step_max_z: maximum height difference (riser) between the current step and the next step that can be negotiated
ad (align_distance): distance between the front edge FE and the robot apparatus at the align position
ax (approach_x): distance between the front edge FE and the robot apparatus at the approach position
front_x_limit: limit value (minimal x-value) of the distance between the front edge FE of the tread and the rear end of the sole 121
back_x_limit: limit value (maximal x-value) of the distance between the back edge BE of the tread and the front end of the sole 121
back_x_desired: desired value of the distance between the back edge BE and the front end of the sole 121
When climbing the stairs, a riser smaller than step_min_z is not regarded as a step (a stair), and when the riser is larger than step_max_z it is judged that the stairs cannot be climbed. Similarly, when descending the stairs, a drop smaller than step_min_z is not regarded as a stair, and when the drop is larger than step_max_z it is judged that the stairs cannot be descended.
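A minimal sketch of this feasibility check follows. Treating the riser or drop as the magnitude of the height difference between the current and next treads is an assumption of the sketch; the parameter names follow FIG. 35B, while the function and return values are illustrative.

```python
def classify_step(z_current, z_next, step_min_z, step_max_z):
    """Classify the step from the current tread to the next one.

    Returns "flat" when the height difference is too small to count as a
    stair, "climbable" when it lies within [step_min_z, step_max_z], and
    "impassable" when it exceeds step_max_z.
    """
    riser = abs(z_next - z_current)   # assumption: magnitude of the difference
    if riser < step_min_z:
        return "flat"                 # not regarded as a step
    if riser > step_max_z:
        return "impassable"           # the climb operation cannot negotiate it
    return "climbable"
```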
align_distance is a parameter used only when the align operation is performed; it is used when the stair climbing operation processing that executes the climbing/descending motion is started, that is, when the climb onto the first step is performed. Similarly, approach_x is a parameter used only when the approach operation is performed, and is likewise used when the stair climbing operation processing that executes the climbing/descending motion is started.
front_x_limit and back_x_limit define the relationship between the tread recognized by the robot apparatus and the sole of the robot apparatus. If the distance between the front edge FE or back edge BE of the tread and the corresponding end of the sole, that is, the margin of the tread that would remain after moving onto it, is smaller than these values, it is judged that the robot cannot move onto that tread, or that even if it could, the subsequent climbing operation would be impossible. In the ascending motion, the fact that both front_x_limit and back_x_limit take negative values means that the tread is allowed to be smaller than the sole. That is, in the ascending motion a tread can be judged to be reachable even when it is smaller than the sole.
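The following illustrative helper expresses this margin test in terms of the quantities used in conditions 5-3 and 5-4 above; the function name is an assumption.

```python
def tread_is_reachable(front_x, back_x, front_x_limit, back_x_limit):
    """Check the sole-versus-tread margins against the limits of the mode.

    front_x: distance from the front edge FE of the tread to the rear end
    of the sole; back_x: distance from the back edge BE to the front end
    of the sole.  Negative limits (ascending mode) allow the sole to
    overhang the tread.
    """
    return front_x > front_x_limit and back_x > back_x_limit
```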
back_x_desired indicates the distance between the back edge BE and the front end of the sole at the position with which the robot apparatus wants to align relative to the back edge BE of the current tread. As shown in FIG. 35B, when ascending, back_x_desired normally places the sole short of the back edge BE, in the present embodiment 15 mm in front of the back edge BE, whereas when descending it places the sole so that it protrudes beyond the back edge BE, in the present embodiment by 5 mm. This is because the ascending motion requires a certain distance before the foot is lifted onto the next step, whereas the descending motion needs no such distance, and a position protruding beyond the tread makes it easier to observe and recognize the tread of the next step and of the steps beyond.
FIG. 36 and FIG. 37 are traces of photographs of the robot apparatus actually performing the climbing operation using the control parameters shown in FIG. 35. FIG. 36 shows the robot apparatus climbing the stairs. In numerical order from the top, the frames show the execution of the search operation (1) of step S31 (No. 1), the align operation of step S33 (No. 2), the search operation (2) of step S32 (No. 3), the approach operation of step S37 (No. 4), the search operation (4) of step S51 (No. 5), the climb operation (2) of step S52 (No. 6), the continuation of the climb operation (2) of step S52 in which the robot aligns with the back edge BE of the current tread (No. 7), the search operation (4) of step S51 (No. 8), and so on; when the search operation (4) no longer observes a tread of a next step (No. 17), the finish operation that ends the stair climbing is performed (No. 18).
FIG. 37 shows the descending motion. As in the climbing motion of FIG. 36, the search operation (No. 1, No. 4, No. 7, No. 10, No. 13, No. 16) and the climb operation including the align operation (No. 5, No. 6, No. 8, No. 9, No. 11, No. 12, No. 14, No. 15) are repeated, and when the tread of a next step is no longer observed, the finish operation is performed (No. 18) and the climbing operation ends.
Next, modifications of the present embodiment will be described. The above description dealt with the operations of climbing and descending stairs, but when the algorithm according to the control method of the present embodiment is applied, movement is possible not only over stairs consisting of a plurality of steps, but also over a raised portion consisting of a single step or over a single recess, provided that the height difference is not larger than a predetermined value.
First, the operation of climbing onto a single raised portion will be described. FIG. 38 is a diagram showing the relationship between a single raised portion and the sole of the robot apparatus. In FIG. 38, reference numeral 191 denotes a raised portion whose height above the floor surface (z = 0) is z1 = 30; here, the case where the robot apparatus crosses the raised portion 191, which becomes the next step (step-1), from the lower side to the upper side of the drawing will be described. The height of the movement surface on the lower side of the drawing is z0 = 0 and that of the movement surface on the upper side of the drawing is z2 = 0. When the robot is currently moving on the surface of height z0, the movement onto the next raised portion 191 is judged to be an ascent because z1 - z0 = 30 > 0, and the movement from the raised portion 191 to the next area can be judged to be a descent because z2 - z1 = -30 < 0. In accordance with this judgment, the values of the control parameters described above are changed in the climb operation. When the control parameters shown in FIG. 35B are used, front_x_limit and back_x_limit for the ascending motion are both negative, which means that the robot is judged to be able to move onto the raised portion 191 even when its sole 121 protrudes beyond it, as shown in FIG. 38.
Next, the operation of descending into a single depression (recess) will be described. FIG. 39 is a diagram showing the relationship between a single recess and the sole of the robot apparatus. In FIG. 39, reference numeral 192 denotes a recess whose height relative to the floor surface (z = 0) is z1 = -30; here, the case where the robot apparatus crosses the recess 192, which becomes the next step (step-1), from the lower side to the upper side of the drawing will be described. The height of the movement surface on the lower side of the drawing is z0 = 0 and that of the movement surface on the upper side of the drawing is z2 = 0. When the robot is currently moving on the surface of height z0, the movement into the next recess 192 is judged to be a descent because z1 - z0 = -30 < 0, and the movement from the recess 192 to the next area can be judged to be an ascent because z2 - z1 = 30 > 0. Therefore, as with the raised portion 191, the values of the control parameters described above are changed in the climb operation in accordance with this judgment. When the control parameters shown in FIG. 35B are used, front_x_limit and back_x_limit for the descending motion are both positive, so the robot is judged to be able to move only when its sole 121 is smaller than the recess 192, as shown in FIG. 39.
In the present embodiment, planes that can be judged to be traversable, for example horizontal planes, are extracted from the detected planes, and the tread of a stair is recognized as a polygon containing such a region. The stair climbing operation is then performed using tread information such as the front edge FE and back edge BE of the polygon, together with stair information including the height of the tread above the floor. In the climbing operation, a search operation is performed for the tread onto which the robot is to move, an align operation is performed with respect to the front edge FE of the tread found by the search or the back edge BE of the current movement surface, and an ascent or a descent is determined from the difference in height between the next movement surface and the current movement surface so that the control parameters are switched accordingly. This makes it possible to climb not only ordinary stairs with rectangular treads but also spiral stairs and the like, and both the ascending and the descending motion can be executed by the same procedure merely by changing the control parameters. Consequently, not only stairs but also a single raised portion, a single recess and the like can be negotiated by the same control method. Furthermore, in stair recognition the recognized stairs are integrated over time, so that stair information can be recognized over a wide area even by a robot apparatus whose field of view is limited, for example because the stairs are large relative to the size of the robot apparatus or because of restrictions on the position of the stereo vision system mounted on the robot apparatus. In addition, when the robot climbs or descends using this stair information, the climbing operation can be performed in the same way by using stair information observed and recognized in the past, even if the tread of the next step cannot currently be observed or recognized, again because of restrictions on the position of the stereo vision system or the like.
The plane detection apparatus in this modification uses the line segment expansion method, so that a plurality of planes can be detected reliably even when, as with stairs, several planes exist in the field of view rather than a single dominant plane; moreover, in the line segment extraction performed when detecting planes, a plane detection result that is robust against measurement noise is obtained by fitting line segments adaptively according to the distribution of the distance data points.

FIG. 40 is a functional block diagram showing the plane detection apparatus in this modification. As shown in FIG. 40, the plane detection apparatus 100 has a stereo vision system 1 serving as distance data measuring means for acquiring three-dimensional distance data, and a plane detection unit 2 that detects planes existing in the distance image consisting of the three-dimensional distance data by the line segment expansion method. The plane detection unit 2 has a line segment extraction unit 2a that selects, from the distance data points constituting the image, groups of distance data points estimated to lie on the same plane and extracts a line segment for each group, and a region extension unit 2b that detects one or more plane regions existing in the image from the group consisting of all the line segments extracted by the line segment extraction unit 2a. The region extension unit 2b selects from the line segment group any three line segments estimated to lie on the same plane and obtains a reference plane from them. It then judges whether a line segment adjacent to the three selected line segments belongs to the same plane as the reference plane; when it judges that it does, it updates the reference plane using that line segment as a region extension line segment and extends the region of the reference plane.
In each data string corresponding to a column or a row of the distance image, the line segment extraction unit 2a extracts a group of distance data points estimated to lie on the same plane in three-dimensional space, and generates one or more line segments from this group according to the distribution of the distance data points. That is, when the distribution is judged to be biased, the data point group is judged not to lie on a single plane and is divided, and for each of the divided data point groups it is judged again whether the distribution is biased; this is repeated, and when the distribution shows no bias a line segment is generated from that data point group. The above processing is performed for all data strings, and the generated line segment group D11 is output to the region extension unit 2b.
From the line segment group D11, the region extension unit 2b selects three line segments estimated to belong to the same plane and obtains from them a seed plane serving as the reference plane. The distance image is divided into a plurality of planes by region extension, in which the region of this seed plane (the seed region) is extended by successively integrating line segments belonging to the same plane as the seed region, and the plane group D2 is output.
The robot apparatus 201 acquires information on the planes important for walking, such as stairs, floor surfaces and walls, by performing this processing whenever plane information is needed, for example for obstacle avoidance or stair climbing, or by performing it periodically.
Here, in order for the stereo vision system 1 to acquire such three-dimensional distance data, a pattern (texture) is required on the surface of the stair ST2. That is, because the data are obtained from the parallax between two cameras, the parallax cannot be computed for a surface without a pattern and the distance cannot be measured accurately. In other words, the measurement accuracy of the distance data in the stereo vision system depends on the texture of the measurement target. Parallax here denotes the difference between the points onto which a given point in space is mapped in the left eye and in the right eye, and it changes according to the distance from the cameras.
Therefore, as shown in FIG. 41, the head unit of the robot apparatus is provided with the stereo cameras 11R/L constituting the stereo vision system, and a light source 12 that outputs, for example, infrared light as projection means is provided, likewise in the head unit for example. The light source 12 acts as pattern applying means that projects (irradiates) light onto objects such as the patternless stair ST3, other objects with little or no texture, and walls, thereby giving them a random pattern PT. The means for applying the random pattern PT is not limited to a light source projecting infrared light, provided that a random pattern PT can be formed and a distance image acquired; for example, the robot apparatus may itself draw a pattern on the object. With infrared light, however, a pattern that is invisible to the human eye but observable by a CCD camera or the like mounted on the robot apparatus can be applied.
Next, the plane detection unit 2 of the plane detection apparatus 100 will be described. The plane detection unit 2 detects planes using the line segment expansion method, and FIG. 42 is a diagram explaining the plane detection method based on the line segment expansion method. In plane detection by the line segment expansion method, as shown in FIG. 42, processing is first performed on the data strings in the row direction or column direction of the image 11 captured from the focal point F. Using the fact that distance data points belonging to the same plane form a straight line in, for example, a row of pixels of the image (image row), line segments consisting of distance data points estimated to belong to the same plane are generated. Then, from the resulting group of line segments, a plane is estimated and detected on the basis of the line segments regarded as constituting the same plane.
FIG. 43 is a flowchart showing the plane detection processing by the line segment expansion method. As shown in FIG. 43, a distance image is first input (step S71), and in each pixel string in the row direction (or column direction) of the distance image, line segments are obtained from the data points estimated to belong to the same plane (step S72). Next, line segments estimated to belong to the same plane are extracted from these line segment groups, and the plane consisting of these line segments is obtained (step S73). In step S73, a region serving as the seed of a plane (hereinafter referred to as a seed region) is first chosen and the corresponding seed region is selected. The condition for this selection is that three line segments, including one from the vertically adjacent rows (or the horizontally adjacent columns), lie on the same plane. The plane to which the selected three line segments of the seed region belong is taken as the reference plane, and the plane obtained by averaging over the three line segments is computed; the region consisting of the three line segments is taken as the reference plane region. Then, whether a straight line formed by a pixel string in the row direction (or column direction) adjacent to the selected seed region lies on the same plane as the reference plane is judged by comparing spatial distances. If it does, the adjacent line segment is added to the reference plane region (region extension processing) and the reference plane is updated so as to include the added line segment (plane update processing); this is repeated until no line segment on the same plane exists in the data strings adjacent to the plane region. The search for seed regions followed by the plane update and region extension processing is then repeated until no seed region (set of three line segments) remains. Finally, those of the obtained regions that constitute the same plane are connected. In this modification, a plane recalculation process is further provided as step S74, in which the plane is recomputed after excluding, from the line segments belonging to the obtained plane, those that deviate from the plane by a predetermined threshold or more, and the result is taken as the final plane; the details will be described later.
Here, the processing that detects line segments from the three-dimensional distance data and treats each region in which they are grouped plane by plane as one plane is the plane detection processing of the conventional line segment expansion method, but in this modification the line segment extraction method of step S72 differs from the conventional one. That is, as described above, even if line segments are computed from the distance data points so as to fit them as closely as possible, problems such as over-segmentation or under-segmentation occur unless the threshold is changed according to the accuracy of the distance data. Therefore, in this line segment extraction, this modification introduces a technique that adaptively changes the threshold according to the accuracy of, and the noise in, the distance data by analyzing the distribution of the distance data.
The plane detection method based on the line segment expansion method shown in FIG. 43 will now be described in more detail. As described above, the line segment extraction unit (Line Extraction) 2a takes as input the three-dimensional distance image from the stereo vision system 1, and for each column or row of the distance image detects line segments estimated to lie on the same plane in three-dimensional space. In this line segment extraction, in order to avoid the above-mentioned over-segmentation and under-segmentation problems caused by measurement noise and the like, that is, the problem of recognizing what are in fact several planes as one plane, or of recognizing what is in fact one plane as several planes, an algorithm that fits line segments adaptively according to the distribution of the data points (Adaptive Line Fitting) is introduced. In Adaptive Line Fitting, the line segment extraction unit 2a first roughly extracts a line segment as a first line segment using a comparatively large threshold, and then analyzes the distribution of the data point group belonging to the extracted first line segment with respect to a second line segment obtained by the least squares method described later. In other words, the data point group is first extracted by roughly estimating whether the points lie on the same plane, and whether they actually lie on the same plane is then estimated again by analyzing whether the distribution of the data points in the extracted group is biased.
In this modification, the distribution of the data points is analyzed, and when the data point group fits the zig-zag shape described later, the distribution is regarded as biased and the data point group is divided; by repeating this, an algorithm is used that extracts line segments adaptively with respect to the noise contained in the data point group.
FIG. 44 is a flowchart showing the details of the processing in the line segment extraction unit 2a, that is, of step S72 in FIG. 43. First, distance data is input to the line segment extraction unit 2a. From the input distance data, in each pixel string (data point string), for example in the row direction, a group of data points estimated to lie on the same plane in three-dimensional space is extracted. Such a group can be taken, for example, as a set of data points in which the three-dimensional distance between adjacent data points is not larger than a predetermined threshold, for example 6 cm, and it is extracted as the data point group P[0...n-1] (step S81). It is then checked whether the number of samples n contained in the data point group P[0...n-1] reaches the minimum number of samples required for the processing (required minimum value) min_n (step S82); if the number of data points n is smaller than the required minimum value min_n (S82: YES), an empty set is output as the detection result and the processing ends.

If, on the other hand, the number of samples n is not smaller than the required minimum value min_n (S82: NO), a line segment (chord) L1 connecting one end point P[0] of the data point group P[0...n-1] to the other end point P[n-1] is generated as the first line segment. Then the data point of the group P[0...n-1] farthest from this line segment L1 is found as the point of interest brk, and its distance dist is computed (step S83). If the maximum distance dist is larger than the threshold max_d for dividing the data point group (S84: YES), the data point group P[0...n-1] is divided at the point of interest (division point) brk into the two data point groups P[0...brk] and P[brk...n-1] (step S88).

If, on the other hand, the maximum distance dist is smaller than the data point group division threshold max_d (S84: NO), the equation "line" of the line segment that best fits the data point group P[0...n-1] is obtained by the least squares method described later (step S85), and the line segment L2 represented by this equation is generated as the second line segment. It is then examined whether the data point group P[0...n-1] has the Zig-Zag-Shape described later with respect to this line segment L2 (step S86); if it does not (S86: NO), the obtained line segment equation "line" is added to the line segment extraction result list (step S87) and the processing ends.

If it is judged in step S86 that the line segment obtained in step S85 corresponds to a Zig-Zag-Shape (S86: YES), the processing proceeds to step S88 as in step S84 described above, and the data point group is divided at the point of interest brk, for which the distance dist was obtained in step S83, into the two data point groups P[0...brk] and P[brk...n-1]. When two data point groups are obtained in step S88, the processing from step S81 is performed again recursively on each of them. This is repeated until none of the divided data point groups is divided any further, that is, until every data point group has passed through step S87, whereby a line segment extraction result list in which all line segments are registered is obtained. By this processing, the influence of noise on the data point group P[0...n-1] is eliminated and a group consisting of a plurality of line segments can be detected accurately.

Although step S83 has been described as generating the line segment L1 connecting the end points of the data point group P[0...n-1], the line segment L1 may instead be obtained from the data point group P[0...n-1] by least squares, depending on the distribution, properties and other characteristics of the data point group as required. Also, in this modification the point of interest brk is the single point whose distance from the line segment L1 connecting the end points is largest, but, for example, the point whose distance from the line segment obtained by least squares as described above is largest may be used, or, when there are several points whose distance is not smaller than the data point group division threshold max_d, the data point group P[0...n-1] may be divided at all of those points or at one or more selected points.
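As a rough illustration of the recursive structure of FIG. 44, the following sketch covers only the chord-based splitting (steps S81 to S84, S87 and S88) for a single data row projected into two dimensions; the least-squares fit of step S85 and the zig-zag test of step S86 are sketched separately after their descriptions below. All names and threshold values are illustrative assumptions, not taken from the present embodiment.

```python
import numpy as np

def split_row_into_segments(points, max_d=0.02, min_n=4):
    """Simplified sketch of the recursive splitting of FIG. 44.

    points: (n, 2) array of one data row (e.g. position along the row
    versus height), ordered along the row.  max_d is the division
    threshold and min_n the minimum number of samples; both values are
    illustrative.
    """
    segments = []

    def chord_distances(p):
        # Distances of all points in p from the chord P[0]-P[n-1] (line L1).
        a, b = p[0], p[-1]
        t = (b - a) / np.linalg.norm(b - a)
        rel = p - a
        return np.abs(rel[:, 0] * t[1] - rel[:, 1] * t[0])

    def recurse(p):
        if len(p) < min_n:                      # S82: too few samples, discard
            return
        dist = chord_distances(p)               # S83: farthest point from L1
        brk = int(np.argmax(dist))
        if dist[brk] > max_d and 0 < brk < len(p) - 1:
            recurse(p[:brk + 1])                # S88: split at brk and recurse
            recurse(p[brk:])
        else:
            segments.append(p)                  # S87: register as one segment
            # A full implementation would fit the line by least squares
            # (step S85) and split again if the zig-zag test (S86) fires.

    recurse(np.asarray(points, dtype=float))
    return segments
```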
Next, the least-squares line fitting of step S85 (Least-Squares Line Fitting) will be described, that is, how the equation of the straight line that best fits a given group of n data points P[0...n-1] is obtained. The model of the straight line is expressed by the following formula (1):

x cos α + y sin α + d = 0   ... (1)

In this case, with the i-th point of the n data points P[0...n-1] written (x_i, y_i), the total error between the line model and the data points can be expressed by formula (2):

E_fit = Σ_i (x_i cos α + y_i sin α + d)^2   ... (2)

The straight line that best fits the data point group is obtained by minimizing the total error of formula (2). The α and d that minimize formula (2) can be obtained as in formula (3) using the mean and the variance-covariance matrix of the data point group P:

α = (1/2) tan^(-1)( -2 S_xy / (S_yy - S_xx) ),   d = -( x̄ cos α + ȳ sin α )   ... (3)

where

S_xx = Σ_i (x_i - x̄)^2,   S_yy = Σ_i (y_i - ȳ)^2,   S_xy = Σ_i (x_i - x̄)(y_i - ȳ)

and x̄, ȳ denote the means of the x_i and the y_i.
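A direct transcription of formula (3) might look as follows; this is only a sketch, and using atan2 instead of tan^(-1) to resolve the quadrant is a common implementation choice rather than something stated in the text.

```python
import numpy as np

def fit_line(points):
    """Least-squares line x*cos(alpha) + y*sin(alpha) + d = 0 for an
    (n, 2) array of points, following formula (3)."""
    x, y = points[:, 0], points[:, 1]
    xm, ym = x.mean(), y.mean()
    sxx = np.sum((x - xm) ** 2)
    syy = np.sum((y - ym) ** 2)
    sxy = np.sum((x - xm) * (y - ym))
    alpha = 0.5 * np.arctan2(-2.0 * sxy, syy - sxx)
    d = -(xm * np.cos(alpha) + ym * np.sin(alpha))
    return alpha, d
```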
Next, the Zig-Zag-Shape determination method of step S86 will be described. In this Zig-Zag-Shape determination, given n data points P[0...n-1] and a straight line Line(α, d): x cos α + y sin α + d = 0, it is determined whether the data point group crosses the straight line Line as shown in FIG. 45A, or whether the data points are distributed evenly around it, for example under the influence of noise, as shown in FIG. 45B. Basically, the number of data points of P[0...n-1] appearing consecutively on one side of the straight line Line is counted, and when they appear consecutively beyond a certain constant, the group can be judged to be a zig-zag shape. In the case of FIG. 45A, the data point group P[i] must be divided in order to obtain a straight line Line that fits the data point group P[0...n-1] better. FIG. 46 is a flowchart showing the Zig-Zag-Shape determination method.
First, the data point group P[0...n-1] and the straight line Line(α, d, σ) are input (step S90), where σ denotes the standard deviation of the point sequence. Next, it is judged whether this standard deviation σ is larger than a predetermined threshold th_σ. If the standard deviation σ is smaller than the threshold th_σ (step S91: No), the determination is terminated in order to avoid erroneous detections caused by the floating-point arithmetic error of the processor, and the determination processing is continued only when the standard deviation σ is larger than the threshold th_σ. Next, on which side of the straight line the first data point P[0] of the data point group P[0...n-1] lies is judged by sign(sdist(P[0])); the result is assigned to val0, and the count value count of the counter that counts the number of consecutive data points lying on the same side (hereinafter called the consecutive point counter, its count value being called count) is set to 1 (step S92). Here, sign(x) is a function that returns the sign (+ or -) of the value x, and sdist(i) denotes the signed distance of the i-th data point from the straight line Line, computed as P[i].x cos α + P[i].y sin α + d. That is, the sign + or - indicating on which side of the straight line Line the data point P[0] lies is assigned to val0.

Next, the count value i of the counter for counting data points (hereinafter called the data point counter, its count value being called i) is set to 1 (step S93). Then, while the count value i of the data point counter is smaller than the number of data points n (step S94: YES), on which side of the straight line the next data point, the i-th data point P[i], lies is judged by sign(sdist(P[i])) and the result is assigned to val (step S95). The val0 obtained in step S92 and the val obtained in step S95 are then compared; if val and val0 differ (step S96: NO), val is assigned to val0, 1 is assigned to the count value count of the consecutive point counter (step S98), the count value i of the data point counter is incremented (step S100), and the processing returns to step S94.

If, on the other hand, val and val0 are the same in step S96 (step S96: YES), the data points P[i-1] and P[i] are judged to lie on the same side of the straight line Line, and the count value count of the consecutive point counter is incremented by one (step S97). It is further judged whether the count value count of the consecutive point counter is larger than the minimum number of data points min_c required for a Zig-Zag-Shape judgment (step S99); if it is larger (step S99: YES), the group is judged to be a Zig-Zag-Shape, TRUE is output, and the processing ends. If the count value count of the consecutive point counter is smaller than the minimum number of data points min_c (step S99: NO), the processing proceeds to step S100, the count value i of the data point counter is incremented (step S100), and the processing from step S94 is repeated.

The processing from step S94 is continued until the count value i of the data point counter reaches the number of data points n, and when i >= n, FALSE is output and the processing ends.

By this zig-zag determination processing, when n data points P[0...n-1] and a straight line Line(α, d): x cos α + y sin α + d = 0 are given, it can be judged whether the data point group crosses the straight line Line in a zig-zag manner. As described above, this makes it possible to judge in step S86 whether the data point group should be divided: when the data point group is judged to cross the line obtained by least squares in a zig-zag manner, it is judged that the group should be divided, the processing proceeds to step S88, and the data point group is divided at the point of interest brk as the division point. The processing from step S91 to step S100 can also be expressed as in FIG. 47.
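The following sketch follows the flow of FIG. 46; the argument layout, the value of the σ threshold and the handling of points lying exactly on the line are assumptions made for illustration.

```python
import numpy as np

def is_zig_zag(points, alpha, d, sigma, th_sigma=1e-6, min_c=6):
    """Zig-zag determination for an (n, 2) point array and the line
    x*cos(alpha) + y*sin(alpha) + d = 0, after the flow of FIG. 46.

    sigma is the standard deviation of the point sequence; when it is
    below th_sigma the test is skipped to avoid floating-point artefacts
    (S91).  min_c is the minimum run length counted as zig-zag (S99).
    """
    if sigma < th_sigma:                       # S91: too small to judge reliably
        return False

    def side(p):
        # Sign of the signed distance sdist of a point to the line.
        return np.sign(p[0] * np.cos(alpha) + p[1] * np.sin(alpha) + d)

    val0 = side(points[0])                     # S92: side of the first point
    count = 1
    for p in points[1:]:                       # S93/S94: loop over the row
        val = side(p)                          # S95
        if val == val0:                        # S96: same side as before
            count += 1                         # S97
            if count > min_c:                  # S99: long run on one side
                return True                    # -> Zig-Zag-Shape
        else:
            val0, count = val, 1               # S98: side changed, restart run
    return False                               # i reached n without a long run
```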
Such Zig-Zag-Shape determination processing can be performed not only by a processor but also by hardware. FIG. 48 is a block diagram showing a processing unit that performs the Zig-Zag-Shape determination processing. As shown in FIG. 48, the Zig-Zag-Shape determination processing unit 20 receives the n data points P[0...n-1] and has a direction determination unit 21 that determines in turn on which side of the straight line Line each data point P[i] lies and outputs the determination result Val, a delay unit 22 for allowing the result of the direction determination unit 21 to be compared with the following data point, a comparison unit 23 that compares the direction determination result Val at the data point P[i] with the direction determination result Val0 at the data point P[i-1], a consecutive point counter 24 that increments its count value when the comparison unit 23 finds Val = Val0, and a comparison unit 25 that compares the count value count of the consecutive point counter 24 with the minimum number of data points min_c read from the minimum data point number storage unit 26.
The operation of this Zig-Zag-Shape determination processing unit is as follows. The direction determination unit 21 obtains the straight line Line from the data point group P[0...n-1] by the least squares method, computes the signed distance between each data point P[i] and the straight line Line, and outputs its sign. When the sign for the data point P[i-1] is input, the delay unit 22 holds it until the timing at which the sign for the following data point P[i] is input. The comparison unit 23 compares the signs of the data points P[i] and P[i-1]; when they are the same it outputs a signal that increments the count value count of the counter 24, and when they differ it outputs a signal that sets the count value count to 1. The comparison unit 25 compares the count value count with the minimum number of data points min_c, and when the count value count is larger than min_c it outputs a signal indicating that the data point group P[0...n-1] is zig-zag.
Next, the region extension unit (Region Growing) 2b shown in FIG. 40 will be described. The region extension unit 2b takes as input the line segment group obtained by the line segment extraction unit 2a, judges to which plane each of the line segments belongs by fitting the point sequences to planes (Plane Fitting), and separates the region formed by the given line segments into a plurality of planes (plane regions). The following procedure is used to separate them into a plurality of planes.
First, three adjacent line segments estimated to lie on the same plane are searched for in the given line segment group. The plane obtained from these three line segments (the reference plane) serves as the seed of a plane, and the region containing these three line segments is called the seed region. Then, for the line segments adjacent to this seed region, it is judged one after another by fitting the point sequences to the plane (Plane Fitting) whether each line segment lies on the same plane as the reference plane; when an adjacent line segment is judged to belong to the same plane, it is added to the seed region as a region extension line segment, the region is enlarged, and the equation of the reference plane is recomputed with the region extension line segment included. By this processing, all line segments are allocated to one of the regions (planes).

FIG. 49 is a schematic diagram for explaining the region extension processing. As shown in FIG. 49, when a staircase 31 consisting of a plurality of planes exists in the image 30, suppose for example that the three line segments 32a to 32c drawn with thick lines are selected as the seed; the region consisting of these three line segments 32a to 32c is the seed region. First, one plane (the reference plane) P is obtained from the three line segments 32a to 32c. Next, in the data string 33 or 34 adjacent, outside the seed region, to the outermost line segment 32a or 32c of the seed region, a line segment lying on the same plane as the plane P is selected; suppose here that the line segment 33a is selected. The plane P' formed by these four line segments is then obtained and the reference plane P is updated. Next, if the line segment 34a is selected, the plane P'' formed by the five line segments is obtained and the plane P' is updated. By repeating this, the tread of the second step of the staircase 31 is obtained as the plane 45 surrounded by the broken line. In this way the region is extended until there is no more line segment to add to the selected seed region. When no more line segments can be added, three line segments forming a new seed region are again searched for in the image 30 and the region extension processing is executed; this is repeated, and the processing of step S73 in FIG. 43 is repeated until no set of three line segments forming a seed region remains.
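A compact sketch of the growing loop might look as follows, representing each line segment by its 3D points and using the linear-system plane fit described in the next subsection. Growing in one direction only, the coplanarity test and the tolerance value are simplifying assumptions; the seed itself is taken as given.

```python
import numpy as np

def fit_plane_m(A, b):
    """Solve (sum p p^T) m = sum p; the plane is written as p.m = 1."""
    return np.linalg.solve(A, b)

def grow_region(seed_segments, later_rows, tol=0.01):
    """Grow one plane region from a seed of three coplanar segments.

    seed_segments: list of three (n, 3) point arrays already judged
    coplanar.  later_rows: list of rows adjacent to the seed, each row a
    list of (n, 3) segment arrays.  Returns (m, members).
    """
    members = list(seed_segments)
    pts = np.vstack(members)
    A = pts.T @ pts                     # running moments of the region
    b = pts.sum(axis=0)
    m = fit_plane_m(A, b)

    def coplanar(seg, m):
        # Spatial distance of the segment's points from the plane p.m = 1.
        return np.max(np.abs(seg @ m - 1.0)) / np.linalg.norm(m) < tol

    for row in later_rows:              # sweep the adjacent data rows
        added = [seg for seg in row if coplanar(seg, m)]
        if not added:
            break                       # no coplanar segment in the next row
        for seg in added:               # region extension processing
            members.append(seg)
            A += seg.T @ seg            # plane update processing: only the
            b += seg.sum(axis=0)        # moments A and b need updating
        m = fit_plane_m(A, b)
    return m, members
```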
Next, the following will be described: the method (plane fitting) of estimating the equation of the plane formed by a group of data points p[0 ... n-1]; the method of selecting a seed region using it (selection of seed region); the region growing processing that expands a region from a seed region (region growing); and the post-processing (post processing) in which the obtained plane equations are recomputed after excluding data with large errors.
A point p in three-dimensional space is represented by p = (x, y, z), and the equation of a plane is expressed, using its normal vector n = (nx, ny, nz) and a non-negative constant d, by the following equation (4).

[Equation 4]
x·nx + y·ny + z·nz + d = 0   ... (4)

Here, a stereo camera cannot observe a plane that passes through its focal point; that is, since no observed plane passes through the focal point, d ≠ 0 can be assumed. The plane can therefore be obtained by the least squares method as the values that minimize the quantity shown in the following equation (5).
[Equation 5]
fit(n, d) = Σ_i (p_i^T n + d)^2   ... (5)

The optimal solution is obtained as n = m/‖m‖ and d = −1/‖m‖, where ‖·‖ denotes the magnitude of a vector and m is the solution of a linear system that is easily obtained, for example by Cramer's rule (solving simultaneous linear equations by determinants), as shown in (6-1) below.

[Equation 6]
A m = b   ... (6-1)

where

A = Σ_i p_i p_i^T,  b = Σ_i p_i   ... (6-2)

Even when new data points are added or data points are deleted, this solution allows the plane parameters to be recomputed simply by updating the values of A and b shown in equation (6-2).
Furthermore, in the line segment extraction method of this modification, the two moments (first moment: mean; second moment: variance) E(p) and E(pp^T) of a group of n data points are known, and using them A and b can be updated as shown in (7) below, so that the plane update processing can be extended to a group of n data points.

[Equation 7]
A ← A + n·E(pp^T),  b ← b + n·E(p)   ... (7)
Once the plane parameters n and d have been computed, the root mean square error of the plane equation (RMS (root mean square) residual; hereinafter rms), which indicates how far the n data points deviate from the plane equation, can be computed from the obtained plane equation by the following equation (8). In this case, too, equation (8) can be obtained using the above two moments of the n data points.

[Equation 8]
rms(p_1, ..., p_n) = sqrt( (1/n) Σ_i (p_i^T n + d)^2 )   ... (8)

As shown in (8), if every data point lies on the obtained plane, the root mean square error rms(p_1, ..., p_n) of the plane equation is 0; the smaller this value, the better the data points fit the plane.
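For illustration, the least-squares fit of equations (4) to (7) and the rms residual of equation (8) can be written compactly. The following Python/NumPy sketch is an assumption of this description (the library, function names, and sample data are illustrative, not code taken from the embodiment).

import numpy as np

def fit_plane(points):
    """Fit a plane p.n + d = 0 to an (N, 3) array of points by the moment form
    A m = b, with A = sum(p p^T) and b = sum(p) (equations (6-1), (6-2)).
    Assumes the plane does not pass through the coordinate origin (d != 0)."""
    P = np.asarray(points, dtype=float)
    A = P.T @ P                      # second moments, A = sum_i p_i p_i^T
    b = P.sum(axis=0)                # first moments,  b = sum_i p_i
    m = np.linalg.solve(A, b)        # Cramer's rule would serve equally well here
    norm_m = np.linalg.norm(m)
    return m / norm_m, -1.0 / norm_m # n = m/||m||, d = -1/||m||

def rms_residual(points, n, d):
    """Root mean square deviation of the points from the plane (equation (8))."""
    P = np.asarray(points, dtype=float)
    return np.sqrt(np.mean((P @ n + d) ** 2))

if __name__ == "__main__":
    # Points scattered on a tread-like plane z = 0.5 m with about 2 mm of range noise.
    rng = np.random.default_rng(0)
    pts = np.column_stack([rng.uniform(-0.3, 0.3, 300),
                           rng.uniform(-0.3, 0.3, 300),
                           0.5 + rng.normal(0.0, 0.002, 300)])
    n, d = fit_plane(pts)
    print("n =", np.round(n, 3), " d =", round(d, 3), " rms =", rms_residual(pts, n, d))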
Next, the method of searching for a seed region and the method of expanding a region from the seed region while updating the plane will be described. FIG. 50 is a flowchart showing the procedure of the seed region search processing and the region growing processing. As shown in FIG. 50, to select a seed region, first, three line segments (l1, l2, l3) whose row-direction or column-direction data rows used in the line segment extraction are adjacent, and whose pixel positions in the pairs (l1, l2) and (l2, l3) overlap in the direction orthogonal to the data rows, are searched for (step S101). Each data point has an index indicating its pixel position in the image; for example, in the case of line segments in row-direction data rows, these indices are compared to determine whether the segments overlap in the column direction. If this search succeeds (step S102: YES), equation (6-1) is computed using equation (7). The plane parameters n and d can thereby be determined, and using them the root mean square error rms(l1, l2, l3) of the plane equation shown in equation (8) is computed (step S103). If this root mean square error rms(l1, l2, l3) is smaller than a predetermined threshold th_rms1, for example 1 cm, the three line segments are selected as a seed region (step S104). If it is larger than the predetermined threshold th_rms1, the process returns to step S101 and searches for line segments satisfying the above conditions. A line segment selected for a seed region is removed from the list of line segments so that it is not used again, for example in the growing of other planes.
The region is then grown from the seed region selected in this way by the line segment extension method. That is, first, candidate line segments to be added to the region of the seed are searched for (step S105). This region also includes the updated seed region described later, when the seed region has already been updated. A candidate is a line segment (for example l4) adjacent to a line segment included in the seed region (for example l1), under the same condition as above that the pixel positions of the two segments overlap each other. If the search succeeds (step S106: YES), the root mean square error rms(l4) of the plane equation is computed and compared with a predetermined threshold th_rms2 (step S107); if it is smaller, the plane parameters are updated (step S108) and the processing from step S105 is repeated. The processing is repeated until no candidate line segment remains; when no candidate remains (step S106: NO), the process returns to step S101 and a new seed region is searched for. When no seed region remains in the group of line segments (step S102: NO), the plane parameters obtained so far are output and the processing ends.
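As a rough illustration of the flow of FIG. 50, the seed search (steps S101 to S104) and the region growing loop (steps S105 to S108) might be sketched as below. The sketch simplifies several details that the embodiment leaves to the image indices: a "line segment" is reduced to a row index plus its 3-D points, the overlap test is approximated by adjacency of row indices, and the thresholds are illustrative values; the fit helper follows the moment formulation of equations (6-1) to (8).

import numpy as np

TH_RMS_SEED = 0.01   # about 1 cm: accept three segments as a seed region (illustrative)
TH_RMS_GROW = 0.01   # threshold for adding a candidate segment (illustrative)

def plane_from_points(P):
    """Least-squares plane n.p + d = 0 via A m = b (equations (4)-(7)) and its rms (equation (8))."""
    A = P.T @ P
    b = P.sum(axis=0)
    m = np.linalg.solve(A, b)
    n = m / np.linalg.norm(m)
    d = -1.0 / np.linalg.norm(m)
    rms = np.sqrt(np.mean((P @ n + d) ** 2))
    return n, d, rms

def grow_planes(segments):
    """segments: list of (row_index, points) with points a (k, 3) array, sorted by row.
    Returns a list of (n, d, member_indices) plane regions."""
    unused = set(range(len(segments)))
    planes = []
    while True:
        # Seed search (steps S101-S104): three segments on consecutive rows with small rms.
        seed = None
        for i in sorted(unused):
            trio = [i, i + 1, i + 2]
            if not all(j in unused for j in trio):
                continue
            rows = [segments[j][0] for j in trio]
            if rows[1] != rows[0] + 1 or rows[2] != rows[1] + 1:
                continue
            _, _, rms = plane_from_points(np.vstack([segments[j][1] for j in trio]))
            if rms < TH_RMS_SEED:
                seed = trio
                break
        if seed is None:          # step S102: NO -> no seed left, output the planes found
            return planes
        members = list(seed)
        pts = np.vstack([segments[j][1] for j in seed])
        unused.difference_update(seed)
        # Region growing (steps S105-S108): keep adding adjacent segments while rms stays small.
        grown = True
        while grown:
            grown = False
            for j in sorted(unused):
                if not any(abs(segments[j][0] - segments[k][0]) == 1 for k in members):
                    continue       # candidate must be adjacent to the current region
                trial = np.vstack([pts, segments[j][1]])
                _, _, rms = plane_from_points(trial)
                if rms < TH_RMS_GROW:
                    pts, members = trial, members + [j]
                    unused.discard(j)
                    grown = True
        n, d, _ = plane_from_points(pts)
        planes.append((n, d, members))

if __name__ == "__main__":
    # Two synthetic treads: rows 0-4 at z = 0.50 m, rows 5-9 at z = 0.66 m (a 16 cm riser).
    segments = []
    for r in range(10):
        z = 0.50 if r < 5 else 0.66
        xs = np.linspace(-0.2, 0.2, 20)
        segments.append((r, np.column_stack([xs, np.full_like(xs, 0.05 * r), np.full_like(xs, z)])))
    for n, d, members in grow_planes(segments):
        print("plane n=%s d=%.2f segments=%s" % (np.round(n, 2), d, members))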
Here, in this modification, equation (8) above is used both when a seed region is searched for and it is judged whether the three line segments belong to the same plane, and when it is judged during region growing whether a line segment belongs to the reference plane or to the plane obtained by updating it. That is, a line segment (or group of segments) is estimated to belong to the same plane only when the root mean square error rms of the plane equation is less than a predetermined threshold (th_rms), and the plane is then recomputed as a plane including that segment. By using the root mean square error rms of the plane equation to judge whether segments belong to the same plane in this way, planes can be extracted accurately in a manner that is even more robust to noise, even in cases containing fine steps. The reason is explained below.
FIG. 51 shows this effect; it is a schematic diagram showing an example in which the root mean square error rms of the plane equation differs even though the distance between an end point and the plane is the same. As in Non-Patent Document 4, suppose that during region growing a line segment of interest is regarded as coplanar with the plane P whenever the distance D between an end point of the segment and the plane P is smaller than a predetermined threshold. Then a straight line La intersecting the plane P (FIG. 51A) and a straight line Lb parallel to the plane P but offset from it by a certain distance (FIG. 51B) are both used in the same way to update the plane P. When the root mean square error rms of the plane equation is computed, however, the error rms(Lb) obtained from the straight line Lb in FIG. 51B is larger than the error rms(La) obtained from the straight line La in FIG. 51A. That is, when the straight line La intersects the plane P as in FIG. 51A, the root mean square error of the plane equation is comparatively small and the deviation is often due to noise, whereas in a case like FIG. 51B, where the root mean square error rms of the plane equation is large, the straight line Lb is highly likely to lie not on the plane P but on a different plane P'. Therefore, when planes must be determined precisely in an environment containing a plurality of planes, it is preferable, as in this modification, to compute the root mean square error rms of the plane equation and judge segments to be coplanar only when this value is less than a predetermined threshold. Depending on the environment and the nature of the distance data, a segment may also be included in a plane when the distance between its end point and the plane is not more than a predetermined threshold, as in the conventional method, or the two criteria may be combined. Moreover, once the plane parameters n and d have been computed, the root mean square error rms of the plane equation can easily be computed by equation (8), by updating the plane equation from the values of the two moments obtained for the data point group at the time of line segment extraction.
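The distinction drawn in FIG. 51 can be checked numerically with a small, purely illustrative example (not taken from the embodiment): a segment La that crosses a reference plane and a segment Lb that is parallel to it but offset, both with the same end-point distance, give different rms values when the plane is refit with the candidate included.

import numpy as np

def plane_rms_after_merge(region_pts, segment_pts):
    """Refit the plane to region + candidate segment (A m = b) and return the
    rms residual of equation (8) over the merged point set."""
    P = np.vstack([region_pts, segment_pts])
    A, b = P.T @ P, P.sum(axis=0)
    m = np.linalg.solve(A, b)
    n, d = m / np.linalg.norm(m), -1.0 / np.linalg.norm(m)
    return np.sqrt(np.mean((P @ n + d) ** 2))

# Reference plane region P: a grid at z = 0.5 m in front of the camera.
xs, ys = np.meshgrid(np.linspace(-0.2, 0.2, 20), np.linspace(0.0, 0.4, 9))
region = np.column_stack([xs.ravel(), ys.ravel(), np.full(xs.size, 0.5)])

x = np.linspace(-0.2, 0.2, 50)
row = np.full_like(x, 0.45)                       # the adjacent image row
# La crosses the plane: its end points lie 1 cm above and below it.
la = np.column_stack([x, row, 0.5 + 0.05 * x])
# Lb is parallel to the plane but offset by 1 cm: same end-point distance as La.
lb = np.column_stack([x, row, np.full_like(x, 0.51)])

print("rms after adding La:", plane_rms_after_merge(region, la))  # smaller (noise-like crossing)
print("rms after adding Lb:", plane_rms_after_merge(region, lb))  # larger (likely a separate plane)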
The seed region selection method described above can also be expressed as in FIG. 52. overlap(lj, lk) is a function that outputs true when the positions between the end points of the straight-line vectors lj and lk contained in the respective image rows overlap at positions orthogonal to the vectors. fitPlane(l1, l2, l3) is a function that obtains the solution of Am = b by equations (4) to (7), computes the plane parameters n and d, and thereby fits the straight-line vectors l1, l2, l3 to a plane.

rms(l1, l2, l3) is a function that computes the value of the root mean square error rms of the plane equation over all three straight lines. remove(l1, l2, l3) means removing the straight lines l1, l2, l3 selected as constituting the seed region from lines[i], lines[i+1], lines[i+2], respectively, which prevents these lines from being used again in the computation.

The region growing processing can also be expressed as in FIG. 53. In FIG. 53, A and b are the matrix and the vector shown in equation (6-1), and add(A, b, line) is a function that adds the moments of the straight line "line" to A and b as in equation (7). Solve(A, b) finds m satisfying Am = b and computes the plane parameters n and d by equations (4) to (7). select(open) is a function that arbitrarily selects one element from open, for example the first one. index(l1) is a function that returns the index of l1 in its pixel column or row. neighbor(index) is a function that returns the indices adjacent to the given index, for example {index-1, index+1}.
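The helpers of FIGS. 52 and 53 can be pictured roughly as follows. This sketch assumes that each line segment carries its point count and its two moments E(p) and E(pp^T) from the line-fitting stage, so that add() merely accumulates moments as in equation (7) and Solve() recovers (n, d) from A m = b; the class and function names are illustrative, not those of the embodiment.

import numpy as np

class PlaneAccumulator:
    """Incremental plane estimate kept as the moment matrix A and vector b of
    equation (6-1); line segments are added through their stored moments (equation (7))."""

    def __init__(self):
        self.A = np.zeros((3, 3))
        self.b = np.zeros(3)

    def add(self, n_points, mean_p, mean_ppT):
        # A <- A + n * E(pp^T),  b <- b + n * E(p)   (equation (7))
        self.A += n_points * mean_ppT
        self.b += n_points * mean_p

    def solve(self):
        # Solve A m = b, then n = m/||m||, d = -1/||m||  (equations (4)-(7))
        m = np.linalg.solve(self.A, self.b)
        norm_m = np.linalg.norm(m)
        return m / norm_m, -1.0 / norm_m

def segment_moments(points):
    """Point count and the first and second moments of one line segment's points,
    as they would be kept from the line-fitting stage."""
    P = np.asarray(points, dtype=float)
    return len(P), P.mean(axis=0), (P.T @ P) / len(P)

if __name__ == "__main__":
    # Three segments of the same tread plane z = 0.3 m, added one after another.
    acc = PlaneAccumulator()
    for y in (0.0, 0.05, 0.10):
        xs = np.linspace(-0.1, 0.1, 30)
        seg = np.column_stack([xs, np.full_like(xs, y), np.full_like(xs, 0.3)])
        acc.add(*segment_moments(seg))
    n, d = acc.solve()
    print("normal:", np.round(n, 3), " d:", round(d, 3))   # roughly (0, 0, 1) and -0.3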
As described above, in this modification, after the region growing processing is performed in step S73 of FIG. 43 to update the plane equations, processing for recomputing the plane equations (post processing) is performed in step S74. In this recomputation, the deviation from the plane of each distance data point or line segment judged to belong to the plane represented by the finally obtained, updated plane equation is computed; distance data points or line segments that deviate from the plane by a predetermined value or more are excluded and the plane equation is updated again, whereby the influence of noise can be further reduced.
Next, step S74 will be described in detail. Here, a method of recomputing the plane equations in two steps is described. First, among the distance data points (pixels) at the boundaries of the planes detected in step S73, if a data point is found whose distance to an adjacent plane is smaller than its distance to the plane to which it currently belongs, that data point is included in the adjacent plane. Also, if a data point is found that belongs to no plane but for which there exists a plane whose distance from the point is not more than a comparatively large threshold, for example 1.5 cm, that data point is included in that plane. These processes can be executed by searching the data points near the boundary of each plane region. When the above processing is completed, the plane equations are computed again.
Next, near the boundaries of each region of the planes recomputed as described above, if the distance between a data point and the plane exceeds a comparatively small threshold, for example 0.75 cm, that data point is discarded. As a result, although the plane region becomes slightly smaller, a more accurate plane can be obtained. After such distance data points have been deleted, the plane is obtained again, and this processing is repeated. In this way, the plane can be obtained extremely precisely. Next, the results obtained by each processing are shown. FIG. 54A is a schematic diagram showing the floor surface as seen when the robot device, in a standing posture, looks down at the floor. FIG. 54B shows the three-dimensional distance data with x on the vertical axis, y on the horizontal axis, and the z axis expressed by the shading of each data point; it further shows the straight lines detected from the groups of data points judged to lie on the same plane by the line segment extraction processing on the row-direction pixel rows. FIG. 54C shows the plane region obtained from the group of straight lines in FIG. 54B by the region growing processing. It can be seen that only one plane (the floor surface) exists in the field of view of the robot device, that is, the entire floor surface is detected as the same plane.
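The two post-processing passes of step S74 might be sketched as follows, again only as an illustration: the sketch applies the reassignment to every point rather than only to boundary pixels, and the 1.5 cm and 0.75 cm thresholds are the example values mentioned above.

import numpy as np

LOOSE_TH = 0.015    # 1.5 cm: attach points that belong to no plane but lie this close to one
TIGHT_TH = 0.0075   # 0.75 cm: afterwards discard members farther than this and refit

def fit_plane(P):
    """Plane n.p + d = 0 from the moment system A m = b (equations (4)-(7))."""
    m = np.linalg.solve(P.T @ P, P.sum(axis=0))
    return m / np.linalg.norm(m), -1.0 / np.linalg.norm(m)

def post_process(planes, labels, points):
    """planes: list of (n, d); labels: integer array with a plane index per point or -1;
    points: (N, 3) array of distance data points."""
    # Pass 1: move each labelled point to its nearest plane, and attach unlabelled
    # points that lie within the loose threshold of some plane; then refit.
    for i, p in enumerate(points):
        dists = [abs(p @ n + d) for n, d in planes]
        k = int(np.argmin(dists))
        if labels[i] >= 0 or dists[k] <= LOOSE_TH:
            labels[i] = k
    planes = [fit_plane(points[labels == k]) for k in range(len(planes))]

    # Pass 2: drop members farther than the tight threshold from their plane, then refit again.
    for k, (n, d) in enumerate(planes):
        far = (labels == k) & (np.abs(points @ n + d) > TIGHT_TH)
        labels[far] = -1
    planes = [fit_plane(points[labels == k]) for k in range(len(planes))]
    return planes, labels

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    lower = np.column_stack([rng.uniform(-0.2, 0.2, (120, 2)), np.full((120, 1), 0.30)])
    upper = np.column_stack([rng.uniform(-0.2, 0.2, (120, 2)), np.full((120, 1), 0.46)])
    points = np.vstack([lower, upper])
    labels = np.array([0] * 120 + [1] * 100 + [-1] * 20)   # 20 boundary points left unassigned
    planes = [fit_plane(lower), fit_plane(upper[:100])]
    planes, labels = post_process(planes, labels, points)
    for k, (n, d) in enumerate(planes):
        print("plane", k, "n =", np.round(n, 2), " d =", round(d, 2),
              " members =", int(np.count_nonzero(labels == k)))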
Next, FIG. 55 shows the results when a single step is placed on the floor. As shown in FIG. 55A, a one-level step ST3 is placed on the floor surface F. FIG. 55B shows the experimental conditions: when the distance between the point of interest and the straight line (line segment) exceeds max_d, the data point group is divided. "Correct extraction (horizontal)" indicates, out of a total of ten runs of plane detection by the line segment extension method, the number of successes when line segments are extracted from the row-direction data rows, and "correct extraction (vertical)" indicates the corresponding result for the column-direction data rows. Nos. 1 to 5 are the conditions of the conventional plane detection processing by the line segment extension method, which does not incorporate the Zig-Zag-Shape discrimination processing described above, and No. 6 is the condition of the plane detection method of this modification, which performs the Zig-Zag-Shape discrimination processing. FIGS. 55C and 55D show the results of plane detection by the line segment extension method: the result of plane detection by the method of this modification, and the result of plane detection by the conventional line segment extension method (comparative example), respectively. As shown in FIG. 55B, in the conventional method the detection rate falls when the threshold parameter max_d used for estimation in the line fitting is made large (max_d = 25, 30), and the detection rate improves when the threshold max_d is made small (max_d = 10, 15). In contrast, by introducing the zigzag-shape verification processing as in the present invention, excellent detection results are obtained even when a large threshold max_d = 30 is set.
That is, when the threshold max_d is made large, the influence of noise is reduced but line segment extraction becomes difficult, whereas when the threshold max_d is made small, erroneous detections due to noise increase. FIGS. 56B and 56C show cases in which three-dimensional distance data is obtained from an image of the floor surface shown in FIG. 56A. In each figure, the left part shows an example in which line segments are extracted from the row-direction pixel rows (distance data rows), and the right part shows an example in which line segments are extracted from the column-direction pixel columns (distance data columns). As shown in FIG. 56B, when the threshold max_d is made small, the influence of noise becomes large, and line segments cannot be detected well, particularly in distant regions where the influence of noise is large. On the other hand, as shown in FIG. 56C, when the zigzag-shape discrimination processing is added to the conventional line segment extraction, line segments are detected even with a large threshold max_d, and even in distant regions where the influence of noise is larger.
As a result, three-dimensional distance data can be obtained from images of the different staircases described above and plane detection can be performed. For example, as shown in FIGS. 11 and 12, all treads are detected as planes in every case. In FIG. 12B, part of the floor surface is also successfully detected as another plane.
According to this modification, when plane detection is performed by the line segment extension method, line segments are first divided using a large threshold, and then, by the Zig-Zag-Shape discrimination processing, a straight line that has no data point exceeding the threshold but is zigzag-shaped is treated not as noise but as a line spanning a plurality of planes and is divided. A plurality of planes can therefore be detected accurately even from distance information containing noise.
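As a rough, simplified illustration of the kind of zigzag test referred to above (following the criterion of claims 27 and 28, here reduced to looking for a long run of residuals on one side of a single fitted line), a per-row check might look like this; the run length and the sample rows are assumptions of the sketch.

import numpy as np

MAX_RUN = 15   # illustrative: this many consecutive residuals on one side suggests a zigzag, not noise

def zigzag_split_index(points_2d):
    """Fit a line to one row of (coordinate, depth) points and, if no point is far from
    the line but the residuals stay on one side for a long consecutive run, return the
    index of the farthest point as the place to split; otherwise return None."""
    P = np.asarray(points_2d, dtype=float)
    t, z = P[:, 0], P[:, 1]
    a, b = np.polyfit(t, z, 1)            # least-squares line z = a*t + b
    residual = z - (a * t + b)
    run = best = 1                        # longest run of consecutive residuals with the same sign
    for i in range(1, len(residual)):
        run = run + 1 if np.sign(residual[i]) == np.sign(residual[i - 1]) else 1
        best = max(best, run)
    if best >= MAX_RUN:
        return int(np.argmax(np.abs(residual)))   # split at the farthest point
    return None

if __name__ == "__main__":
    t = np.linspace(0.0, 1.0, 60)
    wiggly_flat = np.column_stack([t, 0.30 + 0.002 * np.sin(40 * t)])   # deviations alternate sides
    two_steps = np.column_stack([t, np.where(t < 0.5, 0.30, 0.32)])     # a 2 cm step: one-sided runs
    print("noise-like row ->", zigzag_split_index(wiggly_flat))   # expected: None
    print("stepped row    ->", zigzag_split_index(two_steps))     # expected: an index near the step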
Since even small steps can thus be detected accurately, for example stairs in the environment in which the robot device can move can be recognized, and a bipedal walking robot device can use this detection result to perform stair ascending and descending operations.
Furthermore, an uneven floor surface composed of a plurality of planes is no longer misrecognized as a walkable plane, which makes the movement of the robot device and the like even simpler.
The present invention is not limited to the modification described above, and it goes without saying that various changes are possible without departing from the gist of the present invention. Any one or more of the plane detection processing, stair recognition processing, and stair climbing control processing described above may be implemented in hardware, or may be realized by causing an arithmetic unit (CPU) to execute a computer program. When implemented as a computer program, it can be provided by being recorded on a recording medium, or by being transmitted via the Internet or another transmission medium.

Claims

[1] 1. A robot device movable by moving means, comprising:
plane detection means for detecting one or more planes contained in an environment from three-dimensional distance data and outputting them as plane information;
stair recognition means for recognizing, from the plane information, a staircase having planes on which movement is possible, and outputting stair information including tread information and riser (kick-up) information relating to the treads of the staircase; and
stair climbing control means for judging, based on the stair information, whether the staircase can be ascended or descended and, when it is judged that the ascending or descending operation is possible, autonomously positioning the robot device with respect to the tread and controlling the stair ascending or descending operation.
[2] 2. The robot device according to claim 1, further comprising distance measurement means for acquiring the three-dimensional distance data.

[3] 3. The robot device according to claim 1, comprising legs as the moving means and being movable by the legs.
[4] 4. The robot device according to claim 1, wherein the stair recognition means comprises:
stair detection means for detecting, from the given plane information, a staircase having planes on which movement is possible and outputting pre-integration stair information; and
stair integration means for outputting, as the stair information, integrated stair information obtained by statistically processing a plurality of temporally different items of pre-integration stair information output from the stair detection means.

[5] 5. The robot device according to claim 4, wherein the stair detection means recognizes the size and spatial position of each tread based on the plane information and outputs the resulting tread information as the pre-integration stair information, and the stair integration means, when it detects from temporally successive items of tread information a group of two or more treads that have an overlapping area larger than a predetermined threshold and whose relative height difference is not more than a predetermined threshold, integrates the group into a single tread containing all of them.
[6] 6. The robot device according to claim 1, wherein the stair recognition means recognizes the size and spatial position of each tread based on the plane information and uses the result as the tread information.

[7] 7. The robot device according to claim 6, wherein the tread information includes at least information on a front edge indicating the near-side boundary of the tread with respect to the direction of movement and a back edge indicating the far-side boundary.

[8] 8. The robot device according to claim 7, wherein the tread information includes right margin information and left margin information indicating margin regions that are adjacent to the left and right sides of a safety region, the safety region being the region between the front edge and the back edge, and that are estimated to have a high probability of being traversable.

[9] 9. The robot device according to claim 7, wherein the tread information includes reference point information indicating the center of gravity of a region estimated, based on the plane information, to be a tread.

[10] 10. The robot device according to claim 9, wherein the reference point information includes position information of a reference point that is any one of: the center of gravity of the safety region between the front edge and the back edge; the center of gravity of a tread region consisting of the safety region and the margin regions adjacent to both sides of it that are estimated to have a high probability of being traversable; and the center point obtained from the group of points constituting the plane that forms the tread.

[11] 11. The robot device according to claim 7, wherein the tread information includes three-dimensional coordinate information of the group of points constituting the plane that forms the tread.
[12] 12. The robot device according to claim 1, wherein the stair recognition means extracts the boundary of a plane based on the plane information to compute a polygon, and computes the tread information based on the polygon.

[13] 13. The robot device according to claim 12, wherein the polygon is a convex polygonal region circumscribing the boundary of the plane extracted based on the plane information.

[14] 14. The robot device according to claim 12, wherein the polygon is a convex polygonal region inscribed in the boundary of the plane extracted based on the plane information.

[15] 15. The robot device according to claim 12, wherein the polygon is a non-convex polygonal region obtained by smoothing the boundary of the plane extracted based on the plane information.

[16] 16. The robot device according to claim 1, wherein the plane information includes, for each plane, one or more items of information selected from a normal vector, boundary information indicating the boundary, centroid information indicating the position of the center of gravity, area information indicating the size, and flatness.
[17] 17. The robot device according to claim 3, wherein the stair climbing control means performs control such that the ascending or descending operation is executed after the robot device has moved to a predetermined position facing the back edge of the surface on which it is currently moving.

[18] 18. The robot device according to claim 17, wherein, when the back edge of the surface on which the robot device is currently moving cannot be confirmed, the stair climbing control means performs control such that the ascending or descending operation is executed after the robot device has moved to a predetermined position facing the front edge of the next tread that is the target of the next ascending or descending operation.

[19] 19. The robot device according to claim 3, wherein the stair climbing control means performs control such that the next tread to be moved onto is detected and a series of operations of moving to a predetermined position facing that tread is performed to execute the ascending or descending operation.

[20] 20. The robot device according to claim 19, wherein, when the next tread or any subsequent tread to be moved onto cannot be detected from the current position, the stair climbing control means searches stair information acquired in the past for the next tread to be moved onto.

[21] 21. The robot device according to claim 3, wherein the stair climbing control means performs control such that, after moving to a predetermined position facing the back edge of the current movement surface, the next tread to be moved onto is detected, the robot device moves to a predetermined position facing the front edge of that tread, and the ascending or descending operation of moving onto that tread is executed.

[22] 22. The robot device according to claim 3, wherein the stair climbing control means controls the ascending or descending operation using parameters defining the position of the moving means with respect to the tread.

[23] 23. The robot device according to claim 22, wherein the parameters are determined based on the foot-raising height or foot-lowering height of the legs.

[24] 24. The robot device according to claim 22, further comprising parameter switching means for changing the values of the parameters between the operation of climbing the stairs and the operation of descending the stairs.
[25] 25. The robot device according to claim 1, wherein the plane detection means comprises line segment extraction means for extracting a line segment for each group of distance data points estimated to lie on the same plane in three-dimensional space, and plane region growing means for extracting, from the group of line segments extracted by the line segment extraction means, a plurality of line segments estimated to belong to the same plane and computing a plane from the plurality of line segments, and wherein the line segment extraction means extracts line segments adaptively according to the distribution of the distance data points.

[26] 26. The robot device according to claim 25, wherein the line segment extraction means extracts a group of distance data points estimated to lie on the same plane based on the distances between the distance data points, and re-estimates, based on the distribution of the distance data points within the group, whether the group of distance data points lies on the same plane.

[27] 27. The robot device according to claim 25, wherein the line segment extraction means extracts a line segment from a group of distance data points estimated to lie on the same plane, takes as a point of interest the distance data point in the group farthest from the line segment, judges, when that distance is not more than a predetermined threshold, whether the distribution of the distance data points in the group is biased, and, when it is biased, divides the group of distance data points at the point of interest.

[28] 28. The robot device according to claim 25, wherein the line segment extraction means extracts a first line segment from a group of distance data points estimated to lie on the same plane, takes as a point of interest the distance data point in the group farthest from the first line segment, extracts, when that distance is not more than a predetermined threshold, a second line segment from the group of distance data points, judges whether a predetermined number or more of distance data points lie consecutively on one side of the second line segment, and, when they do, divides the group of distance data points at the point of interest.

[29] 29. The robot device according to claim 25, wherein the plane region growing means selects one or more line segments estimated to belong to the same plane and computes a reference plane, searches the group of line segments for a growing line segment estimated to belong to the same plane as the reference plane, repeats the processing of updating the reference plane with the growing line segment and expanding the region of the reference plane, and outputs the plane whose updating has finished as an updated plane.

[30] 30. The robot device according to claim 29, further comprising plane recomputation means for, when the group of distance data points belonging to the updated plane contains a distance data point whose distance to the updated plane exceeds a predetermined threshold, computing the plane again from the group of distance data points excluding such points.

[31] 31. The robot device according to claim 25, wherein the plane region growing means estimates whether a line segment belongs to the same plane as the reference plane based on the error between the plane defined by the line segment and the reference plane.
[32] 32. An operation control method for a robot device movable by moving means, comprising:
a plane detection step of detecting one or more planes contained in an environment from three-dimensional distance data and outputting them as plane information;
a stair recognition step of recognizing, from the plane information, a staircase having planes on which movement is possible, and outputting stair information including tread information and riser (kick-up) information relating to the treads of the staircase; and
a stair climbing control step of judging, based on the stair information, whether the staircase can be ascended or descended and, when it is judged that the ascending or descending operation is possible, autonomously positioning the robot device with respect to the tread and controlling the stair ascending or descending operation.

[33] 33. The operation control method for a robot device according to claim 32, comprising a distance measurement step of acquiring the three-dimensional distance data.

[34] 34. The operation control method for a robot device according to claim 32, wherein the moving means is legs of the machine body.

[35] 35. The operation control method for a robot device according to claim 32, wherein the stair recognition step comprises:
a stair detection step of detecting, from the given plane information, a staircase having planes on which movement is possible and outputting pre-integration stair information; and
a stair integration step of outputting, as the stair information, integrated stair information obtained by statistically processing a plurality of temporally different items of pre-integration stair information output in the stair detection step.
[36] 36. The operation control method for a robot device according to claim 35, wherein, in the stair detection step, the size and spatial position of each tread are recognized based on the plane information and the resulting tread information is output as the pre-integration stair information, and, in the stair integration step, when a group of two or more treads that have an overlapping area larger than a predetermined threshold and whose relative height difference is not more than a predetermined threshold is detected from temporally successive items of tread information, the group is integrated into a single tread containing all of them.

[37] 37. The operation control method for a robot device according to claim 32, wherein, in the stair recognition step, the size and spatial position of each tread are recognized based on the plane information and used as the tread information.

[38] 38. The operation control method for a robot device according to claim 37, wherein the tread information includes at least information indicating a front edge indicating the near-side boundary of the tread with respect to the direction of movement and a back edge indicating the far-side boundary.

[39] 39. The operation control method for a robot device according to claim 32, wherein, in the stair recognition step, the boundary of a plane is extracted based on the plane information to compute a polygon, and the tread information is computed based on the polygon.

[40] 40. The operation control method for a robot device according to claim 34, wherein, in the stair climbing control step, control is performed such that the ascending or descending operation is executed after moving to a predetermined position facing the back edge of the surface on which the robot device is currently moving.

[41] 41. The operation control method for a robot device according to claim 40, wherein, in the stair climbing control step, when the back edge of the surface on which the robot device is currently moving cannot be confirmed, control is performed such that the ascending or descending operation is executed after moving to a predetermined position facing the front edge of the next tread that is the target of the next ascending or descending operation.
[42] 42. The operation control method for a robot device according to claim 34, wherein, in the stair climbing control step, control is performed such that the next tread to be moved onto is detected and a series of operations of moving to a predetermined position facing that tread is performed to execute the ascending or descending operation.

[43] 43. The operation control method for a robot device according to claim 42, wherein, in the stair climbing control step, when the next tread or any subsequent tread to be moved onto cannot be detected from the current position, stair information acquired in the past is searched for the next tread to be moved onto.

[44] 44. The operation control method for a robot device according to claim 34, wherein, in the stair climbing control step, control is performed such that, after moving to a predetermined position facing the back edge of the current movement surface, the next tread to be moved onto is detected, the robot device moves to a predetermined position facing the front edge of that tread, and the ascending or descending operation of moving onto that tread is executed.

[45] 45. The operation control method for a robot device according to claim 34, wherein, in the stair climbing control step, the ascending or descending operation is controlled using parameters for defining the position of the moving means with respect to the tread.

[46] 46. The operation control method for a robot device according to claim 45, comprising a parameter switching step of changing the values of the parameters between the operation of climbing the stairs and the operation of descending the stairs.
[47] 47. A moving device movable by moving means, comprising:
plane detection means for detecting one or more planes contained in an environment from three-dimensional distance data and outputting them as plane information;
stair recognition means for recognizing, from the plane information, a staircase having planes on which movement is possible, and outputting stair information including tread information and riser (kick-up) information relating to the treads of the staircase; and
stair climbing control means for judging, based on the stair information, whether the staircase can be ascended or descended and, when it is judged that the ascending or descending operation is possible, autonomously positioning the moving device with respect to the tread and controlling the stair ascending or descending operation.
PCT/JP2005/004838 2004-03-17 2005-03-17 Robot device, behavior control method for the robot device, and moving device WO2005087452A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2006511065A JP4618247B2 (en) 2004-03-17 2005-03-17 Robot apparatus and operation control method thereof

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2004-077214 2004-03-17
JP2004077214 2004-03-17

Publications (2)

Publication Number Publication Date
WO2005087452A1 true WO2005087452A1 (en) 2005-09-22
WO2005087452A9 WO2005087452A9 (en) 2008-03-13

Family

ID=34975410

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2005/004838 WO2005087452A1 (en) 2004-03-17 2005-03-17 Robot device, behavior control method for the robot device, and moving device

Country Status (2)

Country Link
JP (1) JP4618247B2 (en)
WO (1) WO2005087452A1 (en)

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008246609A (en) * 2007-03-29 2008-10-16 Honda Motor Co Ltd Legged mobile robot
JP2011093024A (en) * 2009-10-28 2011-05-12 Honda Motor Co Ltd Control device for legged mobile robot
EP2372652A1 (en) 2010-03-08 2011-10-05 Optex Co. Ltd. Method for estimating a plane in a range image and range image camera
US8289321B2 (en) 2004-03-17 2012-10-16 Sony Corporation Method and apparatus for detecting plane, and robot apparatus having apparatus for detecting plane
JP2013109750A (en) * 2011-11-23 2013-06-06 Samsung Electronics Co Ltd Method of recognizing stairs in three dimensional data image
CN103879471A (en) * 2014-04-05 2014-06-25 凌昕 Mountain climbing vehicle
KR101591471B1 (en) 2008-11-03 2016-02-04 삼성전자주식회사 Apparatus and method for extracting feature information of an object, and apparatus and method for generating feature map using the apparatus and method
US9441319B2 (en) 2014-02-26 2016-09-13 Brother Kogyo Kabushiki Kaisha Embroidery data generating device and non-transitory computer-readable medium storing embroidery data generating program
JP2016212824A (en) * 2015-05-06 2016-12-15 高麗大学校 産学協力団 Method for extracting outer space feature information (method for extracting outer static structure of space from geometric data of space)
JP2017522195A (en) * 2014-07-23 2017-08-10 グーグル インコーポレイテッド Predictable adjustable hydraulic rail
US10434649B2 (en) 2017-02-21 2019-10-08 Fanuc Corporation Workpiece pick up system
CN111127497A (en) * 2019-12-11 2020-05-08 深圳市优必选科技股份有限公司 A robot and its stair climbing control method and device
JP2020075340A (en) * 2018-11-08 2020-05-21 株式会社東芝 Operating system, controller, and program
CN112597857A (en) * 2020-12-16 2021-04-02 武汉科技大学 Indoor robot stair climbing pose rapid estimation method based on kinect
CN112699734A (en) * 2020-12-11 2021-04-23 深圳市银星智能科技股份有限公司 Threshold detection method, mobile robot and storage medium
US11123869B2 (en) 2019-04-12 2021-09-21 Boston Dynamics, Inc. Robotically negotiating stairs
US20210331754A1 (en) * 2020-04-22 2021-10-28 Boston Dynamics, Inc. Stair Tracking for Modeled and Perceived Terrain
WO2021216235A1 (en) * 2020-04-20 2021-10-28 Boston Dynamics, Inc. Identifying stairs from footfalls
WO2021216264A1 (en) * 2020-04-22 2021-10-28 Boston Dynamics, Inc. Perception and fitting for a stair tracker
CN114766975A (en) * 2022-04-13 2022-07-22 江苏商贸职业学院 Floor sweeping robot special for stair cleaning
US11396101B2 (en) 2018-11-08 2022-07-26 Kabushiki Kaisha Toshiba Operating system, control device, and computer program product
CN115256470A (en) * 2022-08-09 2022-11-01 七腾机器人有限公司 A depth vision-based stair measurement method, system and quadruped robot

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5623362B2 (en) * 2011-09-28 2014-11-12 本田技研工業株式会社 Step recognition device
NL2008490C2 (en) 2012-03-15 2013-09-18 Ooms Otto Bv METHOD, DEVICE AND COMPUTER PROGRAM FOR EXTRACTING INFORMATION ON ONE OR MULTIPLE SPATIAL OBJECTS.
JP7280700B2 (en) 2019-01-21 2023-05-24 株式会社東芝 Holding device, control system and inspection system
JP7462466B2 (en) 2020-04-20 2024-04-05 株式会社東芝 Holding device, inspection system, movement method, and inspection method

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH05223549A (en) * 1992-02-10 1993-08-31 Honda Motor Co Ltd Recognizing methods such as stairs of moving objects

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH05223549A (en) * 1992-02-10 1993-08-31 Honda Motor Co Ltd Recognizing methods such as stairs of moving objects

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
OKADA S. ET AL.: "Jitsujikan Plane Segment Finder no Kenkyu.", DAI 6 KAI ROBOTICS SYMPOSIA YOKOSHU., 18 March 2001 (2001-03-18), pages 51 - 56, XP002994251 *

Cited By (40)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8289321B2 (en) 2004-03-17 2012-10-16 Sony Corporation Method and apparatus for detecting plane, and robot apparatus having apparatus for detecting plane
JP2008246609A (en) * 2007-03-29 2008-10-16 Honda Motor Co Ltd Legged mobile robot
KR101591471B1 (en) 2008-11-03 2016-02-04 삼성전자주식회사 Apparatus and method for extracting feature information of an object, and apparatus and method for generating feature map using the apparatus and method
JP2011093024A (en) * 2009-10-28 2011-05-12 Honda Motor Co Ltd Control device for legged mobile robot
EP2372652A1 (en) 2010-03-08 2011-10-05 Optex Co. Ltd. Method for estimating a plane in a range image and range image camera
US8599278B2 (en) 2010-03-08 2013-12-03 Optex Co., Ltd. Method for estimating a plane in a range image and range image camera
JP2013109750A (en) * 2011-11-23 2013-06-06 Samsung Electronics Co Ltd Method of recognizing stairs in three dimensional data image
KR101820299B1 (en) 2011-11-23 2018-03-02 삼성전자주식회사 Stairs recognition method for three dimension data image
US9552640B2 (en) 2011-11-23 2017-01-24 Samsung Electronics Co., Ltd. Method of recognizing stairs in three dimensional data image
US9441319B2 (en) 2014-02-26 2016-09-13 Brother Kogyo Kabushiki Kaisha Embroidery data generating device and non-transitory computer-readable medium storing embroidery data generating program
CN103879471B (en) * 2014-04-05 2016-04-13 凌昕 Mountain climbing vehicle
CN103879471A (en) * 2014-04-05 2014-06-25 凌昕 Mountain climbing vehicle
US11077898B2 (en) 2014-07-23 2021-08-03 Boston Dynamics, Inc. Predictively adjustable hydraulic pressure rails
JP2017522195A (en) * 2014-07-23 2017-08-10 グーグル インコーポレイテッド Predictable adjustable hydraulic rail
JP2016212824A (en) * 2015-05-06 2016-12-15 高麗大学校 産学協力団 Method for extracting outer space feature information (method for extracting outer static structure of space from geometric data of space)
JP2018088275A (en) * 2015-05-06 2018-06-07 高麗大学校 産学協力団 Method of extracting feature information on outline space (method for extracting outter static structure of space from geometric data of space)
US10434649B2 (en) 2017-02-21 2019-10-08 Fanuc Corporation Workpiece pick up system
JP7034971B2 (en) 2018-11-08 2022-03-14 Kabushiki Kaisha Toshiba Actuation systems, controls, and programs
JP2020075340A (en) * 2018-11-08 2020-05-21 株式会社東芝 Operating system, controller, and program
US11396101B2 (en) 2018-11-08 2022-07-26 Kabushiki Kaisha Toshiba Operating system, control device, and computer program product
US12151380B2 (en) 2019-04-12 2024-11-26 Boston Dynamics, Inc. Robotically negotiating stairs
US11123869B2 (en) 2019-04-12 2021-09-21 Boston Dynamics, Inc. Robotically negotiating stairs
US11660752B2 (en) 2019-04-12 2023-05-30 Boston Dynamics, Inc. Perception and fitting for a stair tracker
US11548151B2 (en) 2019-04-12 2023-01-10 Boston Dynamics, Inc. Robotically negotiating stairs
CN111127497A (en) * 2019-12-11 2020-05-08 深圳市优必选科技股份有限公司 A robot and its stair climbing control method and device
US11644841B2 (en) 2019-12-11 2023-05-09 Ubtech Robotics Corp Ltd Robot climbing control method and robot
WO2021216235A1 (en) * 2020-04-20 2021-10-28 Boston Dynamics, Inc. Identifying stairs from footfalls
US12094195B2 (en) 2020-04-20 2024-09-17 Boston Dynamics, Inc. Identifying stairs from footfalls
US20230143315A1 (en) * 2020-04-22 2023-05-11 Boston Dynamics, Inc. Perception and fitting for a stair tracker
US11599128B2 (en) 2020-04-22 2023-03-07 Boston Dynamics, Inc. Perception and fitting for a stair tracker
WO2021216264A1 (en) * 2020-04-22 2021-10-28 Boston Dynamics, Inc. Perception and fitting for a stair tracker
US20210331754A1 (en) * 2020-04-22 2021-10-28 Boston Dynamics, Inc. Stair Tracking for Modeled and Perceived Terrain
US12077229B2 (en) * 2020-04-22 2024-09-03 Boston Dynamics, Inc. Stair tracking for modeled and perceived terrain
CN112699734B (en) * 2020-12-11 2024-04-16 Shenzhen Silver Star Intelligent Group Co., Ltd. Threshold detection method, mobile robot and storage medium
CN112699734A (en) * 2020-12-11 2021-04-23 深圳市银星智能科技股份有限公司 Threshold detection method, mobile robot and storage medium
CN112597857B (en) * 2020-12-16 2022-06-14 Wuhan University of Science and Technology Kinect-based rapid pose estimation method for indoor robot stair climbing
CN112597857A (en) * 2020-12-16 2021-04-02 武汉科技大学 Indoor robot stair climbing pose rapid estimation method based on kinect
CN114766975A (en) * 2022-04-13 2022-07-22 江苏商贸职业学院 Floor sweeping robot special for stair cleaning
CN114766975B (en) * 2022-04-13 2023-06-02 Jiangsu Vocational College of Business Floor-sweeping robot dedicated to stair cleaning
CN115256470A (en) * 2022-08-09 2022-11-01 Qiteng Robot Co., Ltd. Depth-vision-based stair measurement method and system, and quadruped robot

Also Published As

Publication number Publication date
JPWO2005087452A1 (en) 2008-01-24
JP4618247B2 (en) 2011-01-26
WO2005087452A9 (en) 2008-03-13

Similar Documents

Publication Publication Date Title
JP4618247B2 (en) Robot apparatus and operation control method thereof
US8289321B2 (en) Method and apparatus for detecting plane, and robot apparatus having apparatus for detecting plane
JP4479372B2 (en) Environmental map creation method, environmental map creation device, and mobile robot device
Lee et al. RGB-D camera based wearable navigation system for the visually impaired
Pérez-Yus et al. Detection and modelling of staircases using a wearable depth sensor
CN115702405A (en) Stair tracking for modeling and terrain awareness
KR101121763B1 (en) Apparatus and method for recognizing environment
JP6617830B2 (en) Skeleton estimation device, skeleton estimation method, and skeleton estimation program
JP2022543997A (en) Constrained mobility mapping
CN115702445A (en) Sensing and adaptation for stair trackers
EP3680618A1 (en) Method and system for tracking a mobile device
JP4100239B2 (en) Obstacle detection device and autonomous mobile robot using the same device, obstacle detection method, and obstacle detection program
CN115667061A (en) Identifying steps from footsteps
JP2006054681A (en) Moving object periphery monitoring device
CN114683290B (en) Method and device for optimizing pose of foot robot and storage medium
JP2007041656A (en) Moving body control method, and moving body
JP2003271975A (en) Planar extraction method, its apparatus, its program, its recording medium, and robot apparatus equipped with plane extraction apparatus
KR102067350B1 (en) Walk assistance support method and walk assistance support system based on remote control service for assisting walking of smart glasses
Pradeep et al. Piecewise planar modeling for step detection using stereo vision
CN109164802A (en) Robot maze traveling method and device, and robot
Schwarze et al. Stair detection and tracking from egocentric stereo vision
Abro et al. Indoor Smart Home Action Recognition over Multi-videos Surveillance System
KR102061984B1 (en) Control system and control method for walking assistance for smart glasses
Struebig et al. Stair and ramp recognition for powered lower limb exoskeletons
CN117213513B (en) Pedestrian navigation system and path planning method based on environmental perception and human kinematics

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SM SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
WWE Wipo information: entry into national phase

Ref document number: 2006511065

Country of ref document: JP

NENP Non-entry into the national phase

Ref country code: DE

WWW Wipo information: withdrawn in national office

Country of ref document: DE

122 Ep: pct application non-entry in european phase