WO2005088244A1 - Plane detection device, plane detection method, and robot apparatus equipped with plane detection device - Google Patents
Plane detection device, plane detection method, and robot apparatus equipped with plane detection device
- Publication number
- WO2005088244A1 (PCT/JP2005/004839)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- plane
- line segment
- distance data
- data point
- distance
- Prior art date
Links
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B62—LAND VEHICLES FOR TRAVELLING OTHERWISE THAN ON RAILS
- B62D—MOTOR VEHICLES; TRAILERS
- B62D57/00—Vehicles characterised by having other propulsion or other ground- engaging means than wheels or endless track, alone or in addition to wheels or endless track
- B62D57/02—Vehicles characterised by having other propulsion or other ground- engaging means than wheels or endless track, alone or in addition to wheels or endless track with ground-engaging propulsion means, e.g. walking members
- B62D57/024—Vehicles characterised by having other propulsion or other ground- engaging means than wheels or endless track, alone or in addition to wheels or endless track with ground-engaging propulsion means, e.g. walking members specially adapted for moving on inclined or vertical surfaces
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/64—Three-dimensional objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30248—Vehicle exterior or interior
- G06T2207/30252—Vehicle exterior; Vicinity of vehicle
Definitions
- Plane detection device, plane detection method, and robot apparatus equipped with plane detection device
- the present invention relates to a plane detection device, a plane detection method, and a robot apparatus equipped with a plane detection device for detecting planes from three-dimensional distance data, and more particularly to detecting planes by a line segment expansion method (scan line grouping).
- the detected plane can be used, for example, for obstacle avoidance of a mobile robot apparatus or for stair-climbing operations.
- a method of detecting planes from distance information consists of the following procedure: 1. acquire three-dimensional distance information
- an image of a staircase placed on the floor, shown in FIG. 1A, is divided into four plane areas A, B, C, and D as shown in FIG. 1B.
- area A indicates the floor
- areas B, C, and D indicate the steps.
- under-segmentation refers to the case where, despite the existence of multiple planes, they are recognized as a single plane due to the influence of noise or the like; conversely, over-segmentation refers to the case where what is actually a single plane is recognized as multiple different planes due to the influence of noise.
- the distance image acquired by the cameras 401R/L includes a plurality of treads, side surfaces, a floor surface, and the like.
- the distance image includes a plurality of planes.
- viewed in the XZ plane, the image includes a plurality of planes such as a tread 402 and a side surface 403.
- these planes cannot be distinguished due to under-segmentation.
- a plane detector detects planes from measurement data that is affected by noise exceeding the required plane detection accuracy. When designing for such data, the threshold for separating data into multiple planes must therefore be loosened, so the under-segmentation problem is likely to occur. Conversely, if the threshold is lowered for measurement data strongly affected by noise, what is actually one plane is separated into a plurality of planes, and over-segmentation occurs.
- FIGS. 3A to 3D are diagrams for explaining a method of extracting a plane by the Hough transform.
- FIG. 3A is a diagram showing a staircase
- FIG. 3B shows three-dimensional distance data obtained from the staircase shown in FIG. 3A
- FIG. 3C shows a peak P obtained by applying the Hough transform to the distance data in FIG. 3B
- FIG. 3D is a diagram comparing the plane indicated by the peak shown in FIG. 3C with the actual staircase.
- the three-dimensional data is as shown in FIG. 3B.
- a histogram is generated by repeatedly selecting three points of this data at random, obtaining the plane they define, and voting for it in the plane parameter space; the dominant plane can then be detected as a peak P, as shown in FIG. 3C.
- because the Hough transform estimates planes from the data statistically, the result of under-segmentation becomes the statistically most dominant value. That is, as shown in FIG. 3D, the detected plane 411 is actually obtained as a plane that averages all of the planes 412, 413, and 414.
- the Hough transform can estimate and detect a dominant plane included in the visual field, but cannot accurately detect a plurality of planes when they exist.
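The randomized voting just described can be sketched as follows. This is an illustrative reconstruction, not the patent's implementation: the plane parameterization (spherical angles of the unit normal plus offset d) and the bin sizes `ang_bins` and `d_res` are assumptions.

```python
import numpy as np
from collections import Counter

def randomized_hough_planes(points, iters=2000, ang_bins=30, d_res=0.02, rng=None):
    """Vote for planes defined by random 3-point samples; the most frequent
    quantized (theta, phi, d) cell in the parameter space is the peak P."""
    rng = np.random.default_rng(rng)
    votes = Counter()
    n = len(points)
    for _ in range(iters):
        i, j, k = rng.choice(n, size=3, replace=False)
        p, q, r = points[i], points[j], points[k]
        normal = np.cross(q - p, r - p)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:              # degenerate (collinear) sample, skip
            continue
        normal /= norm
        if normal[2] < 0:            # fix sign so antipodal normals share a cell
            normal = -normal
        d = float(normal @ p)        # plane equation: normal . x = d
        theta = np.arccos(np.clip(normal[2], -1.0, 1.0))
        phi = np.arctan2(normal[1], normal[0])
        cell = (int(theta / np.pi * ang_bins),
                int((phi + np.pi) / (2 * np.pi) * ang_bins),
                round(d / d_res))
        votes[cell] += 1
    return votes.most_common(1)[0]   # (peak cell, vote count)
```

On a scene containing several steps, the peak of such a histogram tends toward a single averaged plane, which is precisely the under-segmentation discussed above.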
- Jiang et al. disclose a plane extraction method using a line segment extension method (scan line grouping).
- in plane detection by the line segment extension method, three-dimensional distance data is first obtained from a captured image, and the following processing is performed for each row or column of the data (image rows or columns): a line segment is generated from each group of data points belonging to the same plane.
- from the generated line segment group, three adjacent line segments constituting the same plane are extracted to obtain a reference plane.
- the plane is detected by enlarging the area of the reference plane with adjacent line segments belonging to the same plane and updating the reference plane accordingly.
- FIG. 4 is a flowchart showing a plane detection process by the line segment extension method.
- a distance image is input (step S41), and for each row- or column-direction data string constituting the distance image, a line segment is generated from the group of data points estimated to be on the same plane (step S42).
- a region serving as a plane seed (hereinafter referred to as a "seed region") is searched for in the generated line segment group and selected (steps S43 and S44). This selection requires that three vertically adjacent line segments lie on the same plane. The plane to which the three selected line segments belong is then obtained by averaging the three line segments.
- a search is then performed for line segments on the same plane as the seed region. Whether a line segment is on the same plane is determined by comparing spatial distances. If a line segment is determined to be on the same plane, it is added to the seed region's area (area expansion processing), and the original plane is updated to include the added line segment. By repeating these processes, the area is expanded and the plane is updated (step S45). The processing of steps S43-S45 is repeated until no seed region remains. Finally, regions that form the same plane are connected from the plurality of obtained region groups (step S46), and the process ends.
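The same-plane test of step S45 can be sketched with a least-squares plane fit and a point-to-plane distance check. The helper names and the tolerance `tol` are illustrative assumptions, not the patent's actual criteria:

```python
import numpy as np

def fit_plane(points):
    """Total-least-squares plane through points: returns (unit normal n, d)
    with the plane written as n . x = d."""
    pts = np.asarray(points, float)
    centroid = pts.mean(axis=0)
    n = np.linalg.svd(pts - centroid)[2][-1]   # direction of least variance
    return n, float(n @ centroid)

def on_same_plane(segment_pts, plane, tol=0.01):
    """Step-S45-style membership test: the segment joins the region only if
    every one of its points lies within `tol` of the reference plane."""
    n, d = plane
    dists = np.abs(np.asarray(segment_pts, float) @ n - d)
    return bool(np.all(dists < tol))
```

A segment that passes the test is merged into the region, after which the reference plane is refit over the enlarged point set.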
- FIG. 5 is a diagram illustrating a process of extracting a line segment
- FIGS. 5A to 5C are diagrams illustrating the process steps in order.
- a line segment (chord) connecting both ends 430a and 430b of a given data point group 430 is generated.
- the data point having the largest distance from the obtained line segment 431 is then searched for. If the distance d between the found data point 430c and the line segment 431 exceeds a certain threshold, the line segment 431 is divided.
- the line segment 431 is divided into a line segment 431a connecting the leftmost data point 430a and the division point 430c, and a line segment 431b connecting the division point 430c and the rightmost data point 430b. By repeating this process until the distances between all points and their line segments become equal to or smaller than the threshold, a plurality of line segments that fit the given data can be detected.
- in this example, data points 430c and 430d are finally selected as the two division points, and line segment 431 is divided into three line segments 431a, 431c, and 431d.
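The splitting procedure of FIG. 5 is essentially iterative end-point fitting. A minimal 2-D sketch follows; the function names and recursion structure are illustrative, not taken from the patent:

```python
import numpy as np

def point_line_dist(p, a, b):
    """Perpendicular distance from 2-D point p to the chord a-b."""
    ab, ap = b - a, p - a
    return abs(ab[0] * ap[1] - ab[1] * ap[0]) / np.hypot(ab[0], ab[1])

def split_segments(pts, thresh):
    """Recursively split the chord at the farthest data point until every
    point is within `thresh` of its segment (the FIG. 5 procedure)."""
    pts = np.asarray(pts, float)
    a, b = pts[0], pts[-1]
    dists = [point_line_dist(p, a, b) for p in pts[1:-1]]
    if not dists or max(dists) <= thresh:
        return [(tuple(a), tuple(b))]          # chord already fits all points
    k = 1 + int(np.argmax(dists))              # farthest point = division point
    return split_segments(pts[:k + 1], thresh) + split_segments(pts[k:], thresh)
```

The division point is shared by both resulting segments, matching the description of 430c belonging to both 431a and 431b.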
- FIG. 25 is a diagram for explaining the area expansion processing shown in step S45. Regions can be grown by sequentially integrating the line segments obtained by the above-described line segment extraction process into seed regions. For example, as shown in FIG. 25, when the image 30 contains a plurality of steps 31 consisting of planes, suppose that three line segments 32a to 32c indicated by thick lines are selected; the region consisting of these three line segments 32a-32c is the seed region. First, one plane (reference plane) P is obtained from these three line segments 32a-32c. Next, in the data strings 33 and 34 adjacent to the outermost line segments 32a and 32c of the seed region, line segments lying on the same plane as the plane P are selected.
- FIG. 6 is a diagram illustrating the difference in the results of the line segment extraction processing when two threshold values are set.
- FIG. 6A shows the case of a measured data point group 450 with little noise, and FIG. 6B the case of a measured data point group 460 with much noise. In each case, the results of applying a large value (large threshold) and a small value (small threshold) as the threshold for the line segment division described above are shown.
- for nearby measurement points, data can be acquired with high measurement accuracy and little noise, whereas distant measurement points yield noisy data because of low measurement accuracy. It is therefore desirable to determine the threshold adaptively according to distance, but it is extremely difficult to determine the threshold uniquely because measurement accuracy also differs with the environment.
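A distance-dependent threshold of the kind suggested here can be sketched as follows. The quadratic growth with depth mirrors how stereo triangulation error scales, and the coefficients `base` and `k` are purely illustrative assumptions:

```python
def adaptive_threshold(z, base=0.005, k=0.01):
    """Split threshold that relaxes with depth z: near points are trusted
    tightly, far points loosely. The quadratic term reflects that stereo
    depth error grows roughly with the square of the distance."""
    return base + k * z * z
```

Using such a threshold in the line segment division keeps near segments from being under-split without shattering far, noisy ones.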
- plane detection by the randomized Hough transform and the like is suitable for detecting a dominant plane.
- for data containing multiple planes, such as stairs, however, the under-segmentation problem makes it difficult to detect the multiple planes.
- the present invention has been proposed in view of such conventional circumstances, and an object of the present invention is to provide a plane detection device that is robust against measurement noise contained in distance data and can simultaneously and accurately detect a plurality of planes, a plane detection method, and a robot apparatus equipped with such a plane detection device.
- a plane detecting apparatus according to the present invention is a plane detecting apparatus for detecting planes from three-dimensional distance data, comprising: line segment extracting means for extracting a line segment for each group of distance data points estimated to be on the same plane in a three-dimensional space; and plane area extending means for extracting, from the line segment group extracted by the line segment extracting means, a plurality of line segments presumed to belong to the same plane and calculating a plane from the plurality of line segments.
- the line segment extracting means adaptively extracts line segments according to the distribution of distance data points.
- the line segment extracting means extracts line segments by exploiting the fact that three-dimensional distance data lying on the same plane are arranged on the same straight line. Since the distribution of data points differs with measurement conditions, adaptively extracting line segments according to this distribution (adaptive line fitting) makes it possible to extract line segments accurately and robustly against noise.
- the line segment extracting means extracts a group of distance data points estimated to be on the same plane based on the distances between the distance data points, and re-estimates, based on the distribution of the points within the group, whether the group is in fact on the same plane. By first extracting the point group based on distances in three-dimensional space and then estimating again from the distribution whether the points lie on the same plane, line segments can be extracted accurately.
- the line segment extracting means extracts a line segment from the distance data point group estimated to be on the same plane, and if the distance between the line segment and the most distant data point in the group is equal to or less than a predetermined threshold, determines whether the distribution of the distance data points in the group is biased.
- if the distribution is biased, the extracted distance data point group is judged not to lie on a single plane, and the group can be divided accordingly.
- the line segment extracting means extracts a first line segment from the group of distance data points estimated to be on the same plane, and takes as a point of interest the distance data point in the group farthest from the first line segment. If that distance is equal to or less than a predetermined threshold, a second line segment is extracted from the point group, and it is determined whether or not a predetermined number or more of distance data points lie consecutively on one side of the second line segment; if so, the point group can be divided at the point of interest.
- for example, a line segment connecting the end points of the extracted data point group is taken as the first line segment, and the second line segment is generated by, for example, the least squares method. If a plurality of data points continue on one side of the second line segment, the data point group has, for example, a zigzag shape, and can be divided at the point of interest.
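A minimal sketch of this Zig-Zag-Shape determination follows: fit the "second line segment" by least squares and look for a run of consecutive points on one side of it. The run length `run_len` and the function name are assumptions for illustration:

```python
import numpy as np

def is_zigzag(pts, run_len=4):
    """Zig-Zag-Shape test: fit the 'second line segment' by least squares,
    then look for `run_len` or more consecutive points on one side of it.
    Uniform noise alternates sides; a zigzag of two true lines does not."""
    pts = np.asarray(pts, float)
    m, c = np.polyfit(pts[:, 0], pts[:, 1], 1)     # least-squares line y = m x + c
    side = np.sign(pts[:, 1] - (m * pts[:, 0] + c))
    run = best = 1
    for prev, cur in zip(side, side[1:]):
        run = run + 1 if (cur == prev and cur != 0) else 1
        best = max(best, run)
    return best >= run_len
```

This is what distinguishes FIG. 21A (zigzag, divide the group) from FIG. 21B (uniform scatter from noise, keep the group).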
- the plane area extending means can select one or more line segments estimated to belong to the same plane, calculate a reference plane, search the line segment group for a line segment estimated to belong to the same plane as the reference plane (an extension line segment), update the reference plane with the extension line segment, and repeat this process to extend the area of the reference plane, outputting the result as an updated plane.
- in this way, plane area expansion processing and plane update processing can be performed using line segments that belong to the same plane.
- the apparatus can further have plane recalculating means that, if any distance data point belonging to the updated plane has a distance from the updated plane exceeding a predetermined threshold, removes that point from the distance data point group and recalculates the plane. Since the updated plane is obtained as an average plane of all line segments belonging to it, this yields a detection result in which the influence of noise and the like is further reduced.
- the plane area extending means can estimate whether or not a line segment belongs to the same plane as the reference plane based on the error between the plane determined by the line segment and the reference plane. Based on the root mean square error and the like, it is possible to discriminate whether a deviation is merely due to noise or indicates a different plane, and thus to detect planes more accurately.
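Putting the pieces together, the area expansion with reference-plane update and an rms-error acceptance test might look like this sketch (all names and the tolerance `rms_tol` are illustrative; the patent's actual criteria may differ):

```python
import numpy as np

def fit_plane(points):
    """Total-least-squares plane (unit normal n, offset d) with n . x = d."""
    pts = np.asarray(points, float)
    c = pts.mean(axis=0)
    n = np.linalg.svd(pts - c)[2][-1]
    return n, float(n @ c)

def rms_error(pts, plane):
    """Root mean square point-to-plane distance."""
    n, d = plane
    return float(np.sqrt(np.mean((np.asarray(pts, float) @ n - d) ** 2)))

def grow_region(seed_segments, candidates, rms_tol=0.01):
    """Area expansion: start from the seed's reference plane and fold in each
    candidate segment whose points keep the rms plane-fit error within
    `rms_tol`; the reference plane is refit (updated) after every merge."""
    region = [tuple(p) for seg in seed_segments for p in seg]
    plane = fit_plane(region)
    for seg in candidates:
        trial = region + [tuple(p) for p in seg]
        if rms_error(trial, fit_plane(trial)) <= rms_tol:
            region = trial
            plane = fit_plane(region)        # update the reference plane
    return plane, region
```

Because acceptance is judged by the fit error of the merged region rather than by endpoint distance alone, a segment from a neighboring step (FIG. 27B) is rejected even when its endpoints happen to lie close to the reference plane.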
- a plane detection method according to the present invention is a plane detection method for detecting planes from three-dimensional distance data, comprising: a line segment extraction step of extracting a line segment for each group of distance data points estimated to be on the same plane in a three-dimensional space; and a plane area extension step of extracting, from the line segment group extracted in the line segment extraction step, a plurality of line segments presumed to belong to the same plane and calculating a plane from the plurality of line segments, wherein in the line segment extraction step a line segment is adaptively extracted according to the distribution of distance data points.
- a robot apparatus according to the present invention is a robot apparatus that behaves autonomously, comprising: distance measuring means for acquiring three-dimensional distance data; a plane detecting device that detects planes from the three-dimensional distance data; and behavior control means for controlling behavior based on the plane detection result by the plane detecting device. The plane detecting device has line segment extracting means for extracting a line segment for each group of distance data points estimated to be on the same plane in a three-dimensional space, and plane area extending means for extracting, from the line segment group extracted by the line segment extracting means, a plurality of line segments presumed to belong to the same plane and calculating a plane from them, the line segment extracting means adaptively extracting line segments according to the distribution of distance data points.
- the apparatus can have pattern providing means, such as irradiation means for projecting a pattern onto the target object, together with the distance measuring means. If the target (stairs, floor, etc.) has no or insufficient pattern (texture), a distance image cannot otherwise be obtained properly, but with the provided pattern it can be obtained.
- by mounting the above-described plane detecting device, the robot apparatus can accurately detect planes even when the distance data acquired by the distance measuring means provided in the robot apparatus contains noise. It can thus detect stairs existing in its surrounding environment and move up and down them, or recognize a step on the floor and move on a floor with steps, increasing its range of action.
- FIG. 1A is a schematic diagram showing an image of a staircase
- FIG. 1B shows a result of detecting four plane regions A, B, C, and D from three-dimensional distance data obtained from FIG. 1A.
- FIG. 2 is a schematic diagram for explaining under-segmentation.
- FIGS. 3A to 3D are diagrams for explaining a method of extracting a plane by the Hough transform; FIG. 3A is a diagram showing a staircase, FIG. 3B is a diagram showing distance data obtained from the staircase shown in FIG. 3A, FIG. 3C is a diagram showing a histogram obtained by subjecting the distance data of FIG. 3B to the Hough transform, and FIG. 3D is a diagram showing a comparison between the plane indicated by the peak shown in FIG. 3C and the actual staircase.
- FIG. 4 is a flowchart showing plane detection processing by a line segment extension method.
- FIG. 5 is a diagram illustrating a conventional process of extracting a line segment, and FIGS. 5A to 5C are diagrams illustrating the process steps in order.
- FIGS. 6A and 6B show the difference in the results of the line segment extraction processing when two thresholds are set, for a measured data point group with little noise and for one with much noise, respectively.
- FIG. 7 is a perspective view showing an overview of a robot device according to an embodiment of the present invention.
- FIG. 8 is a diagram schematically showing a joint degree of freedom configuration provided in the robot device.
- FIG. 9 is a schematic diagram showing a control system configuration of the robot device.
- FIG. 10 is a functional block diagram showing the plane detection device according to the present embodiment.
- FIG. 11 is a schematic diagram showing a state in which the robot apparatus photographs the outside world while moving.
- FIG. 12 is a schematic view showing a staircase
- FIG. 12A is a diagram showing the staircase viewed from the front.
- FIG. 12B is a side view of the stairs
- FIG. 12C is an oblique view of the stairs.
- FIG. 13 is a schematic view showing another example of a staircase, where FIG. 13A is a view of the stairs viewed from the front, FIG. 13B is a view of the stairs viewed from the side, and FIG. 13C is a view of the stairs viewed diagonally.
- FIG. 14A is a schematic diagram showing an image of the stairs shown in FIG. 13 captured from the front by a stereo vision system, and FIGS. 14B to 14D are diagrams showing three-dimensional distance data obtained from the image shown in FIG. 14A.
- FIG. 15A is a schematic diagram showing an image of the stairs shown in FIG. 13 photographed from the side by a stereo vision system, and FIGS. 15B to 15D are diagrams showing three-dimensional distance data obtained from the image shown in FIG. 15A.
- FIG. 16A is a schematic view showing an image of the stairs shown in FIG. 13 taken diagonally from the front by a stereo vision system, and FIGS. 16B to 16D are diagrams showing three-dimensional distance data obtained from the image shown in FIG. 16A.
- FIG. 17 is a view for explaining a robot apparatus having a means for giving a texture.
- FIG. 18 is a diagram illustrating a plane detection method by the line segment extension method in the present embodiment.
- FIG. 19 is a flowchart showing plane detection processing by the line segment extension method.
- FIG. 20 is a flowchart showing details of processing in a line segment extracting unit according to the present embodiment.
- FIG. 21 is a diagram showing distributions of distance data points; FIG. 21A is a schematic diagram showing a case where the data are distributed in a zigzag with respect to a line segment, and FIG. 21B a case where, due to noise or the like, the data are distributed uniformly near the line segment.
- FIG. 22 is a flowchart showing a Zig-Zag-Shape determination method in the present embodiment.
- FIG. 23 is a diagram showing the Zig-Zag-Shape discrimination processing.
- FIG. 24 is a block diagram illustrating a processing unit that performs a Zig-Zag-Shape determination process.
- FIG. 25 is a schematic diagram for explaining an area expansion process in the present embodiment.
- FIG. 26 is a flowchart showing a procedure of a process of searching for a region type and a region expanding process in the region expanding unit in the present embodiment.
- FIG. 27 is a diagram showing an example in which the root mean square error rms of the plane equation differs even when the distance between the end point and the straight line is equal; FIG. 27A is a schematic diagram showing a case where the line segment deviates from the plane due to noise or the like, and FIG. 27B a case where there is another plane to which the line segment belongs.
- FIG. 28 is a diagram illustrating an area type selection process.
- FIG. 29 is a diagram illustrating an area extension process.
- FIG. 30A is a schematic diagram showing the floor surface as seen when the standing robot apparatus looks down at it, FIG. 30B is a diagram showing the extracted straight lines with the vertical axis representing x and the horizontal axis representing y, and FIG. 30C is a diagram showing the planar region obtained by the region extension processing from the straight line group shown in FIG. 30B.
- FIG. 31 is a diagram for explaining a difference between a result of the plane detecting method according to the present embodiment and a conventional plane detecting method when a step is placed on the floor surface.
- FIG. 31A is a schematic diagram showing the observed image
- FIG. 31B is a diagram showing the experimental conditions
- FIG. 31C is a diagram showing plane detection by the plane detection method according to the present embodiment.
- FIG. 31D is a diagram showing a result of plane detection by a conventional plane detection method.
- FIG. 32A is a schematic diagram showing a photographed image of the floor, and FIGS. 32B and 32C are diagrams showing line segments detected by the line segment detection of the present embodiment and by conventional line segment detection, from horizontal- and vertical-direction data point sequences of the three-dimensional distance data obtained by photographing the floor shown in FIG. 32A.
- FIG. 33A is a schematic diagram showing an image of a staircase, and FIGS. 33B to 33D are diagrams showing examples in which planes are detected from three-dimensional distance data obtained from FIG. 33A, viewed from the top, front, and side, respectively.
- FIG. 34A is a schematic diagram showing an image of another staircase, and FIGS. 34B to 34D are diagrams showing examples in which planes are detected from three-dimensional distance data obtained from FIG. 34A, viewed from the top, front, and side, respectively.
- the present invention is applied to a robot device equipped with a plane detecting device capable of simultaneously and accurately detecting a plurality of planes.
- the plane detection device uses distance information obtained by stereo vision or the like.
- the robot apparatus can accurately recognize the environment around itself and can move and act autonomously according to the recognition result.
- a bipedal walking type robot device will be described as an example of such a robot device.
- this robot apparatus is a practical robot that supports human activities in various situations of daily life, such as the living environment, and is an entertainment robot apparatus that can act according to its internal state (anger, sadness, joy, pleasure, etc.) and display basic actions performed by humans.
- a bipedal walking robot apparatus will be described as an example, but it is needless to say that the present invention can be applied not only to a bipedal walking robot apparatus but also to a robot apparatus movable by four legs or wheels.
- FIG. 7 is a perspective view showing an overview of the robot device according to the present embodiment.
- a head unit 203 is connected to a predetermined position of a trunk unit 202, and left and right arm units 204R/L and left and right leg units 205R/L are connected.
- R and L are suffixes indicating right and left, respectively. The same applies hereinafter.
- FIG. 8 schematically shows the configuration of the degrees of freedom of the joints included in the robot apparatus 201.
- the neck joint supporting the head unit 203 has three degrees of freedom: a neck joint yaw axis 101, a neck joint pitch axis 102, and a neck joint roll axis 103.
- each arm unit 204R/L constituting the upper limb includes a shoulder joint pitch axis 107, a shoulder joint roll axis 108, an upper arm yaw axis 109, an elbow joint pitch axis 110, a forearm yaw axis 111, a wrist joint pitch axis 112, a wrist joint roll axis 113, and a hand 114.
- the hand 114 is actually a multi-joint, multi-degree-of-freedom structure including a plurality of fingers. However, since the movement of the hand 114 contributes little to the posture control and walking control of the robot apparatus 201, it is assumed to have zero degrees of freedom in this specification for simplicity. Therefore, each arm has seven degrees of freedom.
- the trunk unit 202 has three degrees of freedom: a trunk pitch axis 104, a trunk roll axis 105, and a trunk yaw axis 106.
- each leg unit 205R/L constituting the lower limb comprises a hip joint yaw axis 115, a hip joint pitch axis 116, a hip joint roll axis 117, a knee joint pitch axis 118, an ankle joint pitch axis 119, an ankle joint roll axis 120, and a sole 121.
- the intersection of the hip joint pitch axis 116 and the hip joint roll axis 117 defines the hip joint position of the robot device 201.
- although the human foot is actually a structure including a multi-joint, multi-degree-of-freedom sole, in this specification the sole of the robot apparatus 201 is assumed to have zero degrees of freedom for simplicity. Thus, each leg has six degrees of freedom.
- in total, therefore, the robot apparatus 201 has 3 + 7 × 2 + 3 + 6 × 2 = 32 degrees of freedom, although a robot apparatus 201 for entertainment is not necessarily limited to 32 degrees of freedom. It goes without saying that the number of degrees of freedom, that is, the number of joints, can be appropriately increased or decreased according to design and production constraints and the required specifications.
- each degree of freedom of the robot apparatus 201 described above is actually implemented using an actuator. Because of demands such as eliminating extra bulges in the appearance to approximate the human body shape and controlling the posture of an unstable bipedal walking structure, the actuator is preferably small and lightweight.
- Such a robot device includes a control system that controls the operation of the entire robot device, for example, in the trunk unit 202 or the like.
- FIG. 9 is a schematic diagram illustrating the control system configuration of the robot apparatus 201. As shown in FIG. 9, the control system comprises a thought control module 200 that dynamically responds to user input and the like and performs emotion judgment and emotional expression, and a motion control module 300 that controls the whole-body cooperative motion of the robot apparatus 201, such as driving the actuators 350.
- the thought control module 200 is an independently driven information processing device which includes a CPU (Central Processing Unit) 211 that executes arithmetic processing related to emotion judgment and emotional expression, a RAM (Random Access Memory) 212, a ROM (Read Only Memory) 213, and an external storage device (hard disk drive, etc.) 214, and which can perform self-contained processing within the module.
- this thought control module 200 determines the current emotion or intention of the robot apparatus 201 according to external stimuli such as image data input from the image input device 251 and voice data input from the voice input device 252. That is, by recognizing the user's facial expression from the input image data and reflecting that information in the emotions and intentions of the robot apparatus 201, it can express an action according to the user's facial expression.
- The image input device 251 includes, for example, a plurality of CCD (Charge Coupled Device) cameras, and a distance image can be obtained from the images captured by these cameras.
- the audio input device 252 includes, for example, a plurality of microphones.
- The thought control module 200 issues a command to the motion control module 300 to execute a motion or action sequence based on its decision, that is, movement of the limbs.
- The motion control module 300 is an independently driven information processing device composed of a CPU 311 for controlling the whole-body cooperative motion of the robot apparatus 201, a RAM 312, a ROM 313, an external storage device (such as a hard disk drive) 314, and the like, and can perform self-contained processing within the module. The external storage device 314 can store, for example, walking patterns calculated offline, target ZMP trajectories, and other action plans.
- To the motion control module 300, various devices are connected via a bus interface (I/F) 310: the actuators 350 for realizing the degrees of freedom of the joints distributed over the whole body of the robot apparatus 201 shown in FIG. 8, a distance measurement sensor (not shown) for measuring the distance to an object, a posture sensor 351 for measuring the posture and inclination of the trunk unit 202, ground contact confirmation sensors 352 and 353 for detecting the leaving or landing of the left and right soles, load sensors provided on the soles 121, a power supply control device 354 that manages a power supply such as a battery, and so on.
- The posture sensor 351 is configured by, for example, a combination of an acceleration sensor and a gyro sensor, and the ground contact confirmation sensors 352 and 353 are configured by proximity sensors, micro switches, or the like.
- the thought control module 200 and the motion control module 300 are constructed on a common platform, and are interconnected via bus interfaces 210 and 310.
- The CPU 311 of the motion control module 300 controls the whole-body cooperative motion expressed by the actuators 350 so as to execute the action specified by the thought control module 200. That is, the CPU 311 retrieves an operation pattern corresponding to the action instructed by the thought control module 200 from the external storage device 314, or internally generates an operation pattern. Then, according to the specified operation pattern, the CPU 311 sets the foot motion, ZMP trajectory, trunk motion, upper limb motion, horizontal position and height of the waist, and the like, and transfers command values instructing motion according to these settings to each actuator 350.
- Further, the CPU 311 detects the posture and inclination of the trunk unit 202 of the robot apparatus 201 based on the output signal of the posture sensor 351, and detects whether each leg unit 205R/L is a swing leg or a stance leg based on the output signals of the ground contact confirmation sensors 352 and 353, so that the whole-body cooperative motion of the robot apparatus 201 can be adaptively controlled.
- Further, the CPU 311 controls the posture and operation of the robot apparatus 201 such that the ZMP position is always directed toward the center of the ZMP stable region.
- The motion control module 300 returns to the thought control module 200 the state of processing, that is, to what degree the action determined by the thought control module 200 has been executed. In this way, the robot apparatus 201 can judge its own state and the surrounding situation based on the control program, and can act autonomously.
- a stereo vision system is mounted on the head unit 203, and three-dimensional distance information of the outside world can be obtained.
- a description will be given of a plane detection device according to the present embodiment, which is preferably mounted on such a robot device and uses three-dimensional distance information based on stereo vision.
- Although the three-dimensional distance information here is obtained by stereo vision, it goes without saying that distance information from a laser range finder (laser distance meter) or the like may be used instead.
- The plane detecting apparatus according to the present embodiment can reliably detect a plurality of planes by the line segment extension method even when the visual field contains a plurality of planes, such as stairs, rather than a single dominant plane.
- In addition, in the line segment extraction performed when detecting a plane, line segments are fitted adaptively according to the distribution of the distance data points, so that a plane detection result robust against measurement noise can be obtained.
- FIG. 10 is a functional block diagram illustrating the plane detection device according to the present embodiment.
- As shown in FIG. 10, the plane detecting device 1 includes a stereo vision system (Stereo Vision System) 2 as distance data measuring means for acquiring three-dimensional distance data, and a plane detection unit 3 for detecting, by the line segment extension method, planes existing in a distance image composed of the three-dimensional distance data. The plane detection unit 3 includes a line segment extraction unit 4 that selects, from the distance data points forming the image, groups of distance data points estimated to be on the same plane and extracts a line segment from each group, and a region extension unit 5 for detecting one or more plane regions present in the image from the group of line segments extracted by the line segment extraction unit 4.
- The region extension unit 5 selects, from the line segment group, any three line segments presumed to be on the same plane, and obtains a reference plane from them. It then determines whether line segments adjacent to the selected three belong to the same plane as this reference plane; if a line segment is determined to belong to the same plane, it is taken in as a region-extending line segment, the reference plane is updated, and the region of the reference plane is expanded.
- The stereo vision system 2 generates a distance image from images acquired by, for example, the image input device 251 of the robot apparatus 201. As a result of observing the outside world, the stereo vision system 2 outputs three-dimensional distance data D1, estimated from binocular parallax, to the line segment extraction unit 4.
- The line segment extraction unit 4 extracts, in each data string for each column or row of the distance image, a group of distance data points estimated to be on the same plane in the three-dimensional space, and generates one or more line segments from this group according to the distribution of the distance data points. In other words, if the distribution is determined to be biased, the data points are judged not to be on the same plane and the group is divided; whether the distribution is biased is then determined again for each of the divided groups. This determination process is repeated, and when the distribution is not biased, a line segment is generated from the data point group. The above processing is performed for all data strings, and the generated line segment group D2 is output to the region extension unit 5.
- The region extension unit 5 selects, from the line segment group D2, three line segments estimated to belong to the same plane, and obtains from them a plane serving as a seed (reference plane). Then, the region of this seed plane (seed region) is extended within the range image by integrating into it the line segments belonging to the same plane as the seed region.
- The robot apparatus 201 obtains information on planes important for walking, such as stairs, floors, and walls, when such plane information is required, for example, for obstacle avoidance or stair climbing, or by performing these processes periodically.
- The stereo vision system 2 compares the images input from left and right cameras, corresponding to the two eyes of a human, for each pixel neighborhood, estimates the distance to the target from the parallax, and outputs three-dimensional distance information as an image (distance image).
- FIG. 11 is a schematic diagram illustrating the robot apparatus 201 capturing an image of the outside world. Taking the floor as the x-y plane and the height direction as the z direction, as shown in FIG. 11, the field of view of the robot apparatus 201, which has the image input unit (stereo camera) in the head unit 203, is a predetermined range in front of the robot apparatus 201.
- The CPU 211 described above receives a color image and a parallax image from the image input device 251, together with sensor data such as all the joint angles of the actuators 350, and realizes the software configuration.
- The software in the robot apparatus 201 is configured for each object, and can perform various kinds of recognition processing that recognize the position and movement amount of the robot apparatus, surrounding obstacles, an environment map, and the like, and output an action sequence for the action the robot apparatus should ultimately take.
- As coordinates indicating the position of the robot apparatus, two coordinate systems are used: a world reference coordinate system (hereinafter also referred to as absolute coordinates) whose origin is a predetermined position based on a specific object such as a landmark, and a robot center coordinate system (hereinafter also referred to as relative coordinates) whose origin is the robot apparatus itself. In the robot center coordinate system, the robot apparatus 201 is fixed at the center (origin of coordinates), and the joint angles determined from the sensor data are used. A homogeneous transformation matrix and the like of the camera coordinate system are derived from the robot center coordinate system, and a distance image consisting of this homogeneous transformation matrix and the corresponding three-dimensional distance data is output to the plane detection unit 3.
- The plane detecting apparatus according to the present embodiment detects planes by the line segment extension method so that it can detect planes even when a plurality of planes exist, rather than only the dominant plane included in the acquired image as in the conventional method described above. At this time, by generating line segments according to the distribution of the distance data points, a detection result robust against measurement noise can be obtained.
- As an example, a case will be described in which the robot apparatus equipped with the plane detection device according to the present embodiment detects stairs ST included in its field of view. FIGS. 12 and 13 are schematic views showing examples of such stairs: FIGS. 12A and 13A view the stairs from the front, FIGS. 12B and 13B from the side, and FIGS. 12C and 13C from an oblique direction.
- Here, the surface used by a person or a robot apparatus to go up and down stairs (the surface on which a foot or movable leg is placed) is referred to as a tread, and the height from one tread to the next (the height of one step) is referred to as the rise (kick-up). The steps of a staircase are counted as the first step, second step, and so on from the side closer to the ground.
- The staircase ST1 shown in FIG. 12 is a three-step staircase with a rise of 4 cm; the treads of the first and second steps are 30 cm wide and 10 cm deep, and only the third step, the top step, is 30 cm wide and 21 cm deep.
- The staircase ST2 shown in FIG. 13 is also a three-step staircase, with a rise of 3 cm; the treads of the first and second steps are 33 cm wide and 12 cm deep, and only the third step, the top step, is 33 cm wide and 32 cm deep.
- FIGS. 14 to 16 relate to the staircase ST2 shown in FIG. 13. FIGS. 14A, 15A, and 16A are schematic diagrams illustrating images of the staircase photographed by the stereo vision system from the front, the side, and obliquely in front, respectively, and FIGS. 14B to 16D are diagrams illustrating the three-dimensional distance data acquired from the images shown in FIGS. 14A, 15A, and 16A. As shown in FIG. 14A, when the staircase ST2 is photographed from the front, the three-dimensional distance data is as shown in FIGS. 14B to 14D.
- In FIG. 14B, the horizontal axis is the y direction, the vertical axis is the x direction, and the size in the z-axis direction (height direction) is indicated by a shading value, with the ground contact surface of the robot apparatus 201 taken as 0 and higher points shown closer to white.
- That is, data points with similar shades (gray values) are at the same height, and as shown in FIG. 14B, the shading of the data points becomes lighter from the area corresponding to the tread of the first step toward those of the second and third steps.
- a substantially trapezoidal area in which the distance data is shown indicates a range (field of view range) in which the robot apparatus can photograph.
- The distance data points are divided into approximately four levels of shading; the darkest area, corresponding to the region smallest in the z direction, indicates the floor surface.
- FIG. 14C shows the y direction on the horizontal axis, the z direction on the vertical axis, and the x direction as shading; in this figure, the shade value changes according to the distance in the x direction.
- In FIG. 14D, the horizontal axis is the x direction, the vertical axis is the z direction, and the y direction is represented by shading according to the distance.
- When the robot apparatus 201 photographs the side surface of the staircase ST2, as shown in FIGS. 15A to 15D, the data points in the upper region where x is large have the same shading as data points at height 0, which indicates that the floor behind the staircase ST2 has been measured. In the oblique imaging shown in FIGS. 16A to 16D as well, the four areas indicating the floor surface and the treads of the first to third steps appear in different shades according to their heights and are clearly distinguished from one another.
- In order to measure such distance data accurately, a pattern (texture) is required on the surface of the staircase ST2. Although parallax can be obtained with two cameras, parallax cannot be calculated for a surface without a pattern, and the distance cannot be measured accurately. That is, the measurement accuracy of the distance data in the stereo vision system depends on the texture of the object to be measured.
- Here, the parallax indicates the difference between the positions at which a point in space is mapped in the left eye and the right eye, and changes according to the distance from the cameras.
- The head unit of the robot apparatus is provided with stereo cameras 11R/L constituting the stereo vision system, and is also provided with a light source 12 that outputs, for example, infrared light as a projection unit. The light source 12 projects (irradiates) infrared light onto objects with little or no texture, such as the patternless stairs ST3 or a wall, and operates as pattern giving means for giving them a random pattern PT.
- The means for applying the random pattern PT is not limited to a light source that projects infrared light; for example, the robot apparatus may write a pattern on the object by itself. With infrared light, however, a pattern that is invisible to human eyes but observable by the CCD cameras mounted on the robot apparatus can be applied.
- FIG. 18 is a diagram illustrating the plane detection method using the line segment extension method. In this method, processing is performed on data strings in the row direction or column direction of an image 11 captured from a focal point F; for example, in one row of pixels (image row), distance data points belonging to the same plane form a straight line. Line segments are thus generated from each data string, and a plane is estimated and detected based on the group of line segments, out of the obtained plurality of line segments, considered to constitute the same plane.
- FIG. 19 is a flowchart showing the plane detection processing by the line segment extension method.
- First, a distance image is input (step S1), and line segments are obtained from the data points estimated to belong to the same plane in each pixel string in the row direction (or column direction) of the distance image (step S2). Next, line segments presumed to belong to the same plane are extracted from the group of line segments, and a plane is obtained from these line segments (step S3).
- In step S3, first, a region serving as a seed of a plane (hereinafter referred to as a "seed region") is selected. This selection requires that three line segments, one each from vertically adjacent rows (or horizontally adjacent columns), be on the same plane. The plane to which the selected seed region consisting of these three line segments belongs is set as a reference plane, obtained by averaging the three line segments, and the region composed of the three line segments is defined as the reference plane region.
- Next, it is determined, by comparing spatial distances, whether a line segment in the pixel string of the row direction (or column direction) adjacent to the selected seed region lies on the same plane as the reference plane. If it is determined to lie on the same plane, the adjacent line segment is added to the reference plane region (region extension processing), and the reference plane is updated so as to include the added line segment (plane update processing). This operation is repeated until no line segment on the same plane exists in the adjacent data strings.
- Furthermore, another seed region is searched for, and the plane update and region extension processing are repeated until no seed region remains. Finally, regions forming the same plane are connected from among the plurality of region groups obtained.
- In the present embodiment, a plane recalculation process (step S4) is further provided, in which, from the obtained group of line segments belonging to a plane, any line segment deviating from the plane by a predetermined threshold or more is excluded and the plane is obtained again. The details will be described later.
- The process of detecting line segments from the three-dimensional distance data and combining regions on the same plane into one plane is the plane detection process by the conventional line segment extension method; the present embodiment differs from the conventional method in the line segment extraction method of step S2. That is, as described above, even if a line segment is generated from the distance data points so as to fit them as closely as possible, over-segmentation or under-segmentation occurs unless the threshold is changed according to the accuracy of the distance data. Therefore, in the present embodiment, a method is introduced into this line segment extraction that adaptively changes the threshold according to the accuracy of the distance data and its noise by analyzing the distribution of the distance data.
- The line segment extraction unit 4 receives the three-dimensional distance image from the stereo vision system 2, and detects, in each column or row of the distance image, the line segments estimated to be on the same plane in the three-dimensional space.
- Line segment extraction suffers from over-segmentation and under-segmentation problems due to measurement noise and the like; that is, a single plane is recognized as multiple planes, or multiple planes are recognized as one plane. The present embodiment therefore uses an algorithm (Adaptive Line Fitting) that adaptively fits line segments according to the distribution of the data points.
- Specifically, the line segment extraction unit 4 first roughly extracts a line segment as a first line segment using a relatively large threshold, and then analyzes the distribution of the data points belonging to the extracted first line segment with respect to a second line segment obtained from that group by the least squares method described later. That is, the unit first roughly estimates whether the data points exist on the same plane and extracts them, and then analyzes whether there is a bias in the distribution of the extracted data points in order to re-estimate whether they exist on the same plane.
- In the analysis of the distribution of the data points, when the data point group fits a zigzag shape (Zig-Zag-Shape), described later, the distribution is regarded as biased and the data point group is divided. By repeating this process, an algorithm is realized that extracts line segments adaptively with respect to the noise contained in the data point group.
- FIG. 20 is a flowchart showing details of the processing in the line segment extraction unit 4, that is, the processing in step S2 in FIG.
- First, distance data is input to the line segment extraction unit 4, and a data point group estimated to exist on the same plane in the three-dimensional space is extracted. The data points estimated to be on the same plane are those whose mutual distance in the three-dimensional space is less than a predetermined threshold; for example, a set of data points in which the distance between adjacent data points is 6 cm or less can be obtained, and this set is extracted as the data point group (P[0…n−1]) (step S11).
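- The grouping of step S11 can be sketched as follows (a minimal Python sketch; the function name `group_scanline`, the tuple representation of points, and metres as the unit are assumptions, not part of the embodiment):

```python
import math

# Sketch of step S11: walk along one row (or column) of the distance
# image and cut the scanline into data point groups wherever the 3-D
# distance between adjacent data points exceeds the threshold (6 cm).
def group_scanline(points, max_gap=0.06):
    """Split (x, y, z) points into runs of near-adjacent points."""
    groups, current = [], []
    for p in points:
        if current and math.dist(current[-1], p) > max_gap:
            groups.append(current)   # gap too large: close this group
            current = []
        current.append(p)
    if current:
        groups.append(current)
    return groups
```

Each returned group would correspond to one data point group P[0…n−1] handed to the subsequent steps.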
- Next, in step S12, it is checked whether the number n of samples included in this data point group P[0…n−1] is larger than the minimum number of samples required for processing (required minimum value) min_n. If the number n of data is smaller than the required minimum value min_n (S12: NO), an empty set is output as the detection result and the process ends.
- Next, a line segment L1 connecting the end points of the data point group P[0…n−1] is generated as the first line segment, and the point of interest (division point) brk at which the distance dist from this line segment L1 is maximum is obtained (step S13). When this maximum distance dist is equal to or larger than a data point group division threshold max_d (step S14: YES), the data point group P[0…n−1] is divided at the point of interest brk into two data point groups P[0…brk] and P[brk…n−1] (step S18).
- On the other hand, when the maximum distance dist is smaller than the data point group division threshold max_d (S14: NO), the equation of a straight line is obtained from the data point group P[0…n−1] by the least squares method (step S15), and a line segment L2 indicated by this equation is generated as the second line segment. It is then checked whether the data point group P[0…n−1] forms a Zig-Zag-Shape, described later, with respect to this line segment L2 (step S16). If it is not a Zig-Zag-Shape (S16: NO), the obtained line segment is added to the line segment extraction result list (step S17), and the process ends.
- On the other hand, if the line segment obtained in step S15 is determined in step S16 to be a Zig-Zag-Shape (S16: YES), the process proceeds to step S18 as in the case of step S14 described above, and the data point group is divided, at the point of interest brk for which the distance dist was obtained in step S13, into two data point groups P[0…brk] and P[brk…n−1]. When two data point groups are obtained in step S18, the processing from step S11 is performed again recursively on each. This is repeated until none of the divided data point groups is divided any further, that is, until every data point group has passed through step S17, thereby obtaining the list of line segment extraction results. By such processing, the influence of noise can be eliminated from the data point group P[0…n−1], and a group consisting of a plurality of line segments can be detected accurately.
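- The recursive control flow of steps S11–S18 can be sketched as follows (Python, in 2-D for one scanline, e.g. viewed in the y-z plane). The Zig-Zag-Shape test of step S16 is omitted for brevity, so only the endpoint-distance split of steps S13/S14 is shown; all names and thresholds are assumptions:

```python
import math

def point_line_dist(p, a, b):
    """Perpendicular distance from p to the line through a and b."""
    (ax, ay), (bx, by), (px, py) = a, b, p
    num = abs((bx - ax) * (ay - py) - (ax - px) * (by - ay))
    den = math.hypot(bx - ax, by - ay)
    return num / den if den else math.hypot(px - ax, py - ay)

def extract_segments(pts, max_d=0.01, min_n=3):
    if len(pts) < min_n:                      # step S12: too few samples
        return []
    # step S13: point brk farthest from the end-point line L1
    brk = max(range(1, len(pts) - 1),
              key=lambda i: point_line_dist(pts[i], pts[0], pts[-1]),
              default=0)
    dist = point_line_dist(pts[brk], pts[0], pts[-1])
    if dist >= max_d:                         # steps S14/S18: split, recurse
        return extract_segments(pts[:brk + 1], max_d, min_n) + \
               extract_segments(pts[brk:], max_d, min_n)
    return [(pts[0], pts[-1])]                # step S17: accept the segment
```

For a scanline that bends once (e.g. floor meeting a riser), the recursion splits at the corner and returns one segment per flat part.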
- In step S13 it was described that the line segment L1 connecting the end points of the data point group P[0…n−1] is generated; however, depending on, for example, the distribution and properties of the data point group P[0…n−1], the line segment L1 may instead be obtained by the least squares method if necessary. In that case, the point of interest brk need not be the single point having the largest distance from the line segment L1 connecting the end points; as with the line segment obtained by least squares, points whose distance from the line segment is large may be taken as points of interest, and the data point group P[0…n−1] may be divided at all of these points or at one or more selected points.
- Next, a method of generating a line segment using the least squares method in step S15 will be described: given n data points P[0…n−1], the equation of the straight line that best fits the data point group is obtained as follows. The model of the straight line equation is expressed by the following equation (1):
- x·cos α + y·cos β + d = 0 … (1)
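- Under the model of equation (1), i.e. x·cos α + y·cos β + d = 0 with cos²α + cos²β = 1 (the form used later for sdist), the least-squares line is a total-least-squares fit: the normal (cos α, cos β) is the eigenvector of the 2×2 scatter matrix with the smallest eigenvalue, and d follows from the centroid. A sketch under that interpretation (the closed-form 2×2 eigen solution and all names are assumptions):

```python
import math

# Total-least-squares fit of x*ca + y*cb + d = 0 with ca^2 + cb^2 = 1.
def fit_line(pts):
    n = len(pts)
    mx = sum(p[0] for p in pts) / n
    my = sum(p[1] for p in pts) / n
    sxx = sum((p[0] - mx) ** 2 for p in pts)
    syy = sum((p[1] - my) ** 2 for p in pts)
    sxy = sum((p[0] - mx) * (p[1] - my) for p in pts)
    # smallest eigenvalue of the scatter matrix [[sxx, sxy], [sxy, syy]]
    lam = 0.5 * (sxx + syy) - math.hypot(0.5 * (sxx - syy), sxy)
    ca, cb = (sxy, lam - sxx) if abs(sxy) > 1e-12 else \
             ((1.0, 0.0) if sxx <= syy else (0.0, 1.0))
    norm = math.hypot(ca, cb)
    ca, cb = ca / norm, cb / norm
    d = -(ca * mx + cb * my)     # line passes through the centroid
    return ca, cb, d
```

The signed residual of each point against the returned (ca, cb, d) is exactly the sdist(i) quantity used in the Zig-Zag-Shape determination below.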
- a method of determining the zigzag shape (Zig-Zag-Shape) in step S16 will be described.
- FIG. 22 is a flowchart showing a Zig-Zag-Shape determination method.
- In this determination, first, a data point group P[0…n−1] and a straight line Line (α, β, d) together with σ are input (step S20); σ indicates the standard deviation of the point sequence. Next, Val = sign(sdist(0)) is obtained, and the count value count of a counter that counts the number of consecutive data points on the same side of the line (hereinafter referred to as the continuous point counter) is set to 1 (step S22). Here, sign(x) is a function that returns the sign (+ or −) of the value x, and sdist(i), calculated as P[i].x·cos α + P[i].y·cos β + d, indicates the signed (positive or negative) distance of the i-th data point from the straight line Line; Val thus indicates on which side of the straight line Line the data point P[0] lies.
- Then, the count value i of a counter for counting data points (hereinafter referred to as the data point counter) is set to 1 (step S23).
- If the count value i of the data point counter is smaller than the number n of data (step S24: YES), val = sign(sdist(i)) is obtained for the next, i.e. i-th, data point P[i] (step S25). Then the Val obtained in step S22 and the val obtained in step S25 are compared (step S26). If Val is not the same as val (step S26: NO), val is substituted for Val, 1 is substituted for the count value count of the continuous point counter (step S28), the count value i of the data point counter is incremented (step S30), and the process returns to step S24.
- If Val and val are the same (step S26: YES), the points P[i−1] and P[i] are determined to be on the same side of the straight line Line, and the count value count of the continuous point counter is incremented by one (step S27). It is further determined whether the count value count of the continuous point counter is larger than the minimum number of data points min_c for determination as a Zig-Zag-Shape (step S29); if it is larger (step S29: YES), the data point group is judged to be a Zig-Zag-Shape, TRUE is output, and the process ends.
- If the count value count is not larger than min_c (step S29: NO), the count value i of the data point counter is incremented (step S30), and the process is repeated from step S24.
- The processing from step S24 is continued until the count value i of the data point counter reaches the number n of data points; when the count value i ≥ n, FALSE is output and the process ends.
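- The flowchart of FIG. 22 can be transcribed almost line for line as follows (a Python sketch; the names are assumptions, and the input σ — which in practice would presumably be used to ignore points whose distance to the line is within the noise level — is not used, exactly as in the flowchart as described):

```python
def is_zigzag(pts, ca, cb, d, min_c=3):
    """TRUE when more than min_c consecutive points lie on one side
    of the line x*ca + y*cb + d = 0 (steps S20-S30 of FIG. 22)."""
    def sign(x):
        return 1 if x >= 0 else -1
    def sdist(i):                       # signed distance of P[i] to Line
        return pts[i][0] * ca + pts[i][1] * cb + d
    val_prev = sign(sdist(0))           # step S22: side of the first point
    count = 1                           # continuous point counter
    for i in range(1, len(pts)):        # steps S23/S24
        val = sign(sdist(i))            # step S25
        if val == val_prev:             # step S26: same side as P[i-1]
            count += 1                  # step S27
            if count > min_c:           # step S29
                return True             # Zig-Zag-Shape detected
        else:                           # side changed: restart the run
            val_prev, count = val, 1    # step S28
    return False                        # loop exit at step S24: i >= n
```

A long run on one side means the least-squares line systematically misses the data, i.e. the group likely spans more than one plane and should be divided.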
- FIG. 24 is a block diagram illustrating a processing unit that performs the Zig-Zag-Shape determination process. As shown in FIG. 24, the Zig-Zag-Shape discrimination processing unit 20 includes: a direction determining unit 21 that receives the n data points P[0…n−1], sequentially determines on which side of the straight line each data point P[i] is located, and outputs the determination result Val; a delay unit 22 for comparing the result of the direction determining unit 21 with the result for the next data point; a comparison unit 23 that compares the direction determination result Val at the data point P[i] with that at the data point P[i−1]; a continuous point counter 24 whose count value count is controlled by the comparison unit 23; and a comparison unit 25 that compares the count value count of the continuous point counter 24 with the minimum data point number min_c read from the minimum data point number storage unit 26.
- The operation of the Zig-Zag-Shape discrimination processing unit is as follows. The direction determining unit 21 obtains the straight line Line by the least squares method from the data point group P[0…n−1], calculates the signed (positive or negative) distance between each data point P[i] and the straight line Line, and outputs its sign. When the sign of the distance of the data point P[i−1] to the straight line Line is input, the delay unit 22 stores it until the sign for the next data point P[i] is input.
- The comparison unit 23 compares the signs for the data point P[i] and the data point P[i−1]; if the signs are the same, it outputs a signal for incrementing the count value count of the continuous point counter 24, and if they differ, it outputs a signal that substitutes 1 for the count value count.
- The comparison unit 25 compares the count value count with the minimum data point number min_c; when the count value count is larger than the minimum data point number min_c, it outputs a signal indicating that the data point group P[0…n−1] is zigzag.
- The region extension unit 5 receives the line segment group obtained by the line segment extraction unit 4 as input, determines to which plane each line segment belongs by fitting the point sequences to planes (Plane Fitting), and separates the region consisting of the given group of line segments into a plurality of planes (plane regions). The following method is used for this separation.
- First, three line segments estimated to belong to the same plane are selected; the plane (reference plane) obtained from these three line segments serves as the seed of a plane, and the region consisting of the three line segments is called a seed region.
- FIG. 25 is a schematic diagram for explaining the area expansion processing.
- As shown in FIG. 25, three line segments 32a to 32c indicated by bold lines are selected, the region consisting of these three line segments 32a to 32c is taken as the seed region, and one plane (reference plane) P is obtained from them. Next, in the data string 33 or 34 adjacent to the outermost line segment 32a or 32c of the seed region, a line segment lying on the same plane as the plane P is searched for outside the seed region. For example, when the line segment 33a is selected, a plane P′ consisting of these four line segments is obtained, and the reference plane P is updated to it.
- Step S3 in FIG. 19 is repeated.
- The plane can be obtained by the least squares method as the plane that minimizes the quantity shown in the following equation (5).
- Here, this system of linear equations can be solved by Cramer's rule, which uses determinants.
- The root mean square (RMS) residual of the plane equation, which indicates the degree of deviation of the n data points from the plane equation, can be calculated by the following equation (8). Also in this case, equation (8) can be obtained using the above two moments of the n data points.
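- As a sketch of the computation behind equations (5)–(8): fitting a plane by least squares, solving the 3×3 normal equations with Cramer's rule (determinants, as noted above), and reporting the RMS residual used to accept or reject line segments. The explicit-z model z = a·x + b·y + c is a simplification of the plane equation used in the embodiment, and all names are assumptions:

```python
import math

def det3(m):
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def fit_plane(pts):
    """Least-squares fit of z = a*x + b*y + c via Cramer's rule."""
    n = len(pts)
    sx  = sum(p[0] for p in pts); sy  = sum(p[1] for p in pts)
    sz  = sum(p[2] for p in pts)
    sxx = sum(p[0] * p[0] for p in pts); syy = sum(p[1] * p[1] for p in pts)
    sxy = sum(p[0] * p[1] for p in pts)
    sxz = sum(p[0] * p[2] for p in pts); syz = sum(p[1] * p[2] for p in pts)
    A = [[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, n]]   # normal equations
    v = [sxz, syz, sz]
    D = det3(A)
    def col(j):   # A with column j replaced by v (Cramer's rule)
        return [[v[i] if k == j else A[i][k] for k in range(3)]
                for i in range(3)]
    a, b, c = (det3(col(0)) / D, det3(col(1)) / D, det3(col(2)) / D)
    # RMS residual of the plane equation (role of equation (8))
    rms = math.sqrt(sum((a * p[0] + b * p[1] + c - p[2]) ** 2
                        for p in pts) / n)
    return (a, b, c), rms
```

Because the normal equations use only sums (moments) of the data points, new points can be folded in incrementally during region extension without refitting from raw data.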
- FIG. 26 is a flowchart showing the procedure of the area type search processing and the area expansion processing.
- The seed region is selected by first searching for three line segments (l1, l2, l3) adjacent to one another in the row-direction (or column-direction) data strings used in the line segment extraction, using the pixel positions of the line segments (step S31). Each data point has an index indicating its pixel position in the image; for line segments in row-direction data strings, for example, the indices are compared to determine whether the line segments overlap in the column direction. If this search succeeds (step S32: YES), the above equation (7) is used to calculate equation (6-1); as a result, the plane parameters n and d can be determined, and they are used to calculate the mean square error rms(l1, l2, l3) of the plane equation shown in the above equation (8) (step S33).
- In step S34, if this mean square error rms(l1, l2, l3) is larger than the predetermined threshold th1, the flow returns to step S31 and another candidate for the seed region is searched for; if it is equal to or smaller than th1, the three line segments are selected as the seed region.
- Next, the region is extended by the line segment extension method from the seed region thus selected. That is, first, a line segment that is a candidate for addition to the region of the seed region is searched for (step S35). Note that, when the seed region has already been updated, this region includes the updated region described later.
- The candidate line segments are the line segments adjacent to those included in the region of the seed region (for example, l).
- In step S36, the root-mean-square error rms of the plane equation including the candidate line segment is calculated.
- When the candidate line segment is accepted into the region, the plane parameters are updated (step S38), and the processing from step S35 is repeated.
- If the determination in step S36 is negative (step S36: NO), the process returns to step S31 and a region seed is searched for again. Then, when no region seed remains in the line-segment group (step S32: NO), the plane parameters obtained so far are output and the processing is terminated.
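The seed-search and region-extension loop of FIG. 26 (steps S31 to S38) can be sketched as below. The representation of segments as point arrays, the adjacency dictionary, and the helper names are assumptions made for illustration and are not the patent's actual data structures:

```python
import numpy as np
from itertools import combinations

def plane_rms(point_sets):
    """RMS distance of the pooled points to their best-fit plane
    (smallest singular value of the centred point matrix)."""
    pts = np.vstack(point_sets)
    s_min = np.linalg.svd(pts - pts.mean(axis=0), compute_uv=False)[-1]
    return s_min / np.sqrt(len(pts))

def find_seed(unused, segments, neighbors, th1):
    """Steps S31-S32: three consecutive adjacent segments whose joint
    plane fit stays below th1, or None when no seed remains."""
    for a, b, c in combinations(sorted(unused), 3):
        if b in neighbors.get(a, ()) and c in neighbors.get(b, ()):
            if plane_rms([segments[a], segments[b], segments[c]]) < th1:
                return {a, b, c}
    return None

def grow_regions(segments, neighbors, th1):
    """Steps S31-S38: repeatedly pick a seed, then extend it with
    adjacent candidate segments while the plane-fit rms stays below
    th1. Returns the regions as sorted lists of segment indices."""
    unused = set(range(len(segments)))
    regions = []
    while True:
        seed = find_seed(unused, segments, neighbors, th1)
        if seed is None:                  # step S32: NO -> finished
            return regions
        region = set(seed)
        unused -= region
        grew = True
        while grew:                       # steps S35-S38
            grew = False
            for i in sorted(unused):
                adjacent = any(i in neighbors.get(j, ()) for j in region)
                if adjacent and plane_rms([segments[k] for k in region | {i}]) < th1:
                    region.add(i)         # step S38: accept and update
                    unused.discard(i)
                    grew = True
        regions.append(sorted(region))
```

In this sketch the acceptance test for both the seed and each candidate is the same rms-versus-threshold comparison, mirroring the text's use of equation (8) in both phases.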
- Thus, equation (8) is used both in the region-seed search, to determine whether the three line segments belong to the same plane, and in the region-extension processing, to determine whether a candidate line segment belongs to the reference plane or to the plane that has been updated. That is, if the root-mean-square error rms of the plane equation is less than the predetermined threshold (th1), the line segment (group) is estimated to belong to the same plane, and the plane equation is recalculated as that of a plane including the line segment.
- Since the root-mean-square error rms of the plane equation is used to determine whether line segments belong to the same plane, the method is more robust against noise, and planes can be extracted accurately even when the data contain fine steps. The reason is described below.
- FIG. 27 is a schematic diagram illustrating this effect: it shows an example in which, even though the distances between the line end points and the plane are equal, the root-mean-square error rms of the plane equation differs.
- a straight line La intersecting the plane P (FIG. 27A)
- a straight line Lb parallel to the plane P and offset from it by a predetermined distance (FIG. 27B)
- Comparing the root-mean-square error rms(Lb) of the plane equation obtained from the straight line Lb in FIG. 27B with rms(La) obtained from the straight line La in FIG. 27A, rms(Lb) is the larger. That is, when a straight line La intersects the plane P as in FIG. 27A, the root-mean-square error rms of the plane equation is relatively small, and the deviation is often merely the influence of noise. In contrast, a straight line Lb whose plane-equation rms is large has a high probability of lying not on the plane P but on a different plane P'.
- Therefore, as in the present embodiment, the root-mean-square error rms of the plane equation is calculated, and it is preferable to determine that the line segment lies on the same plane when this value is below the predetermined threshold (th2). Note that, depending on the environment and the properties of the distance data, the conventional criterion may also be used: when the distance between the end point of the line segment and the plane is equal to or less than a threshold, the line segment may be included in, or merged with, the plane.
- The threshold is preferably determined adaptively from the properties of the distance data included in the line segment: if a low threshold is set for a group of line segments containing much noise, many line segments are split off into different regions, and the region extension does not proceed properly.
- For this reason, the threshold (th2) is set in accordance with the noise level of the data, where
- th3 is a constant that defines the lower limit of the threshold (th2),
- d is the Mahalanobis distance, and
- sigma0 represents the variance of the line segment. Data containing much noise have a large line-segment variance sigma0, hence a large threshold (th2) and a wide allowable range for the region extension.
- For example, the sum E of the errors between the data points and the straight-line equation expressed by equation (2) above is used as sigma0, and the lower limit th3 is set to the allowable fitting-error threshold th_fit of the line segment used in the region-seed test.
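The adaptive choice of th2 can be sketched as below. The text specifies only that th2 grows with the noise measure sigma0 (here, the line-fit error E of equation (2)), scaled by the Mahalanobis distance d, and is bounded below by th3 (= th_fit); the exact way these quantities are combined in the sketch is an assumption, not the patent's formula:

```python
def region_threshold(sigma0, d, th3):
    """Hedged sketch of the adaptive region-extension threshold th2:
    it grows with the line-segment noise sigma0 scaled by the
    Mahalanobis-distance factor d, but never falls below the lower
    limit th3 (the line-fit tolerance th_fit)."""
    return max(th3, d * sigma0)
```

With this shape, noisy segments (large sigma0) get a permissive threshold so they are not needlessly split into separate regions, while clean segments fall back to the fixed lower limit.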
- The root-mean-square error rms of the updated plane equation can easily be calculated by equation (8) above from the values of the two moments obtained for the data point group during the line-segment extraction.
- rms(l1, l2, l3) is calculated by obtaining the plane equation for all three straight lines using equation (6) above.
- index is the position of the line segment l in the pixel column or row.
- Neighbor (index) is a function that returns an index adjacent to the given index, for example, ⁇ index-1, index + 1 ⁇ .
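The index bookkeeping described above can be sketched as follows; the representation of a segment as a (start, end) pixel-index interval is an assumption made for illustration:

```python
def neighbor(index):
    """The Neighbor(index) function of the text: the indices
    adjacent to the given index."""
    return {index - 1, index + 1}

def overlaps(seg_a, seg_b):
    """Sketch: row-direction segments given as (start, end) pixel-index
    intervals overlap in the column direction when their index
    intervals intersect (hypothetical representation of the
    index comparison described in the text)."""
    return seg_a[0] <= seg_b[1] and seg_b[0] <= seg_a[1]
```

A segment in an adjacent row is thus a seed or extension candidate only when `overlaps` holds for its index interval and that of a segment already in the region.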
- The calculation of the plane equation is then performed again in step S4.
- As post-processing, for example, the deviation from the plane of each distance data point or line segment judged to belong to the plane indicated by the finally obtained, updated plane equation is calculated; distance data points or line segments that deviate from the plane by more than a predetermined value are excluded, and the plane equation is updated once more. This further reduces the influence of noise.
- step S4 will be described in detail.
- a method of calculating the plane equation again in two steps will be described.
- First, it is determined whether the data point is included in an adjacent plane. If a data point does not belong to any plane but a plane can be found whose distance from the point is equal to or less than a relatively large threshold, for example 1.5 cm, the data point is included in that plane.
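This first post-processing step, attaching leftover data points to a nearby plane, can be sketched as follows (planes represented as n·x = d with |n| = 1; the function name and return convention are illustrative):

```python
import numpy as np

def assign_leftover_points(points, planes, max_dist=0.015):
    """Post-processing sketch: each point not yet belonging to a plane
    is attached to the nearest plane (n, d) whose point-to-plane
    distance |n.p - d| is below a relatively large threshold
    (1.5 cm = 0.015 m in the text); otherwise it stays unassigned,
    which is indicated by the label -1."""
    labels = []
    for p in np.asarray(points, dtype=float):
        dists = [abs(n @ p - d) for n, d in planes]
        best = int(np.argmin(dists))
        labels.append(best if dists[best] <= max_dist else -1)
    return labels
```

Points left with label -1 after this pass are the ones excluded when the plane equations are updated again.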
- FIG. 30A is a schematic diagram of the floor surface as seen by the standing robot apparatus looking down at it.
- FIG. 30B shows the three-dimensional distance data, with x on the vertical axis, y on the horizontal axis, and the z value represented by the shading of each data point; it further shows the straight lines detected, in the row-direction line-segment extraction processing on the pixel strings, from data point groups assumed to lie on the same plane.
- FIG. 30C shows a plane region obtained by the region extension processing from the straight line group shown in FIG. 30B.
- FIG. 31 shows the result when one step is placed on the floor.
- a single step ST3 is placed on the floor F.
- FIG. 31B shows the experimental conditions: if the distance between the point of interest and the straight line (line segment) exceeds max_d, the data point group is divided.
- Correct extraction (row) indicates the number of successful plane detections by the line-segment extension method over a total of 10 line-segment extractions for the data strings in the row direction.
- Correct extraction (column) likewise indicates the success or failure of extraction for the data strings in the column direction.
- No. 1 to No. 5 are the conditions for plane detection processing by the conventional line-segment extension method, which does not incorporate the Zig-Zag-Shape discrimination processing described above, while No. 6 shows the conditions of the plane detection method according to the present embodiment, which does.
- FIGS. 31C and 31D show the results of plane detection by the line-segment extension method: FIG. 31C shows the result of the method according to the present embodiment, and FIG. 31D shows, as a comparative example, the result of the conventional line-segment extension method.
- FIGS. 32B and 32C show a case where three-dimensional distance data is acquired from a captured image.
- the left diagram shows an example in which line segments are extracted from the pixel strings (distance data strings) in the row direction, and
- the right diagram shows an example in which line segments are extracted from the pixel strings (distance data strings) in the column direction.
- FIG. 33 and FIG. 34 show examples in which planes were detected from three-dimensional distance data acquired from images of different staircases. As shown in FIGS. 33 and 34, all treads could be detected as planes in every case. FIG. 34B further shows that part of the floor surface was successfully detected as a separate plane.
- As described above, in the present embodiment a large threshold is set for dividing a line segment, and data points exceeding the threshold are then examined by the Zig-Zag-Shape discrimination processing. If the point sequence does not have a zigzag shape, the line segment is divided as a straight line spanning multiple planes rather than being treated as noise, so that multiple planes can be detected accurately from distance information containing noise.
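A sketch of this decision logic follows. Reading the Zig-Zag-Shape test as a sign-alternation check on the residuals is an assumption; the patent's actual discrimination criterion may differ in detail:

```python
def is_zigzag(residuals, eps=1e-12):
    """Zig-Zag-Shape test sketch: residuals of consecutive data points
    about the fitted line alternate in sign when the deviation is mere
    noise, whereas a genuine break between two planes produces a
    one-sided run of residuals."""
    signs = [r > 0 for r in residuals if abs(r) > eps]
    return all(a != b for a, b in zip(signs, signs[1:]))

def should_split(residuals, max_d):
    """Divide the segment only when some residual exceeds the large
    threshold max_d AND the residual sequence is not zigzag-shaped,
    i.e. the deviation is not attributable to noise."""
    return any(abs(r) > max_d for r in residuals) and not is_zigzag(residuals)
```

This matches the behavior described in the text: alternating (noisy) deviations leave the segment intact even when individual points exceed the threshold, while a one-sided deviation splits it into segments belonging to different planes.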
- As a result, an uneven floor surface composed of a plurality of planes is not erroneously recognized as a single walkable plane, and movement of the robot apparatus is further facilitated.
- the present invention is not limited to only the above-described embodiments, and various changes can be made without departing from the spirit of the present invention.
- One or more of the above-described processes, such as the line-segment extraction processing, the zigzag-shape verification processing, and the region-extension processing, may be implemented in hardware, or may be realized by having an arithmetic unit (CPU) execute a computer program. When realized as a computer program, the program can be provided recorded on a recording medium, or can be provided by transmission via the Internet or another transmission medium.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Chemical & Material Sciences (AREA)
- Combustion & Propulsion (AREA)
- Transportation (AREA)
- Mechanical Engineering (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
- Length Measuring Devices By Optical Means (AREA)
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10/593,150 US8289321B2 (en) | 2004-03-17 | 2005-03-17 | Method and apparatus for detecting plane, and robot apparatus having apparatus for detecting plane |
JP2006511066A JP4636016B2 (ja) | 2004-03-17 | 2005-03-17 | 平面検出装置、平面検出方法、及び平面検出装置を搭載したロボット装置 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2004-077215 | 2004-03-17 | ||
JP2004077215 | 2004-03-17 |
Publications (2)
Publication Number | Publication Date |
---|---|
WO2005088244A1 true WO2005088244A1 (ja) | 2005-09-22 |
WO2005088244A9 WO2005088244A9 (ja) | 2008-03-13 |
Family
ID=34975694
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2005/004839 WO2005088244A1 (ja) | 2004-03-17 | 2005-03-17 | 平面検出装置、平面検出方法、及び平面検出装置を搭載したロボット装置 |
Country Status (3)
Country | Link |
---|---|
US (1) | US8289321B2 (ja) |
JP (1) | JP4636016B2 (ja) |
WO (1) | WO2005088244A1 (ja) |
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2009527751A (ja) * | 2006-02-22 | 2009-07-30 | シーメンス アクチエンゲゼルシヤフト | 旋回可能なセンサ装置を用いた物体検出方法 |
JP2011022805A (ja) * | 2009-07-16 | 2011-02-03 | Nippon Signal Co Ltd:The | 画像処理装置 |
JP2011186749A (ja) * | 2010-03-08 | 2011-09-22 | Optex Co Ltd | 距離画像における平面推定方法および距離画像カメラ |
JP2012037490A (ja) * | 2010-08-11 | 2012-02-23 | Pasuko:Kk | データ解析装置、データ解析方法、及びプログラム |
JP2012123750A (ja) * | 2010-12-10 | 2012-06-28 | Toshiba Alpine Automotive Technology Corp | 車両用画像処理装置および車両用画像処理方法 |
JP2014199586A (ja) * | 2013-03-29 | 2014-10-23 | 株式会社パスコ | 多平面構造物の凹凸抽出装置、多平面構造物の凹凸抽出方法、及びプログラム |
JP2015004588A (ja) * | 2013-06-20 | 2015-01-08 | 株式会社パスコ | データ解析装置、データ解析方法、及びプログラム |
JP2015095000A (ja) * | 2013-11-08 | 2015-05-18 | キヤノン株式会社 | 画像処理装置および画像処理方法 |
JP2015108621A (ja) * | 2013-12-04 | 2015-06-11 | 三菱電機株式会社 | 3d点群センサーデータから平面を抽出する方法 |
WO2016084389A1 (ja) * | 2014-11-28 | 2016-06-02 | パナソニックIpマネジメント株式会社 | モデリング装置、3次元モデル生成装置、モデリング方法、プログラム |
JP2016534461A (ja) * | 2013-08-30 | 2016-11-04 | クアルコム,インコーポレイテッド | 物理的光景を表すための方法および装置 |
JP2017138219A (ja) * | 2016-02-04 | 2017-08-10 | 株式会社デンソー | 物体認識装置 |
JP2021009545A (ja) * | 2019-07-01 | 2021-01-28 | セイコーエプソン株式会社 | 印刷制御装置、印刷制御プログラム、及び、印刷物生産方法 |
US20210223368A1 (en) * | 2017-11-21 | 2021-07-22 | Faro Technologies, Inc. | System for surface analysis and method thereof |
US11302023B2 (en) * | 2018-01-23 | 2022-04-12 | Apple Inc. | Planar surface detection |
Families Citing this family (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7916935B2 (en) * | 2006-09-19 | 2011-03-29 | Wisconsin Alumni Research Foundation | Systems and methods for automatically determining 3-dimensional object information and for controlling a process based on automatically-determined 3-dimensional object information |
FR2929873B1 (fr) * | 2008-04-09 | 2010-09-03 | Aldebaran Robotics | Architecture de controle-commande d'un robot mobile utilisant des membres articules |
KR101495333B1 (ko) * | 2008-07-02 | 2015-02-25 | 삼성전자 주식회사 | 장애물 검출 장치 및 방법 |
JP2013047662A (ja) * | 2011-07-27 | 2013-03-07 | Ihi Corp | 対象物体の検出方法、検出装置及びプログラム |
KR101820299B1 (ko) * | 2011-11-23 | 2018-03-02 | 삼성전자주식회사 | 3차원 데이터 영상의 계단 인식 방법 |
US9269155B2 (en) * | 2012-04-05 | 2016-02-23 | Mediatek Singapore Pte. Ltd. | Region growing method for depth map/color image |
US9582932B2 (en) * | 2012-06-05 | 2017-02-28 | Apple Inc. | Identifying and parameterizing roof types in map data |
US20200409382A1 (en) * | 2014-11-10 | 2020-12-31 | Carnegie Mellon University | Intelligent cleaning robot |
JP2017181291A (ja) * | 2016-03-30 | 2017-10-05 | 富士通株式会社 | 距離測定装置、距離測定方法及びプログラム |
WO2018108832A1 (en) * | 2016-12-14 | 2018-06-21 | Starship Technologies Oü | Robot, system and method detecting and/or responding to transitions in height |
US10077047B2 (en) | 2017-02-10 | 2018-09-18 | Waymo Llc | Using wheel orientation to determine future heading |
CN108510540B (zh) * | 2017-02-23 | 2020-02-07 | 杭州海康威视数字技术股份有限公司 | 立体视觉摄像机及其高度获取方法 |
JP7148229B2 (ja) * | 2017-07-31 | 2022-10-05 | 株式会社トプコン | 三次元点群データの縦断面図作成方法,そのための測量データ処理装置,および測量システム |
US20220197298A1 (en) * | 2018-06-11 | 2022-06-23 | Jabil Inc. | Apparatus, system, and method of docking for autonomous robot navigation |
US11599128B2 (en) | 2020-04-22 | 2023-03-07 | Boston Dynamics, Inc. | Perception and fitting for a stair tracker |
US11548151B2 (en) | 2019-04-12 | 2023-01-10 | Boston Dynamics, Inc. | Robotically negotiating stairs |
CN110216661B (zh) * | 2019-04-29 | 2020-12-22 | 北京云迹科技有限公司 | 跌落区域识别的方法及装置 |
US11796637B1 (en) * | 2020-09-10 | 2023-10-24 | Amazon Technologies, Inc. | Fall detection on uneven surfaces using radar |
CN113175987A (zh) * | 2021-04-09 | 2021-07-27 | 东南大学 | 一种考虑环境温度变异的桥梁动力特性异常预警方法 |
CN113390431B (zh) * | 2021-06-17 | 2022-09-30 | 广东工业大学 | 动态生成参考线的方法、装置、计算机设备和存储介质 |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0793541A (ja) * | 1993-05-26 | 1995-04-07 | Matsushita Electric Works Ltd | 形状認識方法 |
JP2001062566A (ja) * | 1999-08-30 | 2001-03-13 | Kobe Steel Ltd | 溶接線位置検出装置 |
Family Cites Families (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH03176701A (ja) | 1989-12-05 | 1991-07-31 | Toshiba Corp | N対1バックアップコントローラ |
JPH03278467A (ja) | 1990-03-27 | 1991-12-10 | Canon Inc | 薄膜半導体装置 |
JP3192736B2 (ja) | 1992-02-10 | 2001-07-30 | 本田技研工業株式会社 | 移動体の階段などの認識方法 |
JP3176701B2 (ja) | 1992-04-15 | 2001-06-18 | 本田技研工業株式会社 | 移動体の現在位置認識処理装置 |
JP3278467B2 (ja) | 1992-08-18 | 2002-04-30 | 本田技研工業株式会社 | 移動ロボットの制御装置 |
JP3330710B2 (ja) | 1993-12-30 | 2002-09-30 | 本田技研工業株式会社 | 移動ロボットの位置検知および制御装置 |
JPH08161493A (ja) * | 1994-12-08 | 1996-06-21 | Mazda Motor Corp | 線形状検出方法およびその装置 |
US5978504A (en) * | 1997-02-19 | 1999-11-02 | Carnegie Mellon University | Fast planar segmentation of range data for mobile robots |
JP3945279B2 (ja) | 2002-03-15 | 2007-07-18 | ソニー株式会社 | 障害物認識装置、障害物認識方法、及び障害物認識プログラム並びに移動型ロボット装置 |
US20040138780A1 (en) * | 2002-11-15 | 2004-07-15 | Lewis Murray Anthony | Certain principles of biomorphic robots |
JP3994950B2 (ja) | 2003-09-19 | 2007-10-24 | ソニー株式会社 | 環境認識装置及び方法、経路計画装置及び方法、並びにロボット装置 |
US7653216B2 (en) * | 2003-12-23 | 2010-01-26 | Carnegie Mellon University | Polyhedron recognition system |
JP4618247B2 (ja) | 2004-03-17 | 2011-01-26 | ソニー株式会社 | ロボット装置及びその動作制御方法 |
2005
- 2005-03-17 WO PCT/JP2005/004839 patent/WO2005088244A1/ja active Application Filing
- 2005-03-17 JP JP2006511066A patent/JP4636016B2/ja not_active Expired - Fee Related
- 2005-03-17 US US10/593,150 patent/US8289321B2/en not_active Expired - Fee Related
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0793541A (ja) * | 1993-05-26 | 1995-04-07 | Matsushita Electric Works Ltd | 形状認識方法 |
JP2001062566A (ja) * | 1999-08-30 | 2001-03-13 | Kobe Steel Ltd | 溶接線位置検出装置 |
Non-Patent Citations (1)
Title |
---|
JIANG X. ET AL: "Fast segmentation of range images into planar regions by scan line grouping.", MACHINE VISION AND APPLICATIONS, vol. 7, no. 2, 1994, pages 115 - 122, XP002989528 *
Cited By (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2009527751A (ja) * | 2006-02-22 | 2009-07-30 | シーメンス アクチエンゲゼルシヤフト | 旋回可能なセンサ装置を用いた物体検出方法 |
JP2011022805A (ja) * | 2009-07-16 | 2011-02-03 | Nippon Signal Co Ltd:The | 画像処理装置 |
JP2011186749A (ja) * | 2010-03-08 | 2011-09-22 | Optex Co Ltd | 距離画像における平面推定方法および距離画像カメラ |
JP2012037490A (ja) * | 2010-08-11 | 2012-02-23 | Pasuko:Kk | データ解析装置、データ解析方法、及びプログラム |
JP2012123750A (ja) * | 2010-12-10 | 2012-06-28 | Toshiba Alpine Automotive Technology Corp | 車両用画像処理装置および車両用画像処理方法 |
JP2014199586A (ja) * | 2013-03-29 | 2014-10-23 | 株式会社パスコ | 多平面構造物の凹凸抽出装置、多平面構造物の凹凸抽出方法、及びプログラム |
JP2015004588A (ja) * | 2013-06-20 | 2015-01-08 | 株式会社パスコ | データ解析装置、データ解析方法、及びプログラム |
JP2016534461A (ja) * | 2013-08-30 | 2016-11-04 | クアルコム,インコーポレイテッド | 物理的光景を表すための方法および装置 |
JP2015095000A (ja) * | 2013-11-08 | 2015-05-18 | キヤノン株式会社 | 画像処理装置および画像処理方法 |
JP2015108621A (ja) * | 2013-12-04 | 2015-06-11 | 三菱電機株式会社 | 3d点群センサーデータから平面を抽出する方法 |
WO2016084389A1 (ja) * | 2014-11-28 | 2016-06-02 | パナソニックIpマネジメント株式会社 | モデリング装置、3次元モデル生成装置、モデリング方法、プログラム |
JPWO2016084389A1 (ja) * | 2014-11-28 | 2017-08-31 | パナソニックIpマネジメント株式会社 | モデリング装置、3次元モデル生成装置、モデリング方法、プログラム |
US10127709B2 (en) | 2014-11-28 | 2018-11-13 | Panasonic Intellectual Property Management Co., Ltd. | Modeling device, three-dimensional model generating device, modeling method, and program |
JP2017138219A (ja) * | 2016-02-04 | 2017-08-10 | 株式会社デンソー | 物体認識装置 |
US20210223368A1 (en) * | 2017-11-21 | 2021-07-22 | Faro Technologies, Inc. | System for surface analysis and method thereof |
US11879997B2 (en) * | 2017-11-21 | 2024-01-23 | Faro Technologies, Inc. | System for surface analysis and method thereof |
US11302023B2 (en) * | 2018-01-23 | 2022-04-12 | Apple Inc. | Planar surface detection |
JP2021009545A (ja) * | 2019-07-01 | 2021-01-28 | セイコーエプソン株式会社 | 印刷制御装置、印刷制御プログラム、及び、印刷物生産方法 |
JP7395856B2 (ja) | 2019-07-01 | 2023-12-12 | セイコーエプソン株式会社 | 印刷制御装置、印刷制御プログラム、及び、印刷物生産方法 |
Also Published As
Publication number | Publication date |
---|---|
US8289321B2 (en) | 2012-10-16 |
JP4636016B2 (ja) | 2011-02-23 |
WO2005088244A9 (ja) | 2008-03-13 |
US20070257910A1 (en) | 2007-11-08 |
JPWO2005088244A1 (ja) | 2008-01-31 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2005088244A1 (ja) | 平面検出装置、平面検出方法、及び平面検出装置を搭載したロボット装置 | |
JP4618247B2 (ja) | ロボット装置及びその動作制御方法 | |
JP4479372B2 (ja) | 環境地図作成方法、環境地図作成装置、及び移動型ロボット装置 | |
CN103177269B (zh) | 用于估计对象姿态的设备和方法 | |
Fritsch et al. | Multi-modal anchoring for human–robot interaction | |
JP5873442B2 (ja) | 物体検出装置および物体検出方法 | |
CA2748037C (en) | Method and system for gesture recognition | |
Ye et al. | A depth camera motion analysis framework for tele-rehabilitation: Motion capture and person-centric kinematics analysis | |
WO2012046392A1 (ja) | 姿勢推定装置及び姿勢推定方法 | |
JP2019125057A (ja) | 画像処理装置及びその方法、プログラム | |
JP6708260B2 (ja) | 情報処理装置、情報処理方法、およびプログラム | |
Tulyakov et al. | Robust real-time extreme head pose estimation | |
CN108875586B (zh) | 一种基于深度图像与骨骼数据多特征融合的功能性肢体康复训练检测方法 | |
JP2003271975A (ja) | 平面抽出方法、その装置、そのプログラム、その記録媒体及び平面抽出装置搭載型ロボット装置 | |
Krzeszowski et al. | DTW-based gait recognition from recovered 3-D joint angles and inter-ankle distance | |
Pradeep et al. | Piecewise planar modeling for step detection using stereo vision | |
JP2009288917A (ja) | 情報処理装置、情報処理方法、およびプログラム | |
JP6834590B2 (ja) | 3次元データ取得装置及び方法 | |
Struebig et al. | Stair and ramp recognition for powered lower limb exoskeletons | |
CN117238031A (zh) | 一种虚拟人的动作捕捉方法与系统 | |
WO2020149149A1 (en) | Information processing apparatus, information processing method, and program | |
CN116830165A (zh) | 人体姿态判断方法及使用该方法的移动机器 | |
JP4407244B2 (ja) | ロボット装置及びその物体学習方法 | |
JP2003346150A (ja) | 床面認識装置及び床面認識方法並びにロボット装置 | |
Akhavizadegan et al. | REAL-TIME AUTOMATED CONTOUR BASED MOTION TRACKING USING A SINGLE-CAMERA FOR UPPER LIMB ANGULAR MOTION MEASUREMENT |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SM SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
WWE | Wipo information: entry into national phase |
Ref document number: 2006511066 Country of ref document: JP |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
WWW | Wipo information: withdrawn in national office |
Country of ref document: DE |
|
WWE | Wipo information: entry into national phase |
Ref document number: 10593150 Country of ref document: US |
|
122 | Ep: pct application non-entry in european phase | ||
WWP | Wipo information: published in national office |
Ref document number: 10593150 Country of ref document: US |