EP3332299A1 - Dispositif et procédé pour la détection d'obstacles adaptés à un robot mobile - Google Patents
Dispositif et procédé pour la détection d'obstacles adaptés à un robot mobile (Device and method for obstacle detection adapted to a mobile robot)
- Publication number
- EP3332299A1 (application EP16750807.6A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- robot
- vii
- environment
- objects
- cloud
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Links
- 238000000034 method Methods 0.000 title claims description 38
- 238000012545 processing Methods 0.000 claims abstract description 21
- 238000003384 imaging method Methods 0.000 claims abstract description 5
- 238000013507 mapping Methods 0.000 claims description 15
- 230000006399 behavior Effects 0.000 claims description 7
- 238000009434 installation Methods 0.000 claims description 6
- 230000001594 aberrant effect Effects 0.000 claims description 4
- 238000003860 storage Methods 0.000 abstract description 2
- 238000001514 detection method Methods 0.000 description 10
- 230000033001 locomotion Effects 0.000 description 8
- 238000005259 measurement Methods 0.000 description 7
- 230000007613 environmental effect Effects 0.000 description 5
- 230000000875 corresponding effect Effects 0.000 description 4
- 239000012636 effector Substances 0.000 description 4
- 238000006073 displacement reaction Methods 0.000 description 3
- 238000001914 filtration Methods 0.000 description 3
- 238000004519 manufacturing process Methods 0.000 description 3
- 238000012986 modification Methods 0.000 description 3
- 230000004048 modification Effects 0.000 description 3
- 230000003287 optical effect Effects 0.000 description 3
- 230000008569 process Effects 0.000 description 3
- 239000000654 additive Substances 0.000 description 2
- 230000000996 additive effect Effects 0.000 description 2
- 238000004364 calculation method Methods 0.000 description 2
- 230000008859 change Effects 0.000 description 2
- 230000001276 controlling effect Effects 0.000 description 2
- 238000009826 distribution Methods 0.000 description 2
- 238000005553 drilling Methods 0.000 description 2
- 230000003068 static effect Effects 0.000 description 2
- 230000009471 action Effects 0.000 description 1
- 238000004458 analytical method Methods 0.000 description 1
- 238000013459 approach Methods 0.000 description 1
- 230000003930 cognitive ability Effects 0.000 description 1
- 230000000295 complement effect Effects 0.000 description 1
- 239000012141 concentrate Substances 0.000 description 1
- 230000002596 correlated effect Effects 0.000 description 1
- 238000010586 diagram Methods 0.000 description 1
- 238000004146 energy storage Methods 0.000 description 1
- 238000001125 extrusion Methods 0.000 description 1
- 230000006870 function Effects 0.000 description 1
- 238000000227 grinding Methods 0.000 description 1
- 231100001261 hazardous Toxicity 0.000 description 1
- 238000005286 illumination Methods 0.000 description 1
- 238000009776 industrial production Methods 0.000 description 1
- 238000002329 infrared spectrum Methods 0.000 description 1
- 230000010354 integration Effects 0.000 description 1
- 230000005865 ionizing radiation Effects 0.000 description 1
- 230000004807 localization Effects 0.000 description 1
- 238000003754 machining Methods 0.000 description 1
- 239000000463 material Substances 0.000 description 1
- 238000003801 milling Methods 0.000 description 1
- 238000004091 panning Methods 0.000 description 1
- 239000000843 powder Substances 0.000 description 1
- 239000003380 propellant Substances 0.000 description 1
- 230000000272 proprioceptive effect Effects 0.000 description 1
- 230000001141 propulsive effect Effects 0.000 description 1
- 230000008439 repair process Effects 0.000 description 1
- 230000001953 sensory effect Effects 0.000 description 1
- 238000005507 spraying Methods 0.000 description 1
- 229920001169 thermoplastic Polymers 0.000 description 1
- 239000004416 thermosoftening plastic Substances 0.000 description 1
- 230000000007 visual effect Effects 0.000 description 1
Classifications
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0231—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
- G05D1/0246—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
- G05D1/0248—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means in combination with a laser
Definitions
- the invention relates to a device and a method for detecting obstacles adapted to a mobile robot.
- the invention is more particularly, but not exclusively, suitable for driving an autonomous robot, operating in a congested environment, with obstacles that can move over time.
- An autonomous mobile robot localizes itself in its environment using a simultaneous localization and mapping method, designated by the general acronym "SLAM" for "Simultaneous Localization And Mapping".
- the robot locates its position in an initial map, using proprioceptive sensors such as an odometer, and by means of environmental sensors such as vision sensors, or by means of beacons.
- When the environment changes, through the appearance of obstacles not initially mapped or the irruption into the robot's environment of moving objects such as another robot or an operator, the robot must, to take these changes into account, on the one hand detect the modifications of this environment, and on the other hand integrate these modifications into its map, so as to modify its trajectory if necessary, or more generally its behavior, according to the detected modifications.
- In an industrial production environment, such as the assembly of an aircraft or naval structure, the robot's environment includes traps, such as open hatches, as well as objects or operators that the robot absolutely must not contact or even approach, for security reasons. The detection of obstacles and their integration into the map must therefore be carried out at a sufficient distance from said obstacles.
- Certain obstacles, such as hatches or pits, known as negative obstacles, can be crossed if they are covered by a grating but are impassable otherwise. The robot must therefore be able to detect an obstacle remotely with its sensing means, determine its contours, locate said obstacle relative to its own position, and determine whether or not it can cross this obstacle without modifying its trajectory. When the obstacle is moving, it must in addition estimate its trajectory and speed. These tasks are not easy to implement.
- a stereoscopic camera, a radar, or a laser scanning system is able to detect and locate an object known in its environment, for example to use it as a landmark and derive allothetic positioning information from it
- the individual identification of a priori unknown objects in a scene observed instantaneously by such a device is a complex task, especially when several obstacles, not initially mapped, may lie simultaneously in the field observed by the robot at different distances.
- the document WO 2014/152254 describes an onboard device for detecting obstacles in an environment and a method implementing this device.
- the method comprises comparing a three-dimensional reference image of the environment, obtained from two images with shifted viewpoints, with a current three-dimensional image of this environment obtained by similar means. The two are superimposed so as to reveal disparities and thus detect changes in the environment and the presence of foreign objects.
- This prior-art technique requires heavy computer processing means as well as reference images of the environment. It therefore remains limited to relatively stable environments, which do not correspond to the applications targeted by the invention.
- the device described in this prior art uses distance information, obtained by a radar or by laser scanning, and video information obtained by a stereoscopic camera. These two types of information are combined to self-calibrate the system and to make the disparities more robust; thus, according to this prior art, the two types of sensors are used as two independent sources of information, which further justifies their use for these self-calibration purposes.
- the document US 2014/136414 describes an autonomous vehicle adapted to make deliveries in a community. Said robot moves on sidewalks, in a community or suburban area, that is to say in a relatively open environment.
- the document US 2008/027591 describes a semi-autonomous vehicle capable of operating in an urban environment, particularly during military operations.
- the vehicle has a certain degree of autonomy but remains permanently connected to a remote human operator for its movements.
- the invention aims to overcome the drawbacks of the prior art and, to this end, concerns a robot adapted to evolve in a defined environment, in particular an aircraft fuselage section during the phase of installation of the systems in said section, which robot is provided with a plurality of sensors comprising:
- a vision device combining a lighting unit and an imaging unit, and
- processing and control means comprising a computer and memory means holding a map of the defined environment, able to acquire the information from the sensors, which processing and control means include in memory the three-dimensional characteristics of all the objects likely to be present in the defined environment.
- the robot object of the invention combines information from the different sensors, each providing a different view of the same concentric environment, to detect the presence of obstacles.
- the computer is able to associate the contours acquired during observation of the robot's concentric environment with specific objects, and to deduce from the nature of these objects the behavior to adopt with respect to said obstacles.
- the defined environment whose map is stored in the memory of the processing and control means of the robot corresponds to a defined phase of the assembly operations, and the robot uses its environment-recognition capabilities to take into account the evolution of said environment relative to the initial map.
- the term LiDAR designates an active sensor measuring the round-trip delay of a light beam emitted by a laser, in order to determine the position and distance of a target relative to the transmitter.
- a vision device combining a lighting unit and an imaging unit is described in WO 2007/043036.
- This type of device is known commercially under the Carminé® brand and used in particular in Kinect® type cameras.
- the angular resolution of the LiDAR of the robot object of the invention is less than or equal to 0.25°.
- said system is thus able to detect an obstacle of small thickness, for example a cable 8 mm in diameter, at a distance of 2 meters.
- the plurality of sensors of the robot which is the subject of the invention comprises:
- said second LiDAR is specialized more particularly in the detection of negative obstacles and carries out its concentric scanning concurrently ("in masked time"), and possibly at a frequency different from that of the first LiDAR, which also makes it possible to dedicate the first LiDAR to particular detection tasks.
- the invention also relates to a method for controlling the evolution of a robot according to the invention in a defined congested environment, which method comprises steps of:
- step v. processing the data from the scan of step iv) so as to isolate regions of interest
- step vii. classifying the obstacles present in the environment from the image obtained in step vi) according to whether the robot can cross them, the classification operation comprising the sequences of:
- vii.b) downsampling the point cloud obtained in sequence vii.a);
- vii.c) eliminating the aberrant singular points from the point cloud obtained in sequence vii.b);
- step viii. adapting the trajectory or the behavior of the robot as a function of the information from step vii) and of the task to be performed defined in step iii).
- the robot object of the invention uses the information from the plurality of sensors to analyze its environment; more particularly, the LiDAR systems, insensitive to lighting and atmospheric conditions, allow a first detection and classification of probable obstacles, including at relatively large distances.
- This first classification makes it possible to concentrate the action of the vision sensors on the areas of interest, which are thus observed at a higher resolution.
- the combined use of the two types of vision means makes it possible to reduce the measurement noise and the amount of data to be processed for a more precise classification of the obstacles.
- the different filtering sequences make the calculation faster and the detection of obstacles more robust, in particular with respect to "false obstacles", that is to say zones detected as obstacles but which are only measurement artifacts.
- step vii) of the method which is the subject of the invention comprises sequences consisting of:
- vii.g) identifying the objects present in the image by projecting the contours of said objects from the three-dimensional characteristics of said objects obtained in step ii).
- the robot is aware of the nature of the obstacles present in its environment.
- the recognition of the objects present in the environment also makes it possible to associate with these objects spatial attributes that are not accessible during the measurement. For example, it makes it possible to associate with an object seen only edge-on a length in the direction perpendicular to said edge.
- object recognition is also useful for tracking said objects and their relative trajectory between two successive measurements.
- the method which is the subject of the invention comprises, before step viii), the steps of:
- xi. determining the trajectory of the objects detected in the robot's environment.
- the method that is the subject of the invention enables the robot to obtain a dynamic vision of its environment.
- the image acquisitions of the environment of steps iv) and vi) of the method that is the subject of the invention are carried out continuously and stored in a buffer memory zone before being processed.
- the robot thus has a frequently updated vision of its environment and, if necessary, combines several successive acquisitions of this environment to obtain a more robust representation.
- the environment of the robot comprises a second robot sharing the same cartography of the environment, the two robots being able to exchange information, which method comprises a step consisting of:
- the method which is the subject of the invention comprises the steps of:
- xiii. performing a ground sweep using a LiDAR and detecting a negative obstacle
- FIG. 1 shows in a schematic side view, an embodiment of a robot according to the invention in its environment
- FIG. 2 represents a logic diagram of an exemplary embodiment of the method that is the subject of the invention
- FIG. 3 shows schematically, according to the same view as FIG. 1, the principle of detection of a negative obstacle by the robot and the method which are the subject of the invention.
- the robot (100) object of the invention comprises a base (110) supporting motorization means (not shown), energy storage means such as a battery of accumulators (not shown), as well as computer means (not shown) used to process information and drive said robot, including the motorization means.
- the motorization means control displacement means, for example wheels (120) according to this embodiment.
- Said displacement means comprise propulsive elements and directional elements so as to allow the robot (100) to follow a trajectory defined in its environment.
- the base supports mechanized tooling means, for example, according to this embodiment, the base supports an articulated arm (130) terminated by an effector (135).
- said effector (135) is suitable for installing fasteners such as bolts or rivets, or for producing a continuous or spot weld, or comprises machining means such as drilling or boring means, or means for milling, grinding or deburring, without these examples being limiting or exhaustive.
- the robot (100) object of the invention comprises a plurality of sensors whose information, used alone or in combination, allows the robot to know its concentric environment.
- the concentric environment of the robot corresponds to an analyzed zone, centered on the robot and corresponding to the surface or volume visible by the sensors installed on said robot, possibly by moving said sensors to perform a scan, or alternatively or in addition by performing a rotation, or a portion of a rotation, of the robot on itself.
- the robot object of the invention comprises a first LiDAR (140) mounted on a platform (145) for moving said LiDAR in panoramic movements.
- Said panoramic platform makes it possible to move the LiDAR along two axes, for example a horizontal panoramic axis and a vertical panoramic axis, corresponding to yaw and pitch movements with respect to the optical axis of the LiDAR.
- in a variant, the panoramic platform makes it possible to move the LiDAR along a vertical panoramic axis (yaw) and around the optical axis of the sensor in a roll motion.
- the platform (145) includes angular position sensors such that the platform orientation information is combined with the information from the LiDAR during each scan.
- a LiDAR consists of a device capable of emitting a laser beam and a sensor capable of detecting such a laser beam.
- the LiDAR emits a laser beam and receives on its sensor said beam reflected by a target, that is to say an object in the environment of the robot.
- the time elapsed between emission and reception makes it possible to measure the distance of the point at which this reflection takes place, and thus to position said point with respect to the LiDAR.
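- the underlying relation is the standard time-of-flight formula, implied rather than written out in this text: for a measured round-trip delay Δt and the speed of light c, the distance of the reflecting point is

$$ D = \frac{c\,\Delta t}{2} $$

A round trip of about 200 ns thus corresponds to the 30-meter maximum range quoted below for the Hokuyo UTM-30LX-EX.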
- the LiDAR used by the robot which is the subject of the invention is of the Hokuyo UTM-30LX-EX type, able to measure up to a distance of 30 meters, with a depth resolution of the order of a millimeter and an angular resolution of 0.25°.
- the scanning performed by the LiDAR describes a plane sector of 270° in 25 ms.
- the panoramic movement, in particular the vertical one, makes it possible to obtain a three-dimensional image of the environment.
- Horizontal panning increases the field of view at short range.
- the resolution of this LiDAR makes it possible to detect the presence of a cable 8 mm in diameter at 2 meters from the robot.
- This performance is more particularly adapted to the use of the robot object of the invention in an aeronautical environment, in particular in a fuselage section.
- the skilled person adapts the resolution of said LiDAR to the environment of the robot, from the following formula, where:
- D is the distance at which the obstacle must be detected
- d is the visible dimension of the obstacle, for example the diameter of the cable
- θ is the angular resolution of the LiDAR, this datum being provided by the manufacturer.
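- the formula itself has not survived in this text; reconstructed from the definitions above, the intended relation is presumably that the angular resolution must satisfy

$$ \theta \le \arctan\!\left(\frac{d}{D}\right) $$

(small-angle form: θ ≤ d/D in radians). For d = 8 mm and D = 2 m this gives θ ≤ 0.23°, of the same order as the 0.25° resolution of the LiDAR cited above.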
- the robot comprises a second LiDAR (150) pointed towards the ground and able to detect obstacles, more particularly negative obstacles (holes) at ground level.
- the robot object of the invention uses for this second LiDAR a Hokuyo URG-04LX unit. This equipment scans a plane sector of 240°, at distances between 60 mm and 4 meters from the robot, in 100 ms.
- LiDAR systems make it possible to obtain quickly a representation of the environment, including at long range.
- these devices are not sensitive to the conditions of illumination of objects and are insensitive to atmospheric conditions.
- the robot object of the invention comprises two monocular digital cameras (161, 162) spaced apart from each other.
- the two cameras are identical and are mounted so that their optical axes are parallel and their focal planes lie in the same plane.
- the images from said cameras can be used independently of one another, or jointly so as to obtain a stereoscopic image of the environment, or to use said cameras as a range finder with respect to specific details of the environment of the robot (100).
- both cameras are cameras providing a color image.
- the robot object of the invention comprises a device (170) for vision comprising imaging means and projection means in the infrared spectrum.
- the robot object of the invention uses for this purpose a Carminé® device.
- Such a device projects a structured infrared light pattern onto the environment, which allows it to obtain depth information for each elementary point of the image.
- the set of sensors of the robot object of the invention is placed in a modular manner at different locations on said robot, according to the nature of the obstacles encountered on the site in which the robot evolves, so as to optimize the detection of said obstacles.
- the robot evolves in a defined environment, for example a section of aircraft fuselage, or the hull of a ship or submarine, which environment is mapped in the storage means of the robot, but in which unmapped objects likely to collide with said robot may appear.
- these are static obstacles, such as thresholds or hatches (191), tools placed in the environment, in particular by operators, for example a tool trolley (192), or objects suspended from the ceiling, e.g. cables (193).
- to move in the environment without collision, the robot must detect these objects and define appropriate trajectories or behaviors. For example, it will have to go around a hatch (191), but it may pass under a cable using an appropriate configuration of the manipulator arm (130).
- to these static obstacles are added mobile obstacles (not shown) such as operators or other robots, or cobots.
- the robot object of the invention must evaluate its environment quickly and make appropriate decisions but, as an autonomous robot, it has only a limited computing capacity.
- the principle of the method that is the subject of the invention consists in using the particularities of each sensor so as to focus the robot's computing means on the useful zones of the representation of the environment, which allows fast processing of the information while using fewer computing resources.
- the LiDARs (140, 150) give a representation of the environment rich in depth information but with few details.
- the image from the digital video cameras (161, 162) is rich in detail but poorer in depth information.
- the image from the vision device (170) is intermediate, but makes it possible to enrich the LiDAR representation in a given area.
- FIG. 2: according to the initialization steps of the method that is the subject of the invention, a map of the robot's operating environment is loaded (210) into the memory means of said robot.
- the robot object of the invention is mainly intended to evolve in a spatially confined space such as an aircraft fuselage section, for example to perform system installation operations in said section. These operations consist, for example, in installing supports in said section and performing drilling and shoring operations, or even assisting an operator in these operations. Also, as the systems are installed in the section, the mapping of the section evolves.
- said mapping loaded in the memory means takes into account the evolution of the progress of work.
- the cartography is a 3D map, or a 2D map onto which the contours of the obstacles are projected.
- each object is associated with an identification code or label, a three-dimensional representation of the object in a reference frame linked to it, and a security volume describing a polyhedral or ellipsoidal envelope surrounding the object at a distance from it and defining an area that must not be crossed, for security reasons.
- the three-dimensional definition of the object is advantageously a simplified definition limited to its outer contours.
- the information relating to each object includes color and hue information; thus, by choosing a suitable hue for each object, its presence is easily detected from a color image.
- the list is updated according to the manufacturing phase concerned, so as to limit it to what is strictly necessary.
- "object" should be understood in a broad sense and includes, among other things, a three-dimensional definition with a security volume and, depending on the environment, the color and hue information relating to a human operator who may evolve in the environment.
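- as an illustrative sketch only (the field names and types are assumptions, not taken from the patent), the per-object record described above could be laid out as follows:

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class MappedObject:
    """Hypothetical structure for one entry of the object list held in
    the robot's memory, with the attributes listed above: a label, a
    simplified 3D contour in the object's own reference frame, a
    security envelope not to be crossed, and a color/hue signature."""
    label: str                       # identification code of the object
    contour: np.ndarray              # (N, 3) outer-contour vertices, object frame
    security_envelope: np.ndarray    # (M, 3) polyhedral envelope kept at a distance
    hue: tuple[float, float, float]  # nominal hue, e.g. HSV, for detection in color images
    crossable: bool = False          # whether the robot may cross this object
```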
- the mapping of the intervention environment being recorded in the memory means of the robot, the latter is able to locate itself in this environment at any time using idiothetic information, for example the number of turns of the propulsion wheels and the orientation of the directional wheels, or a digital compass, and allothetic information, for example by triangulation of beacons placed in the intervention environment.
- the tasks that the robot is made to perform in the environment and the location of these tasks in the environment are also recorded (216) in the memory means.
- the robot simultaneously performs a scan (221) of the environment by means of the first LiDAR, a scan (222) of the environment by means of the second LiDAR when it comprises two, an image acquisition (223) by means of the vision device, and an image acquisition (224) by means of each digital video camera.
- These acquisitions are recorded (230) in a buffer memory and timestamped in order to be processed.
- several successive acquisitions are averaged.
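- a minimal sketch of such a buffer (all names are assumptions; the patent gives no implementation, and averaging presumes the scans share a common sampling grid):

```python
import time
from collections import deque

import numpy as np


class AcquisitionBuffer:
    """Timestamped ring buffer averaging the most recent acquisitions."""

    def __init__(self, depth: int = 4):
        self._frames = deque(maxlen=depth)   # keeps only the `depth` newest scans

    def push(self, scan: np.ndarray) -> None:
        """Record one acquisition together with its timestamp."""
        self._frames.append((time.monotonic(), scan))

    def averaged(self) -> np.ndarray:
        """Element-wise mean of the buffered scans, damping measurement noise."""
        return np.mean([scan for _, scan in self._frames], axis=0)
```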
- the first processing (240) consists in extracting areas of interest from the images from one or more LiDARs. These areas of interest are, for example, acquisition areas whose outlines differ from the map, as sketched below.
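- a minimal sketch of this comparison, assuming (this is not specified in the patent) that both the acquisition and the stored map are rasterized into grids over a common frame:

```python
import numpy as np


def regions_of_interest(scan_grid: np.ndarray,
                        map_grid: np.ndarray,
                        tol: float = 0.05) -> np.ndarray:
    """Boolean mask of the cells where the LiDAR acquisition departs from
    the stored map by more than `tol` (e.g. metres of height difference);
    True cells form the areas of interest passed to the next processing."""
    return np.abs(scan_grid - map_grid) > tol
```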
- in a second processing, said areas of interest are examined through their representation in the acquisition performed by the vision means.
- in a first processing sequence (251), the corresponding point cloud is filtered according to the measured conditions of reflection in the infrared. This first filtering sequence makes it possible to eliminate points remote from the robot from the processing.
- in a second processing sequence (252), the point cloud is downsampled.
- the space is meshed according to an appropriate polyhedral mesh, for example a cubic mesh.
- all the points within the same mesh cell are replaced by a single point positioned at the center of said cell and taking as value the average of the points, or any other appropriate weighting. This processing reduces the noise and the number of points to be processed.
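- a minimal numpy sketch of this downsampling over a cubic mesh (the cell size and names are assumptions; each cell is reduced here to the mean of its points, one of the weightings the text allows):

```python
import numpy as np


def voxel_downsample(points: np.ndarray, cell: float = 0.05) -> np.ndarray:
    """Replace all points falling in the same cubic cell by their average.

    `points` is an (N, 3) array, `cell` the mesh edge length in metres.
    """
    keys = np.floor(points / cell).astype(np.int64)            # cell index of each point
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    inverse = np.asarray(inverse).reshape(-1)                  # one cell id per point
    sums = np.zeros((inverse.max() + 1, 3))
    counts = np.zeros(inverse.max() + 1)
    np.add.at(sums, inverse, points)                           # accumulate points per cell
    np.add.at(counts, inverse, 1)
    return sums / counts[:, None]                              # mean point of each cell
```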
- a third processing sequence (253) consists in eliminating from the point cloud the singular points considered aberrant. For this purpose, the measurement obtained at each point is compared to the measurements obtained at its closest neighbors, by calculating the square of the measurement difference between these points. Assuming that, for a given group of points, the distribution of the measure follows a χ² law, the points lying at more than n standard deviations from the mean are eliminated. This technique removes stochastic variations inside the point cloud, but it limits the ability to detect sharp edges.
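- a sketch of such a statistical filter (a common Gaussian approximation of the χ² criterion described above; the values of k and n_std are assumptions):

```python
import numpy as np
from scipy.spatial import cKDTree


def remove_outliers(points: np.ndarray, k: int = 8, n_std: float = 2.0) -> np.ndarray:
    """Drop points whose mean distance to their k nearest neighbours lies
    more than `n_std` standard deviations above the global mean distance."""
    tree = cKDTree(points)
    dists, _ = tree.query(points, k=k + 1)   # first neighbour is the point itself
    mean_d = dists[:, 1:].mean(axis=1)       # mean neighbour distance per point
    keep = mean_d < mean_d.mean() + n_std * mean_d.std()
    return points[keep]
```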
- the orientation of the normals is computed at each point of the point cloud. The orientation of the normals makes it possible, in an analysis step (255), to determine the points and zones presenting potential crossing difficulties. Thus, an area is passable when its normal is vertical; when the normal has a horizontal component, the zone presents crossing difficulties.
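- a sketch of this analysis, estimating each normal by PCA over the nearest neighbours (the patent does not name the estimator; the tilt threshold is an assumption):

```python
import numpy as np
from scipy.spatial import cKDTree


def passable_mask(points: np.ndarray, k: int = 12,
                  max_tilt_deg: float = 15.0) -> np.ndarray:
    """Flag points whose local surface normal is close to vertical."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    cos_max = np.cos(np.radians(max_tilt_deg))
    passable = np.empty(len(points), dtype=bool)
    for i, nb in enumerate(idx):
        patch = points[nb] - points[nb].mean(axis=0)
        # the right singular vector of least variance is the surface normal
        _, _, vt = np.linalg.svd(patch, full_matrices=False)
        passable[i] = abs(vt[-1][2]) > cos_max   # |z| large -> normal near vertical
    return passable
```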
- in an identification step (260), the image from one or both digital video cameras is superimposed on the image obtained previously.
- the high resolution of these images makes it possible, in particular, to identify the contours or the hue of the objects likely to be found in the zones presenting crossing difficulties and, by combining this information with the information on the normals to the surfaces, to identify and label the objects in the environment.
- the robot is able to decide (270) an appropriate avoidance sequence.
- the preceding steps (221…260) are repeated and the acquisitions are correlated (265) so as to determine the speed of movement and the trajectory of the objects present in the robot's environment.
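- as a sketch, the speed of a tracked object can be estimated from its position in two timestamped acquisitions (a simplification; the patent only states that successive acquisitions are correlated):

```python
import numpy as np


def object_velocity(c0: np.ndarray, c1: np.ndarray,
                    t0: float, t1: float) -> tuple[np.ndarray, float]:
    """Velocity vector and speed of an object whose centroid moved from
    `c0` at time `t0` to `c1` at time `t1` (timestamps from the buffer)."""
    v = (c1 - c0) / (t1 - t0)
    return v, float(np.linalg.norm(v))
```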
- the decision (270) of an avoidance sequence takes this information into account.
- the LiDAR, or the second LiDAR (150), of the robot object of the invention is pointed or pointable towards the ground in order to detect a negative obstacle (191).
- the acquisition made on this occasion constitutes a second cloud of points.
- the negative obstacle is replaced, for processing, by a positive virtual obstacle (391) having horizontal normals (390) on its lateral faces.
- said virtual obstacle (391) takes up the contour of the actual negative obstacle (191), but with a predefined lateral wall height, for example 1 meter, so that it is immediately recognizable.
- Said virtual obstacle is integrated into the observed scene and processed with the rest of the acquisition from the fourth step of the method object of the invention.
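- a sketch of this substitution (an illustrative representation; the patent does not fix one): the ground contour of the negative obstacle is extruded into vertical walls, so that the normal analysis above sees horizontal normals on its lateral faces:

```python
import numpy as np


def virtual_obstacle(contour: np.ndarray, wall_height: float = 1.0,
                     step: float = 0.05) -> np.ndarray:
    """Extrude the (N, 2) ground-plane contour of a negative obstacle into
    a point cloud of lateral walls of height `wall_height` metres."""
    heights = np.arange(0.0, wall_height + step, step)
    walls = [np.column_stack([contour, np.full(len(contour), z)])
             for z in heights]                  # contour replicated at each height
    return np.vstack(walls)
```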
- the invention thus achieves the intended objectives; in particular, it allows an autonomous robot to locate itself accurately in a changing and congested environment. Said robot is thus advantageously employed for assembly operations and system installations, including in an aircraft section or in a naval structure.
- the autonomous robot object of the invention is particularly suitable for interventions in environments that are difficult to access or subject to particular environmental or hazardous conditions, such as the presence of gas, heat, cold or ionizing radiation.
- the robot object of the invention comprises means specifically adapted for repair operations on hull or fuselage, said operations comprising additive manufacturing operations.
- the robot object of the invention comprises according to this embodiment:
- an additive manufacturing effector carried by the articulated arm for example a molten powder spraying nozzle or a thermoplastic wire extrusion nozzle;
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Aviation & Aerospace Engineering (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Multimedia (AREA)
- Electromagnetism (AREA)
- Optics & Photonics (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- General Physics & Mathematics (AREA)
- Automation & Control Theory (AREA)
- Manipulator (AREA)
- Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
Abstract
Description
Claims
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
FR1557615A FR3039904B1 (fr) | 2015-08-07 | 2015-08-07 | Dispositif et procede pour la detection d’obstacles adaptes a un robot mobile |
PCT/EP2016/068916 WO2017025521A1 (fr) | 2015-08-07 | 2016-08-08 | Dispositif et procédé pour la détection d'obstacles adaptés à un robot mobile |
Publications (1)
Publication Number | Publication Date |
---|---|
EP3332299A1 true EP3332299A1 (fr) | 2018-06-13 |
Family
ID=54207576
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP16750807.6A Withdrawn EP3332299A1 (fr) | 2015-08-07 | 2016-08-08 | Dispositif et procédé pour la détection d'obstacles adaptés à un robot mobile |
Country Status (3)
Country | Link |
---|---|
EP (1) | EP3332299A1 (fr) |
FR (1) | FR3039904B1 (fr) |
WO (1) | WO2017025521A1 (fr) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111988524A (zh) * | 2020-08-21 | 2020-11-24 | 广东电网有限责任公司清远供电局 | 一种无人机与摄像头协同避障方法、服务器及存储介质 |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB201703968D0 (en) * | 2017-03-13 | 2017-04-26 | Computational Eng Sg | System for building situation awareness |
DE102017221134A1 (de) * | 2017-11-27 | 2019-05-29 | Robert Bosch Gmbh | Verfahren und Vorrichtung zum Betreiben eines mobilen Systems |
CN113126640B (zh) * | 2019-12-31 | 2022-06-28 | 北京三快在线科技有限公司 | 用于无人机的障碍物探测方法、装置、无人机和存储介质 |
CN111152237B (zh) * | 2020-01-22 | 2023-12-22 | 深圳国信泰富科技有限公司 | 一种两侧设置激光雷达的机器人头部及其环境采样方法 |
CN111429520B (zh) * | 2020-03-02 | 2023-11-03 | 广州视源电子科技股份有限公司 | 负障碍物检测方法、装置、终端设备和存储介质 |
CN111427363B (zh) * | 2020-04-24 | 2023-05-05 | 深圳国信泰富科技有限公司 | 一种机器人导航控制方法及系统 |
CN115040031B (zh) * | 2021-02-26 | 2024-03-29 | 云米互联科技(广东)有限公司 | 环境信息采集方法、扫地机、终端设备及可读存储介质 |
CN113900435B (zh) * | 2021-08-31 | 2022-09-27 | 深圳蓝因机器人科技有限公司 | 基于双摄像头的移动机器人避障方法、设备、介质及产品 |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE59809476D1 (de) * | 1997-11-03 | 2003-10-09 | Volkswagen Ag | Autonomes Fahrzeug und Verfahren zur Steuerung eines autonomen Fahrzeuges |
US8577538B2 (en) * | 2006-07-14 | 2013-11-05 | Irobot Corporation | Method and system for controlling a remote vehicle |
US9373149B2 (en) * | 2006-03-17 | 2016-06-21 | Fatdoor, Inc. | Autonomous neighborhood vehicle commerce network and community |
US7822266B2 (en) * | 2006-06-02 | 2010-10-26 | Carnegie Mellon University | System and method for generating a terrain model for autonomous navigation in vegetation |
US9400503B2 (en) * | 2010-05-20 | 2016-07-26 | Irobot Corporation | Mobile human interface robot |
US9463574B2 (en) * | 2012-03-01 | 2016-10-11 | Irobot Corporation | Mobile inspection robot |
CN105164549B (zh) * | 2013-03-15 | 2019-07-02 | 优步技术公司 | 用于机器人的多传感立体视觉的方法、系统和设备 |
- 2015
- 2015-08-07 FR FR1557615A patent/FR3039904B1/fr active Active
- 2016
- 2016-08-08 EP EP16750807.6A patent/EP3332299A1/fr not_active Withdrawn
- 2016-08-08 WO PCT/EP2016/068916 patent/WO2017025521A1/fr active Application Filing
Also Published As
Publication number | Publication date |
---|---|
WO2017025521A1 (fr) | 2017-02-16 |
FR3039904B1 (fr) | 2019-06-14 |
FR3039904A1 (fr) | 2017-02-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
FR3039904B1 (fr) | Dispositif et procede pour la detection d’obstacles adaptes a un robot mobile | |
US10884110B2 (en) | Calibration of laser and vision sensors | |
CA2940824C (fr) | Systeme radar a lumiere a resolution variable | |
US11657595B1 (en) | Detecting and locating actors in scenes based on degraded or supersaturated depth data | |
US10776639B2 (en) | Detecting objects based on reflectivity fingerprints | |
US11754721B2 (en) | Visualization and semantic monitoring using lidar data | |
Barry et al. | Pushbroom stereo for high-speed navigation in cluttered environments | |
US11527084B2 (en) | Method and system for generating a bird's eye view bounding box associated with an object | |
Manduchi et al. | Obstacle detection and terrain classification for autonomous off-road navigation | |
EP3152593B1 (fr) | Dispositif de detection a plans croises d'un obstacle et procede de detection mettant en oeuvre un tel dispositif | |
Khan et al. | Stereovision-based real-time obstacle detection scheme for unmanned ground vehicle with steering wheel drive mechanism | |
Filisetti et al. | Developments and applications of underwater LiDAR systems in support of marine science | |
JP2022547262A (ja) | Lidarの視野を変更するためのシステムおよび方法 | |
EP2924458A2 (fr) | Procédé de détection et de visualisation des obstacles artificiels d'un aéronef à voilure tournante | |
US20240073545A1 (en) | System and method for reducing stray light interference in optical systems | |
WO2014146884A1 (fr) | Procede d'observation d'une zone au moyen d'un drone | |
KR101888170B1 (ko) | 무인 수상정의 장애물 탐지 시 노이즈 제거방법 및 장치 | |
EP3757943B1 (fr) | Procédé et dispositif de télémétrie passive par traitement d'image et utilisation de modeles en trois dimensions | |
US20240219542A1 (en) | Auto-level step for extrinsic calibration | |
Guerrero-Bañales et al. | Use of LiDAR for Negative Obstacle Detection: A Thorough Review | |
Hwang | A Vehicle Tracking System Using Thermal and Lidar Data | |
Seidaliyeva et al. | LiDAR Technology for UAV Detection: From Fundamentals and Operational Principles to Advanced Detection and Classification Techniques | |
CN118226463A (zh) | 广角深度成像模块和包括广角深度成像模块的现实捕获设备 | |
WO2023107320A1 (fr) | Imagerie lidar 3d non contiguë de cibles à mouvement complexe |
Legal Events
Date | Code | Title | Description
---|---|---|---
| STAA | Information on the status of an ep patent application or granted ep patent | STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE
| PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase | ORIGINAL CODE: 0009012
| STAA | Information on the status of an ep patent application or granted ep patent | STATUS: REQUEST FOR EXAMINATION WAS MADE
| 17P | Request for examination filed | Effective date: 20180306
| AK | Designated contracting states | Kind code of ref document: A1; Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR
| AX | Request for extension of the european patent | Extension state: BA ME
| RIN1 | Information on inventor provided before grant (corrected) | Inventor name: MARQUEZ-GAMEZ, DAVID
| DAV | Request for validation of the european patent (deleted) |
| DAX | Request for extension of the european patent (deleted) |
| STAA | Information on the status of an ep patent application or granted ep patent | STATUS: EXAMINATION IS IN PROGRESS
| 17Q | First examination report despatched | Effective date: 20190206
| STAA | Information on the status of an ep patent application or granted ep patent | STATUS: EXAMINATION IS IN PROGRESS
| GRAP | Despatch of communication of intention to grant a patent | ORIGINAL CODE: EPIDOSNIGR1
| STAA | Information on the status of an ep patent application or granted ep patent | STATUS: GRANT OF PATENT IS INTENDED
| INTG | Intention to grant announced | Effective date: 20210810
| STAA | Information on the status of an ep patent application or granted ep patent | STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN
| 18D | Application deemed to be withdrawn | Effective date: 20211221